#terraform (2020-07)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-07-31

walicolc avatar
walicolc

Hi all - terraform’s explanation of self is rather confusing for someone using terraform for the first time like myself. Can anyone ELI5 it for me please, thanks in advance.

Denys avatar
Denys

This is a common programming concept, https://en.wikipedia.org/wiki/This_(computer_programming)

For example

resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    command = "echo The server's IP address is ${self.private_ip}"
  }
}

self.private_ip refers to the private_ip attribute of the resource containing the reference, i.e. aws_instance.web.private_ip (any of the attributes listed here can be used: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance)

Terraform can't use a direct reference to aws_instance.web.private_ip from inside the provisioner because it would create a dependency cycle
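To see why, here is a minimal sketch of the invalid form (hypothetical, for illustration): referencing the resource by its full address from inside its own provisioner makes the resource depend on itself.

```hcl
resource "aws_instance" "web" {
  # ...

  provisioner "local-exec" {
    # Invalid: referencing the resource's own address creates a
    # self-dependency, which Terraform rejects as a cycle.
    command = "echo ${aws_instance.web.private_ip}"
  }
}
```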

walicolc avatar
walicolc

Hi Denys, thanks for the ping. That does make sense, taking a look at the below example:

data "aws_ami" "selected" {
  most_recent = true
  owners      = ["self"]
  filter {
    name   = "foo"
    values = ["bar-*"]
  }
}

Would I be correct in saying it'll use the owner ID of the individual executing the terraform apply to retrieve that data source?

Denys avatar
Denys

Oh, no. This has another meaning. It means that the owner is the current AWS account.

Find all AMI images matching the filter foo=bar-* that were created by the same AWS account you are running Terraform from: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami
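If you want to confirm which account "self" resolves to, a small sketch (the output name is arbitrary) using the caller-identity data source:

```hcl
data "aws_caller_identity" "current" {}

output "ami_owner_account_id" {
  # The account ID that owners = ["self"] resolves to
  value = data.aws_caller_identity.current.account_id
}
```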

walicolc avatar
walicolc

Ok got it. Would it be possible to run and see the output of self in terraform console, for instance? I'm just curious to see what sort of stuff it returns. Or is it just the AWS account id?

Denys avatar
Denys

actually there is most_recent = true, which means only the latest one

Denys avatar
Denys

I suppose https://www.terraform.io/docs/commands/show.html is what you are looking for

Command: show - Terraform by HashiCorp

The terraform show command is used to provide human-readable output from a state or plan file. This can be used to inspect a plan to ensure that the planned operations are expected, or to inspect the current state as Terraform sees it.

walicolc avatar
walicolc

Appreciate the help Denys, I’ll go grab lunch and read it when i get back to my workstation. Thanks again

joshmyers avatar
joshmyers

Random question. During a TF run, I want to pull in a JSON file from another github repo….can’t use git submodules as a) they are kinda nasty b) (upstream) Atlantis doesn’t support cloning submodules. Anything bad about (ab)using the terraform module source code to pull in the repo?

joshmyers avatar
joshmyers
module "badgers" {
  source = "git::git@github.com:foo/bar"
}

output "schema" {
  value = jsondecode(file("${path.module}/badgers/db_schema/table_account_schema.json"))
}
thumbsup_all1
joshmyers avatar
joshmyers

works…but I’m not sure if it is a terrible idea

joshmyers avatar
joshmyers

to be clear, git@github.com:foo/bar is not a Terraform codebase.

joshmyers avatar
joshmyers
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

schema = {
  "AttributeDefinitions" = [
    {
      "AttributeName" = "id"
      "AttributeType" = "S"
    },
  ]
  "KeySchema" = [
    {
      "AttributeName" = "id"
      "KeyType" = "HASH"
    },
  ]
  "ProvisionedThroughput" = {
    "ReadCapacityUnits" = 1
    "WriteCapacityUnits" = 1
  }
  "TableName" = "foobar"
}
joshmyers avatar
joshmyers
01:52:20 PM

¯\_(ツ)_/¯

loren avatar
loren

and you can’t add a little tf to that remote git repo to just output the file, so you can reference module outputs?

joshmyers avatar
joshmyers

Could, but rather not, there are a lot of services

loren avatar
loren

the only “bad” thing about it i can think of is that you need to embed a lot of info in this module about how the remote repo is structured

loren avatar
loren

can protect against changes in the remote using a ref, of course
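For example, a sketch of the pinned form (the tag v1.2.3 is hypothetical):

```hcl
module "badgers" {
  # Pin the non-Terraform repo to a tag so upstream changes
  # can't silently alter the JSON being pulled in.
  source = "git::git@github.com:foo/bar?ref=v1.2.3"
}
```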

joshmyers avatar
joshmyers

Aye, this will be at the top level wrapper module (terraform-root-module style) and I actually want to get a few JSON files from there

joshmyers avatar
joshmyers

Aye

loren avatar
loren

i kinda like it. i’ve vendored entire repos before, but this is easier to maintain

thumbsup_all2
joshmyers avatar
joshmyers

Cool, thanks for the help people

RB avatar

im using atlantis 0.14.0 with submodules without any issue

joshmyers avatar
joshmyers

I kinda like it too, but it feels too easy hah

RB avatar
RB
01:58:28 PM

¯\_(ツ)_/¯

joshmyers avatar
joshmyers

cloudposse/atlantis @RB?

RB avatar

nope, using the official one

joshmyers avatar
joshmyers

hmm, are you doing something special in your atlantis.yml ?

RB avatar

nah

RB avatar

“it just works”

joshmyers avatar
joshmyers
Support Git Submodules · Issue #311 · runatlantis/atlantis

what Support git submodules why It appears that clones are not recursive: atlantis/server/events/working_dir.go Line 113 in f057d6d cloneCmd := exec.Command("git", "clone", clon…

joshmyers avatar
joshmyers

hmm

RB avatar

oh oh wait a minute…

RB avatar

i was thinking “submodules” were specific directories (modules) in a git repo

RB avatar

that is for “git submodules”

RB avatar

yes, i dont use “git submodules”

joshmyers avatar
joshmyers

OK, makes sense, cheers

joshmyers avatar
joshmyers

TIL during a terraform init for a git source it will also pull in submodules if there are any.

cool-doge1
thumbsup_all1
contact871 avatar
contact871
hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter

:--1:1
joshmyers avatar
joshmyers

Nice, thanks @contact871

2020-07-30

Frank avatar
Frank

Hello. Is anyone using the terraform-aws-ecs-web-app module with EFS Volumes on Fargate? I’m looking for an example on how to configure it

RB avatar

you can create your own container definition with any mounted volumes if you like and then pass it into the ecs-web-app module using var.container_definition

RB avatar

looks like there is a new EFSVolumeConfiguration directive in the task definition

RB avatar
EFSVolumeConfiguration - Amazon Elastic Container Service

This parameter is specified when you are using an Amazon Elastic File System file system for task storage. For more information, see Amazon EFS Volumes in the Amazon Elastic Container Service Developer Guide .

Frank avatar
Frank

Ah, going via container_definition would be another way of doing it, worth investigating. Thus far I've been trying to get it to work using efs_volume_configuration, but for some reason it's not working
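For reference, the task-definition wiring for EFS on Fargate looks roughly like this (a sketch with made-up names; note the ECS service also needs platform_version = "1.4.0" or later for EFS support):

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "my-app" # hypothetical
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([{
    name  = "app"
    image = "nginx"
    mountPoints = [{
      sourceVolume  = "data"
      containerPath = "/mnt/data"
    }]
  }])

  volume {
    name = "data"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.this.id
      root_directory = "/"
    }
  }
}
```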

zeid.derhally avatar
zeid.derhally

anyone else notice a slowdown with terraform plan/apply targeting AWS? I only started noticing it a couple of days ago, or maybe I’m imagining things

Brij S avatar
Brij S

I’ve noticed this as well actually..

Zach avatar

us-east-1 is on fire the last 2 weeks

Zach avatar

so that might be it

Denys avatar
Denys

https://status.aws.amazon.com/ there are a couple of stories around EC2 API error rates this week for us-east-1

Zach avatar

several of which weren’t even noted there but a bunch of folks in the hangops slack all confirmed we were seeing the same problems around the same times

zeid.derhally avatar
zeid.derhally

i’m currently targeting us-east-2

Eric Berg avatar
Eric Berg

I’m trying to set cors_rule in an aws_s3_bucket resource, using a dynamic block. the data looks like this:

  cors_rules = {
    cdn = {
      allowed_headers = ["*"]
      allowed_methods = ["POST", "GET"]
      allowed_origins = concat([
        "https://borrower-${var.name}.brace.ai",
        "https://servicer-${var.name}.brace.ai"
        ],
        lookup(local.extra_bucket_origins, var.name, [])
      )
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
    },
    borrower = {
      allowed_headers = ["*"]
      allowed_methods = ["POST", "GET"]
      allowed_origins = concat([
        "https://borrower-${local.name}.brace.ai",
        "https://servicer-${local.name}.brace.ai"
        ],
        lookup(local.extra_bucket_origins, var.name, [])
      )
      expose_headers  = ["ETag"]
      max_age_seconds = 3000
    },
    servicer = {
    }
  }

I’m just not able to visualize how to reference this in the dynamic block. Anybody have any ideas?

I tried this:

  dynamic "cors_rule" {
    for_each = lookup(local.cors_rules, var.service)
    content {
      allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
      allowed_methods = lookup(cors_rule.value, "allowed_methods", null)
      allowed_origins = lookup(cors_rule.value, "allowed_origins", null)
      expose_headers  = lookup(cors_rule.value, "expose_headers", null)
      max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
    }
  }

Thanks for any help you can provide.

RB avatar

im unsure what the exact issue is but here’s a good example of its use

https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L138

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Eric Berg avatar
Eric Berg

great! that's just the kind of example I've been looking for. Thanks

Eric Berg avatar
Eric Berg

Hmm…that’s not really my use case. They’re passing in separate variables for the various fields, whereas I’ve got a data structure that I want to use to populate the cors_rule values.

So, the question is how I access specific values in that local data structure.

If I could do this, it would solve my problem:

  cors_rule {
    allowed_headers = local.cors_rules[var.service]["allowed_headers"]
    allowed_methods = local.cors_rules[var.service]["allowed_methods"]
    allowed_origins = local.cors_rules[var.service]["allowed_origins"]
    expose_headers  = local.cors_rules[var.service]["expose_headers"]
    max_age_seconds = local.cors_rules[var.service]["max_age_seconds"]
  }

So, the basic question is how to access elements of the local.cors_rules here.

Eric Berg avatar
Eric Berg

I tried a double lookup(), but that didn’t work. This works, though I had to add empty/blank values to the formerly-empty servicer attribute in the data:

  cors_rule {
    allowed_methods = lookup(local.cors_rules, var.service)["allowed_methods"]
    allowed_origins = lookup(local.cors_rules, var.service).allowed_origins
    expose_headers  = lookup(local.cors_rules, var.service).expose_headers
    max_age_seconds = lookup(local.cors_rules, var.service).max_age_seconds
  }

Notice that both the [<key>] and . syntaxes work.
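If the dynamic block form is still preferred, one sketch is to wrap the selected rule in a single-element list, so for_each emits exactly one cors_rule instead of iterating over the rule's own attributes:

```hcl
dynamic "cors_rule" {
  # Wrapping the chosen rule map in a list yields one iteration whose
  # value is the whole map, which is what the lookups below expect.
  for_each = [local.cors_rules[var.service]]

  content {
    allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
    allowed_methods = lookup(cors_rule.value, "allowed_methods", null)
    allowed_origins = lookup(cors_rule.value, "allowed_origins", null)
    expose_headers  = lookup(cors_rule.value, "expose_headers", null)
    max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
  }
}
```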

1
:--1:1
RB avatar
Release v3.0.0 · terraform-providers/terraform-provider-aws

NOTES: provider: This version is built using Go 1.14.5, including security fixes to the crypto/x509 and net/http packages. BREAKING CHANGES provider: New versions of the provider can only be aut…

:--1:3
PePe avatar

interesting : resource/aws_codepipeline: Removes GITHUB_TOKEN environment variable

Release v3.0.0 · terraform-providers/terraform-provider-aws

NOTES: provider: This version is built using Go 1.14.5, including security fixes to the crypto/x509 and net/http packages. BREAKING CHANGES provider: New versions of the provider can only be aut…

PePe avatar

and this too

PePe avatar

provider: Add assume_role configuration block duration_seconds, policy_arns, tags, and transitive_tag_keys arguments (#14077)

provider: Authentication updates for Terraform AWS Provider v3.0.0 by bflad · Pull Request #14077 · terraform-providers/terraform-provider-aws

Community Note Please vote on this pull request by adding a :–1: reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

Eric Berg avatar
Eric Berg

So, what’s the recommended adoption schedule for updates like this?

RB avatar

recommended by who ? aws ? hashicorp ?

Eric Berg avatar
Eric Berg

you

Eric Berg avatar
Eric Berg

“Us”

Eric Berg avatar
Eric Berg

My guess is that it’ll be a few months, before there’s significant movement to deploying to production environments with this release.

RB avatar

probably

RB avatar

i usually wait for some patches to come out before adopting

2020-07-29

Pierre-Yves avatar
Pierre-Yves

Hello, I am looking for resources and examples on how to use terraform console there is not so much information around .. can you point me to videos or web pages ?

Brij S avatar
Brij S

Hey all! Does anyone here use TFE? If so, do you know if it supports submodules? example

RB avatar
module "iam" {
  # this can and should be pinned to a release tag using ?ref=tags/x.y.z
  source = "git@github.com:terraform-aws-modules/terraform-aws-iam.git//modules/iam-assumable-role"
}

example for https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-assumable-role

RB avatar

terraform enterprise and terraform free should support both

MrAtheist avatar
MrAtheist

Anyone got a recommended full fledged terraform template to run an app in ECS? (specifically on ec2)

RB avatar
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

1
RB avatar

set the launch_type to ec2 if it doesnt default it already

2020-07-28

Valter Silva avatar
Valter Silva

Hi everybody

Valter Silva avatar
Valter Silva

Has anyone used the terraform-aws-key-pair module on Terraform Cloud?

Valter Silva avatar
Valter Silva

I’ve used it in a new workspace, and every time I queue a plan, it gives me the following error:

Joe Niland avatar
Joe Niland

what did you set for the ssh_public_key_path variable ?

Valter Silva avatar
Valter Silva

secrets

Valter Silva avatar
Valter Silva

I guess I’ve just used the default value

Joe Niland avatar
Joe Niland

ok try with an absolute path or “./secrets”

Joe Niland avatar
Joe Niland

you’re trying to use an existing key right?

Valter Silva avatar
Valter Silva

yes, correct Joe

Valter Silva avatar
Valter Silva

Will try with an absolute path

Valter Silva avatar
Valter Silva

Same error Joe

Error: Error in function call
  on .terraform/modules/aws_key_pair/main.tf line 30, in resource "aws_key_pair" "imported":
  30:   public_key = file(local.public_key_filename)
    |----------------
    | local.public_key_filename is "./secrets/acme-dev-myapp.pub"
Call to function "file" failed: no file exists at secrets/acme-dev-myapp.pub.
Joe Niland avatar
Joe Niland
cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

Joe Niland avatar
Joe Niland

./ will be relative to your module path which can vary a bit depending on how you’re running terraform
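One way to make the resolution independent of the working directory is to anchor the relative path to path.root (a sketch; the file name is taken from the error above):

```hcl
variable "ssh_public_key_path" {
  default = "secrets"
}

locals {
  # path.root is the directory of the root module, so this resolves the
  # same way locally and on Terraform Cloud.
  public_key_filename = "${path.root}/${var.ssh_public_key_path}/acme-dev-myapp.pub"
}
```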

Valter Silva avatar
Valter Silva

yeah, I saw that as well. Unfortunately, I’m having some issues using this module.. won’t use it in future projects

Valter Silva avatar
Valter Silva

Basically the workaround I did now, is to leave generate_ssh_key = true this way it doesn’t mess with the key pair

Joe Niland avatar
Joe Niland

that is strange - is it generating the files for you?

Joe Niland avatar
Joe Niland

if you can, please create an issue in github

Joe Niland avatar
Joe Niland

also you could try the previous release 0.11.0

Valter Silva avatar
Valter Silva
Error: Error in function call

  on .terraform/modules/aws_key_pair/main.tf line 30, in resource "aws_key_pair" "imported":
  30:   public_key = file(local.public_key_filename)
    |----------------
    | local.public_key_filename is "secrets/acme-dev-myapp.pub"

Call to function "file" failed: no file exists at secrets/acme-dev-myapp.pub.
Pierre-Yves avatar
Pierre-Yves

hello, you should specify path.root like this: public_key = file("${path.root}/blah/${local.public_key_filename}")

Valter Silva avatar
Valter Silva

Hi @Pierre-Yves, is ${path.root} available for Terraform Cloud?

Pierre-Yves avatar
Pierre-Yves

I guess so, it's in terraform 0.12.

Also you may want to consider storing the ssh key in vault and not in the repos

Valter Silva avatar
Valter Silva

Now I can’t queue any plans because it will force a new key pair to be created..

Pierre-Yves avatar
Pierre-Yves

hello, I am looking for a way to simplify this code that generates an Azure LB config, using a for or for_each loop. Maybe the data structure needs to be changed. The point is that I would like to loop over the public IPs (ip1 and ip2) and then over each remote_port and lb_port. Can you help?

locals {
  lbconfig = {
    ip1 = {
       remote_port = {
        http  = ["Tcp", "80"]
        https = ["Tcp", "443"]
       }
      lb_port = {
        http = ["80", "Tcp", "80"]
        https = ["443", "Tcp", "443"]
      }
    }
    ip2 = {
       remote_port = {
        http  = ["Tcp", "80"]
        https = ["Tcp", "443"]
       }
      lb_port = {
        http = ["80", "Tcp", "80"]
        https = ["443", "Tcp", "443"]
      }
    }
  }
}

resource "azurerm_lb_rule" "azlb" {
  count                          = length(local.lbconfig["ip1"]["lb_port"])
  resource_group_name            = var.resource_group_name
  loadbalancer_id                = azurerm_lb.azlb.id
  name                           = "${var.prefix}-${var.env}-${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"
  protocol                       = element(local.lbconfig["ip1"]["lb_port"]["${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"], 1)
  frontend_port                  = element(local.lbconfig["ip1"]["lb_port"]["${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"], 0)
  backend_port                   = element(local.lbconfig["ip1"]["lb_port"]["${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"], 2)
  frontend_ip_configuration_name = var.frontend_name
  enable_floating_ip             = false
  backend_address_pool_id        = azurerm_lb_backend_address_pool.azlb.id
  idle_timeout_in_minutes        = 5
  probe_id                       = element(azurerm_lb_probe.azlb.*.id, count.index)
  depends_on                     = [azurerm_lb_probe.azlb]
}
rajeshb avatar
rajeshb

done something similar, i hope this helps

dev_env_apps     = { zeppelin = 8890, spark = 18080, master = 50070 }
resource "aws_alb_target_group" "this" {
  for_each             = var.dev_env_apps
  name                 = "${var.tags.environment}-${each.key}-alb-tg"
  port                 = each.value
  protocol             = "HTTP"
  vpc_id               = var.vpc_id
  deregistration_delay = 30
  target_type          = "instance"
  tags                 = var.tags
}

resource "aws_alb_listener_rule" "this" {
  for_each     = var.dev_env_apps
  listener_arn = var.alb_lstnr_arn

  action {
    target_group_arn = aws_alb_target_group.this[each.key].arn
    type             = "forward"
  }

  condition {
    field  = "host-header"
    values = ["${each.key}.*"]
  }
}

resource "aws_lb_target_group_attachment" "this" {
  depends_on       = [aws_emr_cluster.this]
  for_each         = var.dev_env_apps
  target_group_arn = aws_alb_target_group.this[each.key].arn
  target_id        = data.aws_instance.master.id
  port             = each.value
}

resource "aws_route53_record" "this" {
  for_each = var.dev_env_apps
  zone_id  = var.zone_id
  name     = "${each.key}.${var.zone_name}"
  type     = "A"

  alias {
    name                   = var.alb_dns_name
    zone_id                = var.alb_zone_id
    evaluate_target_health = true
  }
}
RB avatar

anyone come up with a terraform method of switching between launch_type=EC2 and launch_type=FARGATE with zero downtime ? looking for a terraform-y way to do this.

im wondering if the code deploy method would work

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codedeploy_deployment_group#blue-green-deployments-with-ecs

John avatar

Hi all, can anyone direct me on how to use multiple_definitions from terraform-aws-ecs-container-definition with the module terraform-aws-ecs-alb-service-task?

1
RB avatar

for the container_definition_json argument of the ecs_alb_service_task module

RB avatar

simply put in a list of container json

RB avatar
  container_definition_json = jsonencode([
    module.my_container.json_map_object,
    module.my_other_container.json_map_object,
    module.my_other_other_container.json_map_object,
  ])
John avatar

Hi thanks for the tip, I was trying that earlier and I get “An output value with the name “json_map” has not been declared”

RB avatar

the new container definition now uses a different output

RB avatar

what version of the container definition are you using

RB avatar

try using this output instead json_map_encoded

John avatar

I have just switched to 0.38.0, going to try your suggestion now, will update you shortly.

John avatar

I now see the following error: Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go value of type ecs.ContainerDefinition

on .terraform/modules/ecs_alb_service_task/main.tf line 34, in resource "aws_ecs_task_definition" "default": 34: resource "aws_ecs_task_definition" "default" {

RB avatar

Maybe try "[${module.container.json_map_encoded}]" to see if it works

RB avatar

Then add additional containers to that one at a time

Aleksandr Fofanov avatar
Aleksandr Fofanov
cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

:--1:1
John avatar

Guys, sorry for the delay, I will try your suggestion @RB and I will check the updated entries from that link @Aleksandr Fofanov and let you know if they work, will probably be tomorrow before you hear back from me, appreciate the help so far.

RB avatar

np. this person asked a similar question to you.

RB avatar
Multi-container task definition · Issue #69 · cloudposse/terraform-aws-ecs-alb-service-task

Multi-container task definition The current module implementation assumes one container per tast definition. AWS however allows multiple container definitions per single tast. terraform-aws-ecs-alb…

2020-07-27

Pierre-Yves avatar
Pierre-Yves

hello, I manage terraform code in multiple repos. Is there a way to add a resource tag recording the repos name ? alternatively I can manually set it.

Henry Carter avatar
Henry Carter

Assuming the folder the repo is checked out into has the same name as the repo, you can use an inline command to resolve the git repo name and pass it to the terraform command line

Henry Carter avatar
Henry Carter
terraform [...] -var "reponame=$(basename $(git rev-parse --show-toplevel))"
Henry Carter avatar
Henry Carter

It’s not very resilient, and I think I would prefer to use a vars file instead as it only needs setting once per repo…

RB avatar

We use an internal module to run a similar command. Hoping to open source it eventually. For now Henry’s method would work

contact871 avatar
contact871

Hi, any thoughts about Terraform modules coming from registry vs git? I can’t find any benefit of the registry other than centralizing modules and abstracting access to a different layer…

RB avatar

it doesnt seem like there’s a way to use modules from the tf registry that’s free for private modules. but it does allow you to use the version argument of a module.

whereas the git/ssh method requires you to use the ?ref=tags/x.y.z in order to pin down a version

RB avatar

i prefer the latter because it works for both cases

thumbsup_all1
contact871 avatar
contact871

thx @RB

np1
Steven avatar
Steven

There was another thread about this. Some differences with registry:

• Can only get released versions

• Faster. Download tarball instead of cloning repo

• Version is a release, versus a tag that could be changed

There are private registries available, including TF Enterprise and some open source tools

contact871 avatar
contact871

@Steven git tags can be re-pushed (with force), which isn't ideal from a security perspective. But couldn't one use a similar attack vector by removing and recreating a specific Terraform registry version?

Steven avatar
Steven

Terraform registry is based on github releases. I don’t think you can overwrite a release, but you can delete a release then recreate the same version. So from a security perspective, it is only a little harder to change. For me the first 2 reasons are the biggest wins

contact871 avatar
contact871

@Steven going through the code, I found that one could use the depth=1 parameter: https://github.com/hashicorp/go-getter/blob/3db8f8b08debed6f7d341e147435feefc2d3def3/get_git.go#L169

hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter

contact871 avatar
contact871

and here is a funny thing about tar vs clone:

$ time curl -O https://codeload.github.com/gatsbyjs/gatsby/tar.gz/gatsby%402.24.11
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  380M    0  380M    0     0  1195k      0 --:--:--  0:05:25 --:--:-- 2493k
real	5m 25.97s
user	0m 11.64s
sys	0m 15.40s
$ time git clone https://github.com/gatsbyjs/gatsby.git --depth 1 --branch gatsby@2.24.11
Cloning into 'gatsby'...
remote: Enumerating objects: 7360, done.
remote: Counting objects: 100% (7360/7360), done.
remote: Compressing objects: 100% (6224/6224), done.
remote: Total 7360 (delta 715), reused 4268 (delta 459), pack-reused 0
Receiving objects: 100% (7360/7360), 367.08 MiB | 1.49 MiB/s, done.
Resolving deltas: 100% (715/715), done.
Note: switching to '0f2be968c0e3b889650f5fa5dd05692b50cd7b2a'.

...

Updating files: 100% (6347/6347), done.
real	4m 13.89s
user	0m 43.44s
sys	0m 26.42s

Might clone be downloading things in parallel?

contact871 avatar
contact871

Can only get released versions -> I think this could be also seen as a limitation because one can get only releases. With git, one can get releases (tags), branches or even specific commits

contact871 avatar
contact871

Currently I personally don’t see any technical benefits of the registry :neutral_face:

The only benefit from the user perspective is the clear interface for authentication terraform login. The git alternative looks more complex and one would definitely need a wrapper.

contact871 avatar
contact871

@Steven I really appreciate Your feedback so far!

contact871 avatar
contact871

I see that there are issues with the depth parameter: https://github.com/hashicorp/terraform/issues/23641

git does not clone the defined branch when defining depth to module source url · Issue #23641 · hashicorp/terraform

Terraform Version Terraform v0.12.17 Terraform Configuration Files module &quot;test&quot; { source = &quot;git://github.com/terraform-providers/terraform-provider-azurerm.git//examples/app->…

walicolc avatar
walicolc

hello, is it possible to override the default_actions config in terraform-aws-modules/alb/aws v5.6.0. I notice it automatically creates an lb listener with default_actions , but I’d like to tweak it a bit

walicolc avatar
walicolc

resolved.

RB avatar

anyone use a module to create scheduled ecs tasks ? looking at this module, but open to other modules too.

https://github.com/cloudposse/terraform-aws-ecs-alb-service-task (56 stars but not as applicable)

leaning on the last one

thumbsup_all1
RB avatar

thread starter

RB avatar

i dont believe there is a cloudposse one except for the cloudposse alb one.

RB avatar

leaning on the last one since it has variables to turn on and off the iam roles by passing existing ones. would love to hear more thoughts on this.

Cloud Posse avatar
Cloud Posse
04:00:59 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Aug 05, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Jon avatar

How do I properly get outputs working for the S3 module?

  source  = "cloudposse/s3-bucket/aws"
  version = "0.14.0"
Jon avatar

I’m trying to do this but it isn’t working..

output "this_s3_bucket_id" {
  description = "The name of the bucket."
  value       = module.cloudtrail_s3_bucket.aws_s3_bucket.default.*.bucket_id
}
Jon avatar

I tried to copy the outputs from the GitHub repo thinking that would help but it didn’t..

Jon avatar

never mind. Found the examples directory with the outputs. Sorry about that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all Cloud Posse modules have examples directory with a working example, which is deployed to AWS using terratest in the test directory

:--1:1
Jon avatar

appreciate all the hard work getting these modules created and available to the public!

RB avatar

Been looking more into how to deny CRUD on aws resources outside of terraform

it seems like it may be possible with a conditional policy using StringLike on the UserAgent, since the user agent contains the word terraform

RB avatar

anyone use this approach before?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know @Jan implemented this before. His implementation included 2 accounts per user: one account permitted read-only web console access, and the other admin access but no web console access.

RB avatar

thats very cool. glad it’s been tried and tested. would love to hear more input from him

Jan avatar

Ah yea that was fun

Jan avatar

in the end though, the simple truth is that if you have a power user (CLI intended use), that user can just open the console via the CLI if they really wanted to

RB avatar

@ do you mind sharing your comparison for the aws:useragent? I haven’t tried this yet but was thinking about testing/implementing this…

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowKmsPutKeyPolicyOnlyViaTerraform",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["kms:PutKeyPolicy"],
      "Resource": [
        "arn:aws:kms:us-east-1:012345678:key/guidguid-guid-guid-guid-guidguidguid"
      ],
      "Condition": {
        "StringLike": {"aws:UserAgent": "*terraform*"}
      }
    }
  ]
}
RB avatar

where the full user agent is something like this

aws-sdk-go/1.32.12 (go1.13.7; darwin; amd64) APN/1.0 HashiCorp/1.0 Terraform/0.12.26 (+https://www.terraform.io)

2020-07-26

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:05:13 AM

Scheduled Maintenance | Terraform Cloud Jul 26, 07:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jul 22, 08:30 UTC Scheduled - We will be undergoing a scheduled maintenance for Terraform Cloud on July 26th at 7:00am UTC. During this window, there may be interruptions to terraform run output, and some runs might be delayed.

Scheduled Maintenance | Terraform Cloud
HashiCorp Services’s Status Page - Scheduled Maintenance Terraform Cloud.
HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:55:10 AM

Scheduled Maintenance | Terraform Cloud Jul 26, 07:49 UTC Completed - The scheduled maintenance finished successfully. The system is fully operational again.Jul 26, 07:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jul 22, 08:30 UTC Scheduled - We will be undergoing a scheduled maintenance for Terraform Cloud on July 26th at 7:00am UTC. During this window, there may be interruptions to terraform run output, and some runs might be delayed.

2020-07-25

Mark avatar

Anyone else observing windows ec2 instances and route53 records taking more time than normal to create? I notice these two resources take a considerable amount of time even though I can see them already created in the AWS Console. A windows ec2 instance takes ~7 minutes

2020-07-24

Eric Berg avatar
Eric Berg

I’m having a difficult time getting my providers set up right in the caller and called modules i’m using. For the most part, i’m trying to have my top-level mod providers look like this, where infra is the profile name of the master account:

provider "aws" {
  region  = var.aws_region
  profile = "infra"
  assume_role {
    role_arn = "arn:aws:iam::${var.aws_account_id}:role/OrganizationAccountAccessRole"
  }
  forbidden_account_ids = local.forbidden_account_ids
}

Then, i pass them down – all but one of course has an alias:

module "stack_install" {
  source = "../../../../../application-stack"

  providers = {
    aws                  = aws
    aws.infra            = aws.infra
    aws.cf               = aws
    aws.cf-us-east-1     = aws.cf-us-east-1
    aws.client-us-east-1 = aws.cf-us-east-1
    aws.route53          = aws.infra
  }
}

And in the called module, i just stub out the provider like this:

provider "aws" {
  region  = local.default_region
  forbidden_account_ids = [local.master_account_id]
}

But i’m getting this error:

Error: No valid credential sources found for AWS Provider.
        Please see <https://terraform.io/docs/providers/aws/index.html> for more information on
        providing credentials for the AWS Provider

So, what am i doing wrong? I had full configs for some providers in the called modules, but moving that to the top-level mods I believe is the right thing to do, since TF seems to be subject to really weird dependencies, like one module relying on the provider from another module that was called as well.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why so many providers ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
providers = {
    aws                  = aws
    aws.infra            = aws.infra
    aws.cf               = aws
    aws.cf-us-east-1     = aws.cf-us-east-1
    aws.client-us-east-1 = aws.cf-us-east-1
    aws.route53          = aws.infra
  }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the difference b/w them?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

region and role to assume?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

check the regions in the child modules. They should be the same as in the top-level providers.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you should not prob specify regions in the child modules providers

Eric Berg avatar
Eric Berg

I’m spinning up new accounts, with most stuff in us-east-2, but Cloudfront (cf) and some other stuff (ACM) has to be in us-east-1, and we have a master account that has our DNS as well as a few other global resources that we need to hit, so aws is just the new account in us-east-2, aws.infra is the master account, and cf-us-east-1 is the new account in us-east-1. Region is whatever region we are deploying this stack to – now, us-east-2.

Regarding the role assumption, i’m spinning up this stack in separate accounts, and i’m using role assumption to access them.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they will be taken from the top-level providers

Eric Berg avatar
Eric Berg

I think you have to specify regions in the child mod providers, or you can’t stub it out. I’ll confirm that.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I think this should not be in a child module

provider "aws" {
  region  = local.default_region
  forbidden_account_ids = [local.master_account_id]
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in each resource in child modules, you just use

provider = "aws"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(or other providers)

Eric Berg avatar
Eric Berg

perhaps not, but with all this cross-account access, i’ve hit that wall and avoided polluting our master account, so i think it’s valuable.

Eric Berg avatar
Eric Berg

I was hoping to avoid explicitly setting the provider for the 150+ resources in this build. Obviously, I use the provider alias to reference everything else.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform recommends not declaring providers in child modules, for many different reasons. One of them is that if you define a provider in a child module and then remove or change it, you will not be able to destroy the resources created by that provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

define everything at top-level and just provide the providers to all child modules
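A minimal sketch of that layout, with hypothetical names (the alias dns and the module path are illustrative): full provider configurations live only in the root, and child modules receive them through the providers map.

```hcl
# Root module: the only place provider configurations are defined.
provider "aws" {
  region = "us-east-2"
}

provider "aws" {
  alias  = "dns"
  region = "us-east-2"
  # profile / assume_role settings also belong here, never in child modules
}

module "app" {
  source = "./modules/app"

  providers = {
    aws     = aws
    aws.dns = aws.dns
  }
}

# Child module (./modules/app): at most an empty alias stub, no other properties.
provider "aws" {
  alias = "dns"
}
```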

Eric Berg avatar
Eric Berg

wait…it was my understanding that you need to stub out the provider – maybe it’s because pycharm barfs if you don’t have a “local” provider,

Eric Berg avatar
Eric Berg

So, just completely remove the provider defs from the submods?

Eric Berg avatar
Eric Berg

When I remove the provider defs from the submods, I get errors like this:

Error: missing provider module.stack_install.provider.aws.cf

Also, my IDE (pycharm) does not recognize provider refs, unless the provider is defined in the module. I know that’s not exactly a TF issue, but one that impacts my workflow

Eric Berg avatar
Eric Berg

I’m going to see what happens, when i completely clear out the state and start over.

loren avatar
loren

we’ve found we need to stub the provider alias in a module, in order to use it, e.g.:

provider "aws" {}

provider "aws" {
  alias = "cf"
}
loren avatar
loren

but every other piece of the provider config goes in the top-level root config

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

per terraform, and what we saw many times, you should not define providers in child modules, nor mix the definitions at top-level and in child modules

loren avatar
loren

but that doesn’t entirely work… if you use an “alias” in a resource in the module, then you must stub the provider with the alias name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@loren something like that is ok, you just should not define any other properties of those providers

1
loren avatar
loren

roger, yeah, that tracks what we’ve seen also

Eric Berg avatar
Eric Berg

Awesome. that generally jibes with what I’ve been trying to move to. I kicked this can down the road a few months ago and it’s time to pay the piper.

MrAtheist avatar
MrAtheist

Anyone familiar with the TGW module for terraform? Why does it create two transit gateway route tables for me?

https://github.com/terraform-aws-modules/terraform-aws-transit-gateway/blob/master/main.tf#L39-L51

terraform-aws-modules/terraform-aws-transit-gateway

Terraform module which creates Transit Gateway resources on AWS - terraform-aws-modules/terraform-aws-transit-gateway

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can use the default route table, but it has a lot of restrictions

terraform-aws-modules/terraform-aws-transit-gateway

Terraform module which creates Transit Gateway resources on AWS - terraform-aws-modules/terraform-aws-transit-gateway

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

e.g. you can’t use the default route table when you provision TGW in multi-account env, share the TGW with the organization using AWS RAM and attach VPCs from different accounts to the TGW

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so, by creating a separate route table, you can use it in all different scenarios (single account and multi-account sharing)

MrAtheist avatar
MrAtheist
07:57:08 PM

thanks for the response, i guess terraform created the two by default? and i’m guessing the one with the empty name is the default?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, it says “Default” in the header

MrAtheist avatar
MrAtheist

ah ok “default association….”

MrAtheist avatar
MrAtheist

by the way, looking at the template, is there a way to disable the creation of the RAM resources? my org hasn’t been properly set up for sharing yet.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(those terraform-aws-modules are not CloudPosse modules, but I guess you can open a PR if you need any new feature)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the module has a var var.share_tgw to enable/disable sharing
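A hedged sketch of turning sharing off (the module source is the one from the thread; share_tgw is the only variable name confirmed there, the rest is illustrative):

```hcl
module "tgw" {
  source = "terraform-aws-modules/transit-gateway/aws"

  name = "my-tgw" # hypothetical

  # Should skip the AWS RAM sharing resources, per the module's share_tgw flag.
  share_tgw = false
}
```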

MrAtheist avatar
MrAtheist

i set share_tgw to false but somehow it still tried to create the RAM resources

MrAtheist avatar
MrAtheist

and i found https://github.com/cloudposse/terraform-aws-transit-gateway, but it looks kind of empty. If you could tweak it with my feedback that would be great.

cloudposse/terraform-aws-transit-gateway

Contribute to cloudposse/terraform-aws-transit-gateway development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, we started the module, but implementing it for a single account is easy (and not too useful), while implementing it for multi-account is not simple (and we use TGW for multi-account environments). We’ll get back to it soon

MrAtheist avatar
MrAtheist

hey @Andriy Knysh (Cloud Posse) sorry for bugging, but have you tried spinning up TGW with the complete example from https://github.com/terraform-aws-modules/terraform-aws-transit-gateway? For me that doesn’t spin up the vpc attachment nor the routes within the tgw route table. I’ve been banging my head on this for a while now…

terraform-aws-modules/terraform-aws-transit-gateway

Terraform module which creates Transit Gateway resources on AWS - terraform-aws-modules/terraform-aws-transit-gateway

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, we did not try that module

Briet Sparks avatar
Briet Sparks

Hi, I’m trying to use terraform-aws-ecs-codepipeline to pull from a personal github repo, but the module thinks my gh username is a gh org. I get an error:

 GET <https://api.github.com/orgs/brietsparks>: 404 Not Found []

  on .terraform/modules/ecs_push_pipeline.github_webhooks/main.tf line 7, in provider "github":
   7: provider "github" {
cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

Joe Niland avatar
Joe Niland

@ can you share the entire module block?

cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

Briet Sparks avatar
Briet Sparks
module "ecs_push_pipeline" {
  source                = "git::<https://github.com/cloudposse/terraform-aws-ecs-codepipeline.git?ref=master>"
  name                  = "guestbook-ci"
  region                = var.region
  repo_owner            = "brietsparks"
  github_webhooks_token = var.github_webhooks_token
  repo_name             = "guestbook"
  image_repo_name       = "guestbook"
  branch                = "ci-pract"
  service_name          = "guestbook"
  ecs_cluster_name      = "guestbook"
  privileged_mode       = "true"
}

2020-07-23

Pedro Henriques avatar
Pedro Henriques

Hello everyone. A colleague of mine asked me to present this pull request https://github.com/cloudposse/terraform-aws-elasticsearch/pull/61 adding the possibility to specify a different aws ec2 service identifier

Adds AWS EC2 Service Name to service. by antoniocanelas · Pull Request #61 · cloudposse/terraform-aws-elasticsearch

what Adds possibility to insert different aws ec2 service identifier. why Insert aws ec2 service identifier different than [&quot;ec2.amazonaws.com&quot;] is necessary, for example in china accou…

Marcin Brański avatar
Marcin Brański

Review done. Not far from acceptance if it passes our code checks.

Adds AWS EC2 Service Name to service. by antoniocanelas · Pull Request #61 · cloudposse/terraform-aws-elasticsearch

what Adds possibility to insert different aws ec2 service identifier. why Insert aws ec2 service identifier different than [&quot;ec2.amazonaws.com&quot;] is necessary, for example in china accou…

Pedro Henriques avatar
Pedro Henriques

Thanks for the feedback. We’ll keep working on it and get back to you when it is updated

1
praveen avatar
praveen

#terraform I have the following error on ....\modules\BaseInfrastructure\main.tf line 225, in module "diagnostic_settings": 225: resource_id = azurerm_virtual_network.this[each.key].id

The “each” object can be used only in “resource” blocks, and only when the “for_each” argument is set.

praveen avatar
praveen

#terraform . Here is the snippet of the module call I am using:

module "diagnostic_settings" {
  source      = "../DiagnosticSettings"
  resource_id = azurerm_virtual_network.this[each.key].id

praveen avatar
praveen

resource "azurerm_network_security_group" "this" {
  for_each = var.network_security_groups
  name     = each.value["name"]

Craig Dunford avatar
Craig Dunford

You need a for_each on the resource you are trying to use the each in (your module diagnostic_settings in this case.

praveen avatar
praveen

yes, I have it in the resource. if you see the resource details I have pasted below

praveen avatar
praveen

resource "azurerm_network_security_group" "this" {
  for_each = var.network_security_groups
  name     = each.value["name"]

Craig Dunford avatar
Craig Dunford

you are using each in your module "diagnostic_settings" which has no for_each :

module "diagnostic_settings" {
  source                              = "../DiagnosticSettings"
  resource_id        = azurerm_virtual_network.this[each.key].id
praveen avatar
praveen

can you tell me how to use it

Craig Dunford avatar
Craig Dunford

Can you describe what you are trying to achieve?

praveen avatar
praveen

I am trying to enable diagnostic logging for all network security groups

praveen avatar
praveen

resource "azurerm_network_security_group" "this" {
  for_each            = var.network_security_groups
  name                = each.value["name"]
  location            = local.location
  resource_group_name = azurerm_resource_group.this.name

praveen avatar
praveen

module "diagnostic_settings" {
  source             = "../DiagnosticSettings"
  resource_id        = lookup(azurerm_virtual_network.this[each.key].id)
  retention_days     = var.retention_days
  storage_account_id = var.diagnostics_storage_account_name
}

praveen avatar
praveen

azurerm_network_security_group is the resource for which I am calling the diagnostic_settings module to enable diagnostic settings

Craig Dunford avatar
Craig Dunford

I believe something like this would work:

module "diagnostic_settings" {
  for_each = var.network_security_groups

  source             = "../DiagnosticSettings"
  resource_id        = azurerm_virtual_network.this[each.key].id
  retention_days     = var.retention_days
  storage_account_id = var.diagnostics_storage_account_name
}
Craig Dunford avatar
Craig Dunford
count and for_each for modules · Issue #17519 · hashicorp/terraform
Is it possible to dynamically select a map variable, e.g.? Currently I am doing this: locals { map1 = { name1 = "foo" name2 = "bar" } } module "x1" { sour…
Craig Dunford avatar
Craig Dunford

may require 0.13 per that issue

praveen avatar
praveen

got it. Let me directly use the resource without calling module
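A hedged sketch of that direct approach on Terraform 0.12, putting the for_each on the azurerm_monitor_diagnostic_setting resource itself (the log category and variable names here are illustrative, not taken from praveen's actual config):

```hcl
resource "azurerm_monitor_diagnostic_setting" "nsg" {
  for_each = var.network_security_groups

  name               = "${each.value["name"]}-diag"
  target_resource_id = azurerm_network_security_group.this[each.key].id
  storage_account_id = var.diagnostics_storage_account_id # assumed: an ID, not a name

  log {
    category = "NetworkSecurityGroupEvent"

    retention_policy {
      enabled = true
      days    = var.retention_days
    }
  }
}
```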

Pierre-Yves avatar
Pierre-Yves

hi, I may help but would need the content of “var.network_security_groups” to check that it’s a map etc ..

praveen avatar
praveen

and I get the following error

praveen avatar
praveen

Error: Reference to “each” in context without for_each

on ....\modules\BaseInfrastructure\main.tf line 225, in module "diagnostic_settings": 225: resource_id = azurerm_virtual_network.this[each.key].id

The “each” object can be used only in “resource” blocks, and only when the “for_each” argument is set.

Pierre-Yves avatar
Pierre-Yves

i guess you need azurerm_virtual_network.this["${each.key}"].id

drexler avatar
drexler

Hi, i have MFA set up on certain AWS accounts. With the AWS CLI, i get prompted to enter the MFA code when executing commands in those accounts. How can i use Terraform to provision infrastructure there with MFA enabled?

Adrian avatar
Adrian

aws-vault

drexler avatar
drexler

thx

Jon avatar

Hello, I am trying to use the cloudposse/kms-key/aws public module. On the Terraform Registry, I do not see the option to configure a custom KMS key policy, but when I click the link to the GitHub repo, I see that as an available input. Unfortunately, I haven’t been able to set up a custom policy. Is this possible to do using this module? Thanks in advance!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Unfortunately, I haven’t been able to setup a custom policy.
can you elaborate?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what happens when you try

Jon avatar

Whenever I grab a known working policy that I have from outside this module and try to inject it into this module, Terraform v0.12.24 outputs an error saying:

Error: Unsupported argument

on kms.tf line 87, in module "cloudtrail_CMK":
87:   policy = <<JSON

An argument named "policy" is not expected here
Jon avatar

@Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you share the actual invocation of the module?

Jon avatar

sure

Jon avatar
module "cloudtrail_CMK" {
  source  = "cloudposse/kms-key/aws"
  version = "0.2.0"

  name                    = "${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}_CMK"
  description             = "KMS key for S3 buckets and objects"
  deletion_window_in_days = 10
  enable_key_rotation     = true
  alias                   = "alias/${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}_CMK"

  policy = <<JSON
{
  "Version": "2012-10-17",

  "Id": "key-default-1",
  "Statement": [
    {
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxxxxxx:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    }
  ]
}
JSON

  tags = "${merge(
    var.DEFAULT_TAGS,
    map(
      "Name", "${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}",
      "Environment", terraform.workspace
    )
  )}"
}
Jon avatar

Do you know what I could be doing wrong? @Erik Osterman (Cloud Posse)

Jon avatar

I appreciate the help btw.. I’ve been scratching my head on this for a little while now..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrmm.. nothing jumps out at me immediately. Some other feedback though:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
tags = "${merge(
    var.DEFAULT_TAGS,
    map(
      "Name", "${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}",
      "Environment", terraform.workspace
    )
  )}"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is pre-0.12 syntax

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it might work, but it’s not leveraging the full capabilities of HCL2

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You’re using 0.2.0 of the terraform-aws-kms-key module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem is the policy argument was not in that version.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The current version is 0.5.0

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Try:

  source  = "cloudposse/kms-key/aws"
  version = "0.5.0"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you’re on terraform 0.11, then there might not be support for this in our module.

Jon avatar

Thank you, sir. I’ll try again with the updated version. I’m using Terraform 0.12.24

Jon avatar

That’s exactly what it was.. Thank you Erik!!

Jon avatar

I’ll be sure to grab the latest and greatest from GitHub next time. I think in the Terraform Registry it still referenced the older version so I didn’t think to check if there was a newer version

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Our usage example might erroneously refer to 0.2.0. Typically our examples just pin to master and warn the user to pin to a version.

Jon avatar

If I don’t specify a version, will it automatically pull the latest and greatest? I’d personally rather do that..

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, that’s not advisable though.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem is with so many modules, we can never ensure we do not break backwards compatibility.

Jon avatar

gotcha, okay then. I’ll stick with known working version numbers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So pin to the working version when you provision the infrastructure.
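For example, a pessimistic version constraint accepts patch releases of a known-good version while rejecting potentially breaking minor bumps:

```hcl
module "cloudtrail_CMK" {
  source  = "cloudposse/kms-key/aws"
  version = "~> 0.5.0" # allows 0.5.x, never 0.6.0 or above
  # ...
}
```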


2020-07-22

Pierre-Yves avatar
Pierre-Yves

hello, is there a terraform command that allows fetching and inspecting data outputs? for example, I would like to get and print the values available at:

 data.terraform_remote_state.core_state.outputs.mymodule.*
PePe avatar

in the terraform console

PePe avatar

you can do

PePe avatar

data.terraform_remote_state.core_state.outputs.mymodule

PePe avatar

and it should show you all values

Pierre-Yves avatar
Pierre-Yves

haaa perfect ! thanks

Pierre-Yves avatar
Pierre-Yves

when I fetch it, terraform console prints an id value for the given data.terraform_remote_state key, and the output value is the expected one.

 data.terraform_remote_state.core_state.outputs.placement_groups[0].entry_point.id
/subscriptions/07xyz/resourceGroups/tf_stage_placement_groups/providers/Microsoft.Compute/proximityPlacementGroups/tf_stage_placement_group_entry_point

but when I give it to a module

module "haproxy" {
  placement_group_id                 = data.terraform_remote_state.core_state.outputs.placement_groups[0].entry_point.id
}

I have the error

  on [main.tf](http://main\.tf) line 74, in module "haproxy":
  74:   placement_group_id                 = data.terraform_remote_state.core_state.outputs.placement_groups[0].entry_point.id
    |----------------
    | data.terraform_remote_state.core_state.outputs.placement_groups[0] is tuple with 1 element
This value does not have any attributes.

what should I change to give the id to my module ?

Pierre-Yves avatar
Pierre-Yves

solved! a little shenanigan: it seems terraform console dropped one [] from the output; data.terraform_remote_state.core_state.outputs.placement_groups[0][0].entry_point.id solves it.
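An alternative sketch that avoids hard-coding the double index, assuming placement_groups really is a nested list of objects as the console output above suggests:

```hcl
locals {
  # collapse the nesting so the exact depth of the remote output matters less
  placement_groups = flatten(data.terraform_remote_state.core_state.outputs.placement_groups)
}

module "haproxy" {
  # ...
  placement_group_id = local.placement_groups[0].entry_point.id
}
```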

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
08:35:13 AM

Scheduled Maintenance | Terraform Cloud THIS IS A SCHEDULED EVENT Jul 26, 07:00 - 09:00 UTC. Jul 22, 08:30 UTC Scheduled - We will be undergoing scheduled maintenance for Terraform Cloud on July 26th at 7:00am UTC. During this window, there may be interruptions to terraform run output, and some runs might be delayed.

Scheduled Maintenance | Terraform Cloud
HashiCorp Services’s Status Page - Scheduled Maintenance Terraform Cloud.
xluffy avatar
xluffy

Hi, I want to create a VPC peering connection across accounts. I followed this module: https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account

When I run tf apply, I get an error like this:

Error creating VPC Peering Connection: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authoriza
tion failure message: f0JL_4uWp-Mwhq3z3IXzmRBpgU1j5tAqDBCqcAadPglsZUj221QT_jFXXJiZU4Ff--t_mdBRNLntwBgWBUvbLS8Z_MGQAMbmRg07sLwu66nJas330iV5tosDVC1RVPsW07ooR9M2nr2zyqcz8QTIe0m1dKCJ1MNrBJNS980XtIpmuvv6Zurajip2-3GAyXaRxM6eQj3IYz-rI5seHfoSdiA34k3Tm4rFEx7ITP2aIHgc5tmsH-OMltrn0Nr6z-vgAtxq4SYYFyNNOVLEL9wxXMn1JDfEGKqxVaN88cw4KbuErUPTwwquTR6p9PkfBv_Z9ADm8xcKuzde
f3t9i9o_WxF2_Y01ybW1I-Avb9wBhU38RJ7WAaT-meVRqF0iJMrvg0ZAsaFcAl44J98XItv1Jr0xUozJNQmWQbYwvAOEcdkRvtfOlElUhUsqVdGDMCfDtmCTFdDqQWAgR-KqjZLJpPHqMpyd6g5YF1wRtZkm9IrLg8L5ZXuCuoURvR8Q4
AvCRPNuTHDhfSxhotKP9-D_rgr3T1YixQOwwppw1u6BuXIWTsF0GkshxxP55i7xMecabyop1T7yUyWhkfBOvFCGgDAwfddHMOT_7l-o_qmm7z-iiZRpsRo2cF4HbBauzcQbOKC2RO1CS5M5HtiXx29YoOmo272EhNL7fUl2N3PQ9QEfPnjfRAG_xlf4CnBT6jzohOYEn7NoLFhhJyZLtj3HwFYIQcoXzhtJu7s

So I tried to decode this message with aws sts decode-authorization-message ...

{
  "allowed": false,
  "explicitDeny": false,
  "matchedStatements": {
    "items": []
  },
  "failures": {
    "items": []
  },
  "context": {
    "principal": {
      "id": "AROARX57SNBJI7LD7TL5Q:1111111111111111",
      "arn": "arn:aws:sts::22222222222:assumed-role/r_ops_peering_access/1111111111111111"
    },
    "action": "ec2:CreateVpcPeeringConnection",
    "resource": "arn:aws:ec2:us-west-2:22222222222:vpc/vpc-33333333333",
    "conditions": {
      "items": [
        {
          "key": "22222222222:Env",
          "values": {
            "items": [
              {
                "value": "Prod"
              }
            ]
          }
        },
        {
          "key": "ec2:ResourceTag/Env",
          "values": {
            "items": [
              {
                "value": "Prod"
              }
            ]
          }
        },
        {
...

This is my IAM policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateRoute",
        "ec2:DeleteRoute"
      ],
      "Resource": "arn:aws:ec2:*:XXXXXXXX:route-table/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcPeeringConnections",
        "ec2:DescribeVpcs",
        "ec2:ModifyVpcPeeringConnectionOptions",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeRouteTables"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AcceptVpcPeeringConnection",
        "ec2:DeleteVpcPeeringConnection",
        "ec2:CreateVpcPeeringConnection",
        "ec2:RejectVpcPeeringConnection"
      ],
      "Resource": [
        "arn:aws:ec2:*:XXXXXXXX:vpc-peering-connection/*",
        "arn:aws:ec2:*:XXXXXXXX:vpc/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteTags",
        "ec2:CreateTags"
      ],
      "Resource": "arn:aws:ec2:*:XXXXXXXX:vpc-peering-connection/*"
    }
  ]
}

Very confused now. The error message says I don’t have permission to create a peering connection, but I do have this permission in my policy. Any idea?

cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account

Release notes from terraform avatar
Release notes from terraform
02:34:14 PM

v0.13.0-rc1 0.13.0-rc1 (July 22, 2020) BUG FIXES: command/init: Fix confusing error message for locally-installed providers with invalid package structure (#25504) core: Prevent outputs from being evaluated during destroy (#25500)

Add post-install provider cache validation and error reporting by alisdair · Pull Request #25504 · hashicorp/terraform

When installing a provider which has an invalid package structure (e.g. a missing or misnamed executable), the previous error message was confusing: This PR adds support for a post-install provide…

Do not evaluate output when doing a full destroy by jbardin · Pull Request #25500 · hashicorp/terraform

If we&#39;re adding a node to remove a root output from the state, the output itself does not need to be re-evaluated. The exception for root outputs caused them to be missed when we refactored res…

Release notes from terraform avatar
Release notes from terraform
04:14:15 PM

v0.12.29 Version 0.12.29

Release notes from terraform avatar
Release notes from terraform
04:34:21 PM

v0.12.29 0.12.29 (July 22, 2020) BUG FIXES: core: Prevent quadratic memory usage with large numbers of instances by not storing the complete resource state in each instance (#25633)

backport gh 25544 by jbardin · Pull Request #25633 · hashicorp/terraform

This backports #25544, but is not a direct cherry-pick of the commit due to significant changes in the types between 0.12 and 0.13. The AbstractResourceInstance type was storing the entire Resource…

ibrahim avatar
ibrahim

I am looking for an EKS module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.


2020-07-21

Paul Catinean avatar
Paul Catinean

Hi, has anyone here used terraform with helmfile together?

roth.andy avatar
roth.andy

I know @mumoshu has been working on a helmfile provider

Paul Catinean avatar
Paul Catinean

Yes he has, and I’ve happily downloaded the latest release and built it locally: https://github.com/mumoshu/terraform-provider-helmfile

mumoshu/terraform-provider-helmfile

Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.

Paul Catinean avatar
Paul Catinean

But in his example he mentions external helmfile_release_set which I use

Paul Catinean avatar
Paul Catinean

But I also need to send env variables with it

Paul Catinean avatar
Paul Catinean

And sending env variables is shown only with inline releases

Andrey Nazarov avatar
Andrey Nazarov

For release sets it should work. We are doing this the following way:

resource "helmfile_release_set" "common_stack" {
...
  environment_variables = {
    EXTERNAL_IP = data.google_compute_address.dev_ip_address.address
  }
...
}
Paul Catinean avatar
Paul Catinean

ohhhh

Paul Catinean avatar
Paul Catinean

using release 0.3 ?

Andrey Nazarov avatar
Andrey Nazarov

Not yet, we are facing some issues with the most recent release

Paul Catinean avatar
Paul Catinean

which one you have so far?

Andrey Nazarov avatar
Andrey Nazarov

terraform apply just started failing with no visible reason. I’m investigating this right now

Paul Catinean avatar
Paul Catinean

same here…

Paul Catinean avatar
Paul Catinean

so 0.2.0 until then?

Andrey Nazarov avatar
Andrey Nazarov
Terraform apply starts failing processing things that worked before · Issue #22 · mumoshu/terraform-provider-helmfile

I&#39;ve tried the latest version of this provider (taken from the master branch) and found out that some terraform apply failed without any reason. I also noticed that the output had changed. It b…

Paul Catinean avatar
Paul Catinean

for me it just ran endlessly

Paul Catinean avatar
Paul Catinean

creating on and on

Andrey Nazarov avatar
Andrey Nazarov

There is something similar

Andrey Nazarov avatar
Andrey Nazarov
terraform plan stuck in Refreshing State for more than 2 hours · Issue #21 · mumoshu/terraform-provider-helmfile

Steps to Reproduce : Set export TF_LOG=TRACE which is the most verbose logging. Run terraform plan …. In the log, I got the root cause of the issue and it was : dag/walk: vertex &quot;module.kube…

Paul Catinean avatar
Paul Catinean

interesting

Paul Catinean avatar
Paul Catinean

should i revert to 0.2.0 until further notice?

Andrey Nazarov avatar
Andrey Nazarov

You can give it a try. But it also has some troubles like the spoiled state problem I described here: https://github.com/mumoshu/terraform-provider-helmfile/issues/16

Spoiled state problem · Issue #16 · mumoshu/terraform-provider-helmfile

In #9 we decided that taking file approach like resource "helmfile_release_set" "mystack" { content = file("./helmfile.yaml") … } would fix all the troubles with the…

Paul Catinean avatar
Paul Catinean

so 0.1.0 is safest

Paul Catinean avatar
Paul Catinean

probably a reason why it’s labeled as latest

Andrey Nazarov avatar
Andrey Nazarov

It has almost the same issues with the state:)

Andrey Nazarov avatar
Andrey Nazarov
terraform state output as .Values · Issue #505 · roboll/helmfile

Thoughts on adding support for terraform as a temptable values data source? Opening this to think through design and integration. An opposite option would be to have a terraform helmfile provider w…

muhaha avatar
muhaha

huh, seems broken a lot …

muhaha avatar
muhaha

unfortunately, helmfile is great for helm/kustomize combo + patches…

Andrey Nazarov avatar
Andrey Nazarov

Regarding the spoiled state problem, you might not face this. We’ve been using this provider successfully for quite some time. But when you encounter the problem it will bother you a lot)).

Paul Catinean avatar
Paul Catinean

Did I understand correctly that there are two approaches? One is using the tf plugin to execute the release AND pass variable data, and one is just to output variable data from the tf infrastructure that you can later use with helmfile?

Paul Catinean avatar
Paul Catinean

<ref+tfstate://path/to/some.tfstate/RESOURCE_NAME> and helmfile gets it from tf automatically by using output variables?
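For anyone reading along: with a recent helmfile (which bundles vals), such a reference can go straight into a release's values. A minimal sketch, assuming a local state file and a terraform output named `external_ip` (both names are hypothetical):

```yaml
releases:
- name: myapp
  chart: stable/example
  values:
  # vals resolves this against the tfstate file at plan/apply time
  - externalIP: ref+tfstate://../terraform/terraform.tfstate/output.external_ip
```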

muhaha avatar
muhaha

I would like to see a tf variant2 provider instead :X

muhaha avatar
muhaha

Helmfile does not handle postdelete actions/hooks for deleting CRDs. So I have to use taskfile/variant2 for pre and post actions/hooks, but then I will lost a chance to use helmfile tf provider..

Andrey Nazarov avatar
Andrey Nazarov


Did I understand correctly that there are two approaches? One is using the tf plugin to execute the release AND pass variable data, and one is just to output variable data from the tf infrastructure that you can later use with helmfile?
I think so, haven’t tried the latter yet though:)

Paul Catinean avatar
Paul Catinean

interesting nonetheless, happy I got new info, thanks a lot for that!

Paul Catinean avatar
Paul Catinean

i will try

:--1:1
muhaha avatar
muhaha

What if tfstate is stored in AWS S3, Azure Storage ?

Andrey Nazarov avatar
Andrey Nazarov
variantdev/vals

Helm-like configuration values loader with support for various sources - variantdev/vals

Andrey Nazarov avatar
Andrey Nazarov


Remote backends like S3 are also supported…

muhaha avatar
muhaha

Oh noe, Azure is not there..

muhaha avatar
muhaha

Would be nice to support AWS SSO and Azure MSI auth/authz ( + Azure Storage Backend ) like terraform does.

muhaha avatar
muhaha
(feat): AWS SSO and Azure MSI · Issue #6 · fujiwara/tfstate-lookup

Would be nice to support AWS v2 CLI SSO ( terraform-providers/terraform-provider-aws#10851 ) and Azure CLI ( https://www.terraform.io/docs/providers/azurerm/guides/azure_cli.html ) in the future. T…

muhaha avatar
muhaha
(feat): Azure Storage backend · Issue #7 · fujiwara/tfstate-lookup

Would be nice to support Azure Storage ( as alternative to AWS S3 ) via #6 ( Same should apply for AWS S3. Thanks

mumoshu avatar
mumoshu


for me it just ran endlessly
This is super interesting. I thought I hadn't changed any part of the provider that could end up in something like that

mumoshu avatar
mumoshu
terraform plan stuck in Refreshing State for more than 2 hours · Issue #21 · mumoshu/terraform-provider-helmfile

Steps to Reproduce : Set export TF_LOG=TRACE which is the most verbose logging. Run terraform plan …. In the log, I got the root cause of the issue and it was : dag/walk: vertex "module.kube…

Andrey Nazarov avatar
Andrey Nazarov

My issue was also resolved

Paul Catinean avatar
Paul Catinean

Thanks a lot for the support and fixes @mumoshu, I can get the latest release and try again?

mumoshu avatar
mumoshu

yes 0.3.2 is available for testing!

1
Paul Catinean avatar
Paul Catinean

I might be doing something really wrong but when I do terraform plan with 0.3.2 I get Error: mkdir : no such file or directory

Paul Catinean avatar
Paul Catinean

It has just: content = file("../helmfile/helmfile.yaml")

mumoshu avatar
mumoshu

Sry my bad! Fixed in 0.3.3

Paul Catinean avatar
Paul Catinean

ohh np. I just noticed your timezone, yikes sorry for bothering you

mumoshu avatar
mumoshu

ah no worry! much better than leaving it until tomorrow morning. thx for testing

Paul Catinean avatar
Paul Catinean

Paul Catinean avatar
Paul Catinean

on it now

Paul Catinean avatar
Paul Catinean

seems like the path to the binary is hardcoded to /usr/local/bin/helmfile but I have it installed via snap, even though the helm binary works on cli. And I think previous versions worked

Paul Catinean avatar
Paul Catinean

I also tried helm_binary = "/snap/bin/helm" but no success

mumoshu avatar
mumoshu

wow really?

Paul Catinean avatar
Paul Catinean

i hope i’m not doing something wrong though

mumoshu avatar
mumoshu

actually, it's not hard-coded. The default value for helm_bin is helm. For bin, it's helmfile

Paul Catinean avatar
Paul Catinean

beginner with terraform and new to helmfile

mumoshu avatar
mumoshu

do you have terraform.tfstate in your work dir?

Paul Catinean avatar
Paul Catinean

yes

mumoshu avatar
mumoshu

if you don’t have any kind of secrets in it, could you share it so that we can see what’s happening in the state

mumoshu avatar
mumoshu

or perhaps your .tf file if seeing the state file doesnt help

Paul Catinean avatar
Paul Catinean

let me see

1
Paul Catinean avatar
Paul Catinean

ohh I had a file with a long hash generated there (probably from last tests). After removing it I just get an error that it doesn't find some file, but that's normal because the paths are relative to the helmfile, not terraform

Paul Catinean avatar
Paul Catinean

so apart from this it seems to be working, I just need to change the structure a bit I assume

mumoshu avatar
mumoshu


after removing it I just get error that it doesn’t find some
are you talking about a file named like helmfile-8003d363e667d0a8cbc898f0866f02756c24328f56045aad1745a59deca0a14e.yaml generated by the provider?

Paul Catinean avatar
Paul Catinean

and I just saw the working_directory directive, and now after changing that it works, wow you thought about everything

Paul Catinean avatar
Paul Catinean

yes sir, that's the one I was referring to

Paul Catinean avatar
Paul Catinean

and now I can see the diff, this is SO SO cool

party_parrot1
mumoshu avatar
mumoshu

glad to hear it worked for you!

Paul Catinean avatar
Paul Catinean

and I can get values from terraform like ip address, sql instance address, domains from uptime checks and pass them on

Paul Catinean avatar
Paul Catinean

this is exciting

:100:1
Paul Catinean avatar
Paul Catinean

thanks so much for your help and support

Paul Catinean avatar
Paul Catinean

i'll do my best to push it into production, test it and provide feedback

mumoshu avatar
mumoshu

awesome! please feel free to ask me anything towards that, here or in gh issues, whatever works for you

1
1
1
Paul Catinean avatar
Paul Catinean

@mumoshu hello again

Paul Catinean avatar
Paul Catinean

I went ahead and tested forward the latest release and it seems the diff generated from terraform is correct indeed but it ends with This is a bug in the provider, which should be reported in the provider’s own issue tracker.

Paul Catinean avatar
Paul Catinean

Everything seems fine and running the command manually directly with helmfile makes the deployment

Paul Catinean avatar
Paul Catinean

Error: Provider produced inconsistent final plan

Andrey Nazarov avatar
Andrey Nazarov

I’ve filed it already

1
Andrey Nazarov avatar
Andrey Nazarov
Error: rpc error: code = Unavailable desc = transport is closing · Issue #23 · mumoshu/terraform-provider-helmfile

I'm moving forward from the previous version to the most recent and the latest fails for all our environment with Error: rpc error: code = Unavailable desc = transport is closing during terrafo…

Andrey Nazarov avatar
Andrey Nazarov
Error: Provider produced inconsistent final plan · Issue #22 · mumoshu/terraform-provider-helmfile

I've tried the latest version of this provider (taken from the master branch) and found out that some terraform apply failed without any reason. I also noticed that the output had changed. It b…

Paul Catinean avatar
Paul Catinean

Thanks for the update Andrey

Paul Catinean avatar
Paul Catinean

I also tried the approach you suggested with tfstate and such

Paul Catinean avatar
Paul Catinean

Under values I added

Paul Catinean avatar
Paul Catinean
  • "ref+tfstate://../terraform/terraform.tfstate"
Paul Catinean avatar
Paul Catinean

no provider registered for scheme “tfstate”

Paul Catinean avatar
Paul Catinean

Paul Catinean avatar
Paul Catinean

oh wait, maybe I need to update my helmfile

Paul Catinean avatar
Paul Catinean

oh that was it

Paul Catinean avatar
Paul Catinean

now I get

Paul Catinean avatar
Paul Catinean

panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1172ece]

Paul Catinean avatar
Paul Catinean

PANIC!

Paul Catinean avatar
Paul Catinean

not sure what I’m doing wrong

Paul Catinean avatar
Paul Catinean

@Andrey Nazarov are you using this by any chance?

Paul Catinean avatar
Paul Catinean

Ah, I think I got it, I need to name variables separately, I included them like a .yaml, my bad

Andrey Nazarov avatar
Andrey Nazarov

Sorry, haven’t tried yet this ref+tfstate approach

Paul Catinean avatar
Paul Catinean

Would have gone with the first one but it seems it doesn't go through

Paul Catinean avatar
Paul Catinean

@Andrey Nazarov for me updating to the latest helmfile seems to have worked

Paul Catinean avatar
Paul Catinean

ah nevermind it came back once I changed something else

Paul Catinean avatar
Paul Catinean

Any luck with this Andrey or did you revert back to an older version?

mumoshu avatar
mumoshu

Thanks. Error: Provider produced inconsistent final plan is really interesting. I’m now wondering why it doesn’t reproduce on my machine while I’m developing it

mumoshu avatar
mumoshu

Maybe it’s only a warning rather than an error on my env?

2020/07/27 09:52:31 [WARN] Provider "helmfile" produced an unexpected new value for helmfile_release_set.mystack2, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .diff_output: was cty.StringVal("Comparing release=myapp2, chart=sp/podinfo\ndefault, myapp2-podinfo, Deployment (apps) has changed:\n...\n          [prometheus.io/port](http://prometheus\.io/port): \"9898\"\n      spec:\n        terminationGracePeriodSeconds: 30\n        containers:\n          - name: podinfo\n-           image: \"stefanprodan/podinfo:1234\"\n+           image: \"stefanprodan/podinfo:12345\"\n            imagePullPolicy: IfNotPresent\n            command:\n              - ./podinfo\n              - --port=9898\n              - --port-metrics=9797\n...\n\nin ./helmfile-3adb8a9ba929668a0ea4b06dbeddd4cba9f0283b926b55c1c31f5e7176c497e1.yaml: failed processing release myapp2: helm3 exited with status 2:\n  Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)\n  Error: plugin \"diff\" exited with error\n"), but now cty.StringVal("")
mumoshu avatar
mumoshu

Maybe my tf is outdated?

$ terraform version
Terraform v0.12.13
mumoshu avatar
mumoshu

Still unable to reproduce it even with tf v0.12.29

mumoshu avatar
mumoshu

@Andrey Nazarov Could you share the exact config and steps to reproduce the error you're seeing? I've tried this for an hour with various configs but have had no luck so far. It works fine on my machine

mumoshu avatar
mumoshu

also which resource type are you using, helmfile_release_set or helmfile_release, when you get the error?

Paul Catinean avatar
Paul Catinean

Hi @mumoshu, yes sure I will try to give as much data as I can

Paul Catinean avatar
Paul Catinean

• Terraform v0.12.26

• helmfile version v0.125.0

Paul Catinean avatar
Paul Catinean

resource "helmfile_release_set" "staging" {
  content = file("../helmfile/helmfile.yaml")

  working_directory = "../helmfile"

  environment_variables = {
    "TERRAFORM-TEST" = "This came from terraform!"
  }

  environment = "staging"
}

output "staging" {
  value = helmfile_release_set.staging.diff_output
}

Paul Catinean avatar
Paul Catinean
Paul Catinean avatar
Paul Catinean

That’s the full output of the terraform apply command

Paul Catinean avatar
Paul Catinean

Let me know if I can help with any extra details

mumoshu avatar
mumoshu

Thank you so much! I'll try reproducing the issue once again with it.

In the meantime, would you mind trying 0.3.5 which I’ve just released, to see if it gives you any difference?

For me, it did fix the warning I was seeing(https://sweetops.slack.com/archives/CB6GHNLG0/p1595811243157400?thread_ts=1595351319.054400&cid=CB6GHNLG0) which might relate to yours

Maybe it’s only a warning rather than an error on my env?

2020/07/27 09:52:31 [WARN] Provider "helmfile" produced an unexpected new value for helmfile_release_set.mystack2, but we are tolerating it because it is using the legacy plugin SDK.
    The following problems may be the cause of any confusing errors from downstream operations:
      - .diff_output: was cty.StringVal("Comparing release=myapp2, chart=sp/podinfo\ndefault, myapp2-podinfo, Deployment (apps) has changed:\n...\n          [prometheus.io/port](http://prometheus\.io/port): \"9898\"\n      spec:\n        terminationGracePeriodSeconds: 30\n        containers:\n          - name: podinfo\n-           image: \"stefanprodan/podinfo:1234\"\n+           image: \"stefanprodan/podinfo:12345\"\n            imagePullPolicy: IfNotPresent\n            command:\n              - ./podinfo\n              - --port=9898\n              - --port-metrics=9797\n...\n\nin ./helmfile-3adb8a9ba929668a0ea4b06dbeddd4cba9f0283b926b55c1c31f5e7176c497e1.yaml: failed processing release myapp2: helm3 exited with status 2:\n  Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)\n  Error: plugin \"diff\" exited with error\n"), but now cty.StringVal("")
mumoshu avatar
mumoshu

Just wondering but how are you upgrading terraform-provider-helmfile?

Paul Catinean avatar
Paul Catinean

That’s also a good question, since this is the first time I’ve been doing this and first time using terraform

Paul Catinean avatar
Paul Catinean

This is 0.3.6 provider I think that I was using

Paul Catinean avatar
Paul Catinean

I download the latest release available from github, unpack it

Paul Catinean avatar
Paul Catinean

I installed go with snap

Paul Catinean avatar
Paul Catinean

go version go1.14.6 linux/amd64

Paul Catinean avatar
Paul Catinean

inside the unarchived release I do go build

Paul Catinean avatar
Paul Catinean

and the resulting executable I place directly in to .tgplugins directory

Paul Catinean avatar
Paul Catinean

and replace the other one, then run terraform init

Paul Catinean avatar
Paul Catinean

I had no instructions on how to do it from the page so I assumed this would be the way

Paul Catinean avatar
Paul Catinean
Release v0.3.5: Fix warning on diff_output · mumoshu/terraform-provider-helmfile

Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.

Paul Catinean avatar
Paul Catinean

and the process described above

Paul Catinean avatar
Paul Catinean

@mumoshu

mumoshu avatar
mumoshu

Thanks a lot! The process looks fine. Probably there's some unseen bug I'm not aware of. Will keep investigating..

Paul Catinean avatar
Paul Catinean

If I can help with anything else like a screencast, helmfile description, even a call if needed I can do just let me know

1
mumoshu avatar
mumoshu

Would you mind sharing me terraform.tfstate file that should have been created by terraform?

Paul Catinean avatar
Paul Catinean

That was not included in the paste I gave with terraform apply?

Paul Catinean avatar
Paul Catinean

I have the .tfstate not sure how to get the one that should have been created by terraform? if not in the paste

Paul Catinean avatar
Paul Catinean

Ah, also I omitted my helm version if that has anything to do with it: version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}

mumoshu avatar
mumoshu

I thought terraform apply creates it on first successful run

Paul Catinean avatar
Paul Catinean

it never went through I guess, so every time I do terraform apply I get the same diff

Paul Catinean avatar
Paul Catinean

the tfstate diff and helmfile diff inside it

mumoshu avatar
mumoshu

Got it! Thanks

Paul Catinean avatar
Paul Catinean

Maybe I should try a simpler helmfile with less data inside? see if it’s from that or idk

mumoshu avatar
mumoshu

Maybe. JFYI, this is what I’m using..

mumoshu avatar
mumoshu
provider "helmfile" {}

resource "helmfile_release_set" "mystack" {
	path = "./helmfile.yaml"

	helm_binary = "helm3"

	working_directory = path.module

	environment = "default"

	environment_variables = {
		FOO = "foo"
	}

	values = [
		<<EOF
{"name": "myapp"}
EOF
	]

	selector = {
	  labelkey1 = "value1"
	}
}

resource "helmfile_release_set" "mystack2" {
	content = <<EOF

releases:
- name: myapp2
  chart: sp/podinfo
  values:
  - image:
      tag: "12345"
  labels:
    labelkey1: value1
- name: myapp3
  chart: sp/podinfo
  values:
  - image:
     tag: "2345"
EOF

	helm_binary = "helm3"

//	working_directory = path.module
	working_directory = "mystack2"

	environment = "default"

	environment_variables = {
		FOO = "foo"
	}

	values = [
		<<EOF
{"name": "myapp"}
EOF
	]

	selector = {
		labelkey1 = "value1"
	}
}

output "mystack_diff" {
  value = helmfile_release_set.mystack.diff_output
}

output "mystack_apply" {
  value = helmfile_release_set.mystack.apply_output
}

output "mystack2_diff" {
  value = helmfile_release_set.mystack2.diff_output
}

output "mystack2_apply" {
  value = helmfile_release_set.mystack2.apply_output
}

resource "helmfile_release" "myapp" {
	name = "myapp"
	namespace = "default"
	chart = "sp/podinfo"
	helm_binary = "helm3"

//	working_directory = path.module
//	working_directory = "myapp"
	values = [
		<<EOF
{ "image": {"tag": "3.14" } }
EOF
	]
}


output "myapp_diff" {
	value = helmfile_release.myapp.diff_output
}

output "myapp_apply" {
	value = helmfile_release.myapp.apply_output
}
mumoshu avatar
mumoshu

helmfile.yaml

releases:
- name: {{ .Values.name }}-{{ requiredEnv "FOO" }}
  chart: sp/podinfo
  values:
  - image:
      tag: foobar2abcda
  labels:
    labelkey1: value1
Paul Catinean avatar
Paul Catinean

is this part of the automated test cases?

mumoshu avatar
mumoshu

Not yet. Just that I’m still not sure how this can be (easily) automated. I’m running terraform init, plan, apply on this every time I change code(and manually tag/publish a new release)

Paul Catinean avatar
Paul Catinean

could this have something to do with me trying multiple times to do a release through this that failed with previous versions?

mumoshu avatar
mumoshu

Possibly. Maybe you have helm releases installed on your cluster even though terraform+tf-provider-helmfile considers it a fresh install on every terraform apply run?

Paul Catinean avatar
Paul Catinean

i just removed the file from terraform and it killed the instance

Paul Catinean avatar
Paul Catinean

So that’s working

Paul Catinean avatar
Paul Catinean

Thank god the env variable works and killed only staging

mumoshu avatar
mumoshu

Are you saying terraform apply after removing some helmfile_release from your tf file resulted in deletion? :slightly_smiling_face: you may already know but you’d better run terraform plan beforehand

Paul Catinean avatar
Paul Catinean

ah yes, that was the plan to remove it from tf state and also the cluster so it’s “reset”

:--1:1
Paul Catinean avatar
Paul Catinean

and now doing an apply from scratch

Paul Catinean avatar
Paul Catinean

same issue

1
mumoshu avatar
mumoshu

ah, could you share me your ../helmfile/helmfile.yaml?

Paul Catinean avatar
Paul Catinean

Sure, i’ll just remove some sensitive data here and there

Paul Catinean avatar
Paul Catinean

but it’s multi-tiered with some subfiles

mumoshu avatar
mumoshu

it's not a must if it takes too much effort! my current theory is that this issue has something to do with a complex helmfile.yaml and/or first install, so I just wanted to have more samples if I can

Paul Catinean avatar
Paul Catinean
Paul Catinean avatar
Paul Catinean

Now that you mention, cert-manager has the installCrds hook which can add a lot of data and I did have issues in the past regarding the gitlab-runner

Paul Catinean avatar
Paul Catinean

But at the same time I went to the respective file and did helmfile -e staging apply and it went through

Paul Catinean avatar
Paul Catinean

I’ll check again

mumoshu avatar
mumoshu

much appreciated! i need some sleep now but i’ll definitely report back tomorrow

Paul Catinean avatar
Paul Catinean

thanks for your time! :--1:

Paul Catinean avatar
Paul Catinean

I’ll also post feedback if I get anything new

Paul Catinean avatar
Paul Catinean

Let me know if you need any more info

Paul Catinean avatar
Paul Catinean

It seems that if I change the main helmfile and remove entire conditions it works

Paul Catinean avatar
Paul Catinean

As soon as I change a value in the values.tmpl file, and the diff shows a diff of values instead of a diff of the helmfile

Paul Catinean avatar
Paul Catinean

that’s when it breaks

Paul Catinean avatar
Paul Catinean

so the thing to try is to have a helmfile which has an external gotmpl values file

Andrey Nazarov avatar
Andrey Nazarov

Quite some talk:) Don't know where to start. Yeah, I've got a huge helmfile (about 2k lines of code) injected into a fairly small helmfile

Paul Catinean avatar
Paul Catinean

Indeed

Paul Catinean avatar
Paul Catinean

This is the only distinction I could find. If I remove an entire release it works and it shows just the few lines of the release

Paul Catinean avatar
Paul Catinean

If I change anything in the values.yaml.gotmpl and I change 1 char then I get that

Paul Catinean avatar
Paul Catinean

I tried in multiple releases

Andrey Nazarov avatar
Andrey Nazarov

I have one suspicion. Could it be these dots in diffs?

 ...
 bit, office365-editor, ConfigMap (v1) has changed:
 ...
Andrey Nazarov avatar
Andrey Nazarov

These are taken from diff_output

Paul Catinean avatar
Paul Catinean

could very well be, i also have changes in the configmap as well

Paul Catinean avatar
Paul Catinean

How can we test?

Andrey Nazarov avatar
Andrey Nazarov

Or this is the change I’ve got in the diff for the failed run

 - path                  = "helmfile.yaml" -> null
Andrey Nazarov avatar
Andrey Nazarov

But actually these look like just a red herring

Paul Catinean avatar
Paul Catinean

maybe there’s a standard go function that parses this where we can feed the diff into?

Andrey Nazarov avatar
Andrey Nazarov

Found this string in the output. There is a mess in logs after this error, I’ll try to grab something sensible

but now cty.StringVal("Adding repo stable
Andrey Nazarov avatar
Andrey Nazarov

I’ve tried to reconstruct the message from the mess

When expanding the plan for helmfile_release_set.mystack to include new values
learned so far during apply, provider "[registry.terraform.io/-/helmfile](http://registry\.terraform\.io/\-/helmfile)"
produced an invalid new value for .diff_output: was cty.StringVal("Adding repo
...
a lot of text
...
but now cty.StringVal("Adding repo stable
...
a lot of text
...
Andrey Nazarov avatar
Andrey Nazarov

So probably the aforementioned dots matter))))

Andrey Nazarov avatar
Andrey Nazarov

I’ll try to decipher more

Paul Catinean avatar
Paul Catinean

think I have to start learning go so I can add some breakpoints and do some tests myself

Andrey Nazarov avatar
Andrey Nazarov

I would like to debug too, but don’t have time right now at all

Andrey Nazarov avatar
Andrey Nazarov

I've done some work to decipher the output, and to me the value in was cty.StringVal() is the same as the value in but now cty.StringVal()

Paul Catinean avatar
Paul Catinean

so it’s essentially the same?

Paul Catinean avatar
Paul Catinean
"[registry.terraform.io/-/helmfile](http://registry\.terraform\.io/\-/helmfile)" produced an invalid new value for
.diff_output: was cty.StringVal("Adding repo stable
Paul Catinean avatar
Paul Catinean

But that would happen in any diff

Paul Catinean avatar
Paul Catinean

Doing changes in the main helmfile has no issue whatsoever

Paul Catinean avatar
Paul Catinean

I can also see

Paul Catinean avatar
Paul Catinean
  • error = (known after apply)
mumoshu avatar
mumoshu

Thanks! I think it’s fixed in v0.3.6 https://github.com/mumoshu/terraform-provider-helmfile/releases/tag/v0.3.6

// I’ll add some explanation on why it (may) fix the issue later but I don’t have time right now!

Release v0.3.6: fix: Only set diff_output on plan · mumoshu/terraform-provider-helmfile

Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.

Paul Catinean avatar
Paul Catinean

Testing right away

Paul Catinean avatar
Paul Catinean

Seems to be working indeed, but the problem is I don't see the diff for the change of the helmfile. Just the main one

Paul Catinean avatar
Paul Catinean

While this works I won’t be able to sign off on the final changes though

Andrey Nazarov avatar
Andrey Nazarov

Works for me too. @mumoshu huge thanks. And yes the diff_output is missing. Is this expected behaviour?

Paul Catinean avatar
Paul Catinean

Was wondering the same

mumoshu avatar
mumoshu

I’ve managed to fix it in v0.3.7. Sorry for back and forth but could you try it once again?

Paul Catinean avatar
Paul Catinean

Will do right after coffee! :D

1
Paul Catinean avatar
Paul Catinean

Same error i’m afraid

Paul Catinean avatar
Paul Catinean

I think reproducing the error is key here to fix the issue possibly adding it to automation later. Did you try having an external gotmpl values file from the main helmfile.yaml ?

mumoshu avatar
mumoshu

Sad

mumoshu avatar
mumoshu

So I did reproduce the issue and managed to fix it on my env

mumoshu avatar
mumoshu

The important point was that terraform calls plan twice and the two plan results must be equivalent

Paul Catinean avatar
Paul Catinean

twice, interesting

Paul Catinean avatar
Paul Catinean

actually you are right it does call it twice

Paul Catinean avatar
Paul Catinean

I see twice the same output

mumoshu avatar
mumoshu

Yeah. The first plan can optionally be done beforehand by calling terraform plan and storing the planfile to be reused later by terraform apply

mumoshu avatar
mumoshu

if you didn't store/reuse the plan on apply, terraform apply seems to run plan twice, once for initial planning and a second time for verification
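The two flows look roughly like this (a sketch of standard terraform CLI usage):

```
# single-phase: apply re-plans internally, so plan effectively runs twice
terraform apply

# two-phase: plan once, save the planfile, then apply exactly that plan
terraform plan -out=tfplan
terraform apply tfplan
```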

Paul Catinean avatar
Paul Catinean

just to be 100% sure that nothing changed by the time you called plan to apply?

mumoshu avatar
mumoshu

yeah

Paul Catinean avatar
Paul Catinean

I guess that makes sense, I do plan once to check, and apply right after since it’s a small infrastructure that only I am managing

mumoshu avatar
mumoshu

to me the two plan results werent really equivalent

mumoshu avatar
mumoshu

helmfile diff runs a number of helm repo add and helm repo update in the very beginning, and the result of helm repo update isn't reliable

Paul Catinean avatar
Paul Catinean

ohhhhhhhhhhhh

Paul Catinean avatar
Paul Catinean

I was just looking at that now

Paul Catinean avatar
Paul Catinean

maybe an option on helmfile to output directly just the diff? that way it’s also backwards compatible idk

mumoshu avatar
mumoshu

so i've tried to "erase" the messages from helm repo up in the last few commits, which solved the issue for me
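The "erasing" idea can be sketched like this (illustrative Python, not the provider's actual Go code; the exact prefixes `helm repo add`/`update` print are assumptions here):

```python
def stable_diff(diff_output: str) -> str:
    """Drop the volatile repo-sync chatter from a helmfile diff, so two
    runs over the same release set compare equal even when the repo
    banner lines differ between runs."""
    # Hypothetical examples of volatile lines emitted during repo sync
    volatile_prefixes = ("Adding repo", "Hang tight", "Update Complete")
    kept = [
        line
        for line in diff_output.splitlines()
        if not line.startswith(volatile_prefixes)
    ]
    return "\n".join(kept)


# Two runs whose only difference is the repo-sync banner
first = "Adding repo stable\nUpdate Complete.\ndefault, myapp, Deployment has changed"
second = "Update Complete. Happy Helming!\ndefault, myapp, Deployment has changed"
```

After stripping, both runs reduce to the same resource-level diff, which is what terraform needs for the two plan results to be equivalent.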

mumoshu avatar
mumoshu

yeah that might be an option!

Paul Catinean avatar
Paul Catinean

Strangely enough when I don’t change the vals yaml it always works

mumoshu avatar
mumoshu

that's odd. but if it's the same issue, you should see the error after a few more trials

Paul Catinean avatar
Paul Catinean

what do you mean?

mumoshu avatar
mumoshu

maybe i’ve misunderstood what you’ve said in
Strangely enough when I don’t change the vals yaml it always works

mumoshu avatar
mumoshu

so you just said that terraform apply works when you have no change on your helmfile.yaml, right?

mumoshu avatar
mumoshu

does the issue disappear if you set concurrency = 1 on your helmfile_release_set resource in your tf file?
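In the tf file that would look something like this (a sketch based on Paul's earlier config; `concurrency` is the attribute mumoshu refers to):

```hcl
resource "helmfile_release_set" "staging" {
  content           = file("../helmfile/helmfile.yaml")
  working_directory = "../helmfile"
  environment       = "staging"

  # serialize helmfile's helm invocations while debugging ordering issues
  concurrency = 1
}
```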

Paul Catinean avatar
Paul Catinean

let me check

Paul Catinean avatar
Paul Catinean

nope, same error

mumoshu avatar
mumoshu

Paul Catinean avatar
Paul Catinean

not sure what else to try

mumoshu avatar
mumoshu

Could you share me the full log obtained by runnign terraform apply with TF_LOG=TRACE TF_LOG_PATH=tf.log terraform apply ?

Paul Catinean avatar
Paul Catinean

sure

mumoshu avatar
mumoshu

Thank you so much for your cooperation. It really helps!

Paul Catinean avatar
Paul Catinean

more than happy to do so, it’s in my own interest as well, will help me a great deal in my future deployments

Paul Catinean avatar
Paul Catinean

hmm I see there is also private info there like public keys and such

Paul Catinean avatar
Paul Catinean

I can try to remove them just not sure if I get it all

Paul Catinean avatar
Paul Catinean

you need it all or just the helmfile part?

Andrey Nazarov avatar
Andrey Nazarov

I’ll join soon too. Not sure what to check though. I’ll grab the latest version

Paul Catinean avatar
Paul Catinean

I was a bit worried that either gitlab runner or cert-manager with installCrds set to true might be the problem but seems not since it’s not included in the diff

Paul Catinean avatar
Paul Catinean

Hey, I saw the discussion on the github issue, how has the color diff been resolved?

mumoshu avatar
mumoshu

Thanks for sharing the TF_LOG!!

Apparently helm-diff result is unstable… From your log, I can see that the first helmfile-diff run shows changes on configmap and then deployment, where the second run shows changes on deployment and then configmap…

Paul Catinean avatar
Paul Catinean

ohhh

Paul Catinean avatar
Paul Catinean

I am using the latest helm diff plugin (which you contributed to in the upstream project)

Paul Catinean avatar
Paul Catinean

I was even going to open a PR to propose updating the diff version on helmfile to a more recent one

Paul Catinean avatar
Paul Catinean

As for me having the installCRDs parameters set to true on cert-manager resulted in huge diffs of essentially the exact same data just organized differently

Paul Catinean avatar
Paul Catinean

what version of helm diff are you using yourself sir?

mumoshu avatar
mumoshu

yeah. i thought helm-diff used a go map(hashmap) to collect resources to be diffed. afaik map entry order isn’t guaranteed/stable

Paul Catinean avatar
Paul Catinean

ouch hmmmm

mumoshu avatar
mumoshu

I’m using the latest version of helm-diff

Paul Catinean avatar
Paul Catinean

number?

Paul Catinean avatar
Paul Catinean

I have 3.1.2

mumoshu avatar
mumoshu

3.1.2

Paul Catinean avatar
Paul Catinean

hmmmm

Paul Catinean avatar
Paul Catinean

then it’s related to my chart and how it produces the output or how helm diff parses that output

mumoshu avatar
mumoshu

yeah, probably… and it may even be that I’m just lucky that helm-diff somehow prints the changes in a stable order

mumoshu avatar
mumoshu

probably i’d get the same error if I add more releases and k8s resources to be diffed

Paul Catinean avatar
Paul Catinean

Paul Catinean avatar
Paul Catinean

It’s a plausible scenario

Paul Catinean avatar
Paul Catinean

not sure how to proceed here

mumoshu avatar
mumoshu

maybe i can “enhance” the provider to skip running helmfile-diff on the second plan

Paul Catinean avatar
Paul Catinean

as it seems, if this is the case, there is so much volatility between repos added and diff plugin versions that it can be wildly inconsistent

mumoshu avatar
mumoshu

on the surface - yes

Andrey Nazarov avatar
Andrey Nazarov

what about switching to kubectl diff as it was proposed in one helmfile issue? Will this help?

Andrey Nazarov avatar
Andrey Nazarov

I’ve just faced the issue with the fixed version (https://sweetops.slack.com/archives/CB6GHNLG0/p1595423095095200?thread_ts=1595351319.054400&cid=CB6GHNLG0).

To me, the most stable version is the one from after it became two times slower. I’m sorry for finding this:))))))))

My issue was also resolved

Andrey Nazarov avatar
Andrey Nazarov

But yes, it’s slow

mumoshu avatar
mumoshu

Thanks! So I’ve cut v0.3.8 https://github.com/mumoshu/terraform-provider-helmfile/releases/tag/v0.3.8

Starting with this version, the provider should run helmfile-diff only once during apply, which should fix the issue…

Release v0.3.8: Run helmfile-diff only once instead of twice while in `terraform appl… · mumoshu/terraform-provider-helmfile

Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.

Paul Catinean avatar
Paul Catinean

mumoshu avatar
mumoshu

Also, since it reuses the output from the previous helmfile-diff run against the same resource attrs + helmfile build result, terraform apply results in one less helmfile-diff run, which should make it a bit faster to run

mumoshu avatar
mumoshu

to me it seems to be working (again)… the nature of this issue makes it very hard for me to reliably test. your confirmation is much appreciated as usual

Andrey Nazarov avatar
Andrey Nazarov

I’ll double check. There were quite some commits that day

1
mumoshu avatar
mumoshu

It now creates snapshots of helmfile-diff outputs under .terraform/helmfile/diff-*.txt. Ideally the provider should clean it up after a successful apply. I’ll add that later if we confirm it’s actually working

Paul Catinean avatar
Paul Catinean

testing now sorry work came in

Paul Catinean avatar
Paul Catinean

HAPPY DAYS IT WORKED! party_parrot party_parrot party_parrot

Paul Catinean avatar
Paul Catinean

And on top of that it was indeed faster

Paul Catinean avatar
Paul Catinean

Thanks for sticking with me on this one @mumoshu

Paul Catinean avatar
Paul Catinean

I’ll do some more testing on the staging environments and after some extensive testing will try to move them into production

Paul Catinean avatar
Paul Catinean

woop woop it also sends environment variables properly so this is great

Paul Catinean avatar
Paul Catinean

It did give a strange diff with some strange distribution of removed/added (a lot of red) but it did go through

Andrey Nazarov avatar
Andrey Nazarov

Works for me too. I’ll keep an eye on it and test more cases

Andrey Nazarov avatar
Andrey Nazarov

@mumoshu you rock)

Andrey Nazarov avatar
Andrey Nazarov

@mumoshu Do you have an article about your time management?))))))

1
Paul Catinean avatar
Paul Catinean

true that

Paul Catinean avatar
Paul Catinean

Just as info for a future version, not urgent since this seems to work indeed: the terraform diff shows additions/removals on top of the helmfile diff, which can be confusing

Paul Catinean avatar
Paul Catinean
Paul Catinean avatar
Paul Catinean

Should I create an issue with this?

mumoshu avatar
mumoshu

awesome! thanks a lot for your patience and support

mumoshu avatar
mumoshu

re: confusing diff output, I’ve come to think that it’s unavoidable due to the nature of terraform plan

mumoshu avatar
mumoshu

it shows the diff for the latest helmfile diff output against the previous helmfile diff stored in the tfstate

mumoshu avatar
mumoshu

maybe we’d better use helmfile template instead so that terraform can use that to show changes in manifests?

mumoshu avatar
mumoshu

diff_output can be just removed, or renamed to changes so that it is clear that terraform is showing “diff of changes”, rather than changes in the manifests

Paul Catinean avatar
Paul Catinean

after asking the question myself it did make sense that it shows this indeed

Paul Catinean avatar
Paul Catinean

that’s not a bad idea, I didn’t even use helmfile template until now but it does make sense indeed

Andrey Nazarov avatar
Andrey Nazarov

I’m facing a strange issue where tf apply for the env ends up with the error map has no entry for key "securityService". Like there is no such key in values. But actually it’s there, and all prior runs were successful. Cannot understand what went wrong. Probably I’ll file an issue when I gather more information

Andrey Nazarov avatar
Andrey Nazarov

It seems that it doesn’t see the values file somehow

Andrey Nazarov avatar
Andrey Nazarov

Found it. A values file was malformed: a colon was missing after one key, not related to the key from the error at all.

mumoshu avatar
mumoshu

Re: diff_output, I was able to make helmfile diff output as the whole diff shown in plan, not “diff of diffs”. I’ll cut a release later

1
mumoshu avatar
mumoshu

Re: the “map has no entry” error, glad to see you fixed it. So, to be extra sure: we don’t need to fix helmfile/the provider for that?

Andrey Nazarov avatar
Andrey Nazarov

no, everything is working great so far:)

mumoshu avatar
mumoshu

Awesome! Thanks for all the feedback.

Paul Catinean avatar
Paul Catinean

checking out 0.3.9 now

1
party_parrot1
mumoshu avatar
mumoshu

FYI: v0.3.9 has been released with the diff fix

Paul Catinean avatar
Paul Catinean

I’m like a kid on christmas with each delivery

1
1
Paul Catinean avatar
Paul Catinean

Removed the initial release and starting a new one

Paul Catinean avatar
Paul Catinean

Quick question while I run this. How would one be able to transfer data structures such as hashes/maps/dicts or lists from terraform to helmfile ? some decode on the helmfile part in the gotemplate?

mumoshu avatar
mumoshu

I’d suggest using string interpolation w/ jsonencode on the terraform side

mumoshu avatar
mumoshu

Like:

content = <<EOH
values:
- whatever: ${jsonencode(local.whatever)}
EOH
Paul Catinean avatar
Paul Catinean

and then on helmfile part jsondecode I assume we have a gotemplate function

mumoshu avatar
mumoshu

to transforms maps/lists passed from tf to helmfile?

Paul Catinean avatar
Paul Catinean

yeah

Paul Catinean avatar
Paul Catinean

I assume you can only pass values as env variables? with an external helmfile?

mumoshu avatar
mumoshu

How about this?

content = <<EOH
values:
- whatever: ${jsonencode(local.whatever)}
---
{{ range $_, $item := .Values.whatever }}
...
{{ end }}
EOH
mumoshu avatar
mumoshu

or

values = [
		<<EOF
{"whatever": ${jsonencode(local.whatever)}}
EOF
	]

content = <<EOH
{{ range $_, $item := .Values.whatever }}
...
{{ end }}
EOH
Paul Catinean avatar
Paul Catinean

have to try both

Paul Catinean avatar
Paul Catinean

if they work with external helmfiles then great

1
mumoshu avatar
mumoshu

the latter would work with external ones
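For reference, a hypothetical end-to-end sketch of the latter approach, assuming the provider’s helmfile_release_set resource with path and values attributes (the resource shape and local names are illustrative, not confirmed in this thread):

```hcl
locals {
  whatever = ["a", "b", "c"]
}

resource "helmfile_release_set" "this" {
  # external helmfile; no inline content needed
  path = "./helmfile.yaml"

  # JSON is valid YAML, so the rendered string parses as helmfile state values
  values = [
    <<-EOF
    {"whatever": ${jsonencode(local.whatever)}}
    EOF
  ]
}
```

Inside the external helmfile.yaml, the list would then be reachable in go templates as `.Values.whatever`, with no explicit jsondecode needed.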

Paul Catinean avatar
Paul Catinean

so I think 0.3.9 seems to work, I see the initial output with helmfile repo adding etc, the main helmfile and then the diff in the actual values.yaml within the terraform diff

Paul Catinean avatar
Paul Catinean

Doesn’t show colors or just the original diff, but it seems to accurately show the difference

mumoshu avatar
mumoshu

ah, so I disabled coloring recently because I thought it would conflict with terraform’s own coloring

Paul Catinean avatar
Paul Catinean

I’ll have to sit and think a bit what’s going on, today is one of the slowest days ever, need more coffee

1
mumoshu avatar
mumoshu

But I’m inclined to (re)enable coloring by default, and add an option to disable it instead

https://github.com/mumoshu/terraform-provider-helmfile/issues/24

Diff is partially hidden for larger outputs · Issue #24 · mumoshu/terraform-provider-helmfile

terraform-provider-helmfile version: 3df004fb52ba7c30f65d80eb0c3b537e0c4df7eb helmfile version: helmfile-0.125.0.high_sierra.bottle.tar.gz Overview We are using helmfile for managing Jenkins helm d…

Paul Catinean avatar
Paul Catinean

So when changing a value in the values.yaml file

Paul Catinean avatar
Paul Catinean
Paul Catinean avatar
Paul Catinean

So it somehow shows the diff from before this operation, the version now, and then the diff from now

Paul Catinean avatar
Paul Catinean

The last part is totally accurate just not sure why I see the diff from before this deploy, or maybe I’m just confused

mumoshu avatar
mumoshu

maybe you’re saying that apply_output is confusing?

Paul Catinean avatar
Paul Catinean

Actually apply output that’s at the very end (all in green but np) that’s perfectly accurate

mumoshu avatar
mumoshu

apply_output is showing that the previous apply_output (which contains the previously applied “diff”) is to be recomputed after apply

Paul Catinean avatar
Paul Catinean

So I made that deployment now all I’m changing is strictly LIMIT_REQUEST: “” and the configmap will be changed along with it

Paul Catinean avatar
Paul Catinean

just changing 0 to 1 and backward

Paul Catinean avatar
Paul Catinean

And i see the diff with 0 to 1 and now the 1 to 0

Paul Catinean avatar
Paul Catinean

In theory shouldn’t it show just the latest change? - - - the current value + the changed value

mumoshu avatar
mumoshu

ahhh….

mumoshu avatar
mumoshu

I believe that’s due to a stale cache. I used the sha256 hash of the helmfile build output in the names of the cache files, which are used to run helmfile diff only once to avoid the error we had before:

"registry.terraform.io/-/helmfile" produced an invalid new value for
.diff_output: was cty.StringVal("Adding repo stable
mumoshu avatar
mumoshu

any changes to values.yaml won’t change the hash value so it would definitely happen

mumoshu avatar
mumoshu

(working on the fix)

Paul Catinean avatar
Paul Catinean

but this is more than great and functional, thank you so much

Paul Catinean avatar
Paul Catinean

This is just feedback if you want to push it to another level it’s more than helpful

1
mumoshu avatar
mumoshu


when changing a value in the values.yaml file
should be fixed in v0.3.10

Paul Catinean avatar
Paul Catinean

🥰🥰

Paul Catinean avatar
Paul Catinean

Will test soon!

1
Paul Catinean avatar
Paul Catinean

Not exactly sure why I keep getting “Error: the lock file (Chart.lock) is out of sync with the dependencies file (Chart.yaml). Please update the dependencies”

Paul Catinean avatar
Paul Catinean

Could be an issue in my chart but I didn’t change it between tests

Paul Catinean avatar
Paul Catinean

Running helmfile alone seems to be working though

Paul Catinean avatar
Paul Catinean

So maybe something related to the provider

mumoshu avatar
mumoshu

Thanks! Interesting - could you try running helmfile diff, template, build and apply alone? The provider just delegates everything to those 4 helmfile commands. So as long as those commands work, the provider should work…

Paul Catinean avatar
Paul Catinean

it seems helmfile template fails indeed i have to find out why

Paul Catinean avatar
Paul Catinean

it’s most likely because the chart is being downloaded and needs to have the dependencies updated

mumoshu avatar
mumoshu

yeah, that may be relevant. helmfile template updates repos, runs helm dep build on local charts only, runs helm fetch on helm 2 only, and finally runs helm template.

Paul Catinean avatar
Paul Catinean

this is a remote chart so I’m not sure why it refers to a /tmp/ downloaded version of the chart

mumoshu avatar
mumoshu

just to be extra sure - you are using helm3, right?

mumoshu avatar
mumoshu

perhaps helmfile v0.125.1 should work differently https://github.com/roboll/helmfile/issues/1377

but not sure why it doesn’t fail on other helmfile commands for you

[BUG] helmfile template with selector fetches all charts · Issue #1377 · roboll/helmfile

When running helmfile –environment –selector chart=<MY_CHART> template, all of the charts are fetched beforehand and not just the charts in the selector. Running the same command but with l…

Paul Catinean avatar
Paul Catinean

helm 3 yes sir

mumoshu avatar
mumoshu

is it a public chart, or private one?

Paul Catinean avatar
Paul Catinean

helmfile version v0.125.0

Paul Catinean avatar
Paul Catinean

public

mumoshu avatar
mumoshu

could you share me the chart location? i’ll try reproducing it myself

Paul Catinean avatar
Paul Catinean

I did, it’s public yet not publicized so to speak so pm-ed

1
Paul Catinean avatar
Paul Catinean

Paul Catinean avatar
Paul Catinean

it has a few dependencies which granted I need to update if I use it locally but remotely i did not have to, not sure

Paul Catinean avatar
Paul Catinean

Updated to the latest helmfile and still the same thing

mumoshu avatar
mumoshu

Thanks. I reproduced it! So the root cause seems to be due to that the chart has outdated dependencies stored in Chart.lock

mumoshu avatar
mumoshu

which needs to be updated by running helm dep up before publishing the chart, i think

mumoshu avatar
mumoshu

but this is a chart issue, so Helmfile should anyway have a way to gracefully handle it

mumoshu avatar
mumoshu

Maybe helmfile can just warn about the outdated chart deps and proceed

mumoshu avatar
mumoshu

instead of failing and forcing you to block until the chart is fixed in upstream

mumoshu avatar
mumoshu

rethinking it for a while, i’ve fixed it as a part of https://github.com/roboll/helmfile/pull/1400

Fix various issues in chart preparation by mumoshu · Pull Request #1400 · roboll/helmfile

In #1172, we accidentally changed the meaning of prepare hook that is intended to be called BEFORE the pathExists check. It broke the scenario where one used a prepare hook for generating the local…

mumoshu avatar
mumoshu

so it doesn’t run helm dep build on fetched charts anymore, which turned out to be unnecessary and can be safely skipped

Paul Catinean avatar
Paul Catinean

wow that sounds great, also thanks for the help on the chart that’s outside of helmfile and the provider. I’m getting free consultation now

Paul Catinean avatar
Paul Catinean

I need to document how to handle those dependencies: maybe remove the .lock, or update the dependencies as you said, or pin an exact version

Andrey Nazarov avatar
Andrey Nazarov

It’s hard to follow this thread. But it seems that, even with the most recent version, on subsequent runs where nothing is changed, tf apply prints out the apply result from the previous run rather than one related to the current run. Will investigate further

mumoshu avatar
mumoshu

@Andrey Nazarov AFAIK terraform apply outputs are computed from the latest tfstate. The helmfile provider does not “reset” the apply_output when there is no run, so the next terraform apply would still show the previous apply_output. Is this what you’ve observed?

mumoshu avatar
mumoshu

Maybe I can try to make the provider “reset” apply_output on read, so that it’s still shown in terraform output on successful apply, but will reset to an empty string on next terraform apply.

Andrey Nazarov avatar
Andrey Nazarov

Yeah. For the vast majority of outputs this behaviour seems natural. But I’m still not sure about apply_output though. Maybe we can keep it as it is. Let me file an issue and we can think about it for a while and continue discussing there

Andrey Nazarov avatar
Andrey Nazarov

It seems diff output is also shown

mumoshu avatar
mumoshu

ah, that’s too bad. the provider does have some code to “reset” diff_output on read, so this strategy won’t work for apply_output either

mumoshu avatar
mumoshu

the next best workaround would be to have the provider set diff_output and apply_output to empty on the terraform plan after a successful apply

Andrey Nazarov avatar
Andrey Nazarov
apply_output and diff_output of a previous run are shown when there are no changes in the current one · Issue #26 · mumoshu/terraform-provider-helmfile

When the are no changes in manifests apply_output and diff_output that are shown to the user contain information from the previous successful run. This could be misleading. Terraform apply says No …

mumoshu avatar
mumoshu

thx!

Andrey Nazarov avatar
Andrey Nazarov

Maybe I’ll find time to play with it by myself. But I need to understand how this provider works now since you’ve made a lot of changes recently. And I’m not quite sure I understand all tf internals right.

mumoshu avatar
mumoshu

im still learning it, too. i’ll try to answer questions if you have any. also, i recently started adding comments on code where i find it difficult to understand at a glance, so that may help

1
Paul Catinean avatar
Paul Catinean

I’m on holiday for the time being, but I’ve seen the commit message of the latest helmfile and it works along with the latest provider. thanks so much @mumoshu

sheldonh avatar
sheldonh

Would like clarification so I can expand on the actual expression syntax used in this draft blog post I wrote on using expressions with Terraform for iteration with for_each. PR#40 - Iteration through list of objects

If you are up for it, I’d love any comments on the pull request itself, as I’m a bit unclear about if this is a Terraform foreach type of construct. It seems to be using syntax that is partially go, with the for key,val in list syntax.

I’d like to understand this better as I’ve seen the flatten function used before with some more complex cases and can’t find any reference on the for each syntax itself explaining the schema of it such as for <itemvariable> in <Collection>: <object> => <propertyforkey> and as a result I’m guessing too much on this stuff.

loren avatar
loren
Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

sheldonh avatar
sheldonh

Yes. I’ve read that before. I’m going to reread it again for fresh insight

sheldonh avatar
sheldonh

Very Go inspired it seems, but its own functionality I guess.

loren avatar
loren


The source value can also be an object or map value, in which case two temporary variable names can be provided to access the keys and values respectively:

[for k, v in var.map : length(k) + length(v)]
loren avatar
loren


Finally, if the result type is an object (using { and } delimiters) then the value result expression can be followed by the ... symbol to group together results that have a common key:

{for s in var.list : substr(s, 0, 1) => s... if s != ""}
sheldonh avatar
sheldonh

the => operator in my .net world is typically a lambda, so I was also a bit confused by that here

loren avatar
loren


The type of brackets around the for expression decide what type of result it produces. The above example uses [ and ], which produces a tuple. If { and } are used instead, the result is an object, and two result expressions must be provided separated by the => symbol:

{for s in var.list : s => upper(s)}
loren avatar
loren

IMO, the most important paragraph in all the docs is this:
However, unlike most resource arguments, the for_each value must be known before Terraform performs any remote resource actions. This means for_each can’t refer to any resource attributes that aren’t known until after a configuration is applied (such as a unique ID generated by the remote API when an object is created)

loren avatar
loren

You can get yourself into all sorts of trouble if you do not carefully consider how all the attributes in the for_each expression objects may be specified

sheldonh avatar
sheldonh

Definitely aware of that

sheldonh avatar
sheldonh

Gone down that rabbit hole. I’m just more confused on advanced expressions to manipulate an object’s structure

sheldonh avatar
sheldonh

Basically, per that blog draft I showed, I previously had to do key-based lists, which is clunky. I figured out how to change a list of objects by using this to provide the key, but I’m still not fully conversant with using flatten, nested for, etc. Trying to wrap my head around it, as there is no “schema” doc + examples; it’s kinda all mixed into a general doc like what you showed, which I find a bit confusing

loren avatar
loren

the flatten example in the docs is doing exactly that

    for subnet in local.network_subnets : "${subnet.network_key}.${subnet.subnet_key}" => subnet
loren avatar
loren

the left side of => is the key, and becomes the ID for the resource. it must be unique or you have duplicate resource IDs, which does not work

loren avatar
loren

you can dynamically construct that key, exactly as they show there
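The flatten pattern being referenced can be sketched end-to-end like this (a hypothetical example modeled on the Terraform docs; the resource type, CIDRs, and var.vpc_id are illustrative only):

```hcl
locals {
  networks = {
    a = { subnets = ["10.0.0.0/24", "10.0.1.0/24"] }
    b = { subnets = ["10.1.0.0/24"] }
  }

  # flatten the nested structure into one list of objects,
  # carrying both keys along so a unique ID can be built later
  network_subnets = flatten([
    for network_key, network in local.networks : [
      for subnet_key, cidr in network.subnets : {
        network_key = network_key
        subnet_key  = subnet_key
        cidr_block  = cidr
      }
    ]
  ])
}

resource "aws_subnet" "this" {
  # the left side of => becomes the unique resource ID, e.g. "a.0"
  for_each = {
    for subnet in local.network_subnets :
    "${subnet.network_key}.${subnet.subnet_key}" => subnet
  }

  vpc_id     = var.vpc_id # assumed variable, for illustration only
  cidr_block = each.value.cidr_block
}
```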

loren avatar
loren

your yaml is a bit confusing to me, it’s a list of maps of maps, whereas with terraform it would be cleaner if it were just a list of maps. i.e.

users:
  - name: foobar1
    email: [email protected]
  - name: foobar2
    email: [email protected]
  - name: foobar3
    email: [email protected]

and you’d then loop over it with for_each:

for_each = { for user in <users> : user.name => user }
loren avatar
loren

you could of course mimic that exact structure in your data source, and just give for_each the map of maps… note there are no lists here, so no need to use for to build up the map

users:
  foobar1:
    name: foobar1
    email: [email protected]
  foobar2:
    name: foobar2
    email: [email protected]
  foobar3:
    name: foobar3
    email: [email protected]

and the for_each:

for_each = <users>
loren avatar
loren

i tend to prefer the former, a simple list of maps

loren avatar
loren

in both cases, each.key is the user name, and each.value is the map of user attributes. and you can dot-index into the map, e.g. each.value.name and each.value.email
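Putting the first suggestion together, a hypothetical sketch (yamldecode, the file name, and the IAM resource are illustrative only, not from the thread):

```hcl
locals {
  # users.yaml contains the list-of-maps structure shown above
  users = yamldecode(file("${path.module}/users.yaml")).users
}

resource "aws_iam_user" "this" {
  # key each instance by the unique user name
  for_each = { for user in local.users : user.name => user }

  name = each.key
  tags = {
    # dot-index into the per-user map of attributes
    email = each.value.email
  }
}
```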

sheldonh avatar
sheldonh

Yes, that’s what I already do, the key. However, it adds an additional layer, and I was hoping to still work with an object list and inline tell Terraform what the key is, which my blog post showed worked. I’m just not far beyond that with the more advanced flatten with nested objects and all

loren avatar
loren

in your case, you can use a set also, because the name is your unique key… then you don’t need to construct the map

users:
  - name: foobar1
    email: [email protected]
  - name: foobar2
    email: [email protected]
  - name: foobar3
    email: [email protected]

and the for_each:

for_each = <users>[*].name
loren avatar
loren

well, no, then you can’t get to the user attributes… hmm…

sheldonh avatar
sheldonh

@loren the PR for the blog post showed it seemed to work fine with what I did, it’s more the advanced usage that has me a bit stumped beyond this.

The for_each transformation I blogged about seemed to work; are you seeing that syntax fail too?

loren avatar
loren

no i understand what’s working and what’s not, based on the data structures passed in and how the expressions are evaluated. with my prior comment, i was only speculating some on how to maybe simplify the expression, but decided it wouldn’t work for this use case

1
paultath81 avatar
paultath81

hi, hoping someone has run into this and was able to come up with a solution.

I have the following

resource "tfe_variable" "lt_vpc_security_group_ids" {
  category     = "terraform"
  key          = "lt_vpc_security_group_ids"
  value        = var.lt_vpc_security_group_ids
  hcl          = true
  workspace_id = tfe_workspace.id
}

If I run terraform plan, this error is thrown:

Inappropriate value for attribute "value": string required.

how can i use a variable in the value w/out running into this? Maybe some way of escaping the value?

Joe Niland avatar
Joe Niland

I think you just need to use join()

paultath81 avatar
paultath81

thx Joe, that doesn’t seem to work, as it’s expecting a list(string) type

Joe Niland avatar
Joe Niland

The docs seem to show a string for value

Joe Niland avatar
Joe Niland

Or am I misunderstanding you?

Joe Niland avatar
Joe Niland

Oh you’re trying to reference the variable and not the value?

paultath81 avatar
paultath81

correct sorry for the confusion

paultath81 avatar
paultath81

here’s my variable.tf

variable "lt_vpc_security_group_ids" {
  type        = list(string)
  description = "A list of security group to associate with"
  #default     = []
}
Joe Niland avatar
Joe Niland

So my understanding is you need to join that list into a single string so it can be used in the value field of tfe_variable

paultath81 avatar
paultath81

hmm possible if you can give an example?

paultath81 avatar
paultath81

not sure i understand

paultath81 avatar
paultath81

the variable type is set to use list(string)

paultath81 avatar
paultath81

maybe i can change it to use value = "[var.lt_vpc_security_group_ids]"

paultath81 avatar
paultath81
11:23:32 PM

and the final result if i use the above.
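One possible resolution, not confirmed in the thread: since tfe_variable expects value to be a string even when hcl = true, the list can be serialized with jsonencode (JSON is valid HCL2 syntax for a list, so the workspace variable still parses as list(string)). The workspace reference below is illustrative:

```hcl
resource "tfe_variable" "lt_vpc_security_group_ids" {
  category     = "terraform"
  key          = "lt_vpc_security_group_ids"
  value        = jsonencode(var.lt_vpc_security_group_ids)
  hcl          = true
  workspace_id = tfe_workspace.example.id # assumed workspace resource name
}
```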

2020-07-20

Vucomir Ianculov avatar
Vucomir Ianculov

Hi, i was using EKS worker nodes in the past on our staging ENV and now i would like to switch to terraform-aws-eks-node-group. my questions:

  1. if i use terraform-aws-eks-node-group, is there a way to encrypt the disk and also set a scaling policy (CPU limit)?
  2. if i use EKS worker nodes, is there a way to automatically drain nodes before removing them? at the moment i’m using termination_policies = ["OldestInstance", "OldestLaunchConfiguration", "Default"]. thanks.
cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a PR for encryption that was contributed, but needs rebasing by @

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Enable encryption config by vkhatri · Pull Request #62 · cloudposse/terraform-aws-eks-cluster

what Enable optional encryption_config Create optional Cluster KMS Key if one is not provided why To enable eks cluster resources (e.g. secrets) encryption references https://aws.amazon.com/bl

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
AWS: aws_ebs_encryption_by_default - Terraform by HashiCorp

Manages whether default EBS encryption is enabled for your AWS account in the current AWS region.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For draining, you might want to have a look at https://github.com/aws-samples/amazon-k8s-node-drainer

aws-samples/amazon-k8s-node-drainer

Gracefully drain Kubernetes pods from EKS worker nodes during autoscaling scale-in events. - aws-samples/amazon-k8s-node-drainer

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(haven’t used it)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
aws/aws-node-termination-handler

A Kubernetes Daemonset to gracefully handle EC2 instance shutdown - aws/aws-node-termination-handler

Cloud Posse avatar
Cloud Posse
04:00:04 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Jul 29, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-07-19

Turkleton avatar
Turkleton

Hey, question about the aws elastic beanstalk environment module. Is it possible to implement WAF using that module, for the EB environment? I didn’t see any inputs for WAF or any other modules available for it

2020-07-18

Châu Anh Tuấn avatar
Châu Anh Tuấn
05:45:57 PM

Hi everyone, I have got a problem with Terraform when I add more than 6 tags on the AWS services.

Châu Anh Tuấn avatar
Châu Anh Tuấn

It didn’t update the “kubernetes.io/role/elb” = “1” tag to the state file (edited). I cannot find the reason for that. Can you help me find out?

sheldonh avatar
sheldonh

Are slashes valid for key? Never tried that

Châu Anh Tuấn avatar
Châu Anh Tuấn
05:57:03 PM

I think it’s not a problem, because I also have other tags with slashes.

Steven avatar
Steven

What is the issue? What is the error message? What type of resource? You need to provide more details to get help

Châu Anh Tuấn avatar
Châu Anh Tuấn
02:36:53 AM

I cannot set 7 tags for a resource by Terraform.

Châu Anh Tuấn avatar
Châu Anh Tuấn

Terraform just shows 6 tags in the Terraform state

Châu Anh Tuấn avatar
Châu Anh Tuấn
02:40:33 AM
Châu Anh Tuấn avatar
Châu Anh Tuấn
02:47:51 AM
Steven avatar
Steven

Are you saying that you believe your code should set 7 tags, but you are only getting 6 and no errors? If so, I suspect it is an issue in your code. You would need to share all of the related code for anyone to verify this

sheldonh avatar
sheldonh

What’s the latest from 0.13 hands-on? Has it saved a lot of repeat code for you so far? Haven’t tried it, as I’m using Terraform Cloud primarily. Overall reactions to the improvements would be great.

loren avatar
loren

My thoughts from about a month ago, the module-level count/for_each will definitely simplify things for many modules… https://sweetops.slack.com/archives/CB6GHNLG0/p1592345793219500?thread_ts=1592345793.219500&cid=CB6GHNLG0

oh yeah, playing with tf 0.13 today, and the ability to disable a module using count = 0 is the … eliminates so much of the cruft in advanced modules

loren avatar
loren

Also makes it easier to work with community modules and integrate them into your own work

1

2020-07-17

Luis avatar

Hi! anyone facing this issue when destroying/creating an AWS EKS cluster? https://github.com/cloudposse/terraform-aws-eks-cluster/issues/67

Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: · Issue #67 · cloudposse/terraform-aws-eks-cluster

what Error: Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: The specified log group already exists: The CloudWatch Log Group &#39;/aws/eks/eg-test-eks-cluster/cluster&#39; alr…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please see the supporting details in that issue. It’s not a module problem, it’s a terraform problem.

Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: · Issue #67 · cloudposse/terraform-aws-eks-cluster

what Error: Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: The specified log group already exists: The CloudWatch Log Group &#39;/aws/eks/eg-test-eks-cluster/cluster&#39; alr…

1
HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
09:35:25 PM

Terraform Cloud Outage Jul 17, 21:31 UTC Investigating - Due to a failure in a third-party DNS provider, Terraform Cloud runs are failing and the Terraform Cloud web interface is unavailable.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
09:55:18 PM

Terraform Cloud Outage Jul 17, 21:41 UTC Monitoring - Terraform Cloud is currently back to normal functionality. We’re continuing to monitor DNS functionality and communicate with our provider.Jul 17, 21:31 UTC Investigating - Due to a failure in a third-party DNS provider, Terraform Cloud runs are failing and the Terraform Cloud web interface is unavailable.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
10:55:19 PM

Terraform Cloud Outage Jul 17, 22:35 UTC Resolved - The upstream DNS provider has fixed the issue. Terraform Cloud is operational again - if a run failed during this outage, please re-queue it. If you have problems queueing runs, please reach out to support.Jul 17, 21:41 UTC Monitoring - Terraform Cloud is currently back to normal functionality. We’re continuing to monitor DNS functionality and communicate with our provider.Jul 17, 21:31 UTC Investigating - Due to a failure in a third-party DNS provider, Terraform…

Terraform Cloud Outage

HashiCorp Services’s Status Page - Terraform Cloud Outage.

2020-07-16

contact871 avatar
contact871

Trying to use a private GitLab repository as Terraform module.

It works fine when I hardcode the token like this:

module "resource_name" {
  source = "git::https://oauth2:<GITLAB_TOKEN>@gitlab.com/user/repo.git?ref=tags/v0.1.2"
  ...
}

It also works like this:

module "resource_name" {
  source = "git::https://gitlab.com/user/repo.git?ref=tags/v0.1.2"
  ...
}

When I extend my ~/.gitconfig with:

[url "https://oauth2:<GITLAB_TOKEN>@gitlab.com"]
  insteadOf = https://gitlab.com

Is there a way I could provide the GITLAB_TOKEN via environment variable?

roth.andy avatar
roth.andy

Sounds like the 2nd way you have there is a good way to do it. I don’t believe you can feed in a terraform variable to a module source, since it is needed up front when you run terraform init

roth.andy avatar
roth.andy

https://www.terraform.io/docs/modules/sources.html#generic-git-repository gives some good ideas. You can use SSH instead of HTTPS and it will use your SSH key, or you can set up your git credential helper to already be logged in

Module Sources - Terraform by HashiCorp

The source argument within a module block specifies the location of the source code of a child module.
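
As a concrete sketch of the SSH alternative (the repo path and tag are illustrative; this assumes an SSH key registered with GitLab):

```hcl
# Same module, fetched over SSH so no token appears in the source string
module "resource_name" {
  source = "git::ssh://git@gitlab.com/user/repo.git?ref=tags/v0.1.2"
}
```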

contact871 avatar
contact871

You are right, I tried via a Terraform variable but it didn’t work:

There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Unsuitable value type

  on main.tf line 2, in module "resource_name":
   2:   source = "git::<https://oauth2>:${var.gitlab_token}@gitlab.com/user/repo.git?ref=tags/v0.1.2"

Unsuitable value: value must be known


Error: Variables not allowed

  on main.tf line 2, in module "resource_name":
   2:   source = "git::<https://oauth2>:${var.gitlab_token}@gitlab.com/user/repo.git?ref=tags/v0.1.2"

Variables may not be used here.
contact871 avatar
contact871

need to read more about “git credential helper”, thx @roth.andy

contact871 avatar
contact871

Used the following:

• Have a simple script at /usr/local/bin/git-credential-helper


#!/bin/sh

# Script used to authenticate against GitLab by using the GitLab token
# When using the GitLab token the assumed user is oauth2
# Example: git clone https://oauth2@gitlab.com/user/project.git

# Return the GitLab token
echo "${GITLAB_TOKEN}"

• create environment variable with the token:

export GITLAB_TOKEN='<MY SECRET GITLAB TOKEN>'

• then just download the module with:

GIT_ASKPASS=/usr/local/bin/git-credential-helper terraform init

The main.tf looks like this now:

module "resource_name" {
  source = "git::https://oauth2@gitlab.com/user/project.git?ref=tags/v0.1.2"
  ...
}

@roth.andy thx again for pointing me in the right direction

roth.andy avatar
roth.andy
05:38:42 PM

you’re welcome

cool-doge2
Chris Wahl avatar
Chris Wahl

I ran into a similar need and saw this thread. Really appreciate the content / ideas here, I built upon it to solve my problem. Shared the details here! https://wahlnetwork.com/2020/08/11/using-private-git-repositories-as-terraform-modules/

Using Private Git Repositories as Terraform Modules - Wahl Network

Learn how to quickly and efficiently setup private git repositories as Terraform modules using a dynamic access token and continuous integration!

thumbsup_all1
Soren Martius avatar
Soren Martius
CDK for Terraform: Enabling Python & TypeScript Support

Cloud Development Kit for Terraform, a collaboration with AWS Cloud Development Kit (CDK) team. CDK for Terraform allows users to define infrastructure using TypeScript and Python while leveraging the hundreds of providers and thousands of module definitions provided by Terraform and the Terraform ecosystem.

vgdubrea avatar
vgdubrea

so does that mean I just write infrastructure and app both in typescript ?

CDK for Terraform: Enabling Python & TypeScript Support

Cloud Development Kit for Terraform, a collaboration with AWS Cloud Development Kit (CDK) team. CDK for Terraform allows users to define infrastructure using TypeScript and Python while leveraging the hundreds of providers and thousands of module definitions provided by Terraform and the Terraform ecosystem.

loren avatar
loren

so, pulumi out of business yet?

Andrey Nazarov avatar
Andrey Nazarov

Yeah, a race with pulumi becomes even more interesting:)

:--1:1
Jonathan Le avatar
Jonathan Le

damn damn damn, this is good: https://registry.terraform.io/modules/cloudposse/iam-policy-document-aggregator/aws/0.1.0. We’re doing it a different crappier way where I’m at. Going to suggest we swap over to this.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

btw, up to 0.3.1 now

aaratn avatar
aaratn

This is neat, but I still can’t understand why someone would aggregate policies into one?

aaratn avatar
aaratn

What are some use cases ?

Jonathan Le avatar
Jonathan Le

I’m working at a large media company right now that is clearly pushing the limits of what IAM is capable of, for better or worse. We need every byte of IAM policy possible, so it’s useful to concat many small policies into larger one when attaching to a role.
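
A hedged sketch of what that looks like (the source_documents input follows the module’s README; the data source names are illustrative):

```hcl
# Concatenate several small policy documents into one before attaching,
# to stay inside IAM's policy size/count limits
module "aggregated_policy" {
  source = "cloudposse/iam-policy-document-aggregator/aws"

  source_documents = [
    data.aws_iam_policy_document.logs.json,
    data.aws_iam_policy_document.s3.json,
  ]
}
```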

1

2020-07-15

Sai Krishna avatar
Sai Krishna

Hi Team - I have my terraform scripts bundled as modules. Now I have a main.tf under the root directory with multiple module configurations reusing the same module with different variables, with the intent of reusing the module code. But when I do a terraform plan, instead of creating 2 resources it basically overrides the 1st one with the 2nd configuration. Why does this happen, and what is the way to create multiple resources?

Eric Berg avatar
Eric Berg

This is related to your using local state (assumption). The state of the managed objects is kept in local state files (terraform.tfstate) or in a remote data store, such as S3. Running TF in the same dir as the state files for a previous run of TF will operate on the same objects as the last run, because it gets its state from the state files.

I have a similar set-up and I was advised to look into Terraform Cloud, to manage the state.

You could use workspaces to separate out the state, regardless of where the state resides, but i’m not really up on that approach.
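
A minimal remote-backend sketch of what that separation looks like (bucket, table, and key names are illustrative):

```hcl
# Each root module gets its own state file via a distinct key,
# so one run can't clobber another's resources
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "pipelines/pipeline-a/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
  }
}
```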

paultath81 avatar
paultath81

Has anyone run into the error below when using a dynamic block on tags?

on [main.tf](http://main\.tf) line 50, in resource "aws_launch_template" "default":
  50:     dynamic tags {

Blocks of type "tags" are not expected here

here’s what i’m inserting

    dynamic "tags" {
      for_each = local.common_tags

      content {
        key   = tags.key
        value = tags.value
      }
    }
David Scott avatar
David Scott

In all of my code AWS tags have an equals sign (tags = {}), and the Terraform AWS tagging documentation backs that up by saying resource tags are arguments that accept a key-value map (not blocks that support for_each). I wanted to verify that, so I googled it and found that the Terraform 0.12 Preview for Dynamic Nested Blocks (https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each/) gives a contradictory example showing exactly what you’re trying to do. Finally I found official documentation clarifying this (https://github.com/hashicorp/terraform-guides/tree/master/infrastructure-as-code/terraform-0.12-examples/first-class-expressions):

This is required in Terraform 0.12 since tags is an argument rather than a block which would not use =. In contrast, we do not include = when specifying the network_interface block of the EC2 instance since this is a block.

It is not easy to distinguish blocks from arguments of type map when looking at pre-0.12 Terraform code. But if you look at the documentation for a resource, all blocks have their own sub-topic describing the block. So, there is a Network Interfaces sub-topic for the network_interface block of the aws_instance resource, but there is no sub-topic for the tags argument of the same resource.

For more on the difference between arguments and blocks, see Arguments and Blocks.
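
So the fix, sketched with illustrative tag values, is to drop the dynamic block and assign the map directly:

```hcl
# tags on aws_launch_template is a plain map argument, so it takes "="
resource "aws_launch_template" "default" {
  # ...
  tags = merge(local.common_tags, { Name = "example" })
}
```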

loren avatar
loren

the equivalent of “dynamic” for a map, as tags is, would be to use a for expression…

tags = { for k, v in <map> : k => v }
loren avatar
loren

something like that, anyway, not able to test it at the moment…

maarten avatar
maarten
 tags = flatten([
    for key in keys(local.common_tags) : merge(
      {
        key   = key
        value = local.common_tags[key]
    }, var.additional_tag_map)
  ])

You can replace var.additional_tag_map with {} if you don’t have other tags

paultath81 avatar
paultath81

sweet thank you all for the info i will test this out and let you know if it works

Anirudh Srinivasan avatar
Anirudh Srinivasan

How can I narrow down my filter to just “id”? Here is what I am running: terraform state show module.controlplane.aws_security_group.worker

resource "aws_security_group" "worker" {
    arn                    = "arn:aws:ec2:us-west-2:000000000:security-group/sg-000000000000"
    id                     = "sg-000000000000"
    ingress                = []
    name                   = "foobar"
    owner_id               = "000000000"
    revoke_rules_on_delete = false
    vpc_id                 = "vpc-000000000000"
}
maarten avatar
maarten

use terraform show -json and pipe to jq
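
A sketch of that pipeline, shown here against a trimmed stand-in for the real state JSON (the resource address matches the one above; the JSON shape follows Terraform’s documented state JSON format):

```shell
echo '{"values":{"root_module":{"child_modules":[{"resources":[{"address":"module.controlplane.aws_security_group.worker","values":{"id":"sg-000000000000"}}]}]}}}' |
  jq -r '.values.root_module.child_modules[].resources[]
         | select(.address == "module.controlplane.aws_security_group.worker")
         | .values.id'
# prints sg-000000000000
```

Against a live state, replace the echo with terraform show -json.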

:--1:1
bondar avatar
bondar

hey there, we have legacy terraform 0.11 and aws provider 1.32.0. The goal: manage credentials in SSM across 3 AWS accounts (environments dev, test, prod) with 5 logical domains per environment. The question: am I overcomplicating this, or is there an easier way to handle it?

bondar avatar
bondar

here’s module code

bondar avatar
bondar

note:

• yes, i know regarding a security concerns

• state locally managed

2020-07-14

Vucomir Ianculov avatar
Vucomir Ianculov

Hey everyone. i’m using terraform-aws-eks-workers (https://github.com/cloudposse/terraform-aws-eks-workers) 0.7.1, still on terraform 0.11. I’m looking for a way to add the availability zone to the tags so I can spread my pods evenly across all nodes with topologySpreadConstraints. Is there an easy way of doing this?

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

HarjotSingh avatar
HarjotSingh

hi Team.. I want to create IAM users for my team so that they can access the AWS console and perform operations on DynamoDB and SQS. I want to do it using terraform.. Any good pointers on that? I checked these 2 links but they are very basic: https://www.terraform.io/docs/providers/aws/r/iam_user.html https://www.terraform.io/docs/providers/aws/r/iam_access_key.html I need more advanced options and multiple policies which I can enforce for those IAM users

AWS: aws_iam_access_key - Terraform by HashiCorp

Provides an IAM access key. This is a set of credentials that allow API requests to be made as an IAM user.

Joe Niland avatar
Joe Niland

you will pretty much just need roles, policies and users

HarjotSingh avatar
HarjotSingh

ok. let me have a look

Joe Niland avatar
Joe Niland

Did you create your queues and dynamodb with Terraform too?

HarjotSingh avatar
HarjotSingh

yes

HarjotSingh avatar
HarjotSingh

in the link you shared, there are 3 different project folders for policies, roles and users.. how do they connect to create users?

Tim Birkett avatar
Tim Birkett

You create policies that are attached to roles and you allow individual users to assume those roles.
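
A minimal sketch of that chain (the names and account ID are illustrative):

```hcl
# Policy -> role -> user-assumes-role chain
data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111111111111:root"] # users in this account may assume
    }
  }
}

resource "aws_iam_role" "dynamo_sqs" {
  name               = "team-dynamo-sqs"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

resource "aws_iam_role_policy_attachment" "dynamo" {
  role       = aws_iam_role.dynamo_sqs.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess"
}
```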

Tim Birkett avatar
Tim Birkett

If you already use something like Okta (or other IdP), I’d strongly consider setting up Federated SSO. Managing individual users in AWS with Terraform gets annoying and being too granular can also cause you trouble as there are limits to numbers and sizes of policies that can be attached to a user, and the user will inevitably want more and more.

Of course, it depends how many users you are talking about. If there’s 10, then Terraform will be fine. If there’s 1000 across 220+ AWS accounts then SSO / Self-Serve access request is a must.

2
:--1:1
HarjotSingh avatar
HarjotSingh

ok.. we have less users so terraform is fine for us

HarjotSingh avatar
HarjotSingh

hi Team,

HarjotSingh avatar
HarjotSingh

I created code for user account creation, but when I run terraform plan it also plans to destroy many things on its own.. Those destroys are not needed. How do I avoid them, or am I doing something wrong in my new code for user account creation?

2020-07-13

rahulm4444 avatar
rahulm4444
Migrating from VMs to Kubernetes using HashiCorp Consul Service on Azure attachment image

Join HashiCorp and Microsoft to talk about workload migration from VM’s to Kubernetes using HCS on Azure with Consul Cluster Management. In this session, Ray Kao, Cloud Native Specialist at Microsoft…

rahulm4444 avatar
rahulm4444
HashiCorp Terraform on AWS Virtual Workshop July 14th

Join local practitioners for an overview of the HashiCorp toolset and a hands-on virtual workshop for Terraform on AWS on Tuesday, July 14th. http://events.hashicorp.com/workshops/terraform-july14

David Napier avatar
David Napier

I went to this, was very disappointed.

HashiCorp Terraform on AWS Virtual Workshop July 14th

Join local practitioners for an overview of the HashiCorp toolset and a hands-on virtual workshop for Terraform on AWS on Tuesday, July 14th. http://events.hashicorp.com/workshops/terraform-july14

Cloud Posse avatar
Cloud Posse
04:00:48 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Jul 22, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

paultath81 avatar
paultath81

can someone help me understand why one should use .tfvars vs auto.tfvars? Reading several blogs/docs, they seem to be used in the same manner

Joe Niland avatar
Joe Niland

Files called terraform.tfvars are loaded first.

*.auto.tfvars are loaded in alphabetical order after terraform.tfvars so they can be useful as overrides.

Files specified by -var-file are loaded last and can override values from the other two.

https://www.terraform.io/docs/commands/apply.html#var-file-foo

Command: apply - Terraform by HashiCorp

The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
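
The load order described above can be summarized as (file names illustrative):

```hcl
# Variable definitions files, in the order Terraform loads them
# (later values override earlier ones):
#   1. terraform.tfvars (then terraform.tfvars.json)
#   2. *.auto.tfvars / *.auto.tfvars.json, in lexical order of file name
#   3. -var-file=... flags, in the order given on the command line
```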

Chris Wahl avatar
Chris Wahl

Also tfvars doesn’t work with Terraform Cloud. You have to use auto.tfvars or load the input values via TFC variables

Joe Niland avatar
Joe Niland

Didn’t know that @ - thanks!

:--1:1
paultath81 avatar
paultath81

ah thx @ that’s what i was looking for. I should have mentioned we are using TFC but this is good to know too Joe. thx you both

:--1:2
Marcin Brański avatar
Marcin Brański
Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

jmccollum avatar
jmccollum

It is possible to use tfvars with TFC/TFE. Make an environment variable called TF_CLI_ARGS_plan with a value of -var-file=./environments/prod.tfvars
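
As a sketch, set this as an environment variable on the TFC/TFE workspace (the path is illustrative):

```shell
# TF_CLI_ARGS_plan is appended to every `terraform plan` the worker runs
TF_CLI_ARGS_plan="-var-file=./environments/prod.tfvars"
```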

jmccollum avatar
jmccollum

This client used open source workspaces with named tfvars for each workspace, and when moving to TFE they didn’t want to start managing variables in the UI and wanted to keep using tfvars for variables.

Marcin Brański avatar
Marcin Brański

I don’t think so, unfortunately. It would be awesome if you could. What you can do, though, is use a yaml configuration and parse it in tf. This is what I’m currently doing instead of tfvars

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

what’s the best way to run a loop on a module? I need to be able to build a small environment automatically based on a list variable. I see loops for modules are going to be available in 0.13, but since that’s not available yet, is there any other way I can accomplish this? I am looking at for_each, but I am confused whether cross-resource dependencies will work, as I need to pass other resource outputs.. any help would be greatly appreciated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Basically you’ll want to do something like this: https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L24-L34

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It gets complicated quickly though if you depend on a lot of resources
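
The pattern from that link boils down to looping inside the module with for_each on the resource (the variable and resource names here are illustrative, not the module’s actual code):

```hcl
# Pre-0.13 workaround: the module takes a collection and fans out internally
variable "repository_names" {
  type = set(string)
}

resource "aws_ecr_repository" "this" {
  for_each = var.repository_names
  name     = each.value
}
```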

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

thanks @Erik Osterman (Cloud Posse) I only have one resource dependency so I think this should work perfectly …

2020-07-12

loren avatar
loren

I hope they make it easy to host your own registry, or proxy/cache the terraform registry at your own endpoint

loren avatar
loren

or provide a git-source option for providers, like they do for modules

1
Tarlan Isaev avatar
Tarlan Isaev

Hi guys, how would you try to resolve this issue? I dive more in depth through this workshop https://www.techcrumble.net/2020/01/how-to-configure-terraform-aws-backend-with-s3-and-dynamodb-table/

terraform apply -auto-approve                                    
Acquiring state lock. This may take a few moments...
Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
        * ResourceNotFoundException: Requested resource not found
        * ResourceNotFoundException: Requested resource not found
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
How To Configure Terraform AWS Backend With S3 And DynamoDB Table attachment image

Managing state with terraform is quite crucial, when we are working with multiple developers in a project, with remote operation and sensitive data, let’s see how to use AWS Backend with S3 and DynamoDB table

Geordan avatar
Geordan

I’d verify the S3 bucket and DynamoDB table were actually created via the AWS Console just to rule the simple stuff out.

How To Configure Terraform AWS Backend With S3 And DynamoDB Table attachment image

Managing state with terraform is quite crucial, when we are working with multiple developers in a project, with remote operation and sensitive data, let’s see how to use AWS Backend with S3 and DynamoDB table

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

1
:--1:1
Tarlan Isaev avatar
Tarlan Isaev

@Erik Osterman (Cloud Posse) thanks so much folks. I’ll check that out :)

cool-doge1
Tarlan Isaev avatar
Tarlan Isaev

@Erik Osterman (Cloud Posse) here we go mate. Should I rely on the conditional operator?

terraform apply -auto-approve
var.region
  AWS Region the S3 bucket should reside in

  Enter a value: yes

provider.aws.region
  The region where AWS operations will take place. Examples
  are us-east-1, us-west-2, etc.

  Enter a value: us-west-2

data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
module.terraform_state_backend.data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
module.terraform_state_backend.aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=eg-test-terraform-state-lock]
aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=terraform-state-lock]

Error: Error in function call

  on main.tf line 255, in data "template_file" "terraform_backend_config":
 255:       coalescelist(
 256: 
 257: 
 258: 
    |----------------
    | aws_dynamodb_table.with_server_side_encryption is empty tuple
    | aws_dynamodb_table.without_server_side_encryption is empty tuple

Call to function "coalescelist" failed: no non-null arguments.


Error: Error in function call

  on .terraform/modules/terraform_state_backend/main.tf line 234, in data "template_file" "terraform_backend_config":
 234:       coalescelist(
 235: 
 236: 
 237: 
    |----------------
    | aws_dynamodb_table.with_server_side_encryption is empty tuple
    | aws_dynamodb_table.without_server_side_encryption is empty tuple

Call to function "coalescelist" failed: no non-null arguments.
Tarlan Isaev avatar
Tarlan Isaev

Sorry about stupid question guys. Hopefully, you don’t mind too much. I’m still on a learning curve and quite a bit in my slow-mode regime lol

2020-07-11

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Installing Community Providers

This manual-install situation should hopefully gradually become uncommon after the Terraform 0.13.0 release in a few weeks, as more providers…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
:--1:1
Zachary Loeber avatar
Zachary Loeber

Does this mean it becomes easier to publish/use custom providers in tf then? The hackery involved in using a community provider found out in the wild is a little painful to automate.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yep, this should be easy - as long as the maintainers register their providers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it will continue to be a pain for legacy providers distributed via github.

sheldonh avatar
sheldonh
Compiling a Custom Provider and Including for Terraform Cloud

Assumptions You are familiar with the basics of setting up Go and can run basic Go commands like go build and go install and don’t need much guidance on that specific part. You have a good familiarity with Terraform and the concept of providers. You need to include a custom provider which isn’t included in the current registry (or perhaps you’ve geeked out and modified one yourself ). You want to run things in Terraform Enterprise .

sheldonh avatar
sheldonh

Turns out I wasted some time on this. I totally missed that creating a bundle was only for terraform Enterprise and so I had to backtrack at the end of my exploration

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Excited to see that third party providers will be as easy to leverage as modules

2020-07-10

Sai Krishna avatar
Sai Krishna

Hi Everyone - I have a question on terraform.

I wrote an AWS pipeline setup script in terraform that gets all the config values from variables, with the intention of reusing the script to create multiple pipelines. But if I update my variables to create a new pipeline, then because the state file has information on the previous terraform builds, it overrides the existing pipeline with the new values. How do I handle this situation?

Matt Gowie avatar
Matt Gowie

Sounds like your pipeline resources should be a module and you should invoke usage of that module for your X many pipelines and pass the corresponding variables at that level.

Generally, it sounds like reading up on Terraform state and lifecycle would be useful to you as terraform doesn’t act like a typical bash script would. You invoke terraform once to make your resources and then upon any subsequent invoking it’s either updating, adding, or destroying those same resources.

Sai Krishna avatar
Sai Krishna

Thanks @Matt Gowie! So if I have a module for pipelines, I can then create X number of pipelines by changing the variables, is that right?

Matt Gowie avatar
Matt Gowie

Yeah, you can create your project’s main.tf file, enumerate each pipeline you want by including a module block for it, and then pass your associated pipeline variables to their corresponding module.
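
A sketch of that enumeration (the module path and inputs are illustrative):

```hcl
# One module block per pipeline; each gets its own state addresses,
# so neither invocation overwrites the other
module "pipeline_a" {
  source = "./modules/pipeline"
  name   = "pipeline-a"
}

module "pipeline_b" {
  source = "./modules/pipeline"
  name   = "pipeline-b"
}
```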

Sai Krishna avatar
Sai Krishna

Got it thanks again @Matt Gowie appreciate the help!

Matt Gowie avatar
Matt Gowie

Np, good luck with it!

Sai Krishna avatar
Sai Krishna

How about using terraform workspaces and having multiple state files for multiple executions?

Matt Gowie avatar
Matt Gowie

I typically use workspaces for environments i.e. dev, stage, prod. You could use them for what you’re looking to accomplish, but I’d suggest against it considering what I know so far.

sheldonh avatar
sheldonh

Yes, best practice is to use workspaces for env variations, but not for different deployments of a unique stack. Good advice Gowiem.

2020-07-09

Haroon Rasheed avatar
Haroon Rasheed

The local-exec below was able to update the .bashrc file on macOS, but it never works: my execution fails with aws credentials not being configured. The same works fine on Ubuntu. Any idea why it is failing, and any solution to make it work? I can’t configure using the aws configure command as I am doing it at run time. This approach works for me elsewhere, but I need to understand why it is not working on macOS. Please suggest.

resource "null_resource" "aws_configure" {
  provisioner "local-exec" {
    command     = "grep -qwF 'export AWS_ACCESS_KEY_ID' ~/.bashrc || echo 'export AWS_ACCESS_KEY_ID=${module.globals.aws_details["access_key"]}' >> ~/.bashrc; grep -qwF 'export AWS_SECRET_ACCESS_KEY' ~/.bashrc || echo 'export AWS_SECRET_ACCESS_KEY=${module.globals.aws_details["secret_key"]}' >> ~/.bashrc; grep -qwF 'export AWS_DEFAULT_REGION' ~/.bashrc || echo 'export AWS_DEFAULT_REGION=${module.globals.aws_details["region"]}' >> ~/.bashrc"
    interpreter = ["bash", "-c"]
  }
}
Joe Niland avatar
Joe Niland

What are you trying to achieve here?

Haroon Rasheed avatar
Haroon Rasheed

Actually I am trying to setup aws credentials directly as env value during terraform execution..this would be used by terraform for AWS resource creation..Actually My plan is to run this terraform files anywhere and I will do the aws credential configuration..and initiate AWS resource creation..

Joe Niland avatar
Joe Niland

Can you use aws-vault exec?
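
For context, a typical aws-vault flow looks roughly like this (the profile name is illustrative; see the 99designs/aws-vault README for exact commands):

```shell
aws-vault add myprofile                      # store keys once, in the OS keychain
aws-vault exec myprofile -- terraform apply  # inject short-lived creds for one run
```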

Haroon Rasheed avatar
Haroon Rasheed

Till now I’ve been running it on ubuntu machines..where .bashrc file update made sure aws credentials are used during runtime..

Haroon Rasheed avatar
Haroon Rasheed

oh ok..i never knew about this..any link u would suggest?

Joe Niland avatar
Joe Niland

You could then use tfenv

https://github.com/cloudposse/tfenv

cloudposse/tfenv

Transform environment variables for use with Terraform (e.g. HOSTNAME → TF_VAR_hostname) - cloudposse/tfenv

Joe Niland avatar
Joe Niland
99designs/aws-vault

A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault

Joe Niland avatar
Joe Niland

There’s also cloudposse/geodesic if you want to use a docker image as your standard environment for running Terraform things

Haroon Rasheed avatar
Haroon Rasheed

oh ok..i never thought about running as docker image..interesting..let me chk out..

Haroon Rasheed avatar
Haroon Rasheed

this will solve all my problem..if I am able run it as docker image..

Joe Niland avatar
Joe Niland

It works well

Joe Niland avatar
Joe Niland
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

cool-doge1
Joe Niland avatar
Joe Niland

Pass your account id, etc as a docker run env var

Joe Niland avatar
Joe Niland

See dockerfile

Joe Niland avatar
Joe Niland

Then docker exec and run assume-role

Haroon Rasheed avatar
Haroon Rasheed

sure, really looks great..i never knew there were so many repos on cloudposse..really cool stuff..let me go over and see if i can set it up..

1
Haroon Rasheed avatar
Haroon Rasheed

Thank you very much @Joe Niland

Joe Niland avatar
Joe Niland

Good luck!

Joe Niland avatar
Joe Niland

You’re welcome

Rajesh Babu Gangula avatar
Rajesh Babu Gangula

I am trying to update a codebuild project with EFS settings, as terraform’s aws_codebuild_project does not have the option to do it during initial setup .. I am getting the following error …. I am sure it’s something simple that I am missing here .. let me know if anyone can point out what I am missing

null_resource.output-id: Provisioning with 'local-exec'...
null_resource.output-id (local-exec): Executing: ["/bin/sh" "-c" "aws codebuild update-project --name supercell-shared-infra --file-system-locations [type=EFS,location=fs-865b7b05.efs.us-east-1.amazonaws.com,mountPoint=mount-point,identifier=efs-identifier]"]

null_resource.output-id (local-exec): Expecting value: line 1 column 2 (char 1)

2020-07-08

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
07:55:26 PM

Partial Outage of Workspace Updates, some Runs may not complete Jul 8, 19:47 UTC Identified - We are currently working on a fix for an issue that affects workspace changes. The underlying problem affects the ability for some runs to start or complete.

Partial Outage of Workspace Updates, some Runs may not complete

HashiCorp Services’s Status Page - Partial Outage of Workspace Updates, some Runs may not complete.

HashiCorp Services Status - Incident History avatar
HashiCorp Services Status - Incident History
08:15:22 PM

Partial Outage of Workspace Updates, some Runs may not complete Jul 8, 20:08 UTC Resolved - We’ve tested and rolled out a fix for this issue. Runs that aren’t completing can be discarded and re-queued. Locked workspaces can now be force-unlocked (via a new run or via settings –> locking –> force unlock). https://www.terraform.io/docs/cloud/workspaces/settings.html#lockingJul 8, 19:47 UTC Identified - We are currently working on a fix for an issue that affects workspace changes. The underlying problem affects the ability for some runs to start or…

sheldonh avatar
sheldonh

Does terraform registry nondeterministic versioning give you enough cause to stop using GitHub tag based sources and instead use the private registry that Terraform cloud offers? Seems like I could do non breaking updates this way while GitHub tags wouldn’t

2020-07-07

Eric Alford avatar
Eric Alford

Hey everyone huge fan of what yall do. Been using your terraform modules for a long time.

Question: I’m switching our worker modules from terraform-aws-eks-workers (https://github.com/cloudposse/terraform-aws-eks-workers) to terraform-aws-eks-node-group (https://github.com/cloudposse/terraform-aws-eks-node-group), and I noticed the node group module is missing the bootstrap_extra_args parameter that the workers module has. This is a blocker for us, so I wanted to see if there was something I was missing or if maybe this was on the road map to add?
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

cool-doge1
Eric Alford avatar
Eric Alford

Looks like this isn’t actually possible with managed node groups. Nvm can ignore.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

exactly - this is a fundamental limitation

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we wanted this too - for using jenkins (docker in docker), but had to stick with workers for that node group.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

note, it’s possible to mix and match. you can use both modules in concert under the same EKS cluster

Eric Alford avatar
Eric Alford

Got it makes sense. Yeah thats too bad. I love the simplicity of managed node groups but not being able to add kubelet args is a blocker for us

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what flags in particular?

Eric Alford avatar
Eric Alford

Specifically I need to be able to set the --cluster-dns flag so we can use node-local-dns caching for the cluster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, ok

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, for others following along, this is the feature request issue: https://github.com/aws/containers-roadmap/issues/596

[EKS] [request]: Managed Node Groups Custom Userdata support · Issue #596 · aws/containers-roadmap

Community Note Please vote on this issue by adding a :–1: reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

:--1:2
1
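
For reference, this is roughly how the kubelet flag discussed above gets passed with the workers module; the DNS address is just the conventional node-local-dns example, so treat the values as assumptions:

```hcl
# Hypothetical sketch: kubelet flags via the eks-workers module's
# bootstrap_extra_args (managed node groups don't expose userdata).
module "eks_workers" {
  source = "cloudposse/eks-workers/aws"
  # ...other required inputs elided...

  # forwarded to /etc/eks/bootstrap.sh on the EKS-optimized AMI
  bootstrap_extra_args = "--kubelet-extra-args '--cluster-dns=169.254.20.10'"
}
```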
sweetops avatar
sweetops

Here’s a question for the crowd. I’m conditionally creating an NLB based off of a variable, and I need to conditionally add said NLB as an additional load balancer to an ECS cluster. I know this example doesn’t work, but this is basically what I’m trying to do, and wondering if anyone has done something similar

resource "aws_ecs_service" "service" {
  name            = "${var.namespace}-${var.stage}-${var.service}"
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.td.arn
  launch_type     = "FARGATE"
  desired_count   = var.desired_count

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks_sg.id]
    subnets          = var.internal == true ? tolist(var.private_subnets_ids) : tolist(var.public_subnets_ids)
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.alb_tg.arn
    container_name   = var.container_name
    container_port   = var.public_container_port
  }

  load_balancer {
    count = var.nlb_enabled == true ? 1 : 0
    target_group_arn = aws_lb_target_group.nlb_tg[0].arn
    container_name   = var.container_name
    container_port   = var.private_container_port
  }

}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can use a for loop on the attributes, no?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub

GitHub is where people build software. More than 50 million people use GitHub to discover, fork, and contribute to over 100 million projects.

sweetops avatar
sweetops

I’ll give that a try now!

sweetops avatar
sweetops

Thanks Erik, I was able to use the examples to create a dynamic block with a for_each. First time I’ve used a dynamic.

:100:1
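
A sketch of that dynamic-block approach, for anyone landing here later (resource and variable names reuse the example above):

```hcl
# Conditionally attach the NLB target group with a dynamic block:
# an empty for_each list means the block is simply omitted.
dynamic "load_balancer" {
  for_each = var.nlb_enabled ? [aws_lb_target_group.nlb_tg[0].arn] : []
  content {
    target_group_arn = load_balancer.value
    container_name   = var.container_name
    container_port   = var.private_container_port
  }
}
```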
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Excellent! glad that worked out for what you needed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

HCL2 for the win!

S L avatar

Hello, I had a question about configuring parameter group settings using cloudposse/terraform-aws-documentdb-cluster . Is there currently no way to update parameter group settings (such as enabling tls, ttl_monitor, and profiler logs) using that terraform template?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

quite possibly - but we’d accept PRs and are actively reviewing contributions

S L avatar

Does it take the parameter group settings of the default parameter group (docdb3.6) then?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-documentdb-cluster

Terraform module to provision a DocumentDB cluster on AWS - cloudposse/terraform-aws-documentdb-cluster

btai avatar

when terraforming EKS with node groups, how do you add ingress for the automatically provisioned security group to the cluster SG and vice versa? I don’t see a SG attribute that is exported

btai avatar

im a little late to the party, but we can’t tag the underlying ec2 instances w/ Name, etc?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

• Stop using alb-ingress?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

alb.ingress.kubernetes.io/security-groups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Instead, provision the security groups and set up the trust relationships. Then pass the new blessed security group to the ingress.

btai avatar

sorry i’m not talking about kubernetes ingress. i was talking about the cluster security group ingress rule and node group security group ingress rule. when provisioning the nodegroup, i don’t seem to have a reference to its security group in terraform? going the traditional worker ASG route, the security group is provisioned manually by us in terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:20:22 AM
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
04:20:24 AM
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe I’m still misunderstanding…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You don’t have control over the rules themselves added by AWS. That’s what you’re paying them for

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but you have the security groups that you can add/remove rules to

btai avatar

forgive me @Erik Osterman (Cloud Posse) for not being specific enough. I don’t believe the EKS node group returns the automatically provisioned security group for the node group.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

terraform exposes the autoscale group of the managed nodes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

look up the autoscale group using the aws_autoscaling_group data provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then from there, lookup the launch configuration using the aws_launch_configuration data provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then from there you can access the security_groups

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This seems like a nice addition to our module - if you end up implementing it, we’d accept it
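
A sketch of that data-source chain (the module output name is an assumption):

```hcl
# Hypothetical sketch: walk from the managed node group's ASG to the
# security groups on its launch configuration.
data "aws_autoscaling_group" "nodes" {
  name = module.eks_node_group.autoscaling_group_name # assumed output name
}

data "aws_launch_configuration" "nodes" {
  name = data.aws_autoscaling_group.nodes.launch_configuration
}

output "node_security_groups" {
  value = data.aws_launch_configuration.nodes.security_groups
}
```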

btai avatar

@Erik Osterman (Cloud Posse) I may revist nodegroups in the future if there are new features. Right now, for those of us that had been doing things the traditional ASG way, I feel like the terraform module for eks workers is basically just as “managed” (because its automated provisioning) but allows for more configurability.

:--1:1
btai avatar

obviously if you were implementing the EKS workers terraform from scratch today without any prior context, doing it via nodegroups would’ve been way faster

2020-07-06

Avi Khandelwal avatar
Avi Khandelwal

Hi guys! I am new to terraform and trying to get my hands on terraform functions. I have created main.tf file:

resource "aws_lb_listener" "backend_alb_listener" {
  load_balancer_arn = aws_lb.backend_alb.arn
  port = lookup(var.alb_http_listeners, "port")
  protocol = lookup(var.alb_http_listeners, "protocol", null)

  # default_action - (Required) An Action block.
  dynamic "default_action" {
    for_each = var.alb_http_listeners
    content {
      type =  lookup(default_action.value, "action_type", "forward")
      target_group_arn = aws_lb_target_group.backend_alb_target_group.arn
    }
  }
} 

and variables.tf file:

variable "alb_http_listeners" {
  default     = {
    "block 1" = {
      port = 443
      protocol = "HTTPS"
      default_action = {
        action_type = "forward"
      }
    }
  }
  type = any
  description = "A list of maps describing the HTTP listeners or TCP ports for this ALB."
}

It seems like the lookup function is not able to read from the variables.tf file, as when I run terraform plan it takes default values, i.e. port = 80 and protocol = HTTP, not the ones I set in variables.tf. Can anyone help me write the variables.tf file correctly? Thanks in advance.

Zach avatar

When you get into situations like this with terraform a good technique is to make a temporary folder and add a few of the bare minimum things to get an example working, preferably with no remote resources. Usually I’ll just declare a locals block and a few output resources to show me what’s going on.

In your case here, the problem you’re having is that you have a complex map defined and lookup expects the key to be at the top level. So to look up the port you’d actually need to pass the ‘block 1’ map to the function.

Avi Khandelwal avatar
Avi Khandelwal

@ thanks for your valuable input. Can you give me a quick fix for the problem, meaning how do I pass the ‘block 1’ map to the function?

Zach avatar
locals {

  alb_http_listeners = {
    default          = {
      "block 1"      = {
        port         = 443
      }
    }
  }

}

output "port" {
  value = lookup(local.alb_http_listeners["default"]["block 1"], "port"  ,80)
}
Zach avatar

you can stick that into a .tf and run an apply, you’ll get ‘443’ as the output.

Avi Khandelwal avatar
Avi Khandelwal

So do I have to do this in every lookup function, such as in protocol

Zach avatar

Anywhere you have nested maps, yes

Avi Khandelwal avatar
Avi Khandelwal

great thanks:)

Zach avatar

or figure out if there’s a way to do it using the ‘key’ part of the lookup function, but either way its going to mean you have to know the structure of the map

Zach avatar

You may just want to step back and think if you’re taking the right approach in that case
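
One such restructuring, sketched against the variable shape from the question (untested, treat as a starting point): iterate the map with for_each on the resource itself, so each lookup only ever goes one level deep.

```hcl
# Hypothetical sketch: one listener per entry in var.alb_http_listeners,
# so lookup() is only ever called on a flat map.
resource "aws_lb_listener" "backend" {
  for_each = var.alb_http_listeners

  load_balancer_arn = aws_lb.backend_alb.arn
  port              = each.value.port
  protocol          = lookup(each.value, "protocol", "HTTP")

  default_action {
    type             = lookup(each.value.default_action, "action_type", "forward")
    target_group_arn = aws_lb_target_group.backend_alb_target_group.arn
  }
}
```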

Cloud Posse avatar
Cloud Posse
04:00:34 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Jul 15, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2020-07-04

2020-07-03

Mahesh avatar
Mahesh

Hi All, anyone facing issues with Terraform v0.12.24 running inside EKS Pod? somehow Terraform is assuming EKS worker node’s role than Pod’s ServiceAccount, the worker node’s role doesn’t have admin policy so its failing. Terraform v0.12.20 works fine with same setup. any leads?

Tom Howarth avatar
Tom Howarth

OK i have hit this little issuette with AzureRM and Vault where the token issued from vault is not being accepted by Azure as AD has not replicated.  everything I have read suggests using a bash script to insert an artificial delay of 120 seconds into the authentication process.

I have this script that I nicked

subscription_id=$1 sleep $2 echo “{ "subscription_id\”: \”$subscription_id\” }” data “external” “subscription_id” {   program = [“./install.sh”, “<subscription_id>”, “120”] } and according to the post I was reading I replace the line subscription_id = <subscription_id> with subscription_id = “data.external.subscription_id.result[“subscription_id”]” however when I issue a terraform plan against that i receive:

There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that Terraform can determine which modules and providers need to be installed.

Error: Missing newline after argument

  on test.tf line 3, in provider “azurerm”:    3:   subscription_id = “data.external.subscription_id.result[“subscription_id”]”

An argument definition must end with a newline. I know I am missing something simple but i just cant see it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(please use code blocks) e.g.

subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"
data "external" "subscription_id" {
  program = ["./install.sh", "<subscription_id>", "120"]
} 
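
For context, the external data source expects its program to print a single JSON object on stdout; a minimal install.sh along those lines might look like this (the defaults are for illustration only):

```shell
#!/bin/sh
# Hypothetical sketch of install.sh: emit the subscription id as a JSON
# object on stdout after an artificial delay, which is the contract the
# "external" data source expects. Defaults below are illustrative only.
subscription_id="${1:-demo}"
sleep "${2:-0}"
printf '{ "subscription_id": "%s" }\n' "$subscription_id"
```

Terraform then exposes the parsed keys under data.external.subscription_id.result.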
Tim Birkett avatar
Tim Birkett

Could you maybe post the full bit of appropriate code in a code block? It’s hard to understand above… Is it literally:

subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"
data "external" "subscription_id" {
  program = ["./install.sh", "<subscription_id>", "120"]
}
Tim Birkett avatar
Tim Birkett

Or is:

subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"

The contents of install.sh?

Tom Howarth avatar
Tom Howarth

that is the script.

this is the code:

provider “azurerm” { version = “~>2.0” subscription_id = “data.external.subscription_id.result[“subscription_id”]”

tenant_id = “tenant_id” client_id = “data.vault_generic_secret.azure.data[“client_id”]” client_secret = “data.vault_generic_secret.azure.data[“client_secret”]” features {} }

provider “vault” { address = “vault_address:8200/” auth_login { path = “auth/approle/login” parameters = { role_id = “role_id” secret_id = “secret_id” } } }

data “vault_generic_secret” “azure” { path = “azure/creds/Azure-Terraform” }

resource “azurerm_resource_group” “rg” { name = “myRemoteAmazicTest-rg” location = “northeurope” }

maarten avatar
maarten

:wave: FWIW it helps everyone to first run terraform fmt in the source folder, as it makes it a bit more readable. Secondly, you can put code into Slack code blocks by either typing 3 backticks or finding the code block option in the comment menu at the ellipsis.

:--1:1
Tim Birkett avatar
Tim Birkett

You could also use a local-exec on a null resource to sleep for a bit in your terraform code like:

resource "null_resource" "pause_a_bit" {
  provisioner "local-exec" {
    command = "sleep 120"
  }
}
Tom Howarth avatar
Tom Howarth

I might try that cheers

Tom Howarth avatar
Tom Howarth

nope that did not work. the delay needs to be in the provider authentication, not a pause in the code.

Tom Howarth avatar
Tom Howarth

the issue is that the Vault-generated tokens have not been replicated around Azure AD, so when they are presented back to Azure they are not seen as valid

Tom Howarth avatar
Tom Howarth

Error: Error building account: Error getting authenticated object ID: Error listing Service Principals: autorest.DetailedError{Original: “adal: Refresh request failed. Status Code = ‘400’. Response body: {"error":"unauthorized_client","error_description":"AADSTS700016: Application with identifier ‘data.vault_generic_secret.azure.data[“client_id”]’ was not found in the directory ‘7aeb5a8a-a7d2-40c1-8019-859b3549e7f1’. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.\r\nTrace ID: 5073fbfa-8f18-4ba9-a0c7-6f0934bf6c00\r\nCorrelation ID: f640271c-6e6f-49e2-8ede-5d24d6b46bb2\r\nTimestamp: 2020-07-03 10:06:44Z","error_codes":[700016],"timestamp":"2020-07-03 10:06:44Z","trace_id":"5073fbfa-8f18-4ba9-a0c7-6f0934bf6c00","correlation_id":"f640271c-6e6f-49e2-8ede-5d24d6b46bb2","error_uri":"https://login.microsoftonline.com/error?code=700016"}”, resp:(*http.Response)(0xc000449950)}, PackageType:”azure.BearerAuthorizer”, Method:”WithAuthorization”, StatusCode:400, Message:”Failed to refresh the Token for request to https://graph.windows.net/7aeb5a8a-a7d2-40c1-8019-859b3549e7f1/servicePrincipals?%24filter=appId+eq+%27data.vault_generic_secret.azure.data%5B%E2%80%9Cclient_id%E2%80%9D%5D%27&api-version=1.6”, ServiceError:[]uint8(nil), Response:(*http.Response)(0xc000449950)}

on test.tf line 1, in provider “azurerm”: 1: provider “azurerm” {

this is the generated output; the bolded section shows an AD error code that says the account details are not valid.

maarten avatar
maarten

@Tom Howarth you have this in quotes

  client_id = "data.vault_generic_secret.azure.data["client_id"]"
  client_secret = "data.vault_generic_secret.azure.data["client_secret"]"
maarten avatar
maarten

you need to remove those double quotes like so:

 client_id = data.vault_generic_secret.azure.data["client_id"]
 client_secret = data.vault_generic_secret.azure.data["client_secret"]
Tom Howarth avatar
Tom Howarth

Error: Invalid character

on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[“client_id”]

This character is not used within the language.

Error: Invalid expression

on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[“client_id”]

Expected the start of an expression, but found an invalid expression token.

Error: Invalid character

on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[“client_id”]

This character is not used within the language.

Error: Invalid character

on test.tf line 9, in provider “azurerm”: 9: client_secret = data.vault_generic_secret.azure.data[“client_secret”]

This character is not used within the language.

Error: Invalid character

on test.tf line 9, in provider “azurerm”: 9: client_secret = data.vault_generic_secret.azure.data[“client_secret”]

This character is not used within the language.

maarten avatar
maarten

If you look closely at the double quotes you see something is off. I’m sure you’re on a Mac, and sometimes if you copy paste double quotes from online, your Mac makes it a weird curly double quote which has a direction, left or right, like “ or ”, when it needs to be a straight quote: "

Tom Howarth avatar
Tom Howarth

I have looked at those and replaced them all. Now it looks like my script is not being read, so it will not load the external data stanza. Is there a special way for this to be called?

maarten avatar
maarten

TBH I think you should take a little time to go through a few of the online terraform courses, that will help you a lot

Tom Howarth avatar
Tom Howarth

that is after removing the quotes

Tom Howarth avatar
Tom Howarth

removing the quotes for the value in the brackets results in this:

Error: Reference to undeclared resource

on test.tf line 3, in provider “azurerm”: 3: subscription_id =data.external.subscription_id.result[subscription_id]

A data resource “external” “subscription_id” has not been declared in the root module.

Error: Invalid reference

on test.tf line 3, in provider “azurerm”: 3: subscription_id =data.external.subscription_id.result[subscription_id]

A reference to a resource type must be followed by at least one attribute access, specifying the resource name.

Error: Invalid reference

on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[client_id]

A reference to a resource type must be followed by at least one attribute access, specifying the resource name.

Error: Invalid reference

on test.tf line 9, in provider “azurerm”: 9: client_secret = data.vault_generic_secret.azure.data[client_secret]

A reference to a resource type must be followed by at least one attribute access, specifying the resource name.

maarten avatar
maarten

did you remove the quotes around client_secret and client_id ?

Tom Howarth avatar
Tom Howarth

Yes see later response

maarten avatar
maarten

can you post me the current code ?

Tom Howarth avatar
Tom Howarth

Give me 10 minutes, just not at my machine at the moment.

maarten avatar
maarten

Removing those quotes is wrong as they aren’t a variable, the previous code was right, I’ll respond there

:--1:1
S L avatar

hi all, I used the cloudposse/terraform-aws-documentdb-cluster repo to create a documentdb instance in aws. How can I configure the docdb instance to send logs to Cloudwatch? Enabling enabled_cloudwatch_logs_exports only enables the cluster logging. However, it does not enable the parameter group’s audit_logs variable. Any ideas?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Possibly not supported yet - PRs welcome! we’ve ramped up our review capacity.

:--1:1
Igor Bronovskyi avatar
Igor Bronovskyi

How do I make a load balancer that, after a successful health check, doesn’t switch back to the old version of the container?

randomy avatar
randomy

Are you talking about ECS?

Igor Bronovskyi avatar
Igor Bronovskyi

Yes. ECS Fargate

randomy avatar
randomy

I guess you’re using the default ECS deployment mode which is a rolling update. That results in old and new containers running at the same time during the deployment.

Igor Bronovskyi avatar
Igor Bronovskyi

Yes

randomy avatar
randomy

There is a CodeDeploy blue green deployment option that should help

randomy avatar
randomy
Amazon ECS Deployment types - Amazon Elastic Container Service

An Amazon ECS deployment type determines the deployment strategy that your service uses. There are three deployment types: rolling update, blue/green, and external.
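
For reference, opting an ECS service into the blue/green path is done on the service’s deployment_controller block; a sketch with other arguments elided:

```hcl
# Hypothetical sketch: switch the service from the default rolling update
# ("ECS") to CodeDeploy-managed blue/green deployments.
resource "aws_ecs_service" "service" {
  name            = "example"
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.td.arn
  launch_type     = "FARGATE"
  desired_count   = 2

  deployment_controller {
    type = "CODE_DEPLOY"
  }

  # ...load_balancer / network_configuration elided...
}
```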

vFondevilla avatar
vFondevilla

The “but” to the Blue Green using CodeDeploy is that service autoscaling is not supported.

vFondevilla avatar
vFondevilla
Amazon ECS service auto scaling is not supported when using the blue/green deployment type. As a workaround, you can suspend scaling processes on the Amazon EC2 Auto Scaling groups created for your service before the service deployment, then resume the processes once the deployment has completed. For more information, see Suspending and resuming scaling processes in the Amazon EC2 Auto Scaling User Guide.
Igor Bronovskyi avatar
Igor Bronovskyi

Ok. Thanks

Igor Bronovskyi avatar
Igor Bronovskyi

For about a minute I get both the new and the old version.

Igor Bronovskyi avatar
Igor Bronovskyi

how to fix it?

2020-07-02

Sean Turner avatar
Sean Turner

Is there a way to deploy lambdas to Lambda@Edge via tf? Or is the only way to do it via api calls?

RB avatar

im using tf to do this now

RB avatar

look at resource aws_cloudfront_distribution and look at its lambda_function_association

    lambda_function_association {
      event_type   = "origin-request"
      include_body = false
      lambda_arn   = "${aws_lambda_function.request.arn}:${aws_lambda_function.request.version}"
    }
Sean Turner avatar
Sean Turner

Cool

Sean Turner avatar
Sean Turner

Just need that and the cloudfront trigger?

RB avatar

you need the cloudfront distribution resource, you need the lambda resource, and then you can use that block with the cf distribution resource to tie it in

RB avatar

i slightly cheated tho. i created my cf distribution and lambda via the aws console, then i backported it into terraform. i went through a couple iterations of updating the lambda code in my repo and redeploying with terraform and it worked as expected

Sean Turner avatar
Sean Turner

Yeah fair. I did the same with my cf distribution

Sean Turner avatar
Sean Turner

I just discovered this though which is pretty cool

data "archive_file" "this" {
  for_each = local.lambdas

  type        = "zip"
  source_file = "${path.root}/lambdas/${each.key}_viewer_request.py"
  output_path = "${path.root}/lambdas/${each.key}_viewer_request.zip"
}
RB avatar

yep that’s what im using to zip up lambda

RB avatar
data "archive_file" "request" {
  type        = "zip"
  source_file = "${path.module}/lambda_request.js"
  output_path = "${path.module}/lambda_request.zip"
}

resource "aws_lambda_function" "request" {
  provider = aws.us-east-1

  role          = data.aws_iam_role.lambda_exec.arn
  function_name = "CloudFrontRewriteToIndex"
  handler       = "lambda_request.handler"
  runtime       = "nodejs12.x"

  publish          = true
  source_code_hash = data.archive_file.request.output_base64sha256
  filename         = "${path.module}/lambda_request.zip"

  tags = local.tags
}
RB avatar

nice and easy

Sean Turner avatar
Sean Turner
09:58:40 PM

Is it possible to add the cloudfront trigger? None of the lambda resources seem right. Perhaps this isn’t needed if you’re telling the cloudfront dist about the lambda in the tf code?

2020-07-01

Josh Duffney avatar
Josh Duffney
06:05:00 PM

What’s a recommended way to manage the provider versions across modules?

Requirements:

• Ability to test modules independently

Please feel free to direct me to some reading if necessary.

loren avatar
loren

these days, i believe the required_providers block, an attribute of the terraform block…

https://www.terraform.io/docs/configuration/terraform.html#specifying-required-provider-versions

Terraform Settings - Configuration Language - Terraform by HashiCorp

The terraform configuration section is used to configure some behaviors of Terraform itself.

:--1:1
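
A minimal sketch of that block as it looks in 0.12 (the version numbers are illustrative):

```hcl
# Constrain the provider versions a module is tested against,
# without configuring the provider itself inside the module.
terraform {
  required_version = ">= 0.12"

  required_providers {
    aws = ">= 2.50, < 4.0" # 0.12 accepts a bare constraint string
  }
}
```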
Josh Duffney avatar
Josh Duffney

I’ll look into it, thank you!

sheldonh avatar
sheldonh

I thought that pinning provider versions in modules was advised against?

Pierre-Yves avatar
Pierre-Yves

yes, there may be breaking changes.

this allows you to roll out provider version updates per module

Release notes from terraform avatar
Release notes from terraform
06:54:32 PM

v0.13.0-beta3 0.13.0-beta3 (July 01, 2020) BUG FIXES: backend/azurerm: support for snapshotting the blob used for remote state storage prior to change (#24069) backend/remote: Prevent panic when there’s a connection error (#25341)…

Azure backend: support snapshots/versioning by evenh · Pull Request #24069 · hashicorp/terraform

Rebased PR with some small fixes. Originally by @pmarques (2018) and @rahdjoudj (2019). Fixes #18284 Closes #18512 Closes #21888

prevent panic in remote backend retry by jbardin · Pull Request #25341 · hashicorp/terraform

Ensure that the *http.Response is not nil before checking the status. This can happen when retrying transport errors multiple times.
