#terraform (2021-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-09-19

2021-09-18

Ozzy Aluyi avatar
Ozzy Aluyi

Hello Guys, I’m trying to create parameters in AWS SSM- any ideas/solution will be much appreciated.

Ozzy Aluyi avatar
Ozzy Aluyi
data "aws_ssm_parameter" "rds_master_password" {
  name = "/grafana/GF_RDS_MASTER_PASSWORD"
  with_decryption = true
}
resource "aws_ssm_parameter" "rds_master_password" {
  name        = "/grafana/GF_RDS_MASTER_PASSWORD"
  description = "The parameter description"
  type        = "SecureString"
  value       = data.aws_ssm_parameter.rds_master_password.value
}
resource "aws_ssm_parameter" "GF_SERVER_ROOT_URL" {
  name  = "/grafana/GF_SERVER_ROOT_URL"
  type  = "String"
  value = "https://${var.dns_name}"
}

resource "aws_ssm_parameter" "GF_LOG_LEVEL" {
  name  = "/grafana/GF_LOG_LEVEL"
  type  = "String"
  value = "INFO"
}

resource "aws_ssm_parameter" "GF_INSTALL_PLUGINS" {
  name  = "/grafana/GF_INSTALL_PLUGINS"
  type  = "String"
  value = "grafana-worldmap-panel,grafana-clock-panel,jdbranham-diagram-panel,natel-plotly-panel"
}

resource "aws_ssm_parameter" "GF_DATABASE_USER" {
  name  = "/grafana/GF_DATABASE_USER"
  type  = "String"
  value = "root"
}

resource "aws_ssm_parameter" "GF_DATABASE_TYPE" {
  name  = "/grafana/GF_DATABASE_TYPE"
  type  = "String"
  value = "mysql"
}

resource "aws_ssm_parameter" "GF_DATABASE_HOST" {
  name  = "/grafana/GF_DATABASE_HOST"
  type  = "String"
  value = "${aws_rds_cluster.grafana.endpoint}:3306"
}
Ozzy Aluyi avatar
Ozzy Aluyi
 Error: Error describing SSM parameter (/grafana/GF_RDS_MASTER_PASSWORD): ParameterNotFound: 
│ 
│   with module.Grafana_terraform.data.aws_ssm_parameter.rds_master_password,
│   on Grafana_terraform/ssm.tf line 1, in data "aws_ssm_parameter" "rds_master_password":
│    1: data "aws_ssm_parameter" "rds_master_password" {
│ 
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Looks like you don’t have the parameter created and so your data source is failing to pull it

Ozzy Aluyi avatar
Ozzy Aluyi

@RB (Ronak) (Cloud Posse) thanks. Sorted now.

managedkaos avatar
managedkaos

@ you have a conflict with the data and resource for the parameter named rds_master_password

On line 1, you are trying to read it as data, and on line 5 you are trying to create it as a resource.

If it’s already created and you just want to read it, remove the resource "aws_ssm_parameter" "rds_master_password" {… section.

If you are trying to create it, remove the data "aws_ssm_parameter" "rds_master_password" {... section.

Of course, if you are reading it, you will need to find a way to get the value into place. In summary, you can’t have a data resource that calls on itself.

If you are trying to create and store a password, consider using the random_password resource and storing the result of that in the parameter. https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password
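
A minimal sketch of that suggestion (resource names here are illustrative, not from the thread):

```hcl
# Generate the password in Terraform instead of reading a parameter
# that does not exist yet.
resource "random_password" "rds_master" {
  length  = 32
  special = false
}

# Store the generated value; Terraform now owns both the secret and
# the parameter, so no data source is needed.
resource "aws_ssm_parameter" "rds_master_password" {
  name  = "/grafana/GF_RDS_MASTER_PASSWORD"
  type  = "SecureString"
  value = random_password.rds_master.result
}
```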

1
Michael Dizon avatar
Michael Dizon

hey guys, i am a little confused about what dns_gbl_delegated refers to in eks-iam https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks-iam/tfstate.tf#L51

terraform-aws-components/tfstate.tf at master · cloudposse/terraform-aws-components attachment image

Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/tfstate.tf at master · cloudposse/terraform-aws-components

Michael Dizon avatar
Michael Dizon

is delegated-dns supposed to be added to the global env as well as regional?

Michael Dizon avatar
Michael Dizon

i modified the remote state for dns_gbl_delegated to point to primary-dns – not sure if that’s going to cause any issues later on

Ozzy Aluyi avatar
Ozzy Aluyi

@ thanks for the solution. The random_password approach makes more sense.

1
1

2021-09-17

jose.amengual avatar
jose.amengual
Enforcing best practice on self-serve infrastructure with Terraform, Atlantis and Policy As Code attachment image

Here at loveholidays we are heavily dependant on Terraform. All of our Google Cloud infrastructure is managed using Terraform, along with a…

1
loren avatar
loren

i really wish it were easier to extend atlantis to additional source code hosts. would be fantastic if it worked with codecommit


jose.amengual avatar
jose.amengual

as in multiple Atlantis instances for one repo?

loren avatar
loren

no, just as in developing the code to support new source code hosts. last time i looked, it was a bit of a spaghetti mess touching all sorts of core internal parts

2021-09-16

Vikram Yerneni avatar
Vikram Yerneni

Fellas, is there a way to add a condition when adding S3 bucket/folder-level permissions here: https://github.com/cloudposse/terraform-aws-iam-s3-user

For example, I want to supply a statement like this:

  {
     "Sid": "AllowStatement3",
     "Action": ["s3:ListBucket"],
     "Effect": "Allow",
     "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"],
     "Condition":{"StringLike":{"s3:prefix":["media/*"]}}
    }
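
For reference, that statement can be expressed in HCL with an aws_iam_policy_document data source (a sketch; wiring it into the module would depend on the inputs the module exposes):

```hcl
data "aws_iam_policy_document" "prefix_scoped" {
  statement {
    sid       = "AllowStatement3"
    effect    = "Allow"
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"]

    condition {
      test     = "StringLike"
      variable = "s3:prefix"
      values   = ["media/*"]
    }
  }
}
```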

2021-09-15

Release notes from terraform avatar
Release notes from terraform
06:53:40 PM

v1.0.7 1.0.7 (September 15, 2021) BUG FIXES: core: Remove check for computed attributes which is no longer valid with optional structural attributes (#29563) core: Prevent object types with optional attributes from being instantiated as concrete values, which can lead to failures in type comparison…

remove incorrect computed check by jbardin · Pull Request #29563 · hashicorp/terraform attachment image

The config is already validated, and does not need to be checked again in AssertPlanValid, so we can just remove the check which conflicts with the new optional nested attribute types. Add some mor…

2021-09-14

SlackBot avatar
SlackBot
10:17:55 AM

This message was deleted.

greg n avatar
greg n

good afternoon guys, I think I’ve found a version issue with terraform-aws-ecs-web-app (version = “~> 0.65.2”). Is this a legit upper version limit, or is versions.tf perhaps a bit out of date? Thanks

tf -version
Terraform v1.0.2
on linux_amd64

Your version of Terraform is out of date! The latest version
is 1.0.6. You can update by downloading from <https://www.terraform.io/downloads.html>
- services_api_assembly.this in .terraform/modules/services_api_assembly.this
╷
│ Error: Unsupported Terraform Core version
│
│   on .terraform/modules/services_api_alb.alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
│    2:   required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.s3_bucket.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To
│ proceed, either choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵

╷
│ Error: Unsupported Terraform Core version
│
│   on .terraform/modules/services_api_alb.alb.access_logs.this/versions.tf line 2, in terraform:
│    2:   required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either choose
│ another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵

╷
│ Error: Unsupported Terraform Core version
│
│   on .terraform/modules/services_api_alb.alb.default_target_group_label/versions.tf line 2, in terraform:
│    2:   required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.default_target_group_label (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either
│ choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, could be - please open a PR to remove the upper-bound pinning

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

post here and we’ll get it promptly reviewed

Richard Quadling avatar
Richard Quadling

The versions.tf for v0.65.2 (https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/versions.tf) says

terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.34"
    }
  }
}

Which all looks good. What is the source of the services_api_alb module?

greg n avatar
greg n

it’s

  source                    = "cloudposse/alb/aws"
  version                   = "0.23.0"
  context                   = module.this.context


Richard Quadling avatar
Richard Quadling

https://registry.terraform.io/modules/cloudposse/alb/aws/latest is 0.35.3, so you are quite a way behind.

Nikola Milic avatar
Nikola Milic

For some reason, ec2 instance does not have public dns assigned, even though it’s part of the public subnet? What could be the case?

managedkaos avatar
managedkaos

During the creation of the resource, did you specify attaching a public IP? Even if the subnet is public, if the subnet’s default setting is to NOT assign a public IP, instances won’t get one. (AFAIK)
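
For reference, the per-instance override looks roughly like this (a sketch; resource names and the AMI are illustrative):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder AMI
  instance_type = "t3.micro"
  subnet_id     = aws_subnet.public.id

  # Request a public IP even if the subnet's default
  # (map_public_ip_on_launch) is off.
  associate_public_ip_address = true
}
```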

Nikola Milic avatar
Nikola Milic

Yeah, I was under the impression that it was on by default. Thanks, I think that solved it


2021-09-13

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

anyone hooked up the identity provider for EKS yet? any gotchas I should be aware of?

Rhys Davies avatar
Rhys Davies

Hey guys, I’m writing the Terraform for a new AWS ECS Service. I want to deploy 6 (but effectively n) similar container definitions in my task definition. What’s the recommended way of looping over a data structure (a dict, or list of lists) and creating container_definitions?

  1. Is it supposed to be done with a JSON file and a data "template_file" block with some sort of comprehension?
  2. I’ve found https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecs_container_definition but it doesn’t have any parameters for command, which is the part that needs to differ slightly between the container definitions
  3. https://github.com/cloudposse/terraform-aws-ecs-container-definition I’ve also found this, not sure if anyone here has had any experience with it? I was going to experiment with for_each-ing it to create 6 container_defs I can then merge() in my resource "task_definition" - is this the right sort of approach?
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

I believe you want option 3

Rhys Davies avatar
Rhys Davies

Just out of interest, can I just do this?

Rhys Davies avatar
Rhys Davies
locals {
  celery_queues = {
    1 : ["queue1"],
    2 : ["queue2", "blah", "default"],
    ...
  }
}

resource "aws_ecs_task_definition" "celery" {
  for_each = local.celery_queues
  family                   = "celery"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "4096"
  memory                   = "8192"
  network_mode             = "awsvpc"
  execution_role_arn       = module.ecs_cluster.task_role_arn
  container_definitions = jsonencode([
    {
      name        = "celery_${each.key}",
      image       = blah,
      command     = concat(["celery"], each.value),
      environment = blah,
      essential   = true,
      logConfiguration = {
        logDriver = "awslogs",
        options = {
          awslogs-group         = log_group_name,
          awslogs-region        = log_group_region,
          awslogs-stream-prefix = log_group_prefix
        }
      },
      healthCheck = {
        command     = ["CMD-SHELL", "pipenv run celery -A my_proj inspect ping"],
        interval    = 10,
        timeout     = 60,
        retries     = 5,
        startPeriod = 60
      }
    }
  ])
}
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Ya that would work too

Rhys Davies avatar
Rhys Davies

awesome, thanks for the help. I’m a devops team of one; it’s so good to have somewhere to work through a solution!

1
1
Rhys Davies avatar
Rhys Davies

Thanks in advance for any help

othman issa avatar
othman issa

Hello everyone, I have a question: what is the best way to connect a TF module with an API?

Alex Jurkiewicz avatar
Alex Jurkiewicz

AWS API Gateway?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Or something else

othman issa avatar
othman issa

I was reading about the HTTP API in the TF docs

2021-09-12

2021-09-11

2021-09-10

emem avatar

hey guys, has anyone ever implemented a description of what terraform is applying in the approval stage in CodePipeline? I can see what my terraform is planning in the terraform plan stage, and I would like to pass these details to my approval stage, but approval does not support an artifact attribute. Anyone found a solution for this before?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re using Spacelift which does that. If you learn how to do it with codepipeline, lmk!

Nikola Milic avatar
Nikola Milic
10:20:31 AM

How do I access the ARN of the created resource in the sibling modules belonging to the same main.tf file? I want to create an IAM user, and an ECR resource that needs that user’s ARN (check line 22). How do I reference variables?

pjaudiomv avatar
pjaudiomv

Check the outputs of the user module; then you would reference it prefixed with module and the name, e.g. module.gitlab_user.user_arn
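
The pattern looks like this (module, output, and variable names are illustrative):

```hcl
# modules/iam-user/outputs.tf — the child module must expose the value
output "user_arn" {
  description = "ARN of the created IAM user"
  value       = aws_iam_user.this.arn
}

# main.tf — a sibling module consumes it as module.<name>.<output>
module "gitlab_user" {
  source = "./modules/iam-user"
}

module "ecr" {
  source        = "./modules/ecr"
  principal_arn = module.gitlab_user.user_arn # reference across siblings
}
```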

1
1
Nikola Milic avatar
Nikola Milic

Thanks @

pjaudiomv avatar
pjaudiomv

Yes this explains modules and accessing their values https://www.terraform.io/docs/language/modules/syntax.html section Accessing Module Output Values

Modules - Configuration Language - Terraform by HashiCorp

Modules allow multiple resources to be grouped together and encapsulated.

pjaudiomv avatar
pjaudiomv

All of the cloudposse modules reference the inputs/outputs on the respective GitHub repo https://github.com/cloudposse/terraform-aws-iam-system-user#outputs

GitHub - cloudposse/terraform-aws-iam-system-user: Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI) attachment image

Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI) - GitHub - cloudposse/terraform-aws-iam-system-user: Terraform Module to Provision a Basic…

Cameron Pope avatar
Cameron Pope

Hello - First of all, thank you for having so many wonderful Terraform modules. I have a question about the aws-ecs-web-app module and task definitions. It seems like neither setting for ignore_changes_task_definition does quite what I need, so I sense I am ‘doing it wrong’, but I am struggling to find the happy path to doing the right thing.

When I update by pushing new code to Github and then run terraform apply, the module wants to switch the task definition back to the previous version. Setting ignore_changes_task_definition to True fixes that, but if I want to update the container size or environment variables, then those changes do not get picked up.

It seems like the underlying problem is my way of doing things (managing the Task Definition via Terraform) is coupling Terraform and the CI/CD process too tightly, and that either Terraform or CodeBuild should ‘own’ the Task Definition, but not both. I don’t see a clean way to create the Task Definition during the Build phase and set it during the deploy phase. The standard ECS deployment takes the currently-running task definition and updates the image uri. It looks like one needs to use CodeDeploy to do anything more advanced.

I don’t think I’m the first person to want Terraform not to change the revision unless I’ve made changes to the task definition on the Terraform side. How do others handle this? Or is my use-case outside of what the aws-ecs-web-app module is designed for?

If you made it here, thank you for reading!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would use the web app module more as a reference for how to tie all the other modules together

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you’ll quickly find yourself wanting to make changes

Cameron Pope avatar
Cameron Pope

Thank you for the response - that was my sense. It is great to have a working end-to-end example, and it made it easy to set up a Github -> ECS pipeline.

Interestingly, after about a year, the only thing that we’re really missing for our use-case is the ability to generate task definitions after a successful container build. The web-app module got us almost 100% of the way there, and for that I’m grateful.

2021-09-09

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know of an IAM policy that will let people view the SSM parameter names and that’s it? I don’t want them to be able to see the values.

mfridh avatar
mfridh

“Secret” values would usually be encrypted using a KMS key, so controlling access to the KMS key could be enough if your intention is to hide only the encrypted values.

Otherwise, the only thing you can give would be ssm:DescribeParameters I think.

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-access.html

Restricting access to Systems Manager parameters using IAM policies - AWS Systems Manager

Restrict access to Systems Manager parameters by using IAM policies.

Aleksandr Fofanov avatar
Aleksandr Fofanov

just give them the ssm:DescribeParameters permission; they will be able to list and view individual parameters’ metadata, but not the values
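
A policy along those lines would look like this (a sketch in HCL):

```hcl
data "aws_iam_policy_document" "ssm_list_only" {
  statement {
    sid     = "DescribeParametersOnly"
    effect  = "Allow"
    actions = ["ssm:DescribeParameters"]
    # DescribeParameters does not support resource-level restrictions.
    resources = ["*"]
  }
  # Deliberately no ssm:GetParameter / ssm:GetParameters,
  # so the values stay hidden.
}
```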

2
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

thanks @mfridh @Aleksandr Fofanov that worked like a dream

1
Pierre-Yves avatar
Pierre-Yves

I had a lot of tags to deploy, and not all resources support tagging. To be effective in the process, after trying many options to trigger a command on *.tf changes, I finally used watch terraform validate (inotifywait doesn’t seem to work on WSL + VS Code).

deepak kumar avatar
deepak kumar

Hi People, I am creating an ecs service using tf 0.11.7. I have set the network_mode default to “bridge” for the ecs task definition, but the module can be reused with a different network_mode such as “awsvpc”. Since tf 0.11.* doesn’t support dynamic blocks, I need to find a way to set arguments such as network_configuration (based on the network_mode). Using locals I guess it can be achieved. Is there any other way to do it in tf 0.11.*?

Grummfy avatar
Grummfy

You can use terraspace / terragrunt / other tools to do that, but I would advise updating the version of Terraform a bit …

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

has anyone managed to get terraform working when using federated SSO with AWS and leveraging an assume-role in the terraform configuration?

Andrea Cavagna avatar
Andrea Cavagna

I think you can manage this situation with Leapp. Leapp also manages the assume role from federated SSO

1
conzymaher avatar
conzymaher
GitHub - 99designs/aws-vault: A vault for securely storing and accessing AWS credentials in development environments attachment image

A vault for securely storing and accessing AWS credentials in development environments - GitHub - 99designs/aws-vault: A vault for securely storing and accessing AWS credentials in development envi…

2
Andrea Cavagna avatar
Andrea Cavagna

I started an open-source project to manage multi-account access in multi-cloud. It is a Desktop App that Manages IAM Users, IAM federated roles, IAM chained roles and automatically retrieving all the AWS SSO roles. Also, It secures credentials by managing the credentials file on your behalf and generates a profile with short-lived credentials only when needed. If you are interested in the idea, look at the guide made by Nuru:

https://docs.cloudposse.com/howto/geodesic/authenticate-with-leapp/


conzymaher avatar
conzymaher

It’s an awesome tool. I am using it for interacting with dozens of AWS accounts, whether it’s IAM users + MFA or AWS SSO

Tomek avatar
Tomek

ooof, I just corrupted my local state file and lost the state of a bunch of resources in my terraform (backup was corrupted too). I don’t actually care about the resources; is there a way I can force terraform to destroy the resources that map to my terraform code and reapply?

Alex Jurkiewicz avatar
Alex Jurkiewicz

No. Run Terraform apply repeatedly and manually delete the resources it says are in the way. But this doesn’t work in all cases. If you had e.g. S3 buckets or IAM resources with a name prefix specified instead of a name, they will be missed

Tomek avatar
Tomek

i was afraid of this

Tomek avatar
Tomek

well first thing i’m doing is switching to versioned s3 backend
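
A versioned S3 backend looks roughly like this (bucket and table names are placeholders; versioning is enabled on the bucket itself, outside this block):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # enable versioning on this bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # optional: state locking
    encrypt        = true
  }
}
```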

Alex Jurkiewicz avatar
Alex Jurkiewicz

Good idea

pjaudiomv avatar
pjaudiomv

Backup the bucket too :), learned that one after a coworker deleted said versioned bucket

conzymaher avatar
conzymaher

ooof

2021-09-08

Mohammed Yahya avatar
Mohammed Yahya

Terraform is not currently reviewing Community Pull Requests: HashiCorp has acknowledged that it is currently understaffed and is unable to review public PRs.

Be explicit that community PR review is currently paused · hashicorp/[email protected] attachment image

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. - Be explicit that community PR review is currently paused · hashicorp/[email protected]


conzymaher avatar
conzymaher

Only applies to terraform core

conzymaher avatar
conzymaher

Not providers

Mohammed Yahya avatar
Mohammed Yahya

I see.

conzymaher avatar
conzymaher

Let’s see how it plays out, but I’m not particularly worried

Mohammed Yahya avatar
Mohammed Yahya

For core I guess yes. Maybe they don’t want specific features added by the community (example: the terraform add command), but not sure why

conzymaher avatar
conzymaher
HashiCorp Terraform and Community Contributions attachment image

We recently added a note to the HashiCorp Terraform contribution guidelines and this blog provides additional clarity and context for our community and commercial customers.

Saichovsky avatar
Saichovsky

Hello,

We have an aws_directory_service_directory resource defined in a service, which creates a security group that allows ports 1024-65535 to be accessible from 0.0.0.0/0, and this is getting flagged by Security Hub because the AWS CIS standards do not recommend allowing ingress from 0.0.0.0/0 for TCP port 3389.

My question is on how to restrict some of the rules in the resultant SG that gets created by the aws_directory_service_directory resource. How do you remediate this using terraform?

mfridh avatar
mfridh

Anyone here using tfexec / tfinstall? https://github.com/hashicorp/terraform-exec

2021/09/08 13:15:58 error running Init: fork/exec /tmp/tfinstall354531296/terraform: not a directory

I feel like there are a few lies in this code here

This one for example: https://github.com/hashicorp/terraform-exec/blob/v0.14.0/tfexec/terraform.go#L62-L74

mfridh avatar
mfridh

As usual… nothing to see here. oh, funny :smile: … Yeah it was all a lie.

I had given a file instead of a directory as its workingDir.

And the error message was very confusing because it didn’t report THAT variable as “not a directory”

SlackBot avatar
SlackBot
12:58:39 PM

This message was deleted.

Tomek avatar
Tomek

:wave: I have the following public subnet resource:

resource "aws_subnet" "public_subnet" {
  for_each = {
    "${var.aws_region}a" = "172.16.1.0"
    "${var.aws_region}b" = "172.16.2.0"
    "${var.aws_region}c" = "172.16.3.0"
  }
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = "${each.value}/24"
  availability_zone       = each.key
  map_public_ip_on_launch = true
}

I want to reference the subnets in an ALB resource I’m creating. At the moment this looks like:

  subnet_ids = [
    aws_subnet.public_subnet["us-east-1a"].id,
    aws_subnet.public_subnet["us-east-1b"].id,
    aws_subnet.public_subnet["us-east-1c"].id
  ]

Is there a way to wildcard the above? I tried aws_subnet.public_subnet.*.id which doesn’t work because I think the for each object is a map. What is the proper way to handle this?

loren avatar
loren
subnet_ids = [ for subnet in aws_subnet.public_subnet : subnet.id ]
1
1
Tomek avatar
Tomek

thanks, that worked perfectly!
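
An equivalent splat form also works, since values() turns the for_each map into a list that can be splatted:

```hcl
subnet_ids = values(aws_subnet.public_subnet)[*].id
```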

1
Release notes from terraform avatar
Release notes from terraform
07:43:40 PM

v1.1.0-alpha20210908 1.1.0 (Unreleased) UPGRADE NOTES: Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph…

2021-09-07

O K avatar

Hi All! How long approximately should it take to deploy AWS MSK? I use this module https://registry.terraform.io/modules/cloudposse/msk-apache-kafka-cluster/aws/latest and the deployment has been running for 20 min already and still nothing. Any feedback please?

module.kafka.aws_msk_cluster.default[0]: Still creating... [26m0s elapsed]
module.kafka.aws_msk_cluster.default[0]: Still creating... [26m10s elapsed]
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

It does take a while

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

I’d give it 30 min at least

1
O K avatar

Thank you!

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Note that it’s not the module but the aws msk itself

O K avatar

I see, do we need to specify zone_id or is this an optional parameter?

O K avatar
terraform-aws-msk-apache-kafka-cluster/main.tf at master · cloudposse/terraform-aws-msk-apache-kafka-cluster attachment image

Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.

Mohamed Habib avatar
Mohamed Habib

yup MSK takes ages to be ready

O K avatar


I see, do we need to specify zone_id or is this an optional parameter?
please suggest regarding this question

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

All the module arguments are shown in the readme. On the far right, it shows required yes or no

O K avatar

After 26 min it has been created…

1
Wira avatar
Wira
12:32:46 PM

Hello, I am currently using this terraform module https://registry.terraform.io/modules/cloudposse/elastic-beanstalk-environment/aws/latest to create a worker environment. But I can’t find how to configure a custom endpoint for the worker daemon to post to the SQS queue.

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Is there a terraform resource that can provide a custom endpoint? I don’t see one :(

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

The only one I can see is the environment resource’s endpoint URL as an attribute, but I don’t see a way to modify it like in the picture above

Wira avatar

I am actually not too familiar with terraform. But after I looked here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elastic_beanstalk_environment , I don’t think so

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

There may be an open pull request in the aws provider? If not, they need all the contributions they can get :)

loren avatar
loren

bummed, but glad they’re at least up front about it

Rhys Davies avatar
Rhys Davies

Time to apply to Hashi

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, so curious what the back story is here…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they had some recent departures?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they reached some tipping point?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they had some incident reported and need to pause all contributions (E.g. like what happened to the linux kernel)?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I wonder where we can get more information about this? Any people you can get some commentary on this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they taken some time to pause and regroup on how to scale engineering of open source at this scale?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It’s really interesting to look at this in light of Docker’s issues in the open source world: https://www.infoworld.com/article/3632142/how-docker-broke-in-half.html

How Docker broke in half attachment image

The game changing container company is a shell of its former self. What happened to one of the hottest enterprise technology businesses of the cloud era?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I doubt we can get anyone to comment publicly on it.

Rhys Davies avatar
Rhys Davies

Not hugely forthcoming in the Reddit threads that I’ve been reading, but it seems that they are growing faster than they are hiring, compounded by some losses in the Terraform department, coupled with normal PTO/vacation overhead

Rhys Davies avatar
Rhys Davies

I was reading a Tweet from Mitchell too, but I can’t find it now

loren avatar
loren

@gooeyblob This is only for core which should not be noticeable to any end users since providers are the main source of external contribution and there is no change in policy there. This allows our core team to focus a bit more while we hire to fill the team more.

Rhys Davies avatar
Rhys Davies

he was basically trying to downplay the situation

Rhys Davies avatar
Rhys Davies

thank you - that’s the exact one

1
Rhys Davies avatar
Rhys Davies

Basically it looks like Silicon Valley is hot af right now if you have Terraform skills; they literally cannot hire fast enough because everyone is hiring again after the pandemic and it’s a feeding frenzy

Rhys Davies avatar
Rhys Davies

I wasn’t joking when I said it’s time to apply to Hashicorp, maybe it’s time to work for a big company…

Rhys Davies avatar
Rhys Davies

I also think that a lot of companies haven’t really figured out working full remotely yet, it’s possible that they are having a people issue as well as a resourcing block which is slowing things down

Rhys Davies avatar
Rhys Davies

I notice that their SF office isn’t listed on any job listings and they are all fully remote..

Rhys Davies avatar
Rhys Davies

Looking at cash flow, Hashi is at a $5.2B valuation, 8 years old, with a Series E of $175m, so they have fuel in the tank to hire with, even if being at Series E and not revenue-positive suggests they are having trouble monetizing their products

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I think Hashi was mostly remote even pre-pandemic. I agree that the market is hot and it’s hard to find good people. There’s a lot of cash running around.

Jeb Cole avatar
Jeb Cole

It’s the remote pool that is getting drained hardest now that so many tech companies have been pushed to go remote

Mohamed Habib avatar
Mohamed Habib

could it be a cashflow issue?

Andrew Nazarov avatar
Andrew Nazarov

Sharing an update to the recent speculation around Terraform and community contributions. The gist is: we’re growing a ton, this temporary pause is localized to a single team (of many), and Terraform Providers are completely unchanged and unaffected. https://www.hashicorp.com/blog/terraform-community-contributions

Andrew Nazarov avatar
Andrew Nazarov

Sharing a brief update on Terraform and community contributions, given some recent noise. TL;DR: Terraform is continuing to grow rapidly, we are scaling the team, and we welcome contributions. Also we are hiring! https://www.hashicorp.com/blog/terraform-community-contributions

Kyle Johnson avatar
Kyle Johnson

Is there any existing solution for generating KMS policies that enable the interop with various AWS services?

Some services need actions that others don’t, such as kms:CreateGrant. CloudTrail audits will flag that action being granted to services which don’t need it.

Seems like there ought to be a module for creating these policies which already knows the details of individual action requirements vs recreating policies from AWS docs on every project
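For reference, a minimal sketch of what one such canned policy statement might look like for CloudTrail (the service principal and actions follow the AWS docs; the data source name is hypothetical):

```hcl
# Sketch: key policy statement letting CloudTrail encrypt trail logs.
# CloudTrail needs kms:GenerateDataKey* but not kms:CreateGrant, so
# granting CreateGrant here is exactly what an audit would flag.
data "aws_iam_policy_document" "cloudtrail_kms" {
  statement {
    sid    = "AllowCloudTrailToEncryptLogs"
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    actions   = ["kms:GenerateDataKey*"]
    resources = ["*"]
  }
}
```

A module could expose one boolean per service and assemble only the statements that service actually requires.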

loren avatar
loren

dealing with exactly this right now, for cloudtrail, config, and guardduty. such a pain to figure out the kms policy and bucket policy!!

Alex Jurkiewicz avatar
Alex Jurkiewicz

I started work on creating canned policies for every service in a PR for the cloudposse key module, but I am no longer actively working on it

Alex Jurkiewicz avatar
Alex Jurkiewicz

If you wanted to improve everyone’s life a little bit, it might be a good launchpad

1

2021-09-06

David avatar
David

Hi folks - I appear to be having an issue with the following module: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task

╷
│ Error: Invalid value for module argument
│ 
│   on main.tf line 40, in module "ecs_alb_service_task":
│   40:   volumes = var.volumes
│ 
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attributes "efs_volume_configuration" and "host_path" are required.
╵

The above is the error message I get when performing a Terraform plan

The section of code which it is complaining about looks like this:

  dynamic "volume" {
    for_each = var.volumes
    content {
      host_path = lookup(volume.value, "host_path", null)
      name      = volume.value.name

      dynamic "docker_volume_configuration" {
        for_each = lookup(volume.value, "docker_volume_configuration", [])
        content {
          autoprovision = lookup(docker_volume_configuration.value, "autoprovision", null)
          driver        = lookup(docker_volume_configuration.value, "driver", null)
          driver_opts   = lookup(docker_volume_configuration.value, "driver_opts", null)
          labels        = lookup(docker_volume_configuration.value, "labels", null)
          scope         = lookup(docker_volume_configuration.value, "scope", null)
        }
      }

      dynamic "efs_volume_configuration" {
        for_each = lookup(volume.value, "efs_volume_configuration", [])
        content {
          file_system_id          = lookup(efs_volume_configuration.value, "file_system_id", null)
          root_directory          = lookup(efs_volume_configuration.value, "root_directory", null)
          transit_encryption      = lookup(efs_volume_configuration.value, "transit_encryption", null)
          transit_encryption_port = lookup(efs_volume_configuration.value, "transit_encryption_port", null)
          dynamic "authorization_config" {
            for_each = lookup(efs_volume_configuration.value, "authorization_config", [])
            content {
              access_point_id = lookup(authorization_config.value, "access_point_id", null)
              iam             = lookup(authorization_config.value, "iam", null)
            }
          }
        }
      }
    }
  }

With vars for var.volumes declared like this:

variable "volumes" {
  type = list(object({
    host_path = string
    name      = string
    docker_volume_configuration = list(object({
      autoprovision = bool
      driver        = string
      driver_opts   = map(string)
      labels        = map(string)
      scope         = string
    }))
    efs_volume_configuration = list(object({
      file_system_id          = string
      root_directory          = string
      transit_encryption      = string
      transit_encryption_port = string
      authorization_config = list(object({
        access_point_id = string
        iam             = string
      }))
    }))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}

I am passing in the following:

volumes = [
  {
    name = "etc"
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    }
  },
  {
    name      = "log"
    host_path = "/var/log/hello"
  },
  {
    name = "opt"
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    }
  },
]

If I update the module variables file in my .terraform folder to:

variable "volumes" {
  type = list(object({
    #host_path = string
    #name      = string
    #docker_volume_configuration = list(object({
    #  autoprovision = bool
    #  driver        = string
    #  driver_opts   = map(string)
    #  labels        = map(string)
    #  scope         = string
    #}))
    #efs_volume_configuration = list(object({
    #  file_system_id          = string
    #  root_directory          = string
    #  transit_encryption      = string
    #  transit_encryption_port = string
    #  authorization_config = list(object({
    #    access_point_id = string
    #    iam             = string
    #  }))
    #}))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}

This applies no problem. Any ideas, or should I submit a bug?

GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service which exposes a web service via ALB. attachment image

Terraform module which implements an ECS service which exposes a web service via ALB. - GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service whic…

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

@ every key in the object has to be set or terraform will error out. this is a limitation in terraform itself.

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)
Type Constraints - Configuration Language - Terraform by HashiCorp

Terraform module authors and provider developers can use detailed type constraints to validate the inputs of their modules and resources.
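(As an aside: in later Terraform versions, 1.3 and up, module authors can relax this limitation with optional object attributes. A sketch, not applicable to the module version discussed here:)

```hcl
# With optional(), callers may omit attributes entirely; omitted list
# attributes fall back to the supplied default ([]).
variable "volumes" {
  type = list(object({
    name      = string
    host_path = optional(string)
    docker_volume_configuration = optional(list(object({
      autoprovision = optional(bool)
      scope         = optional(string)
    })), [])
  }))
  default = []
}
```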

David avatar
David

i think i tried this, let me try again

David avatar
David

yeah i tried setting the values to null

David avatar
David
volumes = [
  {
    name = "etc"
    host_path = null
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },
    efs_volume_configuration = {
      file_system_id = null
      root_directory = null
      transit_encryption = null
      transit_encryption_port = null
      authorization_config = { 
        access_point_id = null
        iam = null
      }
    }
  },
  {
    name      = "log"
    host_path = "/var/log/hello"
    docker_volume_configuration = {
      scope         = null
      autoprovision = null
    },
    efs_volume_configuration = {
      file_system_id = null
      root_directory = null
      transit_encryption = null
      transit_encryption_port = null
      authorization_config = { 
        access_point_id = null
        iam = null
      }
    }
  },
  {
    name = "opt"
    host_path = null
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },
    efs_volume_configuration = {
      file_system_id = null
      root_directory = null
      transit_encryption = null
      transit_encryption_port = null
      authorization_config = { 
        access_point_id = null
        iam = null
      }
    }
  },
]
David avatar
David

but just moans about this:

│ Error: Invalid value for module argument
│ 
│   on main.tf line 40, in module "ecs_alb_service_task":
│   40:   volumes = var.volumes
│ 
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attribute "docker_volume_configuration": list of object required.
╵
loren avatar
loren

typically, a list of objects can be zeroed using []. a singular object can be passed as null

2
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

you’re giving docker_volume_configuration a map instead of a list

this

    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },

should be

    docker_volume_configuration = [{
      scope         = "shared"
      autoprovision = true
    }],

see

attribute "docker_volume_configuration": list of object required.
1
David avatar
David

didn’t spot the [] and {}

David avatar
David
volumes = [
  {
    name = "etc"
    host_path = null
    efs_volume_configuration = []
    docker_volume_configuration = [{
      autoprovision = true
      driver = null
      driver_opts = null
      labels = null
      scope         = "shared"
    }]
  },
  {
    name      = "log"
    host_path = "/var/log/gitlab"
    efs_volume_configuration = []
    docker_volume_configuration = []
  },
  {
    name = "opt"
    host_path = null
    docker_volume_configuration = [{
      autoprovision = true
      scope         = "shared"
      driver = null
      driver_opts = null
      labels = null
    }]
    efs_volume_configuration = []
  },
]
David avatar
David

this works

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Nice, glad you got it working!

David avatar
David

me too, i really appreciate the help

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Np!

Tony C avatar
Tony C

I’m having a similar issue as this one, but I’m trying to use efs_volume_configuration instead of docker_volume_configuration. I am correctly passing the docker config as an empty list to avoid the problem of a required option, but then when I go to apply, I get the following error:

Error: ClientException: When the volume parameter is specified, only one volume configuration type should be used.

So, Terraform requires me to pass both configurations, but even when one is empty, it’s complaining that both are provided. Is there any way around this problem? @RB (Ronak) (Cloud Posse) any ideas?

Tony C avatar
Tony C

the volumes block:

  volumes = [{
    name = "html"
    host_path = "/usr/share/nginx/html"
    docker_volume_configuration = []
    efs_volume_configuration = [{
      file_system_id = dependency.efs.outputs.id
      root_directory          = "/home/user/www"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config = []
    }]
  }]
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Try setting docker_volume_configuration to null instead

Tony C avatar
Tony C

@RB (Ronak) (Cloud Posse) no bueno:

Error: Invalid dynamic for_each value

  on .terraform/modules/ecs-service/main.tf line 70, in resource "aws_ecs_task_definition" "default":
  70:         for_each = lookup(volume.value, "docker_volume_configuration", [])
    |----------------
    | volume.value is object with 4 attributes

Cannot use a null value in for_each.
RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

could you create a ticket with a minimum viable reproducible example in the https://github.com/cloudposse/terraform-aws-ecs-container-definition repo ? doing this would be easier to debug locally.

if this is truly the case, then the issue may be with the terraform resource itself because it should respect passing in null as if the param is not passed in. if it’s not honoring that, then the terraform golang resource in the aws provider is to blame rather than the module itself

GitHub - cloudposse/terraform-aws-ecs-container-definition: Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource attachment image

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - GitHub - cloudposse/terraform-aws-ecs-container-…
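One defensive pattern a module could adopt (a sketch of a possible fix, not what the module currently does) is to tolerate both null and [] in the dynamic block:

```hcl
# coalesce() returns its first non-null argument, so a null collapses
# to an empty list and the dynamic block renders nothing, whether the
# caller passes null or [].
dynamic "docker_volume_configuration" {
  for_each = coalesce(lookup(volume.value, "docker_volume_configuration", null), [])
  content {
    autoprovision = docker_volume_configuration.value.autoprovision
    scope         = docker_volume_configuration.value.scope
  }
}
```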

Tony C avatar
Tony C

will do

Tony C avatar
Tony C

@RB (Ronak) (Cloud Posse) the volumes variable is in ecs-service not aws-ecs-container-definition. are you sure you want me to submit the issue in the latter?

Tony C avatar
Tony C

or maybe i’m not understanding the distinction between volumes_from in the container definition module and volumes in the service module

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

the ecs service module feeds it into the container definition module

Tony C avatar
Tony C

ok so i can just use my volumes arg verbatim as the value for volumes_from in my reproducer?

Tony C avatar
Tony C

appears not. can i give you a reproducer that uses ecs-service?

Tony C avatar
Tony C

I’m using terraform-aws-ecs-alb-service-task

Tony C avatar
Tony C
Error when trying to use EFS volumes in task/container definition · Issue #147 · cloudposse/terraform-aws-ecs-container-definition attachment image

Describe the Bug I&#39;m trying to use an EFS volume in an ECS service definition. The volumes variable is defined such that one has to supply a value for both the efs_volume_configuration and dock…

2021-09-05

Rhys Davies avatar
Rhys Davies

Hey guys, quick q: When using Terraform to manage your AWS account, how do you or your team deploy containers to ECS? Are you using Terraform to do it or some other process to create/update container definitions?

Zach avatar

The answer is largely “it depends” based on a few factors. Is the service in question considered “part of the infrastructure”, such as a log aggregation system? In that case you might manage it entirely with terraform and specify upgrades to image tags and specs via module versioning and variables. If it’s part of your actual application layer you can do the same thing, but this could get in the way of your app teams managing their own deploys, and then you’re using terraform to deploy software. Or you can have terraform deploy an initial dummy container definition that uses a sort of ‘hello world’ service while ignoring any further changes to the Task Definition, and allow your CI/CD system to push new definitions directly to ECS.

2
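A minimal sketch of the “initial dummy definition, then hand off to CI/CD” pattern described above (resource names, the cluster reference, and sizing are hypothetical):

```hcl
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"

  # Placeholder image; the CI/CD pipeline registers real revisions later
  container_definitions = jsonencode([{
    name      = "app"
    image     = "public.ecr.aws/nginx/nginx:latest"
    essential = true
  }])
}

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id # assumed to exist elsewhere
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  lifecycle {
    # Let CI/CD push new task definition revisions without Terraform
    # reverting them on the next apply
    ignore_changes = [task_definition]
  }
}
```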
Rhys Davies avatar
Rhys Davies

Yeah it’s application layer, using Terraform to apply updates by tagging images and passing the image tags to terraform as a var. I had no idea about https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes if that’s what you are referring to? This seems like a really great solution because with this small change to our ECS services I could hand over the container deploy to something like https://circleci.com/docs/2.0/ecs-ecr/ which seems like an attractive solution.

The lifecycle Meta-Argument - Configuration Language - Terraform by HashiCorp

The meta-arguments in a lifecycle block allow you to customize resource behavior.

Deploying to AWS ECR/ECS - CircleCI

How to use CircleCI to deploy to AWS ECS from ECR

Rhys Davies avatar
Rhys Davies

Awesome! Thanks so much for your help

NeuroWinter avatar
NeuroWinter

Good morning all!

I have a few quick questions - I think I am doing something wrong because I have not seen anyone else talk about this but here goes! - I have been trying to use cloudposse/cloudfront-s3-cdn/aws in github actions to set up the infrastructure for my static site, and I have faced a few issues. The first was when I was trying to create the cert for the site within main.tf, as per the examples in the README.md but I was getting an error about the zone_id being “”. I solved that by supplying the cert arn manually.

Now I face the problem of after running terraform and applying the config via github actions, on the next run I get “Error creating S3 bucket: BucketAlreadyOwnedByYou” and it looks like it is trying to create everything again, even though it has been deployed and I can see all the pieces in the aws console. Here is a gist of my main.tf: https://gist.github.com/NeuroWinter/2e1877909ce06bd4ae2719b7d004f721

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sounds like you don’t have a backend set up to store your statefile

Alex Jurkiewicz avatar
Alex Jurkiewicz

Terraform creates a JSON file after running apply that contains details of all infrastructure that was created. It uses this file on subsequent runs to know which infra it has already created.

Most commonly this is stored in S3 using the S3 backend. Read the docs for more info on how to configure this.

To repair your deployment it will take some tedious surgery, btw. The simplest approach would be to manually delete any resource that Terraform claims is in the way, so it can recreate them. (Once your state is set up)
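A minimal backend configuration sketch (bucket and table names are hypothetical, and the bucket must exist before `terraform init`):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"            # pre-existing bucket
    key            = "static-site/terraform.tfstate" # path within the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"               # optional: state locking
    encrypt        = true
  }
}
```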

NeuroWinter avatar
NeuroWinter

Ahh that makes a lot of sense thank you @ ! I will read up on the docs on how to do that

Jeb Cole avatar
Jeb Cole

Understanding what the statefile is and what terraform does with it (not too complicated) is important

1

2021-09-03

AugustasV avatar
AugustasV

I would like to use the aws_lb data source’s arn_suffix, but receive this error: aws_lb | Data Sources | hashicorp/aws | Terraform Registry. I could see that option in the resource attributes: aws_lb | Resources | hashicorp/aws | Terraform Registry

Error: Value for unconfigurable attribute

  on ../../modules/deployment/data_aws_lb.tf line 3, in data "aws_lb" "lb":
   3:   arn_suffix = var.arn_suffix

Can't configure a value for "arn_suffix": its value will be decided
automatically based on the result of applying this configuration.
Markus Muehlberger avatar
Markus Muehlberger

Only values in Argument Reference can be supplied. Values in Attributes Reference are available to read only from the resource and can’t be set.
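In other words, look the load balancer up by a settable argument such as its name, then read arn_suffix from the result (a sketch; the variable name is hypothetical):

```hcl
data "aws_lb" "lb" {
  # Arguments (settable): name, arn, tags
  name = var.lb_name
}

output "lb_arn_suffix" {
  # Attributes (read-only): arn_suffix, dns_name, zone_id, ...
  value = data.aws_lb.lb.arn_suffix
}
```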

Release notes from terraform avatar
Release notes from terraform
03:03:43 PM

v1.0.6 (September 03, 2021) ENHANCEMENTS: backend/s3: Improve SSO handling and add new endpoints in the AWS SDK (#29017) BUG FIXES: cli: Suppress confirmation prompt when initializing with the -force-copy flag and migrating state between multiple workspaces. (#29438)…

Bumping AWS GO SDK to 1.38.42 to fix AWS SSO auth woes by luxifr · Pull Request #29017 · hashicorp/terraform attachment image

AWS SSO is used in many organizations to authenticate users for access to their AWS accounts. It&#39;s the same scale organizations that would very likely also use Terraform to manage their infrast…

command: Suppress prompt for init -force-copy by alisdair · Pull Request #29438 · hashicorp/terraform attachment image

The -force-copy flag to init should automatically migrate state. Previously this was not applied to one case: when migrating from a backend with multiple workspaces to another backend supporting mu…

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know a good module for AWS budgets before I create my own?

Mohamed Habib avatar
Mohamed Habib

Hi guys, recently I’ve been thinking of ways to make my terraform code DRY within a project and avoid having to wire outputs from some modules into other modules. I came up with a pattern similar to “dependency injection” using terraform data blocks. Keen to hear your thoughts on this. Also curious: how do folks organise their large terraform codebases? https://github.com/diggerhq/infragenie/

GitHub - diggerhq/infragenie: decompose your terraform with dependency injection attachment image

decompose your terraform with dependency injection - GitHub - diggerhq/infragenie: decompose your terraform with dependency injection

1
loren avatar
loren

Nifty

GitHub - diggerhq/infragenie: decompose your terraform with dependency injection attachment image

decompose your terraform with dependency injection - GitHub - diggerhq/infragenie: decompose your terraform with dependency injection

1

2021-09-02

curious deviant avatar
curious deviant

Hello !

I am maintaining state in S3 and using DynamoDB for state locking. I had to make a manual change to the state file and successfully uploaded the updated file. But running any tf command errors out now because the md5 digest of the newly uploaded file doesn’t match the entry in the DynamoDB table. Looks like the solution is to manually update the digest in the table item corresponding to the backend entry. Just wanted to be sure there isn’t another way to have terraform regenerate/repopulate DynamoDB with the updated md5

loren avatar
loren

easy button is to just delete the item from the dynamodb and let terraform auto-generate it

2
curious deviant avatar
curious deviant

ty!

1
Tom Vaughan avatar
Tom Vaughan

I am using the tfstate-backend module and noticed some odd behavior. This only happens when using a single s3 bucket to hold multiple state files. For example, the bucket is named tf-state, the state file for VPC would be in tf-state/vpc, and the RDS state file would be in tf-state/rds. The issue is the s3 bucket tag Name gets updated to whatever is set in the module name parameter. What ends up happening is when VPC is created the Name tag is set to vpc, but when RDS is created the tag is updated to rds. This may be by design, but is there any way to override this and explicitly set the tag value to something other than what is set as name in the module?

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Can you override it using tags input var?

Tom Vaughan avatar
Tom Vaughan

@RB (Ronak) (Cloud Posse) Yes, but it also updates the dynamoDB tag name. Is there any way to limit this to only the s3 bucket?

RB (Ronak) (Cloud Posse) avatar
RB (Ronak) (Cloud Posse)

Ah no i don’t believe so. You’d have to submit a pr to tag resources differently

Tom Vaughan avatar
Tom Vaughan

OK, thanks!

2021-09-01

Michael Dizon avatar
Michael Dizon

having a weird issue setting up sso with iam-primary-roles. After authenticating with google workspace, Leapp opens the aws console. i’m not sure where the misconfiguration is, but my user isn’t getting the arn:aws:iam::XXXXXXXXXXXX:role/xyz-gbl-identity-admin role assignment. i’m also not sure if i’m supposed to use the idp from the root account or from the identity account. any help is appreciated!

Andrea Cavagna avatar
Andrea Cavagna

Hi are you using AWS Single Sign-on or a federated role with Google workspace?

Michael Dizon avatar
Michael Dizon

a federated role w/ google

Andrea Cavagna avatar
Andrea Cavagna

This is the doc about your use case:

https://docs.leapp.cloud/use-cases/aws_iam_role/#aws-iam-federated-role

required items are:

• session Alias: a fancy name

• roleArn: the role arn you need to federate access to

• Identity Provider arn: It’s in the IAM service under Identity Providers

• SAML Url: the url of the SAML app connected to google workspace

AWS IAM Role - Leapp - Docs

Leapp is a tool for developers to manage, secure, and gain access to any cloud. From setting up your access data to activating a session, Leapp can help manage the underlying assets to let you use your provider CLI or SDK seamlessly.

1
Michael Dizon avatar
Michael Dizon

thank you @ for the quick assist!

1
OliverS avatar
OliverS

On the topic of version tracking of iac, such that only resources in plan get new tag, I found, amazingly, it should be possible to do with https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging#ignoring-changes-in-all-resources. I’m going to try this:

locals {
  iac_version = ...get git short hash...
}

provider "aws" {
  ...
  default_tags {
    tags = {
      IAC_Version = local.iac_version
    }
  }
  ignore_tags {
    keys = ["IAC_Version"]
  }
}
1
cool-doge1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fascinating!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, please report back.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve struggled to see a use-case for provider default tags b/c we use null-label and tag all of our resources explicitly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but I would like to use this if it works in our root modules.

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can use a var for this, but not a data source or resource. Because provider is instantiated before any resources or data sources run

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s a nice idea though. I wanted to use Yor for this, but found it quite buggy. This approach would get you 80% of the way for 5% of the effort

loren avatar
loren

provider default_tags are kinda nice as aws and the aws provider add support for tagging more types of resources… you can at least get the default tags on those resources without an update to the module, which can also serve as a notification that, hey, the module needs an update

loren avatar
loren

but the current implementation of default_tags leaves a bit to be desired, between errors on duplicate tags and persistent diffs

Alex Jurkiewicz avatar
Alex Jurkiewicz

Thanks for this idea Oliver. I replaced our complex WIP integration of Yor with something much simpler. The Terraform CD platform we use (Spacelift) provides a bunch of variables automatically, so just have to take advantage of them:

provider "aws" {
  default_tags {
    tags = {
      iac_repo         = var.spacelift_repository
      iac_path         = var.spacelift_project_root
      iac_commit       = var.spacelift_commit_sha
      iac_branch       = var.spacelift_commit_branch
    }
  }
}

variable "spacelift_repository" {
  type = string
  description = "Auto-computed by Spacelift."
}
variable "spacelift_project_root" {
  type = string
  description = "Auto-computed by Spacelift."
}
variable "spacelift_commit_sha" {
  type = string
  description = "Auto-computed by Spacelift."
}
variable "spacelift_commit_branch" {
  type = string
  description = "Auto-computed by Spacelift."
}

3
Alex Jurkiewicz avatar
Alex Jurkiewicz

Correction to the above. Having every update to any resource cause every resource to get modified in the plan was very annoying. We dropped iac_commit

1
1
OliverS avatar
OliverS

@ @Erik Osterman (Cloud Posse) you forgot to use ignore_tags so obviously you get everything modified, that’s what I explained during the office hours. Ignore-tags will configure the provider to ignore the tag when determining *which* resources to update. Only resources that need updating for some other reason will get the new value of the tag. Look at my original example. It has it.

Alex Jurkiewicz avatar
Alex Jurkiewicz

i saw that, but it seemed a little magic for me

Alex Jurkiewicz avatar
Alex Jurkiewicz

very clever idea tho

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Ignore-tags will configure the provider to ignore the tag when determining *which* resources to update. Only resources that need updating for some other reason will get the new value of the tag.
now i get it. yes, clever indeed.
