#terraform (2021-09)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-09-01

Michael Dizon avatar
Michael Dizon

having a weird issue setting up sso with iam-primary-roles. after authenticating with google workspace, Leapp opens the aws console. i'm not sure where the misconfiguration is, but my user isn't getting the arn:aws:iam::XXXXXXXXXXXX:role/xyz-gbl-identity-admin role assignment. i'm also not sure if i'm supposed to use the idp from the root account or from the identity account. any help is appreciated!

Andrea Cavagna avatar
Andrea Cavagna

Hi, are you using AWS Single Sign-On or a federated role with Google Workspace?

Michael Dizon avatar
Michael Dizon

a federated role w/ google

Andrea Cavagna avatar
Andrea Cavagna

This is the doc about your use case:

https://docs.leapp.cloud/use-cases/aws_iam_role/#aws-iam-federated-role

required items are:

• session Alias: a fancy name

• roleArn: the ARN of the role you need to federate access to

• Identity Provider arn: it's in the IAM service under Identity Providers

• SAML Url: the URL of the SAML app connected to Google Workspace

AWS IAM Role - Leapp - Docs

Leapp is a tool for developers to manage, secure, and gain access to any cloud. From setting up your access data to activating a session, Leapp can help manage the underlying assets to let you use your provider CLI or SDK seamlessly.
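The required items above map to plain Terraform resources; a minimal sketch of the IAM side, with hypothetical names and metadata file (note the SAML identity provider must live in the same account as the role it federates into):

resource "aws_iam_saml_provider" "google" {
  name                   = "google-workspace"
  saml_metadata_document = file("${path.module}/saml-metadata.xml")
}

data "aws_iam_policy_document" "saml_assume" {
  statement {
    actions = ["sts:AssumeRoleWithSAML"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_saml_provider.google.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "SAML:aud"
      values   = ["https://signin.aws.amazon.com/saml"]
    }
  }
}

resource "aws_iam_role" "federated_admin" {
  name               = "xyz-gbl-identity-admin"
  assume_role_policy = data.aws_iam_policy_document.saml_assume.json
}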

Michael Dizon avatar
Michael Dizon

thank you @Andrea Cavagna for the quick assist!

OliverS avatar
OliverS

On the topic of version tracking of IaC, such that only resources in the plan get the new tag: amazingly, I found it should be possible with https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging#ignoring-changes-in-all-resources. I'm going to try this:

locals {
  iac_version = ...get git short hash...
}

provider "aws" {
  ...
  default_tags {
    tags = {
      IAC_Version = local.iac_version
    }
  }
  ignore_tags {
    keys = ["IAC_Version"]
  }
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fascinating!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ok, please report back.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’ve struggled to see a use-case for provider default tags b/c we use null-label and tag all of our resources explicitly.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but I would like to use this if it works in our root modules.

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can use a var for this, but not a data source or resource, because providers are instantiated before any resources or data sources run
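A minimal sketch of that variable-based approach (variable name and region are assumptions here; the hash is computed outside Terraform and injected from CI, e.g. as TF_VAR_iac_version):

variable "iac_version" {
  type        = string
  description = "Git short hash, passed in from CI"
  default     = "unknown"
}

provider "aws" {
  region = "us-east-1" # assumption for the sketch

  default_tags {
    tags = {
      IAC_Version = var.iac_version
    }
  }

  ignore_tags {
    keys = ["IAC_Version"]
  }
}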

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s a nice idea though. I wanted to use Yor for this, but found it quite buggy. This approach would get you 80% of the way for 5% of the effort

loren avatar

provider default_tags are kinda nice as aws and the aws provider add support for tagging more types of resources… you can at least get the default tags on those resources without an update to the module, which can also serve as a notification that, hey, the module needs an update

loren avatar

but the current implementation of default_tags leaves a bit to be desired, between errors on duplicate tags and persistent diffs

Alex Jurkiewicz avatar
Alex Jurkiewicz

Thanks for this idea Oliver. I replaced our complex WIP integration of Yor with something much simpler. The Terraform CD platform we use (Spacelift) provides a bunch of variables automatically, so we just have to take advantage of them:

provider "aws" {
  default_tags {
    tags = {
      iac_repo         = var.spacelift_repository
      iac_path         = var.spacelift_project_root
      iac_commit       = var.spacelift_commit_sha
      iac_branch       = var.spacelift_commit_branch
    }
  }
}

variable "spacelift_repository" {
  type = string
  description = "Auto-computed by Spacelift."
}
variable "spacelift_project_root" {
  type = string
  description = "Auto-computed by Spacelift."
}
variable "spacelift_commit_sha" {
  type = string
  description = "Auto-computed by Spacelift."
}
variable "spacelift_commit_branch" {
  type = string
  description = "Auto-computed by Spacelift."
}

Alex Jurkiewicz avatar
Alex Jurkiewicz

Correction to the above. Having every update to any resource cause every resource to get modified in the plan was very annoying. We dropped iac_commit

OliverS avatar
OliverS

@Alex Jurkiewicz @Erik Osterman (Cloud Posse) you forgot to use ignore_tags so obviously you get everything modified, that’s what I explained during the office hours. Ignore-tags will configure the provider to ignore the tag when determining *which* resources to update. Only resources that need updating for some other reason will get the new value of the tag. Look at my original example. It has it.

Alex Jurkiewicz avatar
Alex Jurkiewicz

i saw that, but it seemed a little magic for me

Alex Jurkiewicz avatar
Alex Jurkiewicz

very clever idea tho

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Ignore-tags will configure the provider to ignore the tag when determining *which* resources to update. Only resources that need updating for some other reason will get the new value of the tag.
now i get it. yes, clever indeed.

2021-09-02

curious deviant avatar
curious deviant

Hello !

I am maintaining state in S3 and using DynamoDB for state locking. I had to make a manual change to the state file, and I successfully uploaded the updated state file. But running any tf command errors out now because the md5 digest of the newly uploaded file doesn't match the entry in the DynamoDB table. Looks like the solution is to manually update the digest in the table entry corresponding to the backend. Just wanted to be sure that there isn't another way to have terraform regenerate/repopulate DynamoDB with the updated md5

loren avatar

easy button is to just delete the item from the dynamodb and let terraform auto-generate it

curious deviant avatar
curious deviant

ty!

Tom Vaughan avatar
Tom Vaughan

I am using the tfstate-backend module and noticed some odd behavior. This is only when using a single s3 bucket to hold multiple state files. For example, the bucket is named tf-state, the state file for VPC would be in tf-state/vpc, and the RDS state file would be in tf-state/rds. The issue is the s3 bucket tag Name gets updated to whatever is set in the module name parameter. What ends up happening is when VPC is created the Name tag would be set as vpc, but when RDS is created the tag is updated to rds. This may be by design, but is there any way to override this and explicitly set the tag value to something other than what is set as name in the module?

RB avatar

Can you override it using tags input var?

Tom Vaughan avatar
Tom Vaughan

@RB Yes, but it also updates the dynamoDB tag name. Is there any way to limit this to only the s3 bucket?

RB avatar

Ah no i don’t believe so. You’d have to submit a pr to tag resources differently

Tom Vaughan avatar
Tom Vaughan

OK, thanks!

2021-09-03

AugustasV avatar
AugustasV

I would like to use the aws_lb data source's arn_suffix, but I receive this error (aws_lb | Data Sources | hashicorp/aws | Terraform Registry). I can see that option in the resource attributes (aws_lb | Resources | hashicorp/aws | Terraform Registry):

Error: Value for unconfigurable attribute

  on ../../modules/deployment/data_aws_lb.tf line 3, in data "aws_lb" "lb":
   3:   arn_suffix = var.arn_suffix

Can't configure a value for "arn_suffix": its value will be decided
automatically based on the result of applying this configuration.
Markus Muehlberger avatar
Markus Muehlberger

Only values listed under Argument Reference can be supplied. Values under Attributes Reference are read-only outputs of the resource and can't be set.
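Since arn_suffix is an attribute rather than an argument, a working sketch looks the LB up by one of its arguments and reads arn_suffix from the result (var.lb_name is assumed):

data "aws_lb" "lb" {
  # name (or arn) is an argument; arn_suffix is a read-only attribute
  name = var.lb_name
}

output "lb_arn_suffix" {
  value = data.aws_lb.lb.arn_suffix
}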

Release notes from terraform avatar
Release notes from terraform
03:03:43 PM

v1.0.6 1.0.6 (September 03, 2021) ENHANCEMENTS: backend/s3: Improve SSO handling and add new endpoints in the AWS SDK (#29017) BUG FIXES: cli: Suppress confirmation prompt when initializing with the -force-copy flag and migrating state between multiple workspaces. (#29438)…

Bumping AWS GO SDK to 1.38.42 to fix AWS SSO auth woes by luxifr · Pull Request #29017 · hashicorp/terraform

AWS SSO is used in many organizations to authenticate users for access to their AWS accounts. It's the same scale organizations that would very likely also use Terraform to manage their infrast…

command: Suppress prompt for init -force-copy by alisdair · Pull Request #29438 · hashicorp/terraform

The -force-copy flag to init should automatically migrate state. Previously this was not applied to one case: when migrating from a backend with multiple workspaces to another backend supporting mu…

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know a good module for AWS budgets before I create my own?

Mohamed Habib avatar
Mohamed Habib

Hi guys, recently I've been thinking of ways to make my terraform code DRY within a project and avoid having to wire outputs from some modules to other modules. I came up with a pattern similar to "dependency injection" using terraform data blocks. Keen to hear your thoughts on this! Also curious how folks organise their large terraform codebases: https://github.com/diggerhq/infragenie/

GitHub - diggerhq/infragenie: decompose your terraform with dependency injection

decompose your terraform with dependency injection - GitHub - diggerhq/infragenie: decompose your terraform with dependency injection

loren avatar

Nifty


2021-09-05

Rhys Davies avatar
Rhys Davies

Hey guys, quick q: when using Terraform to manage your AWS account, how do you or your team deploy containers to ECS? Are you using Terraform to do it or some other process to create/update container definitions?

Zach avatar

The answer is largely "it depends" based on a few factors. Is the service in question considered "part of the infrastructure", such as a log aggregation system? In that case you might manage it entirely with terraform and specify upgrades to image tags and specs via module versioning and variables. If it's part of your actual application layer you can do the same thing, but this could get in the way of your app teams managing their own deploys, and then you're using terraform to deploy software; or you can have terraform deploy an initial dummy container definition that uses a sort of 'hello world' service while ignoring any further changes to the Task Definition, and allow your CI/CD system to push new definitions directly to ECS.
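A minimal sketch of that last pattern, assuming an aws_ecs_cluster.main and a bootstrap task definition exist elsewhere in the configuration:

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.bootstrap.arn # initial "hello world" definition
  desired_count   = 2

  lifecycle {
    # After the first deploy, CI/CD owns task definition revisions;
    # Terraform stops trying to roll them back.
    ignore_changes = [task_definition]
  }
}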

Rhys Davies avatar
Rhys Davies

Yeah it's application layer, using Terraform to apply updates by tagging images and passing the image tags to terraform as a var. I had no idea about https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes if that's what you are referring to? This seems like a really great solution, because with this small change to our ECS services I could hand over the container deploy to something like https://circleci.com/docs/2.0/ecs-ecr/ which seems like an attractive solution.

The lifecycle Meta-Argument - Configuration Language - Terraform by HashiCorp

The meta-arguments in a lifecycle block allow you to customize resource behavior.

Deploying to AWS ECR/ECS - CircleCI

How to use CircleCI to deploy to AWS ECS from ECR

Rhys Davies avatar
Rhys Davies

Awesome! Thanks so much for your help

NeuroWinter avatar
NeuroWinter

Good morning all!

I have a few quick questions - I think I am doing something wrong because I have not seen anyone else talk about this but here goes! - I have been trying to use cloudposse/cloudfront-s3-cdn/aws in github actions to set up the infrastructure for my static site, and I have faced a few issues. The first was when I was trying to create the cert for the site within main.tf, as per the examples in the README.md but I was getting an error about the zone_id being “”. I solved that by supplying the cert arn manually.

Now I face the problem of after running terraform and applying the config via github actions, on the next run I get “Error creating S3 bucket: BucketAlreadyOwnedByYou” and it looks like it is trying to create everything again, even though it has been deployed and I can see all the pieces in the aws console. Here is a gist of my main.tf: https://gist.github.com/NeuroWinter/2e1877909ce06bd4ae2719b7d004f721

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sounds like you don’t have a backend set up to store your statefile

Alex Jurkiewicz avatar
Alex Jurkiewicz

Terraform creates a JSON file after running apply that contains details of all infrastructure that was created. It uses this file on subsequent runs to know which infra it has already created.

Most commonly this is stored in S3 using the S3 backend. Read the docs for more info on how to configure this.

To repair your deployment it will take some tedious surgery, btw. The simplest approach would be to manually delete any resource that Terraform claims is in the way, so it can recreate them. (Once your state is set up)
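A minimal sketch of an S3 backend block, with hypothetical bucket and table names:

terraform {
  backend "s3" {
    bucket         = "my-tf-state"                   # hypothetical bucket
    key            = "static-site/terraform.tfstate" # path of this root module's state
    region         = "us-east-1"
    dynamodb_table = "my-tf-locks"                   # optional: state locking
    encrypt        = true
  }
}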

NeuroWinter avatar
NeuroWinter

Ahh that makes a lot of sense thank you @Alex Jurkiewicz ! I will read up on the docs on how to do that

Jeb Cole avatar
Jeb Cole

Understanding what the statefile is and what terraform does with it (not too complicated) is important


2021-09-06

David avatar

Hi folks - I appear to be having an issue with the following module: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task

╷
│ Error: Invalid value for module argument
│ 
│   on main.tf line 40, in module "ecs_alb_service_task":
│   40:   volumes = var.volumes
│ 
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attributes "efs_volume_configuration" and "host_path" are required.
╵

The above is the error message I get when performing a Terraform plan

The section of code which it is complaining about looks like this:

  dynamic "volume" {
    for_each = var.volumes
    content {
      host_path = lookup(volume.value, "host_path", null)
      name      = volume.value.name

      dynamic "docker_volume_configuration" {
        for_each = lookup(volume.value, "docker_volume_configuration", [])
        content {
          autoprovision = lookup(docker_volume_configuration.value, "autoprovision", null)
          driver        = lookup(docker_volume_configuration.value, "driver", null)
          driver_opts   = lookup(docker_volume_configuration.value, "driver_opts", null)
          labels        = lookup(docker_volume_configuration.value, "labels", null)
          scope         = lookup(docker_volume_configuration.value, "scope", null)
        }
      }

      dynamic "efs_volume_configuration" {
        for_each = lookup(volume.value, "efs_volume_configuration", [])
        content {
          file_system_id          = lookup(efs_volume_configuration.value, "file_system_id", null)
          root_directory          = lookup(efs_volume_configuration.value, "root_directory", null)
          transit_encryption      = lookup(efs_volume_configuration.value, "transit_encryption", null)
          transit_encryption_port = lookup(efs_volume_configuration.value, "transit_encryption_port", null)
          dynamic "authorization_config" {
            for_each = lookup(efs_volume_configuration.value, "authorization_config", [])
            content {
              access_point_id = lookup(authorization_config.value, "access_point_id", null)
              iam             = lookup(authorization_config.value, "iam", null)
            }
          }
        }
      }
    }
  }

With vars for var.volumes declared like this:

variable "volumes" {
  type = list(object({
    host_path = string
    name      = string
    docker_volume_configuration = list(object({
      autoprovision = bool
      driver        = string
      driver_opts   = map(string)
      labels        = map(string)
      scope         = string
    }))
    efs_volume_configuration = list(object({
      file_system_id          = string
      root_directory          = string
      transit_encryption      = string
      transit_encryption_port = string
      authorization_config = list(object({
        access_point_id = string
        iam             = string
      }))
    }))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}

I am passing in the following:

volumes = [
  {
    name = "etc"
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    }
  },
  {
    name      = "log"
    host_path = "/var/log/hello"
  },
  {
    name = "opt"
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    }
  },
]

If I update the module variables file in my .terraform folder to:

variable "volumes" {
  type = list(object({
    #host_path = string
    #name      = string
    #docker_volume_configuration = list(object({
    #  autoprovision = bool
    #  driver        = string
    #  driver_opts   = map(string)
    #  labels        = map(string)
    #  scope         = string
    #}))
    #efs_volume_configuration = list(object({
    #  file_system_id          = string
    #  root_directory          = string
    #  transit_encryption      = string
    #  transit_encryption_port = string
    #  authorization_config = list(object({
    #    access_point_id = string
    #    iam             = string
    #  }))
    #}))
  }))
  description = "Task volume definitions as list of configuration objects"
  default     = []
}

This applies with no problem. Any ideas, or should I submit a bug?

GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service which exposes a web service via ALB.

Terraform module which implements an ECS service which exposes a web service via ALB. - GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service whic…

RB avatar

@David every key in the object has to be set or terraform will error out. this is a limitation in terraform itself.

RB avatar
Type Constraints - Configuration Language - Terraform by HashiCorp

Terraform module authors and provider developers can use detailed type constraints to validate the inputs of their modules and resources.

David avatar

i think i tried this, let me try again

David avatar

yeah i tried setting the values to null

David avatar
volumes = [
  {
    name = "etc"
    host_path = null
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },
    efs_volume_configuration = {
      file_system_id = null
      root_directory = null
      transit_encryption = null
      transit_encryption_port = null
      authorization_config = { 
        access_point_id = null
        iam = null
      }
    }
  },
  {
    name      = "log"
    host_path = "/var/log/hello"
    docker_volume_configuration = {
      scope         = null
      autoprovision = null
    },
    efs_volume_configuration = {
      file_system_id = null
      root_directory = null
      transit_encryption = null
      transit_encryption_port = null
      authorization_config = { 
        access_point_id = null
        iam = null
      }
    }
  },
  {
    name = "opt"
    host_path = null
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },
    efs_volume_configuration = {
      file_system_id = null
      root_directory = null
      transit_encryption = null
      transit_encryption_port = null
      authorization_config = { 
        access_point_id = null
        iam = null
      }
    }
  },
]
David avatar

but just moans about this:

│ Error: Invalid value for module argument
│ 
│   on main.tf line 40, in module "ecs_alb_service_task":
│   40:   volumes = var.volumes
│ 
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attribute "docker_volume_configuration": list of object required.
╵
loren avatar

typically, a list of objects can be zeroed using []. a singular object can be passed as null

RB avatar

you’re giving docker_volume_configuration a map instead of a list

this

    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },

should be

    docker_volume_configuration = [{
      scope         = "shared"
      autoprovision = true
    }],

see

attribute "docker_volume_configuration": list of object required.
David avatar

didn’t spot the [] and {}

David avatar
volumes = [
  {
    name = "etc"
    host_path = null
    efs_volume_configuration = []
    docker_volume_configuration = [{
      autoprovision = true
      driver = null
      driver_opts = null
      labels = null
      scope         = "shared"
    }]
  },
  {
    name      = "log"
    host_path = "/var/log/gitlab"
    efs_volume_configuration = []
    docker_volume_configuration = []
  },
  {
    name = "opt"
    host_path = null
    docker_volume_configuration = [{
      autoprovision = true
      scope         = "shared"
      driver = null
      driver_opts = null
      labels = null
    }]
    efs_volume_configuration = []
  },
]
David avatar

this works

RB avatar

Nice, glad you got it working!

David avatar

me too, i really appreciate the help

Tony C avatar

I’m having a similar issue as this one, but I’m trying to use efs_volume_configuration instead of docker_volume_configuration. I am correctly passing the docker config as an empty list to avoid the problem of a required option, but then when I go to apply, I get the following error:

Error: ClientException: When the volume parameter is specified, only one volume configuration type should be used.

So, Terraform requires me to pass both configurations, but even when one is empty, it’s complaining that both are provided. Is there any way around this problem? @RB any ideas?

Tony C avatar

the volumes block:

  volumes = [{
    name = "html"
    host_path = "/usr/share/nginx/html"
    docker_volume_configuration = []
    efs_volume_configuration = [{
      file_system_id = dependency.efs.outputs.id
      root_directory          = "/home/user/www"
      transit_encryption      = "ENABLED"
      transit_encryption_port = 2999
      authorization_config = []
    }]
  }]
RB avatar

Try setting docker_volume_configuration to null instead

Tony C avatar

@RB no bueno:

Error: Invalid dynamic for_each value

  on .terraform/modules/ecs-service/main.tf line 70, in resource "aws_ecs_task_definition" "default":
  70:         for_each = lookup(volume.value, "docker_volume_configuration", [])
    |----------------
    | volume.value is object with 4 attributes

Cannot use a null value in for_each.
RB avatar

could you create a ticket with a minimum viable reproducible example in the https://github.com/cloudposse/terraform-aws-ecs-container-definition repo? Doing this would make it easier to debug locally.

if this is truly the case, then the issue may be with the terraform resource itself because it should respect passing in null as if the param is not passed in. if it’s not honoring that, then the terraform golang resource in the aws provider is to blame rather than the module itself

GitHub - cloudposse/terraform-aws-ecs-container-definition: Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - GitHub - cloudposse/terraform-aws-ecs-container-…

Tony C avatar

will do

Tony C avatar

@RB the volumes variable is in ecs-service not aws-ecs-container-definition. are you sure you want me to submit the issue in the latter?

Tony C avatar

or maybe i’m not understanding the distinction between volumes_from in the container definition module and volumes in the service module

RB avatar

the ecs service module feeds it into the container definition module

Tony C avatar

ok so i can just use my volumes arg verbatim as the value for volumes_from in my reproducer?

Tony C avatar

appears not. can i give you a reproducer that uses ecs-service?

Tony C avatar

I’m using terraform-aws-ecs-alb-service-task

Tony C avatar
Error when trying to use EFS volumes in task/container definition · Issue #147 · cloudposse/terraform-aws-ecs-container-definition

Describe the Bug I'm trying to use an EFS volume in an ECS service definition. The volumes variable is defined such that one has to supply a value for both the efs_volume_configuration and dock…

2021-09-07

O K avatar

Hi All! How long approximately should it take to deploy AWS MSK? I use this module https://registry.terraform.io/modules/cloudposse/msk-apache-kafka-cluster/aws/latest and the deployment has been running for 20 min already with nothing to show. Any feedback please?

module.kafka.aws_msk_cluster.default[0]: Still creating... [26m0s elapsed]
module.kafka.aws_msk_cluster.default[0]: Still creating... [26m10s elapsed]
RB avatar

It does take a while

RB avatar

Id give it 30 min at least

O K avatar

Thank you!

RB avatar

Note that it’s not the module but the aws msk itself

O K avatar

I see, do we need to specify zone_id or is this an optional parameter?

Mohamed Habib avatar
Mohamed Habib

yup MSK takes ages to be ready

O K avatar


I see, do we need to specify zone_id or is this an optional parameter?
please suggest regarding this question

RB avatar

All the module arguments are shown in the readme. On the far right, it shows required yes or no

O K avatar

After 26 min it has been created…

Wira avatar
Wira
12:32:46 PM

Hello, I am currently using this terraform module https://registry.terraform.io/modules/cloudposse/elastic-beanstalk-environment/aws/latest to create a worker environment, but I can't find how to configure a custom endpoint for the worker daemon to post to the SQS queue.

RB avatar

Is there a terraform resource that can provide a custom endpoint? I don’t see one :(

RB avatar

The only one I can see is the environment resource's endpoint URL as an attribute, but I don't see a way to modify it like in the picture above

Wira avatar

I am actually not too familiar with terraform. But after I looked here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elastic_beanstalk_environment , I don’t think so

RB avatar

There may be an open pull request in the aws provider? If not, they need all the contributions they can get :)

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
loren avatar

bummed, but glad they’re at least up front about it

Rhys Davies avatar
Rhys Davies

Time to apply to Hashi

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ya, so curious what the back story is here…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they had some recent departures?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they reached some tipping point?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they had some incident reported and need to pause all contributions (E.g. like what happened to the linux kernel)?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I wonder where we can get more information about this? Any people you can get some commentary from?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have they taken some time to pause and regroup on how to scale engineering of open source at this scale?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

It’s really interesting to look at this in light of Docker’s issues in the open source world: https://www.infoworld.com/article/3632142/how-docker-broke-in-half.html

How Docker broke in half

The game changing container company is a shell of its former self. What happened to one of the hottest enterprise technology businesses of the cloud era?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I doubt we can get anyone to comment publicly on it.

Rhys Davies avatar
Rhys Davies

Not hugely forthcoming in the Reddit threads that I've been reading, but it seems that they are growing faster than they are hiring, compounded with some losses in the Terraform department coupled with normal PTO/vacation overhead

Rhys Davies avatar
Rhys Davies

I was reading a Tweet from Mitchell too, but I can’t find it now

loren avatar

@gooeyblob This is only for core which should not be noticeable to any end users since providers are the main source of external contribution and there is no change in policy there. This allows our core team to focus a bit more while we hire to fill the team more.

Rhys Davies avatar
Rhys Davies

he was basically trying to downplay the situation

Rhys Davies avatar
Rhys Davies

thank you - that’s the exact one

Rhys Davies avatar
Rhys Davies

Basically it looks like Silicon Valley is hot af right now if you have Terraform skills; they literally cannot hire fast enough because everyone is hiring again after the pandemic and it's a feeding frenzy

Rhys Davies avatar
Rhys Davies

I wasn’t joking when I said it’s time to apply to Hashicorp, maybe it’s time to work for a big company…

Rhys Davies avatar
Rhys Davies

I also think that a lot of companies haven’t really figured out working full remotely yet, it’s possible that they are having a people issue as well as a resourcing block which is slowing things down

Rhys Davies avatar
Rhys Davies

I notice that their SF office isn’t listed on any job listings and they are all fully remote..

Rhys Davies avatar
Rhys Davies

Looking at cash flow, Hashi is at a 5.2B valuation, 8 years old, with a Series E of 175m, so they have fuel in the tank to hire with, even if being at Series E and not revenue positive suggests that they are having trouble monetizing their products

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I think Hashi was mostly remote even pre-pandemic. I agree that the market is hot and it’s hard to find good people. There’s a lot of cash running around.

Jeb Cole avatar
Jeb Cole

It’s the remote pool that is getting drained hardest now that so many tech companies have been pushed to go remote

Mohamed Habib avatar
Mohamed Habib

could it be a cashflow issue?

Andrew Nazarov avatar
Andrew Nazarov

Sharing an update to the recent speculation around Terraform and community contributions. The gist is: we’re growing a ton, this temporary pause is localized to a single team (of many), and Terraform Providers are completely unchanged and unaffected. https://www.hashicorp.com/blog/terraform-community-contributions

Andrew Nazarov avatar
Andrew Nazarov

Sharing a brief update on Terraform and community contributions, given some recent noise. TL;DR: Terraform is continuing to grow rapidly, we are scaling the team, and we welcome contributions. Also we are hiring! https://www.hashicorp.com/blog/terraform-community-contributions

Kyle Johnson avatar
Kyle Johnson

Is there any existing solution for generating KMS policies that enable interop with various AWS services?

Some services need actions others don't, such as kms:CreateGrant. CloudTrail audits will flag that action being granted to services which don't need it.

Seems like there ought to be a module for creating these policies which already knows the details of individual action requirements vs recreating policies from AWS docs on every project

loren avatar

dealing with exactly this right now, for cloudtrail, config, and guardduty. such a pain to figure out the kms policy and bucket policy!!
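For illustration, a sketch of the CloudTrail statement of such a key policy, following the statement shape AWS documents (the account ID and trail pattern are placeholders):

data "aws_iam_policy_document" "cloudtrail_kms" {
  statement {
    sid     = "AllowCloudTrailToEncryptLogs"
    actions = ["kms:GenerateDataKey*"]

    principals {
      type        = "Service"
      identifiers = ["cloudtrail.amazonaws.com"]
    }

    # In a key policy, "*" means "this key"
    resources = ["*"]

    condition {
      test     = "StringLike"
      variable = "kms:EncryptionContext:aws:cloudtrail:arn"
      values   = ["arn:aws:cloudtrail:*:111122223333:trail/*"]
    }
  }
}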

Alex Jurkiewicz avatar
Alex Jurkiewicz

I started work on creating canned policies for every service in a PR for the cloudposse key module, but I am no longer actively working on it

Alex Jurkiewicz avatar
Alex Jurkiewicz

If you wanted to improve everyone’s life a little bit, it might be a good launchpad


2021-09-08

Mohammed Yahya avatar
Mohammed Yahya

Terraform is not currently reviewing Community Pull Requests: HashiCorp has acknowledged that it is currently understaffed and is unable to review public PRs.

Be explicit that community PR review is currently paused · hashicorp/terraform@6562466

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. - Be explicit that community PR review is currently paused · hashicorp/terraform@6562466


conzymaher avatar
conzymaher

Only applies to terraform core

conzymaher avatar
conzymaher

Not providers

Mohammed Yahya avatar
Mohammed Yahya

I see.

conzymaher avatar
conzymaher

Lets see how it plays out but I’m not particularly worried

Mohammed Yahya avatar
Mohammed Yahya

For core I guess yes; maybe they don't want specific features added by the community (for example, the terraform add command), but not sure why

conzymaher avatar
conzymaher
HashiCorp Terraform and Community Contributions

We recently added a note to the HashiCorp Terraform contribution guidelines and this blog provides additional clarity and context for our community and commercial customers.

Saichovsky avatar
Saichovsky

Hello,

We have an aws_directory_service_directory resource defined in a service, which creates a security group that allows ports 1024-65535 to be accessible from 0.0.0.0/0, and this is getting flagged by Security Hub because the AWS CIS standards do not recommend allowing ingress from 0.0.0.0/0 for TCP port 3389.

My question is on how to restrict some of the rules in the resultant SG that gets created by the aws_directory_service_directory resource. How do you remediate this using terraform?

mfridh avatar

Anyone here using tfexec / tfinstall? https://github.com/hashicorp/terraform-exec

2021/09/08 13:15:58 error running Init: fork/exec /tmp/tfinstall354531296/terraform: not a directory

I feel like there are a few lies in this code here

This one for example: https://github.com/hashicorp/terraform-exec/blob/v0.14.0/tfexec/terraform.go#L62-L74

mfridh avatar

As usual… nothing to see here. oh, funny :smile: … Yeah it was all a lie.

I had given a file instead of a directory as its workingDir.

And the error message was very confusing because it didn’t report THAT variable as “not a directory”


Tomek avatar

:wave: I have the following public subnet resource:

resource "aws_subnet" "public_subnet" {
  for_each = {
    "${var.aws_region}a" = "172.16.1.0"
    "${var.aws_region}b" = "172.16.2.0"
    "${var.aws_region}c" = "172.16.3.0"
  }
  vpc_id                  = aws_vpc.vpc.id
  cidr_block              = "${each.value}/24"
  availability_zone       = each.key
  map_public_ip_on_launch = true
}

I want to reference the subnets in an ALB resource I’m creating. At the moment this looks like:

  subnet_ids = [
    aws_subnet.public_subnet["us-east-1a"].id,
    aws_subnet.public_subnet["us-east-1b"].id,
    aws_subnet.public_subnet["us-east-1c"].id
  ]

Is there a way to wildcard the above? I tried aws_subnet.public_subnet.*.id, which doesn't work because I think the for_each object is a map. What is the proper way to handle this?

loren avatar
subnet_ids = [ for subnet in aws_subnet.public_subnet : subnet.id ]
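An equivalent spelling uses values() with a splat, since the for_each result is a map of objects:

subnet_ids = values(aws_subnet.public_subnet)[*].id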
Tomek avatar

thanks, that worked perfectly!

Release notes from terraform avatar
Release notes from terraform
07:43:40 PM

v1.1.0-alpha20210908 1.1.0 (Unreleased) UPGRADE NOTES: Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph…

2021-09-09

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know of an IAM policy that will let people view the SSM parameter names and that's it? I don't want them to be able to see the values.

mfridh avatar

"Secret" values would usually be encrypted using a KMS key, so controlling access to the KMS key could be enough if your intention is to hide only the encrypted values.

Otherwise, the only thing you can give would be ssm:DescribeParameters I think.

https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-access.html

Restricting access to Systems Manager parameters using IAM policies - AWS Systems Manager

Restrict access to Systems Manager parameters by using IAM policies.

Aleksandr Fofanov avatar
Aleksandr Fofanov

just give them the ssm:DescribeParameters permission; they will be able to list and view individual parameters' metadata but not the values
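A minimal sketch of such a policy document (name is hypothetical; ssm:DescribeParameters doesn't support resource-level scoping, hence the wildcard):

data "aws_iam_policy_document" "ssm_list_names_only" {
  statement {
    sid       = "ListParameterNamesOnly"
    actions   = ["ssm:DescribeParameters"]
    resources = ["*"]
  }
}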

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

thanks @mfridh @Aleksandr Fofanov that worked like a dream

Pierre-Yves avatar
Pierre-Yves

I had a lot of tags to deploy, and not all resources support tagging. To be efficient in the process, after trying many options to trigger a command on *.tf changes, I finally used watch terraform validate (inotifywait doesn't seem to work on WSL + VS Code)

deepak kumar avatar
deepak kumar

Hi People, I am creating an ecs service using tf 0.11.7. I have set the network_mode default to "bridge" for the ecs task definition, but the module can be reused with a different network_mode such as "awsvpc". Since tf 0.11.* doesn't support dynamic blocks, I need to find a way to set arguments such as network_configuration dynamically (based on the network_mode). Using locals I guess it can be achieved. Is there any other way to do it in tf 0.11.*?

Grummfy avatar
Grummfy

You can use terraspace / terragrunt / other tools to do that, but I would advise updating the version of terraform a bit…
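A common 0.11-era workaround is one resource per network mode, toggled with count; a sketch, assuming var.family and var.container_definitions exist alongside var.network_mode:

# 0.11 has no dynamic blocks, so declare both variants and enable one via count
resource "aws_ecs_task_definition" "bridge" {
  count                 = "${var.network_mode == "bridge" ? 1 : 0}"
  family                = "${var.family}"
  network_mode          = "bridge"
  container_definitions = "${var.container_definitions}"
}

resource "aws_ecs_task_definition" "awsvpc" {
  count                 = "${var.network_mode == "awsvpc" ? 1 : 0}"
  family                = "${var.family}"
  network_mode          = "awsvpc"
  container_definitions = "${var.container_definitions}"
}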

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

has anyone managed to get terraform working when using federated SSO with AWS and leveraging an assume-role in the terraform configuration?

Andrea Cavagna avatar
Andrea Cavagna

I think you can manage this situation with Leapp. Leapp also manages the assume role from federated roles.

conzymaher avatar
conzymaher
GitHub - 99designs/aws-vault: A vault for securely storing and accessing AWS credentials in development environments

A vault for securely storing and accessing AWS credentials in development environments - GitHub - 99designs/aws-vault: A vault for securely storing and accessing AWS credentials in development envi…

Andrea Cavagna avatar
Andrea Cavagna

I started an open-source project to manage multi-account access in multi-cloud. It is a Desktop App that manages IAM Users, IAM federated roles, IAM chained roles, and automatically retrieves all the AWS SSO roles. Also, it secures credentials by managing the credentials file on your behalf and generates a profile with short-lived credentials only when needed. If you are interested in the idea, look at the guide made by Nuru:

https://docs.cloudposse.com/howto/geodesic/authenticate-with-leapp/


conzymaher avatar
conzymaher

It's an awesome tool. I am using it for interacting with dozens of AWS accounts, whether it's IAM users + MFA or AWS SSO

Tomek avatar

ooof, I just corrupted my local state file and lost the state of a bunch of resources in my terraform (the backup was corrupted too). I don't actually care about the resources; is there a way I can force terraform to destroy the resources that map to my terraform code and reapply?

Alex Jurkiewicz avatar
Alex Jurkiewicz

No. Run terraform apply repeatedly and manually delete the resources it says are in the way. But this doesn't work in all cases. If you had e.g. S3 buckets or IAM resources with a name prefix specified instead of a name, they will be missed

Tomek avatar

i was afraid of this

Tomek avatar

well first thing i’m doing is switching to versioned s3 backend

Alex Jurkiewicz avatar
Alex Jurkiewicz

Good idea

pjaudiomv avatar
pjaudiomv

Backup the bucket too :), learned that one after a coworker deleted said versioned bucket

conzymaher avatar
conzymaher

ooof

2021-09-10

emem avatar

hey guys, has anyone ever implemented a description of what terraform is applying on the approval stage in codepipeline? Like, I can see what my terraform is planning in the terraform plan stage, and I would like to pass these details to my approval stage, but approval does not support an artifact attribute. Anyone found a solution for this before?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We're using Spacelift which does that. If you learn how to do it with codepipeline, lmk!

Nikola Milic avatar
Nikola Milic
10:20:31 AM

How do I access the ARN of the created resource in the sibling modules belonging to the same main.tf file? I want to create an IAM user, and an ECR resource that needs that user's ARN (check line 22). How do I reference the variables?

pjaudiomv avatar
pjaudiomv

Check the outputs of the user module; then you would reference it prefixed with module and the module name, e.g. module.gitlab_user.user_arn

Nikola Milic avatar
Nikola Milic

Thanks @pjaudiomv

pjaudiomv avatar
pjaudiomv

Yes this explains modules and accessing their values https://www.terraform.io/docs/language/modules/syntax.html section Accessing Module Output Values

Modules - Configuration Language - Terraform by HashiCorp

Modules allow multiple resources to be grouped together and encapsulated.

pjaudiomv avatar
pjaudiomv

All of the cloudposse modules reference the inputs/outputs on the respective GitHub repo https://github.com/cloudposse/terraform-aws-iam-system-user#outputs

GitHub - cloudposse/terraform-aws-iam-system-user: Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI)

Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI) - GitHub - cloudposse/terraform-aws-iam-system-user: Terraform Module to Provision a Basic…
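Putting it together, a minimal sketch of wiring one module's output into a sibling module (the module names, and the ecr module's principals_full_access input, are assumptions here):

module "gitlab_user" {
  source = "cloudposse/iam-system-user/aws"
  name   = "gitlab"
}

module "ecr" {
  source = "cloudposse/ecr/aws"
  name   = "app"

  # consume the sibling module's output
  principals_full_access = [module.gitlab_user.user_arn]
}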

Cameron Pope avatar
Cameron Pope

Hello - First of all, thank you for having so many wonderful Terraform modules. I have a question about the aws-ecs-web-app module and task definitions. It seems like neither setting for ignore_changes_task_definition does quite what I need, so I sense I am ‘doing it wrong’, but I am struggling to find the happy path to doing the right thing.

When I update by pushing new code to Github and then run terraform apply, the module wants to switch the task definition back to the previous version. Setting ignore_changes_task_definition to true fixes that, but if I want to update the container size or environment variables, then those changes do not get picked up.

It seems like the underlying problem is my way of doing things (managing the Task Definition via Terraform) is coupling Terraform and the CI/CD process too tightly, and that either Terraform or CodeBuild should ‘own’ the Task Definition, but not both. I don’t see a clean way to create the Task Definition during the Build phase and set it during the deploy phase. The standard ECS deployment takes the currently-running task definition and updates the image uri. It looks like one needs to use CodeDeploy to do anything more advanced.

I don't think I'm the first person to want Terraform not to change the revision unless I've made changes to the task definition on the Terraform side. How do others handle this? Or is my use-case outside of what the aws-ecs-web-app module is designed for?

If you made it here, thank you for reading!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would use the web app module more as a reference for how to tie all the other modules together

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you’ll quickly find yourself wanting to make changes

Cameron Pope avatar
Cameron Pope

Thank you for the response - that was my sense. It is great to have a working end-to-end example, and it made it easy to set up a Github -> ECS pipeline.

Interestingly, after about a year, the only thing that we’re really missing for our use-case is the ability to generate task definitions after a successful container build. The web-app module got us almost 100% of the way there, and for that I’m grateful.

Nick Kocharhook avatar
Nick Kocharhook

@Cameron Pope can you say a bit more about how you solved this problem? I’m running into the same conflict between CI/CD (Codefresh in my case) and Terraform. When ignore_changes_task_definition is on (which it is by default), I’m still getting Terraform wanting to update the task definition to a new revision with the sha256 of the new image as the tag, compared to the GitHub short rev for the CD. This breaks the web app deploy. :disappointed:

I think everything would be fine if it just honored the variable and actually ignored changes to the service’s task_definition. I don’t have a lot of changes to the instance count planned. I can’t figure out why it’s not honoring the setting.

2021-09-11

2021-09-12

2021-09-13

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

anyone hooked in the identity provider for EKS yet? any gotchas I should be aware of?

Rhys Davies avatar
Rhys Davies

Hey guys, I'm writing the Terraform for a new AWS ECS Service. I want to deploy 6 (but effectively n) similar container definitions in my task definition. What's the recommended way of looping over a data structure (a dict, or list of lists) and creating container_definitions?

  1. Is it supposed to be done with a JSON file and a data "template_file" block with some sort of comprehension?
  2. I’ve found https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecs_container_definition but it doesn’t have any parameters for command which is the part between the container definitions that needs to differ slightly
  3. https://github.com/cloudposse/terraform-aws-ecs-container-definition I've also found this, not sure if anyone here has had any experience with it? I was going to experiment with for_each-ing it to create 6 container_defs I can then merge() in my resource "task_definition" - is this the right sort of approach?
RB avatar

I believe you want option 3

Rhys Davies avatar
Rhys Davies

Just out of interest, can I just do this?

Rhys Davies avatar
Rhys Davies
locals {
  celery_queues = {
    1 : ["queue1"],
    2 : ["queue2", "blah", "default"],
    ...
  }
}

resource "aws_ecs_task_definition" "celery" {
  for_each = local.celery_queues
  family                   = "celery"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "4096"
  memory                   = "8192"
  network_mode             = "awsvpc"
  execution_role_arn       = module.ecs_cluster.task_role_arn
  container_definitions = jsonencode([
    {
      name        = "celery_${each.key}",
      image       = blah,
      command     = ["celery", ${each.value}],
      environment = blah,
      essential   = true,
      logConfiguration = {
        logDriver = "awslogs",
        options = {
          awslogs-group         = log_group_name,
          awslogs-region        = log_group_region,
          awslogs-stream-prefix = log_group_prefix
        }
      },
      healthCheck = {
        command     = ["CMD-SHELL", "pipenv run celery -A my_proj inspect ping"],
        interval    = 10,
        timeout     = 60,
        retries     = 5,
        startPeriod = 60
      }
    }
  ])
}
RB avatar

Ya that would work too

Rhys Davies avatar
Rhys Davies

awesome, thanks for the help. I'm a devops of one; it's so good to have somewhere to work through a solution!

Bhavik Patel avatar
Bhavik Patel

Thanks, helped me out as well

Rhys Davies avatar
Rhys Davies

Thanks in advance for any help

othman issa avatar
othman issa

Hello everyone, I have a question: what is the best way to connect a TF module with an API?

Alex Jurkiewicz avatar
Alex Jurkiewicz

AWS API Gateway?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Or something else

othman issa avatar
othman issa

I was reading in TF doc HTTP API

2021-09-14


greg n avatar

good afternoon guys, I think I've found a version issue with cloudposse/terraform-aws-ecs-web-app (version = "~> 0.65.2"). Is this a legit upper version limit, or is versions.tf perhaps just a bit out of date? Thanks

tf -version
Terraform v1.0.2
on linux_amd64

Your version of Terraform is out of date! The latest version
is 1.0.6. You can update by downloading from <https://www.terraform.io/downloads.html>
- services_api_assembly.this in .terraform/modules/services_api_assembly.this
╷
│ Error: Unsupported Terraform Core version
│
│   on .terraform/modules/services_api_alb.alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
│    2:   required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.s3_bucket.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To
│ proceed, either choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵

╷
│ Error: Unsupported Terraform Core version
│
│   on .terraform/modules/services_api_alb.alb.access_logs.this/versions.tf line 2, in terraform:
│    2:   required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either choose
│ another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵

╷
│ Error: Unsupported Terraform Core version
│
│   on .terraform/modules/services_api_alb.alb.default_target_group_label/versions.tf line 2, in terraform:
│    2:   required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.default_target_group_label (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either
│ choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, could be - please open PR to remove upper bound pinning

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

post here and we’ll get it promptly reviewed

Richard Quadling avatar
Richard Quadling

The versions.tf for v0.65.2 https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/versions.tf says

terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.34"
    }
  }
}

Which all looks good. What is the source of the services_api_alb module?

greg n avatar

it’s

  source                    = "cloudposse/alb/aws"
  version                   = "0.23.0"
  context                   = module.this.context


Richard Quadling avatar
Richard Quadling

https://registry.terraform.io/modules/cloudposse/alb/aws/latest is 0.35.3, so you are quite a way behind.

Nikola Milic avatar
Nikola Milic

For some reason, my ec2 instance does not have a public dns assigned, even though it's part of a public subnet. What could be the cause?

managedkaos avatar
managedkaos

During the creation of the resource, did you specify to attach a public IP? Even if the subnet is public, if the default setting for the subnet is to NOT assign a public IP, instances won't get one. (AFAIK)

Nikola Milic avatar
Nikola Milic

Yeah, I was under the impression that it was on by default. Thanks, I think that solved it

managedkaos avatar
managedkaos

2021-09-15

Release notes from terraform avatar
Release notes from terraform
06:53:40 PM

v1.0.7 1.0.7 (September 15, 2021) BUG FIXES: core: Remove check for computed attributes which is no longer valid with optional structural attributes (#29563) core: Prevent object types with optional attributes from being instantiated as concrete values, which can lead to failures in type comparison…

remove incorrect computed check by jbardin · Pull Request #29563 · hashicorp/terraform

The config is already validated, and does not need to be checked again in AssertPlanValid, so we can just remove the check which conflicts with the new optional nested attribute types. Add some mor…

2021-09-16

Vikram Yerneni avatar
Vikram Yerneni

Fellas, is there a way to add a condition when adding S3 bucket/folder level permissions here: https://github.com/cloudposse/terraform-aws-iam-s3-user

For example, I want to add a statement like this:

  {
     "Sid": "AllowStatement3",
     "Action": ["s3:ListBucket"],
     "Effect": "Allow",
     "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"],
     "Condition":{"StringLike":{"s3:prefix":["media/*"]}}
    }

2021-09-17

jose.amengual avatar
jose.amengual
Enforcing best practice on self-serve infrastructure with Terraform, Atlantis and Policy As Code

Here at loveholidays we are heavily dependant on Terraform. All of our Google Cloud infrastructure is managed using Terraform, along with a…

loren avatar

i really wish it were easier to extend atlantis to additional source code hosts. would be fantastic if it worked with codecommit


jose.amengual avatar
jose.amengual

as in multiple atlantis instances, one repo?

loren avatar

no, just as in developing the code to support new source code hosts. last time i looked, it was a bit of a spaghetti mess touching all sorts of core internal parts

2021-09-18

Ozzy Aluyi avatar
Ozzy Aluyi

Hello Guys, I'm trying to create parameters in AWS SSM; any ideas/solutions will be much appreciated.

Ozzy Aluyi avatar
Ozzy Aluyi
data "aws_ssm_parameter" "rds_master_password" {
  name = "/grafana/GF_RDS_MASTER_PASSWORD"
  with_decryption = "true"
}
resource "aws_ssm_parameter" "rds_master_password" {
  name        = "/grafana/GF_RDS_MASTER_PASSWORD"
  description = "The parameter description"
  type        = "SecureString"
  value       = data.aws_ssm_parameter.rds_master_password.value
}
resource "aws_ssm_parameter" "GF_SERVER_ROOT_URL" {
  name  = "/grafana/GF_SERVER_ROOT_URL"
  type  = "String"
  value = "https://${var.dns_name}"
}

resource "aws_ssm_parameter" "GF_LOG_LEVEL" {
  name  = "/grafana/GF_LOG_LEVEL"
  type  = "String"
  value = "INFO"
}

resource "aws_ssm_parameter" "GF_INSTALL_PLUGINS" {
  name  = "/grafana/GF_INSTALL_PLUGINS"
  type  = "String"
  value = "grafana-worldmap-panel,grafana-clock-panel,jdbranham-diagram-panel,natel-plotly-panel"
}

resource "aws_ssm_parameter" "GF_DATABASE_USER" {
  name  = "/grafana/GF_DATABASE_USER"
  type  = "String"
  value = "root"
}

resource "aws_ssm_parameter" "GF_DATABASE_TYPE" {
  name  = "/grafana/GF_DATABASE_TYPE"
  type  = "String"
  value = "mysql"
}

resource "aws_ssm_parameter" "GF_DATABASE_HOST" {
  name  = "/grafana/GF_DATABASE_HOST"
  type  = "String"
  value = "${aws_rds_cluster.grafana.endpoint}:3306"
}
Ozzy Aluyi avatar
Ozzy Aluyi
 Error: Error describing SSM parameter (/grafana/GF_RDS_MASTER_PASSWORD): ParameterNotFound: 
│ 
│   with module.Grafana_terraform.data.aws_ssm_parameter.rds_master_password,
│   on Grafana_terraform/ssm.tf line 1, in data "aws_ssm_parameter" "rds_master_password":
│    1: data "aws_ssm_parameter" "rds_master_password" {
│ 
RB avatar

Looks like you don’t have the parameter created and so your data source is failing to pull it

Ozzy Aluyi avatar
Ozzy Aluyi

@RB thanks. Sorted now.

managedkaos avatar
managedkaos

@Ozzy Aluyi you have a conflict with the data and resource for the parameter named rds_master_password

On line 1, you are trying to read it as data. and on line 5 you are trying to create it as a resource.

If it’s already created and you just want to read it, remove the resource "aws_ssm_parameter" "rds_master_password" {… section.

If you are trying to create it, remove the data "aws_ssm_parameter" "rds_master_password" {... section.

Of course, if you are reading it, you will need to find a way to get the value into place. In summary, you can’t have a data resource that calls on itself.

If you are trying to create and store a password, consider using the random_password resource and storing the result of that in the parameter. https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password

1
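A minimal sketch of that suggestion (parameter name taken from the thread; the length and special settings are arbitrary choices):

resource "random_password" "rds_master_password" {
  length  = 32
  special = false
}

resource "aws_ssm_parameter" "rds_master_password" {
  name  = "/grafana/GF_RDS_MASTER_PASSWORD"
  type  = "SecureString"
  # store the generated password instead of reading it back from SSM
  value = random_password.rds_master_password.result
}
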
Michael Dizon avatar
Michael Dizon

hey guys, i am a little confused about what dns_gbl_delegated refers to in eks-iam https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks-iam/tfstate.tf#L51

terraform-aws-components/tfstate.tf at master · cloudposse/terraform-aws-componentsattachment image

Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/tfstate.tf at master · cloudposse/terraform-aws-components

Michael Dizon avatar
Michael Dizon

is delegated-dns supposed to be added to the global env as well as regional?

Michael Dizon avatar
Michael Dizon

i modified the remote state for dns_gbl_delegated to point to primary-dns – not sure if that’s going to cause any issues later on

Ozzy Aluyi avatar
Ozzy Aluyi

@managedkaos thanks for the solution. the random_password will make more sense,

1
1

2021-09-19

MrAtheist avatar
MrAtheist

Would like some assistance with the following error with a Fargate task. It seems like the stuff inside container_definitions isn’t being registered at all… I’m getting all sorts of errors saying args not found when they are clearly within the template. EDIT: terraform state show data.template_file.main has all the right args in the json.

Fargate only supports network mode 'awsvpc'. Fargate requires that 'cpu' be defined at the task level.

resource "aws_ecs_task_definition" "main" {
  family                   = "${var.app_name}-app"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  #network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  #cpu                      = var.fargate_cpu
  #memory                   = var.fargate_memory
  container_definitions    = data.template_file.main.rendered
}

data "template_file" "main" {
  template = file("./templates/ecs/main_app.json.tpl")

  vars = {
    app_name       = var.app_name
    app_image      = var.app_image
    container_port = var.container_port
    app_port       = var.app_port
    fargate_cpu    = var.fargate_cpu
    fargate_memory = var.fargate_memory
    aws_region     = var.aws_region
  }
}

# ./templates/ecs/main_app.json.tpl
[
  {
    "name": "${app_name}",
    "image": "${app_image}",
    "cpu": ${fargate_cpu},
    "memory": ${fargate_memory},
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/${app_name}",
          "awslogs-region": "${aws_region}",
          "awslogs-stream-prefix": "ecs"
        }
    },
    "portMappings": [
      {
        "containerPort": ${container_port},
        "hostPort": ${app_port}
      }
    ]
  }
]
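For what it’s worth, both quoted errors point at the commented-out arguments: Fargate task definitions must use network_mode = "awsvpc" and define cpu/memory at the task level. A sketch of the resource with them restored (otherwise unchanged from the snippet above):

resource "aws_ecs_task_definition" "main" {
  family                   = "${var.app_name}-app"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  network_mode             = "awsvpc" # required for Fargate
  requires_compatibilities = ["FARGATE"]
  cpu                      = var.fargate_cpu    # required at the task level for Fargate
  memory                   = var.fargate_memory # required at the task level for Fargate
  container_definitions    = data.template_file.main.rendered
}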
RB avatar
GitHub - cloudposse/terraform-aws-ecs-container-definition: Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resourceattachment image

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - GitHub - cloudposse/terraform-aws-ecs-container-…

RB avatar

It constructs the valid JSON for you instead of having to create your own template

1
Pedro Santana avatar
Pedro Santana

Hello folks, I’m trying to use the [AWS MQ Module](https://github.com/cloudposse/terraform-aws-mq-broker) but it looks to have an issue on the Benchmark Infrastructure Security check. However, I can’t see what kind of issue it is on the GitHub page. Can anyone explain this for me?

GitHub - cloudposse/terraform-aws-mq-broker: Terraform module for provisioning an AmazonMQ brokerattachment image

Terraform module for provisioning an AmazonMQ broker - GitHub - cloudposse/terraform-aws-mq-broker: Terraform module for provisioning an AmazonMQ broker

2021-09-20

Vikram Yerneni avatar
Vikram Yerneni

Fellas, is there a way to create multiple users with the module <https://github.com/cloudposse/terraform-aws-iam-s3-user>? I tried to add a variable for creating multiple users, but it’s not picking them up as two users; instead it’s combining them into one <https://github.com/cloudposse/terraform-aws-iam-s3-user/blob/master/examples/complete/fixtures.us-west-1.tfvars#L9>

Vikram Yerneni avatar
Vikram Yerneni

It ended up doing like this:

      ~ user                           = "user1" -> "user1user2" # forces replacement
Vikram Yerneni avatar
Vikram Yerneni

This is the tfvars entry

iam_user_name                                                = "user1, user2"
Vikram Yerneni avatar
Vikram Yerneni

Any clue here fellas @channel

RB avatar

can you reference that module more than once - once for each user ?

Vikram Yerneni avatar
Vikram Yerneni

I basically pulled this module into our gitlab and referenced it as a child module from my parent module.

Vikram Yerneni avatar
Vikram Yerneni

Not sure how I can add one more reference within the same parent module, Ronak

Vikram Yerneni avatar
Vikram Yerneni

If I add multiple references in my parent module like this

# CloudPosse Module for creating AWS IAM User along with S3 Permissions
module "aws-iam-s3-user" {
  count        = var.aws-iam-s3-user_enabled ? 1 : 0
  source       = "[email protected]:qomplx/engineering/infrastructure/terraform-modules/terraform-cloudposse-aws-iam-s3-user.git"
  name         = var.iam_user_name
  s3_actions   = var.s3_actions
  s3_resources = var.s3_resources
}

It will be complicated when the time comes for 50 - 100 users.

RB avatar

you would need to do a for_each like this

# CloudPosse Module for creating AWS IAM User along with S3 Permissions
module "aws_iam_s3_user" {
  for_each     = var.aws-iam-s3-user_enabled ? toset(var.users) : toset([])
  
  source       = "cloudposse/iam-s3-user/aws"
  version      = "0.15.3"
  name         = each.key
  s3_actions   = var.s3_actions
  s3_resources = var.s3_resources
}

then you can pass in var.users = ["user1", "user2"]

something like that would work

Vikram Yerneni avatar
Vikram Yerneni

Sure, let me try this option…

RB avatar

note: for best practices

• i renamed the module name so it uses underscores instead of dashes

• i set the source and version so its pinned

Vikram Yerneni avatar
Vikram Yerneni

Understood…

Vikram Yerneni avatar
Vikram Yerneni

Testing this for_each method…

Vikram Yerneni avatar
Vikram Yerneni

It ended up with this output Ronak

│ Error: Invalid value for input variable
│ 
│   on ./terraform.tfvars line 34:
│   34: users  = ["user1", "user2"]
│ 
│ The given value is not valid for variable "users": string required.
RB avatar

You need to create a variable

Vikram Yerneni avatar
Vikram Yerneni

Actually hang on

RB avatar

Or a local

Vikram Yerneni avatar
Vikram Yerneni

I actually created a variable for users and passed the values after your change

Vikram Yerneni avatar
Vikram Yerneni

And I ended up with The given value is not valid for variable "users": string required.

RB avatar

Change the variable type to list
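
i.e. a minimal declaration for the variable the error is complaining about:

variable "users" {
  type    = list(string)
  default = []
}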

Vikram Yerneni avatar
Vikram Yerneni

aah ok ok

Vikram Yerneni avatar
Vikram Yerneni

one sec

RB avatar

It might help to do some terraform tutorials to pick up the basics

Vikram Yerneni avatar
Vikram Yerneni

Yeah, I am not an expert in TF… learning as I go. And in this case the for_each with a set (toset([])) instead of 0, plus a few other changes, worked

1
1
Vikram Yerneni avatar
Vikram Yerneni

Thanks Ronak for the input here. Appreciate it

RB avatar

Happy to help!

1
Vikram Yerneni avatar
Vikram Yerneni

Hi Ronak, can I bother you for one more question I am having here while dealing with this module?

RB avatar

Hi Vikram. I may be busy but feel free to post it here

1
Vikram Yerneni avatar
Vikram Yerneni

Sure then… My question is: since I got the creation of multiple users sorted out, I am now trying to give permissions to an individual user for a specific S3 resource. But the problem is that when I give multiple S3 resources (under s3_resources), all users get permissions on all S3 resources by default. Basically, I want to target an individual user at an individual S3 resource.

Vikram Yerneni avatar
Vikram Yerneni

I am missing the logic on how to get to this objective using this module, Ronak…

RB avatar

couldn’t you use something like module.aws_iam_s3_user.user-1 to reference a specific user ?

RB avatar

or perhaps i’m misunderstanding

Vikram Yerneni avatar
Vikram Yerneni

Basically, this is how my setup is: main.tf

  iam_user_name                  = local.iam_user_name
  s3_actions                     = var.s3_actions
  s3_resources                   = local.s3_resources
  aws-iam-s3-user_enabled        = var.aws-iam-s3-user_enabled

locals {
  s3_resources                   = ["S3 bucket 1", "S3 bucket 2"]
  iam_user_name                  = ["IAM User 1", "IAM User 2"]
}

And the tfvars file has the S3:actions (get object)

So what’s happening here is that all IAM users are getting permissions on all S3 buckets. I am trying to tie IAM user 1 to S3 bucket 1 only, IAM user 2 to S3 bucket 2, and so on….

Vikram Yerneni avatar
Vikram Yerneni

In the above code, I need to link each iam_user_name with a specific s3_resources

RB avatar

i think you may want this zipmap function https://www.terraform.io/docs/language/functions/zipmap.html

zipmap - Functions - Configuration Language - Terraform by HashiCorp

The zipmap function constructs a map from a list of keys and a corresponding list of values.

RB avatar

zipmap(local.iam_user_name, local.s3_resources)

RB avatar

that will create a mapping of the user to the s3 resource
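
A minimal sketch of how that map could drive the module, assuming the module takes one user name and a list of resources per instance:

locals {
  user_to_bucket = zipmap(local.iam_user_name, local.s3_resources)
}

module "aws_iam_s3_user" {
  for_each = local.user_to_bucket

  source       = "cloudposse/iam-s3-user/aws"
  version      = "0.15.3"
  name         = each.key      # the IAM user
  s3_actions   = var.s3_actions
  s3_resources = [each.value]  # only that user's bucket
}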

Vikram Yerneni avatar
Vikram Yerneni

Ok, I am gonna try to work with this zipmap function and will let u know if i find a solution

Vikram Yerneni avatar
Vikram Yerneni

Thanks again Ronak

Vikram Yerneni avatar
Vikram Yerneni

I used a key/value pair to match the iam user and s3 buckets Ronak….

Vikram Yerneni avatar
Vikram Yerneni

Thanks man….

np1
RB avatar

Awesome!

David avatar

Hi, all. I’m trying to use cloudposse/terraform-aws-cloudfront-s3-cdn in a module with an existing origin bucket managed in a higher level block using cloudposse/terraform-aws-s3-bucket. I’m getting a continual change cycle where the CDN module sets the origin bucket policy, but then the S3 module goes in and wants to re-write the policy. I’m not sure how to address this. Is there a way to get the S3 module to ignore_changes on the bucket policy or pass in the CDN OAI policy bits so that they’re not stomped on by S3 module runs?

David avatar

FYI, I addressed this by copying out the bucket policy and hard-coding it into the s3-bucket module. This is exceptionally gross, but it lets my applies proceed.

joshmyers avatar
joshmyers

:wave: Anyone know if it’s possible to ignore_changes on an attribute in a dynamic block? Doesn’t seem so.

2021-09-21

loren avatar

Anyone building self-hosted GitHub Action Runners using terraform? I found this module, which looks pretty reasonable… https://github.com/philips-labs/terraform-aws-github-runner

GitHub - philips-labs/terraform-aws-github-runner: Terraform module for scalable GitHub action runners on AWSattachment image

Terraform module for scalable GitHub action runners on AWS - GitHub - philips-labs/terraform-aws-github-runner: Terraform module for scalable GitHub action runners on AWS

RB avatar

Yes, I’ve come across this one. It’s very nice!

GitHub - philips-labs/terraform-aws-github-runner: Terraform module for scalable GitHub action runners on AWSattachment image

Terraform module for scalable GitHub action runners on AWS - GitHub - philips-labs/terraform-aws-github-runner: Terraform module for scalable GitHub action runners on AWS

RB avatar
terraform-aws-components/modules/github-runners at master · cloudposse/terraform-aws-componentsattachment image

Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/modules/github-runners at master · cloudposse/terraform-aws-components

loren avatar

oh nice! in case you didn’t see it, support for ephemeral (one-time) runners was just released, https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/

GitHub Actions: Ephemeral self-hosted runners & new webhooks for auto-scalingattachment image

GitHub Actions: Ephemeral self-hosted runners & new webhooks for auto-scaling

Frank avatar

What is considered a “best practice” when dealing with many projects that are mostly similar in setup / configuration? A lot of our projects share ~90-95% of the same setup approach (e.g. VPC + ALB + ECS + RDS + Redis + SES + ACM + SSM) and only differ slightly (some have no Redis or no RDS, or additional parameters assigned to the ECS instance).

For each project we currently have separate Git repositories, and the current approach when a new infrastructure needs to be built is to copy in all the Terraform code from one of the other projects and modify it accordingly (mostly replacing vars, adding in some additional ECS Secrets / Parameters etc). This is fairly quick to do and is also flexible, as we can simply add or remove the things we do (not) need.

But it doesn’t feel like the most optimal approach. It’s also somewhat of a PITA if a change has to be made across all projects.

A few idea’s that spring to mind to address this:

  1. Create a Terraform “app” module where we can toggle components using variables (e.g. redis_enable = false), use this as only module and add in optional custom extra’s (e.g. a project that needs a service not covered by the app module)
  2. Use Atmos (but this appears to be pretty much the same way by copy/pasting) I’m eager to learn how others are doing this.
Michael Dizon avatar
Michael Dizon

+1 for Atmos

Matt Gowie avatar
Matt Gowie

Problem with the single App module is that you’ll run into your root module being too large, which can be a huge pain due to large blast radius and a host of other annoying problems.

I’d suggest atmos and the SweetOps workflow as well. It is copying + pasting using vendir, so it follows a defined pattern and ensures that you don’t end up drifting your components (root modules) from one another. You’ll need to make that a policy at your org, but that shouldn’t be too hard: “No one updates components locally — updates only go upstream and then they’re updated in the consuming project via vendir”.

You could also look into potentially consolidating all your git repos and then each of your environments / projects just becomes another Stack file.

joshmyers avatar
joshmyers

Yeah, I’ve stayed away from a single app module but have a similar issue. Lots of same but slightly different modules to compose a service. One way could be to have a “template” terraform repo that creates the real service repo based on some vars. Not sure how I feel about this. Plenty of tools out there for templating same but different services

Frank avatar

Thanks @Matt Gowie!

The root module being too large is definitely a problem.

Yesterday - before I asked this question - I was experimenting with building one. I wanted everything to be toggle-able (ecs on/off, redis on/off, acm cert on/off, rds on/off etc), but even after tinkering on it for ~2 hours it had already become quite complex, with a large number of enabled/count/try() etc.

Looking into Atmos has been on my backlog ever since its demo in Office Hours a few months ago. Good excuse to spend some time on that now I guess :-)

I did find https://github.com/cloudposse/atmos/blob/master/example/vendir.yml and https://github.com/cloudposse/terraform-aws-components which seems like a good starting point.

atmos/vendir.yml at master · cloudposse/atmosattachment image

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - atmos/vendir.yml at master · cloudposse/atmos

GitHub - cloudposse/terraform-aws-components: Opinionated, self-contained Terraform root modules that each solve one, specific problemattachment image

Opinionated, self-contained Terraform root modules that each solve one, specific problem - GitHub - cloudposse/terraform-aws-components: Opinionated, self-contained Terraform root modules that each…

Matt Gowie avatar
Matt Gowie

@Frank Start with https://docs.cloudposse.com/ — I wrote those up earlier this year and they cover a good intro of what you can do and how it all works out. Would be great to hear any feedback as well!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Keep in mind that with atmos you get import functionality, so you can define the stack and then import it to rapidly deploy. However, there’s a lot of other architectural decisions we make in how we design our modules/components that ensures it works very well for us.

Frank avatar

Excellent, thanks. It’s quite a shift from how we’re doing things right now, but it’s a better approach for maintaining many projects. And of course being able to onboard new customers/environments even faster.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, agreed - it’s a shift that may require some juggling.

Ryan Ryke avatar
Ryan Ryke

has anyone been able to get terraform-aws-ecs-web-app to work with for_each? it seems to be cranky with the embedded provider configuration in the github-webhooks module. https://github.com/cloudposse/terraform-github-repository-webhooks/blob/master/main.tf

terraform-github-repository-webhooks/main.tf at master · cloudposse/terraform-github-repository-webhooksattachment image

Terraform module to provision webhooks on a set of GitHub repositories - terraform-github-repository-webhooks/main.tf at master · cloudposse/terraform-github-repository-webhooks

jose.amengual avatar
jose.amengual

I have been a contributor for that module


Ryan Ryke avatar
Ryan Ryke
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Module module.apps.module.web_app.module.ecs_codepipeline.module.github_webhooks contains provider configuration
│ 
│ Providers cannot be configured within modules using count, for_each or depends_on.
Ryan Ryke avatar
Ryan Ryke

yeah i think im in there somewhere also

jose.amengual avatar
jose.amengual

there was a conversation about moving the provider out of the module

Ryan Ryke avatar
Ryan Ryke

would be bueno, can you link me to that ?

jose.amengual avatar
jose.amengual

I used to work for CloudPosse

1
jose.amengual avatar
jose.amengual

I mean internally

jose.amengual avatar
jose.amengual

so the way I used it is that I added the provider in my module and that will take precedence over the cloudposse module

Ryan Ryke avatar
Ryan Ryke

but will that get rid of the error… the provider is still there

jose.amengual avatar
jose.amengual

the reason why it was there was so that you could use the anonymous API or credentials passed through the GITHUB_* env variables, which the provider can read

Ryan Ryke avatar
Ryan Ryke

right, would be nice if it just needed to be defined in the root

jose.amengual avatar
jose.amengual

send a PR, I can approve it

Ryan Ryke avatar
Ryan Ryke

yeah i think it fundamentally changes the codepipeline module

Ryan Ryke avatar
Ryan Ryke

not sure anyone would be too happy with that change

jose.amengual avatar
jose.amengual

it is a pretty bad practice to set the provider in a submodule

jose.amengual avatar
jose.amengual

what I did was to use the ecs-web-app module but I set the github stufff outside of that module

jose.amengual avatar
jose.amengual

the access for codepipeline can be done after the fact and it will still work

Ryan Ryke avatar
Ryan Ryke

yeah, none of that will work with a for_each loop

Ryan Ryke avatar
Ryan Ryke
Ability to pass providers to modules in for_each · Issue #24476 · hashicorp/terraformattachment image

Use-cases I&#39;d like to be able to provision the same set of resources in multiple regions a for_each on a module. However, looping over providers (which are tied to regions) is currently not sup…

jose.amengual avatar
jose.amengual

yes

András Sándor avatar
András Sándor

Following up on this question, I’m having the same issue and wondering if anyone has a workaround. I’m using the ecs-web-app module, which calls the codepipeline child module, which in turn calls the github webhooks child module. I get the following error:

│ Error: Module module.ecs_web_app.module.ecs_codepipeline.module.github_webhooks contains provider configuration
│ 
│ Providers cannot be configured within modules using count, for_each or
│ depends_on.

I’m using codestar connections so would not need the webhooks module at all. Any way to disable github webooks module from ecs-web-app? My only idea right now is to have all these modules in a local source and modify them to get rid of the validation error.

jose.amengual avatar
jose.amengual

that module is opinionated and uses github, so you could disable the webhook and do it yourself

Ryan Ryke avatar
Ryan Ryke
itsacloudlife/terraform-aws-ecs-web-app-no-pipeline

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.

jose.amengual avatar
jose.amengual

I have used that module with no codepipeline before, but if you want to support other products PRs are welcome

Ryan Ryke avatar
Ryan Ryke

i’m not sure if they updated it, but the github provider in the codepipeline sub-module is what busted it

Ryan Ryke avatar
Ryan Ryke

even when i disable the sub-module it’s still cranky

2021-09-22

R Dha avatar

any good resources to learn terraform for gcp?

ByronHome avatar
ByronHome

Hi everyone :hand:, I have weird behavior with the s3 terraform resource, specifically with aws_s3_bucket_object. I have a local array list containing .csv values, and I need to create an s3 object for each element of the list. This is my terraform code:

locals {
  foo_values = [
    {
      "name"    = "foo_a"
      "content" = <<-EOT
var_1,var_2,var_3,var_4
value_1,value_2,value_3,value_4
EOT
    },
    {
      "name"    = "foo_b"
      "content" = <<-EOT
var_1,var_2,var_3,var_4
value_1,value_2,value_3,value_4
EOT
    }
  ]
}

aws_s3_bucket_object

resource "aws_s3_bucket_object" "ob" {
  bucket = aws_s3_bucket.b.id
  count  = length(local.foo_values)
  key    = "${local.foo_values[count.index].name}.csv"
  content      = local.foo_values[count.index].content
  content_type = "text/csv"
} 

When I apply it locally, all works fine, and when I then run terraform plan it gives me a “No changes. Infrastructure is up-to-date.” message. My coworkers ran terraform plan and got the same message. But when I run terraform plan in a CodeBuild container, with the same terraform version and no code changes, the plan shows changes to apply.

ByronHome avatar
ByronHome

The content attr of aws_s3_bucket_object shows a diff against the terraform tfstate even though the code has not been modified, and this only appears when running terraform plan in the CodeBuild context. Run locally, all is ok. Does anyone know what I’m doing wrong? I am using terraform version 0.12.29. Thanks!!

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

looking for some advice if possible … i have a go binary called rds-to-s3-exporter that needs to run as a lambda in each account. I have two options here:

  1. Add the binary as a zip file to a core s3 bucket
  2. Push a docker image to a core ECR registry

On both occasions I need to make changes to the bucket policy or registry policy when we create a new account. does anyone have a nice way to do this?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Are all accounts in the same organization?

loren avatar

run the lambda centrally, using assume-role to gain access to other accounts?

loren avatar

as part of the new account process, create an s3 bucket in the account, push the binary there, and create the lambda in the account?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

my other option is using a gitlab release for the binary and then using a local provisioner in the module to get the zip file

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

this way i don’t need to worry about how many accounts we create as this will just work regardless

loren avatar

a gitlab release… that would be an interesting provider datasource… have the provider retrieve the binary instead of a local provisioner…

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

For speed I’m think of just using a local provisioner trying to work out how to obtain it though as the glab binary requires interaction

Release notes from terraform avatar
Release notes from terraform
06:33:43 PM

v1.1.0-alpha20210922 1.1.0 (Unreleased) UPGRADE NOTES: Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph…

Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

Question: Curious to know if someone has a solution to bootstrap RDS Postgres for IAM authentication, specifically creating and granting the IAM user in the database?

for more context: https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-connect-using-iam/

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Can you explain what the gap is? Technically, you set iam_database_authentication_enabled to true on the aws_db_instance

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

On the IAM role front, this is what we do:

# IAM Policy: allow DB auth
resource "aws_iam_role_policy" "db-auth" {
  count = length(local.psql_users)

  name = "db-auth"
  role = element(local.roles, count.index)

  policy = <<-EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DBpermissions",
                "Effect": "Allow",
                "Action": [
                    "rds-db:connect"
                ],
                "Resource": [
                    "arn:aws:rds-db:${var.aws_region}:${var.aws_account}:dbuser:${module.rds.rds_resource_id}/${element(local.psql_users, count.index)}"
                ]
            }
        ]
    }
    EOF
}
Kevin Neufeld(PayByPhone) avatar
Kevin Neufeld(PayByPhone)

@Yoni Leitersdorf (Indeni Cloudrail) maybe I missed the ease of it but how are you populating the local user?

CREATE USER iamuser WITH LOGIN; 
GRANT rds_iam TO iamuser;

currently teams are doing this manually. The local-exec provisioner requires connectivity and access which our gitlab runners do not have. Wondering what others do before I dive into this.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Ah yes, we use the local-exec to do it.
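
Roughly like this; a hedged sketch, assuming the runner has network access to the database and psql installed (resource and output names are hypothetical):

resource "null_resource" "grant_rds_iam" {
  # re-run when the user list changes
  triggers = {
    users = join(",", local.psql_users)
  }

  provisioner "local-exec" {
    # authentication is assumed to come from PGPASSWORD in the environment
    command = <<-EOT
      psql "host=${module.rds.endpoint} user=${var.master_username} dbname=postgres" \
        -c 'CREATE USER iamuser WITH LOGIN;' \
        -c 'GRANT rds_iam TO iamuser;'
    EOT
  }
}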

mrwacky avatar
mrwacky

How are folks dealing with the braindeadedness that is TF 0.14+ .terraform.lock.hcl files? We have a pretty large set of Terraform roles/modules, and boy what a pain to manage & upgrade a zillion different .terraform.lock.hcl files..

loren avatar

using terragrunt, i just delete it using hooks, but also add it to .gitignore…

  before_hook "terraform_lock" {
    commands = ["init"]
    execute  = ["rm", "-f", ".terraform.lock.hcl"]
  }

  after_hook "terraform_lock" {
    commands = concat(get_terraform_commands_that_need_locking(), ["init"])
    execute  = ["rm", "-f", "${get_terragrunt_dir()}/.terraform.lock.hcl"]
  }
mrwacky avatar
mrwacky

Ha, yes, @Gabe is trying to get me to just git ignore them …

mrwacky avatar
mrwacky

I wish they had some sort of hierarchical method like .gitconfig so I could populate the list once per git repository…

loren avatar

for CI, we do also already zip up a pinned terraform binary and provider cache, and host the zip. then before execution, retrieve and extract the bundle. so not too much concern about the supply chain risks that the lock is trying to protect you from…

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@mrwacky like **/.terraform.lock.hcl?

Alex Jurkiewicz avatar
Alex Jurkiewicz

How do you manage that Loren?

loren avatar

you mean the bundle @Alex Jurkiewicz? currently still using terraform-bundle. eventually we’ll switch to terraform providers mirror. wrapped in a make target

1
mrwacky avatar
mrwacky

@Erik Osterman (Cloud Posse) - yup

mrwacky avatar
mrwacky

I have worked up a disgusting shell script to regenerate all of them as quickly as possible.

mrwacky avatar
mrwacky

I mean – I wish Terraform would walk up the filesystem tree to find .terraform.lock.hcl similar to how git searches for .gitignore files.. Then I could have as few as 1 lockfile per repo

2
loren avatar

Open a feature request!

2
Valter Silva avatar
Valter Silva

Hi All, I’ve started using the following module in one of my customers as a quickstart. We are making some modifications to meet our requirements. We’ve added the LICENSE file but I can’t find the NOTICE file as stated in the README.md file. By not having a NOTICE file I believe we need to add a header to our *tf files, correct? https://github.com/cloudposse/terraform-aws-ecs-alb-service-task

Alex Jurkiewicz avatar
Alex Jurkiewicz

Not quite sure what you are thinking of, but the Apache Software Licence is permissive. If you fork the module, you can do whatever you want, except strip the CloudPosse copyright

1
Valter Silva avatar
Valter Silva

I was under the impression that we must keep the LICENSE file and add the CloudPosse copyright as a header in every file

Alex Jurkiewicz avatar
Alex Jurkiewicz

do you want to relicense your fork? What you describe might be necessary in that case. But the simplest approach is to fork and change nothing about the license; commit your changes on top of the existing files

mrwacky avatar
mrwacky

Your changes are too custom to send back as a PR to the cloudposse version?

Valter Silva avatar
Valter Silva

Hi @mrwacky, yes

1

2021-09-23

Dustin Lee avatar
Dustin Lee

Hello, anybody hitting the issue with multiple MX records on https://github.com/cloudposse/terraform-cloudflare-zone, getting stopped due to duplicate object errors?

GitHub - cloudposse/terraform-cloudflare-zone: Terraform module to provision a CloudFlare zone with DNS records, Argo, Firewall filters and rulesattachment image

Terraform module to provision a CloudFlare zone with DNS records, Argo, Firewall filters and rules - GitHub - cloudposse/terraform-cloudflare-zone: Terraform module to provision a CloudFlare zone w…

Dustin Lee avatar
Dustin Lee

i think the object key may need to pull the priority into the key id to differentiate

Dustin Lee avatar
Dustin Lee

i changed local.records to pull it in … bit hacky. i got lost down the rabbit hole with the if logic and formatting when record.priority was present, so went with try() instead. seems to work; cloudflare must throw it away if it doesn’t make sense

Alex Jurkiewicz avatar
Alex Jurkiewicz

sounds like a good change to submit as a pull request

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

I think use of try here makes sense. You could also do something with lookup and coalesce, but try seems like a good simple fit
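
For context, the key change under discussion looks roughly like this; a sketch with a hypothetical record shape, not the module’s actual internals:

locals {
  records = {
    # include priority in the key so two MX records with the same name/type
    # don't collide; try() falls back when a record has no priority
    for r in var.records :
    join("-", [r.name, r.type, try(r.priority, "none")]) => r
  }
}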

Dustin Lee avatar
Dustin Lee

i think i’d prefer to have it that it checks for the record.priority and creat the record if exists, than just blat in a default and send it to cloudflare and hope they don’t stop taking it, if it’s not appropriate down the track what you reckon ?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t know this module, from what I can see of your change you only changed the key used by items in local.records. But it seems you are now talking about changing the records this module creates also? I can’t comment on that, I don’t know enough

Dustin Lee avatar
Dustin Lee

Some records have a priority and some don’t; the try() will throw in a default value and it will be sent to cloudflare. Cloudflare takes it and probably just doesn’t do anything with it for that record type.

Dustin Lee avatar
Dustin Lee

i’ll play with it and see how it goes

Dustin Lee avatar
Dustin Lee

2021-09-24

Jakub Igła avatar
Jakub Igła

Hi Folks, I’m using your s3-website module, but whenever I try to run terraform plan the data source data "aws_iam_policy_document" "default" gets refreshed with different output and it messes up my plan, which should produce “no chanfges”. I’m on latest terraform, the module version is 0.17.1. In the thread I’m attaching what it produces.

Jakub Igła avatar
Jakub Igła
 data "aws_iam_policy_document" "default"  {
      ~ id      = "3597815271" -> (known after apply)
      ~ json    = jsonencode(
            {
              - Statement = [
                  - {
                      - Action    = "s3:GetObject"
                      - Effect    = "Allow"
                      - Principal = {
                          - AWS = "*"
                        }
                      - Resource  = "arn:aws:s3:::sandbox.example.com/*"
                      - Sid       = ""
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> (known after apply)
      - version = "2012-10-17" -> null

      ~ statement {
          - effect        = "Allow" -> null
          - not_actions   = [] -> null
          - not_resources = [] -> null
            # (2 unchanged attributes hidden)

            # (1 unchanged block hidden)
        }
    }
Jakub Igła avatar
Jakub Igła

and that’s how I invoke it:

module "this_s3_website" {
  source  = "cloudposse/s3-website/aws"
  version = "0.17.1"
  context = module.this.context

  logs_enabled       = true
  encryption_enabled = false
  hostname           = var.hostname
  parent_zone_id     = var.parent_zone_id
}
Jakub Igła avatar
Jakub Igła

I did some troubleshooting and the data "aws_iam_policy_document" gets “rebuilt” on every terraform plan only when I have

provider "aws" {
  default_tags {
    tags = ...
  }
}

If I remove it, the plan is correct - No changes. Your infrastructure matches the configuration.

Is it something to raise a bug for?

RB avatar

that’s a bug with the provider’s default_tags parameter

Almondovar avatar
Almondovar

Hi all, in our terraform we have environments and we differentiate between the different envs by using different variables. So far so good, but what happens when we don’t want the terraform code to be exactly the same in all envs? For example, in dev i want to do waf filtering by ip’s; in staging i need to combine ip’s & urls. That changes the terraform code, and of course it then tries to apply this code everywhere and not only in one specific env. Is there any way to put some programmatic intelligence behind the tf, like:

if env = dev then run code A 
elseif env = stage run code B 
elseif env = prod run code C

thanks.
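
Terraform has no if/else around whole blocks of resources, but the usual workaround is gating each variant with count on an env variable. A minimal sketch (resource contents are hypothetical):

variable "env" {
  type = string
}

# only created in dev
resource "aws_wafv2_ip_set" "dev_allowed_ips" {
  count = var.env == "dev" ? 1 : 0

  name               = "dev-allowed-ips"
  scope              = "REGIONAL"
  ip_address_version = "IPV4"
  addresses          = ["10.0.0.0/16"]
}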

loren avatar

Could be a nifty tool … https://github.com/im2nguyen/rover

GitHub - im2nguyen/rover: Interactive Terraform visualization. State and configuration explorer.attachment image

Interactive Terraform visualization. State and configuration explorer. - GitHub - im2nguyen/rover: Interactive Terraform visualization. State and configuration explorer.

Grummfy avatar
Grummfy

nice, does it support multiple state files? a replacement for terraboard?

Alyson avatar

Hi, it looks like the desired_size variable from the eks-node-group module is not working.

Anyone else going through this?

terraform-aws-eks-node-group - Version 0.26.0 Terraform v0.14.11

RB avatar

the input var is passed to the local ng map which is then passed in as scaling_config in the aws_eks_node_group resource

https://github.com/cloudposse/terraform-aws-eks-node-group/blob/34be126797af6673ca0375d6e60bca5616257786/main.tf#L129-L133

terraform-aws-eks-node-group/main.tf at 34be126797af6673ca0375d6e60bca5616257786 · cloudposse/terraform-aws-eks-node-groupattachment image

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Alyson avatar
Initial desired number of worker nodes (external changes ignored)	

Does the desired_size variable only work when we create the nodes? After creating the nodes, this variable no longer has any effect. Is that right?

RB avatar

I’m unsure. You may have to dig into the AWS docs regarding the eks node group

1

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way to fire a cloudwatch event ad-hoc?

jose.amengual avatar
jose.amengual

change the cron to run every 10 min and check

RB avatar

you can do an aws ecs run-task command i believe

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am getting this …

There was an error while saving rule cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1.
Details: 1 validation error detected: Value 'AWSEvents_cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1_terraform-20210924172025276000000001' at 'statementId' failed to satisfy constraint: Member must have length less than or equal to 100.
RB avatar
✗ echo 'AWSEvents_cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1_terraform-20210924172025276000000001' | wc -c
107
RB avatar

you need to reduce the number of chars of that name

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

my rule is called [cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1](https://eu-west-1.console.aws.amazon.com/cloudwatch/home?region=eu-west-1#rules:name=cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1)

RB avatar

it looks like it’s prefixing AWSEvents_ to it and suffixing it with _terraform-20210924172025276000000001 which increases your name which goes over the max chars

RB avatar

are you using a name_prefix instead of a name argument on the resource ?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

name

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
resource "aws_cloudwatch_event_rule" "weekly" {
  count               = var.schedule == "weekly" ? 1 : 0
  name                = "cron-${var.database_name}-lambda-weekly-snapshots-to-s3-${data.aws_region.current.name}"
  description         = "Cron to start the lambda that exports ${var.database_name} snapshots to S3 every Monday at 10am."
  schedule_expression = "cron(0 10 ? * MON *)"
}
loren avatar

make the name less descriptive and rely on the description field…?

loren avatar

you could put two rules, one that triggers one time now
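
A one-shot rule can be expressed with a cron that matches a single date, since AWS cron expressions include a year field. A sketch (name and date are arbitrary):

resource "aws_cloudwatch_event_rule" "oneshot" {
  name                = "oneshot-snapshot-export"
  description         = "Fires once at the given date/time, then never matches again."
  schedule_expression = "cron(30 17 24 9 ? 2021)"
}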

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it’s weird that it was created fine

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it’s when i tried to change it that it didn’t like it

RB avatar

i find it odd that it’s using that random terraform suffix without you using a name_prefix

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a recommended way to alert on a failed lambda invocation?
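
One common approach is a CloudWatch alarm on the function’s Errors metric wired to an SNS topic; a minimal sketch (function and topic names are hypothetical):

resource "aws_cloudwatch_metric_alarm" "lambda_errors" {
  alarm_name          = "rds-to-s3-exporter-errors"
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    FunctionName = aws_lambda_function.exporter.function_name
  }

  # notify on any failed invocation in a 5-minute window
  alarm_actions = [aws_sns_topic.alerts.arn]
}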

2021-09-25

Joaquin Menchaca avatar
Joaquin Menchaca

SweetOps is no longer using helmfile? Is terraform used instead for k8s/helm? Any issues w/ current API features not supported w/ the k8s provider, e.g. Ingress?

RB avatar

we’ve been using helm_release recently

https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release

we’ve converted a few of the helm files and haven’t noticed any glaring issues so far

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, we’re mostly using terraform’s helm provider now natively

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam roleattachment image

Create helm release and common aws resources like an eks iam role - GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam role

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or where we’re deep in with helmfile for backing-services, we’ve started using the helmfile provider for terraform.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for CD, we’re mostly investing in helm + argocd

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s nothing wrong with helmfile, it’s just we were able to consolidate without giving up too much.

2021-09-26

Almondovar avatar
Almondovar

Hi all, i am trying to use the and_statement to combine different statements (we need to combine ip filtering with url filtering). The issue is that it is not clear from the documentation whether the and_statement block should contain the statement argument, or the opposite, the statement block should contain the and_statement argument. I tried several ways of composing the code; can someone please tell me what i am doing wrong?

resource "aws_wafv2_web_acl" "alb_waf" {
name = "ALB-WAF"
description = "ALB"
scope = "REGIONAL"

default_action {
block {}
}

rule {
name = "allow-specific-ips"
priority = 1

action {
  allow {}
}
statement {
  and_statement {
    ip_set_reference_statement {
      arn = aws_wafv2_ip_set.ipset.arn
    }
    regex_pattern_set_reference_statement {
      arn = aws_wafv2_regex_pattern_set.staging_regex.arn
    }
  } # and_statement
} # statement block

error code

Error: Unsupported block type

on main.tf line 56, in resource "aws_wafv2_web_acl" "alb_waf":
56: regex_pattern_set_reference_statement {

Blocks of type "regex_pattern_set_reference_statement" are not expected here.

    
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Ben Smith (Cloud Posse)

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Hi @Almondovar, I agree these docs can be terribly confusing. It looks like the rule{} must contain a statement{}, which itself can contain an and_statement; the and_statement can then contain multiple statement blocks to join with AND. Something like:

resource "aws_wafv2_web_acl" "alb_waf" {
  name        = "ALB-WAF"
  description = "ALB"
  scope       = "REGIONAL"

  default_action {
    block {}
  }

  rule {
    name     = "allow-specific-ips"
    priority = 1

    action {
      allow {}
    }
    statement {
      and_statement {
        statement {
          ip_set_reference_statement {
            arn = "aws_wafv2_ip_set.ipset.arn"
          }
        }
        statement {
          regex_pattern_set_reference_statement {
            arn = "aws_wafv2_regex_pattern_set.staging_regex.arn"
            text_transformation {
              priority = 0
              type = ""
            }
          }
        }
      }
      # and_statement
    }
    # statement block
    visibility_config {
      cloudwatch_metrics_enabled = false
      metric_name = null
      sampled_requests_enabled = false
    }
  }
  visibility_config {
    cloudwatch_metrics_enabled = false
    metric_name = null
    sampled_requests_enabled = false
  }
}

Another option for WAF rules would be to create them through AWS Firewall manager under WAF / WAF_v2 Policies

Ben Smith (Cloud Posse) avatar
Ben Smith (Cloud Posse)

Note the above won’t just work as visibility config has to be set properly. but that should atleast help with the format of the rules

1
Fizz avatar

Try statement -> and_statement -> statement -> ip_set_reference_statement

Almondovar avatar
Almondovar

thank you very much Fizz

2021-09-27

Ben Kero avatar
Ben Kero

Hi all. I’m not sure if this is the right place but I’m looking for a review for a PR I made to one of the Cloudposse Terraform AWS modules: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/54

Add custom policies by bkero · Pull Request #54 · cloudposse/terraform-aws-cloudtrail-s3-bucketattachment image

what Allows the policy variable to be used in a useful way to set a custom S3 bucket policy Conditionally the data resource for the unused default bucket policy why Issue #19 outlines why this i…

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Ben Smith (Cloud Posse)

Add custom policies by bkero · Pull Request #54 · cloudposse/terraform-aws-cloudtrail-s3-bucketattachment image

what Allows the policy variable to be used in a useful way to set a custom S3 bucket policy Conditionally the data resource for the unused default bucket policy why Issue #19 outlines why this i…

Ben Kero avatar
Ben Kero

Thanks Erik. I see an approval and tests passing. Now it just needs to be merged.

Alex Jurkiewicz avatar
Alex Jurkiewicz
Ignore admin credentials for snapshots/replicated clusters by alexjurkiewicz · Pull Request #119 · cloudposse/terraform-aws-rds-clusterattachment image

Fixes errors like: Error: error creating RDS cluster: InvalidParameterCombination: Cannot specify user name for instance cluster replication cluster

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Yonatan Koren

Ignore admin credentials for snapshots/replicated clusters by alexjurkiewicz · Pull Request #119 · cloudposse/terraform-aws-rds-clusterattachment image

Fixes errors like: Error: error creating RDS cluster: InvalidParameterCombination: Cannot specify user name for instance cluster replication cluster

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

thanks to @jose.amengual

1
cool-doge1

2021-09-28

Alyson avatar

I am getting a timeout when creating an eks cluster using module version 0.43.2.

https://github.com/cloudposse/terraform-aws-eks-cluster/

GitHub - cloudposse/terraform-aws-eks-cluster: Terraform module for provisioning an EKS clusterattachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

2021-09-29

Release notes from terraform avatar
Release notes from terraform
05:53:37 PM

v1.0.8 1.0.8 (September 29, 2021) BUG FIXES: cli: Check required_version as early as possibly during init so that version incompatibility can be reported before errors about new syntax (#29665) core: Don’t plan to remove orphaned resource instances in refresh-only plans (<a href=”https://github.com/hashicorp/terraform/issues/29640“…

Check required_version as early as possible by jbardin · Pull Request #29665 · hashicorp/terraformattachment image

Our current check of required_version happens after parsing the configuration, which may not be possible if new configuration constructs have been added to the language since the declared required_…

core: Fix refresh-only interaction with orphans by alisdair · Pull Request #29640 · hashicorp/terraformattachment image

When planning in refresh-only mode, we must not remove orphaned resources due to changed count or for_each values from the planned state. This was previously happening because we failed to pass thr…

2021-09-30

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

AWS just launched a new Cloud Control API (a single CRUD API for AWS resources) and Terraform has a new provider for it (links still WIP I guess?): https://aws.amazon.com/blogs/aws/announcing-aws-cloud-control-api/

AWS Cloud Control API, a Uniform API to Access AWS & Third-Party Services | Amazon Web Servicesattachment image

Today, I am happy to announce the availability of AWS Cloud Control API a set of common application programming interfaces (APIs) that are designed to make it easy for developers to manage their AWS and third-party services. AWS delivers the broadest and deepest portfolio of cloud services. Builders leverage these to build any type of […]

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Link to the new provider: https://github.com/hashicorp/terraform-provider-awscc

Hashicorp blog yet to be posted

loren avatar

yeah that thing seems incredibly aspirational. we’ll see.

loren avatar

docs indicate it depends on cloudformation resource support. i guess it’s nice to have that exposed natively (best of both worlds!), but that support hasn’t always moved quickly…

loren avatar

i’m curious if the awscc provider accepts the same authentication mechanisms and configuration settings as the aws provider… can i pass a profile? a role_arn? credential_process? how do i override endpoints?

OliverS avatar
OliverS

(Just saw the previous post about AWS Cloud Control.) Being based on CloudFormation, I wonder how much of that bleeds through; esp. since CF now supports stop-on-exception and resume-from-last-exception, maybe the TF interface to the AWS Cloud Control API is ok.

loren avatar

i’m figuring we’ll see more multi-provider modules for a bit… things the aws provider does, things the awscc provider does… not loving that idea

loren avatar

i’m really hoping this doesn’t manifest as actual CFN stacks behind the scenes lol

this1
loren avatar

registry docs went live recently, answering some of my questions on authentication… https://registry.terraform.io/providers/hashicorp/awscc/latest/docs#authentication

lucaslu avatar
lucaslu

hello folks, i’m very new to devops culture. i was wondering if docker and terraform do the same job, and why use terraform instead of docker, which has a bigger marketshare. sorry if i was rough, but i’m just a beginner trying to figure out what is better to learn nowadays

loren avatar

they are orthogonal. learn both.

OliverS avatar
OliverS

@lucaslu they are very different:

• with terraform you write code that describes infrastructure resources like load balancers, security groups, virtual private clouds, etc

• with docker you build and run docker “images” in “containers”; an image is like a snapshot of a mini linux environment and the container is like the computer running that linux

Normally you need both: you will use terraform to set up the resources that will run your docker containers, such as AWS ECS or EKS (or Azure AKS or Google GKE), databases, message queues, etc.

lucaslu avatar
lucaslu

thank u so much for the explanation OliverS

OliverS avatar
OliverS

You’re welcome, good luck!

MrAtheist avatar
MrAtheist

hey ya’ll, not sure if it’s possible, but heres a tiny problem im hitting…

1. Someone deployed some tf stuff from local; the state file is stored in s3
2. Presumably this someone got thrown under the bus and didn’t have a chance to push the iac, so assume the iac is lost
3. The actual resources went thru some manual hell… and i would like to restore/revert back to the original state based on the json

is this possible? something to do with tainting…?

Alex Jurkiewicz avatar
Alex Jurkiewicz

terraform will do this automatically

Alex Jurkiewicz avatar
Alex Jurkiewicz

it will make the cloud infra look like what your local code specifies

Alex Jurkiewicz avatar
Alex Jurkiewicz

eg, you have a stack which creates an rds instance of type r5.4xlarge. Someone comes along and changes the instance’s type to t3.small. If you re-ran terraform, it would detect this change and propose changing the size back to r5.4xl

MrAtheist avatar
MrAtheist

hmm, maybe u missed the point where i don’t have the actual tf code

MrAtheist avatar
MrAtheist

i only have the state file

MrAtheist avatar
MrAtheist

still doable u think?

Alex Jurkiewicz avatar
Alex Jurkiewicz

oh. that’s not really possible

Alex Jurkiewicz avatar
Alex Jurkiewicz

you can read the statefile and attempt to write the configuration it describes, by hand

Alex Jurkiewicz avatar
Alex Jurkiewicz

well, I guess there is one other approach.

If you try to apply a blank configuration with this statefile, it will propose deleting every resource. You could copy and paste the resource definitions it proposes deleting into your local configuration. That will speed things up, if there were no modules involved..

MrAtheist avatar
MrAtheist

hmm i’ll give this magic a shot

MrAtheist avatar
MrAtheist

it’s complaining already…

Error: Provider configuration not present

To work with aws_route_table_association.public[2] its original provider
configuration at provider["registry.terraform.io/-/aws"] is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy aws_route_table_association.public[2], after which
you can remove the provider configuration again.
Alex Jurkiewicz avatar
Alex Jurkiewicz

you’ll need to add the aws provider at least

MrAtheist avatar
MrAtheist

yes i did

provider "aws" {
  shared_credentials_file = "~/.aws/credentials"
  profile                 = "iac_hello_world" # CHANGE ME
  region                  = "us-east-1"
}
Alex Jurkiewicz avatar
Alex Jurkiewicz

ah. the original code was using a much older terraform version

Alex Jurkiewicz avatar
Alex Jurkiewicz

you have to update the provider address from

registry.terraform.io/-/aws

to

registry.terraform.io/hashicorp/aws
Alex Jurkiewicz avatar
Alex Jurkiewicz

there is a command to do it in your statefile automatically, but I forget it. You might be able to find it. Or you can edit the statefile manually

MrAtheist avatar
MrAtheist

as in cli from tf?

Alex Jurkiewicz avatar
Alex Jurkiewicz

yes

MrAtheist avatar
MrAtheist
Command: state replace-provider - Terraform by HashiCorp

The terraform state replace-provider command replaces the provider for resources in the Terraform state.

Alex Jurkiewicz avatar
Alex Jurkiewicz

something like terraform state replace-provider -/aws hashicorp/aws

Alex Jurkiewicz avatar
Alex Jurkiewicz

yup!

MrAtheist avatar
MrAtheist
05:40:46 AM

magic…

MrAtheist avatar
MrAtheist

thanks, at least i see it plans to destroy everything now…

MrAtheist avatar
MrAtheist

ok it turns out this is a vpc stack, and it appears that some NAT got deleted already… so in this case i guess theres no chance of bringing it back?

OliverS avatar
OliverS

If the IAC is lost, you need to recreate it from scratch and bring the existing resources under its management.

You could loop over the items in the state file and auto-create entries in a main.tf. Have a look at terraformer too: it will generate a skeleton tf file, and you can use the existing state file to tell it what to import.

Once all of the existing infra is back under tf management, you will have to create definitions for the resources that have been deleted; use terraform state show NAME and guess the spec that will recreate the missing resources.

MrAtheist avatar
MrAtheist

let me give it a whirl
