#terraform (2024-01)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-01-02

Boris Dyga avatar
Boris Dyga

Hi! Here are two PRs for your review. Until the first one is merged, the second will not pass /terratest, since it depends on the first module update:

  1. https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack/pull/45
  2. https://github.com/cloudposse/terraform-aws-budgets/pull/26
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Boris Dyga thanks, the first one is approved and merged, the second one needs some updates

Boris Dyga avatar
Boris Dyga

@Andriy Knysh (Cloud Posse) I’ve updated the module version and the README. Please review. Thanks

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Boris Dyga Andriy has comments

Boris Dyga avatar
Boris Dyga

Sure, having a look

1
Boris Dyga avatar
Boris Dyga

@Gabriela Campana (Cloud Posse) I’ve run make init and make readme, but git cannot see any difference. What could be the reason?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Boris Dyga avatar
Boris Dyga

Could it be some issue on the /terratest side? I’ve made a trivial change (an empty line), rerun make init and make readme, and submitted the changes to see if this possibly affects the bot behavior

Boris Dyga avatar
Boris Dyga

ping

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Boris Dyga thanks for the PRs

Boris Dyga avatar
Boris Dyga

You are welcome!

2024-01-03

2024-01-04

Tommi Jensen avatar
Tommi Jensen

hey there.

I created a PR in terraform-aws-vpc-peering-multi-account (https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account/pull/82) on December 4th, with a proposed fix for https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account/issues/81: teardown with enabled = false not working because of an enabled check on the provider config, specifically for assume-role usage.

Not sure if the codeowners are swamped, or no longer there, but it seems other PRs are also stranded. I couldn’t find guidelines on where/what/who to poke to try and alleviate the issue. Currently we have to use the proposed-fix PR as the module source, or we cannot do cross-account VPC peering teardown. I don’t mind if it’s not deemed an acceptable fix, but some feedback would be good

pointers?

1
Rafal Rabenda avatar
Rafal Rabenda

Hello,

Could I ask a question about this module: https://github.com/cloudposse/terraform-aws-sso/tree/1.2.0/modules/permission-sets? I would like to deploy the following inline policy with an AWS SSO role:

data "aws_iam_policy_document" "dev_env_developer" {

  statement {
    sid    = "ROAccessRDS"
    effect = "Allow"
    actions = [
      "rds-db:connect",
    ]
    resources = [
      "arn:aws:rds-db:*:111111111111:dbuser:*/dev_ro"
    ]
    condition {
      test     = "BoolIfExists"
      variable = "aws:MultiFactorAuthPresent"
      values   = ["true"]
    }
  }
}

Could I somehow set the account ID dynamically? We have multiple products and I would like to grant access only to resources in the account where the policy will be deployed.

Rafal Rabenda avatar
Rafal Rabenda

not really; I have a root org account where I’m deploying the SSO configuration, but it’s propagated to all child accounts, so the caller_identity data source will only provide the ID of the root org account, as that is the only aws provider, if I understand it correctly

Tommi Jensen avatar
Tommi Jensen

we’ve made a simple output-only module that literally outputs a map of accounts you can for_each over; maybe this works for you as well. Otherwise, parse a YAML file, etc.
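A minimal sketch of such an output-only module (the account names and IDs here are purely illustrative):

```hcl
# accounts/outputs.tf -- hypothetical output-only module with no resources
output "accounts" {
  description = "Map of account name => account ID (values are illustrative)"
  value = {
    dev   = "111111111111"
    stage = "222222222222"
    prod  = "333333333333"
  }
}
```

A consumer can then call `module "accounts" { source = "../accounts" }` and interpolate `module.accounts.accounts["dev"]` into policy ARNs, or for_each over the whole map.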

2024-01-05

AdamP avatar

Hey folks! I have a weird issue that popped up 1 day after I had been running terraform successfully, I’m using the CloudPosse EKS Cluster Module, and it started giving me this error message when I try to spin up my sandbox cluster:

AdamP avatar

(oops, sorry hit enter too quickly, one moment)

AdamP avatar
Error: Value Conversion Error

  with module.eks_cluster.provider["registry.****.io/hashicorp/kubernetes"],
  on .****/modules/eks_cluster/auth.tf line 96, in provider "kubernetes":
  96: provider "kubernetes" {

An unexpected error was encountered trying to build a value. This is always
an error in the provider. Please report the following to the provider
developer:

Received unknown value, however the target type cannot handle unknown values.
Use the corresponding `types` package type or a custom type that handles
unknown values.

Path: exec
Target Type: []struct { APIVersion basetypes.StringValue
"tfsdk:\"api_version\""; Command basetypes.StringValue "tfsdk:\"command\"";
Env map[string]basetypes.StringValue "tfsdk:\"env\""; Args
[]basetypes.StringValue "tfsdk:\"args\"" }
Suggested Type: basetypes.ListValue
[Pipeline] }
[Pipeline] // stage
[Pipeline] slackSend

I looked at auth.tf; should I disable it by setting var.dummy_kubeapi_server = null? TF version: “~> 1.6.6”, AWS provider: “5.31.0”:

  source  = "cloudposse/eks-cluster/aws"
  version = "3.0.0"

I was going to open an issue/bug report on the repo, but I searched around and no one else is hitting this error, making me think it might be me, so I figured I’d ask here first

• was also kind of confused if I should go to the providers repo or CP

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

maybe try to delete these:

• the .terraform folder
• the lock file
• the $HOME/.terraform.d folder (where TF caches all the providers)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

was it working with TF 1.6 before?

AdamP avatar

thanks! let me try that now. I had it on… 1.5.3 IIRC, and it was happening there too. Then I updated all provider versions to the latest.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the deletion does not help, try TF 1.5

1
AdamP avatar

I tried all 1.5.X versions, no luck. Even tried the last 1.4.X version. Also tried adding “var.dummy_kubeapi_server = null” in the tf manifest as a last chance, per:

AdamP avatar
provider "kubernetes" {
  # Without a dummy API server configured, the provider will throw an error and prevent a "plan" from succeeding
  # in situations where Terraform does not provide it with the cluster endpoint before triggering an API call.
  # Since those situations are limited to ones where we do not care about the failure, such as fetching the
  # ConfigMap before the cluster has been created or in preparation for deleting it, and the worst that will
  # happen is that the aws-auth ConfigMap will be unnecessarily updated, it is just better to ignore the error
  # so we can proceed with the task of creating or destroying the cluster.
  #
  # If this solution bothers you, you can disable it by setting var.dummy_kubeapi_server = null
  host                   = local.cluster_auth_map_endpoint
  cluster_ca_certificate = local.enabled && !local.kubeconfig_path_enabled ? base64decode(local.certificate_authority_data) : null
  token                  = local.kube_data_auth_enabled ? one(data.aws_eks_cluster_auth.eks[*].token) : null
  # The Kubernetes provider will use information from KUBECONFIG if it exists, but if the default cluster
  # in KUBECONFIG is some other cluster, this will cause problems, so we override it always.
  config_path    = local.kubeconfig_path_enabled ? var.kubeconfig_path : ""
  config_context = var.kubeconfig_context

  dynamic "exec" {
    for_each = local.kube_exec_auth_enabled && length(local.cluster_endpoint_data) > 0 ? ["exec"] : []
    content {

.terraform/modules/eks_cluster/auth.tf. I have nuked the workspaces in Jenkins too. I’m going to keep at it today, see what I can come up with

AdamP avatar

I’ll open an issue with https://github.com/hashicorp/terraform-provider-kubernetes too, just in case they have any feedback.

AdamP avatar
#2388 Error: Value Conversion Error - crashes terraform plan

Terraform Version, Provider Version and Kubernetes Version Terraform version:

1.6.6

I’ve also tried all of these versions, same error:

jenkins@jenkins:~$ tfenv list
* 1.6.6 (set by /var/lib/jenkins/.tfenv/version)
  1.6.0
  1.5.7
  1.5.6
  1.5.5
  1.5.4
  1.5.3
  1.5.2
  1.5.0
  1.4.7

AWS provider version:

5.31.0

EKS Version

1.28

Affected Resource(s)

EKS cluster

Terraform Configuration Files

module "eks_cluster" {
  source  = "cloudposse/eks-cluster/aws"
  version = "3.0.0"
  // https://github.com/cloudposse/terraform-aws-eks-cluster

  namespace = var.namespace
  stage     = var.stage
  name      = var.name
  region    = var.region

  apply_config_map_aws_auth                             = false
  cluster_encryption_config_enabled                     = true
  cluster_encryption_config_kms_key_enable_key_rotation = true

  kubernetes_version        = var.kubenetes_version
  addons                    = var.addons
  vpc_id                    = module.vpc.vpc_id
  subnet_ids                = module.subnets.public_subnet_ids
  public_access_cidrs       = var.public_access_cidrs
  endpoint_private_access   = var.endpoint_private_access
  endpoint_public_access    = var.endpoint_public_access
  enabled_cluster_log_types = var.enabled_cluster_log_types

  tags = var.tags

}

Steps to Reproduce

terraform plan

Expected Behavior

What should have happened?
terraform plan succeeds

Actual Behavior

What actually happened?

terraform plan fails

...
...
...
  + public_subnet_cidrs                             = [
      + (known after apply),
      + (known after apply),
      + (known after apply),
    ]
  + secondary_eks_node_group_cbd_pet_name           = (known after apply)
  + secondary_group_remote_access_security_group_id = (known after apply)
  + ssh_security_group_id                           = (known after apply)
  + vpc_arn                                         = (known after apply)
  + vpc_cidr_block                                  = (known after apply)
  + vpc_default_security_group_id                   = (known after apply)
  + vpc_id                                          = (known after apply)
  + vpc_main_route_table_id                         = (known after apply)

Warning: Argument is deprecated

  with module.subnets.aws_eip.default,
  on .****/modules/subnets/main.tf line 286, in resource "aws_eip" "default":
 286:   vpc = true

use domain attribute instead

(and 3 more similar warnings elsewhere)

Error: Value Conversion Error

  with module.eks_cluster.provider["registry.****.io/hashicorp/kubernetes"],
  on .****/modules/eks_cluster/auth.tf line 96, in provider "kubernetes":
  96: provider "kubernetes" {

An unexpected error was encountered trying to build a value. This is always
an error in the provider. Please report the following to the provider
developer:

Received unknown value, however the target type cannot handle unknown values.
Use the corresponding `types` package type or a custom type that handles
unknown values.

Path: exec
Target Type: []struct { APIVersion basetypes.StringValue
"tfsdk:\"api_version\""; Command basetypes.StringValue "tfsdk:\"command\"";
Env map[string]basetypes.StringValue "tfsdk:\"env\""; Args
[]basetypes.StringValue "tfsdk:\"args\"" }
Suggested Type: basetypes.ListValue
[Pipeline] }
[Pipeline] // stage
[Pipeline] slackSend
Slack Send Pipeline step running, values are - baseUrl: <empty>, teamDomain: <REDACTED>, channel: <REDACTED>, color: bad, botUser: false, tokenCredentialId: <REDACTED>, notifyCommitters: false, iconEmoji: <empty>, username: <empty>, timestamp: <empty>
[Pipeline] }
[Pipeline] // withVault
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: FAILURE

Important Factoids

This was working just fine on Tuesday, then I got busy Wednesday with unplanned work, then Thursday this started bombing out.

• I updated all provider versions on Thursday to see if that would help; it did not
• I tried removing (on the Jenkins server) .kubernetes.d; there was no lock to remove, and I even deleted the workspace directories too. No luck
• I added add-ons on Tuesday, so I figured I’d remove that from my .tfvars, but that was not the issue either
• There are no resources built; I have a destroy pipeline I used Tuesday afternoon, and I ran it again just to be sure
• This is my sandbox cluster, but it’s blocking work to get lower envs and production built via terraform & CI/CD
• I’ve also scoured the internet; nothing seems to help. It all points back to the provider it seems, however I looked at open and closed issues on several repos and there is nothing out there regarding this.
• I don’t think it’s local to me, but not 100% sure

Please let me know if you need any additional information, thank you!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@AdamP try to pin the kubernetes provider to 2.24.0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here’s what I see as the issue with the latest 2.25.0 released yesterday:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

2.25.1 (Jan 4, 2024)

HOTFIX:

kubernetes_manifest: Implement response for GetMetadata protocol function [GH-2384]

2.25.0 (Jan 4, 2024)

ENHANCEMENTS:

• Add terraform-plugin-framework provider [GH-2347]
• data_source/kubernetes_persistent_volume_claim_v1: add a new attribute spec.volume_mode. [GH-2353]
• data_source/kubernetes_persistent_volume_claim: add a new attribute spec.volume_mode. [GH-2353]
• kubernetes/schema_stateful_set_spec.go: Add spec.persistentVolumeClaimRetentionPolicy in kubernetes_stateful_set [GH-2333]
• resource/kubernetes_persistent_volume_claim_v1: add a new attribute spec.volume_mode. [GH-2353]
• resource/kubernetes_persistent_volume_claim: add a new attribute spec.volume_mode. [GH-2353]
• resource/kubernetes_stateful_set_v1: add a new attribute spec.volume_claim_template.spec.volume_mode. [GH-2353]
• resource/kubernetes_stateful_set: add a new attribute spec.volume_claim_template.spec.volume_mode. [GH-2353]

BUG FIXES:

• resource/kubernetes_cron_job_v1: Change the schema to include a namespace in jobTemplate
• resource/kubernetes_stateful_set_v1: Change the schema to include a namespace in template [GH-2362]
• resource/kubernetes_ingress_v1: Fix an issue where the empty tls attribute in the configuration does not generate the corresponding Ingress object without any TLS configuration. [GH-2344]
• resource/kubernetes_ingress: Fix an issue where the empty tls attribute in the configuration does not generate the corresponding Ingress object without any TLS configuration. [GH-2344]

NOTES:

• We have updated the logic of data sources and now the provider will return all annotations and labels attached to the object, regardless of the ignore_annotations and ignore_labels provider settings. In addition to that, a list of ignored labels when they are attached to kubernetes_job(_v1) and kubernetes_cron_job(_v1) resources were extended with labels [batch.kubernetes.io/controller-uid](http://batch.kubernetes.io/controller-uid) and [batch.kubernetes.io/job-name](http://batch.kubernetes.io/job-name) since they aim to replace controller-uid and job-name in the future Kubernetes releases. [GH-2345]

A special and warm welcome to the first contribution from our teammate @SarahFrench! :rocket:

Community Contributors :raised_hands:

• @tbobm made their contribution in #2348
• @andremarianiello made their contribution in #2344
• @adinhodovic made their contribution in #2333
• @wonko made their contribution in #2362

2.24.0 (Nov 27, 2023)

ENHANCEMENTS:

kubernetes/schema_affinity_spec.go: Add match_fields to nodeAffinity [GH-2296]
kubernetes/schema_pod_spec.go: Add os to podSpecFields [GH-2290]
resource/kubernetes_config_map_v1_data: improve error handling while validating the existence of the target ConfigMap. [GH-2230]

BUG FIXES:

resource/kubernetes_labels: Add [“f:metadata”] check in kubernetes_labels to prevent crash with kubernetes_node_taints [GH-2246]

DOCS:

• Add example module for configuring OIDC authentication on EKS [GH-2287]
• Add example module for configuring OIDC authentication on GKE [GH-2319]

NOTES:

• Bump Go version from 1.20 to 1.21. [GH-2337]
• Bump Kubernetes dependencies from x.25.11 to x.27.8.

2.23.0 (August 16, 2023)

FEATURES:

• resource/kubernetes_cron_job_v1: add a new volume type ephemeral to spec.job_template.spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_cron_job: add a new volume type ephemeral to spec.job_template.spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_daemon_set_v1: add a new volume type ephemeral to spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_daemonset: add a new volume type ephemeral to spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_deployment_v1: add a new volume type ephemeral to spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_deployment: add a new volume type ephemeral to spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_job_v1: add a new volume type ephemeral to spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_job: add a new volume type ephemeral to spec.template.spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_pod_v1: add a new volume type ephemeral to spec.volume to support generic ephemeral volumes. [GH-2199]
• resource/kubernetes_pod: add a new volume type ephemeral to spec.volume to support generic ephemeral volumes. [GH-2199]

ENHANCEMENTS:

• resource/kubernetes_endpoint_slice_v1: make attribute endpoint.condition optional. If you had previously included an empty block condition {} in your configuration, we request you to remove it. Doing so will prevent receiving continuous “update in-place” messages while performing the plan and apply operations. [GH-2208]
• resource/kubernetes_pod_v1: add a new attribute target_state to specify the Pod phase(s) that indicate whether it was successfully creat…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Add terraform-plugin-framework provider [GH-2347]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#2347 :wrench: Add framework provider

Description

This PR adds a new provider project using Terraform Plugin Framework and muxes with the main and manifest provider. The provider currently does nothing except provide the same provider block configuration interface as the other 2 providers. This provider will be used as a target for code generation.

Acceptance tests

☐ Have you added an acceptance test for the functionality being added?
☐ Have you run the acceptance tests on this branch?

Output from acceptance testing:

$ make testacc TESTARGS='-run=TestAccXXX'

...

Release Note

Release note for CHANGELOG:

...

References Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like they switched to use terraform-plugin-framework , and the new framework has issues with computed lists

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#713 Framework doesn't appear to support a computed list of objects.

Module version

github.com/hashicorp/terraform-plugin-framework v1.2.0

Relevant provider source code

https://github.com/liamawhite/tf-issue-repro

Debug Output

make test                                                                                                                                                 1.20.2
TF_ACC=1 go test ./...
--- FAIL: TestAccServiceAccountResource (0.70s)
    resource_test.go:36: Step 1/3 error: Error running apply: exit status 1
        
        Error: Value Conversion Error
        
          with provider_service_account.some-name,
          on terraform_plugin_test.tf line 4, in resource "provider_service_account" "some-name":
           4: resource provider_service_account "some-name" {}
        
        An unexpected error was encountered trying to build a value. This is always
        an error in the provider. Please report the following to the provider
        developer:
        
        Received unknown value, however the target type cannot handle unknown values.
        Use the corresponding `types` package type or a custom type that handles
        unknown values.
        
        Path: keys
        Target Type: []*repro.KeyModel
        Suggested Type: basetypes.ListValue
FAIL
FAIL    github.com/liamawhite/tf-issue-repro    1.064s
FAIL
make: *** [test] Error 1

Expected Behavior

To populate the model without having to define a custom type just for computed lists of objects.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
      args        = concat(local.exec_profile, ["eks", "get-token", "--cluster-name", try(aws_eks_cluster.default[0].id, "deleted")], local.exec_role)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like that new provider framework still does not support computed lists, hence the error

Target Type: []struct { APIVersion basetypes.StringValue
"tfsdk:\"api_version\""; Command basetypes.StringValue "tfsdk:\"command\"";
Env map[string]basetypes.StringValue "tfsdk:\"env\""; Args
[]basetypes.StringValue "tfsdk:\"args\"" }
Suggested Type: basetypes.ListValue
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to pin the kubernetes provider to 2.24.0 and let us know
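In case it helps, the pin goes in the root module with the standard required_providers syntax:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.24.0" # last version before the plugin-framework mux
    }
  }
}
```

Then re-run `terraform init -upgrade` so the lock file records the downgraded version.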

AdamP avatar

will do! I have a meeting in a few minutes, I will try right after. Thanks! Will let you know asap

AdamP avatar

You guys are life savers!!!!!! THANK YOU!! it worked

AdamP avatar

I swear, whenever I don’t have a provider pinned, weird stuff happens

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@AdamP FYI, we’ve just tested a few EKS clusters using the latest version 2.25.1 of the kubernetes provider, and all is OK

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

prob something else is/was wrong with your cluster (I’m not sure why it started working when you pinned the k8s provider to 2.24.0 but the issue is def something else, or related)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

update:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

when using the latest kubernetes provider 2.25.1, the error occurs only when creating a new EKS cluster; it does not occur when updating/modifying existing clusters

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the latest k8s provider broke the eks-cluster module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll update the module to work with the new provider version

AdamP avatar

fantastic! I just didn’t think to add (and always pin) the kubernetes provider. Thanks again for all the help!

setheryops avatar
setheryops

Is there a better way to do this than having to do a double lookup call? Code in

setheryops avatar
setheryops

module "thing" {
  source                = "../../modules/thing"
  instance_count        = lookup(lookup(var.instances_thing, terraform.workspace), "thing_count")
  instance_type         = lookup(lookup(var.instances_thing, terraform.workspace), "thing_instance_type")
}


variable "instances_thing" {
  type = map(object({
    thing_count         = number
    thing_instance_type = string
  }))

  default = {

    // workspace dev
    dev = {
      "thing_count"         = 1
      "thing_instance_type" = "t3.small"
    }

    // workspace stage
    stage = {
      "thing_count"         = 2
      "thing_instance_type" = "t3.small"
    }

    // workspace prod
    prod = {
      "thing_count"         = 3
      "thing_instance_type" = "m5.xlarge"
    }
  }
}
setheryops avatar
setheryops

Using this like I typically do is not working for me:

instance_count        = lookup(var.instances_thing, "${terraform.workspace}.thing_count")
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should it be

lookup(var.instances_thing, terraform.workspace).thing_count
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, lookup w/o a default value is deprecated, https://developer.hashicorp.com/terraform/language/functions/lookup

lookup - Functions - Configuration Language | Terraform | HashiCorp Developer

The lookup function retrieves an element value from a map given its key.
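With the non-deprecated three-argument form, a fallback can be supplied inline. A sketch against the variable above (the default object here is illustrative):

```hcl
locals {
  # Falls back to a small dev-sized object when the workspace key is missing
  thing = lookup(var.instances_thing, terraform.workspace, {
    thing_count         = 1
    thing_instance_type = "t3.small"
  })
}

# usage: local.thing.thing_count, local.thing.thing_instance_type
```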

setheryops avatar
setheryops

Ahh…moving that ) and putting a . after works

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so w/o any defaults, it should be

var.instances_thing[terraform.workspace].thing_count


or

var.instances_thing[terraform.workspace]["thing_count"]
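
Applied to the module block above, the index form would read:

```hcl
module "thing" {
  source         = "../../modules/thing"
  instance_count = var.instances_thing[terraform.workspace].thing_count
  instance_type  = var.instances_thing[terraform.workspace].thing_instance_type
}
```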
setheryops avatar
setheryops

This is still on 1.5.5…not sure if it’s deprecated yet in that version

setheryops avatar
setheryops

Doing it like that first suggestion you gave works though. Appreciate it.

setheryops avatar
setheryops

I knew that double lookup was not the way to go, but it worked.

kallan.gerard avatar
kallan.gerard

It looks like you’re trying to have one configuration for multiple environments and use HCL shenanigans to switch between them

kallan.gerard avatar
kallan.gerard

Here’s what I would do:

• Get rid of Terraform Workspaces. They’re not intended to be used in this manner

• Have separate state files or prefixes for each environment

• Have a different directory in your git for each environment

• Call the instance of that module in the root module for each environment’s directory.

• Write the variables directly into the module block. Don’t use variables for root modules.
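
As a sketch of that layout (paths and values are illustrative), each environment gets its own root module with the values written inline:

```hcl
# environments/prod/main.tf -- hypothetical per-environment root module
terraform {
  backend "s3" {
    # separate state per environment, e.g. key = "prod/terraform.tfstate"
  }
}

module "thing" {
  source         = "../../modules/thing"
  instance_count = 3
  instance_type  = "m5.xlarge"
}
```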

kallan.gerard avatar
kallan.gerard

Then you can do static analysis, simple and concrete

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or just use Atmos (it handles scenarios like that, and much more complex ones)

https://atmos.tools/category/quick-start

Quick Start | atmos

Take 30 minutes to learn the most important Atmos concepts.

setheryops avatar
setheryops

Getting rid of workspaces right now isn’t an option for us. But you are right, and I hate that we use them. I got it figured out though. Thanks

1

2024-01-06

2024-01-07

2024-01-08

David Escobar avatar
David Escobar

Hi everyone! I’m trying to create a Redis instance using your module. The apply passes, but I’m not able to connect to the host from my local machine. This is my TF definition:

module "vpc" {
  source                  = "cloudposse/vpc/aws"
  ipv4_primary_cidr_block = "172.16.0.0/16"
}

module "subnets" {
  source               = "cloudposse/dynamic-subnets/aws"
  availability_zones   = ["us-east-1a", "us-east-1b"]
  vpc_id               = module.vpc.vpc_id
  igw_id               = [module.vpc.igw_id]
  ipv4_cidr_block      = [module.vpc.vpc_cidr_block]
  nat_gateway_enabled  = true
  nat_instance_enabled = true
}

resource "aws_security_group" "elasticache_sg" {
  name        = "elasticache-sg${local.suffix}"
  description = "Security group for ElastiCache cluster"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group_rule" "elasticache_ingress" {
  type                     = "ingress"
  from_port                = 6379
  to_port                  = 6379
  protocol                 = "tcp"
  security_group_id        = aws_security_group.elasticache_sg.id
  source_security_group_id = module.redis.security_group_id
}

module "redis" {
  source                     = "cloudposse/elasticache-redis/aws"
  description                = "Redis cluster"
  name                       = "${var.project}-redis"
  availability_zones         = ["us-east-1a", "us-east-1b"]
  vpc_id                     = module.vpc.vpc_id
  allowed_security_group_ids = [aws_security_group.elasticache_sg.id]
  subnets                    = module.subnets.private_subnet_ids
  cluster_size               = 1
  instance_type              = "cache.t2.micro"
  apply_immediately          = true
  automatic_failover_enabled = false
}

And this is my output

output "elasticache_cluster_endpoint" {
  value = module.redis.host # blank btw
}

After a brief investigation, it looks like I need this https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html but I’m not sure how to incorporate these changes into my tf code.

My final use case is to connect Redis with a few lambdas.

Any help is welcome

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

2
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)
  enabled  = local.enabled && length(var.zone_id) > 0 ? true : false
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

As for the ec2-instance for access, I would recommend using the bastion module

ec2-bastion-server | The Cloud Posse Developer Hub

Terraform module to define a generic Bastion host with parameterized user_data and support for AWS SSM Session Manager for remote access with IAM authentication.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

in the bastion config, if you set ssm_enabled to true, then you won’t have to set up or connect ssh. Instead, you can just use the aws console to reach redis. Here’s more on using ssm connect with ec2-instances

New – AWS Systems Manager Session Manager for Shell Access to EC2 Instances | Amazon Web Services

Update (August 2019) – The original version of this blog post referenced the now-deprecated AmazonEC2RoleForSSM IAM policy. It has been updated to reference the AmazonSSMManagedInstanceCore policy instead. It is a very interesting time to be a corporate IT administrator. On the one hand, developers are talking about (and implementing) an idyllic future where infrastructure as […]

David Escobar avatar
David Escobar

I’m not sure I’m following your thoughts. So, I need to define an EC2 instance to make Redis available to AWS Lambdas?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

err, it was in response to “not being able to connect to host from my local”. I figured you just wanted a working connection

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

for lambda, just make sure you give them vpc subnet connections.

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

then allow their security group to connect to the security group of the elasticache
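
A sketch of that wiring, reusing the security group from the config above (the Lambda details are illustrative and partly elided):

```hcl
# Security group for the Lambdas
resource "aws_security_group" "lambda" {
  name   = "lambda-sg"
  vpc_id = module.vpc.vpc_id
}

# Allow the Lambda SG to reach Redis on 6379
resource "aws_security_group_rule" "lambda_to_redis" {
  type                     = "ingress"
  from_port                = 6379
  to_port                  = 6379
  protocol                 = "tcp"
  security_group_id        = aws_security_group.elasticache_sg.id
  source_security_group_id = aws_security_group.lambda.id
}

# Attach the function to the private subnets so it can resolve/reach Redis
resource "aws_lambda_function" "example" {
  function_name = "redis-client"            # illustrative
  role          = aws_iam_role.lambda.arn   # assumed role with AWSLambdaVPCAccessExecutionRole
  # runtime, handler, and the deployment package are elided here

  vpc_config {
    subnet_ids         = module.subnets.private_subnet_ids
    security_group_ids = [aws_security_group.lambda.id]
  }
}
```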

David Escobar avatar
David Escobar

sorry, I forgot to answer, oops; you were right! Every reply was so accurate that within an hour or so I could fix my issue, thanks!

1

2024-01-09

2024-01-10

olad avatar

Hello, I have an AWS account that contains some infrastructure created manually. Are there tools out there that can discover the infra and also generate terraform code for it? Appreciate any info

digipandit91 avatar
digipandit91

Check it out: terrateam.

RB avatar
GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code

olad avatar

thanks very much for the quick response @digipandit91 & @RB. I just watched the demo on terraformer, very solid. that’s all I need. thanks again.

setheryops avatar
setheryops

I’ve used terraformer all the time…solid tool

1
Kuba Martin avatar
Kuba Martin

OpenTofu is now stable!

2
2
Doug Bergh avatar
Doug Bergh

I am configuring an SFTP server using cloudposse terraform-aws-transfer-sftp. I don’t see a way to give it a lambda identity provider. Is there one? Thanks in advance!

1
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

seems like you would need to adjust the module to allow it to configure the AWS_LAMBDA identity type here, and then you would want to configure the function attribute like so. As long as you create the lambda with AWS’s recommended policies, it should work smoothly.

Using AWS Lambda to integrate your identity provider - AWS Transfer Family

Use a Lambda function as an identity provider for Transfer Family

  identity_provider_type = "SERVICE_MANAGED"
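A hedged sketch of what such a module change might look like. The argument names `identity_provider_type` and `function` are real `aws_transfer_server` arguments, but `var.lambda_identity_provider_arn` is an illustrative input name, not an actual input of the module:

```
# Sketch only: the module currently hardcodes SERVICE_MANAGED; a patched
# version might expose something like this.
resource "aws_transfer_server" "example" {
  identity_provider_type = "AWS_LAMBDA"
  function               = var.lambda_identity_provider_arn # ARN of the identity-provider Lambda
}
```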
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

Note that the module doesn’t support aws provider v5 yet

Doug Bergh avatar
Doug Bergh

Jeremy, sounds good, thanks much!

Doug Bergh avatar
Doug Bergh

also, is there a way to set a custom hostname? (SFTP server using cloudposse terraform-aws-transfer-sftp)

José avatar

Create an r53 record and attach it to the SFTP server. You just need to attach the zone_id to which the new record will belong, and the domain_name. Since I do multi-stage I perform a lookup, but those are simple string values. With var values like stage=prd and dns_zone=cloudposse.com the final domain should look like sftp.prd.cloudposse.com
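José’s suggestion might look roughly like this; the resource wiring is standard Route 53, but the attribute `aws_transfer_server.this.endpoint` and the zone/record names are illustrative assumptions:

```
# Sketch only: attach a friendly DNS name to the Transfer server's
# generated endpoint.
data "aws_route53_zone" "this" {
  name = "prd.cloudposse.com"
}

resource "aws_route53_record" "sftp" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = "sftp.prd.cloudposse.com"
  type    = "CNAME"
  ttl     = 300
  records = [aws_transfer_server.this.endpoint] # server's generated endpoint
}
```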

Doug Bergh avatar
Doug Bergh

SWEET THANKS!!

2024-01-11

Release notes from terraform avatar
Release notes from terraform
07:13:31 PM

v1.7.0-rc2 1.7.0-rc2 (January 11, 2024) UPGRADE NOTES:

Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:

Users of…

Release v1.7.0-rc2 · hashicorp/terraform

rviniandrade avatar
rviniandrade

Hey there!

I’m using the cloudposse/elastic-beanstalk-environment module, and I’m having some issues with the S3 policy and ElasticBeanstalk resource from the module.

│ Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│       status code: 400, request id: R97RSKZPBGGW6FP5, host id: uKnFrAwfNP4l5KEhJpJwmm9Sy8qMMg4AwYdV4vbeBmoU4kgFv5HsIigIPZciVDT4Pd3lY4Tc9LU=
│ 
│   with module.eb_environment_core_api[0].module.elb_logs.module.s3_bucket.module.aws_s3_bucket.aws_s3_bucket_policy.default[0],
│   on .terraform/modules/eb_environment_core_api.elb_logs.s3_bucket.aws_s3_bucket/main.tf line 461, in resource "aws_s3_bucket_policy" "default":
│  461: resource "aws_s3_bucket_policy" "default" {
│ 
╵
╷
│ Error: waiting for Elastic Beanstalk Environment (e-rkgqnkyvjp) create: couldn't find resource (21 retries)
│ 
│   with module.eb_environment_core_api[0].aws_elastic_beanstalk_environment.default[0],
│   on .terraform/modules/eb_environment_core_api/main.tf line 602, in resource "aws_elastic_beanstalk_environment" "default":
│  602: resource "aws_elastic_beanstalk_environment" "default" {

I would appreciate it if you guys could help me with this.

I’m leaving the module code in a snippet below, and please let me know if you need more information that I can provide.

rviniandrade avatar
rviniandrade

2024-01-15

Igor Rodionov avatar
Igor Rodionov

@rviniandrade, can you try setting s3_bucket_access_log_bucket_name to some value. The error is a bit weird. If the workaround works, message me. I will try to investigate the bug
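Igor’s workaround as a sketch; the bucket name value is arbitrary and the other inputs are elided:

```
# Sketch only: explicitly naming the access-log bucket works around the
# "MalformedPolicy: Policy has invalid resource" error.
module "eb_environment_core_api" {
  source = "cloudposse/elastic-beanstalk-environment/aws"

  # ...existing inputs unchanged...

  s3_bucket_access_log_bucket_name = "my-eb-elb-access-logs" # any explicit value
}
```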

rviniandrade avatar
rviniandrade

Hey Igor! It really worked with the S3 policy issue (thank you!), but I’m still getting a message error from elastic beanstalk resource.

 Error: waiting for Elastic Beanstalk Environment (e-idnvmxbmxd) create: couldn't find resource (21 retries)
│ 
│   with module.eb_environment_core_api[0].aws_elastic_beanstalk_environment.default[0],
│   on .terraform/modules/eb_environment_core_api/main.tf line 602, in resource "aws_elastic_beanstalk_environment" "default":
│  602: resource "aws_elastic_beanstalk_environment" "default" {
Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Igor Rodionov

2024-01-16

2024-01-17

Boris Dyga avatar
Boris Dyga

Hi team! Could you have a look at this PR please? https://github.com/cloudposse/terraform-aws-config/pull/80

#80 Added the option to use access tokens

That feature allows access to private GitHub repos, where custom conformance packs could be stored

what

• updates to the conformance_pack submodule • added the access_token variable (defaults to empty string) • when provided, its value is included in the conformance pack URL, allowing access to private GitHub repos

why

• sometimes customized conformance packs are stored in repos with restricted access

references

1
Boris Dyga avatar
Boris Dyga

Ping

Release notes from terraform avatar
Release notes from terraform
08:13:31 PM

v1.7.0 1.7.0 (January 17, 2024) UPGRADE NOTES:

Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:

Users of Terraform…

Release v1.7.0 · hashicorp/terraform

Craig avatar

How am I meant to be using the terraform null label with a root module that also includes a sub-module to loop through a list of config options, where I need the sub-module resources to be tagged with the same tags as the root module?

Craig avatar

It seems like I can’t just pass in the same context that I’m using in my root module, to the child module, without also having to pass in all of the context variables that I already set in my root module

Matt Gowie avatar
Matt Gowie

Include context.tf in your sub-module.

Matt Gowie avatar
Matt Gowie

Then it will accept the context argument and you can just pass that.

Matt Gowie avatar
Matt Gowie
terraform-null-label: the why and how it should be used | Masterpoint Consulting

A post highlighting one of our favorite terraform modules: terraform-null-label. We dive into what it is, why it’s great, and some potential use cases in …

Matt Gowie avatar
Matt Gowie

But you’re getting into the advanced functionality that we allude to at the bottom of that post. And it’s something we haven’t written a post on yet. But the gist is: Include this file into your child module and it should then enable you to do what you want.

```
#
# ONLY EDIT THIS FILE IN github.com/cloudposse/terraform-null-label
# All other instances of this file should be a copy of that one
#
# Copy this file from https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf
# and then place it in your Terraform module to automatically get
# Cloud Posse's standard configuration inputs suitable for passing
# to Cloud Posse modules.
#
# curl -sL https://raw.githubusercontent.com/cloudposse/terraform-null-label/master/exports/context.tf -o context.tf
#
# Modules should access the whole context as module.this.context
# to get the input variables with nulls for defaults,
# for example context = module.this.context,
# and access individual variables as module.this.<var>,
# with final values filled in.
#
# For example, when using defaults, module.this.context.delimiter
# will be null, and module.this.delimiter will be "-" (hyphen).
#

module "this" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # requires Terraform >= 0.13.0

  enabled             = var.enabled
  namespace           = var.namespace
  tenant              = var.tenant
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit
  label_key_case      = var.label_key_case
  label_value_case    = var.label_value_case
  descriptor_formats  = var.descriptor_formats
  labels_as_tags      = var.labels_as_tags

  context = var.context
}

# Copy contents of cloudposse/terraform-null-label/variables.tf here

variable "context" {
  type = any
  default = {
    enabled             = true
    namespace           = null
    tenant              = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
    label_key_case      = null
    label_value_case    = null
    descriptor_formats  = {}
    # Note: we have to use [] instead of null for unset lists due to
    # https://github.com/hashicorp/terraform/issues/28137
    # which was not fixed until Terraform 1.0.0,
    # but we want the default to be all the labels in label_order
    # and we want users to be able to prevent all tag generation
    # by setting labels_as_tags to [], so we need
    # a different sentinel to indicate "default"
    labels_as_tags = ["unset"]
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as null to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT

  validation {
    condition     = lookup(var.context, "label_key_case", null) == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
    error_message = "Allowed values: lower, title, upper."
  }

  validation {
    condition     = lookup(var.context, "label_value_case", null) == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
    error_message = "Allowed values: lower, title, upper, none."
  }
}

variable "enabled" {
  type        = bool
  default     = null
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique"
}

variable "tenant" {
  type        = string
  default     = null
  description = "ID element (Rarely used, not included by default). A customer identifier, indicating who this instance of a resource is for"
}

variable "environment" {
  type        = string
  default     = null
  description = "ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = null
  description = <<-EOT
    ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.
    This is the only ID element not also included as a tag.
    The "name" tag is set to the full id string. There is no tag with the value of the name input.
  EOT
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between ID elements.
    Defaults to - (hyphen). Set to "" to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = <<-EOT
    ID element. Additional attributes (e.g. workers or cluster) to add to id,
    in the order they appear in the list. New attributes are appended to the
    end of the list. The elements of the list are joined by the delimiter
    and treated as a single ID element.
  EOT
}

variable "labels_as_tags" {
  type        = set(string)
  default     = ["default"]
  description = <<-EOT
    Set of labels (ID elements) to include as tags in the tags output.
    Default is to include all labels.
    Tags with empty values will not be included in the tags output.
    Set to [] to suppress all generated tags.
    Notes:
      The value of the name tag, if included, will be the id, not the name.
      Unlike other null-label inputs, the initial setting of labels_as_tags cannot be
      changed in later chained modules. Attempts to change it will be silently ignored.
  EOT
}

variable "tags" {
  type        = map(string)
  default     = {}
  description = <<-EOT
    Additional tags (e.g. {'BusinessUnit': 'XYZ'}).
    Neither the tag keys nor the tag values will be modified by this module.
  EOT
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = <<-EOT
    Additional key-value pairs to add to each map in tags_as_list_of_maps. Not added to tags or id.
    This is for some rare cases where resources want additional configuration of tags
    and therefore take a list of maps with tag key, value, and additional configuration.
  EOT
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The order in which the labels (ID elements) appear in the id.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Terraform regular expression (regex) string.
    Characters matching the regex will be removed from the ID elements.
    If not set, "/[^a-zA-Z0-9-]/" is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit id to this many characters (minimum 6).
    Set to 0 for unlimited length.
    Set to null for keep the existing setting, which defaults to 0.
    Does not affect id_full.
  EOT
  validation {
    condition = var.id_length_limit == null ? true : var.id_length_limit >= 6 || var.id_length_limit == 0
    error_messa…
```

Craig avatar

When I include this context:

module "this" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # requires Terraform >= 0.13.0

  enabled             = var.enabled
  namespace           = var.namespace
  tenant              = var.tenant
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit
  label_key_case      = var.label_key_case
  label_value_case    = var.label_value_case
  descriptor_formats  = var.descriptor_formats
  labels_as_tags      = var.labels_as_tags

  context = var.context
}

I just end up with a bunch of missing variables that TF is complaining about

Craig avatar

do I really have to define them again just for my sub-module?

Matt Gowie avatar
Matt Gowie

Yeah. You need to include the whole file. It’s a drop in file to your child modules. It includes all the vars + the this instance so you can drop it in, use it, and accept the vars in the calling root module.

Matt Gowie avatar
Matt Gowie

Starting to make sense?

Craig avatar

….kinda? What I don’t understand is why when we (and this could totally just be our Org doing this) include (in our root project) a

module "this" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = var.namespace
  stage     = var.accountname
  delimiter = "-"
}

as our context

Craig avatar

without any of the extra variables

Matt Gowie avatar
Matt Gowie

Yeah, you’re just creating your own “var.namespace” and “var.accountname” vars in your root module that you’re then using in those arguments.

But then when you want to create that same structure all the way down your hierarchy of child modules that your root module calls… you can either define a bunch of annoying inputs OR you can just drop in the full context.tf file, pass context = module.this.context when you use that child module, and then everything will wire up nicely.
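A minimal sketch of that wiring (module paths and names are illustrative):

```
# Root module: module.this comes from the root's own copy of context.tf.
module "child" {
  source = "./modules/child" # child module also contains a copy of context.tf

  # One argument carries namespace, stage, name, tags, etc. down the hierarchy.
  context = module.this.context
}
```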

Matt Gowie avatar
Matt Gowie

It is complex, so don’t feel discouraged haha.

We have a draft of our blog post on this advanced usage… I’ll share that with you privately

Craig avatar

That makes a bit more sense now

Craig avatar

Say I had a child module that loops and creates 3 different AWS resources for every object in a map, how can I ensure that I am applying one tag for each resource type as needed?

Right now I’m just passing the tags like

module "tasks" {
  source   = "./datasync_tasks"
  for_each = var.datasync_configuration

  datasync_task_name = each.key
  datasync_src_host  = each.value.src_host
  subdirectory       = each.value.subdirectory

  datasync_agent                     = aws_datasync_agent.default.arn
  datasync_iam_role                  = module.aws_datasync_role.arn
  datasync_destination_s3_bucket     = data.aws_s3_bucket.dest.arn
  datasync_task_cloudwatch_log_group = aws_cloudwatch_log_group.datasync_task_logs.arn

  context = module.this.context

  tags = merge(local.local_tags, var.additional_tags)
}

but say I wanted to tag the datasync locations differently than the datasync tasks, do I have to add that kind of logic into my child module?

Craig avatar

I was going to just pass a task tag like tags = merge(local.local_tags, var.additional_tags, module.datasync_task_label) and use it with context = module.datasync_task_label.tags but if I have multiple resource types in the child module, I can’t really do that

Craig avatar

oh I guess I could just pass all the tags to the module, and just use what I need for each resource?

Matt Gowie avatar
Matt Gowie


do I have to add that kind of logic into my child module?
This is a way to do it.

A better way might be to update your datasync_configuration to have an additional attribute called extra_tags and then you can pass that into your tags = merge(local.local_tags, each.value.extra_tags).
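A sketch of that approach, assuming an extra_tags attribute is added to the map’s object type (all names here mirror Craig’s snippet, but extra_tags is hypothetical):

```
variable "datasync_configuration" {
  type = map(object({
    src_host     = string
    subdirectory = string
    extra_tags   = map(string) # hypothetical new attribute
  }))
}

module "tasks" {
  source   = "./datasync_tasks"
  for_each = var.datasync_configuration

  datasync_task_name = each.key

  # Per-object tags merged over the shared local tags.
  tags = merge(local.local_tags, each.value.extra_tags)
}
```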

Craig avatar

ah, I don’t think that’s what I want to do since I’d just be applying the same tag to each of the created datasync resources, when really what I want is a different tag type for each different resource.

I think defining the tag in the root module context and then just passing it along to the module, and then for each resource type I have use it like tags = merge(module.this.tags, module.specific_resource_tag) with each resource type I’d like to tag separately

Craig avatar

*is probably the simplest way

Craig avatar

But yeah it’s a lot clearer now

Matt Gowie avatar
Matt Gowie

Ah you’re saying tagging certain resources within the child module? Yeah, you would need to create separate labels for each. Example of that here: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/main/main.tf#L23-L47

module "task_label" {
  source     = "cloudposse/label/null"
  version    = "0.25.0"
  enabled    = local.create_task_role
  attributes = ["task"]

  context = module.this.context
}

module "service_label" {
  source     = "cloudposse/label/null"
  version    = "0.25.0"
  attributes = ["service"]

  context = module.this.context
}

module "exec_label" {
  source     = "cloudposse/label/null"
  version    = "0.25.0"
  enabled    = local.create_exec_role
  attributes = ["exec"]

  context = module.this.context
}
1

2024-01-18

Omar avatar

Hey all, I am using the cloudposse/terraform-aws-vpc-peering-multi-account module and I want to override the provider defined here. Is it possible? I am getting this error.

The configuration of module.vpc_peering has its own local configuration for
aws.accepter, and so it cannot accept an overridden configuration

Mainly I was trying to solve Provider configuration not present as explained here

any recommendations? thanks.

Error: Provider configuration not present

Introduction A module in Terraform is a logical, reusable grouping of resources. Terraform modules can be classified into two major categories - root modules and child modules. This article provide…

cloudposse/terraform-aws-vpc-peering-multi-account
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not possible to override a provider defined in a child module from a parent module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

child modules implicitly inherit the providers from the parent modules (if child modules don’t have their own provider defined)
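For child modules that declare configuration_aliases instead of their own provider blocks, the parent can pass providers explicitly; a hedged sketch with illustrative names (this does not apply to the vpc-peering-multi-account module, which defines its own providers):

```
provider "aws" {
  alias  = "accepter"
  region = "us-east-1"
}

module "peering" {
  source = "./modules/peering"

  # Maps the parent's aliased provider onto the alias the child expects.
  providers = {
    aws.accepter = aws.accepter
  }
}
```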

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what is not working?

Omar avatar

I want to override both the requester and the accepter providers, mainly to solve the Provider configuration not present error as explained here, but it’s not supported. Any workaround?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since the providers are defined in the child module, they can’t be overridden.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but… the Provider configuration not present error is not because of that. Note that the module works in many deployments. The error is because of some other issue with Terraform/providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

on the other hand, if you want to test custom changes in the module,:

  1. Make a subfolder modules/peering-multi-account in your repo (in the TF component)
  2. Clone cloudposse/terraform-aws-vpc-peering-multi-account into the subfolder
  3. Make the changes you need to test
  4. Reference the module source in your code - instead of source = "cloudposse/vpc-peering-multi-account/aws" use source = "./modules/peering-multi-account"
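Step 4 as a sketch; the only change in the calling code is the source (note the version argument must be removed for a local path):

```
module "vpc_peering" {
  # was: source = "cloudposse/vpc-peering-multi-account/aws"
  source = "./modules/peering-multi-account"

  # ...existing inputs unchanged...
}
```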
cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers

2024-01-19

AdamP avatar

Hey everyone! I have an issue trying to spin up a new elasticache cluster in AWS. I did search slack for the error, however I couldn’t find much help for my scenario.

AdamP avatar

crap, one second I hit enter too quickly

AdamP avatar

Root module:

module "elasticache-redis" {
  source  = "cloudposse/elasticache-redis/aws"
  version = "1.2.0"
  // <https://github.com/cloudposse/terraform-aws-elasticache-redis>

  namespace = var.namespace
  stage     = var.stage
  name      = var.name

  apply_immediately          = true
  at_rest_encryption_enabled = true
  transit_encryption_enabled = true
  cluster_mode_enabled       = true
  automatic_failover_enabled = true
  auto_minor_version_upgrade = true
  create_security_group      = false

  instance_type  = var.instance_type
  engine_version = var.engine_version
  family         = var.family
  # parameter                            = var.parameter
  auth_token                           = data.vault_kv_secret_v2.hcp_secret.data.secret
  cluster_mode_num_node_groups         = var.cluster_mode_num_node_groups
  cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group
  final_snapshot_identifier            = timestamp()
  maintenance_window                   = var.maintenance_window

  vpc_id                        = module.vpc.vpc_id
  associated_security_group_ids = [module.elasticache_sg.id]
  subnets                       = module.subnets.private_subnet_ids

  tags = var.tags
}

Here is the error:

Plan: 40 to add, 0 to change, 0 to destroy.
╷
│ Error: Inconsistent conditional result types
│
│   on .terraform/modules/elasticache_sg/main.tf line 197, in resource "aws_security_group_rule" "keyed":
│  197:   for_each = local.rule_create_before_destroy ? local.keyed_resource_rules : {}
│     ├────────────────
│     │ local.keyed_resource_rules is object with 2 attributes
│     │ local.rule_create_before_destroy is true
│
│ The true and false result expressions must have consistent types. The 'true' value includes object attribute
│ "_allow_all_egress_", which is absent in the 'false' value.
╵
╷
│ Error: Inconsistent conditional result types
│
│   on .terraform/modules/elasticache_sg/main.tf line 229, in resource "aws_security_group_rule" "dbc":
│  229:   for_each = local.rule_create_before_destroy ? {} : local.keyed_resource_rules
│     ├────────────────
│     │ local.keyed_resource_rules is object with 2 attributes
│     │ local.rule_create_before_destroy is true
│
│ The true and false result expressions must have consistent types. The 'false' value includes object attribute
│ "_allow_all_egress_", which is absent in the 'true' value.

Provider info:

terraform {
  required_version = "~> 1.6.0"
###
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.26.0"
    }
  }
}
I can’t seem to figure this one out yet, I’m still researching the error too.

AdamP avatar

oh wait, I think I see whats going on, I wasn’t looking at my security group root module. One moment let me take a closer look at that

AdamP avatar

ok, I think this is where I left off when I was working on this last. I will make sure I have finished the security group root module… it’s likely that I did not finish this aspect when I was working on it last. Nothing to see here, carry on

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in any case, errors like

The true and false result expressions must have consistent types. The 'false' value includes object attribute
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are from expressions like

for_each = local.rule_create_before_destroy ? local.keyed_resource_rules : {}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

TF always complains about that. The default {} object does not have all the attributes that the main object has

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there is an easy fix (works in all cases):

for_each = { for key, value in local.keyed_resource_rules: key => value if local.rule_create_before_destroy } 
AdamP avatar

I don’t have any locals declared in my root module, am I supposed to add that to my root module?

I’m kinda confused about that for_each and where it should go… if it goes somewhere in the .terraform folder, I think that would be a problem when I get this into a pipeline, where things will run in a container… so that directory won’t be there each time the pipeline runs.

Interesting stuff:

  • I have the exact same syntax for the security group root module on my EKS root modules, and I don’t get the error.

When I comment out my security_groups.tf file, I can run the root module with create_security_group = true with no errors.

module "elasticache_security_group" {
  source  = "cloudposse/security-group/aws"
  version = "2.2.0"
  //  <https://github.com/cloudposse/terraform-aws-security-group>

  namespace = var.namespace
  stage     = var.stage
  name      = var.name

  allow_all_egress = true

  rules = [
    {
      key         = "SOME KEY"
      type        = "ingress"
      from_port   = 6379
      to_port     = 6379
      protocol    = "tcp"
      cidr_blocks = [var.sg_cidr_blocks]
      self        = null
      description = "SOME DESCRIPTION"
    }
  ]

  vpc_id = module.vpc.vpc_id
}

I don’t think this module works with a security group in the same root directory of the module, without adding something not clear from the documentation (to me at least)

.
├── CODEOWNERS
├── README.md
├── backends
│   ├── nonprod.tfbackend
│   ├── production.tfbackend
│   └── staging.tfbackend
├── build.groovy
├── data.tf
├── elasticache-redis.tf
├── environments
│   ├── nonprod.tfvar
│   ├── production.tfvar
│   └── staging.tfvar
├── partial-backend.tf
├── provider.tf
├── security_groups.tf
├── subnets.tf
├── variables.tf
└── vpc.tf

maybe I’m just not exactly sure about that for_each you stated though

AdamP avatar

additional_security_group_rules works fine though, I’m going that route so I can move forward

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that was just an example on how to fix the error

The true and false result expressions must have consistent types. The 'false' value includes object attribute
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that for_each is inside the SG terraform module (I was saying that the module could be improved)

AdamP avatar

ahh ok cool, thanks!!!

AdamP avatar

that makes a lot more sense then

2024-01-20

Wei Quan avatar
Wei Quan

Hello everyone. I am using this terraform-aws-sso terraform module from this github repo: cloudposse/terraform-aws-sso: Terraform module to configure AWS Single Sign-On (SSO) (github.com), and I got this error:

Error: Invalid for_each argument
on .terraform/modules/sso_account_assignments/modules/account-assignments/main.tf line 29, in resource "aws_ssoadmin_account_assignment" "this":
  for_each = local.assignment_map
local.assignment_map will be known only after apply
The "for_each" map includes keys derived from resource attributes that cannot be determined until apply, and so Terraform cannot determine the full set of keys that will identify the instances of this resource.

When working with unknown values in for_each, it's better to define the map keys statically in your configuration and place apply-time results only in the map values.

Alternatively, you could use the -target planning option to first apply only the resources that the for_each value depends on, and then apply a second time to fully converge.
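The pattern the error message recommends can be sketched generically (the names here are illustrative, not the module’s actual code):

```
locals {
  # Problematic: map keys derived from attributes known only after apply.
  # assignment_map = { for ps in aws_ssoadmin_permission_set.this : ps.arn => ps.name }

  # Better: keys come from static configuration; apply-time values go in the values.
  assignment_map = { for name in var.permission_set_names : name => aws_ssoadmin_permission_set.this[name].arn }
}
```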

And I’ve forked the repo and added my fix with this commit: https://github.com/wquan1/terraform-aws-sso/commit/c2049b3e08d278aa79413f40540b1009746faf73. It seems to work fine for me. Here is the Pull Request: https://github.com/cloudposse/terraform-aws-sso/pull/53. Please help review and let me know. Thanks!

1

2024-01-21

2024-01-22

Boris Dyga avatar
Boris Dyga

@Andriy Knysh (Cloud Posse) once the PR has been merged, how long does it usually take to propagate changes to Terraform registry?

The cloudposse/config/aws//modules/conformance-pack still has only 1.1.0 version available

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

should be already in the registry, it usually happens right away since it’s transparent. Not sure why it’s not there, might be some issues with the registry, let’s check a bit later

Boris Dyga avatar
Boris Dyga

Thanks

Patrick McDonald avatar
Patrick McDonald

hello, can I pass in helm plugin commands to the terraform-helm-release provider?

2024-01-23

Karim avatar

Hello all, we’re using the Cloud Posse terraform-aws-rds module, which uses the terraform-null-label v0.25.0 & terraform-aws-route53-cluster-hostname v0.13.0 modules under the hood. Suddenly, terraform fails to download the modules, saying the branch does not exist, as below:

│ Error: Failed to download module
│ Could not download module "this" (context.tf:23) source code from
│ "git::<https://github.com/cloudposse/terraform-null-label?ref=488ab91e34a24a86957e397d9f7262ec5925586a>":
│ error downloading
│ '<https://github.com/cloudposse/terraform-null-label?ref=488ab91e34a24a86957e397d9f7262ec5925586a>':
│ /opt/homebrew/bin/git exited with 128: Cloning into
│ '.terraform/modules/this'...
│ fatal: Remote branch 488ab91e34a24a86957e397d9f7262ec5925586a not found in
│ upstream origin

However, the underlying modules are referenced by their tags, not commit hashes. Any insights into what could be the reason?

kallan.gerard avatar
kallan.gerard

I wonder, are you referencing the same module multiple times in your configuration

kallan.gerard avatar
kallan.gerard

I remember someone talking about a bug in Terraform when it came to multiple copies of an external module

Boris Dyga avatar
Boris Dyga
#83 The access token is now passed in an HTTP header

This is done to avoid exposing the token via data.http.id (which contains the URL) in the logs.

Added the MacOS .DS_Store files to .gitignore

what

• The access token is now passed in an HTTP header • Added the macOS .DS_Store files to .gitignore

why

• This is done to avoid exposing the token via data.http.id (which contains the URL) in the logs.

dan avatar

anyone have an example of using a k8s provider to create resources after this cloudposse eks module has been created?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-helm-release
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module uses k8s and helm providers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, this component uses the k8s provider and k8s resources directly https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/storage-class/main.tf

locals {
  enabled = module.this.enabled

  efs_components = local.enabled ? toset([for k, v in var.efs_storage_classes : v.efs_component_name]) : []

  # In order to use `optional()`, the variable must be an object, but
  # object keys must be valid identifiers and cannot be like "csi.storage.k8s.io/fstype"
  # See <https://github.com/hashicorp/terraform/issues/22681>
  # So we have to convert the object to a map with the keys the StorageClass expects
  ebs_key_map = {
    fstype = "csi.storage.k8s.io/fstype"
  }
  old_ebs_key_map = {
    fstype = "fsType"
  }

  efs_key_map = {
    provisioner-secret-name      = "csi.storage.k8s.io/provisioner-secret-name"
    provisioner-secret-namespace = "csi.storage.k8s.io/provisioner-secret-namespace"
  }

  # Tag with cluster name rather than just stage ID.
  tags = merge(module.this.tags, { Name = module.eks.outputs.eks_cluster_id })
}

resource "kubernetes_storage_class_v1" "ebs" {
  for_each = local.enabled ? var.ebs_storage_classes : {}

  metadata {
    name = each.key
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = each.value.make_default_storage_class ? "true" : "false"
    }
    labels = each.value.labels
  }

  # Tags are implemented via parameters. We use "tagSpecification_n" as the key, starting at 1.
  # See <https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/tagging.md#storageclass-tagging>
  parameters = merge({ for k, v in each.value.parameters : (
    # provisioner kubernetes.io/aws-ebs uses the key "fsType" instead of "csi.storage.k8s.io/fstype"
    lookup((each.value.provisioner == "kubernetes.io/aws-ebs" ? local.old_ebs_key_map : local.ebs_key_map), k, k)) => v if v != null && v != "" },
    each.value.include_tags ? { for i, k in keys(local.tags) : "tagSpecification_${i + 1}" => "${k}=${local.tags[k]}" } : {},
  )

  storage_provisioner = each.value.provisioner
  reclaim_policy      = each.value.reclaim_policy
  volume_binding_mode = each.value.volume_binding_mode
  mount_options       = each.value.mount_options

  # Allowed topologies are poorly documented, and poorly implemented.
  # According to the API spec <https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/#storageclass-v1-storage-k8s-io>
  # it should be a list of objects with a `matchLabelExpressions` key, which is a list of objects with `key` and `values` keys.
  # However, the Terraform resource only allows a single object in a matchLabelExpressions block, not a list,
  # the EBS driver appears to only allow a single matchLabelExpressions block, and it is entirely unclear
  # what should happen if either of the lists has more than one element. So we simplify it here to be singletons, not lists.
  dynamic "allowed_topologies" {
    for_each = each.value.allowed_topologies_match_label_expressions != null ? ["zones"] : []
    content {
      match_label_expressions {
        key    = each.value.allowed_topologies_match_label_expressions.key
        values = each.value.allowed_topologies_match_label_expressions.values
      }
    }
  }

  # Unfortunately, the provider always sets allow_volume_expansion to something whether you provide it or not.
  # There is no way to omit it.
  allow_volume_expansion = each.value.allow_volume_expansion
}

resource "kubernetes_storage_class_v1" "efs" {
  for_each = local.enabled ? var.efs_storage_classes : {}

  metadata {
    name = each.key
    annotations = {
      "storageclass.kubernetes.io/is-default-class" = each.value.make_default_storage_class ? "true" : "false"
    }
    labels = each.value.labels
  }
  parameters = merge({ fileSystemId = module.efs[each.value.efs_component_name].outputs.efs_id },
  { for k, v in each.value.parameters : lookup(local.efs_key_map, k, k) => v if v != null && v != "" })

  storage_provisioner = each.value.provisioner
  reclaim_policy      = each.value.reclaim_policy
  volume_binding_mode = each.value.volume_binding_mode
  mount_options       = each.value.mount_options
}

1
1
RB avatar

Will cloudposse ever move to using CloudControl API via hashicorp/terraform-provider-awscc ?

hashicorp/terraform-provider-awscc

Terraform AWS Cloud Control provider


RB avatar

I’ve seen it before but it looks like it has some resources that may not be available in the aws provider.

ref https://docs.aws.amazon.com/awssupport/latest/user/creating-resources-with-cloudformation.html?icmpid=docs_support_slack#terraform-support-app

Creating AWS Support App in Slack resources with AWS CloudFormation - AWS Support

Create resources for AWS Support App in Slack using an AWS CloudFormation template.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@RB you are asking if Cloud Posse will create TF components for that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(since it’s a Tf provider, anyone can use it with Terraform and Atmos)

RB avatar

not TF components but maybe some modules that may make use of it

RB avatar

i didn’t know it was so much more developed, and i didn’t realize that there are some api calls that only exist in awscc and not in the aws provider

RB avatar

i was wondering if there was any rule against using it (such as until awscc is out of beta)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you prob don’t want to use the http provider to call the API directly, should wait for the awscc provider

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It would have to be an entirely new module to follow proper naming conventions

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

E.g. terraform-awscc-foobar

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I am not sure when/where the tipping point will be. Most cloud posse development is customer driven. This has not come up.

2
RB avatar

It will probably help a lot once it’s out of beta.

It’s cool that it creates cloudformation stacks as an output so drift detection is built in

I wonder if they generate the terraform resources too
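
For a hedged illustration of what using the awscc provider looks like (resource and attribute names follow the Cloud Control / CloudFormation schema converted to snake_case; verify against the provider docs before relying on them):

```hcl
# Hedged sketch: a minimal awscc provider config and one resource.
# awscc resource names mirror the CloudFormation type names, e.g.
# AWS::Logs::LogGroup -> awscc_logs_log_group.
terraform {
  required_providers {
    awscc = {
      source  = "hashicorp/awscc"
      version = "~> 0.70" # illustrative constraint
    }
  }
}

provider "awscc" {
  region = "us-east-1"
}

resource "awscc_logs_log_group" "example" {
  log_group_name    = "example"
  retention_in_days = 30
}
```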

Gheorghe Casian avatar
Gheorghe Casian
#50 Add permission boundary attachment logic

what

This commit adds a variable and logic to attach permission boundaries to permission sets.

why

This functionality is useful in the context of deploying permission sets.

references

fixed #37

Notes

Just-in-time (?) provisioning for the boundary to the target account doesn’t seem to work, so the boundary policy needs to exist in the target account prior to attempting to attach it.

1

2024-01-24

Release notes from terraform avatar
Release notes from terraform
12:53:33 PM

v1.7.1 1.7.1 (January 24, 2024) BUG FIXES:

terraform test: Fix crash when referencing variables or functions within the file level variables block. (#34531) terraform test: Fix crash when override_module block was missing the outputs attribute. (#34563)

terraform test: Fix crash when file level variables reference variables. by liamcervante · Pull Request #34531 · hashicorp/terraform

Fixes #34529 This PR also allows users to reference functions at test-file level global variables, which is something we published within the CHANGELOG for v1.7. Target Release

1.7.1 Draft CHANGEL…

mocking overrides: default to concrete empty object when values are missing by liamcervante · Pull Request #34563 · hashicorp/terraform

When the values or outputs attribute was missing from the override_* blocks in the new mocking framework, we were setting the value to be cty.NilVal. This was then causing a crash later in the proc…


Alex Atkinson avatar
Alex Atkinson

Here’s a bit of old tf kit for setting up a high availability jenkins instance backed by efs. If anyone ever needs. Though there’s probably an up-to-date one at the end of a google search. :P https://github.com/AlexAtkinson/jenkins_efs_terraform

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
2
1

2024-01-25

Vitali avatar

Hi, I am looking for a solution to flip-flopping tags in a module. We use cloudposse/vpn-connection/aws to set up multiple VPNs between AWS VPCs and our DC. Every VPN uses a different tfvar file with namespace, environment, etc. The DC has only one VPN gateway but connects via multiple tunnels, one tunnel per VPC. The module expects customer_gateway_ip_address as a required input. The IP address is always the same, as there is only that one VPN gateway in the DC. The tags on that AWS entry now flip-flop with every apply. Removing the context also removes the tags from the tunnels, not just the customer gateway. Is there an easy solution to this?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

A solution doesn’t come to mind immediately. The most mundane way would be to use lifecycle rules to ignore portions of state. Just understand you would need to fork our module for that to work best.
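
In a fork of the module, the lifecycle approach would look roughly like this (a sketch; the resource and variable names are assumptions about how the module declares its customer gateway):

```hcl
# Hedged sketch: ignore tag changes on the shared customer gateway so
# multiple stacks pointing at the same DC IP stop flip-flopping its tags.
# Note: `ignore_changes` cannot take variables, which is why this has to
# live in the module source itself (i.e. a fork).
resource "aws_customer_gateway" "default" {
  bgp_asn    = var.customer_gateway_bgp_asn    # assumed variable name
  ip_address = var.customer_gateway_ip_address # from the module's inputs
  type       = "ipsec.1"
  tags       = module.this.tags

  lifecycle {
    ignore_changes = [tags, tags_all]
  }
}
```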

Vitali avatar

Thanks.

Karina Titov avatar
Karina Titov

hi! i’m curious if there is a way to tell atlantis to only run plan for files named terragrunt.hcl? Currently my repo has a lot of config files with the .hcl extension, and it looks very messy since of course those are not valid plan files and i get errors for every single file if it’s in the same directory as terragrunt.hcl

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Try #atlantis instead

Brent G avatar
Brent G

So they say best practice for an ECR repo is immutable tags. I have a GH repo where, after code is pushed, it triggers a docker build, then pushes that image to ECR. What I’m running into is how to reference this new image in an aws_ecs_task_definition, because if the workspace is VCS-backed, it’ll trigger off that first commit and will plan/apply before the image is built

Joe Perez avatar
Joe Perez

When you say VCS backed, does this mean you use terraform cloud or a similar service? And what is driving the build/push process? GitHub actions?

Brent G avatar
Brent G

It’s TF Cloud, and the workspace was created as a ‘Version Control Workflow’ as opposed to CLI/API driven

Joe Perez avatar
Joe Perez

Ahhh ok, and this is just the first push that this is happening?

Brent G avatar
Brent G

Really it’s any. I just can’t figure out the proper order of operations here. Since both the TF run and GH Build will trigger at the same time for a commit, I’m just trying to figure out if the VCS Workflow can even do what I want.

Joe Perez avatar
Joe Perez

I haven’t done it with the VCS workflow yet, but is there a pre-run option to run arbitrary scripts?

Joe Perez avatar
Joe Perez

My hope is that you can just run a script prior to terraform initializing that looks for the image tag available in ecr in a loop, when it finds the tag, then it proceeds

Brent G avatar
Brent G

you can specify a pre-plan run task but that’s shared across the org and not the workflow

Joe Perez avatar
Joe Perez

Damn

Brent G avatar
Brent G

I suppose it could just get changed to API driven, and just power it through GH actions call to TFC

Joe Perez avatar
Joe Perez

What happens when tf runs? Does it actually attempt the deployment?

Brent G avatar
Brent G

It’s just that TF would need pre-knowledge of what the resultant tag would be. Like if GH just re-built the image as latest every time it wouldn’t be a problem, but TF would either need to know what the tag was going to be, or the digest, and let the ECR deployment flap until the tag existed.

Joe Perez avatar
Joe Perez

I guess it depends on what your tagging scheme is, but doesn’t terraform cloud expose that info? Eg a short SHA?

Brent G avatar
Brent G

It’s just what goes in the TF.

image = "${aws_ecr_repository.this.repository_url}:${some_voodoo_here}"
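
One hedged way to fill in a placeholder like that is to resolve the newest image at plan time with the aws_ecr_image data source (its most_recent argument needs a reasonably recent AWS provider; and this still races the build if the plan runs before the push finishes):

```hcl
# Hedged sketch: look up the most recently pushed image's digest instead of
# hard-coding a tag. Still subject to the race described above if terraform
# plans before the GitHub build pushes the image.
data "aws_ecr_image" "latest" {
  repository_name = aws_ecr_repository.this.name
  most_recent     = true
}

locals {
  # Pin by digest so the task definition is unambiguous even with mutable tags.
  container_image = "${aws_ecr_repository.this.repository_url}@${data.aws_ecr_image.latest.image_digest}"
}
```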
Joe Perez avatar
Joe Perez

There may be a way to do this with terraform cloud and vcs, but I’m leaning towards disabling vcs and adding a GitHub job that depends on the completion of the build/push workflow

Brent G avatar
Brent G

That’s what I’m leaning towards too, at least for the general case.

Joe Perez avatar
Joe Perez

Also not sure what you’re using for tagging, but the SHA is handy and unique

Joe Perez avatar
Joe Perez

Just not very human readable friendly

Brent G avatar
Brent G

SemVer

Joe Perez avatar
Joe Perez

Oh for sure, but what about in lower environments and development?

Joe Perez avatar
Joe Perez

I’ve used combos of feature branch/SHA as tags for development, then used semver for release after PR(s) merged in

Brent G avatar
Brent G

regardless of how it’s tagged, still requires TF to either be psychic, or wait for GH to trigger it

Joe Perez avatar
Joe Perez

hahaha, it was just a general curiosity question to see how others solve these kinds of problems

Brent G avatar
Brent G

In this case, it’s a Flask app so someone can just run it directly from a GH checkout, worrying about an image is just for Fargate’s benefit

Val Naipaul avatar
Val Naipaul

Re: “adding a GitHub job that depends on the completion of the build/push workflow”
I’ve got some unfinished work towards this end, just using GH to orchestrate (and bypassing ECR altogether):

  1. GH builds and pushes image to its CR
  2. GH updates the ECS task definition for the new image URI
  3. ECS triggers a deployment upon seeing the new task definition, pulling the image from the GHCR as a private registry
Private registry authentication for tasks - Amazon Elastic Container Service

Private registry authentication for tasks using AWS Secrets Manager enables you to store your credentials securely and then reference them in your task definition. This provides a way to reference container images that exist in private registries outside of AWS that require authentication in your task definitions. This feature is supported by tasks hosted on Fargate, Amazon EC2 instances, and external instances using Amazon ECS Anywhere.

aws-actions/amazon-ecs-render-task-definition

Inserts a container image URI into an Amazon ECS task definition JSON file.

2024-01-26

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyone with contacts at terraform-docs who can help expedite this bug fix for a crash when outputs are null? https://github.com/terraform-docs/terraform-docs/pull/749

#749 Fix output values with null

Steps to reproduce

  1. Create an empty dir with terraform file example.tf
output "foo" {
  value = "foo"
}

output "bar" {
  value = null
}
  2. Run
terraform plan
terraform apply
terraform output --json > output.json
  3. Run
terraform-docs markdown ./ --output-values --output-values-from ./output.json

Expected

• Successfully generated markdown

Exists

• Error

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x93e475]

goroutine 1 [running]:
github.com/terraform-docs/terraform-docs/terraform.loadOutputs(0xc000165b80, 0xc000002180)
	/home/runner/work/terraform-docs/terraform-docs/terraform/load.go:298 +0x2d5
github.com/terraform-docs/terraform-docs/terraform.loadModuleItems(0x7ffdd7128646?, 0xc0000022a8?)
	/home/runner/work/terraform-docs/terraform-docs/terraform/load.go:70 +0x106
github.com/terraform-docs/terraform-docs/terraform.LoadWithOptions(0xc000002180)
	/home/runner/work/terraform-docs/terraform-docs/terraform/load.go:41 +0x38
github.com/terraform-docs/terraform-docs/internal/cli.generateContent(0xc000002180)
	/home/runner/work/terraform-docs/terraform-docs/internal/cli/run.go:326 +0x1c
github.com/terraform-docs/terraform-docs/internal/cli.(*Runtime).RunEFunc(0xc00023b540, 0x0?, {0x0?, 0x0?, 0x0?})
	/home/runner/work/terraform-docs/terraform-docs/internal/cli/run.go:134 +0x1e5
github.com/spf13/cobra.(*Command).execute(0xc000005200, {0xc00023b800, 0x4, 0x4})
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:983 +0xabc
github.com/spf13/cobra.(*Command).ExecuteC(0xc000004300)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1115 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1039
github.com/terraform-docs/terraform-docs/cmd.Execute()
	/home/runner/work/terraform-docs/terraform-docs/cmd/root.go:37 +0x1c
main.main()
	/home/runner/work/terraform-docs/terraform-docs/main.go:20 +0x13

Reason

If the output value is null, it would be skipped in terraform output.

So output.json file will be

{
  "foo": {
    "sensitive": false,
    "type": "string",
    "value": "foo"
  }
}

Point in code to fix https://github.com/terraform-docs/terraform-docs/blob/master/terraform/load.go#L298

Versions

• terraform v1.2.2 • terraform docs v0.17.0 795d369 linux/amd64

Reference

terraform-docs/terraform-docs#748

Alex Atkinson avatar
Alex Atkinson

Not fixable. I suspected as much. RE: RDS B/G & replica state disassociation. The docs will get updated though. https://github.com/hashicorp/terraform-provider-aws/issues/33702#issuecomment-1908696514

Comment on #33702 [Bug]: RDS Blue/Green & Replica TFState Tracking

Hi @AlexAtkinson, sorry for the delay on addressing this issue.

Unfortunately, this is not something that can currently be done. The Terraform resource model only allows modifying the current resource, and trying to modify other resources can lead to problems. We’ve documented this in https://github.com/hashicorp/terraform-provider-aws/blob/main/docs/design-decisions/rds-bluegreen-deployments.md. I’ll update it to address the case of replica DBs.

If there are changes to how the Terraform resource model works that allow one resource to modify another, we can reconsider this.

2024-01-28

Doug Bergh avatar
Doug Bergh

i’m using cloudposse/api-gateway/aws to create an api-gateway. It seems to create a log-group role with principal service “ec2.amazonaws.com”. I’m trying to update the api-gateway and i’m getting │ Error: updating API Gateway Account: BadRequestException: The role ARN does not have required permissions configured. Please grant trust permission for API Gateway and add the required role policy. Shouldn’t the principal service be “apigateway.amazonaws.com”?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you are correct, this is an omission

here https://github.com/cloudposse/terraform-aws-api-gateway/blob/main/main.tf#L28

the variable principals needs to be added since the https://github.com/cloudposse/terraform-aws-cloudwatch-logs/blob/main/variables.tf module supports it.

then you could override principals in your code when instantiating the https://github.com/cloudposse/terraform-aws-api-gateway module
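
For reference, the trust relationship the log-group role ultimately needs looks roughly like this (a standalone hedged sketch, not the module’s internal implementation; the role name is illustrative):

```hcl
# Hedged sketch: an IAM role that API Gateway (not EC2) can assume to push
# to CloudWatch Logs. This illustrates the principal Doug expected.
data "aws_iam_policy_document" "apigw_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["apigateway.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "apigw_cloudwatch" {
  name               = "apigw-cloudwatch-logs" # illustrative name
  assume_role_policy = data.aws_iam_policy_document.apigw_assume_role.json
}

resource "aws_iam_role_policy_attachment" "apigw_cloudwatch" {
  role       = aws_iam_role.apigw_cloudwatch.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"
}
```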

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

PRs are welcome

Doug Bergh avatar
Doug Bergh

Andriy, thanks for the quick reply! I’ll try to put together a PR!

2024-01-29

2024-01-31

Release notes from terraform avatar
Release notes from terraform
03:03:30 PM

v1.7.2 1.7.2 (January 31, 2024) BUG FIXES:

backend/s3: No longer returns error when IAM user or role does not have access to the default workspace prefix env:. (#34511) cloud: When triggering a run, the .terraform/modules directory was being excluded from the configuration upload causing Terraform Cloud to try (and sometimes fail) to…

backend/s3: Ignore default workspace prefix errors by gdavison · Pull Request #34511 · hashicorp/terraform

In versions prior to v1.6, the S3 backend ignored all errors other than NoSuchBucket when listing workspaces. This allowed cases where the user did not have access to the default workspace prefix e…

Release notes from terraform avatar
Release notes from terraform
04:03:31 PM

v1.8.0-alpha20240131 1.8.0-alpha20240131 (January 31, 2024) UPGRADE NOTES:

The first plan after upgrading may show resource updates with no apparent changes if -refresh-only or -refresh=false is used. The fix introduced for #34567…

apply schema marks to returned instance values by jbardin · Pull Request #34567 · hashicorp/terraform

The original sensitivity handling implementation applied the marks from a resource schema only when decoding values for evaluation. This appeared to work in most cases, since the resource value cou…
