#terraform (2023-08)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2023-08-01

2023-08-02

Release notes from terraform avatar
Release notes from terraform
11:03:29 AM

v1.6.0-alpha20230802 1.6.0-alpha20230802 (August 02, 2023) NEW FEATURES:

terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in how Terraform tests are written and executed. Terraform tests are now written within .tftest.hcl files, controlled by a series of run blocks. Each run block will execute a Terraform plan or apply command against the Terraform configuration under test and can execute conditions against the resultant…

Release v1.6.0-alpha20230802 · hashicorp/terraform

1.6.0-alpha20230802 (August 02, 2023) NEW FEATURES:

terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in …
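
A minimal sketch of the new test format described above (hypothetical file, variable, and resource names; assumes an aws_s3_bucket resource in the configuration under test):

# tests/example.tftest.hcl

run "bucket_name_matches" {
  command = plan

  variables {
    bucket_name = "example-test-bucket"
  }

  assert {
    condition     = aws_s3_bucket.this.bucket == "example-test-bucket"
    error_message = "Bucket name did not match the bucket_name input"
  }
}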

2023-08-03

Andrew Schwartz avatar
Andrew Schwartz

I’m trying to customize the image used for the Datadog agent, specifically to install the puma integration as described here. Datadog support directs me to run this command in the image build. I am having trouble figuring out where to start looking to do this. In our configuration, we have a datadog-agent Terraform component, but the abstraction is too great for my limited Terraform knowledge to dig deeper. I see that this component defines

module "datadog_agent" {
  source  = "cloudposse/helm-release/aws"

but I am unable to make sense of this terraform source to understand where it sources the datadog agent image, and how I could go about customizing ours. If anyone could give me a few pointers to help me know where to look, it’d be greatly appreciated!

Fizz avatar

You can configure the values.yaml used by the helm chart and pass it into the module. Within the values.yaml you can add your configuration https://github.com/cloudposse/terraform-aws-helm-release/blob/main/examples/complete/main.tf line 45

data "aws_eks_cluster_auth" "kubernetes" {
  name = module.eks_cluster.eks_cluster_id
}

provider "helm" {
  kubernetes {
    host                   = module.eks_cluster.eks_cluster_endpoint
    token                  = data.aws_eks_cluster_auth.kubernetes.token
    cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
  }
}

provider "kubernetes" {
  host                   = module.eks_cluster.eks_cluster_endpoint
  token                  = data.aws_eks_cluster_auth.kubernetes.token
  cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
}


module "helm_release" {
  source = "../../"

  # source  = "cloudposse/helm-release/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  repository    = var.repository
  chart         = var.chart
  chart_version = var.chart_version

  create_namespace_with_kubernetes = var.create_namespace
  kubernetes_namespace             = var.kubernetes_namespace
  service_account_namespace        = var.kubernetes_namespace
  service_account_name             = "aws-node-termination-handler"
  iam_role_enabled                 = true
  iam_source_policy_documents      = [one(data.aws_iam_policy_document.node_termination_handler[*].json)]

  eks_cluster_oidc_issuer_url = module.eks_cluster.eks_cluster_identity_oidc_issuer

  atomic          = var.atomic
  cleanup_on_fail = var.cleanup_on_fail
  timeout         = var.timeout
  wait            = var.wait

  values = [
    # Chart overrides: this is the values.yaml input referenced above ("line 45" of the linked example)
    file("${path.module}/values.yaml")
  ]

  context = module.this.context

  depends_on = [
    module.eks_cluster,
    module.eks_node_group,
  ]
}

data "aws_iam_policy_document" "node_termination_handler" {
  #bridgecrew:skip=BC_AWS_IAM_57:Skipping `Ensure IAM policies does not allow write access without constraint` because this is a test case
  statement {
    sid       = ""
    effect    = "Allow"
    resources = ["*"]

    actions = [
      "autoscaling:CompleteLifecycleAction",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeTags",
      "ec2:DescribeInstances",
      "sqs:DeleteMessage",
      "sqs:ReceiveMessage",
    ]
  }
}

Fizz avatar

# Default values for Datadog Agent
# See Datadog helm documentation to learn more:
# https://docs.datadoghq.com/agent/kubernetes/helm/

# FOR AN EFFORTLESS UPGRADE PATH, DO NOT COPY THIS FILE AS YOUR OWN values.yaml.
# ONLY SET THE VALUES YOU WANT TO OVERRIDE IN YOUR values.yaml.

# nameOverride -- Override name of app
nameOverride: # ""

# fullnameOverride -- Override the fully qualified app name
fullnameOverride: # ""

# targetSystem -- Target OS for this deployment (possible values: linux, windows)
targetSystem: "linux"

# commonLabels -- Labels to apply to all resources
commonLabels: {}
#   team_name: dev

# registry -- Registry to use for all Agent images (default gcr.io)
# Currently we offer Datadog Agent images on:
# GCR - use gcr.io/datadoghq (default)
# DockerHub - use docker.io/datadog
# AWS - use public.ecr.aws/datadog
registry: gcr.io/datadoghq

datadog:
  # datadog.apiKey -- Your Datadog API key
  ## ref: https://app.datadoghq.com/account/settings#agent/kubernetes
  apiKey: #

  # datadog.apiKeyExistingSecret -- Use existing Secret which stores API key instead of creating a new one. The value should be set with the api-key key inside the secret.
  ## If set, this parameter takes precedence over "apiKey".
  apiKeyExistingSecret: #

  # datadog.appKey -- Datadog APP key required to use metricsProvider
  ## If you are using clusterAgent.metricsProvider.enabled = true, you must set
  ## a Datadog application key for read access to your metrics.
  appKey: #

  # datadog.appKeyExistingSecret -- Use existing Secret which stores APP key instead of creating a new one. The value should be set with the app-key key inside the secret.
  ## If set, this parameter takes precedence over "appKey".
  appKeyExistingSecret: #

  # agents.secretAnnotations -- Annotations to add to the Secrets
  secretAnnotations: {}
  #   key: "value"

  ## Configure the secret backend feature https://docs.datadoghq.com/agent/guide/secrets-management
  ## Examples: https://docs.datadoghq.com/agent/guide/secrets-management/#setup-examples-1
  secretBackend:
    # datadog.secretBackend.command -- Configure the secret backend command, path to the secret backend binary.
    ## Note: If the command value is "/readsecret_multiple_providers.sh", and datadog.secretBackend.enableGlobalPermissions is enabled below, the agents will have permissions to get secret objects across the cluster.
    ## Read more about "/readsecret_multiple_providers.sh": https://docs.datadoghq.com/agent/guide/secrets-management/#script-for-reading-from-multiple-secret-providers-readsecret_multiple_providerssh
    command: # "/readsecret.sh" or "/readsecret_multiple_providers.sh" or any custom binary path

    # datadog.secretBackend.arguments -- Configure the secret backend command arguments (space-separated strings).
    arguments: # "/etc/secret-volume" or any other custom arguments

    # datadog.secretBackend.timeout -- Configure the secret backend command timeout in seconds.
    timeout: # 30

    # datadog.secretBackend.enableGlobalPermissions -- Whether to create a global permission allowing Datadog agents to read all secrets when `datadog.secretBackend.command` is set to `"/readsecret_multiple_providers.sh"`.
    enableGlobalPermissions: true

    # datadog.secretBackend.roles -- Creates roles for Datadog to read the specified secrets - replacing `datadog.secretBackend.enableGlobalPermissions`.
    roles: []
    # - namespace: secret-location-namespace
    #   secrets:
    #     - secret-1
    #     - secret-2

  # datadog.securityContext -- Allows you to overwrite the default PodSecurityContext on the Daemonset or Deployment
  securityContext:
    runAsUser: 0
    # seLinuxOptions:
    #   user: "system_u"
    #   role: "system_r"
    #   type: "spc_t"
    #   level: "s0"

  # datadog.hostVolumeMountPropagation -- Allow to specify the mountPropagation value on all volumeMounts using HostPath
  ## ref: https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
  hostVolumeMountPropagation: None

  # datadog.clusterName -- Set a unique cluster name to allow scoping hosts and Cluster Checks easily
  ## The name must be unique and must be dot-separated tokens with the following restrictions:
  ## * Lowercase letters, numbers, and hyphens only.
  ## * Must start with a letter.
  ## * Must end with a number or a letter.
  ## * Overall length should not be higher than 80 characters.
  ## Compared to the rules of GKE, dots are allowed whereas they are not allowed on GKE:
  ## https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#Cluster.FIELDS.name
  clusterName: #

  # datadog.site -- The site of the Datadog intake to send Agent data to.
  # (documentation: https://docs.datadoghq.com/getting_started/site/)
  ## Set to 'datadoghq.com' to send data to the US1 site (default).
  ## Set to 'datadoghq.eu' to send data to the EU site.
  ## Set to 'us3.datadoghq.com' to send data to the US3 site.
  ## Set to 'us5.datadoghq.com' to send data to the US5 site.
  ## Set to 'ddog-gov.com' to send data to the US1-FED site.
  ## Set to 'ap1.datadoghq.com' to send data to the AP1 site.
  site: # datadoghq.com

  # datadog.dd_url -- The host of the Datadog intake server to send Agent data to, only set this option if you need the Agent to send data to a custom URL
  ## Overrides the site setting defined in "site".
  dd_url: # https://app.datadoghq.com

  # datadog.logLevel -- Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, off
  logLevel: INFO

  # datadog.kubeStateMetricsEnabled -- If true, deploys the kube-state-metrics deployment
  ## ref: https://github.com/kubernetes/kube-state-metrics/tree/kube-state-metrics-helm-chart-2.13.2/charts/kube-state-metrics
  ## The kubeStateMetricsEnabled option will be removed in the 4.0 version of the Datadog Agent chart.
  kubeStateMetricsEnabled: false

  kubeStateMetricsNetworkPolicy:
    # datadog.kubeStateMetricsNetworkPolicy.create -- If true, create a NetworkPolicy for kube state metrics
    create: false

  kubeStateMetricsCore:
    # datadog.kubeStateMetricsCore.enabled -- Enable the kubernetes_state_core check in the Cluster Agent (Requires Cluster Agent 1.12.0+)
    ## ref: https://docs.datadoghq.com/integrations/kubernetes_state_core
    enabled: true

    rbac:
      # datadog.kubeStateMetricsCore.rbac.create -- If true, create & use RBAC resources
      create: true

    # datadog.kubeStateMetricsCore.ignoreLegacyKSMCheck -- Disable the auto-configuration of legacy kubernetes_state check (taken into account only when datadog.kubeStateMetricsCore.enabled is true)
    ## Disabling this field is not recommended as it results in enabling both checks, it can be useful though during the migration phase.
    ## Migration guide: https://docs.datadoghq.com/integrations/kubernetes_state_core/?tab=helm#migration-from-kubernetes_state-to-kubernetes_state_core
    ignoreLegacyKSMCheck: true

    # datadog.kubeStateMetricsCore.collectSecretMetrics -- Enable watching secret objects and collecting their corresponding metrics kubernetes_state.secret.*
    ## Configuring this field will change the default kubernetes_state_core check configuration and the RBACs granted to Datadog Cluster Agent to run the kubernetes_state_core check.
    collectSecretMetrics: true

    # datadog.kubeStateMetricsCore.collectVpaMetrics -- Enable watching VPA objects and collecting their corresponding metrics kubernetes_state.vpa.*
    ## Configuring this field will change the…
Fizz avatar

You want the section named confd on line 500

Andrew Schwartz avatar
Andrew Schwartz

Thanks! This is very helpful. One addition: per the docs and Datadog support, we are instructed to explicitly install the puma integration, as it is not provided by default in the Datadog agent. Specifically, we need to add

RUN agent integration install -r -t datadog-puma==1.2.0

to the image dockerfile, or otherwise invoke this same command on a boot hook.

I am inferring from this yaml file that the gcr datadoghq registry is how the image is specified, and if I want a custom image with the above added, would I fork this chart and add our own image? Or is there an easier way to customize the container we are using for our datadog agent?

registry: gcr.io/datadoghq
Fizz avatar

You would specify your registry in the values.yaml and then deploy it with helm. That would tell helm how you want the install customized.
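
A minimal sketch of that approach with the cloudposse/helm-release/aws module, assuming you have built and pushed a custom agent image. The registry, image name, and tag below are hypothetical; the registry and agents.image keys follow the Datadog chart's values layout, and other required module inputs are omitted:

module "datadog_agent" {
  source  = "cloudposse/helm-release/aws"
  # version = "x.x.x"

  repository = "https://helm.datadoghq.com"
  chart      = "datadog"

  values = [
    yamlencode({
      registry = "123456789012.dkr.ecr.us-east-1.amazonaws.com" # hypothetical private registry
      agents = {
        image = {
          name = "datadog-agent-puma" # hypothetical image built from the stock agent
          tag  = "7-puma"             # e.g. with: RUN agent integration install -r -t datadog-puma==1.2.0
        }
      }
    })
  ]
}
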

2023-08-04

2023-08-05

2023-08-09

Release notes from terraform avatar
Release notes from terraform
01:43:31 PM

v1.5.5 1.5.5 (August 9, 2023) terraform init: Fix crash when using invalid configuration in backend blocks. (#33628)

Make config errors more important during init operations by liamcervante · Pull Request #33628 · hashicorp/terraform

This PR updates the backend initialisation logic so that if the value returned by parsing the configuration isn’t wholly known it returns an error diagnostic instead of crashing. This happens becau…

2023-08-10

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
HashiCorp adopts Business Source License

HashiCorp adopts the Business Source License to ensure continued investment in its community and to continue providing open, freely available products.

fb-wow7
2
1
4
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


What is considered a competitive offering?
HashiCorp considers a competitive offering to be a product or service provided to users or customers outside of your organization that has significant overlap with the capabilities of HashiCorp’s commercial products or services. For example, this definition would include providing a HashiCorp tool as a hosted service or embedding HashiCorp products in a solution that is sold competitively against our offerings. If you need further clarification with respect to a particular use case, you can email [email protected]. Custom licensing terms are also available to provide more clarity and enable use cases beyond the BSL limitations.

Matthew James avatar
Matthew James
#3663 Hashicorp Mozilla Public License v2.0 (MPL 2.0) to the Business Source License v1.1 (BSL or BUSL) change

Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request. Searching for pre-existing feature requests helps us consolidate datapoints for identical requirements into a single place, thank you!
• Please do not leave “+1” or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
• If you are interested in working on this issue or have submitted a pull request, please leave a comment.


Overview of the Issue

• Not a bug, more of seeking guidance. Do we know how the change in licensing will affect this project, and if so, what actions need to be taken?

https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license

tommy avatar

Will this lead to a fork of free terraform?

joeherman7 avatar
joeherman7

Spacelift responds: https://spacelift.io/blog/hashicorps-license-change

Hope the impacts are manageable for CloudPosse

What HashiCorp’s license change means for Spacelift customers

HashiCorp’s license change - what does it mean in practice and how does it impact us.

Eamon Keane avatar
Eamon Keane

Think Spacelift/Digger/Env0 will give it a go.

https://github.com/diggerhq/open-terraform

Based on this year, terraform core looks to have around 4 main developers. https://github.com/hashicorp/terraform/graphs/contributors?from=2021-11-08&to=2023-08-11&type=c

I guess the issue will be whether, if a breaking change occurs in Terraform, the main TF providers keep supporting v1.5.5. The trend is to auto-generate the providers, so perhaps it won’t be too difficult.

3
Alex Atkinson avatar
Alex Atkinson

The reaction is strong. I really hope that digger fork, or another, gets going. There are a lot of PRs that were never accepted on the tf project that would have made it so much better. https://news.ycombinator.com/item?id=37081306

3
Mohammed Yahya avatar
Mohammed Yahya

So what does this drama mean for someone like a DevOps engineer who helps companies with their IaC work? Am I a competitor?

Eamon Keane avatar
Eamon Keane

you’re fine, Hashicorp aren’t going to come after you. The issue is that conservative/paranoid organisations, the kind who are just coming into Terraform, decide it’s not worth the legal ambiguity.

Earthly, for example, went BUSL but found ground-up adoption in Enterprises was hampered by ambiguity so they then reverted to open source.

https://podcasts.apple.com/us/podcast/two-time-founder-vlad-a-ionescu-on-finding-success/id1514646781?i=1000623896594

https://twitter.com/thockin/status/1690223611372851200

Tim Hockin (thockin.yaml) on Twitter

@msw @kelseyhightower @bassamtabbara @upbound_io @crossplane_io Relying on FUD and an abundance of caution from larger companies to get the effect they want without spelling it out completely.

Sad.

I can’t recommend any product team touch anything Hashi now, and all of the OSS projects I work with seem to be ripping out all Hashi deps.

1
Mohammed Yahya avatar
Mohammed Yahya
HashiCorp's BSL License Change: What this means for Upbound customers and the Crossplane community

What does HashiCorp’s license change mean for our customers and the Crossplane community? It does not impact Upbound’s use of Terraform. As maintainers of Crossplane, it also does not impact the project’s use of Terraform.

DaniC (he/him) avatar
DaniC (he/him)

The fact that various vendors quickly jumped in and tried to calm down the masses is great, however that is only the tip of the iceberg.

If you overlay what Tim Hockin & https://typefully.com/iamvlaaaaaaad said … well the cat is out of the bag, and that is the unmeasurable damage corporations don’t account for. See RH/IBM with CentOS, or the move RH did a while back with Ansible.

Very simple view:

Nowadays my world is in GCP, where the entire IaC investment put in by Google is not in Cloud Deployment Manager (similar to CloudFormation) but in TF.

If a very well respected person like Tim H says what he said, then ….. sad

The precedent was created with Mongo and Elastic, hence the copycats …

Eamon Keane avatar
Eamon Keane

Seems Digger are going to collab with others. Terragrunt had the wishful thinking idea of them being the new ‘open source’ terraform.

This is in the works, precisely along the lines of teaming up / industry backing. It was a fun weekend :) Stay tuned!

Hopefully GCP, Oracle and maybe Tencent back it with engineers and it lands in CNCF. GCP did some of this with Airflow where they outsourced maintenance/core contributions to some Polish folks.

https://github.com/diggerhq/open-terraform/issues/5#issuecomment-1676830159 https://github.com/gruntwork-io/terragrunt/issues/2662

MattyB avatar

Has anyone seen any discussions on potential forks of Hashi Vault?

Eamon Keane avatar
Eamon Keane

On forks of Vault, I haven’t seen anything. The reasons it probably won’t be forked are that the hyperscalers have their own solutions to cover the 80% use case, the abstraction by tools like External Secrets Operator, it’s a complex product, and HashiCorp’s FAQs seem tolerant of internal teams at enterprises continuing to operate it for free.

New projects might take a chance on Infisical ($3m seed) but which is also single-vendor.

Reasons it might be forked are that it’s priced very highly, and if many second- and third-tier clouds use it for their own (external-facing) secrets service, they may decide to pool maintenance.

Meb avatar

I don’t get the point here. Would you like to see HashiCorp become like Docker Inc? All to avoid paying licences? And in this case it’s more Spacelift/env0 taking the hit, as they are competing with Terraform Cloud, which by the way could be improved. HashiCorp overpriced? But compared to what, GitLab and other solutions? I see dumb SaaS for project management like Miro and such raking in a lot more if you count per-user licences!

MattyB avatar

It made sense for our company to run Hashi Vault before they hiked up their rates. Now it’s cheaper to use the cloud provider’s secret manager, especially given the additional overhead of running and managing the enterprise instance.

2
O K avatar

Do you know why I’m getting this DNS fwd/rev mismatch error for the AWS MSK module? Previously I used a different, older module and didn’t see such an error

nc -vz msk-dev-broker-1.dev.project.internal 9092
DNS fwd/rev mismatch: b-1.egdev1devmskdev.avevhi.c8.kafka.eu-central-1.amazonaws.com != ip-10-10-21-57.eu-central-1.compute.internal
b-1.egdev1devmskdev.avevhi.c8.kafka.eu-central-1.amazonaws.com [10.10.21.57] 9092 (?) open
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

PTR records are not supported for private IP addresses like 10.10.21.57. Probably what has changed is that you are using an updated tool that now checks for PTR records where it did not before, rather than anything changing with the deployment itself.

2023-08-11

Brent G avatar
Brent G

Do none of the cloudposse subnet modules (dynamic, multi-az, named) support a single NAT gateway mode, rather than one per AZ? They all seem to feed off the number of private subnets you pass in

Alex Jurkiewicz avatar
Alex Jurkiewicz

Why have many AZs if you only have one gateway?

Brent G avatar
Brent G

Because we’re fine with that trade-off of redundancy at the price difference

Brent G avatar
Brent G

We don’t need that many nines of uptime. Usually when things break, the whole region is broken; I can’t remember a time when we’ve had an issue with a single AZ.

mrwacky avatar
mrwacky

You’ll pay a bunch in cross-AZ network transit too. That’ll offset your savings.

Maybe the workaround is to create the nat GW separately

1
Brent G avatar
Brent G

only if there’s a lot of outbound traffic

Alex Jurkiewicz avatar
Alex Jurkiewicz

What I mean is, why spread your infra over many AZs in the first place? Put everything in one AZ. You’ll save on cross AZ traffic and complexity.

Using many AZs but a NAT in only one of them is worst of both worlds. Less reliable and more expensive.

this1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s a false sense of security to have multiple AZs and a single NAT gateway

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, I think recent changes to dynamic subnets support a configurable number of NAT gateways. Cc @Jeremy G (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

We have too many overly-specialized subnet modules, so in an effort to cut down, we are focusing our effort on terraform-aws-dynamic-subnets , which supports a configurable number of NATs and many other nice features.

cloudposse/terraform-aws-dynamic-subnets
1
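
A minimal sketch of limiting NAT gateways with that module, assuming the v2.x inputs (max_nats in particular; module references and AZ names are hypothetical):

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  # version = "x.x.x"

  vpc_id             = module.vpc.vpc_id
  igw_id             = [module.vpc.igw_id]
  ipv4_cidr_block    = [module.vpc.vpc_cidr_block]
  availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

  nat_gateway_enabled = true
  max_nats            = 1 # all private subnets share a single NAT gateway

  context = module.this.context
}
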
Deepak Verma avatar
Deepak Verma
clouddrove/terraform-aws-subnet

Terraform module to create public, private and public-private subnet with network acl, route table, Elastic IP, nat gateway, flow log.

Deepak Verma avatar
Deepak Verma

you can find multiple AZs with multiple NAT gateways. It’s a managed module.

2023-08-12

2023-08-13

2023-08-14

2023-08-15

Hao Wang avatar
Hao Wang
OpenTF Foundation

The OpenTF Foundation. Supporting an impartial, open, and community-driven Terraform.

6
Meb avatar

And guess who’s behind this? See the list of the companies already on it. Most of those are directly threatened by the BUSL; none are end customers.

OpenTF Foundation

The OpenTF Foundation. Supporting an impartial, open, and community-driven Terraform.

loren avatar

Indirectly, all the users of the products built by those companies, for whom TFC didn’t cut it or was too expensive

Meb avatar

Well, they are making money with HashiCorp software, and they could cut a deal with them and pay licences. That’s how it works. Nope, they claim to be heroes of the open source cause while all their offerings are closed source.

loren avatar

I don’t see a problem with it. TFC is also closed source

loren avatar

And free open source is exactly that: free to use, as they did. They helped build the Terraform community, and then competed well. Hashi just had to build a paid product people actually wanted, and they failed

Meb avatar

TFC is the way HashiCorp makes money to fund TF. Same for the Vault Enterprise offering and HCP.

loren avatar

Hashi could easily have spread the load of terraform maintenance, but they largely rejected community involvement and contributions. If folks fork and succeed, perhaps Hashicorp will switch to use the fork in their enterprise and cloud offerings

this3
Meb avatar

HashiCorp did a great job with their open source and free products. They deserve to make more money. And yes, TFC sucks, we ditched it.

Meb avatar

HashiCorp using the fork? That would kill them. That’s insane

loren avatar

I don’t see why it would kill them, if it provides as good or better capabilities, and the license is amenable

Meb avatar

it would make HashiCorp like Docker Inc, which again is something I don’t want to see.

loren avatar

That doesn’t track to me. They’d lose some control, but also be able to refocus resources on their paid offerings

loren avatar

I agree they ought to be able to make money, though “deserve” is probably too strong IMO. They need to compete and win with a paid product

Justin Smith avatar
Justin Smith

That’s a good way of putting it. Instead of trying to prevent people from competing, win people over by competing with a better product.

1
Justin Smith avatar
Justin Smith

I’m interested to see if Cloudposse has any thoughts on OpenTF or intends to back it.

loren avatar

Lots more folks pledging support, checking the PRs… https://github.com/opentffoundation/manifesto

opentffoundation/manifesto
Meb avatar

Pledging is free. Efforts on the long run is another.

2
loren avatar

Of course, it’s just getting started. The future is unwritten. There was a previous version where several companies were also committing 1 or more engineers for 5 years. I believe that commitment still holds, since they removed it only because they didn’t want to deter folks from pledging who weren’t comfortable with a commitment. Check the PRs

Justin Smith avatar
Justin Smith

The future hasn’t been written, but the BSL change is a bad sign for how HashiCorp is doing as a company. If you make money doing Terraform consulting and the creator is desperately trying to squeeze money from people, well… put two and two together.

1
loren avatar
#73 add cloud posse

Add Cloud Posse to list of co-signers

3
Hao Wang avatar
Hao Wang

With HashiCorp’s innovation gone this way, other companies may not produce the same great features. I always learned from their CTO’s speeches; I just feel it’s a pity/a shame that their CTO may not be that great without the spotlight of open source projects, for there will be no next Terraform any more

Hao Wang avatar
Hao Wang

sorry for spamming, sent it to the wrong place the first time.

Eamon Keane avatar
Eamon Keane

Ironically, there may be some legal ambiguity in HashiCorp’s BSL change, as the Terragrunt founder pointed out.

Oh wow, that's a very good point on the CLA language! I wonder if that invalidates this license change? At least for external contributions?
...

Might be worth checking with a lawyer...

https://www.reddit.com/r/Terraform/comments/15rtieg/comment/jwf8qap/?utm_source=reddit&utm_medium=web2x&context=3&rdt=60848

DaniC (he/him) avatar
DaniC (he/him)

@marcinw in case you folks have the resources to see if the CLA wording and the pledge can change Hashi’s mind .. just saying

marcinw avatar
marcinw

We have our legal looking at it today.

1
marcinw avatar
marcinw


TFC is the way Hashicorp make money to fund TF
That’s an interesting take. Perhaps a few numbers worth considering though:

• the revenue made from TFC/TFE;

• the maintenance and development cost of core Terraform;

• the value of unpaid work (outside contributions) that went into the above from non-Hashi folks to whom the CLA promised FOSS forever;

• the value of unpaid work on the ecosystem (providers, modules, tutorials, linters, etc.)

• the marketing value from running as FOSS;

• the value of open source projects that they build on;

3
Hao Wang avatar
Hao Wang

considering the complexity of open source licensing, and the oversight from lawyers… it may have happened 20 years ago, but I’m not sure it would be the case now

el avatar

hey all :wave: anyone have advice for 1) sorting a variables.tf file alphabetically, and 2) linting variables to make sure they all have types and descriptions?

Mohammed Yahya avatar
Mohammed Yahya

tflint for linting

tfsort for sorting

terraform-linters/tflint
AlexNabokikh/tfsort
1
Mohammed Yahya avatar
Mohammed Yahya

if you like them, please give stars

el avatar

thanks! I didn’t realize tflint could check for missing variable descriptions

el avatar

and will check out tfsort, didn’t know about that

Mohammed Yahya avatar
Mohammed Yahya

it can

Mohammed Yahya avatar
Mohammed Yahya

Configuration

This plugin can take advantage of additional features by configuring the plugin block. Currently, this configuration is only available for preset.

Here’s an example:

plugin "terraform" {
    // Plugin common attributes

    preset = "recommended"
}

preset

Default: all (recommended for the bundled plugin)

Enable multiple rules at once. Please see Rules for details. Possible values are recommended and all.

The preset has higher priority than disabled_by_default and lower priority than each rule block.

When using the bundled plugin built into TFLint, you can use this plugin without declaring a “plugin” block. In this case the default is recommended.
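
For example, a minimal .tflint.hcl that opts into the all preset (which, per the thread below, is what enables the documentation rules) might look like this:

plugin "terraform" {
  enabled = true
  preset  = "all" # includes terraform_documented_variables and terraform_documented_outputs
}
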

el avatar

perfect, thanks! Just used tfsort as well and it was exactly what I needed. It’s missing from this list https://github.com/shuaibiyy/awesome-terraform

shuaibiyy/awesome-terraform

Curated list of resources on HashiCorp’s Terraform

Mohammed Yahya avatar
Mohammed Yahya

you can check my star list here https://github.com/stars/mhmdio/lists/terraform-helpers in case you are missing something

2
Mohammed Yahya avatar
Mohammed Yahya

I just tested tflint with a missing variable description

Notice: `app_fqdn` output has no description (terraform_documented_outputs)

  on outputs.tf line 29:
  29: output "app_fqdn" {

Reference: <https://github.com/terraform-linters/tflint-ruleset-terraform/blob/v0.4.0/docs/rules/terraform_documented_outputs.md>
Mohammed Yahya avatar
Mohammed Yahya
Notice: `webapp_container_port` variable has no description (terraform_documented_variables)

  on variables.tf line 126:
 126: variable "webapp_container_port" {

Reference: <https://github.com/terraform-linters/tflint-ruleset-terraform/blob/v0.4.0/docs/rules/terraform_documented_variables.md>
el avatar

ahh cool thank you! saved me a lot of time. I wasn’t using the “all” preset, just “recommended”

1
mrwacky avatar
mrwacky

That’s a great list @Mohammed Yahya

1
Hao Wang avatar
Hao Wang

Thanks @Mohamed Naseer

2023-08-16

Elad Levi avatar
Elad Levi

I would appreciate it if you could take a look at the PR. It’s for firewall-manager - waf_v2.tf @Andriy Knysh (Cloud Posse) @Dan Miller (Cloud Posse)

1
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

thanks for the fix. approved and merged

Elad Levi avatar
Elad Levi

Thanks :sweetops:

Release notes from terraform avatar
Release notes from terraform
08:53:31 PM

v1.6.0-alpha20230816 1.6.0-alpha20230816 (Unreleased) NEW FEATURES:

terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in how Terraform tests are written and executed. Terraform tests are now written within .tftest.hcl files, controlled by a series of run blocks. Each run block will execute a Terraform plan or apply command against the Terraform configuration under test and can execute conditions against the resultant plan…

Release v1.6.0-alpha20230816 · hashicorp/terraform

1.6.0-alpha20230816 (Unreleased) NEW FEATURES:

terraform test: The previously experimental terraform test command has been moved out of experimental. This comes with a significant change in how T…

el avatar

anyone know of any tools to facilitate provider upgrades? I’m looking at dependabot and wondering if I should consider something else

Alex Jurkiewicz avatar
Alex Jurkiewicz

Dependabot is great. But be sure provider upgrades are worth doing. IMO, provider upgrades aren’t useful because they only contain new features. It’s less work to upgrade providers on demand than to review dependabot PRs every Monday morning forever

2
mrwacky avatar
mrwacky

I was looking at this the other day - https://github.com/minamijoyo/tfupdate

minamijoyo/tfupdate

Update version constraints in your Terraform configurations

z0rc3r avatar


because they only contain new features
this is false. new features are around 30% or less of a regular terraform provider’s release notes

z0rc3r avatar


less work to upgrade providers on demand
in my experience it’s the contrary. it’s a major PITA to upgrade providers if the current provider is a year or more old

z0rc3r avatar

i prefer regular frequent updates with small changes that are easy to track and verify

Alex Jurkiewicz avatar
Alex Jurkiewicz

My experience has been AWS provider fixes are very rarely things we update for. And the only upgrades with any upgrade pain have been the few major version bumps

loren avatar

I’m looking forward to the upcoming “groups” feature in dependabot to combine updates of different dependencies but within the same language. Should reduce the pr burden. I also like to set a monthly schedule, weekly was definitely too much

Alex Jurkiewicz avatar
Alex Jurkiewicz

i believe you can do all that already

loren avatar

Schedule yes, groups only if you’ve opted into the feature currently in beta

loren avatar

Oh it’s public beta now, with some new options behind another private beta… https://github.com/dependabot/dependabot-core/issues/1190#issuecomment-1623832701

Comment on #1190 Add grouped updates

GREAT NEWS! In case you haven’t seen it already, grouping rules for version updates is now in PUBLIC BETA which means you will be able to set up grouping rules for any repo now!

Since we are in public beta, you may notice some instability or changes in behaviour without notice. If you encounter any bugs, please file new issues for them and the team will take a look.

You can see the docs on how to set these up here: https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#groups

We’ll be leaving this issue open until we exit public beta.

If you have any feedback, please feel free to email me at [[email protected]](mailto:[email protected]) or set up a meeting with me by picking a convenient time in my calendar here.

2023-08-17

Meriç Özkayagan avatar
Meriç Özkayagan

Hello, I have a question. In the https://github.com/cloudposse/terraform-aws-cloudwatch-events module I am trying to create an event_pattern. I am using Terragrunt, and here is my terragrunt.hcl. What I do not understand is that whatever I’ve tried, the event pattern is always wrong.

include {
  path = find_in_parent_folders()
}

locals {
  common_vars                       = yamldecode(file(find_in_parent_folders("common_vars.yaml")))
  name                              = "cms"
  cloudwatch_event_rule_description = "ecs task autoscale was stopped from an external state"
  cloudwatch_event_rule_is_enabled  = true
  cloudwatch_event_target_id        = "ECSTaskStopped"
  cloudwatch_event_rule_pattern = {
    source      = ["aws.ecs"]
    detail-type = ["ECS Task State Change"]
    detail = {
      group = ["service:${local.common_vars.namespace}-${local.common_vars.environment}-${local.name}"]
      stoppedReason = [{
        anything-but = {
          prefix = "Scaling activity initiated by (deployment"
        }
      }]
      lastStatus = ["STOPPED"]
    }
  }
}

terraform {
  source = "github.com/cloudposse/terraform-aws-cloudwatch-events//.?ref=0.6.1"
}

inputs = {
  name                              = "${local.common_vars.namespace}-${local.common_vars.environment}-${local.name}"
  cloudwatch_event_target_arn       = dependency.sns_topic.outputs.sns_topic_arn
  cloudwatch_event_rule_description = local.cloudwatch_event_rule_description
  cloudwatch_event_rule_is_enabled  = local.cloudwatch_event_rule_is_enabled
  cloudwatch_event_target_id        = local.cloudwatch_event_target_id
  cloudwatch_event_rule_pattern     = local.cloudwatch_event_rule_pattern

  tags = local.common_vars.tags
}

dependency "sns_topic" {
  config_path = "../../sns/slack-notify"
}

I am not 100% sure that the event pattern is correct, but I have tried with the example in the module too and it is not working. Here is the error message.

aws_cloudwatch_event_rule.this: Creating...
╷
│ Error: creating EventBridge Rule (dummy-service-name): InvalidEventPatternException: Event pattern is not valid. Reason: Filter is not an object
│  at [Source: (String)""{\"detail\":{\"eventTypeCategory\":[\"issue\"],\"service\":[\"EC2\"]},\"detail-type\":[\"AWS Health Event\"],\"source\":[\"aws.health\"]}""; line: 1, column: 2]
│
│   with aws_cloudwatch_event_rule.this,
│   on main.tf line 10, in resource "aws_cloudwatch_event_rule" "this":
│   10: resource "aws_cloudwatch_event_rule" "this" {
│

Any ideas here?

cloudposse/terraform-aws-cloudwatch-events

Terraform Module for provisioning CloudWatch Events rules connected with targets.

Gary Mclean avatar
Gary Mclean

Does your module/resource jsonencode it?

cloudposse/terraform-aws-cloudwatch-events

Terraform Module for provisioning CloudWatch Events rules connected with targets.

Meriç Özkayagan avatar
Meriç Özkayagan

I’ve solved it. The module actually has an error when jsonencoding it, so I removed the JSON encoding in the module rather than using an HCL map. The following worked for me: change event_pattern = jsonencode(var.cloudwatch_event_rule_pattern) to event_pattern = var.cloudwatch_event_rule_pattern

1
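
For context, a minimal sketch of the double-encoding failure mode (hypothetical values, not from the module): if the pattern reaches the module as an already-encoded JSON string, a second jsonencode produces a quoted, escaped string, which EventBridge rejects with "Filter is not an object".

locals {
  pattern = { source = ["aws.ecs"] }
}

output "encoded_once" {
  value = jsonencode(local.pattern) # {"source":["aws.ecs"]} - a JSON object, a valid event pattern
}

output "encoded_twice" {
  value = jsonencode(jsonencode(local.pattern)) # "{\"source\":[\"aws.ecs\"]}" - a JSON string, rejected
}
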
Quentin BERTRAND avatar
Quentin BERTRAND

:wave: I’ve just opened an issue concerning this bug https://github.com/gruntwork-io/terragrunt/issues/2782

#2782 `jsonencode` bad return

Describe the bug
Using jsonencode with variable value in inputs block generate bad json

To Reproduce
Example using this module : https://github.com/cloudposse/terraform-aws-cloudwatch-events

In native terraform

provider "aws" {
  region = "eu-west-3"
}

module "cloudwatch_event" {
  source  = "cloudposse/cloudwatch-events/aws"
  version = "0.6.1"

  name      = "test"

  cloudwatch_event_rule_description = "This is event rule description."

  cloudwatch_event_rule_pattern = {
    "source"      = ["aws.autoscaling"]
    "detail-type" = ["EC2 Instance Launch Successful", "EC2 Instance Terminate Successful"]

    "detail" = {
      "AutoScalingGroupName" = ["test"]
    }
  }
  cloudwatch_event_target_arn = "arn:aws:lambda:eu-west-3:000000000000:function:fake"

  context = module.this.context
}

I get this for the cloudwatch rule event_pattern when I run terraform plan, and it works with apply.

 # module.cloudwatch_event.aws_cloudwatch_event_rule.this will be created
  + resource "aws_cloudwatch_event_rule" "this" {
      + arn            = (known after apply)
      + description    = "This is event rule description."
      + event_bus_name = "default"
      + event_pattern  = jsonencode(
            {
              + detail      = {
                  + AutoScalingGroupName = [
                      + "test",
                    ]
                }
              + detail-type = [
                  + "EC2 Instance Launch Successful",
                  + "EC2 Instance Terminate Successful",
                ]
              + source      = [
                  + "aws.autoscaling",
                ]
            }
        )
      + id             = (known after apply)
      + is_enabled     = true
      + name           = "test"
      + name_prefix    = (known after apply)
      + tags_all       = (known after apply)
    }

With Terragrunt

terraform {
  source = "github.com/cloudposse/terraform-aws-cloudwatch-events//.?ref=0.6.1"
}


### Inputs
inputs = {
  name = "test"

  cloudwatch_event_target_arn = "arn:aws:lambda:eu-west-3:000000000000:function:fake"

  cloudwatch_event_rule_pattern = {
    "source"      = ["aws.autoscaling"]
    "detail-type" = ["EC2 Instance Launch Successful", "EC2 Instance Terminate Successful"]

    "detail" = {
      "AutoScalingGroupName" = ["test"]
    }
  }
}

I get this for the cloudwatch rule event_pattern when I run terragrunt plan, and it fails with apply.

   # aws_cloudwatch_event_rule.this will be created
  + resource "aws_cloudwatch_event_rule" "this" {
      + arn            = (known after apply)
      + description    = "test"
      + event_bus_name = "default"
      + event_pattern  = "\"{\\\"detail\\\":{\\\"AutoScalingGroupName\\\":[\\\"test\\\"]},\\\"detail-type\\\":[\\\"EC2 Instance Launch Successful\\\",\\\"EC2 Instance Terminate Successful\\\"],\\\"source\\\":[\\\"aws.autoscaling\\\"]}\""
      + id             = (known after apply)
      + is_enabled     = true
      + name           = "test"
      + name_prefix    = (known after apply)
      + tags_all       = (known after apply)
    }

Expected behavior
The value of event_pattern is badly formatted using Terragrunt

Nice to have

☑︎ Terminal output ☐ Screenshots

Versions

• Terragrunt version: v0.53.2
• Terraform version: v1.6.3
• Environment details (Ubuntu 20.04, Windows 10, etc.): MacOS 14.1

Additional context

• provider registry.terraform.io/hashicorp/aws v5.24.0
• provider registry.terraform.io/hashicorp/local v2.4.0

1
kirupakaran1799 avatar
kirupakaran1799

Hi all, I was trying to create an EKS cluster using Terraform with an eksctl cluster config file; however, the terraform eksctl provider doesn’t have proper documentation or updates. Is there any way to achieve this? Please suggest

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

eksctl is a separate tool and not compatible with Terraform. They are 2 different ways of managing a cluster.

Ben G avatar

Our team recently released an open source tool, cloud-concierge, a container that implements drift-detection, codification, cost estimation and security scanning, allowing you to add these features to an existing Terraform management stack. All results are output directly as a Pull Request. Still very early stages, so please share any and all feedback! https://github.com/dragondrop-cloud/cloud-concierge, video demo of managed instance attached below

2
2

2023-08-18

2023-08-19

2023-08-20

2023-08-21

Matt Gowie avatar
Matt Gowie

Cross-posting here as a suggestion from a commenter: https://sweetops.slack.com/archives/CB2PXUHLL/p1691684768008609

The new Terraform check block was a bit confusing and the existing posts that were out there on the topic weren’t the best, so we wrote our own: https://masterpoint.io/updates/understanding-terraform-check/

2
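
A minimal sketch of a check block like the post discusses (hypothetical endpoint; the scoped data source assumes the hashicorp/http provider and Terraform 1.5+):

check "health" {
  data "http" "app" {
    url = "https://example.com/health"
  }

  assert {
    condition     = data.http.app.status_code == 200
    error_message = "Health endpoint did not return HTTP 200"
  }
}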

2023-08-22

Andrew Miskell avatar
Andrew Miskell

Hey all, looking for a little assistance/advice (e.g. if I’m doing it wrong) with Terraform. I want to have a map of configuration for each tenant that I can reference and pass into something like the AWS ec2_instance module to create EC2 instances. My configuration block looks something like below as an example.

locals {
  tenant_config = {
    tenant1 = {
      ec2_config = {
        vm1 = {
          instance_type    = "m6i.xlarge"
          root_volume_size = "32"
          data_volume_size = "200"
        },
        vm2 = {
          instance_type    = "m6i.2xlarge"
          root_volume_size = "32"
          data_volume_size = "500"
        },
        vm3 = {
          instance_type    = "m6i.4xlarge"
          root_volume_size = "32"
          data_volume_size = "250"
        }
      },
      elastic_ips = {
        vm1 = ["1.1.1.1"],
        vm2 = ["1.1.1.2"],
        vm3 = ["1.1.1.3"]
      }
    },
    tenant2 = {
      ec2_config = {
        vm1 = {
          instance_type    = "m6i.xlarge"
          root_volume_size = "32"
          data_volume_size = "200"
        },
        vm2 = {
          instance_type    = "m6i.2xlarge"
          root_volume_size = "32"
          data_volume_size = "500"
        }
      },
      elastic_ips = {
        vm1 = ["1.1.1.4"],
        vm2 = ["1.1.1.5"]
      }
    }
  }
}

and here is the module I’m using (the variables are all currently broken because I can’t figure out a good way to reference the items in the map):

module "ec2_instance" {
  source = "terraform-aws-modules/ec2-instance/aws"

  for_each = var.ec2_config

  name = "${var.tenant}-${each.key}"

  ami                = try(each.value.ami, "ami-xxxxxx")
  instance_type      = try(each.value.instance_type, "m6i.large")
  key_name           = try(each.value.key_name, "ssh_key")
  monitoring         = true
  enable_volume_tags = false

  root_block_device = [
    {
      delete_on_termination = false
      encrypted             = true
      volume_size           = try(each.value.root_volume_size, null)
      volume_type           = try(each.value.root_volume_type, "gp3")
      iops                  = try(each.value.root_volume_iops, null)
      throughput            = try(each.value.root_volume_throughput, null)

      tags = merge(var.default_tags, {
        Name   = "${var.tenant}-${each.key} - root"
        Tenant = "${var.tenant}"
      })
    }
  ]

  network_interface = [
    {
      device_index          = 0
      network_interface_id  = aws_network_interface.private_interface[each.key].id
      delete_on_termination = false
    }
  ]
}

resource "aws_ebs_volume" "data" {
  for_each          = var.ec2_config
  availability_zone = element(random_shuffle.availability_zone[each.key].result, 0)
  encrypted         = true
  size              = try(each.value.data_volume_size, 100)
  type              = try(each.value.data_volume_type, "gp3")
  iops              = try(each.value.data_volume_iops, null)
  throughput        = try(each.value.data_volume_throughput, null)
  final_snapshot    = true

  tags = {
    Name   = "${var.tenant}-${each.key} - data"
    Tenant = "${var.tenant}"
  }
}

resource "aws_volume_attachment" "data" {
  for_each    = var.ec2_config
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.data[each.key].id
  instance_id = module.ec2_instance[each.key].id
}

There’s probably some for-loop magic to do what I want, but I still have a really hard time wrapping my head around using for expressions in Terraform.

Any suggestions? Am I going about this all wrong?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Max Lobur (Cloud Posse) @Zinovii Dmytriv (Cloud Posse)

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)
  1. try switching from maps to lists

       ec2_config = [
         {
           name             = "vm1"
           instance_type    = "m6i.xlarge"
           root_volume_size = "32"
           data_volume_size = "200"
         },
         {
           name             = "vm2"
           instance_type    = "m6i.2xlarge"
           root_volume_size = "32"
           data_volume_size = "500"
         },
       ]
Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)
  2. try creating a module that represents 1 object (vm, volume) -> then a module that represents 1 group of related objects (vm+ip+volume = tenant) -> iterate that for multiple tenants
Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)
  3. try grouping related vars; if an elastic IP is used by a VM, it may be easier to just put it inside the vm object map
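
A minimal sketch of the for-loop magic from the original question: flattening the nested tenant/vm map into a single map keyed by "tenant/vm", so one for_each can create every instance (names hypothetical):

locals {
  # "tenant1/vm1" => { instance_type = "...", tenant = "tenant1", vm = "vm1", ... }
  tenant_vms = merge([
    for tenant, cfg in local.tenant_config : {
      for vm, vm_cfg in cfg.ec2_config :
      "${tenant}/${vm}" => merge(vm_cfg, { tenant = tenant, vm = vm })
    }
  ]...)
}

module "ec2_instance" {
  source   = "terraform-aws-modules/ec2-instance/aws"
  for_each = local.tenant_vms

  name          = "${each.value.tenant}-${each.value.vm}"
  instance_type = try(each.value.instance_type, "m6i.large")
  # ... remaining inputs as in the original module block
}
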
Andrew Miskell avatar
Andrew Miskell

After much banging of my head on my desk, that’s kinda where I arrived.

locals {
  tenant_config = {
    tenant1 = {
      vm1 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      }
    },
    tenant2 = {
      vm1 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      }
    },
    tenant3 = {
      vm1 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
      },
      vm2 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      },
      vm3 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      },
      vm4 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      }
    },
    tenant4 = {
      vm1 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
      },
      vm2 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      },
      vm4 = {
        instance_type    = "m6i.xlarge"
        data_volume_size = "200"
        elastic_ips      = ["x.x.x.x"]
      }
    }
  }
}
Andrew Miskell avatar
Andrew Miskell

Then building a module which takes that information and builds everything required for that tenant.

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

looks good

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

vm does not need to be a key, since you’re just iterating it. Just make it a list of VMs where the vm name is just a field
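
A minimal sketch of that shape: VMs as a list of objects, keyed back into a map when a for_each needs stable keys (field names hypothetical):

variable "vms" {
  type = list(object({
    name             = string
    instance_type    = string
    data_volume_size = string
  }))
}

locals {
  # "vm1" => { name = "vm1", instance_type = "...", ... }
  vms_by_name = { for vm in var.vms : vm.name => vm }
}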

Andrew Miskell avatar
Andrew Miskell
module "app" {
  source = "./modules/app"

  for_each = local.tenant_config

  tenant_name = each.key
  instances   = each.value

  vpc_id             = module.vpc.vpc_id
  vpc_public_subnets = module.vpc.public_subnets
  availability_zones = module.vpc.azs

  default_tags = data.aws_default_tags.default.tags

  efs_id = module.messagestudio_efs.id

  private_sg_ids       = [module.app_ec2_sg.security_group_id, module.app_backend_sg.security_group_id]
  sending_sg_ids       = [module.app_outbound_smtp_sg.security_group_id]
  load_balancer_sg_ids = [module.app_frontend_sg.security_group_id]

}
Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)
[
  {
    name             = "vm1"
    instance_type    = "m6i.xlarge"
    root_volume_size = "32"
    data_volume_size = "200"
  },
]
Andrew Miskell avatar
Andrew Miskell

Well, I’m using that to build additional locals within the module.

Andrew Miskell avatar
Andrew Miskell
locals {
  # One entry per instance, carrying its attributes and the total instance count
  instance_config = flatten([
    for instance, attribute in var.instances : {
      instance    = instance
      attribute   = attribute
      instance_no = length(var.instances)
    }
  ])
  # One entry per (instance, sending IP): derive the private IP by grafting the
  # external IP's last octet onto the first three octets of the sending subnet's CIDR
  sending_ip_map = flatten([
    for instance, attribute in var.instances : [
      for ip in attribute.sending_ips : {
        instance    = instance
        external_ip = ip
        private_ip  = format("%s.%s", join(".", slice(split(".", data.aws_subnet.sending_subnet[instance].cidr_block), 0, 3)), split(".", ip)[3])
      }
    ] if can(attribute.sending_ips)
  ])
  # Same derivation, grouped as instance => list of private IPs
  private_ip_list = {
    for instance, attribute in var.instances : instance => [
      for ip in attribute.sending_ips : format("%s.%s", join(".", slice(split(".", data.aws_subnet.sending_subnet[instance].cidr_block), 0, 3)), split(".", ip)[3])
    ] if can(attribute.sending_ips)
  }
}
Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

ah I see

Andrew Miskell avatar
Andrew Miskell

I’ll eventually rework tenant config slightly to go from a local to an actual variable (because these could be different in a dev vs prod environment) so I can leverage workspaces to select the right set of values.

carlos.clemente avatar
carlos.clemente

random question: does anyone by any chance have a list of the times HashiCorp has changed their pricing model for TFC/TFE? I think that could help me explain why I really don’t trust them, because they keep changing the game, and the majority of the time it results in an expensive bill for their customers u_u

Alex Jurkiewicz avatar
Alex Jurkiewicz

The pricing model has been pretty stable. It’s changed twice in a few years iirc

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

I’m not exactly sure of the timing, but TF Enterprise is licensed by number of workspaces and has been for a while. I think originally there was just a platform price.

TF Cloud has changed a little bit. It started off with workspaces, then moved to successful applies. And just recently changed to Resources Under Management (RUM).

There’s a lot of work in the background with sales teams and licensing to ensure that migrations to new licensing schemes don’t have shocking financial issues for customers. If you get a significant change in price (for the worse, nobody complains about paying less), definitely work with your sales team on this.

And while not official HC messaging, one thing I’ve noticed about these updates is that they’re not arbitrary and certainly not quick. RUM has been something I’ve heard discussed for years, and it was originally shot down.

1
1
Eamon Keane avatar
Eamon Keane

interview with Terragrunt, Massdriver and Terrateam. They sound confident that collectively they have enough FTEs and VC dollars to support the fork which will be published next week.

we estimated that Hashicorp has a small fraction of the people working on Terraform as compared to what we can marshall as a consortium.
...
we had members of the terraform core contributor team from times past express their support

Also sounds like morale at Hashicorp is pretty low so perhaps some current maintainers could be enticed.

https://www.youtube.com/watch?v=QaU94LY891M&t=134s

2
Eamon Keane avatar
Eamon Keane

HashiCorp also seems to have clarified the definition of embedded to include all TACOs; initially some, like Digger, thought they’d be exempt.

No VC would give any money to someone beholden to regular price hikes by Hashicorp (e.g. if they went private (PE owns 30% of the company)), so it is existential for them.

https://www.hashicorp.com/blog/hashicorp-updates-licensing-faq-based-on-community-questions

Q: What does the term "embedded" mean under the HashiCorp BSL license?

A: Under the HashiCorp BSL license, the term "embedded" means including the source code or object code, including executable binaries, from a HashiCorp product in a competitive product. "Embedded" also means packaging the competitive product in such a way that the HashiCorp product must be accessed or downloaded for the competitive product to operate.
Eamon Keane avatar
Eamon Keane

interesting, GCP just dropped their own first-party TACO. HashiCorp are still speaking at Cloud Next, so I wonder whether HashiCorp has already got long-term sweetheart deals with the hyperscalers?

https://cloud.google.com/infrastructure-manager/docs/overview

Infrastructure Manager overview | Google Cloud

Learn about the features and benefits of Infrastructure Manager.

2023-08-23

Release notes from terraform avatar
Release notes from terraform
04:13:34 PM

v1.5.6 1.5.6 (Unreleased) BUG FIXES: terraform_remote_state: Fixed a potential unsafe read panic when reading from multiple terraform_remote_state data sources (#33333)

Terraform fatal error: concurrent map read and map write · Issue #33333 · hashicorp/terraform

Terraform Version Terraform v1.4.6 on darwin_arm64 Debug Output data.terraform_remote_state.dns: Reading… fatal error: concurrent map read and map write goroutine 3724 [running]: github.com/hashi

Release notes from terraform avatar
Release notes from terraform
04:43:33 PM

v1.5.6 1.5.6 (August 23, 2023) BUG FIXES: terraform_remote_state: Fixed a potential unsafe read panic when reading from multiple terraform_remote_state data sources (#33333)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
opentffoundation/brand-artifacts

OpenTF brand artifacts

5
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

added :opentf:

opentffoundation/brand-artifacts

OpenTF brand artifacts

2023-08-24

Hao Wang avatar
Hao Wang

I am seeing a similar trend as when MySQL was acquired by Oracle/Sun; now it is another time to migrate from Terraform to the next software, and System Initiative may be the one

Hao Wang avatar
Hao Wang

Their github repo needs more love

Hao Wang avatar
Hao Wang
systeminit/si

The System Initiative software

Alex Jurkiewicz avatar
Alex Jurkiewicz

i’m surprised how little attention pulumi seems to have gotten in the past week

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya haven’t seen more than normal in my news feed

Eamon Keane avatar
Eamon Keane

Gruntwork has a great overview of the IaC tools, which you may have seen. Obviously biased, but his point that there's no responsible way to use Pulumi open source in production is worth considering.

You can switch to other supported backends for state storage, such as Amazon S3, Azure Blob Storage, or Google Cloud Storage, but the Pulumi backend documentation explains that only Pulumi Service supports transactional checkpointing (for fault tolerance and recovery), concurrent state locking (to prevent corrupting your infrastructure state in a team environment), and encrypted state in transit and at rest. In my opinion, without these features, it's not practical to use Pulumi in any sort of production environment (i.e., with more than one developer), so if you're going to use Pulumi, you more or less have to pay for Pulumi Service.

https://blog.gruntwork.io/why-we-use-terraform-and-not-chef-puppet-ansible-saltstack-or-cloudformation-7989dad2865c

Alex Jurkiewicz avatar
Alex Jurkiewicz

It looks like S3 supports state locking, but this isn't documented: https://github.com/pulumi/pulumi-hugo/issues/1448 I guess there is almost zero appeal in moving to another tool if the cynical take in that issue is right. (It's not documented so that people use Pulumi Cloud more.)

#1448 Missing documentation on locking for S3 (and other cloud buckets?)

Problem description

Locking is supported now, according to pulumi/pulumi#2697, but I do not see any information on how to take advantage of this. https://www.pulumi.com/docs/intro/concepts/state/#logging-into-the-aws-s3-backend has no mention of this at all.

kunalsingthakur avatar
kunalsingthakur

then what about opentf

kunalsingthakur avatar
kunalsingthakur

?

kunalsingthakur avatar
kunalsingthakur

if systeminit

kunalsingthakur avatar
kunalsingthakur

is it an alternative to terraform?

kunalsingthakur avatar
kunalsingthakur

is it open source?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You can answer these questions by looking at the repository.

I would say that SI looks a little immature, and also not a direct replacement for Terraform in the same way Pulumi is

2023-08-25

Hao Wang avatar
Hao Wang

Maybe SI’s ambition is not to replace Terraform, https://github.com/systeminit/si/issues/2694#issuecomment-1692290443

Comment on #2694 A migration document needed for migration from Terraform

This is obviously a good idea, but a little early for us yet. We're not quite ready to actually migrate anyone's workload, unless they're willing to dig in and create some of the assets they need themselves (not to mention the software isn't quite ready to run in production.)

The strategy eventually is that you'll be able to discover the resources you've built with Terraform, and then start managing those resources inside SI directly if you so choose (or keep managing them in terraform, and updating SI's model as things evolve.)

I’m closing this issue in order to keep things tidy - but this is obviously a good idea eventually.

Hao Wang avatar
Hao Wang

thought a small example would help to understand more about what SI can do


Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

I don’t read it that way. The fact they’re anticipating taking current infrastructure and importing it into SI is pretty telling.

Hao Wang avatar
Hao Wang

others may see it quite differently; this would be a scary part for many project owners tbh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

OpenTF fork & roadmap announced today https://github.com/orgs/opentffoundation/projects/3

sheldonh avatar
sheldonh

Is there any Go pkg that wraps the Terraform CLI better than just invoking it directly, or, even better, uses the HashiCorp source directly?

I think terratest has methods, but I wanted to know if there's something else out there that is well recognized for Go-based control, eliminating a wrapper around the CLI.

Going to write/refactor some tf automation today and figured maybe y'all here had a good recommendation. Well supported/used and org-maintained.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc @Matt Calhoun

Matt Calhoun avatar
Matt Calhoun

I don't know of any packages, per se. I'm a former maintainer of terratest and terragrunt and now work on atmos; if you look at how some of those tools do it, they are all wrapping exec.Command from the os/exec package and capturing stdin, stdout, and stderr. The easiest example to follow is definitely in the terratest source code.

package shell

import (
	"bufio"
	"errors"
	"fmt"
	"io"
	"os"
	"os/exec"
	"strings"
	"sync"
	"syscall"

	"github.com/gruntwork-io/terratest/modules/logger"
	"github.com/gruntwork-io/terratest/modules/testing"
	"github.com/stretchr/testify/require"
)

// Command is a simpler struct for defining commands than Go's built-in Cmd.
type Command struct {
	Command    string            // The command to run
	Args       []string          // The args to pass to the command
	WorkingDir string            // The working directory
	Env        map[string]string // Additional environment variables to set
	// Use the specified logger for the command's output. Use logger.Discard to not print the output while executing the command.
	Logger *logger.Logger
}

// RunCommand runs a shell command and redirects its stdout and stderr to the stdout of the atomic script itself. If
// there are any errors, fail the test.
func RunCommand(t testing.TestingT, command Command) {
	err := RunCommandE(t, command)
	require.NoError(t, err)
}

// RunCommandE runs a shell command and redirects its stdout and stderr to the stdout of the atomic script itself. Any
// returned error will be of type ErrWithCmdOutput, containing the output streams and the underlying error.
func RunCommandE(t testing.TestingT, command Command) error {
	output, err := runCommand(t, command)
	if err != nil {
		return &ErrWithCmdOutput{err, output}
	}
	return nil
}

// RunCommandAndGetOutput runs a shell command and returns its stdout and stderr as a string. The stdout and stderr of
// that command will also be logged with Command.Log to make debugging easier. If there are any errors, fail the test.
func RunCommandAndGetOutput(t testing.TestingT, command Command) string {
	out, err := RunCommandAndGetOutputE(t, command)
	require.NoError(t, err)
	return out
}

// RunCommandAndGetOutputE runs a shell command and returns its stdout and stderr as a string. The stdout and stderr of
// that command will also be logged with Command.Log to make debugging easier. Any returned error will be of type
// ErrWithCmdOutput, containing the output streams and the underlying error.
func RunCommandAndGetOutputE(t testing.TestingT, command Command) (string, error) {
	output, err := runCommand(t, command)
	if err != nil {
		return output.Combined(), &ErrWithCmdOutput{err, output}
	}

	return output.Combined(), nil
}

// RunCommandAndGetStdOut runs a shell command and returns solely its stdout (but not stderr) as a string. The stdout and
// stderr of that command will also be logged with Command.Log to make debugging easier. If there are any errors, fail
// the test.
func RunCommandAndGetStdOut(t testing.TestingT, command Command) string {
	output, err := RunCommandAndGetStdOutE(t, command)
	require.NoError(t, err)
	return output
}

// RunCommandAndGetStdOutE runs a shell command and returns solely its stdout (but not stderr) as a string. The stdout
// and stderr of that command will also be printed to the stdout and stderr of this Go program to make debugging easier.
// Any returned error will be of type ErrWithCmdOutput, containing the output streams and the underlying error.
func RunCommandAndGetStdOutE(t testing.TestingT, command Command) (string, error) {
	output, err := runCommand(t, command)
	if err != nil {
		return output.Stdout(), &ErrWithCmdOutput{err, output}
	}

	return output.Stdout(), nil
}

type ErrWithCmdOutput struct {
	Underlying error
	Output     *output
}

func (e *ErrWithCmdOutput) Error() string {
	return fmt.Sprintf("error while running command: %v; %s", e.Underlying, e.Output.Stderr())
}

// runCommand runs a shell command and stores each line from stdout and stderr in Output. Depending on the logger, the
// stdout and stderr of that command will also be printed to the stdout and stderr of this Go program to make debugging
// easier.
func runCommand(t testing.TestingT, command Command) (*output, error) {
	command.Logger.Logf(t, "Running command %s with args %s", command.Command, command.Args)

	cmd := exec.Command(command.Command, command.Args...)
	cmd.Dir = command.WorkingDir
	cmd.Stdin = os.Stdin
	cmd.Env = formatEnvVars(command)

	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}

	stderr, err := cmd.StderrPipe()
	if err != nil {
		return nil, err
	}

	err = cmd.Start()
	if err != nil {
		return nil, err
	}

	output, err := readStdoutAndStderr(t, command.Logger, stdout, stderr)
	if err != nil {
		return output, err
	}

	return output, cmd.Wait()
}

// This function captures stdout and stderr into the given variables while still printing it to the stdout and stderr
// of this Go program
func readStdoutAndStderr(t testing.TestingT, log *logger.Logger, stdout, stderr io.ReadCloser) (*output, error) {
	out := newOutput()
	stdoutReader := bufio.NewReader(stdout)
	stderrReader := bufio.NewReader(stderr)

	wg := &sync.WaitGroup{}

	wg.Add(2)
	var stdoutErr, stderrErr error
	go func() {
		defer wg.Done()
		stdoutErr = readData(t, log, stdoutReader, out.stdout)
	}()
	go func() {
		defer wg.Done()
		stderrErr = readData(t, log, stderrReader, out.stderr)
	}()
	wg.Wait()

	if stdoutErr != nil {
		return out, stdoutErr
	}
	if stderrErr != nil {
		return out, stderrErr
	}

	return out, nil
}

func readData(t testing.TestingT, log *logger.Logger, reader *bufio.Reader, writer io.StringWriter) error {
	var line string
	var readErr error
	for {
		line, readErr = reader.ReadString('\n')

		// remove newline, our output is in a slice,
		// one element per line.
		line = strings.TrimSuffix(line, "\n")

		// only return early if the line does not have
		// any contents. We could have a line that does
		// not have a newline before io.EOF; we still
		// need to add it to the output.
		if len(line) == 0 && readErr == io.EOF {
			break
		}

		// logger.Logger has a Logf method, but not a Log method.
		// We have to use the format string indirection to avoid
		// interpreting any possible formatting characters in
		// the line.
		//
		// See <https://github.com/gruntwork-io/terratest/issues/982>.
		log.Logf(t, "%s", line)

		if _, err := writer.WriteString(line); err != nil {
			return err
		}

		if readErr != nil {
			break
		}
	}
	if readErr != io.EOF {
		return readErr
	}
	return nil
}

// GetExitCodeForRunCommandError tries to read the exit code for the error object returned from running a shell command. This is a bit tricky to do
// in a way that works across platforms.
func GetExitCodeForRunCommandError(err error) (int, error) {
	if errWithOutput, ok := err.(*ErrWithCmdOutput); ok {
		err = errWithOutput.Underlying
	}

	// <http://stackoverflow.com/a/10385867/483528>
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The program has exited with an exit code != 0

		// This works on both Unix and Windows. Although package
		// syscall is generally platform dependent, WaitStatus is
		// defined for both Unix and Windows and in both cases has
		// an ExitStatus() method with the same signature.
		if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
			return status.ExitStatus(), nil
		}
		return 1, errors.New("could not determine exit code")
	}

	return 0, nil
}

func formatEnvVars(command Command) []string {
	env := os.Environ()
	for key, value := range command.Env {
		env = append(env, fmt.Sprintf("%s=%s", key, value))
	}
	return env
}

sheldonh avatar
sheldonh

Apparently this is the library HashiCorp provides now, pre-v1.0.0


A Go module for constructing and running Terraform CLI commands. Structured return values use the data types defined in terraform-json.
License type: hashicorp/terraform-exec is licensed under the Mozilla Public License 2.0

hashicorp/terraform-exec

Terraform CLI commands via Go.

jose.amengual avatar
jose.amengual

I think OpenTF should pool some money and poach apparentlymart from HC

jose.amengual avatar
jose.amengual

imagine if he leaves HC and creates a fork!!!!!, lets not even go there….bad idea bad idea…

Eamon Keane avatar
Eamon Keane

Probably they can afford one or two superstars and then a few journeymen.

https://news.ycombinator.com/item?id=37263022

Marcin here, co-founder of Spacelift, one of the members of the OpenTF initiative

We provided a dedicated team on a temporary basis. Once the project is in the foundation, we will make a financial contribution, with which dedicated developers will be funded. At that point our devs will gradually hand over to the new, fully independent team. The other members of the initiative so far follow the same pattern but I can't speak on their behalf re: exact commitments
MrAtheist avatar
MrAtheist

With regard to templatefile, is there a way to do some sort of bash magic around array expansion…?

# within the template, and im only doing this because it doesnt recognize local variables, and i need to pass in the var from template_file... /facepalm
# anyways, as you can see in my snippet i need to expand the array but somehow tf doesnt like @...

jq --null-input \
   --arg region "${AWS_REGION}" \
   --argjson collect_list_json "$(echo ${COLLECT_LIST[@]} | jq -Rs....
...

# error
Call to function "templatefile" failed: ...../user-data.sh:58,64-65: Invalid character; This character is not used within the language., and 1 other diagnostic(s)

it worked with echo $COLLECT_LIST[@], but that's not the result I wanted

Hao Wang avatar
Hao Wang

can use an environment var like TF_VAR_collect_list

Hao Wang avatar
Hao Wang

and pass it into the template file,

Hao Wang avatar
Hao Wang

may also need join or split functions
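One way that could look (a sketch; the variable and file names here are made up): pass the list to templatefile as a template variable, and escape any literal bash ${...} as $${...} so Terraform doesn't try to interpolate it:

locals {
  user_data = templatefile("${path.module}/user-data.sh.tpl", {
    aws_region   = var.region       # assumed variables
    collect_list = var.collect_list # list(string)
  })
}

Then, inside user-data.sh.tpl:

#!/bin/bash
# ${...} is Terraform interpolation inside a template file;
# a literal bash expansion must be written as $${...}
COLLECT_LIST='${jsonencode(collect_list)}'

jq --null-input \
   --arg region "${aws_region}" \
   --argjson collect_list_json "$${COLLECT_LIST}" \
   '{region: $region, collect_list: $collect_list_json}'

jsonencode turns the Terraform list into a JSON array at render time, which sidesteps the ${COLLECT_LIST[@]} expansion entirely and hands jq valid JSON.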

MrAtheist avatar
MrAtheist
Environment Variables | Terraform | HashiCorp Developerattachment image

Learn to use environment variables to change Terraform’s default behavior. Configure log content and output, set variables, and more.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Kelsey Hightower on X

I believe OpenTF, a fork of HashiCorp’s Terraform project, will end up growing Terraform adoption in the long run.

Take HTTP for example, which has many implementations, the adoption is higher than ever. TF has just become the HTTP of configuration management.

Hao Wang avatar
Hao Wang

I was inspired by Kelsey and he is right


Michael Lee avatar
Michael Lee

Hi all, I have a question regarding terraform elastic beanstalk: https://registry.terraform.io/modules/cloudposse/elastic-beanstalk-environment/aws/latest

When running the complete example from https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/main/examples/complete w/ terraform apply, I get the following error:

module.elastic_beanstalk_environment.data.aws_lb_listener.http[0]: Still reading... [30s elapsed]
module.elastic_beanstalk_environment.module.dns_hostname.aws_route53_record.default[0]: Still creating... [30s elapsed]
module.elastic_beanstalk_environment.data.aws_lb_listener.http[0]: Still reading... [40s elapsed]
module.elastic_beanstalk_environment.module.dns_hostname.aws_route53_record.default[0]: Still creating... [40s elapsed]
module.elastic_beanstalk_environment.module.dns_hostname.aws_route53_record.default[0]: Creation complete after 40s [id=Z0GNBFM_api_CNAME]
╷
│ Error: Search returned 0 results, please revise so only one is returned
│ 
│   with module.elastic_beanstalk_environment.data.aws_lb_listener.http[0],
│   on .terraform/modules/elastic_beanstalk_environment/main.tf line 1125, in data "aws_lb_listener" "http":
│ 1125: data "aws_lb_listener" "http" {
│ 
╵

Any help will be appreciated, thanks in advance.

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

Hi @Michael Lee Did the new thread suggestions solve the issue? https://sweetops.slack.com/archives/CB6GHNLG0/p1693230062027589

Hi all, I’m provisioning an elastic beanstalk environment along the eb application. I was able to provision the eb using the complete example. Now I want to provision RDS and Elasticache (single node redis). Does anyone have example for it? Thanks in advance.

Michael Lee avatar
Michael Lee

Hi, thanks for the reply. Unfortunately no, the issue is still present.

Michael Lee avatar
Michael Lee

Hi, has anyone faced the same issue? Below is my script for EB provisioning:

module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "2.1.0"

  ipv4_primary_cidr_block = "172.16.0.0/16"

  context = module.this.context
}

module "subnets" {
  source  = "cloudposse/dynamic-subnets/aws"
  version = "2.4.1"

  availability_zones   = var.availability_zones
  vpc_id               = module.vpc.vpc_id
  igw_id               = [module.vpc.igw_id]
  ipv4_enabled         = true
  ipv4_cidr_block      = [module.vpc.vpc_cidr_block]
  nat_gateway_enabled  = true
  nat_instance_enabled = false

  context = module.this.context
}

module "elastic_beanstalk_application" {
  source  = "cloudposse/elastic-beanstalk-application/aws"
  version = "0.11.1"

  description = "EB app for ${var.app_name}"

  context = module.this.context
}

module "elastic_beanstalk_environment" {
  source  = "cloudposse/elastic-beanstalk-environment/aws"
  version = "0.51.1"

  description                = "EB env for ${var.app_name}"
  region                     = var.region
  availability_zone_selector = var.availability_zone_selector
  dns_zone_id                = var.dns_zone_id

  wait_for_ready_timeout             = var.wait_for_ready_timeout
  elastic_beanstalk_application_name = module.elastic_beanstalk_application.elastic_beanstalk_application_name
  environment_type                   = var.environment_type
  loadbalancer_type                  = var.loadbalancer_type
  tier                               = var.tier
  version_label                      = var.version_label
  force_destroy                      = var.force_destroy

  instance_type    = var.instance_type
  root_volume_size = var.root_volume_size
  root_volume_type = var.root_volume_type

  autoscale_min             = var.autoscale_min
  autoscale_max             = var.autoscale_max
  autoscale_measure_name    = var.autoscale_measure_name
  autoscale_statistic       = var.autoscale_statistic
  autoscale_unit            = var.autoscale_unit
  autoscale_lower_bound     = var.autoscale_lower_bound
  autoscale_lower_increment = var.autoscale_lower_increment
  autoscale_upper_bound     = var.autoscale_upper_bound
  autoscale_upper_increment = var.autoscale_upper_increment

  vpc_id                              = module.vpc.vpc_id
  application_subnets                 = module.subnets.private_subnet_ids
  loadbalancer_subnets                = module.subnets.public_subnet_ids
  loadbalancer_redirect_http_to_https = true
  loadbalancer_certificate_arn        = var.loadbalancer_certificate_arn
  loadbalancer_ssl_policy             = var.loadbalancer_ssl_policy

  allow_all_egress = true

  additional_security_group_rules = [
    {
      type                     = "ingress"
      from_port                = 0
      to_port                  = 65535
      protocol                 = "-1"
      source_security_group_id = module.vpc.vpc_default_security_group_id
      description              = "Allow all inbound traffic from trusted Security Groups"
    },
    {
      type                     = "ingress"
      from_port                = 3306
      to_port                  = 3306
      protocol                 = "-1"
      source_security_group_id = "sg-07ddd9db717161661"
      description              = "Allow MYSQL inbound traffic from trusted Security Groups"
    }
  ]

  rolling_update_enabled  = var.rolling_update_enabled
  rolling_update_type     = var.rolling_update_type
  updating_min_in_service = var.updating_min_in_service
  updating_max_batch      = var.updating_max_batch

  healthcheck_url  = var.healthcheck_url
  application_port = var.application_port

  solution_stack_name = var.solution_stack_name

  additional_settings = var.additional_settings
  env_vars            = var.env_vars

  extended_ec2_policy_document = data.aws_iam_policy_document.minimal_s3_permissions.json
  prefer_legacy_ssm_policy     = false
  prefer_legacy_service_policy = false
  scheduled_actions            = var.scheduled_actions

  s3_bucket_versioning_enabled = var.s3_bucket_versioning_enabled
  enable_loadbalancer_logs     = var.enable_loadbalancer_logs

  context = module.this.context
}

2023-08-27

Mahesh avatar

When I am using:

module "lambda" {
  source  = "cloudposse/lambda-function/aws"
  version = "0.5.1"
}

in spite of placing context.tf, which this module has, when I run terraform plan it shows the below error:

│ Error: Reference to undeclared module
│
│   on main.tf line 5, in locals:
│    5: policy_name_inside = "${module.label.id}-inside"
│
│ No module call named "label" is declared in the root module.
╵
╷
│ Error: Reference to undeclared resource
│
│   on main.tf line 10, in locals:
│   10: join("", data.aws_caller_identity.current.*.account_id),
│
│ A data resource "aws_caller_identity" "current" has not been declared in the root module.

Any hints will be appreciated.

Segun Olaiya avatar
Segun Olaiya

Can you paste the full (masked) contents of your main.tf?

Hao Wang avatar
Hao Wang

need to add data "aws_caller_identity" "current" {}

Mahesh avatar

@Segun Olaiya

data "aws_partition" "current" {}

locals {
  enabled            = module.this.enabled
  policy_name_inside = "${module.label.id}-inside"

  policy_arn_prefix = format(
    "arn:%s:iam::%s:policy",
    join("", data.aws_partition.current.*.partition),
    join("", data.aws_caller_identity.current.*.account_id),
  )
  policy_arn_inside = format("%s/%s", local.policy_arn_prefix, local.policy_name_inside)

  policy_json = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "ec2:Describe*",
        ]
        Effect   = "Allow"
        Resource = "*"
      },
    ]
  })
}

module "lambda" {
  source  = "cloudposse/lambda-function/aws"
  version = "0.5.1"

  filename                           = var.filename
  function_name                      = var.function_name
  handler                            = var.handler
  runtime                            = var.runtime
  cloudwatch_lambda_insights_enabled = var.cloudwatch_lambda_insights_enabled
  cloudwatch_logs_retention_in_days  = var.cloudwatch_logs_retention_in_days
  iam_policy_description             = var.iam_policy_description

  custom_iam_policy_arns = [
    "arn:aws:iam::aws:policy/job-function/ViewOnlyAccess",
    local.policy_arn_inside,
  ]
}

Mahesh avatar

@Hao Wang: I have added the data partition; it still does not work

Mahesh avatar

@Hao Wang Thanks for the right pointer; after adding the caller identity it resolved. But now there's a new error

Mahesh avatar

│ Error: Reference to undeclared module
│
│   on main.tf line 6, in locals:
│    6: policy_name_inside = "${module.label.id}-inside"
│
│ No module call named "label" is declared in the root module.

Hao Wang avatar
Hao Wang

oh, guessing you may need to find the original reference which led you to this rabbit hole lol
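For reference, the error just means main.tf uses module.label.id without declaring a module named label. A minimal declaration using Cloud Posse's null label module would look like this (the version shown is an assumption):

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # pin to whatever version your other components use

  context = module.this.context
}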


2023-08-28

Michael Lee avatar
Michael Lee

Hi all, I’m provisioning an elastic beanstalk environment along the eb application. I was able to provision the eb using the complete example. Now I want to provision RDS and Elasticache (single node redis). Does anyone have example for it? Thanks in advance.

Alex Jurkiewicz avatar
Alex Jurkiewicz

There are cloudposse modules for both of these things; have you checked them for examples?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep! All our modules have an examples/ subfolder with a working example

Michael Lee avatar
Michael Lee

Thanks for the reply. I will have a look at the rds and elasticache examples.
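As an illustration of what those examples contain, a single-node Redis environment might start roughly like this (a sketch based on the cloudposse/elasticache-redis/aws module; check its examples/complete for the authoritative inputs):

module "redis" {
  source = "cloudposse/elasticache-redis/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"

  vpc_id                     = module.vpc.vpc_id
  subnets                    = module.subnets.private_subnet_ids
  cluster_size               = 1 # single node
  instance_type              = "cache.t3.micro"
  engine_version             = "6.2"
  family                     = "redis6.x"
  at_rest_encryption_enabled = true
  transit_encryption_enabled = true

  context = module.this.context
}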


2023-08-29

Aaron avatar

Hi, I was just looking at the terraform-aws-route53-dnssec module, but this seems to miss the part where you establish a chain of trust (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-configuring-dnssec-enable-signing.html), section 3… Is there any way to do this? I can't see a way to create an output of the public key that's generated when DNSSEC is enabled with the KSK. It seems that I can only create the chain of trust either through the CLI or through the console. Thanks, Aaron

Enabling DNSSEC signing and establishing a chain of trust - Amazon Route 53

We recommend following the steps in this article to have your zone signed and included in the chain of trust. The following steps will minimize the risk of onboarding onto DNSSEC.

ugns/terraform-aws-route53-dnssec

Terraform AWS Route53 DNSSEC module
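If the module itself doesn't surface it, the underlying aws_route53_key_signing_key resource does export the values needed for step 3's DS record in the parent zone, so one workaround is to manage the KSK directly and output them (a sketch; the resource names and KMS key are assumptions):

resource "aws_route53_key_signing_key" "this" {
  hosted_zone_id             = aws_route53_zone.this.zone_id
  key_management_service_arn = aws_kms_key.dnssec.arn # must be asymmetric ECC_NIST_P256, sign/verify
  name                       = "example-ksk"
}

resource "aws_route53_hosted_zone_dnssec" "this" {
  hosted_zone_id = aws_route53_key_signing_key.this.hosted_zone_id
}

# Values to register with the parent zone / registrar to establish the chain of trust
output "ds_record" {
  value = aws_route53_key_signing_key.this.ds_record
}

output "ksk_public_key" {
  value = aws_route53_key_signing_key.this.public_key
}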


Igor Zalutski avatar
Igor Zalutski

^ whoa, that's going viral. Igor here, building Digger and supporting OpenTF

super humbled to see that level of support - thank you guys so much for spreading the word!

Funny chart:


2023-08-30

z0rc3r avatar
Vlad Ionescu (he/him) on X

OpenTF is disconnected from reality.

They don’t understand Terraform, they don’t understand users, they don’t understand the ecosystem, and they don’t even understand who’s at the table. Or that there is a table!

Let me explain how dumb this whole thing is…

1/75 (i know)

Alex Jurkiewicz avatar
Alex Jurkiewicz

This guy needs a big hug


Alex Jurkiewicz avatar
Alex Jurkiewicz

So angry!

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Oh, I absolutely need all the hugs, kisses, and cuddles — that’s my love language!

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

But this wasn’t that. Nononono

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I find it offensive when OpenTF spreads misinformation, and I find it absolutely disgusting when a bunch of trusted tech leaders praised OpenTF as "a beacon of open-source success" without doing any due diligence.

marcinw avatar
marcinw

You’re entitled to your opinion.

Just a note that there are real people on the other side of the receiving end of your writing, folks you may stumble upon at meetups, conferences etc.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

And I would say the same thing to people’s faces. This is a major failure and a breach of trust! I certainly expected more from Spacelift

marcinw avatar
marcinw


I would say the same thing to people’s faces
Really looking forward to being called a moron and an imbecile in person

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I did not criticize individuals, I criticized actions! There is a difference.

marcinw avatar
marcinw

This is a fine line to tread.

Re: actions. Hopefully in a few weeks time you will be able to appreciate the motivations.

marcinw avatar
marcinw

In the meantime please remember that the people whose actions you comment on, folks whose words you misrepresent, they are members of the same community, and sooner or later you will need to look them in the eye.

marcinw avatar
marcinw

Over and out, have a nice day Sir!

Pawel Rein avatar
Pawel Rein

A panel discussion hosted by SweetOps could be a good idea perhaps?

marcinw avatar
marcinw

@Pawel Rein please let our actions over the next few weeks speak for themselves. There is a lot happening behind the scenes, and personally I am not willing to get into a battle of ad hominems.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I look forward to being proven wrong and I am genuinely hoping that will happen. HashiCorp does need a kick in the butt and OpenTF could be that.

At the same time, I will keep trying to criticize actions and not individuals. I don’t want to hurt people.

marcinw avatar
marcinw

Let’s have a deal @Vlad Ionescu (he/him). Let’s have a few pints in 3 months. If you’re right, drinks are on me. If not, they’re on you. How’s that?

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Deal @marcinw!

marcinw avatar
marcinw

In the meantime, you badly misrepresented the words of one of my colleagues here.

Spacelift currently provides a number of employees (way more than 5). But one thing we want to avoid is to have a bunch of alternative vendors be in charge of something that's a public good.

Which is why we believe that it’s in the interest of the community to have a dedicated team of folks who do not ultimately report to one of the vendors, and whose work is in the interest of the community.

You may or may not like this approach, but it is hurtful to a real human on the other side of it to be called a liar.

Vlad Ionescu (he/him) on Xattachment image

One of OpenTF’s biggest supporters, Spacelift, is already starting to pull back support

I said the OpenTF list of supporters is “sign here to save the starving orphans” which is very different from “every evening spend 3 hours cooking for the orphans”.

As per the attached…

DaniC (he/him) avatar
DaniC (he/him)


A panel discussion hosted by SweetOps could be a good idea perhaps?
is not a good idea imo, not at this point in time. Once the regular calls start to be put in place then this sort of conversation could take place.

Until then the tone should be kept within some common sense boundaries to give time for folks who had the courage to embark into this initiative to get the wheels in motion.

I’m sure folks who managed to secure funds and build / start a business have done their due diligence in terms of $ and i’m also sure they are aware of the effort required to fund an open-source project - see K8s and the cost of running Prow and how much Google funded CNCF.

In closing, I'd like to believe that the strong language used by Vlad (even criticizing the actions and not the individuals) is maybe due to his Latin roots, and I think more restraint should be used in public… a word/phrase written down can sometimes be twisted in many ways and have a negative impact.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

@marcinw I very much agree with the approach, I just don’t see how it would work. CNCF does not hire developers to work on projects. OpenTF creating a new foundation means a lot of money spent on lawyers, IP, admin, etc. Is there a third alternative I am missing?

marcinw avatar
marcinw

Yes. But doing big things for real takes time and effort. Having to deal in the meantime with online aggression and misinterpretations does not help.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Can you expand on what that alternative is?

I am happy to delete the posting or to add a reply with “I was wrong about this because X” (I’d rather delete it tho cause people don’t usually read replies and I don’t wanna spread lies) but I need to see a realistic plan first or even just an outline.

marcinw avatar
marcinw

Patience, please.

marcinw avatar
marcinw

Things will fall into place eventually, I promise.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

As an aside, this is the problem with everybody doing communication and PR. Incomplete information gets twisted and we all end up worse off

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I look forward to them falling into place and I hope you’ll succeed, but I can’t delete that post until Spacelift clarifies how funding OpenTF will work.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I am happy to jump on an off-the-record Zoom to chat about this too if you want.

marcinw avatar
marcinw


everybody doing communication and PR
For sure. That’s how spontaneous initiatives work though.
on an off-the-record Zoom
I am sure you will understand that based on your recent writing it would be highly risky for me to discuss anything off-the-record

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I get it.

Would you be on with me adding a reply to that Tweet saying something like “Spacelift allegedly has a concrete and realistic plan for how funding will work that’ll be released in a few weeks. I look forward to seeing that!” ?

Again, I don’t want to spread misinformation but I can’t just delete that based on “yes we have a plan”

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

(I am happy to hear alternative wordings of that message)

marcinw avatar
marcinw

No @Vlad Ionescu (he/him), this is your opinion, your writing, and ultimately your responsibility to whatever guides your moral code.

I respect your right to have opinions, theories, assumptions etc. I would just like you to not hurt real people.

And if you actually care about the cause, please give us time; don't add your writing to the list of challenges. But again, this is your choice, and in 3 months you can call me a clown all you want.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Ok, I won’t add any reply to that tweet then.

I am not planning any other writing, so no worries there!

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

I do have an ask though: can OpenTF please consolidate communication and postings? Having fragments of information posted by everybody involved only leads to bad situations like these and further hurts OpenTF

marcinw avatar
marcinw

Valid point, work in progress!

Eamon Keane avatar
Eamon Keane

Does anyone have a read on how the big three cloud providers are evaluating it internally? GCP in particular just launched a (basic) TACO and is known to have HashiCorp employees embedded in the company to help with Terraform. Are they waiting to see before expressing an opinion, or have they already decided to stick with Terraform?

loren avatar

The cloud providers are generally contributing to the Terraform providers (not Terraform core), and the providers are unaffected by the license change. So I believe the intent is for OpenTF to continue using the HashiCorp Terraform providers. Essentially, that layer of the design would be unchanged, with no impact on users of either Hashi Terraform or OpenTF.

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

The cloud providers also have Business Development contracts with HashiCorp. It’s a whole different discussion

jose.amengual avatar
jose.amengual

@Vlad Ionescu (he/him) do not have kids, otherwise you will be complaining that they can’t walk in the first 2 weeks

Alex Jurkiewicz avatar
Alex Jurkiewicz

If nobody is posting a negative take, do it yourself to get attention. When called on toxic behaviour, double down. You weren’t calling people dumb-fucks, but “actions”! When mistakes are pointed out, refuse to apologise, claim the true error is with “incomplete information”.

Hao Wang avatar
Hao Wang

my 2 cents: sorry, we have to move away from Hashicorp

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

@marcinw I owe you at least 1 drink! You are doing funding through the Linux Foundation (not through the CNCF) and I was wrong on that

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

(or I assume that’s what you’ll do based on the LF acceptance)

marcinw avatar
marcinw

Let’s wait some more time, I mean to collect all the drinks @Vlad Ionescu (he/him)

loren avatar

new enhancement request on the aws provider, would appreciate thumbs-up if you have the same use case… https://github.com/hashicorp/terraform-provider-aws/issues/33242

#33242 [Enhancement]: Exclusive management of aws_ssoadmin_managed_policy_attachment and aws_ssoadmin_customer_managed_policy_attachment

Description

We would like to be able to manage the exact set of managed policies attached to an AWS SSO Permission Set. Currently, using aws_ssoadmin_customer_managed_policy_attachment or aws_ssoadmin_managed_policy_attachment, the attachments are “non-exclusive”. Meaning, if a user attaches another policy to the permission set, terraform is blind to that change, and cannot detect or alert or remove the attachment.

This would be similar to the implementation of exclusive management of IAM Role attachments, or Security Group rules.

Affected Resource(s) and/or Data Source(s)

• aws_ssoadmin_permission_set • aws_ssoadmin_managed_policy_attachment • aws_ssoadmin_customer_managed_policy_attachment

Potential Terraform Configuration

This could be accomplished by adding new attributes to the aws_ssoadmin_permission_set resource. For example:

resource "aws_ssoadmin_permission_set" "example" {
  name             = var.name
  description      = var.description
  instance_arn     = local.sso_instance_arn
  relay_state      = var.relay_state
  session_duration = var.session_duration
  tags             = var.tags

  managed_policy_attachments = [...]
  customer_managed_policy_attachments = [...]
}

Alternatively, aligning with the desire to map a resource to a single primary API call, it could be accomplished through new “plural” resources:

resource "aws_ssoadmin_managed_policy_attachments" "example" {

  instance_arn       = local.sso_instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.this.arn

  managed_policy_arns = [...]
}

resource "aws_ssoadmin_customer_managed_policy_attachments" "example" {

  instance_arn       = local.sso_instance_arn
  permission_set_arn = aws_ssoadmin_permission_set.this.arn

  customer_managed_policy_attachments = [...]
}

References

#17510, #17511, #17512, #5904, #26352

Would you like to implement a fix?

None

omry avatar
Harness joins OpenTF foundation | Harnessattachment image

This blog covers the fact that Harness is joining OpenTF foundation.

Eamon Keane avatar
Eamon Keane

not too shabby, $425m total raised and last valued at $3.7bn.


2023-08-31

Release notes from terraform avatar
Release notes from terraform
02:13:32 PM

v1.6.0-beta1 No content.

Release v1.6.0-beta1 · hashicorp/terraformattachment image

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. - Release v1.6.0-beta1 · hashicorp/terraform

jose.amengual avatar
jose.amengual

Someone in my company posted this:

Hashicorp has updated the terms of use for their registry: https://registry.terraform.io/terms

Looking via the wayback machine: https://web.archive.org/web/20221220134052/https://registry.terraform.io/terms

The part that changed (section 2)

Original:

You may download or copy the Content (and other items displayed on the Services for download) for personal non-commercial use only, provided that you maintain all copyright and other notices contained in such Content.

New:

You may download providers, modules, policy libraries and/or other Services or Content from this website solely for use with, or in support of, HashiCorp Terraform. You may download or copy the Content (and other items displayed on the Services for download) for personal non-commercial use only, provided that you maintain all copyright and other notices contained in such Content.

So that reads as:

You may not use anything hosted on registry.terraform.io except with HashiCorp Terraform? But the providers are MPL, no?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) on LinkedIn: Terraform Registry | 41 commentsattachment image
HashiCorp flexes. Updates Terraform Registry Terms of Service. It's well within their right to do what they want with their registry. That said, one of the most… 41 comments on LinkedIn
Hao Wang avatar
Hao Wang

I'm waiting for the CTO of HashiCorp to leave and join another open-source company; his style of vision and presentations is convincing

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

He’s a co-founder and that’s very unlikely. His whiteboards are excellent though.

Hao Wang avatar
Hao Wang

maybe only one of the co-founders can live through the capital wars; seems HashiCorp is talking to giant companies, and may sell itself before going into a downturn like Docker. May happen in 6 months to a couple of years

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

I’m in no way suggesting this is going to happen, but I could potentially see Mitchell leaving. I also don’t think it’s going to happen as the company is named after him and he’s doing what he loves, but you never know.

As for whether HashiCorp remains an independent entity…time will tell. I know there’s been a handful of offers over the years, and here we are.

I sometimes think people forget we have more products than Terraform, and Vault is a major player in the industry. I came here after running Consul for years and that’s everywhere (though very few folks talk about it). Heck, Nomad is getting traction these days.

As for “capital wars”…not really sure what that means. HashiCorp is a public company, it’s not a non-profit. Unless run by a state or a non-profit, every company is capitalist. You either make money and profit and survive or die or get sold off.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I think people forget about the other products because nobody (even HashiCorp) has a convincing monetization strategy

Hao Wang avatar
Hao Wang

Mitchell is also a great presenter. Oh yeah, I almost forgot his role changed a couple of years back

Hao Wang avatar
Hao Wang

These people who can present and write code are geniuses

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Well, that’s clearly not true Alex.

Vinko Vrsalovic avatar
Vinko Vrsalovic

Hey - what happened to hashicorp/terraform? I’ve been reading about replacements and that something has happened but I don’t know what exactly happened, can you share an informative link?

Hao Wang avatar
Hao Wang

The licenses of HashiCorp's open-source projects (maybe just Terraform, or all of them) got updated and will adopt the Business Source License, which restricts competitors from using their products

Balazs Varga avatar
Balazs Varga

Competitors only? What about companies who just use it for IaaS… go back to Puppet/Ansible/CloudFormation?

Hao Wang avatar
Hao Wang

HashiCorp said there are no impacts on end users

Hao Wang avatar
Hao Wang

Nomad is a great product and has been getting more attention in recent years, even in k8s's shadow, but the future of Nomad is clear now, like Docker Swarm's

Hao Wang avatar
Hao Wang

oh, is Terragrunt considered a competitor under the new license?

Dhamodharan avatar
Dhamodharan

Hello all, I am new to GCP. I have a few resources created under one project-A; now I'm trying to create resources under another project-B, but I'm connecting to the same backend state. The issue is that the service account in project-B is unable to refresh the state file to get the resource details in project-A; it's giving me a permission denied error. How can I set permissions for the service account in project-B to access the resources in project-A?

When I run terraform apply, it gives this error:

│ Error: Error when reading or editing ComputeNetwork "projects/project-A/global/networks/wazuh-siem-vpc-01": Get "<https://compute.googleapis.com/compute/v1/projects/project-A/global/networks/wazuh-siem-vpc-01?alt=json>": impersonate: status code 403: {
│   "error": {
│     "code": 403,
│     "message": "Permission 'iam.serviceAccounts.getAccessToken' denied on resource (or it may not exist).",
│     "status": "PERMISSION_DENIED",
│     "details": [
│       {
│         "@type": "type.googleapis.com/google.rpc.ErrorInfo",
│         "reason": "IAM_PERMISSION_DENIED",
│         "domain": "iam.googleapis.com",
│         "metadata": {
│           "permission": "iam.serviceAccounts.getAccessToken"
│         }
│       }
│     ]
│   }
│ }

Can someone help me to fix this? Thanks

Dhamodharan avatar
Dhamodharan

Any suggestions on this please?

Rajat Verma avatar
Rajat Verma

It clearly says this permission is missing on the service account: iam.serviceAccounts.getAccessToken
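In Terraform terms, that grant could look like this (a sketch; the service account emails are placeholders), so project-B's account can mint access tokens for, i.e. impersonate, the account that owns project-A's resources:

resource "google_service_account_iam_member" "allow_impersonation" {
  # the service account being impersonated (owns project-A's resources)
  service_account_id = "projects/project-a/serviceAccounts/terraform@project-a.iam.gserviceaccount.com"
  role               = "roles/iam.serviceAccountTokenCreator"
  # the caller: project-B's service account
  member             = "serviceAccount:terraform@project-b.iam.gserviceaccount.com"
}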
