#terraform (2024-04)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-04-01

2024-04-02

Ryan avatar

stupid question but are you guys terraforming your github configs?

1
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
resource "github_repository_file" "gitignore" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = ".gitignore"
  content = templatefile("${path.module}/templates/.gitignore.tpl", {
    entries = var.gitignore_entries
  })
  commit_message      = "Create .gitignore file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "readme" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = "README.md"
  content = templatefile("${path.module}/templates/README.md.tpl", {
    repository_name        = local.github_repository.name
    repository_description = local.github_repository.description
    github_organization    = var.github_organization
  })
  commit_message      = "Create README.md file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "codeowners_file" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = ".github/CODEOWNERS"
  content = templatefile("${path.module}/templates/CODEOWNERS.tpl", {
    codeowners = var.github_codeowner_teams
  })
  commit_message      = "Create CODEOWNERS file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "pull_request_template" {
  count = local.enabled ? 1 : 0

  repository          = local.github_repository.name
  branch              = local.github_repository.default_branch
  file                = ".github/PULL_REQUEST_TEMPLATE.md"
  content             = file("${path.module}/templates/PULL_REQUEST_TEMPLATE.md")
  commit_message      = "Create PULL_REQUEST_TEMPLATE.md file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

Ryan avatar

Yea, I have a requirement to redeploy GitHub inside our boundaries, so I have a chance to do it clean. I was about to start clicking, but then I found some of what you’re saying above.

Ryan avatar

someone emoted nope, hmmmm

Ryan avatar

I’ll have to read more of the code; getting to a control state would be cool

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the auth token for the GitHub provider is read from SSM or ASM (Secrets Manager) in the case of AWS; the rest should be straightforward (variables configured with Terraform and/or Atmos). Let us know if you need any help
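
As a minimal sketch of wiring the provider token from SSM (the parameter name and variable names below are illustrative, not the component's actual ones):

data "aws_ssm_parameter" "github_api_token" {
  name            = "/github/api_token"
  with_decryption = true
}

provider "github" {
  owner = var.github_organization
  token = data.aws_ssm_parameter.github_api_token.value
}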

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We recently rolled this out for cloud posse: https://github.com/repository-settings/app

repository-settings/app

Pull Requests for GitHub repository settings

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
# These settings are synced to GitHub by <https://probot.github.io/apps/settings/>

repository:
  # See <https://docs.github.com/en/rest/reference/repos#update-a-repository> for all available settings.

  # Note: You cannot unarchive repositories through the API. `true` to archive this repository. 
  archived: false

  # Either `true` to enable issues for this repository, `false` to disable them.
  has_issues: true

  # Either `true` to enable projects for this repository, or `false` to disable them.
  # If projects are disabled for the organization, passing `true` will cause an API error.
  has_projects: true

  # Either `true` to enable the wiki for this repository, `false` to disable it.
  has_wiki: false

  # Either `true` to enable downloads for this repository, `false` to disable them.
  has_downloads: true

  # Updates the default branch for this repository.
  #default_branch: main

  # Either `true` to allow squash-merging pull requests, or `false` to prevent
  # squash-merging.
  allow_squash_merge: true

  # Either `true` to allow merging pull requests with a merge commit, or `false`
  # to prevent merging pull requests with merge commits.
  allow_merge_commit: false

  # Either `true` to allow rebase-merging pull requests, or `false` to prevent
  # rebase-merging.
  allow_rebase_merge: false

  # Either `true` to enable automatic deletion of branches on merge, or `false` to disable
  delete_branch_on_merge: true

  # Either `true` to enable automated security fixes, or `false` to disable
  # automated security fixes.
  enable_automated_security_fixes: true

  # Either `true` to enable vulnerability alerts, or `false` to disable
  # vulnerability alerts.
  enable_vulnerability_alerts: true

  # Either `true` to make this repo available as a template repository or `false` to prevent it.
  #is_template: false

environments:
  - name: release
    deployment_branch_policy:
      custom_branches:
        - main
        - release/**
  - name: security
    deployment_branch_policy:
      custom_branches:
        - main
        - release/**

# Labels: define labels for Issues and Pull Requests
labels:
  - name: bug
    color: '#d73a4a'
    description: 🐛 An issue with the system

  - name: feature
    color: '#336699'
    description: New functionality

  - name: bugfix
    color: '#fbca04'
    description: Change that restores intended behavior

  - name: auto-update
    color: '#ededed'
    description: This PR was automatically generated

  - name: do not merge
    color: '#B60205'
    description: Do not merge this PR, doing so would cause problems

  - name: documentation
    color: '#0075ca'
    description: Improvements or additions to documentation

  - name: readme
    color: '#0075ca'
    description: Improvements or additions to the README

  - name: duplicate
    color: '#cfd3d7'
    description: This issue or pull request already exists

  - name: enhancement
    color: '#a2eeef'
    description: New feature or request

  - name: good first issue
    color: '#7057ff'
    description: 'Good for newcomers'

  - name: help wanted
    color: '#008672'
    description: 'Extra attention is needed'

  - name: invalid
    color: '#e4e669'
    description: "This doesn't seem right"

  - name: major
    color: '#00FF00'
    description: 'Breaking changes (or first stable release)'

  - name: minor
    color: '#00cc33'
    description: New features that do not break anything

  - name: no-release
    color: '#0075ca'
    description: 'Do not create a new release (wait for additional code changes)'

  - name: patch
    color: '#0E8A16'
    description: A minor, backward compatible change

  - name: question
    color: '#d876e3'

  - name: wip
    color: '#B60205'
    description: 'Work in Progress: Not ready for final review or merge'

  - name: wontfix
    color: '#B60205'
    description: 'This will not be worked on'

  - name: needs-cloudposse
    color: '#B60205'
    description: 'Needs Cloud Posse assistance'
  
  - name: needs-test
    color: '#B60205'
    description: 'Needs testing'

  - name: triage
    color: '#fcb32c'
    description: 'Needs triage'

  - name: conflict
    color: '#B60205'
    description: 'This PR has conflicts'

  - name: no-changes
    color: '#cccccc'
    description: 'No changes were made in this PR'

  - name: stale
    color: '#e69138'
    description: 'This PR has gone stale'

  - name: migration
    color: '#2f81f7'
    description: 'This PR involves a migration'

  - name: terraform/0.13
    color: '#ffd9c4'
    description: 'Module requires Terraform 0.13 or later'

# Note: `permission` is only valid on organization-owned repositories.
# The permission to grant the collaborator. Can be one of:
# * `pull` - can pull, but not push to or administer this repository.
# * `push` - can pull and push, but not administer this repository.
# * `admin` - can pull, push and administer this repository.
# * `maintain` - Recommended for project managers who need to manage the repository without access to sensitive or destructive actions.
# * `triage` - Recommended for contributors who need to proactively manage issues and pull requests without write access.
#
# See <https://docs.github.com/en/rest/reference/teams#add-or-update-team-repository-permissions> for available options
teams:
  - name: approvers
    permission: push
  - name: admins
    permission: admin
  - name: bots
    permission: admin
  - name: engineering
    permission: write
  - name: contributors
    permission: write
  - name: security
    permission: pull

Ryan avatar

That is very slick

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This, combined with GitHub organizational repository rulesets, works well for us and eliminates the need to manage branch protections at the repo level

Ryan avatar

From a compliance perspective, I have to enforce any baseline I have

Ryan avatar

So like this might work well to enforce

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you decide to go the terraform route, there are some gotchas. I know there are bugs/limitations with the HashiCorp-managed provider for GitHub that others have worked around, but those forks are seemingly abandoned. We would recommend a single terraform component to manage a single repo, then using atmos to define the configuration for each repo with inheritance. This would mitigate one of the most common problems companies experience when using terraform to manage GitHub repos: GitHub API rate limits. Defining a repo factory in terraform is guaranteed to be hamstrung by these rate limits. So defining the factory in atmos instead and combining it with our GHA ensures only affected repos are planned/applied when changes are made.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I rather like the approach we took instead.

joshmyers avatar
joshmyers

wave I’m using Terraform to manage all our internal Github Enterprise repos and teams etc. Happy to answer any questions.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@joshmyers what issues have you run into along the way and are you using the official hashicorp provider?

joshmyers avatar
joshmyers

Mostly slowness in the provider, against GitHub Enterprise on several gigs over the last few years. Sometimes this comes from API limits, but mostly the provider gets slow when you start managing 50-100+ repos and plenty of settings. We managed to hunt down a few of the slow resources and switch them out for others, e.g. GraphQL for the github_branch_protection_v3 resource. Things got a bit better in more recent provider versions. You can also split the state at some logical business boundary so each doesn’t get too big. I like the drift detection and alignment here. Plenty of manual tweaks/bodges that had been forgotten got cleaned up.

#567 Slow performance when managing dozens of repositories

Terraform Version

0.12.6

Affected Resource(s)

Please list the resources as a list, for example:

• github_repository
• github_branch_protection
• github_team_repository
• github_actions_secret

Terraform Configuration Files

Here’s our repo module (slightly redacted ****):

terraform {
  required_providers {
    github = ">= 3.1.0"
  }
}

locals {
  # Terraform modules must be named `terraform-<provider>-<module name>`
  # so we can extract the provider easily
  provider = element(split("-", var.repository), 1)
}

data "github_team" "****" {
  slug = "****"
}

data "github_team" "****" {
  slug = "****"
}

resource "github_repository" "main" {
  name        = var.repository
  description = var.description

  visibility = var.visibility

  topics = [
    "terraform",
    "terraform-module",
    "terraform-${local.provider}"
  ]

  has_issues   = var.has_issues
  has_projects = var.has_projects
  has_wiki     = var.has_wiki

  vulnerability_alerts   = true
  delete_branch_on_merge = true

  archived = var.archived

  dynamic "template" {
    for_each = var.fork ? [] : [var.fork]

    content {
      owner      = "waveaccounting"
      repository = "****"
    }
  }
}

resource "github_branch_protection" "main" {
  repository_id = github_repository.main.node_id
  pattern       = github_repository.main.default_branch

  required_status_checks {
    strict = true
    contexts = [
      "Terraform",
      "docs",
    ]
  }

  required_pull_request_reviews {
    dismiss_stale_reviews      = true
    require_code_owner_reviews = true
  }
}

resource "github_team_repository" "****" {
  team_id    = data.github_team.****.id
  repository = github_repository.main.name
  permission = "admin"
}

resource "github_team_repository" "****" {
  team_id    = data.github_team.****.id
  repository = github_repository.main.name
  permission = "admin"
}

resource "github_actions_secret" "secrets" {
  for_each = var.secrets

  repository      = github_repository.main.name
  secret_name     = each.key
  plaintext_value = each.value
}

Actual Behavior

We are managing approximately 90 repositories using this module via Terraform Cloud remote operations (which means we can’t disable refresh or change parallelization afaik). I timed a refresh + plan: 9m22s (562s) == 6.2s per repository

Are there any optimizations we can make on our side or in the github provider / API to try to improve this? We’re discussing breaking up our repos into smaller workspaces, but that feels like a bit of a hack.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform plan on large numbers of repositories / branch protection configs

Important Factoids

• Running on Terraform Cloud Remote Operation

References

• Similar issue to https://github.com/terraform-providers/terraform-provider-github/issues/565, although things weren’t particularly fast before the update either

joshmyers avatar
joshmyers

The whole ownership move to integrations happened quite a while ago, not too many limitations I’ve hit other than the obvious resources and lack of certain things (most that you’d want is available). It’s the main provider. No competing fork silliness.

joshmyers avatar
joshmyers
WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
1
joshmyers avatar
joshmyers

I think https://github.com/repository-settings/app looks cool in terms of getting some consistency across multiple repos, but I wouldn’t call the model enforcing. Certainly lighter touch than TF. Wanted to manage teams too so we’re already here…

repository-settings/app

Pull Requests for GitHub repository settings

joshmyers avatar
joshmyers

Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml> files in repos, but have the probot do the heavy lifting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this statement is easily misunderstood.

https://sweetops.slack.com/archives/CB6GHNLG0/p1712345595034359?thread_ts=1712075779.976419&cid=CB6GHNLG0

The same is :100: true of any implementation that manages repository settings via GitOps.

It’s entirely mitigated with CODEOWNERS and branch protections (e.g. via organizational repository rulesets).

WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
Ryan avatar

Yea josh I was specifically thinking of watching drift

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for clarifying. Yes, that’s true.

Ryan avatar

I have to play with the provider a bit

joshmyers avatar
joshmyers

https://sweetops.slack.com/archives/CB6GHNLG0/p1712348663742749?thread_ts=1712075779.976419&cid=CB6GHNLG0 ish, the repo that manages the other repos can be managed/run by a separate team/different policies etc vs directly applying in your own repo. Aye CODEOWNERS/protections help.

I think this statement is easily misunderstood.

https://sweetops.slack.com/archives/CB6GHNLG0/p1712345595034359?thread_ts=1712075779.976419&cid=CB6GHNLG0

The same is :100: true of any implementation that manages repository settings via GitOps.

It’s entirely mitigated with CODEOWNERS and branch protections (e.g. via organizational repository rulesets).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, that’s the approach taken by https://github.com/github/safe-settings

github/safe-settings
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s the centralized/decentralized argument.

Note that the exact same teams required to approve changes to the centralized repo can/should be the same teams required to approve the PRs in decentralized repos. About the only difference I can see is visibility. In the centralized approach, the configuration can be private, which is beneficial. But the controls guarding changes to the configuration itself are the same in both cases: CODEOWNERS & branch protections with approvals.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml> files in repos, but have the probot do the heavy lifting.
This would be pretty awesome.

joshmyers avatar
joshmyers
github/safe-settings
venkata.mutyala avatar
venkata.mutyala

I wasn’t across safe-settings or repository-settings.

As of this week we have about 1k resources being managed in github via terraform and our plans take close to 10mins.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Eeeekkkk that’s a long time. Btw, did you publish your modules for this?

venkata.mutyala avatar
venkata.mutyala

It’s private and it’s pretty much code that is unmaintainable. I just spent the past hour looking at repository-settings and for a moment I thought I was going to switch, but it looks like there is a known issue with branch protection rules. I wasn’t able to get it to work in my repo: https://github.com/repository-settings/app/issues/857

#857 branches config does not work at all

Problem Description

Unable to define branch protection rules through the branches configuration.

I’ve tried numerous iterations, including a direct copy/paste from the docs. No branch protection is ever created.

What is actually happening

nothing at all.

What is the expected behaviour

branch protection rules should be created / maintained by the app

Error output, if available

n/a

Context Are you using the hosted instance of repository-settings/app or running your own?

hosted

venkata.mutyala avatar
venkata.mutyala

Other than branch protection (a big feature that we need to have) it seems like a pretty good tool. Given this recent experience I’m wondering, even if it worked, how would I know when it “stops” working in the future. E.g. they fix it, and in 3 months it breaks again and my new repos don’t have branch protection.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@venkata.mutyala you don’t even need those, since you’re on GHE. We don’t use those. We use Organizational Repository Rulesets, which IMO are better.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Why manage branch protections on hundreds of repos when you can do it in one place?

venkata.mutyala avatar
venkata.mutyala

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To be clear, Organizational Repository Rulesets implement branch protections

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But they are more powerful. You can match on repository properties, wildcards, etc. You can easily exempt bots and apps so they can bypass protections. You can enable them in a dry-run mode without enforcement, and turn on enforcement once they look good.

1
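
A rough sketch of such a ruleset with the integrations/github provider (enforcement set to "evaluate" for dry-run; the actor ID, names, and exact block layout are assumptions, so check the github_organization_ruleset docs):

resource "github_organization_ruleset" "branch_protection" {
  name        = "org-branch-protection"
  target      = "branch"
  enforcement = "evaluate" # dry-run; switch to "active" once the results look good

  conditions {
    ref_name {
      include = ["~DEFAULT_BRANCH"]
      exclude = []
    }
  }

  # Let a bot/app bypass the rules (actor_id is the GitHub App ID, illustrative here)
  bypass_actors {
    actor_id    = 123456
    actor_type  = "Integration"
    bypass_mode = "always"
  }

  rules {
    deletion = true

    pull_request {
      required_approving_review_count = 1
      require_code_owner_review       = true
    }
  }
}
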
venkata.mutyala avatar
venkata.mutyala

Dangggggg. How did i forget about this.

venkata.mutyala avatar
venkata.mutyala

Thanks Erik! This is going to clear out a lot of TF that i didn’t need to write. I recall seeing this before but totally forgot about it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, we also have an open PR for repository-settings. It will probably be a while before it merges. The repo is maintained, but sparingly.

https://github.com/repository-settings/app/pull/910

#910 Environment custom branch policies support type ('branch', 'tag')

What

• Added type support for environment deployment_branch_policy custom branches.

Why

• Environment deployment_branch_policy supports custom branches of type branch or tag. The type is an optional parameter that defaults to branch. These changes allow us to specify deployment_branch_policy for tags

Config example

environments:
  - name: development
    deployment_branch_policy:
      custom_branches:
        - dev/*
        - name: release/*
          type: branch
        - name: v*
          type: tag

You can specify a custom_branches list item as a string (for backwards compatibility) or as an object

name: `string`
type: `branch | tag`

Related links

Create a deployment branch policy API

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I recall seeing this before but totally forgot about it.
And I just talked about it on #office-hours!! I must have really sucked presenting how we use it. I even did a screenshare

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, safe-settings (a hard fork of repository-settings, I believe, and also built on probot) is very well maintained, and by GitHub

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I just didn’t like the centralized approach.

1
venkata.mutyala avatar
venkata.mutyala

How are you managing membership into each team?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s still manual for us.

venkata.mutyala avatar
venkata.mutyala

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Did you automate that too?

joshmyers avatar
joshmyers

Yeah we have

venkata.mutyala avatar
venkata.mutyala

Yeah we have some automation but it’s not very good. I was hoping to copy a cloudposse module, hence my tears :sob:.

@joshmyers did you go super granular with your permissions management and just use github_repository_collaborators or do you use teams/memberships?

joshmyers avatar
joshmyers

Teams and membership, this is GHE internal if that makes a difference

1
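
A minimal sketch of the teams-and-membership approach with the github provider (team name, member list variable, and repository reference are illustrative):

resource "github_team" "engineering" {
  name    = "engineering"
  privacy = "closed"
}

resource "github_team_membership" "engineering" {
  for_each = toset(var.engineering_members)

  team_id  = github_team.engineering.id
  username = each.value
  role     = "member"
}

resource "github_team_repository" "engineering" {
  team_id    = github_team.engineering.id
  repository = github_repository.main.name
  permission = "push"
}
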
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(We would accept contributions for cloudposse style modules for managing github repos and teams)

1
Ryan avatar

We being me too

3
Matt Gowie avatar
Matt Gowie
mineiros-io/terraform-github-repository
1
joshmyers avatar
joshmyers

Nice, looks very similar to ours. We also manage a few things like the CODEOWNERS/PR template files there too

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie are you using the forked (abandoned?) provider or the official hashicorp provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(because mineiros also forked the provider)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cool! Then I’m more bullish on it

2024-04-04

Release notes from terraform avatar
Release notes from terraform
08:13:32 AM

v1.8.0-rc2 1.8.0-rc2 (April 4, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:

Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().

Release v1.8.0-rc2 · hashicorp/terraformattachment image

1.8.0-rc2 (April 4, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:

Providers can now offer functions which can be used …

Soren Jensen avatar
Soren Jensen

I’m trying to set up a pipe in EventBridge, and it mostly works. The problem is the logging config isn’t applied. Terraform isn’t giving any errors, but when you look at the web console you see “configure logging” instead of the logging config below. Anyone got an idea of why that could be? The role has the right logging permissions.

resource "aws_cloudwatch_log_group" "pipes_log_group_raw_data_stream_transformer" {
  name = "/aws/pipes/RawDataStreamTransformer"
  retention_in_days = 365
  tags = local.tags
}

resource "awscc_pipes_pipe" "raw_data_stream_transformer" {
  name       = "raw-data-stream-transformer"
  role_arn   = aws_iam_role.pipes_raw_data_transformer_role.arn
  source     = aws_kinesis_stream.raw_data_stream.arn
  target     = aws_kinesis_stream.transformed_data_stream.arn
  enrichment = aws_lambda_function.data_transformer.arn

  source_parameters = {
    kinesis_stream_parameters = {
      starting_position      = "LATEST"
      maximum_retry_attempts = 3
      dead_letter_config = {
        arn = aws_sqs_queue.raw_data_stream_deadletter.arn
      }
    }
  }

  target_parameters = {
    kinesis_stream_parameters = {
      partition_key = "$.partition_key"
    }
  }

  log_configuration = {
    enabled        = true
    log_level      = "ERROR"
    log_group_name = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.name
  }
}
Fizz avatar

I think cw logs for event bridge need to have the aws/events/ prefix. Fairly sure I had the same problem and noted that when I created it through the console it created a log group with that prefix

Soren Jensen avatar
Soren Jensen

Oh, interesting.. Let me try that

Soren Jensen avatar
Soren Jensen

Unfortunately that wasn’t the issue..

Soren Jensen avatar
Soren Jensen
log_configuration = {
    cloudwatch_logs_log_destination = {
      log_group_arn = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.arn
    }
    level                  = "ERROR"
    include_execution_data = ["ALL"]
  }

It was mostly a syntax error. This ended up being the solution

Release notes from terraform avatar
Release notes from terraform
11:03:29 PM

v1.9.0-alpha20240404 1.9.0-alpha20240404 (April 4, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc that are not closed, Terraform will await another line of input to complete the expression. This initial implementation is primarily…

Release v1.9.0-alpha20240404 · hashicorp/terraformattachment image

1.9.0-alpha20240404 (April 4, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc th…

terraform console: Multi-line entry support by apparentlymart · Pull Request #34822 · hashicorp/terraformattachment image

The console command, when running in interactive mode, will now detect if the input seems to be an incomplete (but valid enough so far) expression, and if so will produce another prompt to accept a…

loren avatar

from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.

3
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great news this is something being worked on

1

2024-04-05

joshmyers avatar
joshmyers

Anyone have any info on Terraform Stacks? Anyone used it in the private beta? Know when/what functionality may be coming to OSS?

Terraform stacks, explainedattachment image

Terraform stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.

Igor Savchenko avatar
Igor Savchenko

According to the latest Hashicorp earnings call - this feature will be TFC/E specific and will not come to BSL TF cli.

Terraform stacks, explainedattachment image

Terraform stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.

joshmyers avatar
joshmyers

Uff, thanks.

2024-04-06

2024-04-07

2024-04-08

2024-04-09

mayank2299 avatar
mayank2299

Hello everyone. Can we spin up two DMS instances from this module? I am facing an issue regarding the name; I have copied the same folder structure from this module. Please help me and let me know if this is possible to do or not. Thanks. Here is the module:

https://github.com/cloudposse/terraform-aws-dms/blob/main/examples/complete/main.tf

locals {
  enabled              = module.this.enabled
  vpc_id               = module.vpc.vpc_id
  vpc_cidr_block       = module.vpc.vpc_cidr_block
  subnet_ids           = module.subnets.private_subnet_ids
  route_table_ids      = module.subnets.private_route_table_ids
  security_group_id    = module.security_group.id
  create_dms_iam_roles = local.enabled && var.create_dms_iam_roles
}

# Database Migration Service requires
# the below IAM Roles to be created before
# replication instances can be created.
# The roles should be provisioned only once per account.
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html>
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.APIRole>
# <https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dms_replication_instance>
#  * dms-vpc-role
#  * dms-cloudwatch-logs-role
#  * dms-access-for-endpoint
module "dms_iam" {
  source = "../../modules/dms-iam"

  enabled = local.create_dms_iam_roles

  context = module.this.context
}

module "dms_replication_instance" {
  source = "../../modules/dms-replication-instance"

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html>
  engine_version             = "3.4"
  replication_instance_class = "dms.t2.small"

  allocated_storage            = 50
  apply_immediately            = true
  auto_minor_version_upgrade   = true
  allow_major_version_upgrade  = false
  multi_az                     = false
  publicly_accessible          = false
  preferred_maintenance_window = "sun:10:30-sun:14:30"
  vpc_security_group_ids       = [local.security_group_id, module.aurora_postgres_cluster.security_group_id]
  subnet_ids                   = local.subnet_ids

  context = module.this.context

  depends_on = [
    # The required DMS roles must be present before replication instances can be provisioned
    module.dms_iam,
    aws_vpc_endpoint.s3
  ]
}

module "dms_endpoint_aurora_postgres" {
  source = "../../modules/dms-endpoint"

  endpoint_type                   = "source"
  engine_name                     = "aurora-postgresql"
  server_name                     = module.aurora_postgres_cluster.endpoint
  database_name                   = var.database_name
  port                            = var.database_port
  username                        = var.admin_user
  password                        = var.admin_password
  extra_connection_attributes     = ""
  secrets_manager_access_role_arn = null
  secrets_manager_arn             = null
  ssl_mode                        = "none"

  attributes = ["source"]
  context    = module.this.context

  depends_on = [
    module.aurora_postgres_cluster
  ]
}

module "dms_endpoint_s3_bucket" {
  source = "../../modules/dms-endpoint"

  endpoint_type = "target"
  engine_name   = "s3"

  s3_settings = {
    bucket_name                      = module.s3_bucket.bucket_id
    bucket_folder                    = null
    cdc_inserts_only                 = false
    csv_row_delimiter                = " "
    csv_delimiter                    = ","
    data_format                      = "parquet"
    compression_type                 = "GZIP"
    date_partition_delimiter         = "NONE"
    date_partition_enabled           = true
    date_partition_sequence          = "YYYYMMDD"
    include_op_for_full_load         = true
    parquet_timestamp_in_millisecond = true
    timestamp_column_name            = "timestamp"
    service_access_role_arn          = join("", aws_iam_role.s3[*].arn)
  }

  extra_connection_attributes = ""

  attributes = ["target"]
  context    = module.this.context

  depends_on = [
    aws_iam_role.s3,
    module.s3_bucket
  ]
}

resource "time_sleep" "wait_for_dms_endpoints" {
  count = local.enabled ? 1 : 0

  depends_on = [
    module.dms_endpoint_aurora_postgres,
    module.dms_endpoint_s3_bucket
  ]

  create_duration  = "2m"
  destroy_duration = "30s"
}

# `dms_replication_task` will be created (at least) 2 minutes after `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket`
# `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket` will be destroyed (at least) 30 seconds after `dms_replication_task`
module "dms_replication_task" {
  source = "../../modules/dms-replication-task"

  replication_instance_arn = module.dms_replication_instance.replication_instance_arn
  start_replication_task   = true
  migration_type           = "full-load-and-cdc"
  source_endpoint_arn      = module.dms_endpoint_aurora_postgres.endpoint_arn
  target_endpoint_arn      = module.dms_endpoint_s3_bucket.endpoint_arn

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html>
  replication_task_settings = file("${path.module}/config/replication-task-settings.json")

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html>
  table_mappings = file("${path.module}/config/replication-task-table-mappings.json")

  context = module.this.context

  depends_on = [
    module.dms_endpoint_aurora_postgres,
    module.dms_endpoint_s3_bucket,
    time_sleep.wait_for_dms_endpoints
  ]
}

module "dms_replication_instance_event_subscription" {
  source = "../../modules/dms-event-subscription"

  event_subscription_enabled = true
  source_type                = "replication-instance"
  source_ids                 = [module.dms_replication_instance.replication_instance_id]
  sns_topic_arn              = module.sns_topic.sns_topic_arn

  # <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
  event_categories = [
    "low storage",
    "configuration change",
    "maintenance",
    "deletion",
    "creation",
    "failover",
    "failure"
  ]

  attributes = ["instance"]
  context    = module.this.context
}

module "dms_replication_task_event_subscription" {
  source = "../../modules/dms-event-subscription"

  event_subscription_enabled = true
  source_type                = "replication-task"
  source_ids                 = [module.dms_replication_task.replication_task_id]
  sns_topic_arn              = module.sns_topic.sns_topic_arn

  # <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
  event_categories = [
    "configuration change",
    "state change",
    "deletion",
    "creation",
    "failure"
  ]

  attributes = ["task"]
  context    = module.this.context
}

Piotr Pawlowski avatar
Piotr Pawlowski

hey guys, I am trying to set up VPC peering with the module and I am receiving an error that The "count" value depends on resource attributes that cannot be determined until apply. My module definition is simple but relies on the VPC module, which should be created in the same terraform run. Question - can the VPC peering module be applied alongside the module that creates the VPC for which peering should be configured?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) where’s your recent post on count-of issues?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It might be less of a problem in the future.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Piotr Pawlowski I wrote this article to help people understand the issue you are facing. I hope it helps you, too.

Error: Values Cannot Be Determined Until Apply | The Cloud Posse Developer Hub

Details about computed values can cause terraform plan to fail

2
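
The usual workaround is to drive count from values known at plan time (variables or statically-known locals) rather than from attributes computed during apply; a minimal sketch with illustrative names:

variable "vpc_peering_enabled" {
  type    = bool
  default = true
}

# count depends only on a variable known at plan time, so this plans fine even
# when the VPC is created in the same run; the unknown vpc_id is used inside
# the resource arguments, where unknown values are allowed.
resource "aws_vpc_peering_connection" "this" {
  count = var.vpc_peering_enabled ? 1 : 0

  vpc_id      = module.vpc.vpc_id # unknown until apply, which is fine here
  peer_vpc_id = var.peer_vpc_id
  auto_accept = false
}
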
Msk avatar

Hello everyone. We are using your great datadog wrapper for terraform, but when I try to use the latest version I don’t see the new properties added in 1.4.1, like on_missing_data. Was that released to the terraform registry? I see it in the release zip and in the main branch, but when I init the module I don’t see it in the code

Msk avatar

Ah I think I get it. These were all migrated to options: … but for backwards compatibility the old values still work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) can add context, if necessary

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Right, we moved the old top-level settings, and only the old settings, to their API location under options, where they belong. (Side note: priority actually belongs at the top level, not under options, and designating it as legacy was a mistake.) You can see the on_missing_data setting being set here.

  on_missing_data          = try(each.value.options.on_missing_data, null)
1

2024-04-10

Release notes from terraform avatar
Release notes from terraform
07:23:33 PM

v1.8.0 1.8.0 (April 10, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:

Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().

party_parrot1
TechHippie avatar
TechHippie

Hello All - I want to deploy a few resources in all the accounts in an AWS organization. Is there a way to do it in terraform? I know I can use different providers to do it in multiple accounts, but what if I want to create them in all accounts in the organization?

AdamP avatar

You can use multiple providers, or Terraform workspaces. There may be another approach too via some 3rd party tooling.
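
The multiple-providers approach in a nutshell, as a sketch (account IDs, role name, and module path are illustrative; one alias per target account):

provider "aws" {
  alias  = "dev"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}

provider "aws" {
  alias  = "prod"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/OrganizationAccountAccessRole"
  }
}

module "baseline_dev" {
  source    = "./modules/baseline"
  providers = { aws = aws.dev }
}

module "baseline_prod" {
  source    = "./modules/baseline"
  providers = { aws = aws.prod }
}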

AdamP avatar

honestly, type that exact same question into ChatGPT.. it will give you some options

TechHippie avatar
TechHippie

Haha.. did that and it wasn’t helpful. Thought of checking internally as some of you might have wanted the same.

AdamP avatar

lol right on, yeah the feedback it gave me seemed reasonable, but there has to be a better way to manage a scenario like that for sure, tons of people have to be doing that exact same thing

AdamP avatar

I’ve also seen this popping up in my feed on LinkedIn and in Cloud Posse repos.. this may be worth looking into (I plan on looking further into it as well): https://github.com/cloudposse/atmos

cloudposse/atmos

Terraform Orchestration Tool for DevOps. Keep environment configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.

AdamP avatar

yeah, seems like a good solution… see “Use Cases” on that URL

2024-04-11

Piotr Pawlowski avatar
Piotr Pawlowski

I am trying to use the terraform-aws-eks-node-group module to create EKS nodes; once they are created, I am installing cluster-autoscaler. At the moment the autoscaler is missing the proper IAM permissions. I see that at some point the proper policy was added, but I do not see it in the current module code. Question: what is the proper approach to enable/create the IAM permissions required by the autoscaler?

Piotr Pawlowski avatar
Piotr Pawlowski

I solved it by adding a custom aws_iam_policy, with the policy defined via an aws_iam_policy_document data source, and passing it to the module via the node_role_policy_arns variable - not sure if this is the proper approach, though
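
A minimal sketch of that approach (the actions listed are the commonly documented cluster-autoscaler permissions; policy and module names are illustrative):

data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    sid    = "ClusterAutoscaler"
    effect = "Allow"
    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "ec2:DescribeLaunchTemplateVersions",
      "ec2:DescribeInstanceTypes",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "cluster_autoscaler" {
  name   = "cluster-autoscaler"
  policy = data.aws_iam_policy_document.cluster_autoscaler.json
}

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # ... other node group inputs ...

  node_role_policy_arns = [aws_iam_policy.cluster_autoscaler.arn]

  context = module.this.context
}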

AdamP avatar

I have it declared like so:

AdamP avatar
node_role_arn                 = [module.eks_node_group_main.eks_node_group_role_arn]

that is in my node group module code, and my eks cluster module is also in the same .tf file as well.

AdamP avatar

oh snaps, you’re talking about the autoscaler, not only the node group.. never mind

1

2024-04-12

joshmyers avatar
joshmyers

wave What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)

joshmyers avatar
joshmyers

I ended up here - so is the plan DDB table the same as the internal state/locks table TF uses, in your case?

module "s3_bucket" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component   = var.s3_bucket_component_name
  environment = try(var.s3_bucket_environment_name, module.this.environment)

  context = module.this.context
}

module "dynamodb" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component   = var.dynamodb_component_name
  environment = try(var.dynamodb_environment_name, module.this.environment)

  context = module.this.context
}

joshmyers avatar
joshmyers

tl;dr Need to stop storing the plan as an artifact. This action looks nice. Need to provision the DDB table (if it is indeed different from the locks table)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, @Igor Rodionov @Dan Miller (Cloud Posse) can you get unblocked on the actions and their dependencies.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Need to stop storing the plan as an artifact.
Why do you need to stop storing the planfile as an artifact? That ensures what you approve in the Pull Request is identical to what you apply upon merge to the default branch

joshmyers avatar
joshmyers

As a workflow artifact attached to the PR, because putting in S3/DDB would allow finer grained access control.

joshmyers avatar
joshmyers
joshmyers avatar
joshmyers

Would rather store/fetch from S3.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As a workflow artifact attached to the PR
Aha! Yes, that should be doable, but not something we’ve yet implemented. We would like that though… What we tried to do was something even more extreme, which was using artifact storage for everything; that proved challenging because of how limited the artifact API is. But only storing the planfile as an artifact, and not trying to use artifact storage to also replace DynamoDB, should be much easier.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would rather store/fetch from S3.
Oh, wait, I think we’re saying the same thing. We use S3 in our customer implementations, so that is already supported.

I thought you wanted to use artifact storage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So there must be a feature flag somewhere not set.

joshmyers avatar
joshmyers

Already use artifact storage but would rather S3 - the above action looks like it’ll do that, right? Just not sure what schema the metadata DDB table is expecting…?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nono, we use both dynamodb and S3.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use dynamo to store the metadata, so we can find planfiles and invalidate planfiles across PRs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use S3 because I think DynamoDB has limits on blob storage. Also, it serves as a permanent record, if so desired.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We didn’t want to use S3 as a database and have to scan the bucket to find planfiles.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note that 2 PRs could merge and affect the same components therefore we need some way to invalidate one or the other. Technically, a merge queue could alleviate some of the problems, but we don’t yet support that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let me find the schema for dynamo

1
joshmyers avatar
joshmyers

Yeah makes sense. Where do you actually create that DDB table / what’s the schema? e.g. the Terraform DDB state lock table has a partitionKey of LockID of type String

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so here’s how we do it. Since we do everything with reusable root modules (components) with atmos, here’s what our configuration looks like.

import:
  - catalog/s3-bucket/defaults
  - catalog/dynamodb/defaults

components:
  terraform:
    # S3 Bucket for storing Terraform Plans
    gitops/s3-bucket:
      metadata:
        component: s3-bucket
        inherits:
          - s3-bucket/defaults
      vars:
        name: gitops-plan-storage
        allow_encrypted_uploads_only: false

    # DynamoDB table used to store metadata for Terraform Plans
    gitops/dynamodb:
      metadata:
        component: dynamodb
        inherits:
          - dynamodb/defaults
      vars:
        name: gitops-plan-storage
        # These keys (case-sensitive) are required for the cloudposse/github-action-terraform-plan-storage action
        hash_key: id
        range_key: createdAt

    gitops:
      vars:
        enabled: true
        github_actions_iam_role_enabled: true
        github_actions_iam_role_attributes: ["gitops"]
        github_actions_allowed_repos:
          - "acmeOrg/infra"
        s3_bucket_component_name: gitops/s3-bucket
        dynamodb_component_name: gitops/dynamodb
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The gitops component is what grants GitHub OIDC permissions to access the bucket and DynamoDB table

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, let me get those defaults

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
# Deploys S3 Bucket and DynamoDB table for managing Terraform Plans
# Then deploys GitHub OIDC role for access these resources

# NOTE: If you make any changes to this file, please make sure the integration tests still pass in the
# <https://github.com/cloudposse/github-action-terraform-plan-storage> repo.
import:
  - catalog/s3-bucket/defaults
  - catalog/dynamodb/defaults
  - catalog/github-oidc-role/gitops

components:
  terraform:
    gitops/s3-bucket:
      metadata:
        component: s3-bucket
        inherits:
          - s3-bucket/defaults
      vars:
        name: gitops
        allow_encrypted_uploads_only: false

    gitops/dynamodb:
      metadata:
        component: dynamodb
        inherits:
          - dynamodb/defaults
      vars:
        name: gitops-plan-storage
        # This key (case-sensitive) is required for the cloudposse/github-action-terraform-plan-storage action
        hash_key: id
        range_key: ""
        # Only these 2 attributes are required for creating the GSI,
        # but there will be several other attributes on the table itself
        dynamodb_attributes:
          - name: 'createdAt'
            type: 'S'
          - name: 'pr'
            type: 'N'
        # This GSI is used to Query the latest plan file for a given PR.
        global_secondary_index_map:
          - name: pr-createdAt-index
            hash_key: pr
            range_key: createdAt
            projection_type: ALL
            non_key_attributes: []
            read_capacity: null
            write_capacity: null
        # Auto delete old entries
        ttl_enabled: true
        ttl_attribute: ttl
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun where do we describe the entire shape of the dynamodb table?

joshmyers avatar
joshmyers

Doh, I’d seen the above components but missed vars passing hash_key and range_key - thank you!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, ok, sorry - also, this question has sparked a ton of issues on the backend here. Some docs weren’t updated, others missing.

joshmyers avatar
joshmyers

No worries, thank you for checking

joshmyers avatar
joshmyers
    new TableV2(this, 'plan-storage-metadata', {
      tableName: `app-${WORKLOAD_NAME}-terraform-plan-storage-metadata`,
      billing: Billing.onDemand(),
      pointInTimeRecovery: true,
      timeToLiveAttribute: 'ttl',
      partitionKey: {
        name: 'id',
        type: AttributeType.STRING,
      },
      sortKey: {
        name: 'createdAt',
        type: AttributeType.STRING,
      },
      globalSecondaryIndexes: [
        {
          indexName: 'pr-createdAt-index',
          partitionKey: { name: 'pr', type: AttributeType.NUMBER },
          sortKey: { name: 'createdAt', type: AttributeType.STRING },
          projectionType: ProjectionType.ALL,
        },
      ],
    })
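
The same table shape as a plain aws_dynamodb_table, for anyone not on CDK or the atmos catalog (a sketch; the table name is illustrative and the keys match the above):

resource "aws_dynamodb_table" "terraform_plan_storage" {
  name         = "gitops-plan-storage"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  range_key    = "createdAt"

  attribute {
    name = "id"
    type = "S"
  }

  attribute {
    name = "createdAt"
    type = "S"
  }

  attribute {
    name = "pr"
    type = "N"
  }

  global_secondary_index {
    name            = "pr-createdAt-index"
    hash_key        = "pr"
    range_key       = "createdAt"
    projection_type = "ALL"
  }

  ttl {
    attribute_name = "ttl"
    enabled        = true
  }
}
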
joshmyers avatar
joshmyers

Think got it

joshmyers avatar
joshmyers

OK, some progress…

joshmyers avatar
joshmyers
      - name: Store Plan
        if: (github.event_name == 'pull_request') || (github.event.issue.pull_request && github.event.comment.body == '/terraform plan')
        uses: cloudposse/github-action-terraform-plan-storage@v1
        id: store-plan
        with:
          action: storePlan
          planPath: tfplan
          component: ${{ github.event.repository.name }}
          stack: ${{ steps.get_issue_number.outputs.result }}-tfplan
          commitSHA: ${{ github.event.pull_request.head.sha || github.sha }}
          tableName: app-playback-terraform-plan-storage-metadata
          bucketName: app-playback-terraform-state-prod-us-east-1
joshmyers avatar
joshmyers

Plan runs, can see object in S3

joshmyers avatar
joshmyers

but nothing in DDB…no writes, can see a single read. What am I not grokking about what this thing does? I kinda expected to see some metadata about the plan/pr in DDB? Action completed successfully.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, you should see something like that. Our west-coast team should be coming online shortly and can advise.

1
joshmyers avatar
joshmyers
##[debug]tableName: badgers
##[debug]bucketName: badgers
##[debug]metadataRepositoryType: dynamo
##[debug]planRepositoryType: s3
##[debug]bucketName: badgers
##[debug]Node Action run completed with exit code 0
##[debug]Finishing: Store Plan
joshmyers avatar
joshmyers

^^ debug output

Matt Calhoun avatar
Matt Calhoun

Hi Josh…just to be clear, do you have a PR open? It should write metadata to the DDB table whenever you push changes. And another sanity check, does your user/role in GHA have access to write to that table in DDB?

joshmyers avatar
joshmyers

Hey Matt - thanks so much. Yes PR is open (internal GHE), chaining roles and yes it has access to DDB and S3. I was getting explicit permission denied writing to DDB before adding it - so something was trying…

joshmyers avatar
joshmyers

Hmm, I wonder if the same PR meant it didn’t try to rewrite on a subsequent push? Let me push another change.

joshmyers avatar
joshmyers

Committed a new change, can see S3 plan file under the new sha…but still nothing in DDB (Item count/size etc 0) …

joshmyers avatar
joshmyers

Actually maybe the previous DDB IAM issue was on scan…which would make sense as I can see reads on the table but no writes.

joshmyers avatar
joshmyers

Yup, confirmed previous fail was trying to scan. (since succeeded)

joshmyers avatar
joshmyers

Bah - so sorry, problem between keyboard and computer. I’m looking in the wrong account - this is working as intended

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When you’re ready, we have example workflows we can share for drift detection/remediation as well.

1
joshmyers avatar
joshmyers

Working nicely - thank you!

Matt Gowie avatar
Matt Gowie

Does anyone know of, or has anyone built, a custom VSCode automation for refactoring a resource or module argument out into a variable?

As in: I want to select an argument value like in the below screenshot and be able to right-click or hit a keyboard shortcut, and it’ll create a new variable block in variables.tf with the argument’s name and that value as the default value?
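
In other words, the automation would take a selected literal and produce roughly this before/after (a trivial sketch with an illustrative resource):

# Before: literal argument value in main.tf
resource "aws_s3_bucket" "logs" {
  bucket = "my-app-logs"
}

# After: value extracted to variables.tf and referenced
variable "bucket" {
  type    = string
  default = "my-app-logs"
}

resource "aws_s3_bucket" "logs" {
  bucket = var.bucket
}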

RB avatar

No but that sounds very cool.

RB avatar

Perhaps Copilot can be trained to do it?

Matt Gowie avatar
Matt Gowie

Would be super useful, right?

I think I tried to use copilot to do it, but it only wanted to do it in the same file.

I’m sure this would be a no-brainer for somebody who has built a proper VSCode app.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or even just something on the command line

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For this specific ask, I am 99% sure that with the right prompt and a shell script, you could iterate over all the files with mods, because the prompt would be easy and the chances of getting it wrong are small.

https://github.com/charmbracelet/mods

charmbracelet/mods

AI on the command line

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In your prompt, just make sure to convey that it should respond in HCL and only focus on parameterizing constants. You can give it the convention for naming variables, etc.

george.m.sedky avatar
george.m.sedky

I’m working on this extension for the monaco editor, and will port it to a vscode plugin among other things soon

Matt Gowie avatar
Matt Gowie

Will check out mods for this… Charmbracelet

Matt Gowie avatar
Matt Gowie

@george.m.sedky awesome to hear! Happy to be a beta tester when you’ve got a VSCode plugin.

1
george.m.sedky avatar
george.m.sedky

awesome matt! will text you as soon as it’s ready

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie - report back if it works out, or was a flop

1
george.m.sedky avatar
george.m.sedky

@Matt Gowie this is how it works now with monaco before the VS code plugin https://youtu.be/ktyXJpf36W0?si=xJaaQ5Pn1i7L0m_j

1
miko avatar

Closed - Managed to make it work by switching to a different region that supports the API Gateway that I’m trying to create

Hey guys, I’m experiencing something weird, I have a very simple AWS API Gateway definition and for some reason it’s not being created–it’s stuck in creation state:

tamish60 avatar
tamish60

What is the issue? It’s trying to create…… Is there any error you’re facing??

miko avatar

Hi @tamish60, I’ve managed to make it work. This region doesn’t seem to have support for the kind of API Gateway that I’m trying to create; I moved to a different region and it worked just fine

2024-04-13

2024-04-15

2024-04-16

Stephan Helas avatar
Stephan Helas

Hi,

I’m currently using terragrunt and want to migrate to atmos. One very convenient thing about terragrunt is that I can simply override the terraform module git repo URLs with a local path (https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-source-map). This allows me to develop tf modules using terragrunt in an efficient way.

Is there anything like it in atmos? If not, what is the best way to develop tf modules while using atmos?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Stephan Helas best to use atmos)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we’ve seen more and more requests like this, and I can really identify with it as something we could/should support via the atmos vendor command.

One of our design goals in atmos is to avoid code generation/manipulation as much as possible. This ensures future compatibility with terraform.

So while we don’t support it the way terragrunt does, we support it the way terraform already supports it. :smiley:

That’s using the _override.tf pattern.

We like this approach because it keeps code as vanilla as possible while sticking with features native to Terraform.

https://developer.hashicorp.com/terraform/language/files/override

So, let’s say you have a main.tf with something like this (from the terragrunt docs you linked)

module "example" {
  source = "github.com/org/modules.git//example"
  // other parameters
}

To do what you want to do in native terraform, create a file called main_override.tf:

module "example" {
  source = "/local/path/to/modules//example"
}

You don’t need to duplicate the rest of the definition, only the parts you want to “override”, like the source

Override Files - Configuration Language | Terraform | HashiCorp Developer

Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this will work together with the atmos vendor command.

  1. create components/terraform/mycomponent
  2. create the main_override.tf file with the new source (for example)
  3. configure vendoring via vendor.yml or component.yml
  4. run atmos vendor
This works because atmos vendor will not overwrite the main_override.tf file, since that file does not exist upstream. You can use this strategy to “monkey patch” anything in terraform (see the vendoring sketch below).
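For illustration, here's a minimal vendoring sketch (the file name, source URI, and version are placeholders; the schema follows the Atmos component vendoring docs, so double-check it against your Atmos version):

# components/terraform/mycomponent/component.yml
apiVersion: atmos/v1
kind: ComponentVendorConfig
metadata:
  name: mycomponent-vendor-config
  description: Vendor the upstream component into components/terraform/mycomponent
spec:
  source:
    # upstream source to pull; main_override.tf stays untouched because it doesn't exist upstream
    uri: github.com/org/modules.git//example?ref={{.Version}}
    version: 1.0.0
    included_paths:
      - "**/*.tf"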
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

https://en.wikipedia.org/wiki/Monkey_patch

Monkey patch

In computer programming, monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. It is used to extend or modify the runtime code of dynamic languages such as Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, and Lisp without altering the original source code.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas also consider the following for when you are developing a new version of a component and don't want to “touch” the existing version. Let's say you already have a TF component in components/terraform/my-component and an Atmos manifest like this

components:
  terraform:
    my-component:
      metadata:
        component: my-component  # Point to the Terraform component (code)
      vars: 
        ...
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you want to create a new (completely different) version of the TF component (possibly with breaking changes). You place it into another folder in components/terraform, for example in components/terraform/my-component/v2 or in components/terraform/my-component-v2 (any folder names can be used, it's up to you to organize it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then suppose you want to test it only in the dev account. You add this manifest to the top-level dev stack, e.g. in the plat-ue2-dev stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    my-component:   # You can also use a new Atmos component name, e.g. my-component/v2:
      metadata:
        component: my-component/v2  # Point to the new Terraform component (code) under development
      vars: 
        ...
Stephan Helas avatar
Stephan Helas

Wow, thx very much. i'll try the vendor approach.

in fact, my second question was how to develop newer component versions

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can have many different versions of the “same” Terraform component, and point to them in Atmos stack manifests at diff scopes (orgs, tenant, account, region)

Stephan Helas avatar
Stephan Helas

i think i got the idea, but i need to test it. it's a lot to take in. i love the deep yaml merging approach a lot. i'll try it later and will probably come back with new questions :)

1
Stephan Helas avatar
Stephan Helas

i didn't know about the override feature of terraform. that's awesome - and yes - i'll totally use that. So, as soon as i have mastered the handling of the remote-state (something terragrunt does for me) i think i can leave terragrunt behind

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

as soon as i have mastered the handling of the remote-state (something terragrunt does for me)
Are you clear on the path forward here? As you might guess, we take the “terraform can handle that natively for you” approach by making components more intelligent using data sources and relying less on the tooling.

That said, if that wouldn’t work for you, would like to better understand your challenges.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


in fact, my second question was, how to develop a newer component versions
We have a lot of “opinions” on this that diverge from established terragrunt patterns. @Andriy Knysh (Cloud Posse) alluded to it in his example.

This probably warrants a separate thread in atmos, if / when you need any guidance.

Stephan Helas avatar
Stephan Helas

as a short summary, i use terragrunt (like many others i suppose) mainly for generating backend and provider config. on top of that i use the dependency feature, for example to place an ec2 instance in a generated subnet. since i don't have to deal with remote states manually, i don't know how it works behind the terragrunt curtain.

right now i try to understand the concept of the “context” module. after that i'll try to create the backend config and use remote state. but i think i will at least need another day for that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, all good questions. We should probably write up a guide that maps concepts and techniques common to #terragrunt and how we accomplish them in atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Backends is a good one - like, we didn't like that TG generates it, when the whole point is IaC. So we wrote a component for that.

Stephan Helas avatar
Stephan Helas

@Erik Osterman (Cloud Posse)

thx again. i’ll open the next thread in atmos

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The best write up on context might be by @Matt Gowie https://masterpoint.io/updates/terraform-null-label/

terraform-null-label: the why and how it should be used | Masterpoint Consulting

A post highlighting one of our favorite terraform modules: terraform-null-label. We dive into what it is, why it’s great, and some potential use cases in …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(you keep nerd sniping me in this thread!)

1
Stephan Helas avatar
Stephan Helas

ok. thx soo much. it's a real shame that i didn't find out about atmos sooner. could have saved me from a lot of weird things i did in terragrunt.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Atmos already generates many things, like the backend config for diff clouds, and supports the override pattern (e.g. for providers)

https://atmos.tools/core-concepts/components/terraform-backends

Terraform Backends | atmos

Configure Terraform Backends.
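As an illustration, a stack-level backend config might look roughly like this (a sketch with placeholder bucket/table/region values; when auto_generate_backend_file is enabled in atmos.yaml, Atmos generates a backend.tf.json per component from it, and the workspace_key_prefix defaults to the component name):

terraform:
  backend_type: s3
  backend:
    s3:
      bucket: acme-ue2-root-tfstate              # placeholder
      dynamodb_table: acme-ue2-root-tfstate-lock # placeholder
      key: terraform.tfstate
      region: us-east-2
      encrypt: true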

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the remote-state question comes up often. We did not want to make YAML a programming language, but use it just for config, so the remote state is done in the TF module and then configured in Atmos, see https://atmos.tools/core-concepts/components/remote-state

Terraform Component Remote State | atmos

The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.
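The pattern in that doc boils down to roughly this module call inside your component (a rough sketch; the module version and the “vpc” component name are placeholders):

module "vpc" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0" # pin to the version you actually use

  # the Atmos component whose outputs we want to read
  component = "vpc"

  context = module.this.context
}

# outputs of that component are then available as e.g. module.vpc.outputs.vpc_id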

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, Atmos now supports Go templating with Sprig and Gomplate functions and datasources

https://atmos.tools/core-concepts/stacks/templating

Stack Manifest Templating | atmos

Atmos supports Go templates in stack manifests.
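As a small illustration of what that enables (assuming templating is enabled in atmos.yaml; ATMOS_DEPLOY_USER is a made-up environment variable, and env/default are Sprig functions):

components:
  terraform:
    my-component:
      vars:
        # rendered by the Go template engine before Atmos deep-merges the stack config
        deployed_by: '{{ env "ATMOS_DEPLOY_USER" | default "unknown" }}'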

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which brings YAML with Go templates close to being a “programming” language. (w/o hardcoding anything in Atmos to work with functions in YAML, just embedding the existing template engines)

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

having said that, we might consider adding an Atmos-specific remote-state datasource to get the remote state for a component in Atmos directly (YAML + Go templates) w/o going through Terraform.

we might support both approaches (no ETA on the “remote-state” template datasource yet)

Stephan Helas avatar
Stephan Helas

@Andriy Knysh (Cloud Posse)

how would i get access to the provisioning documentation? I've created an account to log in.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Stephan Helas that part of our documentation (docs.cloudposse.com/reference-architecture) is included with our commercial reference architecture.

That’s how we make money at Cloud Posse to support our open source. If you’d like more information, feel free to book a meeting with me at cloudposse.com/quiz

Stephan Helas avatar
Stephan Helas

Aaaah i understand

1
Stephan Helas avatar
Stephan Helas

@Erik Osterman (Cloud Posse)

i’ve got a design question:

so i tried to use s3 backend and versioned components. for that i simply created folders in components like this:

.
└── terraform
    ├── infra
    │   ├── account-map
    │   └── vpc-flow-logs-bucket
    ├── tfstate-backend
    │   └── v1.4.0
    └── wms-base
        ├── develop
        ├── v1.0.0
        └── v1.1.0

I then created the stack like this:

▶ cat stacks/KN/wms/it03/_defaults.yaml
import:
  - mixins/_defaults
  - mixins/tennant/it03

components:
  terraform:
    it03/defaults:
      metadata:
        type: abstract
      vars:
        location: Wandsbek
        lang: de
    base:
      metadata:
        type: real
        component: wms-base/v1.0.0
        inherits:
          - base/defaults
          - it03/defaults

if i include the s3 backend without a key prefix definition, the prefix would be named like the component (as described in the documentation). this would lead to a loss of state if i changed the component version (as a new s3 prefix would be created).

so i integrated a tenant mixin for the prefix like this:

import:
  - catalog/component/wms/defaults
  - mixins/region/eu-central-1

vars:
  tenant: wms
  environment: it03
  tags:
    instance: it03

terraform:
  backend:
    s3:
      workspace_key_prefix: wms-it03

So that the bucket prefix name stays stable.

questions:

• is this a sound design or am i doing it wrong?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You identified and understand the gist of the problem. The workspace_key_prefix is how to keep the terraform state stable, when versioning multiple components.

So in your example

    └── wms-base
        ├── develop
        ├── v1.0.0
        └── v1.1.0

This is the correct way to version components. The idea is ultimately that you want all stacks that use a given component to converge on the same one.

Switching versions is as easy as your example, with the caveat that the location in TF state would change.

    base:
      metadata:
        type: real
        component: wms-base/v1.0.0

So we need to ensure the backend.s3.workspace_key_prefix is stable for that component, but not all components.

So we allow component to vary and point to the version. And we require backend.s3.workspace_key_prefix to point to a constant: the component name without the version, like wms-base

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You’re doing part of that, but this seems wrong to me

terraform:
  backend:
    s3:
      workspace_key_prefix: wms-it03
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) would be best to chime in here.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Gabriela Campana (Cloud Posse) we should have a “design pattern” that describes how to use multiple versions of a component while keeping state stable

1
Stephan Helas avatar
Stephan Helas

thx. this was helpful. a design pattern would be awesome.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll describe such a pattern in the docs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephan Helas workspace_key_prefix is by default the terraform component name (the folder in components/terraform/<component> that the metadata.component attribute points to)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yes, if you want to keep it the same regardless of the versions, you can specify it in backend.s3.workspace_key_prefix and use it for all versions. For example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
components:
  terraform:
    it03/defaults:
      metadata:
        type: abstract
      vars:
        location: Wandsbek
        lang: de
      backend:
        s3: 
          workspace_key_prefix: wms-base
    it03/v1.0:
      metadata:
        type: real
        component: wms-base/v1.0.0
        inherits:
          - base/defaults
          - it03/defaults
    it03/v1.1:
      metadata:
        type: real # type 'real' is the default and optional, you can omit it
        component: wms-base/v1.1.0
        inherits:
          - base/defaults
          - it03/defaults
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

don’t use globals like the following since it will be applied to ALL components in your infra

terraform:
  backend:
    s3:
      workspace_key_prefix: wms-it03
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the example above, the TF state S3 bucket will look like this:

bucket:
  wms-base:  # this is the key prefix, which becomes a bucket folder for the TF component
    <stack-name>-it03-v1.0:  # subfolder for the `it03/v1.0` component in the stack
      `tf_state` file
    <stack-name>-it03-v1.1:  # subfolder for the `it03/v1.1` component in the stack
      `tf_state` file    
    <stack-2-name>-it03-v1.1:  # subfolder for the `it03/v1.1` component in another stack `stack-2-name`
      `tf_state` file
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you want to keep not only workspace_key_prefix the same for all versions of the component, but also the Terraform workspace, you can do the following:

components:
  terraform:
    it03/defaults:
      metadata:
        type: abstract
      vars:
        location: Wandsbek
        lang: de
      backend:
        s3: 
          workspace_key_prefix: wms-base
    it03/v1.0:
      metadata:
        type: real
        component: wms-base/v1.0.0
        inherits:
          - base/defaults
          - it03/defaults
        # Override Terraform workspace
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}"
    it03/v1.1:
      metadata:
        type: real # type 'real' is the default and optional, you can omit it
        component: wms-base/v1.1.0
        inherits:
          - base/defaults
          - it03/defaults
        # Override Terraform workspace
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that in this case, both workspace_key_prefix and TF workspace will be the same for it03/v1.0 and it03/v1.1 components, so you will not be able to provision both at the same time (or if you do so, one will override the other in the state since they will be using the same state file in the same folder)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
bucket:
  wms-base:  # this is the key prefix, which becomes a bucket folder for the TF component
    <{tenant}-{environment}-{stage}>:  # this is the subfolder for the Terraform workspace 
      `tf_state` file
Stephan Helas avatar
Stephan Helas

Thx again for all the information.

Stephan Helas avatar
Stephan Helas

@Andriy Knysh (Cloud Posse)

i can’t inherit the terraform_workspace_pattern, right? so i need to overwrite the terraform_workspace_pattern on every component in the stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no, you can't inherit it b/c it's in the metadata section, which is per-component only and is not inheritable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


so i need to overwrite the terraform_workspace_pattern on every component in the stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let me know why you would want to do it in the first place

Tom Bee avatar
Tom Bee

Hey all,

(Edit: got the initial module working, now have questions about whether I’ve set it up optimally, and how to configure other ses features)

I'm attempting to get the cloudposse/ses/aws terraform module working but am not having much luck. I've only really done small tweaks to terraform scripts so it's probably just a skill issue, but hopefully someone here can point me in the right direction.

The project is using terragrunt and I’ve attempted to copy over ‘/examples/complete’ code.

To my amateur eye everything looks ok, but I’m getting this error:

│ Error: invalid value for name (must only contain alphanumeric characters, hyphens, underscores, commas, periods, @ symbols, plus and equals signs)
│ 
│   with module.ses.module.ses_user.aws_iam_user.default[0],
│   on .terraform/modules/ses.ses_user/main.tf line 11, in resource "aws_iam_user" "default":
│   11:   name                 = module.this.id
Tom Bee avatar
Tom Bee

in my ./modules/ses folder I have copied the following example files exactly:

context.tf
outputs.tf
variables.tf
versions.tf
Tom Bee avatar
Tom Bee

My main.tf looks like this:

module "ses" {
  source  = "cloudposse/ses/aws"
  version = "0.25.0"

  domain        = var.domain
  zone_id       = ""
  verify_dkim   = false
  verify_domain = false

  context = module.this.context
}

I don’t want the route53 verification stuff set up (as we will need to set up the MX record ourselves) so I didn’t add that resource, or the vpc module. Is that where I’ve messed up?

Tom Bee avatar
Tom Bee

and I also have a terraform.tf file with this content (matching how others have set up other modules in our project):

provider "aws" {
  region = var.region
}

provider "awsutils" {
  region = var.region
}
Tom Bee avatar
Tom Bee

I’m really stuck on why this error is being thrown inside this ses.ses_user module. Any help at all would be greatly appreciated!

Tom Bee avatar
Tom Bee

I removed the terraform.tf file and moved the providers into the main.tf file, and added in the vpc module and route53 resource to see if that was the issue, but it wasn’t.

provider "aws" {
  region = var.region
}

provider "awsutils" {
  region = var.region
}

module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "2.1.1"

  ipv4_primary_cidr_block = "<VPC_BLOCK - not sure if confidential>"

  context = module.this.context
}

resource "aws_route53_zone" "private_dns_zone" {
  name = var.domain

  vpc {
    vpc_id = module.vpc.vpc_id
  }

  tags = module.this.tags
}

module "ses" {
  source  = "cloudposse/ses/aws"
  version = "0.25.0"

  domain        = var.domain
  zone_id       = ""
  verify_dkim   = false
  verify_domain = false

  context = module.this.context
}

Really not sure what’s going wrong here. I’ve matched the examples pretty much exactly. Is it because it’s running via terragrunt or something?

Tom Bee avatar
Tom Bee

Am I supposed to set up every parameter in the context.tf file by passing in variables? It looks like there’s a “name” var in there which defaults to null - that could explain why I’m getting the error maybe?

Tom Bee avatar
Tom Bee

I think that might be what I was missing, passing in the name variable. I just set it to our application name and it worked! I’ll refactor things to pass these variables down from the environment so it should be good, finally got the basic ses setup up.

Now the question is how to configure all the other SES stuff.

Is it possible to configure the ‘Monitoring your email sending’ (publish to SNS topic) via the terraform script? This required a configuration set to be applied too, is that possible via the module? I can’t see any options for the inputs for this.

RB avatar

You should give it a simple name like ses-user and it should work

RB avatar

The name goes from the context as an input, to being used in the module.this null label, and then gets passed to the ses module via the context input. The module.this.id is the fully qualified name composed of the namespace, environment, stage, name, and other inputs. Technically none of those are required, but at least one (such as name) needs a value for the id to have a value, so you don't get that error message
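For example (a rough sketch; the namespace/stage values are placeholders), setting these null-label inputs on the root module is enough to give module.this.id a value:

# terraform.tfvars (or terragrunt inputs)
namespace = "acme"
stage     = "dev"
name      = "ses"

# with the default label order this yields module.this.id = "acme-dev-ses"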

Tom Bee avatar
Tom Bee

Awesome thanks for confirming. I had it as “ses” to start with, then I’ve changed it to use the same iam user name that the backend uses for s3 as well so it has the perms to talk to both services. One thing I haven’t checked yet is whether it deletes the other perms on that user or not.

Tom Bee avatar
Tom Bee

There’s also a deprecated warning when running the terraform job:

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
╷
│ Warning: Argument is deprecated
│ 
│   with module.ses.module.ses_user.module.store_write[0].aws_ssm_parameter.default["/system_user/ccp-dev/access_key_id"],
│   on .terraform/modules/ses.ses_user.store_write/main.tf line 22, in resource "aws_ssm_parameter" "default":
│   22:   overwrite       = each.value.overwrite
│ 
│ this attribute has been deprecated
│ 
│ (and one more similar warning elsewhere)
╵
RB avatar

looks like a deprecation in the ssm parameter write module

https://github.com/cloudposse/terraform-aws-ssm-parameter-store/issues/58

#58 Deprecated `overwrite` parameter

Describe the Bug

After a terraform apply

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
╷
│ Warning: Argument is deprecated
│ 
│   with module.ses.module.ses_user.module.store_write[0].aws_ssm_parameter.default["/system_user/ccp-dev/access_key_id"],
│   on .terraform/modules/ses.ses_user.store_write/main.tf line 22, in resource "aws_ssm_parameter" "default":
│   22:   overwrite       = each.value.overwrite
│ 
│ this attribute has been deprecated
│ 
│ (and one more similar warning elsewhere)

From upstream docs

Warning
overwrite also makes it possible to overwrite an existing SSM Parameter that’s not created by Terraform before. This argument has been deprecated and will be removed in v6.0.0 of the provider. For more information on how this affects the behavior of this resource, see this issue comment.

Expected Behavior

No deprecation warning

Steps to Reproduce

Run terraform apply using this module

Screenshots

No response

Environment

No response

Additional Context

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter

Kamil Grabowski avatar
Kamil Grabowski

Hello, Have you had a chance to test the new Terraform 1.8.0 version? In my experience, the new version appears to be approximately three times slower compared to version 1.7.5. I’ve created a new GitHub Issue regarding this performance problem, but we need to reproduce the issue (we need more cases). If you upgrade to 1.8, could you gauge the performance and memory usage before and after, then share the results here: https://github.com/hashicorp/terraform/issues/34984

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I know one of our customers tested it (accidentally) in their github actions by not pinning their terraform version. When 1.8 was released, their terraform started throwing exceptions and crashing. Pinning to the previous 1.7.x release fixed it.

2
2
this1
Kamil Grabowski avatar
Kamil Grabowski

The new release - 1.8.2 - has solved the performance issue. In my case it’s even faster than 1.7.5

3
party_parrot1
1

2024-04-17

Release notes from terraform avatar
Release notes from terraform
10:23:29 AM

v1.8.1 1.8.1 (April 17, 2024) BUG FIXES:

Fix crash in terraform plan when referencing a module output that does not exist within the try(…) function. (#34985) Fix crash in terraform apply when referencing a module with no planned changes. (#34985)

Fix panics due to missing graph edges on empty modules or invalid refs by liamcervante · Pull Request #34985 · hashicorp/terraform

This PR fixes two crashes within Terraform v1.8.0. In both cases a module expansion was being missed, and then crashing when something tried to reference the missed module. The first occurs when re…

Hao Wang avatar
Hao Wang
#34984 v1.8 is ~3x slower than v1.7.x

Terraform Version

$ terraform --version
Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.39.1
+ provider registry.terraform.io/hashicorp/local v2.4.1
+ provider registry.terraform.io/hashicorp/null v3.2.2
+ provider registry.terraform.io/hashicorp/random v3.6.0

Terraform Configuration Files

...terraform config...

Debug Output

n/a

Expected Behavior

$ time terraform validate
Success! The configuration is valid.


real	0m1.452s
user	0m2.379s
sys	0m0.318s

Actual Behavior

$ time terraform validate
Success! The configuration is valid.


real	0m9.720s
user	0m2.701s
sys	0m0.444s

Steps to Reproduce

  1. git clone <https://github.com/philips-labs/terraform-aws-github-runner.git>
  2. cd terraform-aws-github-runner/examples/multi-runner/
  3. time terraform validate

Additional Context

Terraform v1.8 is ~3x slower than v1.7 and consumes ~5x more memory.

References

No response

2024-04-18

Zing avatar

how powerful are the provider functions? trying to understand the use cases

RB avatar

extremely powerful.

You can write your own function in golang within a provider and then run the function in terraform.
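For example, the AWS provider ships a couple of provider-defined functions as of v5.40 (a minimal sketch; the ARN below is a dummy value and the version constraints are illustrative):

terraform {
  required_version = ">= 1.8.0" # provider-defined functions need Terraform 1.8+

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.40"
    }
  }
}

output "account_id" {
  # provider functions are called with the provider::<provider>::<function> syntax
  value = provider::aws::arn_parse("arn:aws:iam::123456789012:role/example").account_id
}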

Kuba Martin avatar
Kuba Martin

In OpenTofu 1.7, where provider functions are coming as well, we have some cool functional expansions on it that allow use cases like e.g. writing custom functions in Lua files side-by-side with your .tf files.

We’ll have a livestream next week on Wednesday diving into some of the details: https://www.youtube.com/watch?v=6OXBv0MYalY

1
RB avatar

opentofu 1.7 is still in beta tho, no ?

Kuba Martin avatar
Kuba Martin

Yep, the stable release is coming out in ~2 weeks

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

v1.7.0-beta1 Do not use this release for production workloads! It’s time for the first beta release of the 1.7.0 version! This includes a lot of major and minor new features, as well as a ton of community contributions! The highlights are:

State Encryption (docs) Provider-defined Functions (https://1-7-0-beta1.opentofu.pages.dev/docs/language/functions/#provider-defined-functions)…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Beta 1 released this morning!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


writing custom functions in Lua files side-by-side with your .tf files.

RB avatar
function main_a( input )
    local animal_sounds = {
        cat = 'meow',
        dog = 'woof',
        cow = 'moo'
    }
    return animal_sounds
end
Kuba Martin avatar
Kuba Martin

We’re working to get the experimental Lua provider on the registry as we speak, so you’ll be able to play around with it when test driving the beta!

3
Hao Wang avatar
Hao Wang

are there other languages being supported besides Lua?

Kuba Martin avatar
Kuba Martin

It’s not support for any specific languages per se, we’re just enabling the creation of providers that can dynamically expose custom functions based on e.g. a file passed to the provider as a config parameter.

So like here

provider "lua" {
    lua = file("./main.lua")
}

So the community will be able to create providers for whatever languages they want.

However, the providers still have to be written in Go, and after investigating options for embedding both Python and JavaScript in a provider written in Go, it’s not looking very promising.

I’ll be doing a PoC for dynamically writing those in Go though, via https://github.com/traefik/yaegi

1
Hao Wang avatar
Hao Wang

how about security side with functions?

RB avatar

do you mean how does opentofu prevent supply chain attacks with providers that integrate new languages ?

Hao Wang avatar
Hao Wang

yeah

1
Kuba Martin avatar
Kuba Martin

I don’t think that’s any different to normal providers and their resources though, is it?

Kuba Martin avatar
Kuba Martin

Btw. Lua provider is in and ready to play with https://github.com/opentofu/terraform-provider-lua

Hao Wang avatar
Hao Wang

awesome

Hao Wang avatar
Hao Wang

thinking about whether it's an issue if the lua code could replace Terraform binaries etc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is freggin awesome!

Using provider functions, it will be possible to run Lua alongside terraform / opentofu.

function main_a( input )
    local animal_sounds = {
        cat = 'meow',
        dog = 'woof',
        cow = 'moo'
    }
    return animal_sounds
end

Then call that from terraform.

terraform {
  required_providers {
    tester = {
      source  = "terraform.local/local/testfunctions"
      version = "0.0.1"
    }
  }
}

provider "tester" {
        lua = file("./main.lua")
}

output "test" {
        value = provider::tester::main_a(tomap({"foo": {"bar": 190}}))
}

See example: https://github.com/opentofu/opentofu/pull/1491

Granted, I don’t love the way this looks: provider::tester::main_a(tomap({"foo": {"bar": 190}})) https://sweetops.slack.com/archives/CB6GHNLG0/p1713454132172149?thread_ts=1713440027.477669&cid=CB6GHNLG0

#1491 Allow configured providers to provide additional functions.

Prior to this change a single unconfigured lazy provider instance was used per provider type to supply functions. This used the functions provided by GetSchema only.

With this change, provider function calls are detected and supplied via GraphNodeReferencer and are used in the ProviderFunctionTransformer to add dependencies between referencers and the providers that supply their functions.

With that information EvalContextBuiltin can now assume that all providers that require configuration have been configured by the time a particular scope is requested. It can then use it’s initialized providers to supply all requested functions.

At a high level, it allows providers to dynamically register functions based on their configurations.

main.lua

function main_a( input )
    local animal_sounds = {
        cat = 'meow',
        dog = 'woof',
        cow = 'moo'
    }
    return animal_sounds
end

main.tf

terraform {
  required_providers {
    tester = {
      source  = "terraform.local/local/testfunctions"
      version = "0.0.1"
    }
  }
}

provider "tester" {
        lua = file("./main.lua")
}

output "test" {
        value = provider::tester::main_a(tomap({"foo": {"bar": 190}}))
}

Output:

Changes to Outputs:
  + test = {
      + cat = "meow"
      + cow = "moo"
      + dog = "woof"
    }

This requires some enhancements to the HCL library and is currently pointing at my personal fork, with a PR into the main HCL repository: hashicorp/hcl#676. As this may take some time for the HCL Team to review, I will likely move the forked code under the OpenTofu umbrella before this is merged [done].

This PR will be accompanied by a simple provider framework for implementing function providers similar to the example above.

Related to #1326

Target Release

1.7.0

Matt Gowie avatar
Matt Gowie

It is awesome! To be clear: This will likely only be available for OpenTofu and not for Terraform because the OpenTofu devs are taking the provider functions feature further with this update that they came up with. I haven't seen anything that says Terraform will do the same, which will be an interesting divergence!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, I didn’t catch that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I was more looking to see provider functions adding uniformity and improving interoperability, rather than introducing incompatibilities

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


This requires some enhancements to the HCL library and is currently pointing at my personal fork, with a PR into the main HCL repository: hashicorp/hcl#676. As this may take some time for the HCL Team to review, I will likely move the forked code under the OpenTofu umbrella before this is merged [done].

#676 Add function inspections

With the focus on functions in recent hcl releases, I thought it time to introduce the ability to inspect which functions are required to evaluate expressions. This mirrors the Variable traversal present throughout the codebase.

This allows for better error messages and optimizations to be built around supplying custom functions by consumers of HCL.

These changes are backwards compatible and is not a breaking change. I explicitly introduced hcl.ExpressionWithFunctions to allow consumers to opt-into supporting function inspection. I would recommend that this be moved into hcl.Expression if a major version with API changes is ever considered.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Got it

Matt Gowie avatar
Matt Gowie

OpenTofu will still likely add provider functions that the providers define and open up and AFAIU that should be available across OTF + Terraform, BUT these dynamically defined functions that you can create yourselves and are project specific will be OTF only.

This is my understanding so far, so not 100% on that.

1
Hao Wang avatar
Hao Wang
04:04:00 PM

Got a quick question about the projects that Terraform depends on, e.g. https://github.com/hashicorp/hcl/tree/main: why were the licenses of these related projects not converted by Hashicorp?

2024-04-19

2024-04-20

Hao Wang avatar
Hao Wang
OpenTofu forges on with beta feature that drew HashiCorp ire | TechTarget

OpenTofu fended off a HashiCorp legal threat and shipped its 1.7 beta release with the disputed feature intact, along with client-side state encryption.

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Shouldn’t this be in the opentofu channel?

OpenTofu forges on with beta feature that drew HashiCorp ire | TechTarget

OpenTofu fended off a HashiCorp legal threat and shipped its 1.7 beta release with the disputed feature intact, along with client-side state encryption.

Hao Wang avatar
Hao Wang

they are still quite related, aren’t they?

Hao Wang avatar
Hao Wang
Initializing and Migrating | OpenTofu

Learn how to use the OpenTofu CLI to migrate local or remote state to a Cloud Backend.

2024-04-22

2024-04-23

Shivam s avatar
Shivam s

Getting this error while changing the version of the helm chart, please suggest

Michael avatar
Michael
IBM nearing a buyout deal for cloud software firm HashiCorp, source says

International Business Machines is nearing a deal to buy cloud software provider HashiCorp , according to a person familiar with the matter.

1
2
loren avatar

What a shame.

IBM nearing a buyout deal for cloud software firm HashiCorp, source says

International Business Machines is nearing a deal to buy cloud software provider HashiCorp , according to a person familiar with the matter.

Michael avatar
Michael

Break out the forks

1
Stephen Tan avatar
Stephen Tan

James Humphries avatar
James Humphries

Break out the forks
Now's a good time to mention that the opentofu channel exists

2
Hao Wang avatar
Hao Wang

what bad timing for an acquisition!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
IBM to Acquire HashiCorp, Inc. Creating a Comprehensive End-to-End Hybrid Cloud Platform

$6.4 billion acquisition adds suite of leading hybrid and multi-cloud lifecycle management products to help clients grappling with today’s AI-driven…

Hao Wang avatar
Hao Wang

more fun

loren avatar

Am I just missing it, or is there no data source to return the available EKS Access Policies? I opened a ticket to request a new data source; please add your support if it would help you also! https://github.com/hashicorp/terraform-provider-aws/issues/37065

#37065 [New Data Source]: `aws_eks_access_policies` to return available EKS Access Policies

Description

I would like a data source that returns all the available EKS Access Policies, essentially the results from aws eks list-access-policies. This would help simplify the constructs/logic around configuring EKS Access Policy Associations. It could also be used by module authors to validate user input.

Requested Resource(s) and/or Data Source(s)

data "aws_eks_access_policies"

Potential Terraform Configuration

data "aws_eks_access_policies" "this" {}

References

https://docs.aws.amazon.com/eks/latest/APIReference/API_ListAccessPolicies.html

Would you like to implement a fix?

None

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

AFAIK there is no such data source. The Access Policies are, IMHO, only useful for bootstrapping access to a new cluster or one where you lost access to the admin IAM role. You really should be using Kubernetes RBAC for access control in most cases.

#37065 [New Data Source]: `aws_eks_access_policies` to return available EKS Access Policies

Description

I would like a data source that returns all the available EKS Access Policies, essentially the results from aws eks list-access-policies. This would help simplify the constructs/logic around configuring EKS Access Policy Associations. It could also be used by module authors to validate user input.

Requested Resource(s) and/or Data Source(s)

data "aws_eks_access_policies"

Potential Terraform Configuration

data "aws_eks_access_policies" "this" {}

References

https://docs.aws.amazon.com/eks/latest/APIReference/API_ListAccessPolicies.html

Would you like to implement a fix?

None

loren avatar

What does “using Kubernetes RBAC” look like from the Access Entry perspective? I only enabled API mode.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

See AWS docs on access entries and Kubernetes docs on RBAC for details.
If an access entry’s type is STANDARD, and you want to use Kubernetes RBAC authorization, you can add one or more group names to the access entry. After you create an access entry you can add and remove group names. For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based authorization (RBAC) objects. Create Kubernetes RoleBinding or ClusterRoleBinding objects on your cluster that specify the group name as a subject for kind: Group. Kubernetes authorizes the IAM principal access to any cluster objects that you’ve specified in a Kubernetes Role or ClusterRole object that you’ve also specified in your binding’s roleRef.

Manage access entries - Amazon EKS

Grant users and apps access to Kubernetes APIs.

Using RBAC Authorization

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API. To enable RBAC, start the API server with the –authorization-mode flag set to a comma-separated list that includes RBAC; for example: kube-apiserver –authorization-mode=Example,RBAC –other-options –more-options API objects The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding.

loren avatar

That sounds hard. I’m still a kubernetes noob. We have no k8s dedicated platform team for this work. Just using built-in cloud stuff as much as possible. I think I’d rather just spin up more accounts and/or clusters than try to bother with rbac at that level

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I understand. In that case, I’d just leave the list of policies hard coded for now. There are only 4.

loren avatar

That’s what I did, yeah. A map of policy name to arn. Just would be nice to have a data source. I think there’s like 6 or 7 policies now. On my phone so don’t have my code in front of me

loren avatar

Oh and we do work in govcloud and iso partitions. So a hardcoded map gets cumbersome…
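One way to keep that hard-coded map partition-agnostic is to build the ARNs from the current partition (a sketch, not verified against GovCloud/ISO; the four policy names are the core ones from the AWS docs, so check aws eks list-access-policies for the full list):

data "aws_partition" "current" {}

locals {
  # e.g. "View" => "arn:aws-us-gov:eks::aws:cluster-access-policy/AmazonEKSViewPolicy" in GovCloud
  eks_access_policy_arns = {
    for name in ["ClusterAdmin", "Admin", "Edit", "View"] :
    name => "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/AmazonEKS${name}Policy"
  }
}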

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

I see 6 policies now. But if you’re not going to lock things down, then you can just give everyone AmazonEKSClusterAdminPolicy which is full control.

1
loren avatar

it’s kinda “cluster as a service” in this case. I’m just writing the terraform, not managing the cluster or any apps. I don’t care what they do with it afterwards. maybe they’ll use eks rbac. I’m just giving them the option to manage the eks access entries, and trying to simplify the input where I can

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You can look at what we did for guidance and inspiration

variable "access_entry_map" {
  type = map(object({
    # key is principal_arn
    user_name = optional(string)
    # Cannot assign "system:*" groups to IAM users, use ClusterAdmin and Admin instead
    kubernetes_groups = optional(list(string), [])
    type              = optional(string, "STANDARD")
    access_policy_associations = optional(map(object({
      # key is policy_arn or policy_name
      access_scope = optional(object({
        type       = optional(string, "cluster")
        namespaces = optional(list(string))
      }), {}) # access_scope
    })), {})  # access_policy_associations
  }))         # access_entry_map
  description = <<-EOT
    Map of IAM Principal ARNs to access configuration.
    Preferred over other inputs as this configuration remains stable
    when elements are added or removed, but it requires that the Principal ARNs
    and Policy ARNs are known at plan time.
    Can be used along with other `access_*` inputs, but do not duplicate entries.
    Map `access_policy_associations` keys are policy ARNs, policy
    full name (AmazonEKSViewPolicy), or short name (View).
    It is recommended to use the default `user_name` because the default includes
    IAM role or user name and the session name for assumed roles.
    As a special case in support of backwards compatibility, membership in the
    `system:masters` group is translated to an association with the ClusterAdmin policy.
    In all other cases, including any `system:*` group in `kubernetes_groups` is prohibited.
    EOT
  default     = {}
  nullable    = false
}

variable "access_entries" {
  type = list(object({
    principal_arn     = string
    user_name         = optional(string, null)
    kubernetes_groups = optional(list(string), null)
  }))
  description = <<-EOT
    List of IAM principals to allow to access the EKS cluster.
    It is recommended to use the default `user_name` because the default includes
    the IAM role or user name and the session name for assumed roles.
    Use when Principal ARN is not known at plan time.
    EOT
  default     = []
  nullable    = false
}

variable "access_policy_associations" {
  type = list(object({
    principal_arn = string
    policy_arn    = string
    access_scope = object({
      type       = optional(string, "cluster")
      namespaces = optional(list(string))
    })
  }))
  description = <<-EOT
    List of AWS managed EKS access policies to associate with IAM principals.
    Use when Principal ARN or Policy ARN is not known at plan time.
    `policy_arn` can be the full ARN, the full name (AmazonEKSViewPolicy) or short name (View).
    EOT
  default     = []
  nullable    = false
}
1
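A minimal usage sketch against the access_entry_map variable above (the role ARN / account ID are placeholders):

access_entry_map = {
  # key is the IAM principal ARN
  "arn:aws:iam::123456789012:role/eks-admins" = {
    access_policy_associations = {
      # key can be the short name, full name, or policy ARN; access_scope defaults to "cluster"
      ClusterAdmin = {}
    }
  }
}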
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

2024-04-24

Release notes from terraform avatar
Release notes from terraform
10:43:27 AM

v1.8.2 1.8.2 (April 24, 2024) BUG FIXES:

terraform apply: Prevent panic when a provider erroneously provides unknown values. (#35048) terraform plan: Replace panic with error message when self-referencing resources and data sources from the count and for_each meta attributes. (#35047)

terraform apply: restore marks after unknown validation by liamcervante · Pull Request #35048 · hashicorp/terraform

This PR updates the order of validations that happen after an ApplyResourceChange operation. Previously, we restored the sensitive marks on the new value before validating that it was wholly known….

Validate self-references within resource count and foreach arguments by liamcervante · Pull Request #35047 · hashicorp/terraform

Currently Terraform will crash if a resource references itself from the count or for_each attributes. This PR updates the expansion node for resources to perform the same check that the execution n…

Hao Wang avatar
Hao Wang
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
IBM to Acquire HashiCorp, Inc. Creating a Comprehensive End-to-End Hybrid Cloud Platform

$6.4 billion acquisition adds suite of leading hybrid and multi-cloud lifecycle management products to help clients grappling with today’s AI-driven…

1
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I am trying to look at it optimistically. IBM is behind OpenBao (fork of Vault), which is a vote for open source. While healthy competition is good, I don’t like the current hostilities between HashiCorp and OpenTofu. Best case is if the projects merge and move back to MPL or into CNCF as a unified project. Kind of like what happened with NodeJS and IO.js. Vault is where the money is at for them.

IBM to Acquire HashiCorp, Inc. Creating a Comprehensive End-to-End Hybrid Cloud Platform

$6.4 billion acquisition adds suite of leading hybrid and multi-cloud lifecycle management products to help clients grappling with today’s AI-driven…

2
Matt Gowie avatar
Matt Gowie

I like your perspective Erik and I hope for the best out of this. Regardless, there is a path forward here and that is what matters to me.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

An interesting theory I saw on LinkedIn is that it was a contingency for the acquisition to take place; that IBM required the move to BSL. To preserve their image, it is better that HashiCorp do it than RedHat. It would also explain the “business logic” for HashiCorp, making such a drastic/sweeping change, with seemingly little regard for the downstream effects and virtually no advance notice.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s just confusing to me that IBM would then also be a backer of OpenBao and going to the extent it has…

Matt Gowie avatar
Matt Gowie

Yeah those two pieces seem counter-intuitive.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
False flag

A false flag operation is an act committed with the intent of disguising the actual source of responsibility and pinning blame on another party. The term “false flag” originated in the 16th century as an expression meaning an intentional misrepresentation of someone’s allegiance. The term was famously used to describe a ruse in naval warfare whereby a vessel flew the flag of a neutral or enemy country in order to hide its true identity. The tactic was originally used by pirates and privateers to deceive other ships into allowing them to move closer before attacking them. It later was deemed an acceptable practice during naval warfare according to international maritime laws, provided the attacking vessel displayed its true flag once an attack had begun. The term today extends to include countries that organize attacks on themselves and make the attacks appear to be by enemy nations or terrorists, thus giving the nation that was supposedly attacked a pretext for domestic repression or foreign military aggression. Similarly deceptive activities carried out during peacetime by individuals or nongovernmental organizations have been called false flag operations, but the more common legal term is a “frameup”, “stitch up”, or “setup”.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s official

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Yes it is.

jose.amengual avatar
jose.amengual

Happy? is it seen as positive in hashicorp @Jake Lundberg (HashiCorp)?

1
Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

It’s all rather new, I can’t really speak for the company or others.

My personal stance is “wait and see”. If we continue on the trajectory we’ve set for ourselves from a strategy perspective, I think it’ll be great.

I know a lot of folks seem to think we’ve stagnated, but it’s far from the truth.

jose.amengual avatar
jose.amengual

I also hope it's for the best

Hao Wang avatar
Hao Wang

I’m leaving this channel, bye bye

wave2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Sorry to see you go. We have plenty of places to debate open source and other project forks. The point is just not in this channel.

2024-04-25

Patrick Berrigan avatar
Patrick Berrigan

Don't mind the XXXX, I removed my account ID

Patrick Berrigan avatar
Patrick Berrigan

Hey all, I'm trying to use the cloudposse SSM patch manager module located here: https://registry.terraform.io/modules/cloudposse/ssm-patch-manager/aws/latest However there is an issue that I can't seem to resolve. I am trying to get it to patch a windows machine with the key of PatchGroup and value of Windows but can't seem to figure out the correct syntax for it. I'm also having trouble with setting the patch baseline to Windows. By default it's Amazon Linux 2, however when I change it to Windows it gives me an error message. Here is my code snippet and the error message I get.

Patrick Berrigan avatar
Patrick Berrigan
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

this error message says that EnableNonSecurity isn't supported with Windows. You can disable non-security with var.patch_baseline_approval_rules, but the default option is set like this: https://github.com/cloudposse/terraform-aws-ssm-patch-manager/blob/main/variables.tf#L191-L211

  default = [
    {
      approve_after_days  = 7
      compliance_level    = "HIGH"
      enable_non_security = true
      patch_baseline_filters = [
        {
          name   = "PRODUCT"
          values = ["AmazonLinux2", "AmazonLinux2.0"]
        },
        {
          name   = "CLASSIFICATION"
          values = ["Security", "Bugfix", "Recommended"]
        },
        {
          name   = "SEVERITY"
          values = ["Critical", "Important", "Medium"]
        }
      ]
    }
  ]
Patrick Berrigan avatar
Patrick Berrigan

Thanks Dan. How does this work with a Windows asset? Just trying to patch one windows box. No Linux is involved.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

it should work the same way, but you should change the default option to what you need for windows. for example

  patch_baseline_approval_rules = [
    {
      approve_after_days  = 7
      compliance_level    = "HIGH"
      enable_non_security = false
      patch_baseline_filters = [
        {
          name   = "PRODUCT"
          values = ["WINDOWS"]
        },
        {
          name   = "CLASSIFICATION"
          values = ["Security", "Bugfix", "Recommended"]
        },
        {
          name   = "SEVERITY"
          values = ["Critical", "Important", "Medium"]
        }
      ]
    }
  ]

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_patch_baseline

https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html

DescribePatchProperties - AWS Systems Manager

Lists the properties of available patches organized by product, product family, classification, severity, and other properties of available patches. You can use the reported properties in the filters you specify in requests for operations such as ,

Patrick Berrigan avatar
Patrick Berrigan

This is what I get now.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

I’d recommend trying one of the options listed under “valid values”

jose.amengual avatar
jose.amengual

Hi! As some of you know, I’m one of the Atlantis(runatlantis.io) maintainers, and we are seeking input from our users. The Core Atlantis Team has created an anonymous survey for Atlantis users to help us understand the community needs and prioritize our roadmap. If you have the time, please take 5 minutes to fill it out https://docs.google.com/forms/d/1fOGWkdinDV2_46CZvzQRdz8401ypZR8Z-iwkNNt3EX0

4

2024-04-26

2024-04-27

DevOpsGuy avatar
DevOpsGuy

I have a repo and in the root I have three directories for qa, stage and prod, one for each environment, to create infrastructure in the respective environment. And I want to keep the code DRY (Don't repeat yourself).

NOTE: In each directory qa, stage and prod we are calling the child modules, which are remote, and not placing the child module configuration at the root of the module. I have a providers.tf file which is globally common, where all the providers are defined and each has an alias. But I want to place the providers.tf file in the root of the repo instead of placing the SAME file in all three qa, stage and prod directories.

Is it possible to place the common globally defined providers.tf file in the root of the repo and build the infrastructure in all the environments?

Joe Perez avatar
Joe Perez

Quick and dirty is a symlink, but you still need to figure out the backend stanza for each since they are different environments

Joe Perez avatar
Joe Perez

That could be done via cli flag or custom wrapper

DevOpsGuy avatar
DevOpsGuy

Hi @Joe Perez, Thank you for the quick response. Do you have an example or link for a document.

Joe Perez avatar
Joe Perez

I don’t have exactly that, but I did write a terraform wrapper article on how to do this with templates https://www.taccoform.com/posts/tf_wrapper_p1/

Terraform Wrappers - Simplify Your Workflow

Overview Cloud providers are complex. You’ll often ask yourself three questions: “Is it me?”, “Is it Terraform?”, and “Is it AWS?” The answer will be yes to at least one of those questions. Fighting complexity can happen at many different levels. It could be standardizing the tagging of cloud resources, creating and tuning the right abstraction points (Terraform modules) to help engineers build new services, or streamlining the IaC development process with wrappers.

loren avatar

Atmos would be the cloudposse way. There’s also terragrunt and terramate

DevOpsGuy avatar
DevOpsGuy

thank you @loren But, we are trying to stick with the vanilla terraform.

loren avatar

Good luck with that. You’ll end up with your own wrapper or script, to meet that requirement

1
DevOpsGuy avatar
DevOpsGuy

I was trying terraform -chdir, not sure how it can help us in our scenario.

loren avatar

It doesn’t, really

Jake Lundberg (HashiCorp) avatar
Jake Lundberg (HashiCorp)

Terraform Cloud workspaces help here as well. Stacks will go even further.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@DevOpsGuy working with vanilla terraform and no other tooling is the best and only way to cut your teeth learning terraform. In order to truly appreciate the challenges of working with terraform, there's no substitute for “learning the hard way”. However, I really do encourage you to read our write-up on this, if for no other reason than to be aware, so that as you start to encounter all the rough edges you'll know you're not alone. This is a tried and true path, and multiple solutions out there exist, paid or open source. As Jake mentions, this is one of the problems the commercial Terraform Cloud/Enterprise offering solves, but there are alternatives as well which are open source.

https://atmos.tools/reference/terraform-limitations/

Overcoming Terraform Limitations with Atmos | atmos

Overcoming Terraform Limitations with Atmos

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Teams often flounder with Terraform because they’re usually starting from scratch—a costly and outdated approach. Imagine if you tried to build a modern website without frameworks like React or Angular? That’s the uphill battle you’re fighting, without standardized conventions for Terraform, which other tooling brings to the table.

1

2024-04-28

RB avatar

This additional tflint ruleset looks handy for additional terraform stylistic guides such as terraform_variable_order
Recommend proper order for variable blocks: the variables without a default value are placed prior to those with a default value set; then the variables are sorted based on their names (alphabetic order)

this1
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Plus 1 to this. It'd definitely be helpful to have better organization in variable files

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

cc @Erik Osterman (Cloud Posse) (for v2 components)

Haroon Rasheed avatar
Haroon Rasheed

Hi Team,

Need your help on provider development using the plugin framework. I'm trying to set up a schema in the resource.go file. I set up an attribute which is required only during create, as it is a sensitive attribute, and it won't be present in the response, which is causing the below error. I tried multiple ways to accommodate it, like keeping required:true; optional:true and computed:true; sensitive:true, etc. Nothing worked. I got the below error

2024-04-27T15:09:36.989+0530 [ERROR] vertex "xxxx.cred" error: Provider produced inconsistent result after apply ╷ │ Error: Provider produced inconsistent result after apply │ │ When applying changes to xxxx.cred, provider │ "provider["hashicorp.com/edu/xxxx"]" produced an unexpected new value: │ .secret_key: inconsistent values for sensitive attribute. 

Basically I need to handle an attribute that is required only during create but isn't present in the response; that's what needs to be solved.

2024-04-29

2024-04-30

Kuba Martin avatar
Kuba Martin

Hey, OpenTofu 1.7.0 is out with state encryption, provider-defined functions, and a lot more: https://opentofu.org/blog/opentofu-1-7-0/!

2
5
3