#terraform (2024-04)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2024-04-01

2024-04-02

Ryan avatar

stupid question but are you guys terraforming your github configs?

1
1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
resource "github_repository_file" "gitignore" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = ".gitignore"
  content = templatefile("${path.module}/templates/.gitignore.tpl", {
    entries = var.gitignore_entries
  })
  commit_message      = "Create .gitignore file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "readme" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = "README.md"
  content = templatefile("${path.module}/templates/README.md.tpl", {
    repository_name        = local.github_repository.name
    repository_description = local.github_repository.description
    github_organization    = var.github_organization
  })
  commit_message      = "Create README.md file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "codeowners_file" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = ".github/CODEOWNERS"
  content = templatefile("${path.module}/templates/CODEOWNERS.tpl", {
    codeowners = var.github_codeowner_teams
  })
  commit_message      = "Create CODEOWNERS file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "pull_request_template" {
  count = local.enabled ? 1 : 0

  repository          = local.github_repository.name
  branch              = local.github_repository.default_branch
  file                = ".github/PULL_REQUEST_TEMPLATE.md"
  content             = file("${path.module}/templates/PULL_REQUEST_TEMPLATE.md")
  commit_message      = "Create PULL_REQUEST_TEMPLATE.md file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

Ryan avatar

Yea I have a req of redeploying Github inside our boundaries and so I have a chance to do it clean. I was about to start clicking but I was finding some of what you’re saying above.

Ryan avatar

someone emoted nope, hmmmm

Ryan avatar

I’ll have to read more on the code; getting to a control state would be cool

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the auth token for the github provider is read from SSM or ASM (AWS Secrets Manager) in the case of AWS; the rest should be straightforward (variables configured with Terraform and/or Atmos). Let us know if you need any help
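
For illustration, a minimal sketch of that pattern, assuming an SSM parameter named `/github/token` (the parameter name and variables are hypothetical, not a Cloud Posse convention):

```hcl
# Hypothetical sketch: read the GitHub token from AWS SSM Parameter Store
# and hand it to the github provider. The parameter name is an assumption.
data "aws_ssm_parameter" "github_token" {
  name            = "/github/token"
  with_decryption = true
}

provider "github" {
  owner = var.github_organization
  token = data.aws_ssm_parameter.github_token.value
}
```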

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We recently rolled this out for cloud posse: https://github.com/repository-settings/app

repository-settings/app

Pull Requests for GitHub repository settings

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
# These settings are synced to GitHub by <https://probot.github.io/apps/settings/>

repository:
  # See <https://docs.github.com/en/rest/reference/repos#update-a-repository> for all available settings.

  # Note: You cannot unarchive repositories through the API. `true` to archive this repository. 
  archived: false

  # Either `true` to enable issues for this repository, `false` to disable them.
  has_issues: true

  # Either `true` to enable projects for this repository, or `false` to disable them.
  # If projects are disabled for the organization, passing `true` will cause an API error.
  has_projects: true

  # Either `true` to enable the wiki for this repository, `false` to disable it.
  has_wiki: false

  # Either `true` to enable downloads for this repository, `false` to disable them.
  has_downloads: true

  # Updates the default branch for this repository.
  #default_branch: main

  # Either `true` to allow squash-merging pull requests, or `false` to prevent
  # squash-merging.
  allow_squash_merge: true

  # Either `true` to allow merging pull requests with a merge commit, or `false`
  # to prevent merging pull requests with merge commits.
  allow_merge_commit: false

  # Either `true` to allow rebase-merging pull requests, or `false` to prevent
  # rebase-merging.
  allow_rebase_merge: false

  # Either `true` to enable automatic deletion of branches on merge, or `false` to disable
  delete_branch_on_merge: true

  # Either `true` to enable automated security fixes, or `false` to disable
  # automated security fixes.
  enable_automated_security_fixes: true

  # Either `true` to enable vulnerability alerts, or `false` to disable
  # vulnerability alerts.
  enable_vulnerability_alerts: true

  # Either `true` to make this repo available as a template repository or `false` to prevent it.
  #is_template: false

environments:
  - name: release
    deployment_branch_policy:
      custom_branches:
        - main
        - release/**
  - name: security
    deployment_branch_policy:
      custom_branches:
        - main
        - release/**

# Labels: define labels for Issues and Pull Requests
labels:
  - name: bug
    color: '#d73a4a'
    description: 🐛 An issue with the system

  - name: feature
    color: '#336699'
    description: New functionality

  - name: bugfix
    color: '#fbca04'
    description: Change that restores intended behavior

  - name: auto-update
    color: '#ededed'
    description: This PR was automatically generated

  - name: do not merge
    color: '#B60205'
    description: Do not merge this PR, doing so would cause problems

  - name: documentation
    color: '#0075ca'
    description: Improvements or additions to documentation

  - name: readme
    color: '#0075ca'
    description: Improvements or additions to the README

  - name: duplicate
    color: '#cfd3d7'
    description: This issue or pull request already exists

  - name: enhancement
    color: '#a2eeef'
    description: New feature or request

  - name: good first issue
    color: '#7057ff'
    description: 'Good for newcomers'

  - name: help wanted
    color: '#008672'
    description: 'Extra attention is needed'

  - name: invalid
    color: '#e4e669'
    description: "This doesn't seem right"

  - name: major
    color: '#00FF00'
    description: 'Breaking changes (or first stable release)'

  - name: minor
    color: '#00cc33'
    description: New features that do not break anything

  - name: no-release
    color: '#0075ca'
    description: 'Do not create a new release (wait for additional code changes)'

  - name: patch
    color: '#0E8A16'
    description: A minor, backward compatible change

  - name: question
    color: '#d876e3'

  - name: wip
    color: '#B60205'
    description: 'Work in Progress: Not ready for final review or merge'

  - name: wontfix
    color: '#B60205'
    description: 'This will not be worked on'

  - name: needs-cloudposse
    color: '#B60205'
    description: 'Needs Cloud Posse assistance'
  
  - name: needs-test
    color: '#B60205'
    description: 'Needs testing'

  - name: triage
    color: '#fcb32c'
    description: 'Needs triage'

  - name: conflict
    color: '#B60205'
    description: 'This PR has conflicts'

  - name: no-changes
    color: '#cccccc'
    description: 'No changes were made in this PR'

  - name: stale
    color: '#e69138'
    description: 'This PR has gone stale'

  - name: migration
    color: '#2f81f7'
    description: 'This PR involves a migration'

  - name: terraform/0.13
    color: '#ffd9c4'
    description: 'Module requires Terraform 0.13 or later'

# Note: `permission` is only valid on organization-owned repositories.
# The permission to grant the collaborator. Can be one of:
# * `pull` - can pull, but not push to or administer this repository.
# * `push` - can pull and push, but not administer this repository.
# * `admin` - can pull, push and administer this repository.
# * `maintain` - Recommended for project managers who need to manage the repository without access to sensitive or destructive actions.
# * `triage` - Recommended for contributors who need to proactively manage issues and pull requests without write access.
#
# See <https://docs.github.com/en/rest/reference/teams#add-or-update-team-repository-permissions> for available options
teams:
  - name: approvers
    permission: push
  - name: admins
    permission: admin
  - name: bots
    permission: admin
  - name: engineering
    permission: write
  - name: contributors
    permission: write
  - name: security
    permission: pull

Ryan avatar

That is very slick

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This combined with GitHub organizational repository rulesets works well for us and eliminates the need to manage branch protections at the repo level

Ryan avatar

From a compliance perspective, I have to enforce any baseline I have

Ryan avatar

So like this might work well to enforce

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you decide to go the route with terraform, there are some gotchas. I know there are bugs/limitations with the HashiCorp managed provider for GitHub that others have worked around, but those forks are seemingly abandoned. We would recommend a single terraform component that manages a single repo, then using atmos to define the configuration for each repo with inheritance. This would mitigate one of the most common problems companies experience when using terraform to manage GitHub repos: GitHub API rate limits. Defining a factory in terraform for repos is guaranteed to be hamstrung by these rate limits. Defining the factory in atmos instead, and combining it with our GHA, ensures only affected repos are planned/applied when changes are made.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But I rather like the approach we took instead.

joshmyers avatar
joshmyers

:wave: I’m using Terraform to manage all our internal GitHub Enterprise repos and teams etc. Happy to answer any questions.

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@joshmyers what issues have you run into along the way and are you using the official hashicorp provider?

joshmyers avatar
joshmyers

Mostly slowness in the provider, against GitHub Enterprise on several gigs over the last few years. Sometimes this comes from API limits, but mostly it’s a slow provider once you start managing 50-100+ repos and plenty of settings. We managed to hunt down a few of the slow resources and switch them out for others, e.g. the GraphQL-based github_branch_protection in place of the github_branch_protection_v3 resource. Things got a bit better in more recent provider versions. You can also split the states at some logical business boundary so each doesn’t get too big. I like the drift detection and alignment here: plenty of forgotten manual tweaks/bodges got cleaned up.

#567 Slow performance when managing dozens of repositories

Terraform Version

0.12.6

Affected Resource(s)

Please list the resources as a list, for example:

• github_repository
• github_branch_protection
• github_team_repository
• github_actions_secret

Terraform Configuration Files

Here’s our repo module (slightly redacted ****):

terraform {
  required_providers {
    github = ">= 3.1.0"
  }
}

locals {
  # Terraform modules must be named `terraform-<provider>-<module name>`
  # so we can extract the provider easily
  provider = element(split("-", var.repository), 1)
}

data "github_team" "****" {
  slug = "****"
}

data "github_team" "****" {
  slug = "****"
}

resource "github_repository" "main" {
  name        = var.repository
  description = var.description

  visibility = var.visibility

  topics = [
    "terraform",
    "terraform-module",
    "terraform-${local.provider}"
  ]

  has_issues   = var.has_issues
  has_projects = var.has_projects
  has_wiki     = var.has_wiki

  vulnerability_alerts   = true
  delete_branch_on_merge = true

  archived = var.archived

  dynamic "template" {
    for_each = var.fork ? [] : [var.fork]

    content {
      owner      = "waveaccounting"
      repository = "****"
    }
  }
}

resource "github_branch_protection" "main" {
  repository_id = github_repository.main.node_id
  pattern       = github_repository.main.default_branch

  required_status_checks {
    strict = true
    contexts = [
      "Terraform",
      "docs",
    ]
  }

  required_pull_request_reviews {
    dismiss_stale_reviews      = true
    require_code_owner_reviews = true
  }
}

resource "github_team_repository" "****" {
  team_id    = data.github_team.****.id
  repository = github_repository.main.name
  permission = "admin"
}

resource "github_team_repository" "****" {
  team_id    = data.github_team.****.id
  repository = github_repository.main.name
  permission = "admin"
}

resource "github_actions_secret" "secrets" {
  for_each = var.secrets

  repository      = github_repository.main.name
  secret_name     = each.key
  plaintext_value = each.value
}

Actual Behavior

We are managing approximately 90 repositories using this module via Terraform Cloud remote operations (which means we can’t disable refresh or change parallelization afaik). I timed a refresh + plan: 9m22s (562s) == 6.2s per repository

Are there any optimizations we can make on our side or in the github provider / API to try to improve this? We’re discussing breaking up our repos into smaller workspaces, but that feels like a bit of a hack.

Steps to Reproduce

Please list the steps required to reproduce the issue, for example:

  1. terraform plan on large numbers of repositories / branch protection configs

Important Factoids

• Running on Terraform Cloud Remote Operation

References

• Similar issue to https://github.com/terraform-providers/terraform-provider-github/issues/565, although things weren’t particularly fast before the update either

joshmyers avatar
joshmyers

The whole ownership move to integrations happened quite a while ago, not too many limitations I’ve hit other than the obvious resources and lack of certain things (most that you’d want is available). It’s the main provider. No competing fork silliness.

joshmyers avatar
joshmyers
WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
1
joshmyers avatar
joshmyers

I think https://github.com/repository-settings/app looks cool in terms of getting some consistency across multiple repos, but I wouldn’t call the model enforcing. Certainly lighter touch than TF. Wanted to manage teams too so we’re already here…

joshmyers avatar
joshmyers

Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml> files in repos, but have the probot do the heavy lifting.
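
A rough sketch of that hybrid, where Terraform only distributes the settings file and the probot applies it (`var.repositories` and the template path are assumptions for illustration):

```hcl
# Hedged sketch: Terraform pushes .github/settings.yml into each repo, and
# the probot (repository-settings app) applies the settings on push.
# var.repositories and the template path are hypothetical.
resource "github_repository_file" "probot_settings" {
  for_each = toset(var.repositories)

  repository          = each.value
  branch              = "main"
  file                = ".github/settings.yml"
  content             = file("${path.module}/templates/settings.yml")
  commit_message      = "Sync probot settings.yml"
  overwrite_on_create = true
}
```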

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think this statement is easily misunderstood.

https://sweetops.slack.com/archives/CB6GHNLG0/p1712345595034359?thread_ts=1712075779.976419&cid=CB6GHNLG0

The same is :100: true of any implementation that manages repository settings via GitOps.

It’s entirely mitigated with CODEOWNERS and branch protections (e.g. via organizational repository rulesets).

WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
Ryan avatar

Yea josh I was specifically thinking of watching drift

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for clarifying. Yes, that’s true.

Ryan avatar

I have to play with the provider a bit

joshmyers avatar
joshmyers

https://sweetops.slack.com/archives/CB6GHNLG0/p1712348663742749?thread_ts=1712075779.976419&cid=CB6GHNLG0 ish, the repo that manages the other repos can be managed/run by a separate team/different policies etc vs directly applying in your own repo. Aye CODEOWNERS/protections help.

I think this statement is easily misunderstood.

https://sweetops.slack.com/archives/CB6GHNLG0/p1712345595034359?thread_ts=1712075779.976419&cid=CB6GHNLG0

The same is :100: true of any implementation that manages repository settings via GitOps.

It’s entirely mitigated with CODEOWNERS and branch protections (e.g. via organizational repository rulesets).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fwiw, that’s the approach taken by https://github.com/github/safe-settings

github/safe-settings
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s the centralized/decentralized argument.

Note, the same exact teams required to approve changes to the centralized repo, can/should be the same teams required to approve the PRs in decentralized repos. About the only difference I can see is visibility. In the centralized approach, the configuration can be private, which is beneficial. But the controls guarding the changes to the configuration itself, are the same in both cases: CODEOWNERS & branch protections with approvals.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml> files in repos, but have the probot do the heavy lifting.
This would be pretty awesome.

joshmyers avatar
joshmyers
github/safe-settings
venkata.mutyala avatar
venkata.mutyala

I wasn’t aware of safe-settings or repository-settings.

As of this week we have about 1k resources being managed in github via terraform and our plans take close to 10mins.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Eeeekkkk that’s a long time. Btw, did you publish your modules for this?

venkata.mutyala avatar
venkata.mutyala

It’s private and it’s pretty much code that is unmaintainable. I just spent the past hour looking at repository-settings, and for a moment I thought I was going to switch, but it looks like there is a known issue with branch protection rules. I wasn’t able to get it to work in my repo: https://github.com/repository-settings/app/issues/857

#857 branches config does not work at all

Problem Description

Unable to define branch protection rules through the branches configuration.

I’ve tried numerous iterations, including a direct copy/paste from the docs. No branch protection is ever created.

What is actually happening

nothing at all.

What is the expected behaviour

branch protection rules should be created / maintained by the app

Error output, if available

n/a

Context Are you using the hosted instance of repository-settings/app or running your own?

hosted

venkata.mutyala avatar
venkata.mutyala

Other than branch protection (a big feature that we need to have) it seems like a pretty good tool. Given this recent experience, I’m wondering even if it worked, how would I know when it “stops” working in the future? E.g. they fix it, and in 3 months it breaks again and my new repos don’t have branch protection.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@venkata.mutyala you don’t even need those, since you’re on GHE. We don’t use those. We use Organizational Repository Rulesets, which IMO are better.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Why manage branch protections on hundreds of repos when you can do it in one place?

venkata.mutyala avatar
venkata.mutyala

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

To be clear, Organizational Repository Rulesets implement branch protections

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But they are more powerful. You can match on repository properties, wildcards, etc. You can easily allow bots and apps to bypass protections. You can enable them in a dry-run mode without enforcement, and turn on enforcement once they look good.
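
As a hedged sketch (attribute names from my reading of the integrations/github provider docs; verify against the current provider before use), one org-level ruleset along these lines replaces per-repo branch protections:

```hcl
# Sketch of an organization ruleset: require PR reviews on default branches,
# let a hypothetical bot bypass, and start in "evaluate" (dry-run) mode.
resource "github_organization_ruleset" "branch_protections" {
  name        = "default-branch-protections"
  target      = "branch"
  enforcement = "evaluate" # switch to "active" once the dry run looks good

  conditions {
    ref_name {
      include = ["~DEFAULT_BRANCH"]
      exclude = []
    }
  }

  rules {
    pull_request {
      required_approving_review_count = 1
    }
  }

  bypass_actors {
    actor_id    = 1234 # hypothetical GitHub App installation ID
    actor_type  = "Integration"
    bypass_mode = "always"
  }
}
```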

1
venkata.mutyala avatar
venkata.mutyala

Dangggggg. How did I forget about this.

venkata.mutyala avatar
venkata.mutyala

Thanks Erik! This is going to clear out a lot of TF that i didn’t need to write. I recall seeing this before but totally forgot about it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note, we also have an open PR for repository-settings. It will probably be a while before it merges. The repo is maintained, but sparingly.

https://github.com/repository-settings/app/pull/910

#910 Environment custom branch policies support type ('branch', 'tag')

What

• Added type support for environment deployment_branch_policy custom branches.

Why

• Environment deployment_branch_policy supports custom branches type - branch or tag. The type is an option parameter that sets branch by default. These changes allow us to specify deployment_branch_policy for tag

Config example

environments:
  - name: development
    deployment_branch_policy:
      custom_branches:
        - dev/*
        - name: release/*
          type: branch
        - name: v*
          type: tag

You can specify custom_branches list item as string for back-compatibility or as object

name: `string`
type: `branch | tag`

Related links

Create a deployment branch policy API

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


I recall seeing this before but totally forgot about it.
And I just talked about it on #office-hours!! I must have really sucked at presenting how we use it. I even did a screenshare

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, safe-settings (a hard fork of repository-settings, I believe, and also built on Probot) is very well maintained, and by GitHub

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I just didn’t like the centralized approach.

1
venkata.mutyala avatar
venkata.mutyala

How are you managing membership into each team?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That’s still manual for us.

venkata.mutyala avatar
venkata.mutyala

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Did you automate that too?

joshmyers avatar
joshmyers

Yeah we have

venkata.mutyala avatar
venkata.mutyala

Yeah we have some automation but it’s not very good. I was hoping to copy a cloudposse module, hence my tears :sob:.

@joshmyers did you go super granular with your permissions management and just use github_repository_collaborators or do you use teams/memberships?

joshmyers avatar
joshmyers

Teams and membership, this is GHE internal if that makes a difference

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(We would accept contributions for cloudposse style modules for managing github repos and teams)

1
Ryan avatar

We being me too

3
Matt Gowie avatar
Matt Gowie
mineiros-io/terraform-github-repository
1
joshmyers avatar
joshmyers

Nice, looks very similar to ours. We also manage a few things like the CODEOWNERS/PR template files there too

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie are you using the forked (abandoned?) provider or the official hashicorp provider

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(because mineiros also forked the provider)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cool! Then I’m more bullish on it

2024-04-04

Release notes from terraform avatar
Release notes from terraform
08:13:32 AM

v1.8.0-rc2 1.8.0-rc2 (April 4, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:

Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().

Release v1.8.0-rc2 · hashicorp/terraform
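
The call syntax looks like this (a sketch, using the hashicorp/time provider’s `rfc3339_parse` function as an example; check the provider’s docs for which functions it actually exports):

```hcl
# Terraform >= 1.8: calling a provider-defined function with the
# provider::<provider_name>::<function_name>(args) syntax.
terraform {
  required_version = ">= 1.8.0"
  required_providers {
    time = {
      source = "hashicorp/time"
    }
  }
}

output "parsed_time" {
  value = provider::time::rfc3339_parse("2024-04-04T08:13:32Z")
}
```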

Soren Jensen avatar
Soren Jensen

I’m trying to set up a pipe in EventBridge, and it mostly works. The problem is the logging config isn’t applied. Terraform isn’t giving any errors, but when you look at the web console you see “configure logging” instead of the below logging config. Anyone got an idea of why that could be? The role has the right logging permissions.

resource "aws_cloudwatch_log_group" "pipes_log_group_raw_data_stream_transformer" {
  name              = "/aws/pipes/RawDataStreamTransformer"
  retention_in_days = 365
  tags              = local.tags
}

resource "awscc_pipes_pipe" "raw_data_stream_transformer" {
  name       = "raw-data-stream-transformer"
  role_arn   = aws_iam_role.pipes_raw_data_transformer_role.arn
  source     = aws_kinesis_stream.raw_data_stream.arn
  target     = aws_kinesis_stream.transformed_data_stream.arn
  enrichment = aws_lambda_function.data_transformer.arn

  source_parameters = {
    kinesis_stream_parameters = {
      starting_position      = "LATEST"
      maximum_retry_attempts = 3
      dead_letter_config = {
        arn = aws_sqs_queue.raw_data_stream_deadletter.arn
      }
    }
  }

  target_parameters = {
    kinesis_stream_parameters = {
      partition_key = "$.partition_key"
    }
  }

  log_configuration = {
    enabled        = true
    log_level      = "ERROR"
    log_group_name = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.name
  }
}
Fizz avatar

I think cw logs for event bridge need to have the aws/events/ prefix. Fairly sure I had the same problem and noted that when I created it through the console it created a log group with that prefix

Soren Jensen avatar
Soren Jensen

Oh, interesting.. Let me try that

Soren Jensen avatar
Soren Jensen

Unfortunately that wasn’t the issue..

Soren Jensen avatar
Soren Jensen
log_configuration = {
  cloudwatch_logs_log_destination = {
    log_group_arn = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.arn
  }
  level                  = "ERROR"
  include_execution_data = ["ALL"]
}

It was mostly a syntax error. This ended up being the solution

Release notes from terraform avatar
Release notes from terraform
11:03:29 PM

v1.9.0-alpha20240404 1.9.0-alpha20240404 (April 4, 2024) ENHANCEMENTS:

terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc that are not closed, Terraform will await another line of input to complete the expression. This initial implementation is primarily…

Release v1.9.0-alpha20240404 · hashicorp/terraform

terraform console: Multi-line entry support by apparentlymart · Pull Request #34822 · hashicorp/terraform

The console command, when running in interactive mode, will now detect if the input seems to be an incomplete (but valid enough so far) expression, and if so will produce another prompt to accept a…

loren avatar

from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.
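
The kind of configuration this unblocks looks roughly like this (module and attribute names are illustrative; today an unknown `for_each` value is a plan-time error):

```hcl
# Sketch: for_each derived from an apply-time (unknown) value. Without
# deferred actions this fails during plan; with `terraform plan
# -allow-deferral` (v1.9 alpha) these instances can be deferred instead.
module "zone_records" {
  source   = "./modules/record"               # illustrative module path
  for_each = toset(module.network.zone_names) # unknown until apply

  name = each.value
}
```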

3
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Cc @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Great news that this is something being worked on

1

2024-04-05

joshmyers avatar
joshmyers

Anyone have any info on Terraform Stacks? Anyone used it in the private beta? Know when/what functionality may be coming to OSS?

Terraform stacks, explained

Terraform stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.

2024-04-06

2024-04-07

2024-04-08

2024-04-09

mayank2299 avatar
mayank2299

Hello everyone. Can we spin up two DMS instances from this module? I am facing an issue regarding the name; I have copied the same folder structure from this module. Please let me know if this is possible to do or not. Thanks. Here is the module:

https://github.com/cloudposse/terraform-aws-dms/blob/main/examples/complete/main.tf

locals {
  enabled              = module.this.enabled
  vpc_id               = module.vpc.vpc_id
  vpc_cidr_block       = module.vpc.vpc_cidr_block
  subnet_ids           = module.subnets.private_subnet_ids
  route_table_ids      = module.subnets.private_route_table_ids
  security_group_id    = module.security_group.id
  create_dms_iam_roles = local.enabled && var.create_dms_iam_roles
}

# Database Migration Service requires
# the below IAM Roles to be created before
# replication instances can be created.
# The roles should be provisioned only once per account.
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html>
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.APIRole>
# <https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dms_replication_instance>
#  * dms-vpc-role
#  * dms-cloudwatch-logs-role
#  * dms-access-for-endpoint
module "dms_iam" {
  source = "../../modules/dms-iam"

  enabled = local.create_dms_iam_roles

  context = module.this.context
}

module "dms_replication_instance" {
  source = "../../modules/dms-replication-instance"

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html>
  engine_version             = "3.4"
  replication_instance_class = "dms.t2.small"

  allocated_storage            = 50
  apply_immediately            = true
  auto_minor_version_upgrade   = true
  allow_major_version_upgrade  = false
  multi_az                     = false
  publicly_accessible          = false
  preferred_maintenance_window = "sun:10:30-sun:14:30"
  vpc_security_group_ids       = [local.security_group_id, module.aurora_postgres_cluster.security_group_id]
  subnet_ids                   = local.subnet_ids

  context = module.this.context

  depends_on = [
    # The required DMS roles must be present before replication instances can be provisioned
    module.dms_iam,
    aws_vpc_endpoint.s3
  ]
}

module "dms_endpoint_aurora_postgres" {
  source = "../../modules/dms-endpoint"

  endpoint_type                   = "source"
  engine_name                     = "aurora-postgresql"
  server_name                     = module.aurora_postgres_cluster.endpoint
  database_name                   = var.database_name
  port                            = var.database_port
  username                        = var.admin_user
  password                        = var.admin_password
  extra_connection_attributes     = ""
  secrets_manager_access_role_arn = null
  secrets_manager_arn             = null
  ssl_mode                        = "none"

  attributes = ["source"]
  context    = module.this.context

  depends_on = [
    module.aurora_postgres_cluster
  ]
}

module "dms_endpoint_s3_bucket" {
  source = "../../modules/dms-endpoint"

  endpoint_type = "target"
  engine_name   = "s3"

  s3_settings = {
    bucket_name                      = module.s3_bucket.bucket_id
    bucket_folder                    = null
    cdc_inserts_only                 = false
    csv_row_delimiter                = " "
    csv_delimiter                    = ","
    data_format                      = "parquet"
    compression_type                 = "GZIP"
    date_partition_delimiter         = "NONE"
    date_partition_enabled           = true
    date_partition_sequence          = "YYYYMMDD"
    include_op_for_full_load         = true
    parquet_timestamp_in_millisecond = true
    timestamp_column_name            = "timestamp"
    service_access_role_arn          = join("", aws_iam_role.s3[*].arn)
  }

  extra_connection_attributes = ""

  attributes = ["target"]
  context    = module.this.context

  depends_on = [
    aws_iam_role.s3,
    module.s3_bucket
  ]
}

resource "time_sleep" "wait_for_dms_endpoints" {
  count = local.enabled ? 1 : 0

  depends_on = [
    module.dms_endpoint_aurora_postgres,
    module.dms_endpoint_s3_bucket
  ]

  create_duration  = "2m"
  destroy_duration = "30s"
}

# `dms_replication_task` will be created (at least) 2 minutes after `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket`
# `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket` will be destroyed (at least) 30 seconds after `dms_replication_task`
module "dms_replication_task" {
  source = "../../modules/dms-replication-task"

  replication_instance_arn = module.dms_replication_instance.replication_instance_arn
  start_replication_task   = true
  migration_type           = "full-load-and-cdc"
  source_endpoint_arn      = module.dms_endpoint_aurora_postgres.endpoint_arn
  target_endpoint_arn      = module.dms_endpoint_s3_bucket.endpoint_arn

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html>
  replication_task_settings = file("${path.module}/config/replication-task-settings.json")

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html>
  table_mappings = file("${path.module}/config/replication-task-table-mappings.json")

  context = module.this.context

  depends_on = [
    module.dms_endpoint_aurora_postgres,
    module.dms_endpoint_s3_bucket,
    time_sleep.wait_for_dms_endpoints
  ]
}

module "dms_replication_instance_event_subscription" {
  source = "../../modules/dms-event-subscription"

  event_subscription_enabled = true
  source_type                = "replication-instance"
  source_ids                 = [module.dms_replication_instance.replication_instance_id]
  sns_topic_arn              = module.sns_topic.sns_topic_arn

  # <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
  event_categories = [
    "low storage",
    "configuration change",
    "maintenance",
    "deletion",
    "creation",
    "failover",
    "failure"
  ]

  attributes = ["instance"]
  context    = module.this.context
}

module "dms_replication_task_event_subscription" {
  source = "../../modules/dms-event-subscription"

  event_subscription_enabled = true
  source_type                = "replication-task"
  source_ids                 = [module.dms_replication_task.replication_task_id]
  sns_topic_arn              = module.sns_topic.sns_topic_arn

  # <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
  event_categories = [
    "configuration change",
    "state change",
    "deletion",
    "creation",
    "failure"
  ]

  attributes = ["task"]
  context    = module.this.context
}

PiotrP avatar

hey guys, I am trying to set up VPC peering with the module and I am receiving an error that The "count" value depends on resource attributes that cannot be determined until apply. My module definition is simple but relies on the VPC module, which should be created in the same Terraform run. Question: can the VPC peering module be applied alongside the module that creates the VPC for which peering should be configured, in a single run?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) where’s your recent post on count-of issues?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It might be less of a problem in the future.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@PiotrP I wrote this article to help people understand the issue you are facing. I hope it helps you, too.

Error: Values Cannot Be Determined Until Apply | The Cloud Posse Developer Hub

Details about computed values can cause terraform plan to fail
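The failure mode PiotrP hit can be reduced to a small sketch (resource and variable names here are hypothetical; the exact pattern varies):

```hcl
variable "enabled" {
  type    = bool
  default = true
}

# OK: `count` derives from an input variable, which is known at plan time
resource "aws_vpc_peering_connection" "this" {
  count       = var.enabled ? 1 : 0
  vpc_id      = module.vpc.vpc_id # unknown values are fine inside arguments
  peer_vpc_id = var.peer_vpc_id
}

# Not OK: `count` depends on an attribute of a resource created in the
# same run, which is unknown at plan time:
#   count = module.vpc.vpc_id != null ? 1 : 0
# This produces "The count value depends on resource attributes that
# cannot be determined until apply."
```

The general rule: `count` and `for_each` must be computable from values known during plan (variables, constants, already-applied state), even though ordinary resource arguments may happily reference not-yet-created attributes.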

2
Msk avatar

Hello everyone. We are using your great Datadog wrapper for Terraform, but when I try to use the latest version I don't see the new properties added in 1.4.1, like on_missing_data. Was that released to the Terraform registry? I see it in the release zip and in the main branch, but when I init the module I don't see it in the code.

Msk avatar

Ah, I think I get it. These were all migrated to options: … but for backwards compatibility the old values still work.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse) can add context, if necessary

1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Right, we moved the old top-level settings, and only the old settings, to their API location under options, where they belong. (Side note: priority actually belongs at the top level, not under options, and designating it as legacy was a mistake.) You can see the on_missing_data setting being set here.

  on_missing_data          = try(each.value.options.on_missing_data, null)
1

2024-04-10

Release notes from terraform avatar
Release notes from terraform
07:23:33 PM

v1.8.0 1.8.0 (April 10, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:

Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().

party_parrot1
TechHippie avatar
TechHippie

Hello All - I want to deploy a few resources in all the accounts in an AWS organization. Is there a way to do it in Terraform? I know I can use different providers to deploy to multiple accounts, but what if I want to create resources in all accounts in the organization?

AdamP avatar

You can use multiple providers, or Terraform workspaces. There may be another approach too via some 3rd party tooling.
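A minimal sketch of the multiple-providers approach, assuming each member account exposes an assumable role such as the default OrganizationAccountAccessRole (account IDs and bucket names below are placeholders):

```hcl
# One provider alias per member account. Provider blocks cannot be
# created dynamically with for_each, so each account needs its own
# alias -- which is why orchestration tooling comes up for large orgs.
provider "aws" {
  alias  = "dev"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}

provider "aws" {
  alias  = "prod"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/OrganizationAccountAccessRole"
  }
}

# Each resource (or module) is then instantiated once per account
resource "aws_s3_bucket" "dev" {
  provider = aws.dev
  bucket   = "example-audit-dev"
}

resource "aws_s3_bucket" "prod" {
  provider = aws.prod
  bucket   = "example-audit-prod"
}
```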

AdamP avatar

honestly, type that exact same question into ChatGPT.. it will give you some options

TechHippie avatar
TechHippie

Haha.. did that and it wasn't helpful. Thought of checking here, as some of you might have wanted the same.

AdamP avatar

lol right on, yeah the feedback it gave me seemed reasonable, but there has to be a better way to manage a scenario like that for sure. Tons of people must be doing that exact same thing.

AdamP avatar

I've also seen this popping up in my feed on LinkedIn and in Cloud Posse repos.. this may be worth looking into (I plan on looking further into it as well): https://github.com/cloudposse/atmos

cloudposse/atmos

Terraform Orchestration Tool for DevOps. Keep environment configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.

AdamP avatar

yeah, seems like a good solution… see “Use Cases” on that URL

2024-04-11

PiotrP avatar

I am trying to use the terraform-aws-eks-node-group module to create EKS nodes; once they're created I am installing cluster-autoscaler. At the moment the autoscaler is missing the proper IAM permissions. I see that at some point the proper policy had been added, but I do not see it in the current module code. Question: what is the proper approach to enable/create the IAM permissions required by the autoscaler?

PiotrP avatar

I solved it by adding a custom aws_iam_policy (with the policy defined via an aws_iam_policy_document data source) and passing it to the module via the node_role_policy_arns variable. Not sure if this is the proper approach, though.
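For reference, a sketch of that approach. The IAM actions below are the commonly documented minimal set for cluster-autoscaler; the module source and other arguments are illustrative, not the exact configuration discussed here:

```hcl
data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    effect = "Allow"
    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "ec2:DescribeLaunchTemplateVersions",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "cluster_autoscaler" {
  name   = "cluster-autoscaler"
  policy = data.aws_iam_policy_document.cluster_autoscaler.json
}

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # ... other node group arguments ...

  # Attach the policy to the node role the module creates
  node_role_policy_arns = [aws_iam_policy.cluster_autoscaler.arn]
}
```

An alternative is to grant these permissions to the autoscaler's own service account via IRSA instead of the node role, which scopes the permissions more tightly.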

AdamP avatar

I have it declared like so:

AdamP avatar
node_role_arn                 = [module.eks_node_group_main.eks_node_group_role_arn]

that is in my node group module code, and my eks cluster module is also in the same .tf file as well.

AdamP avatar

oh snaps, you’re talking about the autoscaler, not only the node group.. never mind

2024-04-12

joshmyers avatar
joshmyers

wave What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)

joshmyers avatar
joshmyers

I ended up here - so is the plan DDB table the same as the internal state/locks table TF uses, in your case?

module "s3_bucket" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component   = var.s3_bucket_component_name
  environment = try(var.s3_bucket_environment_name, module.this.environment)

  context = module.this.context
}

module "dynamodb" {
  source  = "cloudposse/stack-config/yaml//modules/remote-state"
  version = "1.5.0"

  component   = var.dynamodb_component_name
  environment = try(var.dynamodb_environment_name, module.this.environment)

  context = module.this.context
}

joshmyers avatar
joshmyers

tl;dr Need to stop storing the plan as an artifact. This action looks nice. Need to provision the DDB table (if it is indeed different from the locks table).

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Aha, @Igor Rodionov @Dan Miller (Cloud Posse) can you help get this unblocked on the actions and their dependencies?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


Need to stop storing the plan as an artifact.
Why do you need to stop storing the planfile as an artifact? That ensures what you approve in the Pull Request is identical to what you apply upon merge to the default branch

joshmyers avatar
joshmyers

As a workflow artifact attached to the PR, because putting in S3/DDB would allow finer grained access control.

joshmyers avatar
joshmyers
joshmyers avatar
joshmyers

Would rather store/fetch from S3.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

As a workflow artifact attached to the PR Aha! Yes, that should be doable, but not something we've yet implemented. We would like that though… What we tried to do was something even more extreme: using artifact storage for everything, which proved challenging because of how limited the artifact API is. But only storing the planfile as an artifact, and not trying to use artifact storage to also replace DynamoDB, should be much easier.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Would rather store/fetch from S3. Oh, wait, I think we’re saying the same thing. We use S3 in our customer implementations, so that is already supported.

I thought you wanted to use artifact storage

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So there must be a feature flag somewhere not set.

joshmyers avatar
joshmyers

Already use artifact storage but would rather S3 - the above action looks like it’ll do that, right? Just not sure what schema the metadata DDB table is expecting…?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Nono, we use both dynamodb and S3.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use dynamo to store the metadata, so we can find planfiles and invalidate planfiles across PRs.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We use S3 because I think DynamoDB has limits on blob storage. Also, it serves as a permanent record, if so desired.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We didn’t want to use S3 as a database and have to scan the bucket to find planfiles.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note that 2 PRs could merge and affect the same components therefore we need some way to invalidate one or the other. Technically, a merge queue could alleviate some of the problems, but we don’t yet support that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let me find the schema for dynamo

1
joshmyers avatar
joshmyers

Yeah, makes sense. Where do you actually create that DDB table, and what's the schema? e.g. the Terraform DDB state lock table has a partitionKey of LockID of type String

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, so here's how we do it. Since we do everything with reusable root modules (components) with atmos, here's what our configuration looks like.

import:
  - catalog/s3-bucket/defaults
  - catalog/dynamodb/defaults

components:
  terraform:
    # S3 Bucket for storing Terraform Plans
    gitops/s3-bucket:
      metadata:
        component: s3-bucket
        inherits:
          - s3-bucket/defaults
      vars:
        name: gitops-plan-storage
        allow_encrypted_uploads_only: false

    # DynamoDB table used to store metadata for Terraform Plans
    gitops/dynamodb:
      metadata:
        component: dynamodb
        inherits:
          - dynamodb/defaults
      vars:
        name: gitops-plan-storage
        # These keys (case-sensitive) are required for the cloudposse/github-action-terraform-plan-storage action
        hash_key: id
        range_key: createdAt

    gitops:
      vars:
        enabled: true
        github_actions_iam_role_enabled: true
        github_actions_iam_role_attributes: ["gitops"]
        github_actions_allowed_repos:
          - "acmeOrg/infra"
        s3_bucket_component_name: gitops/s3-bucket
        dynamodb_component_name: gitops/dynamodb
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The gitops component is what grants GitHub OIDC permissions to access the bucket and DynamoDB table

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, let me get those defaults

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
# Deploys S3 Bucket and DynamoDB table for managing Terraform Plans
# Then deploys a GitHub OIDC role for accessing these resources

# NOTE: If you make any changes to this file, please make sure the integration tests still pass in the
# <https://github.com/cloudposse/github-action-terraform-plan-storage> repo.
import:
  - catalog/s3-bucket/defaults
  - catalog/dynamodb/defaults
  - catalog/github-oidc-role/gitops

components:
  terraform:
    gitops/s3-bucket:
      metadata:
        component: s3-bucket
        inherits:
          - s3-bucket/defaults
      vars:
        name: gitops
        allow_encrypted_uploads_only: false

    gitops/dynamodb:
      metadata:
        component: dynamodb
        inherits:
          - dynamodb/defaults
      vars:
        name: gitops-plan-storage
        # This key (case-sensitive) is required for the cloudposse/github-action-terraform-plan-storage action
        hash_key: id
        range_key: ""
        # Only these 2 attributes are required for creating the GSI,
        # but there will be several other attributes on the table itself
        dynamodb_attributes:
          - name: 'createdAt'
            type: 'S'
          - name: 'pr'
            type: 'N'
        # This GSI is used to Query the latest plan file for a given PR.
        global_secondary_index_map:
          - name: pr-createdAt-index
            hash_key: pr
            range_key: createdAt
            projection_type: ALL
            non_key_attributes: []
            read_capacity: null
            write_capacity: null
        # Auto delete old entries
        ttl_enabled: true
        ttl_attribute: ttl
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Calhoun where do we describe the entire shape of the dynamodb table?

joshmyers avatar
joshmyers

Doh, I’d seen the above components but missed vars passing hash_key and range_key - thank you!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha, ok, sorry - also, this question has sparked a ton of issues on the backend here. Some docs weren't updated, others are missing.

joshmyers avatar
joshmyers

No worries, thank you for checking

joshmyers avatar
joshmyers
    new TableV2(this, 'plan-storage-metadata', {
      tableName: `app-${WORKLOAD_NAME}-terraform-plan-storage-metadata`,
      billing: Billing.onDemand(),
      pointInTimeRecovery: true,
      timeToLiveAttribute: 'ttl',
      partitionKey: {
        name: 'id',
        type: AttributeType.STRING,
      },
      sortKey: {
        name: 'createdAt',
        type: AttributeType.STRING,
      },
      globalSecondaryIndexes: [
        {
          indexName: 'pr-createdAt-index',
          partitionKey: { name: 'pr', type: AttributeType.NUMBER },
          sortKey: { name: 'createdAt', type: AttributeType.STRING },
          projectionType: ProjectionType.ALL,
        },
      ],
    })
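For anyone provisioning the same table directly with Terraform rather than CDK, an equivalent aws_dynamodb_table sketch (table name is a placeholder) would look roughly like:

```hcl
resource "aws_dynamodb_table" "plan_storage_metadata" {
  name         = "terraform-plan-storage-metadata"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  range_key    = "createdAt"

  # Only the key and GSI attributes are declared; the action writes
  # additional non-key attributes to items at runtime.
  attribute {
    name = "id"
    type = "S"
  }
  attribute {
    name = "createdAt"
    type = "S"
  }
  attribute {
    name = "pr"
    type = "N"
  }

  # Used to query the latest plan file for a given PR
  global_secondary_index {
    name            = "pr-createdAt-index"
    hash_key        = "pr"
    range_key       = "createdAt"
    projection_type = "ALL"
  }

  # Auto-delete old entries
  ttl {
    enabled        = true
    attribute_name = "ttl"
  }

  point_in_time_recovery {
    enabled = true
  }
}
```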
joshmyers avatar
joshmyers

Think I've got it

joshmyers avatar
joshmyers

OK, some progress…

joshmyers avatar
joshmyers
      - name: Store Plan
        if: (github.event_name == 'pull_request') || (github.event.issue.pull_request && github.event.comment.body == '/terraform plan')
        uses: cloudposse/github-action-terraform-plan-storage@v1
        id: store-plan
        with:
          action: storePlan
          planPath: tfplan
          component: ${{ github.event.repository.name }}
          stack: ${{ steps.get_issue_number.outputs.result }}-tfplan
          commitSHA: ${{ github.event.pull_request.head.sha || github.sha }}
          tableName: app-playback-terraform-plan-storage-metadata
          bucketName: app-playback-terraform-state-prod-us-east-1
joshmyers avatar
joshmyers

Plan runs, can see object in S3

joshmyers avatar
joshmyers

but nothing in DDB…no writes, can see a single read. What am I not grokking about what this thing does? I kinda expected to see some metadata about the plan/pr in DDB? Action completed successfully.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes, you should see something like that. Our west-coast team should be coming on line shortly and can advise.

1
joshmyers avatar
joshmyers
##[debug]tableName: badgers
##[debug]bucketName: badgers
##[debug]metadataRepositoryType: dynamo
##[debug]planRepositoryType: s3
##[debug]bucketName: badgers
##[debug]Node Action run completed with exit code 0
##[debug]Finishing: Store Plan
joshmyers avatar
joshmyers

^^ debug output

Matt Calhoun avatar
Matt Calhoun

Hi Josh…just to be clear, do you have a PR open? It should write metadata to the DDB table whenever you push changes. And another sanity check, does your user/role in GHA have access to write to that table in DDB?

joshmyers avatar
joshmyers

Hey Matt - thanks so much. Yes PR is open (internal GHE), chaining roles and yes it has access to DDB and S3. I was getting explicit permission denied writing to DDB before adding it - so something was trying…

joshmyers avatar
joshmyers

Hmm, I wonder if the same PR meant it didn't try to re-write on a subsequent push? Let me push another change.

joshmyers avatar
joshmyers

Committed a new change, can see S3 plan file under the new sha…but still nothing in DDB (Item count/size etc 0) …

joshmyers avatar
joshmyers

Actually, maybe the previous DDB IAM issue was on scan… which would make sense, as I can see reads on the table but no writes.

joshmyers avatar
joshmyers

Yup, confirmed the previous failure was trying to scan (it has since succeeded).

joshmyers avatar
joshmyers

Bah - so sorry, problem between keyboard and computer. I’m looking in the wrong account - this is working as intended

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When you’re ready, we have example workflows we can share for drift detection/remediation as well.

1
joshmyers avatar
joshmyers

Working nicely - thank you!

Matt Gowie avatar
Matt Gowie

Does anyone know of or has built a custom VSCode automation for refactoring a resource or module argument out into a variable?

As in, I want to select an argument value like in the below screenshot and be able to right-click or hit a keyboard shortcut, and it'll create a new variable block in variables.tf with the argument's name and that value as the default value?
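For illustration, the transformation such an automation would perform might look like this (hypothetical resource and variable names):

```hcl
# Before (main.tf): a hard-coded argument value
#   resource "aws_instance" "app" {
#     instance_type = "t3.medium"
#   }

# After: variables.tf gains a new variable, named after the argument,
# with the literal as its default...
variable "instance_type" {
  type    = string
  default = "t3.medium"
}

# ...and main.tf references the variable instead of the literal
resource "aws_instance" "app" {
  instance_type = var.instance_type
}
```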

RB avatar

No but that sounds very cool.

RB avatar

Perhaps Copilot can be trained to do it?

Matt Gowie avatar
Matt Gowie

Would be super useful, right?

I think I tried to use copilot to do it, but it only wanted to do it in the same file.

I’m sure this would be a no-brainer for somebody who has built a proper VSCode app.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Or even just something on the command line

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For this specific ask, I am 99% sure that with the right prompt and a shell script, you could iterate over all the files with mods, because the prompt would be easy and the chances of getting it wrong are small.

https://github.com/charmbracelet/mods

charmbracelet/mods

AI on the command line

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

In your prompt, just make sure to convey that it should respond in HCL and only focus on parameterizing constants. You can give it the convention for naming variables, etc.

george.m.sedky avatar
george.m.sedky

I’m working on this extension for the monaco editor, and will port it to a vscode plugin among other things soon

Matt Gowie avatar
Matt Gowie

Will check out mods for this… Charmbracelet

Matt Gowie avatar
Matt Gowie

@george.m.sedky awesome to hear! Happy to be a beta tester when you’ve got a VSCode plugin.

1
george.m.sedky avatar
george.m.sedky

awesome matt! will text you as soon as it’s ready

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie - report back if it works out, or was a flop

1
miko avatar

Closed - Managed to make it work by switching to a different region that supports the kind of API Gateway I'm trying to create

Hey guys, I'm experiencing something weird: I have a very simple AWS API Gateway definition and for some reason it's not being created - it's stuck in the creation state:

tamish60 avatar
tamish60

What is the issue it's trying to create? Is there any error you're facing?

miko avatar

Hi @tamish60, I've managed to make it work. This region doesn't seem to support the kind of API Gateway I'm trying to create; I moved to a different region and it worked just fine.

2024-04-13

2024-04-15
