#terraform (2024-04)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2024-04-01
2024-04-02
stupid question but are you guys terraforming your github configs?
one example is to use the github provider
https://github.com/cloudposse/terraform-aws-components/blob/main/modules/argocd-repo/versions.tf#L9
then use the provider resources to provision
https://github.com/cloudposse/terraform-aws-components/blob/main/modules/argocd-repo/git-files.tf
resource "github_repository_file" "gitignore" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = ".gitignore"
  content = templatefile("${path.module}/templates/.gitignore.tpl", {
    entries = var.gitignore_entries
  })
  commit_message      = "Create .gitignore file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "readme" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = "README.md"
  content = templatefile("${path.module}/templates/README.md.tpl", {
    repository_name        = local.github_repository.name
    repository_description = local.github_repository.description
    github_organization    = var.github_organization
  })
  commit_message      = "Create README.md file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "codeowners_file" {
  count = local.enabled ? 1 : 0

  repository = local.github_repository.name
  branch     = local.github_repository.default_branch
  file       = ".github/CODEOWNERS"
  content = templatefile("${path.module}/templates/CODEOWNERS.tpl", {
    codeowners = var.github_codeowner_teams
  })
  commit_message      = "Create CODEOWNERS file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}

resource "github_repository_file" "pull_request_template" {
  count = local.enabled ? 1 : 0

  repository          = local.github_repository.name
  branch              = local.github_repository.default_branch
  file                = ".github/PULL_REQUEST_TEMPLATE.md"
  content             = file("${path.module}/templates/PULL_REQUEST_TEMPLATE.md")
  commit_message      = "Create PULL_REQUEST_TEMPLATE.md file."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}
Yeah, I have a requirement to redeploy GitHub inside our boundaries, so I have a chance to do it clean. I was about to start clicking, but then I found some of what you’re saying above.
the auth token for the github provider is read from SSM or ASM in case of AWS, the rest should be straightforward (variables configured with Terraform and/or Atmos), let us know if you need any help
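For reference, a minimal sketch of what that wiring can look like. The SSM parameter path and provider version here are illustrative assumptions, not the actual component’s values:

```hcl
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = ">= 6.0" # assumption; pin to whatever the component requires
    }
  }
}

# Hypothetical parameter path; the real component reads its own SSM/ASM key.
data "aws_ssm_parameter" "github_token" {
  name            = "/github/api_token"
  with_decryption = true
}

provider "github" {
  owner = var.github_organization
  token = data.aws_ssm_parameter.github_token.value
}
```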
We recently rolled this out for cloud posse: https://github.com/repository-settings/app
Pull Requests for GitHub repository settings
# These settings are synced to GitHub by <https://probot.github.io/apps/settings/>
repository:
  # See <https://docs.github.com/en/rest/reference/repos#update-a-repository> for all available settings.

  # Note: You cannot unarchive repositories through the API. `true` to archive this repository.
  archived: false

  # Either `true` to enable issues for this repository, `false` to disable them.
  has_issues: true

  # Either `true` to enable projects for this repository, or `false` to disable them.
  # If projects are disabled for the organization, passing `true` will cause an API error.
  has_projects: true

  # Either `true` to enable the wiki for this repository, `false` to disable it.
  has_wiki: false

  # Either `true` to enable downloads for this repository, `false` to disable them.
  has_downloads: true

  # Updates the default branch for this repository.
  #default_branch: main

  # Either `true` to allow squash-merging pull requests, or `false` to prevent
  # squash-merging.
  allow_squash_merge: true

  # Either `true` to allow merging pull requests with a merge commit, or `false`
  # to prevent merging pull requests with merge commits.
  allow_merge_commit: false

  # Either `true` to allow rebase-merging pull requests, or `false` to prevent
  # rebase-merging.
  allow_rebase_merge: false

  # Either `true` to enable automatic deletion of branches on merge, or `false` to disable
  delete_branch_on_merge: true

  # Either `true` to enable automated security fixes, or `false` to disable
  # automated security fixes.
  enable_automated_security_fixes: true

  # Either `true` to enable vulnerability alerts, or `false` to disable
  # vulnerability alerts.
  enable_vulnerability_alerts: true

  # Either `true` to make this repo available as a template repository or `false` to prevent it.
  #is_template: false

environments:
  - name: release
    deployment_branch_policy:
      custom_branches:
        - main
        - release/**
  - name: security
    deployment_branch_policy:
      custom_branches:
        - main
        - release/**
# Labels: define labels for Issues and Pull Requests
labels:
  - name: bug
    color: '#d73a4a'
    description: 🐛 An issue with the system
  - name: feature
    color: '#336699'
    description: New functionality
  - name: bugfix
    color: '#fbca04'
    description: Change that restores intended behavior
  - name: auto-update
    color: '#ededed'
    description: This PR was automatically generated
  - name: do not merge
    color: '#B60205'
    description: Do not merge this PR, doing so would cause problems
  - name: documentation
    color: '#0075ca'
    description: Improvements or additions to documentation
  - name: readme
    color: '#0075ca'
    description: Improvements or additions to the README
  - name: duplicate
    color: '#cfd3d7'
    description: This issue or pull request already exists
  - name: enhancement
    color: '#a2eeef'
    description: New feature or request
  - name: good first issue
    color: '#7057ff'
    description: 'Good for newcomers'
  - name: help wanted
    color: '#008672'
    description: 'Extra attention is needed'
  - name: invalid
    color: '#e4e669'
    description: "This doesn't seem right"
  - name: major
    color: '#00FF00'
    description: 'Breaking changes (or first stable release)'
  - name: minor
    color: '#00cc33'
    description: New features that do not break anything
  - name: no-release
    color: '#0075ca'
    description: 'Do not create a new release (wait for additional code changes)'
  - name: patch
    color: '#0E8A16'
    description: A minor, backward compatible change
  - name: question
    color: '#d876e3'
  - name: wip
    color: '#B60205'
    description: 'Work in Progress: Not ready for final review or merge'
  - name: wontfix
    color: '#B60205'
    description: 'This will not be worked on'
  - name: needs-cloudposse
    color: '#B60205'
    description: 'Needs Cloud Posse assistance'
  - name: needs-test
    color: '#B60205'
    description: 'Needs testing'
  - name: triage
    color: '#fcb32c'
    description: 'Needs triage'
  - name: conflict
    color: '#B60205'
    description: 'This PR has conflicts'
  - name: no-changes
    color: '#cccccc'
    description: 'No changes were made in this PR'
  - name: stale
    color: '#e69138'
    description: 'This PR has gone stale'
  - name: migration
    color: '#2f81f7'
    description: 'This PR involves a migration'
  - name: terraform/0.13
    color: '#ffd9c4'
    description: 'Module requires Terraform 0.13 or later'

# Note: `permission` is only valid on organization-owned repositories.
# The permission to grant the collaborator. Can be one of:
#   * `pull` - can pull, but not push to or administer this repository.
#   * `push` - can pull and push, but not administer this repository.
#   * `admin` - can pull, push and administer this repository.
#   * `maintain` - Recommended for project managers who need to manage the repository without access to sensitive or destructive actions.
#   * `triage` - Recommended for contributors who need to proactively manage issues and pull requests without write access.
#
# See <https://docs.github.com/en/rest/reference/teams#add-or-update-team-repository-permissions> for available options
teams:
  - name: approvers
    permission: push
  - name: admins
    permission: admin
  - name: bots
    permission: admin
  - name: engineering
    permission: write
  - name: contributors
    permission: write
  - name: security
    permission: pull
That is very slick
This, combined with GitHub organizational repository rulesets, works well for us and eliminates the need to manage branch protections at the repo level
From a compliance perspective, I have to enforce any baseline I have
So like this might work well to enforce
If you decide to go the route with Terraform, there are some gotchas. I know there are bugs/limitations with the HashiCorp-managed provider for GitHub that others have worked around, but those forks are seemingly abandoned. We would recommend a single Terraform component that manages a single repo, then using Atmos to define the configuration for each repo with inheritance. This would mitigate one of the most common problems companies experience when using Terraform to manage GitHub repos: GitHub API rate limits. Defining a factory in Terraform for repos is guaranteed to be hamstrung by these rate limits. Defining the factory in Atmos instead and combining it with our GHA will ensure only affected repos are planned/applied when changes are made.
But I rather like the approach we took instead.
I’m using Terraform to manage all our internal Github Enterprise repos and teams etc. Happy to answer any questions.
@joshmyers what issues have you run into along the way and are you using the official hashicorp provider?
Mostly slowness in the provider, against GitHub Enterprise, on several gigs over the last few years. Sometimes this comes from API limits, but it’s mostly a slow provider when you start managing 50-100+ repos and plenty of settings. We managed to hunt down a few of the slow resources and switch them out for others, e.g. GraphQL for the github_branch_protection_v3 resource. Things got a bit better in more recent provider versions. You can also split the state at some logical business point so each doesn’t get too big. I like the drift detection and alignment here. Plenty of forgotten manual tweaks/bodges got cleaned up.
Terraform Version
0.12.6
Affected Resource(s)
• github_repository
• github_branch_protection
• github_team_repository
• github_actions_secret
Terraform Configuration Files
Here’s our repo module (slightly redacted ****):
terraform {
  required_providers {
    github = ">= 3.1.0"
  }
}

locals {
  # Terraform modules must be named `terraform-<provider>-<module name>`
  # so we can extract the provider easily
  provider = element(split("-", var.repository), 1)
}

data "github_team" "****" {
  slug = "****"
}

data "github_team" "****" {
  slug = "****"
}

resource "github_repository" "main" {
  name        = var.repository
  description = var.description
  visibility  = var.visibility
  topics = [
    "terraform",
    "terraform-module",
    "terraform-${local.provider}"
  ]
  has_issues             = var.has_issues
  has_projects           = var.has_projects
  has_wiki               = var.has_wiki
  vulnerability_alerts   = true
  delete_branch_on_merge = true
  archived               = var.archived

  dynamic "template" {
    for_each = var.fork ? [] : [var.fork]
    content {
      owner      = "waveaccounting"
      repository = "****"
    }
  }
}

resource "github_branch_protection" "main" {
  repository_id = github_repository.main.node_id
  pattern       = github_repository.main.default_branch

  required_status_checks {
    strict = true
    contexts = [
      "Terraform",
      "docs",
    ]
  }

  required_pull_request_reviews {
    dismiss_stale_reviews      = true
    require_code_owner_reviews = true
  }
}

resource "github_team_repository" "****" {
  team_id    = data.github_team.****.id
  repository = github_repository.main.name
  permission = "admin"
}

resource "github_team_repository" "****" {
  team_id    = data.github_team.****.id
  repository = github_repository.main.name
  permission = "admin"
}

resource "github_actions_secret" "secrets" {
  for_each        = var.secrets
  repository      = github_repository.main.name
  secret_name     = each.key
  plaintext_value = each.value
}
Actual Behavior
We are managing approximately 90 repositories using this module via Terraform Cloud remote operations (which means we can’t disable refresh or change parallelization afaik). I timed a refresh + plan: 9m22s (562s) == 6.2s per repository
Are there any optimizations we can make on our side or in the github provider / API to try to improve this? We’re discussing breaking up our repos into smaller workspaces, but that feels like a bit of a hack.
Steps to Reproduce
terraform plan on large numbers of repositories / branch protection configs
Important Factoids
• Running on Terraform Cloud Remote Operation
References
• Similar issue to https://github.com/terraform-providers/terraform-provider-github/issues/565, although things weren’t particularly fast before the update either
The whole ownership move to integrations happened quite a while ago, not too many limitations I’ve hit other than the obvious resources and lack of certain things (most that you’d want is available). It’s the main provider. No competing fork silliness.
WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
I think https://github.com/repository-settings/app looks cool in terms of getting some consistency across multiple repos, but I wouldn’t call the model enforcing. Certainly lighter touch than TF. Wanted to manage teams too so we’re already here…
Pull Requests for GitHub repository settings
Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml> files in repos, but have the probot do the heavy lifting.
I think this statement is easily misunderstood.
The same is :100: true of any implementation that manages repository settings via GitOps.
It’s entirely mitigated with CODEOWNERS and branch protections (e.g. via organizational repository rulesets).
WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
Thanks for clarifying. Yes, that’s true.
I have to play with the provider a bit
https://sweetops.slack.com/archives/CB6GHNLG0/p1712348663742749?thread_ts=1712075779.976419&cid=CB6GHNLG0 ish, the repo that manages the other repos can be managed/run by a separate team/different policies etc vs directly applying in your own repo. Aye CODEOWNERS/protections help.
I think this statement is easily misunderstood.
The same is :100: true of any implementation that manages repository settings via GitOps.
It’s entirely mitigated with CODEOWNERS and branch protections (e.g. via organizational repository rulesets).
fwiw, that’s the approach taken by https://github.com/github/safe-settings
It’s the centralized/decentralized argument.
Note, the same exact teams required to approve changes to the centralized repo can/should be the same teams required to approve the PRs in decentralized repos. About the only difference I can see is visibility. In the centralized approach, the configuration can be private, which is beneficial. But the controls guarding changes to the configuration itself are the same in both cases: CODEOWNERS & branch protections with approvals.
Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml> files in repos, but have the probot do the heavy lifting.
This would be pretty awesome.
I wasn’t across safe-settings or repository-settings.
As of this week we have about 1k resources being managed in github via terraform and our plans take close to 10mins.
Eeeekkkk that’s a long time. Btw, did you publish your modules for this?
It’s private, and it’s pretty much code that is unmaintainable. I just spent the past hour looking at repository-settings, and for a moment I thought I was going to switch, but it looks like there is a known issue with branch protection rules. I wasn’t able to get it to work in my repo: https://github.com/repository-settings/app/issues/857
Problem Description
Unable to define branch protection rules through the branches
configuration.
I’ve tried numerous iterations, including a direct copy/paste from the docs. No branch protection is ever created.
What is actually happening
nothing at all.
What is the expected behaviour
branch protection rules should be created / maintained by the app
Error output, if available
n/a
Context: Are you using the hosted instance of repository-settings/app or running your own?
hosted
Other than branch protection (a big feature that we need to have), it seems like a pretty good tool. Given this recent experience, I’m wondering, even if it worked, how would I know when it “stops” working in the future? E.g. they fix it, and in 3 months it breaks again and my new repos don’t have branch protection.
@venkata.mutyala you don’t even need those, since you’re on GHE. We don’t use those. We use Organizational Repository Rulesets, which IMO are better.
Why manage branch protections on hundreds of repos when you can do it in one place.
To be clear, Organizational Repository Rulesets implement branch protections
But they are more powerful. You can match on repository properties, wildcards, etc. You can easily exclude bots and apps to bypass protections. You can enable them in a dry-run mode without enforcement, and turn on enforcement once they look good.
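As a rough sketch of what an org-wide ruleset can look like when itself managed with the GitHub provider (all names, IDs, and rule choices below are illustrative assumptions, not what was discussed in the thread):

```hcl
# Sketch: an organization ruleset protecting default branches, started in
# "evaluate" (dry-run) mode so it can be reviewed before enforcement.
resource "github_organization_ruleset" "default_branch" {
  name        = "default-branch-protection" # hypothetical name
  target      = "branch"
  enforcement = "evaluate" # flip to "active" once the dry run looks good

  conditions {
    ref_name {
      include = ["~DEFAULT_BRANCH"]
      exclude = []
    }
  }

  rules {
    deletion         = true
    non_fast_forward = true

    pull_request {
      required_approving_review_count = 1
      require_code_owner_review       = true
      dismiss_stale_reviews_on_push   = true
    }
  }

  # Example of letting a bot/app bypass the protections.
  bypass_actors {
    actor_id    = 123456 # hypothetical GitHub App installation ID
    actor_type  = "Integration"
    bypass_mode = "always"
  }
}
```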
Dangggggg. How did I forget about this.
Thanks Erik! This is going to clear out a lot of TF that i didn’t need to write. I recall seeing this before but totally forgot about it.
Note, we also have an open PR for repository-settings. It will probably be a while before it merges. The repo is maintained, but sparingly.
What
• Added type support for environment deployment_branch_policy custom branches.
Why
• Environment deployment_branch_policy supports custom branches of type branch or tag. The type is an optional parameter that defaults to branch. These changes allow us to specify deployment_branch_policy for tag.
Config example
environments:
  - name: development
    deployment_branch_policy:
      custom_branches:
        - dev/*
        - name: release/*
          type: branch
        - name: v*
          type: tag
You can specify a custom_branches list item as a string for back-compatibility, or as an object:
name: string
type: branch | tag
Related links
I recall seeing this before but totally forgot about it.
And I just talked about it on #office-hours!! I must have really sucked presenting how we use it. I even did a screenshare
That said, safe-settings (a hard fork of repository-settings, I believe, and also built on Probot) is very well maintained, and by GitHub.
How are you managing membership into each team?
That’s still manual for us.
Did you automate that too?
Yeah, we have some automation, but it’s not very good. I was hoping to copy a Cloud Posse module, hence my tears :sob:.
@joshmyers did you go super granular with your permissions management and just use github_repository_collaborators, or do you use teams/memberships?
(We would accept contributions for cloudposse style modules for managing github repos and teams)
We do this and use https://github.com/mineiros-io/terraform-github-repository.
We have a very light root module around it here: https://github.com/masterpointio/terraform-components/tree/main/components/github-repositories
Nice, looks very similar to ours. We also manage a few things like the CODEOWNERS/PR template files there too
@Matt Gowie are you using the forked (abandoned?) provider or the official hashicorp provider
(because mineiros also forked the provider)
https://github.com/mineiros-io/terraform-github-repository/blob/main/versions.tf#L11-L12 < the official provider.
Cool! Then I’m more bullish on it
2024-04-04
v1.8.0-rc2 (April 4, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:
Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().
Upgrading to Terraform v1.8
I’m trying to set up a pipe in EventBridge, and it mostly works. The problem is the logging config isn’t applied. Terraform isn’t giving any errors, but when you look at the web console you see “configure logging” instead of the logging config below. Anyone have an idea of why that could be? The role has the right logging permissions.
resource "aws_cloudwatch_log_group" "pipes_log_group_raw_data_stream_transformer" {
  name              = "/aws/pipes/RawDataStreamTransformer"
  retention_in_days = 365
  tags              = local.tags
}

resource "awscc_pipes_pipe" "raw_data_stream_transformer" {
  name       = "raw-data-stream-transformer"
  role_arn   = aws_iam_role.pipes_raw_data_transformer_role.arn
  source     = aws_kinesis_stream.raw_data_stream.arn
  target     = aws_kinesis_stream.transformed_data_stream.arn
  enrichment = aws_lambda_function.data_transformer.arn

  source_parameters = {
    kinesis_stream_parameters = {
      starting_position      = "LATEST"
      maximum_retry_attempts = 3
      dead_letter_config = {
        arn = aws_sqs_queue.raw_data_stream_deadletter.arn
      }
    }
  }

  target_parameters = {
    kinesis_stream_parameters = {
      partition_key = "$.partition_key"
    }
  }

  log_configuration = {
    enabled        = true
    log_level      = "ERROR"
    log_group_name = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.name
  }
}
I think CW logs for EventBridge need to have the aws/events/ prefix. Fairly sure I had the same problem and noted that when I created it through the console, it created a log group with that prefix
Oh, interesting.. Let me try that
Unfortunately that wasn’t the issue..
log_configuration = {
  cloudwatch_logs_log_destination = {
    log_group_arn = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.arn
  }
  level                  = "ERROR"
  include_execution_data = ["ALL"]
}
It was mostly a syntax error. This ended up being the solution
v1.9.0-alpha20240404 (April 4, 2024) ENHANCEMENTS:
terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc. that are not closed, Terraform will await another line of input to complete the expression. This initial implementation is primarily…
The console command, when running in interactive mode, will now detect if the input seems to be an incomplete (but valid enough so far) expression, and if so will produce another prompt to accept a…
from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.
Wow!
Cc @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)
2024-04-05
Anyone have any info on Terraform Stacks? Anyone used it in the private beta? Know when/what functionality may be coming to OSS?
Terraform stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.
2024-04-06
2024-04-07
2024-04-08
2024-04-09
Hello everyone. Can we spin up two DMS instances from this module? I am facing an issue regarding the name; I have copied the same folder structure from this module. Please help me and let me know if this is possible to do or not. Thanks. Here is the module:
https://github.com/cloudposse/terraform-aws-dms/blob/main/examples/complete/main.tf
locals {
  enabled           = module.this.enabled
  vpc_id            = module.vpc.vpc_id
  vpc_cidr_block    = module.vpc.vpc_cidr_block
  subnet_ids        = module.subnets.private_subnet_ids
  route_table_ids   = module.subnets.private_route_table_ids
  security_group_id = module.security_group.id

  create_dms_iam_roles = local.enabled && var.create_dms_iam_roles
}

# Database Migration Service requires
# the below IAM Roles to be created before
# replication instances can be created.
# The roles should be provisioned only once per account.
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html>
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.APIRole>
# <https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dms_replication_instance>
#   * dms-vpc-role
#   * dms-cloudwatch-logs-role
#   * dms-access-for-endpoint
module "dms_iam" {
  source = "../../modules/dms-iam"

  enabled = local.create_dms_iam_roles

  context = module.this.context
}

module "dms_replication_instance" {
  source = "../../modules/dms-replication-instance"

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html>
  engine_version               = "3.4"
  replication_instance_class   = "dms.t2.small"
  allocated_storage            = 50
  apply_immediately            = true
  auto_minor_version_upgrade   = true
  allow_major_version_upgrade  = false
  multi_az                     = false
  publicly_accessible          = false
  preferred_maintenance_window = "sun:10:30-sun:14:30"
  vpc_security_group_ids       = [local.security_group_id, module.aurora_postgres_cluster.security_group_id]
  subnet_ids                   = local.subnet_ids

  context = module.this.context

  depends_on = [
    # The required DMS roles must be present before replication instances can be provisioned
    module.dms_iam,
    aws_vpc_endpoint.s3
  ]
}

module "dms_endpoint_aurora_postgres" {
  source = "../../modules/dms-endpoint"

  endpoint_type                   = "source"
  engine_name                     = "aurora-postgresql"
  server_name                     = module.aurora_postgres_cluster.endpoint
  database_name                   = var.database_name
  port                            = var.database_port
  username                        = var.admin_user
  password                        = var.admin_password
  extra_connection_attributes     = ""
  secrets_manager_access_role_arn = null
  secrets_manager_arn             = null
  ssl_mode                        = "none"

  attributes = ["source"]
  context    = module.this.context

  depends_on = [
    module.aurora_postgres_cluster
  ]
}

module "dms_endpoint_s3_bucket" {
  source = "../../modules/dms-endpoint"

  endpoint_type = "target"
  engine_name   = "s3"

  s3_settings = {
    bucket_name                      = module.s3_bucket.bucket_id
    bucket_folder                    = null
    cdc_inserts_only                 = false
    csv_row_delimiter                = " "
    csv_delimiter                    = ","
    data_format                      = "parquet"
    compression_type                 = "GZIP"
    date_partition_delimiter         = "NONE"
    date_partition_enabled           = true
    date_partition_sequence          = "YYYYMMDD"
    include_op_for_full_load         = true
    parquet_timestamp_in_millisecond = true
    timestamp_column_name            = "timestamp"
    service_access_role_arn          = join("", aws_iam_role.s3[*].arn)
  }

  extra_connection_attributes = ""

  attributes = ["target"]
  context    = module.this.context

  depends_on = [
    aws_iam_role.s3,
    module.s3_bucket
  ]
}

resource "time_sleep" "wait_for_dms_endpoints" {
  count = local.enabled ? 1 : 0

  depends_on = [
    module.dms_endpoint_aurora_postgres,
    module.dms_endpoint_s3_bucket
  ]

  create_duration  = "2m"
  destroy_duration = "30s"
}

# `dms_replication_task` will be created (at least) 2 minutes after `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket`
# `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket` will be destroyed (at least) 30 seconds after `dms_replication_task`
module "dms_replication_task" {
  source = "../../modules/dms-replication-task"

  replication_instance_arn = module.dms_replication_instance.replication_instance_arn
  start_replication_task   = true
  migration_type           = "full-load-and-cdc"
  source_endpoint_arn      = module.dms_endpoint_aurora_postgres.endpoint_arn
  target_endpoint_arn      = module.dms_endpoint_s3_bucket.endpoint_arn

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html>
  replication_task_settings = file("${path.module}/config/replication-task-settings.json")

  # <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html>
  table_mappings = file("${path.module}/config/replication-task-table-mappings.json")

  context = module.this.context

  depends_on = [
    module.dms_endpoint_aurora_postgres,
    module.dms_endpoint_s3_bucket,
    time_sleep.wait_for_dms_endpoints
  ]
}

module "dms_replication_instance_event_subscription" {
  source = "../../modules/dms-event-subscription"

  event_subscription_enabled = true
  source_type                = "replication-instance"
  source_ids                 = [module.dms_replication_instance.replication_instance_id]
  sns_topic_arn              = module.sns_topic.sns_topic_arn

  # <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
  event_categories = [
    "low storage",
    "configuration change",
    "maintenance",
    "deletion",
    "creation",
    "failover",
    "failure"
  ]

  attributes = ["instance"]
  context    = module.this.context
}

module "dms_replication_task_event_subscription" {
  source = "../../modules/dms-event-subscription"

  event_subscription_enabled = true
  source_type                = "replication-task"
  source_ids                 = [module.dms_replication_task.replication_task_id]
  sns_topic_arn              = module.sns_topic.sns_topic_arn

  # <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
  event_categories = [
    "configuration change",
    "state change",
    "deletion",
    "creation",
    "failure"
  ]

  attributes = ["task"]
  context    = module.this.context
}
hey guys, I am trying to set up VPC peering with the module and I am receiving the error: The "count" value depends on resource attributes that cannot be determined until apply. My module definition is simple, but it relies on the vpc module, which should be created in the same Terraform run.
Question - can the VPC peering module be applied alongside the module that creates the VPC for which peering should be configured?
@Jeremy G (Cloud Posse) where’s your recent post on count-of issues?
Also, you’ll dig this. https://sweetops.slack.com/archives/CB6GHNLG0/p1712272028318039
from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.
It might be less of a problem in the future.
@PiotrP I wrote this article to help people understand the issue you are facing. I hope it helps you, too.
Details about computed values can cause terraform plan to fail
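A minimal sketch of the usual workaround: drive count/enabled from values known at plan time, not from attributes of resources created in the same run. The module source and variable names below are illustrative assumptions, not the asker’s actual config:

```hcl
# Sketch: `enabled` comes from an input variable (known at plan time),
# not from an attribute of the not-yet-created VPC.
variable "peering_enabled" {
  type    = bool
  default = true
}

module "vpc_peering" {
  source  = "cloudposse/vpc-peering/aws" # assumed module source
  version = "~> 1.0"

  enabled = var.peering_enabled # plan-time known; safe to gate `count` on

  # Using the VPC ID as an *argument value* is fine; the error only occurs
  # when a computed value feeds `count`/`for_each`.
  requestor_vpc_id = module.vpc.vpc_id
  acceptor_vpc_id  = var.acceptor_vpc_id
}
```

When restructuring isn’t possible, a targeted first apply (e.g. terraform apply -target=module.vpc, then a full apply) is the other common escape hatch.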
Hello everyone. We are using your great Datadog wrapper for Terraform, but when I try to use the latest version I don’t see the new properties added in 1.4.1, like on_missing_data. Was that released to the Terraform registry? I see it in the release zip and in the main branch, but when I init the module I don’t see it in the code
Ah, I think I get it. These were all migrated to options: … but for backwards compatibility the old values still work
Right, we moved the old top-level settings, and only the old settings, to their API location under options, where they belong. (Side note: priority actually belongs at the top level, not under options, and designating it as legacy was a mistake.) You can see the on_missing_data setting being set here.
on_missing_data = try(each.value.options.on_missing_data, null)
2024-04-10
v1.8.0 (April 10, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:
Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().
Upgrading to Terraform v1.8
Hello All - I want to deploy a few resources in all the accounts in an AWS organization. Is there a way to do it in Terraform? I know I can use different providers to do it in multiple accounts, but what if I want to create them in all accounts in the organization?
You can use multiple providers, or Terraform workspaces. There may be another approach too via some 3rd party tooling.
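The multiple-providers route can be sketched like this, assuming a common cross-account role such as OrganizationAccountAccessRole (account IDs, region, and module names are placeholders):

```hcl
# Sketch: one aliased AWS provider per member account. Provider blocks
# cannot use for_each, so each account needs its own alias (or the file
# is generated by tooling).
provider "aws" {
  alias  = "dev"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}

provider "aws" {
  alias  = "prod"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/OrganizationAccountAccessRole"
  }
}

# The same baseline module is instantiated once per account.
module "baseline_dev" {
  source    = "./modules/baseline" # hypothetical module
  providers = { aws = aws.dev }
}

module "baseline_prod" {
  source    = "./modules/baseline"
  providers = { aws = aws.prod }
}
```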
honestly, type that exact same question into ChatGPT.. it will give you some options
Haha.. did that and it wasn’t helpful. Thought of checking internally, as some of you might have wanted the same.
lol right on, yeah the feedback it gave me seemed reasonable, but there has to be a better way to manage a scenario like that for sure; tons of people have to be doing that exact same thing
I’ve also seen this popping up in my feed on LinkedIn and in Cloud Posse repos.. this may be worth looking into (I plan on looking further into it as well): https://github.com/cloudposse/atmos
Terraform Orchestration Tool for DevOps. Keep environment configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.
yeah, seems like a good solution… see “Use Cases” on that URL
2024-04-11
I am trying to use the terraform-aws-eks-node-group module to create EKS nodes; once they are created, I install cluster-autoscaler. At this moment the autoscaler is missing the proper IAM permissions. I see that at some point a proper policy had been added, but I do not see it in the current module code.
Question: what is the proper approach to enable/create the IAM permissions required by the autoscaler?
I solved it by adding a custom aws_iam_policy, with the policy defined via the aws_iam_policy_document data source, and passing it to the module via the node_role_policy_arns variable. Not sure if this is the proper approach, though.
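That approach can be sketched roughly like this (the policy name is hypothetical, other module inputs are elided, and the action list follows the upstream cluster-autoscaler docs, so trim it to taste):

```hcl
# Minimal cluster-autoscaler permissions, attached via node_role_policy_arns.
data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    sid = "ClusterAutoscaler"
    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "ec2:DescribeLaunchTemplateVersions",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "cluster_autoscaler" {
  name   = "cluster-autoscaler" # hypothetical name
  policy = data.aws_iam_policy_document.cluster_autoscaler.json
}

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # ...other inputs elided...
  node_role_policy_arns = [aws_iam_policy.cluster_autoscaler.arn]
}
```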
I have it declared like so:
node_role_arn = [module.eks_node_group_main.eks_node_group_role_arn]
that is in my node group module code, and my eks cluster module is also in the same .tf file as well.
oh snaps, you’re talking about the autoscaler, not only the node group.. never mind
2024-04-12
What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)
I ended up here - so is the plan DDB table the same as the internal state/locks table TF uses, in your case?
module "s3_bucket" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.5.0"
component = var.s3_bucket_component_name
environment = try(var.s3_bucket_environment_name, module.this.environment)
context = module.this.context
}
module "dynamodb" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.5.0"
component = var.dynamodb_component_name
environment = try(var.dynamodb_environment_name, module.this.environment)
context = module.this.context
}
tl;dr Need to stop storing the plan as an artifact. This action looks nice. Need to provision the DDB table (if it is indeed different from the locks table).
Aha, @Igor Rodionov @Dan Miller (Cloud Posse), can you help get this unblocked on the actions and their dependencies?
Need to stop storing the plan as an artifact.
Why do you need to stop storing the planfile as an artifact? That ensures what you approve in the Pull Request is identical to what you apply upon merge to the default branch
As a workflow artifact attached to the PR, because putting in S3/DDB would allow finer grained access control.
Would rather store/fetch from S3.
As a workflow artifact attached to the PR Aha! Yes, that should be doable, but not something we’ve yet implemented. We would like that though… What we tried to do was something even more extreme, which was using artifact storage for everything, which proved challenging because of how limited the artifact API is. But only storing the planfile as an artifact, and not trying to use artifact storage to also replace DynamoDB, that should be much easier.
Would rather store/fetch from S3. Oh, wait, I think we’re saying the same thing. We use S3 in our customer implementations, so that is already supported.
I thought you wanted to use artifact storage
So there must be a feature flag somewhere not set.
Already use artifact storage but would rather S3 - the above action looks like it’ll do that, right? Just not sure what schema the metadata DDB table is expecting…?
Nono, we use both dynamodb and S3.
We use dynamo to store the metadata, so we can find planfiles and invalidate planfiles across PRs.
We use S3 because I think DynamoDB has limits on blob storage. Also, it serves as a permanent record, if so desired.
We didn’t want to use S3 as a database and have to scan the bucket to find planfiles.
Note that 2 PRs could merge and affect the same components therefore we need some way to invalidate one or the other. Technically, a merge queue could alleviate some of the problems, but we don’t yet support that.
Yeah makes sense. Where do you actually create that DDB table / what’s the schema? e.g. the Terraform DDB state lock table has a partitionKey of LockID
of type String
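For comparison, the standard state-lock table mentioned above can be sketched in plain Terraform (the table name is hypothetical):

```hcl
# Minimal Terraform state-lock table: a single string partition key named LockID.
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks" # hypothetical name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```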
Yes, so here’s how we do it. Since we do everything with reusable root modules (components) in atmos, here’s what our configuration looks like.
import:
- catalog/s3-bucket/defaults
- catalog/dynamodb/defaults
components:
terraform:
# S3 Bucket for storing Terraform Plans
gitops/s3-bucket:
metadata:
component: s3-bucket
inherits:
- s3-bucket/defaults
vars:
name: gitops-plan-storage
allow_encrypted_uploads_only: false
# DynamoDB table used to store metadata for Terraform Plans
gitops/dynamodb:
metadata:
component: dynamodb
inherits:
- dynamodb/defaults
vars:
name: gitops-plan-storage
# These keys (case-sensitive) are required for the cloudposse/github-action-terraform-plan-storage action
hash_key: id
range_key: createdAt
gitops:
vars:
enabled: true
github_actions_iam_role_enabled: true
github_actions_iam_role_attributes: ["gitops"]
github_actions_allowed_repos:
- "acmeOrg/infra"
s3_bucket_component_name: gitops/s3-bucket
dynamodb_component_name: gitops/dynamodb
The gitops component is what grants GitHub OIDC permissions to access the bucket and dynamodb table
Oh, let me get those defaults
# Deploys S3 Bucket and DynamoDB table for managing Terraform Plans
# Then deploys a GitHub OIDC role for accessing these resources
# NOTE: If you make any changes to this file, please make sure the integration tests still pass in the
# <https://github.com/cloudposse/github-action-terraform-plan-storage> repo.
import:
- catalog/s3-bucket/defaults
- catalog/dynamodb/defaults
- catalog/github-oidc-role/gitops
components:
terraform:
gitops/s3-bucket:
metadata:
component: s3-bucket
inherits:
- s3-bucket/defaults
vars:
name: gitops
allow_encrypted_uploads_only: false
gitops/dynamodb:
metadata:
component: dynamodb
inherits:
- dynamodb/defaults
vars:
name: gitops-plan-storage
# This key (case-sensitive) is required for the cloudposse/github-action-terraform-plan-storage action
hash_key: id
range_key: ""
# Only these 2 attributes are required for creating the GSI,
# but there will be several other attributes on the table itself
dynamodb_attributes:
- name: 'createdAt'
type: 'S'
- name: 'pr'
type: 'N'
# This GSI is used to Query the latest plan file for a given PR.
global_secondary_index_map:
- name: pr-createdAt-index
hash_key: pr
range_key: createdAt
projection_type: ALL
non_key_attributes: []
read_capacity: null
write_capacity: null
# Auto delete old entries
ttl_enabled: true
ttl_attribute: ttl
@Matt Calhoun where do we describe the entire shape of the dynamodb table?
Doh, I’d seen the above components but missed vars passing hash_key
and range_key
- thank you!!
aha, ok, sorry - also, this question has sparked a ton of issues on the backend here. Some docs weren’t updated, others missing.
No worries, thank you for checking
new TableV2(this, 'plan-storage-metadata', {
tableName: `app-${WORKLOAD_NAME}-terraform-plan-storage-metadata`,
billing: Billing.onDemand(),
pointInTimeRecovery: true,
timeToLiveAttribute: 'ttl',
partitionKey: {
name: 'id',
type: AttributeType.STRING,
},
sortKey: {
name: 'createdAt',
type: AttributeType.STRING,
},
globalSecondaryIndexes: [
{
indexName: 'pr-createdAt-index',
partitionKey: { name: 'pr', type: AttributeType.NUMBER },
sortKey: { name: 'createdAt', type: AttributeType.STRING },
projectionType: ProjectionType.ALL,
},
],
})
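For anyone not using CDK, the same table shape can be sketched as a plain Terraform resource (the table name is hypothetical; the schema mirrors the CDK block above):

```hcl
# Plan-storage metadata table: id/createdAt keys plus a GSI for querying
# the latest plan file per PR, with TTL-based cleanup of old entries.
resource "aws_dynamodb_table" "plan_storage_metadata" {
  name         = "terraform-plan-storage-metadata" # hypothetical name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  range_key    = "createdAt"

  attribute {
    name = "id"
    type = "S"
  }
  attribute {
    name = "createdAt"
    type = "S"
  }
  attribute {
    name = "pr"
    type = "N"
  }

  global_secondary_index {
    name            = "pr-createdAt-index"
    hash_key        = "pr"
    range_key       = "createdAt"
    projection_type = "ALL"
  }

  ttl {
    attribute_name = "ttl"
    enabled        = true
  }

  point_in_time_recovery {
    enabled = true
  }
}
```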
Think I got it
OK, some progress…
- name: Store Plan
if: (github.event_name == 'pull_request') || (github.event.issue.pull_request && github.event.comment.body == '/terraform plan')
uses: cloudposse/github-action-terraform-plan-storage@v1
id: store-plan
with:
action: storePlan
planPath: tfplan
component: ${{ github.event.repository.name }}
stack: ${{ steps.get_issue_number.outputs.result }}-tfplan
commitSHA: ${{ github.event.pull_request.head.sha || github.sha }}
tableName: app-playback-terraform-plan-storage-metadata
bucketName: app-playback-terraform-state-prod-us-east-1
Plan runs, can see object in S3
but nothing in DDB…no writes, can see a single read. What am I not grokking about what this thing does? I kinda expected to see some metadata about the plan/pr in DDB? Action completed successfully.
Yes, you should see something like that. Our west-coast team should be coming on line shortly and can advise.
##[debug]tableName: badgers
##[debug]bucketName: badgers
##[debug]metadataRepositoryType: dynamo
##[debug]planRepositoryType: s3
##[debug]bucketName: badgers
##[debug]Node Action run completed with exit code 0
##[debug]Finishing: Store Plan
^^ debug output
Hi Josh…just to be clear, do you have a PR open? It should write metadata to the DDB table whenever you push changes. And another sanity check, does your user/role in GHA have access to write to that table in DDB?
Hey Matt - thanks so much. Yes PR is open (internal GHE), chaining roles and yes it has access to DDB and S3. I was getting explicit permission denied writing to DDB before adding it - so something was trying…
Hmm, I wonder if the same PR meant it didn’t try to rewrite on a subsequent push? Let me push another change.
Committed a new change, can see S3 plan file under the new sha…but still nothing in DDB (Item count/size etc 0) …
Actually, maybe the previous DDB IAM issue was on scan, which would make sense as I can see reads on the table but no writes.
Yup, confirmed previous fail was trying to scan. (since succeeded)
Bah - so sorry, problem between keyboard and computer. I’m looking in the wrong account - this is working as intended
When you’re ready, we have example workflows we can share for drift detection/remediation as well.
Working nicely - thank you!
Does anyone know of or has built a custom VSCode automation for refactoring a resource or module argument out into a variable?
As in I want to select an argument value like in the below screenshot and be able to right click or hit a keyboard shortcut, and it’ll create a new variable block in variables.tf with the argument’s name and that value as the default value?
No but that sounds very cool.
Perhaps Copilot can be trained to do it?
Would be super useful, right?
I think I tried to use copilot to do it, but it only wanted to do it in the same file.
I’m sure this would be a no-brainer for somebody who has built a proper VSCode app.
Or even just something on the command line
For this specific ask, I am 99% sure that with the right prompt and a shell script you could iterate over all the files with mods, because the prompt would be easy and the chances of getting it wrong are small.
AI on the command line
In your prompt, just make sure to convey that it should respond in HCL and only focus on parameterizing constants. You can give it the convention for naming variables, etc.
I’m working on this extension for the monaco editor, and will port it to a vscode plugin among other things soon
Will check out mods for this… Charmbracelet
@george.m.sedky awesome to hear! Happy to be a beta tester when you’ve got a VSCode plugin.
@Matt Gowie this is how it works now with monaco before the VS code plugin https://youtu.be/ktyXJpf36W0?si=xJaaQ5Pn1i7L0m_j
Closed - Managed to make it work by switching to a different region that supports the API Gateway that I’m trying to create
Hey guys, I’m experiencing something weird, I have a very simple AWS API Gateway definition and for some reason it’s not being created–it’s stuck in creation state:
What is the issue it’s trying to create? Is there any error you’re facing?
Hi @tamish60, I’ve managed to make it work. This region doesn’t seem to have support for the kind of API Gateway that I’m trying to create; I moved to a different region and it worked just fine.
2024-04-13
2024-04-15
2024-04-16
Hi,
I’m currently using terragrunt and want to migrate to atmos. One very convenient thing about terragrunt is that I can simply override the terraform module git repo URLs with a local path (https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-source-map). This allows me to develop tf modules using terragrunt in an efficient way.
Is there anything like it in atmos? If not, what is the best way to develop tf modules while using atmos?
(@Stephan Helas best to use atmos)
So we’ve seen more and more similar requests to these, and I can really identify with this request as something we could/should support via the atmos vendor
command.
One of our design goals in atmos is to avoid code generation/manipulation as much as possible. This ensures future compatibility with terraform.
So while we don’t support it the way terragrunt
does, we support it the way terraform
already supports it. :smiley:
That’s using the _override.tf pattern.
We like this approach because it keeps code as vanilla as possible while sticking with features native to Terraform.
https://developer.hashicorp.com/terraform/language/files/override
So, let’s say you have a main.tf with something like this (from the terragrunt docs you linked):
module "example" {
source = "github.com/org/modules.git//example"
// other parameters
}
To do what you want to do in native terraform, create a file called main_override.tf:
module "example" {
source = "/local/path/to/modules//example"
}
You don’t need to duplicate the rest of the definition, only the parts you want to “override”, like the source
Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.
So this will work together with the atmos vendor
command.
- create components/terraform/mycomponent
- create the main_override.tf file with the new source (for example)
- configure vendoring via vendor.yml or component.yml
- run atmos vendor

This works because atmos vendor will not overwrite the main_override.tf file, since that file does not exist upstream. You can use this strategy to “monkey patch” anything in terraform.
https://en.wikipedia.org/wiki/Monkey_patch
In computer programming, monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. It is used to extend or modify the runtime code of dynamic languages such as Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, and Lisp without altering the original source code.
@Stephan Helas also consider the following if you are developing a new version of a component and don’t want to “touch” the existing version. Let’s say you already have a TF component in components/terraform/my-component and an Atmos manifest like this:
components:
terraform:
my-component:
metadata:
component: my-component # Point to the Terraform component (code)
vars:
...
then you want to create a new (completely different) version of the TF component (possibly with breaking changes). You place it into another folder in components/terraform, for example in components/terraform/my-component/v2 or in components/terraform/my-component-v2 (any folder names can be used; it’s up to you to organize it)
then suppose you want to test it only in the dev
account. You add this manifest to the top-level dev
stack, e.g. in the plat-ue2-dev
stack
components:
terraform:
my-component: # You can also use a new Atmos component name, e.g. my-component/v2:
metadata:
component: my-component/v2 # Point to the new Terraform component (code) under development
vars:
...
Wow, thx very much. I’ll try the vendor approach.
In fact, my second question was how to develop newer component versions
you can have many different versions of the “same” Terraform component, and point to them in Atmos stack manifests at diff scopes (orgs, tenant, account, region)
I think I got the idea, but I need to test it though. It’s a lot to take in. I love the deep YAML merging approach a lot. I’ll try it later and will probably come back with new questions :)
I didn’t know about the override feature of terraform. That’s awesome - and yes - I’ll totally use that. So, as soon as I have mastered the handling of the remote state (something terragrunt does for me) I think I can leave terragrunt behind
as soon as i have mastered the handling of the remote-state (something terragrunt does for me) Are you clear on the path forward here? As you might guess, we take the “terraform can handle that natively for you” approach by making components more intelligent using data sources and relying less on the tooling.
That said, if that wouldn’t work for you, we’d like to better understand your challenges.
in fact, my second question was, how to develop a newer component versions
We have a lot of “opinions” on this that diverge from established terragrunt patterns. @Andriy Knysh (Cloud Posse) alluded to it in his example.
This probably warrants a separate thread in atmos, if / when you need any guidance.
As a short summary, I use terragrunt (like many others, I suppose) mainly for generating backend and provider config. On top of that I use the dependency feature, for example to place an EC2 instance in a generated subnet. As I don’t have to deal with remote states manually, I don’t know how it works behind the terragrunt curtain.
Right now I’m trying to understand the concept of the “context” module. After that I’ll try to create the backend config and use remote state, but I think I will need at least another day for that.
Yes, all good questions. We should probably write up a guide that maps concepts and techniques common to #terragrunt and how we accomplish them in atmos
Backends is a good one - we didn’t like that TG generates it, when the whole point is IaC. So we wrote a component for that.
The best write up on context might be by @Matt Gowie https://masterpoint.io/updates/terraform-null-label/
A post highlighting one of our favorite terraform modules: terraform-null-label. We dive into what it is, why it’s great, and some potential use cases in …
Ok, thx so much. It’s a real shame that I didn’t find out about atmos sooner. Could have saved me from a lot of weird things I did in terragrunt.
Atmos already generates many things like the backend config for different clouds, and supports the override pattern (e.g. for providers)
https://atmos.tools/core-concepts/components/terraform-backends
Configure Terraform Backends.
the remote-state
question comes up often. We did not want to make YAML a programming language, but use it just for config, so the remote state is done in the TF module and then configured in Atmos, see https://atmos.tools/core-concepts/components/remote-state
The Terraform Component Remote State is used when we need to get the outputs of a Terraform component.
having said that, Atmos now supports Go
templating with Sprig and Gomplate functions and datasources
Atmos supports Go templates in stack manifests.
which brings YAML with Go templates close to being a “programming” language (w/o hardcoding anything in Atmos to work with functions in YAML, just embedding the existing template engines)
having said that, we might consider adding an Atmos-specific remote-state
datasource to get the remote state for a component in Atmos directly (YAML + Go templates) w/o going through Terraform.
we might support both approaches (no ETA on the “remote-state” template datasource yet)
@Andriy Knysh (Cloud Posse)
How would I get access to the provisioning documentation? I’ve created an account to log in.
@Stephan Helas that part of our documentation (docs.cloudposse.com/reference-architecture) is included with our commercial reference architecture.
That’s how we make money at Cloud Posse to support our open source. If you’d like more information, feel free to book a meeting with me at cloudposse.com/quiz
Hey all,
(Edit: got the initial module working, now have questions about whether I’ve set it up optimally, and how to configure other ses features)
I’m attempting to get the cloudposse/ses/aws terraform module working but am not having much luck. I’ve only really done small tweaks to terraform scripts so it’s probably just a skill issue, but hopefully someone here can point me in the right direction.
The project is using terragrunt and I’ve attempted to copy over ‘/examples/complete’ code.
To my amateur eye everything looks ok, but I’m getting this error:
│ Error: invalid value for name (must only contain alphanumeric characters, hyphens, underscores, commas, periods, @ symbols, plus and equals signs)
│
│ with module.ses.module.ses_user.aws_iam_user.default[0],
│ on .terraform/modules/ses.ses_user/main.tf line 11, in resource "aws_iam_user" "default":
│ 11: name = module.this.id
in my ./modules/ses folder I have copied the following example files exactly:
context.tf
outputs.tf
variables.tf
versions.tf
My main.tf looks like this:
module "ses" {
source = "cloudposse/ses/aws"
version = "0.25.0"
domain = var.domain
zone_id = ""
verify_dkim = false
verify_domain = false
context = module.this.context
}
I don’t want the route53 verification stuff set up (as we will need to set up the MX record ourselves) so I didn’t add that resource, or the vpc module. Is that where I’ve messed up?
and I also have a terraform.tf file with this content (matching how others have set up other modules in our project):
provider "aws" {
region = var.region
}
provider "awsutils" {
region = var.region
}
I’m really stuck on why this error is being thrown inside this ses.ses_user module. Any help at all would be greatly appreciated!
I removed the terraform.tf file and moved the providers into the main.tf file, and added in the vpc module and route53 resource to see if that was the issue, but it wasn’t.
provider "aws" {
region = var.region
}
provider "awsutils" {
region = var.region
}
module "vpc" {
source = "cloudposse/vpc/aws"
version = "2.1.1"
ipv4_primary_cidr_block = "<VPC_BLOCK - not sure if confidential>"
context = module.this.context
}
resource "aws_route53_zone" "private_dns_zone" {
name = var.domain
vpc {
vpc_id = module.vpc.vpc_id
}
tags = module.this.tags
}
module "ses" {
source = "cloudposse/ses/aws"
version = "0.25.0"
domain = var.domain
zone_id = ""
verify_dkim = false
verify_domain = false
context = module.this.context
}
Really not sure what’s going wrong here. I’ve matched the examples pretty much exactly. Is it because it’s running via terragrunt or something?
Am I supposed to set up every parameter in the context.tf file by passing in variables? It looks like there’s a “name” var in there which defaults to null - that could explain why I’m getting the error maybe?
I think that might be what I was missing, passing in the name variable. I just set it to our application name and it worked! I’ll refactor things to pass these variables down from the environment so it should be good, finally got the basic ses setup up.
Now the question is how to configure all the other SES stuff.
Is it possible to configure the ‘Monitoring your email sending’ (publish to SNS topic) via the terraform script? This required a configuration set to be applied too, is that possible via the module? I can’t see any options for the inputs for this.
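For what it’s worth, even if the module doesn’t expose inputs for this, the sending-event pipeline can be wired up with plain provider resources alongside it. A hedged sketch (all names hypothetical; event types can be trimmed):

```hcl
# Configuration set that sending code must reference for events to be captured.
resource "aws_ses_configuration_set" "monitoring" {
  name = "monitoring" # hypothetical name
}

# SNS topic to receive the sending events.
resource "aws_sns_topic" "ses_events" {
  name = "ses-events" # hypothetical name
}

# Publish bounce/complaint/delivery events from the configuration set to SNS.
resource "aws_ses_event_destination" "sns" {
  name                   = "sns-events"
  configuration_set_name = aws_ses_configuration_set.monitoring.name
  enabled                = true
  matching_types         = ["bounce", "complaint", "delivery"]

  sns_destination {
    topic_arn = aws_sns_topic.ses_events.arn
  }
}
```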
You should give it a simple name like ses-user and it should work
The name goes from the context as an input, is used in the module.this null label, and then gets passed to the ses module via the context input. module.this.id is the fully qualified name composed of the namespace, environment, stage, name, and other inputs. Technically none of those are required, but at least one (such as name) needs a value so that the id is non-empty and you don’t get that error message.
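Concretely, the fix can be sketched like this (the label values are hypothetical; any non-empty label input would do):

```hcl
# With at least one label set, module.this.id resolves to something like
# "acme-dev-myapp" instead of an empty string, so the IAM user name is valid.
module "ses" {
  source  = "cloudposse/ses/aws"
  version = "0.25.0"

  namespace = "acme"  # optional
  stage     = "dev"   # optional
  name      = "myapp" # at least one label input must be non-empty

  domain        = var.domain
  zone_id       = ""
  verify_dkim   = false
  verify_domain = false
  context       = module.this.context
}
```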
Awesome thanks for confirming. I had it as “ses” to start with, then I’ve changed it to use the same iam user name that the backend uses for s3 as well so it has the perms to talk to both services. One thing I haven’t checked yet is whether it deletes the other perms on that user or not.
There’s also a deprecated warning when running the terraform job:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
╷
│ Warning: Argument is deprecated
│
│ with module.ses.module.ses_user.module.store_write[0].aws_ssm_parameter.default["/system_user/ccp-dev/access_key_id"],
│ on .terraform/modules/ses.ses_user.store_write/main.tf line 22, in resource "aws_ssm_parameter" "default":
│ 22: overwrite = each.value.overwrite
│
│ this attribute has been deprecated
│
│ (and one more similar warning elsewhere)
╵
looks like a deprecation in the ssm parameter write module
https://github.com/cloudposse/terraform-aws-ssm-parameter-store/issues/58
Describe the Bug
After a terraform apply
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
╷
│ Warning: Argument is deprecated
│
│ with module.ses.module.ses_user.module.store_write[0].aws_ssm_parameter.default["/system_user/ccp-dev/access_key_id"],
│ on .terraform/modules/ses.ses_user.store_write/main.tf line 22, in resource "aws_ssm_parameter" "default":
│ 22: overwrite = each.value.overwrite
│
│ this attribute has been deprecated
│
│ (and one more similar warning elsewhere)
From upstream docs
Warning
overwrite also makes it possible to overwrite an existing SSM Parameter that’s not created by Terraform before. This argument has been deprecated and will be removed in v6.0.0 of the provider. For more information on how this affects the behavior of this resource, see this issue comment.
Expected Behavior
No deprecation warning
Steps to Reproduce
Run terraform apply using this module
Screenshots
No response
Environment
No response
Additional Context
• https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter
Hello, Have you had a chance to test the new Terraform 1.8.0 version? In my experience, the new version appears to be approximately three times slower compared to version 1.7.5. I’ve created a new GitHub Issue regarding this performance problem, but we need to reproduce the issue (we need more cases). If you upgrade to 1.8, could you gauge the performance and memory usage before and after, then share the results here: https://github.com/hashicorp/terraform/issues/34984
I know one of our customers tested it (accidentally) in their GitHub Actions by not pinning their Terraform version. When 1.8 was released, their Terraform started throwing exceptions and crashing. Pinning to the previous 1.7.x release fixed it.
2024-04-17
v1.8.1 1.8.1 (April 17, 2024) BUG FIXES:
Fix crash in terraform plan when referencing a module output that does not exist within the try(…) function. (#34985) Fix crash in terraform apply when referencing a module with no planned changes. (#34985)
This PR fixes two crashes within Terraform v1.8.0. In both cases a module expansion was being missed, and then crashing when something tried to reference the missed module. The first occurs when re…
Terraform Version
$ terraform --version
Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.39.1
+ provider registry.terraform.io/hashicorp/local v2.4.1
+ provider registry.terraform.io/hashicorp/null v3.2.2
+ provider registry.terraform.io/hashicorp/random v3.6.0
Terraform Configuration Files
...terraform config...
Debug Output
n/a
Expected Behavior
$ time terraform validate
Success! The configuration is valid.
real 0m1.452s
user 0m2.379s
sys 0m0.318s
Actual Behavior
$ time terraform validate
Success! The configuration is valid.
real 0m9.720s
user 0m2.701s
sys 0m0.444s
Steps to Reproduce
git clone <https://github.com/philips-labs/terraform-aws-github-runner.git>
cd terraform-aws-github-runner/examples/multi-runner/
time terraform validate
Additional Context
Terraform v1.8 is ~3x slower than v1.7 and consumes ~5x more memory.
References
No response
2024-04-18
how powerful are the provider functions? trying to understand the use cases
extremely powerful.
You can write your own function in golang within a provider and then run the function in terraform.
In OpenTofu 1.7, where provider functions are coming as well, we have some cool functional expansions on it that allow use cases like e.g. writing custom functions in Lua files side-by-side with your .tf files.
We’ll have a livestream next week on Wednesday diving into some of the details: https://www.youtube.com/watch?v=6OXBv0MYalY
opentofu 1.7 is still in beta tho, no?
v1.7.0-beta1 Do not use this release for production workloads! It’s time for the first beta release of the 1.7.0 version! This includes a lot of major and minor new features, as well as a ton of community contributions! The highlights are:
State Encryption (docs), Provider-defined Functions (docs)…
Beta 1 released this morning!
writing custom functions in Lua files side-by-side with your .tf files.
function main_a( input )
local animal_sounds = {
cat = 'meow',
dog = 'woof',
cow = 'moo'
}
return animal_sounds
end
We’re working to get the experimental Lua provider on the registry as we speak, so you’ll be able to play around with it when test driving the beta!
are there other languages being supported besides Lua?
It’s not support for any specific languages per se, we’re just enabling the creation of providers that can dynamically expose custom functions based on e.g. a file passed to the provider as a config parameter.
So like here
provider "lua" {
lua = file("./main.lua")
}
So the community will be able to create providers for whatever languages they want.
However, the providers still have to be written in Go, and after investigating options for embedding both Python and JavaScript in a provider written in Go, it’s not looking very promising.
I’ll be doing a PoC for dynamically writing those in Go though, via https://github.com/traefik/yaegi
How about the security side of functions?
do you mean how does opentofu prevent supply chain attacks with providers that integrate new languages ?
I don’t think that’s any different to normal providers and their resources though, is it?
Btw. Lua provider is in and ready to play with https://github.com/opentofu/terraform-provider-lua
awesome
Thinking about whether it’s an issue if the Lua code could replace Terraform binaries, etc.
this is freggin awesome!
Using provider functions, it will be possible to run Lua alongside terraform / opentofu.
function main_a( input )
local animal_sounds = {
cat = 'meow',
dog = 'woof',
cow = 'moo'
}
return animal_sounds
end
Then call that from terraform.
terraform {
required_providers {
tester = {
source = "terraform.local/local/testfunctions"
version = "0.0.1"
}
}
}
provider "tester" {
lua = file("./main.lua")
}
output "test" {
value = provider::tester::main_a(tomap({"foo": {"bar": 190}}))
}
See example: https://github.com/opentofu/opentofu/pull/1491
Granted, I don’t love the way this looks: provider::tester::main_a(tomap({"foo": {"bar": 190}}))
https://sweetops.slack.com/archives/CB6GHNLG0/p1713454132172149?thread_ts=1713440027.477669&cid=CB6GHNLG0
Prior to this change a single unconfigured lazy provider instance was used per provider type to supply functions. This used the functions provided by GetSchema only.
With this change, provider function calls are detected and supplied via GraphNodeReferencer and are used in the ProviderFunctionTransformer to add dependencies between referencers and the providers that supply their functions.
With that information, EvalContextBuiltin can now assume that all providers that require configuration have been configured by the time a particular scope is requested. It can then use its initialized providers to supply all requested functions.
At a high level, it allows providers to dynamically register functions based on their configurations.
main.lua
function main_a( input )
  local animal_sounds = {
    cat = 'meow',
    dog = 'woof',
    cow = 'moo'
  }
  return animal_sounds
end
terraform {
  required_providers {
    tester = {
      source  = "terraform.local/local/testfunctions"
      version = "0.0.1"
    }
  }
}

provider "tester" {
  lua = file("./main.lua")
}

output "test" {
  value = provider::tester::main_a(tomap({"foo": {"bar": 190}}))
}
Output:
Changes to Outputs:
+ test = {
+ cat = "meow"
+ cow = "moo"
+ dog = "woof"
}
This requires some enhancements to the HCL library and is currently pointing at my personal fork, with a PR into the main HCL repository: hashicorp/hcl#676. As this may take some time for the HCL Team to review, I will likely move the forked code under the OpenTofu umbrella before this is merged [done].
This PR will be accompanied by a simple provider framework for implementing function providers similar to the example above.
Related to #1326
Target Release
1.7.0
It is awesome! To be clear: this will likely only be available in OpenTofu and not in Terraform, because the OpenTofu devs are taking the provider functions feature further with this update they came up with. I haven't seen anything saying Terraform will do the same, which will be an interesting divergence!
Oh, I didn’t catch that.
I was more hoping to see provider functions add uniformity and improve interoperability, rather than introduce incompatibilities
This requires some enhancements to the HCL library and is currently pointing at my personal fork, with a PR into the main HCL repository: hashicorp/hcl#676. As this may take some time for the HCL Team to review, I will likely move the forked code under the OpenTofu umbrella before this is merged [done].
With the focus on functions in recent hcl releases, I thought it time to introduce the ability to inspect which functions are required to evaluate expressions. This mirrors the Variable traversal present throughout the codebase.
This allows for better error messages and optimizations to be built around supplying custom functions by consumers of HCL.
These changes are backwards compatible and do not constitute a breaking change. I explicitly introduced hcl.ExpressionWithFunctions to allow consumers to opt into supporting function inspection. I would recommend that this be moved into hcl.Expression if a major version with API changes is ever considered.
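The opt-in approach described there (introducing a new interface alongside the existing one and detecting it with a type assertion) is a common way to extend a Go API without breaking existing implementations. A minimal self-contained sketch of the pattern; the types below are stand-ins for illustration, not the real hcl API:

```go
package main

import "fmt"

// Expression is a stand-in for an existing interface (like hcl.Expression)
// that only knows about variable references.
type Expression interface {
	Variables() []string
}

// ExpressionWithFunctions mirrors the opt-in idea: expression types that also
// know which functions they call can additionally implement this interface.
type ExpressionWithFunctions interface {
	Expression
	Functions() []string
}

// funcCallExpr is a toy expression that references one variable and calls one function.
type funcCallExpr struct {
	variable string
	function string
}

func (e funcCallExpr) Variables() []string { return []string{e.variable} }
func (e funcCallExpr) Functions() []string { return []string{e.function} }

// RequiredFunctions returns the functions an expression needs, if its type
// opted in. Consumers unaware of the new interface keep working unchanged.
func RequiredFunctions(expr Expression) []string {
	if ef, ok := expr.(ExpressionWithFunctions); ok {
		return ef.Functions()
	}
	return nil // older expression types: no function info available
}

func main() {
	expr := funcCallExpr{variable: "input", function: "provider::tester::main_a"}
	fmt.Println(RequiredFunctions(expr))
}
```

Because the new behavior is only reachable through the type assertion, existing Expression implementations compile and behave exactly as before, which is what makes the change non-breaking.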
Got it
OpenTofu will still likely add provider-defined functions that providers define and expose, and AFAIU those should be available across both OTF and Terraform, BUT these dynamically defined, project-specific functions that you create yourself will be OTF-only.
This is my understanding so far, so not 100% on that.
Got a quick question about the projects that Terraform depends on, e.g. https://github.com/hashicorp/hcl/tree/main — why weren't the licenses of those projects also converted by HashiCorp?