#terraform (2024-04)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2024-04-01
2024-04-02
stupid question but are you guys terraforming your github configs?
one example is to use the github
provider
https://github.com/cloudposse/terraform-aws-components/blob/main/modules/argocd-repo/versions.tf#L9
then use the provider resources to provision
https://github.com/cloudposse/terraform-aws-components/blob/main/modules/argocd-repo/git-files.tf
resource "github_repository_file" "gitignore" {
count = local.enabled ? 1 : 0
repository = local.github_repository.name
branch = local.github_repository.default_branch
file = ".gitignore"
content = templatefile("${path.module}/templates/.gitignore.tpl", {
entries = var.gitignore_entries
})
commit_message = "Create .gitignore file."
commit_author = var.github_user
commit_email = var.github_user_email
overwrite_on_create = true
}
resource "github_repository_file" "readme" {
count = local.enabled ? 1 : 0
repository = local.github_repository.name
branch = local.github_repository.default_branch
file = "README.md"
content = templatefile("${path.module}/templates/README.md.tpl", {
repository_name = local.github_repository.name
repository_description = local.github_repository.description
github_organization = var.github_organization
})
commit_message = "Create README.md file."
commit_author = var.github_user
commit_email = var.github_user_email
overwrite_on_create = true
}
resource "github_repository_file" "codeowners_file" {
count = local.enabled ? 1 : 0
repository = local.github_repository.name
branch = local.github_repository.default_branch
file = ".github/CODEOWNERS"
content = templatefile("${path.module}/templates/CODEOWNERS.tpl", {
codeowners = var.github_codeowner_teams
})
commit_message = "Create CODEOWNERS file."
commit_author = var.github_user
commit_email = var.github_user_email
overwrite_on_create = true
}
resource "github_repository_file" "pull_request_template" {
count = local.enabled ? 1 : 0
repository = local.github_repository.name
branch = local.github_repository.default_branch
file = ".github/PULL_REQUEST_TEMPLATE.md"
content = file("${path.module}/templates/PULL_REQUEST_TEMPLATE.md")
commit_message = "Create PULL_REQUEST_TEMPLATE.md file."
commit_author = var.github_user
commit_email = var.github_user_email
overwrite_on_create = true
}
Yea, I have a requirement to redeploy GitHub inside our boundaries, so I have a chance to do it clean. I was about to start clicking, but then I found some of what you’re saying above.
someone emoted nope, hmmmm
the auth token for the github provider is read from SSM or ASM in the case of AWS, the rest should be straightforward (variables configured with Terraform and/or Atmos), let us know if you need any help
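For example, a minimal sketch of wiring that up (the SSM parameter path here is just a placeholder):
data "aws_ssm_parameter" "github_token" {
  # hypothetical parameter path; use wherever you actually store the token
  name            = "/github/api_token"
  with_decryption = true
}

provider "github" {
  owner = var.github_organization
  token = data.aws_ssm_parameter.github_token.value
}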
We recently rolled this out for cloud posse: https://github.com/repository-settings/app
Pull Requests for GitHub repository settings
# These settings are synced to GitHub by <https://probot.github.io/apps/settings/>
repository:
# See <https://docs.github.com/en/rest/reference/repos#update-a-repository> for all available settings.
# Note: You cannot unarchive repositories through the API. `true` to archive this repository.
archived: false
# Either `true` to enable issues for this repository, `false` to disable them.
has_issues: true
# Either `true` to enable projects for this repository, or `false` to disable them.
# If projects are disabled for the organization, passing `true` will cause an API error.
has_projects: true
# Either `true` to enable the wiki for this repository, `false` to disable it.
has_wiki: false
# Either `true` to enable downloads for this repository, `false` to disable them.
has_downloads: true
# Updates the default branch for this repository.
#default_branch: main
# Either `true` to allow squash-merging pull requests, or `false` to prevent
# squash-merging.
allow_squash_merge: true
# Either `true` to allow merging pull requests with a merge commit, or `false`
# to prevent merging pull requests with merge commits.
allow_merge_commit: false
# Either `true` to allow rebase-merging pull requests, or `false` to prevent
# rebase-merging.
allow_rebase_merge: false
# Either `true` to enable automatic deletion of branches on merge, or `false` to disable
delete_branch_on_merge: true
# Either `true` to enable automated security fixes, or `false` to disable
# automated security fixes.
enable_automated_security_fixes: true
# Either `true` to enable vulnerability alerts, or `false` to disable
# vulnerability alerts.
enable_vulnerability_alerts: true
# Either `true` to make this repo available as a template repository or `false` to prevent it.
#is_template: false
environments:
- name: release
deployment_branch_policy:
custom_branches:
- main
- release/**
- name: security
deployment_branch_policy:
custom_branches:
- main
- release/**
# Labels: define labels for Issues and Pull Requests
labels:
- name: bug
color: '#d73a4a'
description: 🐛 An issue with the system
- name: feature
color: '#336699'
description: New functionality
- name: bugfix
color: '#fbca04'
description: Change that restores intended behavior
- name: auto-update
color: '#ededed'
description: This PR was automatically generated
- name: do not merge
color: '#B60205'
description: Do not merge this PR, doing so would cause problems
- name: documentation
color: '#0075ca'
description: Improvements or additions to documentation
- name: readme
color: '#0075ca'
description: Improvements or additions to the README
- name: duplicate
color: '#cfd3d7'
description: This issue or pull request already exists
- name: enhancement
color: '#a2eeef'
description: New feature or request
- name: good first issue
color: '#7057ff'
description: 'Good for newcomers'
- name: help wanted
color: '#008672'
description: 'Extra attention is needed'
- name: invalid
color: '#e4e669'
description: "This doesn't seem right"
- name: major
color: '#00FF00'
description: 'Breaking changes (or first stable release)'
- name: minor
color: '#00cc33'
description: New features that do not break anything
- name: no-release
color: '#0075ca'
description: 'Do not create a new release (wait for additional code changes)'
- name: patch
color: '#0E8A16'
description: A minor, backward compatible change
- name: question
color: '#d876e3'
- name: wip
color: '#B60205'
description: 'Work in Progress: Not ready for final review or merge'
- name: wontfix
color: '#B60205'
description: 'This will not be worked on'
- name: needs-cloudposse
color: '#B60205'
description: 'Needs Cloud Posse assistance'
- name: needs-test
color: '#B60205'
description: 'Needs testing'
- name: triage
color: '#fcb32c'
description: 'Needs triage'
- name: conflict
color: '#B60205'
description: 'This PR has conflicts'
- name: no-changes
color: '#cccccc'
description: 'No changes were made in this PR'
- name: stale
color: '#e69138'
description: 'This PR has gone stale'
- name: migration
color: '#2f81f7'
description: 'This PR involves a migration'
- name: terraform/0.13
color: '#ffd9c4'
description: 'Module requires Terraform 0.13 or later'
# Note: `permission` is only valid on organization-owned repositories.
# The permission to grant the collaborator. Can be one of:
# * `pull` - can pull, but not push to or administer this repository.
# * `push` - can pull and push, but not administer this repository.
# * `admin` - can pull, push and administer this repository.
# * `maintain` - Recommended for project managers who need to manage the repository without access to sensitive or destructive actions.
# * `triage` - Recommended for contributors who need to proactively manage issues and pull requests without write access.
#
# See <https://docs.github.com/en/rest/reference/teams#add-or-update-team-repository-permissions> for available options
teams:
- name: approvers
permission: push
- name: admins
permission: admin
- name: bots
permission: admin
- name: engineering
permission: write
- name: contributors
permission: write
- name: security
permission: pull
That is very slick
This combined with GitHub organizational repository rulesets works well for us and eliminates the need to manage branch protections at the repo level
From a compliance perspective, I have to enforce any baseline I have
So like this might work well to enforce
If you decide to go the route with terraform, there are some gotchas. I know there are bugs/limitations with the HashiCorp managed provider for GitHub that others have worked around, but those forks are seemingly abandoned. We would recommend a single terraform component to manage a single repo, then using atmos to define the configuration for each repo with inheritance. This mitigates one of the most common problems companies experience when using terraform to manage GitHub repos: GitHub API rate limits. Defining a factory in terraform for repos is guaranteed to be hamstrung by these rate limits. Defining the factory in atmos instead, and combining it with our GHA, ensures only affected repos are planned/applied when changes are made.
But I rather like the approach we took instead.
I’m using Terraform to manage all our internal Github Enterprise repos and teams etc. Happy to answer any questions.
@joshmyers what issues have you run into along the way and are you using the official hashicorp provider?
Mostly slowness in the provider, against Github Enterprise on several gigs over the last few years. Sometimes this comes from API limits but mostly slow provider when you start managing 50-100+ repos and plenty of settings. We managed to hunt down a few of the slow resources and switch them out for others e.g. graphql for github_branch_protection_v3
resource. Things got a bit better in more recent provider versions. Can also split the states at some business logical point so each doesn’t get too big. I like the drift detection and alignment here. Plenty of manual tweaks/bodges forgotten to be removed got cleaned up.
Terraform Version
0.12.6
Affected Resource(s)
Please list the resources as a list, for example:
• github_repository
• github_branch_protection
• github_team_repository
• github_actions_secret
Terraform Configuration Files
Here’s our repo module (slightly redacted ****):
terraform {
required_providers {
github = ">= 3.1.0"
}
}
locals {
# Terraform modules must be named `terraform-<provider>-<module name>`
# so we can extract the provider easily
provider = element(split("-", var.repository), 1)
}
data "github_team" "****" {
slug = "****"
}
data "github_team" "****" {
slug = "****"
}
resource "github_repository" "main" {
name = var.repository
description = var.description
visibility = var.visibility
topics = [
"terraform",
"terraform-module",
"terraform-${local.provider}"
]
has_issues = var.has_issues
has_projects = var.has_projects
has_wiki = var.has_wiki
vulnerability_alerts = true
delete_branch_on_merge = true
archived = var.archived
dynamic "template" {
for_each = var.fork ? [] : [var.fork]
content {
owner = "waveaccounting"
repository = "****"
}
}
}
resource "github_branch_protection" "main" {
repository_id = github_repository.main.node_id
pattern = github_repository.main.default_branch
required_status_checks {
strict = true
contexts = [
"Terraform",
"docs",
]
}
required_pull_request_reviews {
dismiss_stale_reviews = true
require_code_owner_reviews = true
}
}
resource "github_team_repository" "****" {
team_id = data.github_team.****.id
repository = github_repository.main.name
permission = "admin"
}
resource "github_team_repository" "****" {
team_id = data.github_team.****.id
repository = github_repository.main.name
permission = "admin"
}
resource "github_actions_secret" "secrets" {
for_each = var.secrets
repository = github_repository.main.name
secret_name = each.key
plaintext_value = each.value
}
Actual Behavior
We are managing approximately 90 repositories using this module via Terraform Cloud remote operations (which means we can’t disable refresh or change parallelization afaik). I timed a refresh + plan: 9m22s (562s) == 6.2s per repository
Are there any optimizations we can make on our side or in the github provider / API to try to improve this? We’re discussing breaking up our repos into smaller workspaces, but that feels like a bit of a hack.
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
terraform plan
on large numbers of repositories / branch protection configs
Important Factoids
• Running on Terraform Cloud Remote Operation
References
• Similar issue to https://github.com/terraform-providers/terraform-provider-github/issues/565, although things weren’t particularly fast before the update either
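For reference, the config above uses the GraphQL-backed github_branch_protection (repository_id + pattern); the REST-backed alternative mentioned earlier, github_branch_protection_v3, looks roughly like this (a sketch, not a drop-in replacement):
resource "github_branch_protection_v3" "main" {
  repository = github_repository.main.name
  branch     = "main"

  required_status_checks {
    strict   = true
    contexts = ["Terraform", "docs"]
  }

  required_pull_request_reviews {
    dismiss_stale_reviews      = true
    require_code_owner_reviews = true
  }
}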
The whole ownership move to integrations happened quite a while ago, not too many limitations I’ve hit other than the obvious resources and lack of certain things (most that you’d want is available). It’s the main provider. No competing fork silliness.
WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
I think https://github.com/repository-settings/app looks cool in terms of getting some consistency across multiple repos, but I wouldn’t call the model enforcing. Certainly lighter touch than TF. Wanted to manage teams too so we’re already here…
Pull Requests for GitHub repository settings
Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml>
files in repos, but have the probot do the heavy lifting.
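e.g. something along these lines, reusing the github_repository_file pattern from the component above (the template path is hypothetical):
resource "github_repository_file" "settings" {
  repository          = local.github_repository.name
  branch              = local.github_repository.default_branch
  file                = ".github/settings.yml"
  content             = file("${path.module}/templates/settings.yml")
  commit_message      = "Sync repository settings."
  commit_author       = var.github_user
  commit_email        = var.github_user_email
  overwrite_on_create = true
}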
I think this statement is easily misunderstood.
The same is :100: true of any implementation that manages repository settings via GitOps.
It’s entirely mitigated with CODEOWNERS
and branch protections (e.g. via organizational repository rulesets).
WARNING: Note that this app inherently escalates anyone with push permissions to the admin role, since they can push config settings to the master branch, which will be synced.
Thanks for clarifying. Yes, that’s true.
I have to play with the provider a bit
https://sweetops.slack.com/archives/CB6GHNLG0/p1712348663742749?thread_ts=1712075779.976419&cid=CB6GHNLG0 ish, the repo that manages the other repos can be managed/run by a separate team/different policies etc vs directly applying in your own repo. Aye CODEOWNERS/protections help.
I think this statement is easily misunderstood.
The same is :100: true of any implementation that manages repository settings via GitOps.
It’s entirely mitigated with CODEOWNERS
and branch protections (e.g. via organizational repository rulesets).
fwiw, that’s the approach taken by https://github.com/github/safe-settings
It’s the centralized/decentralized argument.
Note, the same exact teams required to approve changes to the centralized repo, can/should be the same teams required to approve the PRs in decentralized repos. About the only difference I can see is visibility. In the centralized approach, the configuration can be private, which is beneficial. But the controls guarding the changes to the configuration itself, are the same in both cases: CODEOWNERS
& branch protections with approvals.
Ironically you could use the provider to manage the <https://github.com/cloudposse/.github/blob/main/.github/settings.yml>
files in repos, but have the probot do the heavy lifting.
This would be pretty awesome.
I wasn’t across safe-settings or repository-settings.
As of this week we have about 1k resources being managed in github via terraform and our plans take close to 10mins.
Eeeekkkk that’s a long time. Btw, did you publish your modules for this?
It’s private and it’s pretty much code that is unmaintainable. I just spent the past hour looking at repository-settings and for a moment I thought I was going to switch, but it looks like there is a known issue with branch protection rules. I wasn’t able to get it to work in my repo: https://github.com/repository-settings/app/issues/857
Problem Description
Unable to define branch protection rules through the branches
configuration.
I’ve tried numerous iterations, including a direct copy/paste from the docs. No branch protection is ever created.
What is actually happening
nothing at all.
What is the expected behaviour
branch protection rules should be created / maintained by the app
Error output, if available
n/a
Context: Are you using the hosted instance of repository-settings/app or running your own?
hosted
Other than branch protection (a big feature that we need to have), it seems like a pretty good tool. Given this recent experience I’m wondering, even if it worked, how would I know when it “stops” working in the future? Ex. they fix it, and in 3 months it breaks again and my new repos don’t have branch protection.
@venkata.mutyala you don’t even need those, since you’re on GHE. We don’t use those. We use Organizational Repository Rulesets, which IMO are better.
Why manage branch protections on hundreds of repos when you can do it in one place.
To be clear, Organizational Repository Rulesets implement branch protections
But they are more powerful. You can match on repository properties, wildcards, etc. You can easily exclude bots and apps to bypass protections. You can enable them in a dry-run mode without enforcement, and turn on enforcement once they look good.
Dangggggg. How did i forget about this.
Thanks Erik! This is going to clear out a lot of TF that i didn’t need to write. I recall seeing this before but totally forgot about it.
Note, we also have an open PR for repository-settings. It will probably be a while before it merges. The repo is maintained, but sparingly.
What
• Added type support for environment deployment_branch_policy custom branches.
Why
• Environment deployment_branch_policy supports custom branches of type branch or tag. The type is an optional parameter that defaults to branch. These changes allow us to specify deployment_branch_policy for tags.
Config example
environments:
- name: development
deployment_branch_policy:
custom_branches:
- dev/*
- name: release/*
type: branch
- name: v*
type: tag
You can specify a custom_branches list item as a string for back-compatibility, or as an object:
name: `string`
type: `branch | tag`
Related links
I recall seeing this before but totally forgot about it.
And I just talked about it on #office-hours!! I must have really sucked presenting how we use it. I even did a screenshare
That said, safe-settings (a hard fork of repository-settings, I believe, and also built on probot) is very well maintained, and by GitHub
How are you managing membership into each team?
That’s still manual for us.
Did you automate that too?
Yeah we have
Yeah we have some automation but it’s not very good. I was hoping to copy a cloudposse module, hence my tears :sob:.
@joshmyers did you go super granular with your permissions management and just use github_repository_collaborators, or do you use teams/memberships?
(We would accept contributions for cloudposse style modules for managing github repos and teams)
We do this and use https://github.com/mineiros-io/terraform-github-repository.
We have a very light root module around it here: https://github.com/masterpointio/terraform-components/tree/main/components/github-repositories
Nice, looks very similar to ours. We also manage a few things like the CODEOWNERS/PR template files there too
@Matt Gowie are you using the forked (abandoned?) provider or the official hashicorp provider
(because mineiros also forked the provider)
https://github.com/mineiros-io/terraform-github-repository/blob/main/versions.tf#L11-L12 < the official provider.
Cool! Then I’m more bullish on it
2024-04-04
v1.8.0-rc2 1.8.0-rc2 (April 4, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:
Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().
Upgrading to Terraform v1.8
I’m trying to set up a pipe in EventBridge; it mostly works. The problem is the logging config isn’t applied. Terraform isn’t giving any errors, but when you look at the web console you see “configure logging” instead of the below logging config. Anyone got an idea why that could be? The role has the right logging permissions.
resource "aws_cloudwatch_log_group" "pipes_log_group_raw_data_stream_transformer" {
name = "/aws/pipes/RawDataStreamTransformer"
retention_in_days = 365
tags = local.tags
}
resource "awscc_pipes_pipe" "raw_data_stream_transformer" {
name = "raw-data-stream-transformer"
role_arn = aws_iam_role.pipes_raw_data_transformer_role.arn
source = aws_kinesis_stream.raw_data_stream.arn
target = aws_kinesis_stream.transformed_data_stream.arn
enrichment = aws_lambda_function.data_transformer.arn
source_parameters = {
kinesis_stream_parameters = {
starting_position = "LATEST"
maximum_retry_attempts = 3
dead_letter_config = {
arn = aws_sqs_queue.raw_data_stream_deadletter.arn
}
}
}
target_parameters = {
kinesis_stream_parameters = {
partition_key = "$.partition_key"
}
}
log_configuration = {
enabled = true
log_level = "ERROR"
log_group_name = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.name
}
}
I think cw logs for event bridge need to have the aws/events/ prefix. Fairly sure I had the same problem and noted that when I created it through the console it created a log group with that prefix
Oh, interesting.. Let me try that
Unfortunately that wasn’t the issue..
log_configuration = {
cloudwatch_logs_log_destination = {
log_group_arn = aws_cloudwatch_log_group.pipes_log_group_raw_data_stream_transformer.arn
}
level = "ERROR"
include_execution_data = ["ALL"]
}
It was mostly a syntax error. This ended up being the solution
v1.9.0-alpha20240404 1.9.0-alpha20240404 (April 4, 2024) ENHANCEMENTS:
terraform console: Now has basic support for multi-line input in interactive mode. (#34822) If an entered line contains opening parentheses/etc that are not closed, Terraform will await another line of input to complete the expression. This initial implementation is primarily…
The console command, when running in interactive mode, will now detect if the input seems to be an incomplete (but valid enough so far) expression, and if so will produce another prompt to accept a…
from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.
Wow!
Cc @Jeremy G (Cloud Posse) @Andriy Knysh (Cloud Posse)
2024-04-05
Anyone have any info on Terraform Stacks? Anyone used it in private beta? Know when/what functionality may be coming to OSS?
Terraform stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.
According to the latest Hashicorp earnings call - this feature will be TFC/E specific and will not come to BSL TF cli.
Terraform stacks simplify provisioning and managing resources at scale, reducing the time and overhead of managing infrastructure.
Uff, thanks.
2024-04-06
2024-04-07
2024-04-08
2024-04-09
Hello everyone. Can we spin up two DMS instances from this module? I am facing one issue regarding the name. I have copied the same folder structure from this module. Please help me and let me know if this is possible to do or not. Thanks. Here is the module:
https://github.com/cloudposse/terraform-aws-dms/blob/main/examples/complete/main.tf
locals {
enabled = module.this.enabled
vpc_id = module.vpc.vpc_id
vpc_cidr_block = module.vpc.vpc_cidr_block
subnet_ids = module.subnets.private_subnet_ids
route_table_ids = module.subnets.private_route_table_ids
security_group_id = module.security_group.id
create_dms_iam_roles = local.enabled && var.create_dms_iam_roles
}
# Database Migration Service requires
# the below IAM Roles to be created before
# replication instances can be created.
# The roles should be provisioned only once per account.
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html>
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.APIRole>
# <https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/dms_replication_instance>
# * dms-vpc-role
# * dms-cloudwatch-logs-role
# * dms-access-for-endpoint
module "dms_iam" {
source = "../../modules/dms-iam"
enabled = local.create_dms_iam_roles
context = module.this.context
}
module "dms_replication_instance" {
source = "../../modules/dms-replication-instance"
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html>
engine_version = "3.4"
replication_instance_class = "dms.t2.small"
allocated_storage = 50
apply_immediately = true
auto_minor_version_upgrade = true
allow_major_version_upgrade = false
multi_az = false
publicly_accessible = false
preferred_maintenance_window = "sun:10:30-sun:14:30"
vpc_security_group_ids = [local.security_group_id, module.aurora_postgres_cluster.security_group_id]
subnet_ids = local.subnet_ids
context = module.this.context
depends_on = [
# The required DMS roles must be present before replication instances can be provisioned
module.dms_iam,
aws_vpc_endpoint.s3
]
}
module "dms_endpoint_aurora_postgres" {
source = "../../modules/dms-endpoint"
endpoint_type = "source"
engine_name = "aurora-postgresql"
server_name = module.aurora_postgres_cluster.endpoint
database_name = var.database_name
port = var.database_port
username = var.admin_user
password = var.admin_password
extra_connection_attributes = ""
secrets_manager_access_role_arn = null
secrets_manager_arn = null
ssl_mode = "none"
attributes = ["source"]
context = module.this.context
depends_on = [
module.aurora_postgres_cluster
]
}
module "dms_endpoint_s3_bucket" {
source = "../../modules/dms-endpoint"
endpoint_type = "target"
engine_name = "s3"
s3_settings = {
bucket_name = module.s3_bucket.bucket_id
bucket_folder = null
cdc_inserts_only = false
csv_row_delimiter = " "
csv_delimiter = ","
data_format = "parquet"
compression_type = "GZIP"
date_partition_delimiter = "NONE"
date_partition_enabled = true
date_partition_sequence = "YYYYMMDD"
include_op_for_full_load = true
parquet_timestamp_in_millisecond = true
timestamp_column_name = "timestamp"
service_access_role_arn = join("", aws_iam_role.s3[*].arn)
}
extra_connection_attributes = ""
attributes = ["target"]
context = module.this.context
depends_on = [
aws_iam_role.s3,
module.s3_bucket
]
}
resource "time_sleep" "wait_for_dms_endpoints" {
count = local.enabled ? 1 : 0
depends_on = [
module.dms_endpoint_aurora_postgres,
module.dms_endpoint_s3_bucket
]
create_duration = "2m"
destroy_duration = "30s"
}
# `dms_replication_task` will be created (at least) 2 minutes after `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket`
# `dms_endpoint_aurora_postgres` and `dms_endpoint_s3_bucket` will be destroyed (at least) 30 seconds after `dms_replication_task`
module "dms_replication_task" {
source = "../../modules/dms-replication-task"
replication_instance_arn = module.dms_replication_instance.replication_instance_arn
start_replication_task = true
migration_type = "full-load-and-cdc"
source_endpoint_arn = module.dms_endpoint_aurora_postgres.endpoint_arn
target_endpoint_arn = module.dms_endpoint_s3_bucket.endpoint_arn
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html>
replication_task_settings = file("${path.module}/config/replication-task-settings.json")
# <https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.html>
table_mappings = file("${path.module}/config/replication-task-table-mappings.json")
context = module.this.context
depends_on = [
module.dms_endpoint_aurora_postgres,
module.dms_endpoint_s3_bucket,
time_sleep.wait_for_dms_endpoints
]
}
module "dms_replication_instance_event_subscription" {
source = "../../modules/dms-event-subscription"
event_subscription_enabled = true
source_type = "replication-instance"
source_ids = [module.dms_replication_instance.replication_instance_id]
sns_topic_arn = module.sns_topic.sns_topic_arn
# <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
event_categories = [
"low storage",
"configuration change",
"maintenance",
"deletion",
"creation",
"failover",
"failure"
]
attributes = ["instance"]
context = module.this.context
}
module "dms_replication_task_event_subscription" {
source = "../../modules/dms-event-subscription"
event_subscription_enabled = true
source_type = "replication-task"
source_ids = [module.dms_replication_task.replication_task_id]
sns_topic_arn = module.sns_topic.sns_topic_arn
# <https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dms/describe-event-categories.html>
event_categories = [
"configuration change",
"state change",
"deletion",
"creation",
"failure"
]
attributes = ["task"]
context = module.this.context
}
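One way to get two replication instances out of this example, assuming the cloudposse null-label context shown above: instantiate the dms-replication-instance module twice and give each a distinct name or attributes, so the generated identifiers don’t collide. A rough sketch:
module "dms_replication_instance_primary" {
  source = "../../modules/dms-replication-instance"

  # ... engine_version, replication_instance_class, subnet_ids, etc. as above ...

  attributes = ["primary"] # disambiguates the null-label generated name
  context    = module.this.context
}

module "dms_replication_instance_secondary" {
  source = "../../modules/dms-replication-instance"

  # ... same required settings ...

  attributes = ["secondary"]
  context    = module.this.context
}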
hey guys, I am trying to set up vpc peering with the module and I am receiving the error The "count" value depends on resource attributes that cannot be determined until apply. My module definition is simple but relies on the vpc module, which should be created in the same terraform run.
Question - can the vpc peering module be applied alongside the module which creates the vpc for which peering should be configured?
@Jeremy G (Cloud Posse) where’s your recent post on count-of issues?
Also, you’ll dig this. https://sweetops.slack.com/archives/CB6GHNLG0/p1712272028318039
from the v1.9 alpha release notes!!!
• The experimental “deferred actions” feature, enabled by passing the -allow-deferral option to terraform plan, permits count and for_each arguments in module, resource, and data blocks to have unknown values and allows providers to react more flexibly to unknown values. This experiment is under active development, and so it’s not yet useful to participate in this experiment.
It might be less of a problem in the future.
@Piotr Pawlowski I wrote this article to help people understand the issue you are facing. I hope it helps you, too.
Details about computed values can cause terraform plan to fail
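The failure mode in a nutshell, as a hypothetical peering sketch (not the module’s actual internals): count is derived from an attribute that only exists after apply, so the plan can’t size the resource.
# Fails: the peer VPC ID comes from a resource created in the same run,
# so its value is unknown at plan time and count can't be computed.
resource "aws_vpc_peering_connection" "this" {
  count       = module.peer_vpc.vpc_id != "" ? 1 : 0
  vpc_id      = module.vpc.vpc_id
  peer_vpc_id = module.peer_vpc.vpc_id
}

# Common workaround: drive count from a value that is known at plan time,
# e.g. an explicit flag, instead of an apply-time attribute.
variable "peering_enabled" {
  type    = bool
  default = true
}
# count = var.peering_enabled ? 1 : 0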
Hello everyone. We are using your great datadog wrapper for terraform, but when I try to use the latest version I don’t see the new properties added in 1.4.1, like on_missing_data. Was that released to the terraform registry? I see it in the release zip and in the main branch, but when I init the module I don’t see it in the code.
Ah I think i get it. These were all migrated to options: … but for backwards compatibility the old values still work
Right, we moved the old top-level settings, and only the old settings, to their API location under options, where they belong. (Side note: priority actually belongs at the top level, not under options, and designating it as legacy was a mistake.) You can see the on_missing_data setting being set here:
on_missing_data = try(each.value.options.on_missing_data, null)
2024-04-10
v1.8.0 1.8.0 (April 10, 2024) If you are upgrading from Terraform v1.7 or earlier, please refer to the Terraform v1.8 Upgrade Guide. NEW FEATURES:
Providers can now offer functions which can be used from within the Terraform configuration language. The syntax for calling a provider-contributed function is provider::function_name().
Upgrading to Terraform v1.8
Hello All - I want to deploy a few resources in all the accounts in an AWS organization. Is there a way to do it in terraform? I know I can use different providers to do it in multiple accounts, but what if I want to create them in all accounts in the organization?
You can use multiple providers, or Terraform workspaces. There may be another approach too via some 3rd party tooling.
honestly, type that exact same question into ChatGPT.. it will give you some options
Haha.. did that and it wasn’t helpful. Thought of checking internally as some of you might have wanted the same.
lol right on, yeah the feedback it gave me seemed reasonable, but there has to be a better way to manage a scenario like that for sure; tons of people have to be doing that exact same thing
I’ve also seen this popping up in my feed on LinkedIn and in Cloud Posse repos.. this may be worth looking into (I plan on looking further into it as well): https://github.com/cloudposse/atmos
Terraform Orchestration Tool for DevOps. Keep environment configuration DRY with hierarchical imports of configurations, inheritance, and WAY more. Native support for Terraform and Helmfile.
yeah, seems like a good solution… see “Use Cases” on that URL
2024-04-11
I am trying to use the terraform-aws-eks-node-group module to create EKS nodes; once created, I am installing cluster-autoscaler. At the moment the autoscaler is missing the proper IAM permissions. I see that at some point a proper policy had been added, but I do not see it in the current module code.
Question: what is the proper approach to enable/create the IAM permissions required by the autoscaler?
I solved it by adding a custom aws_iam_policy, with the policy defined via an aws_iam_policy_document data source, and passing it to the module via the node_role_policy_arns variable. Not sure if this is the proper approach, though.
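Roughly what that looks like, using the permission set commonly documented for the cluster-autoscaler (a sketch; names are placeholders, and you may want to scope it more tightly):
data "aws_iam_policy_document" "cluster_autoscaler" {
  statement {
    effect = "Allow"
    actions = [
      "autoscaling:DescribeAutoScalingGroups",
      "autoscaling:DescribeAutoScalingInstances",
      "autoscaling:DescribeLaunchConfigurations",
      "autoscaling:DescribeTags",
      "autoscaling:SetDesiredCapacity",
      "autoscaling:TerminateInstanceInAutoScalingGroup",
      "ec2:DescribeLaunchTemplateVersions",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "cluster_autoscaler" {
  name   = "eks-cluster-autoscaler" # hypothetical name
  policy = data.aws_iam_policy_document.cluster_autoscaler.json
}

module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"
  # ... other node group settings ...
  node_role_policy_arns = [aws_iam_policy.cluster_autoscaler.arn]
  context               = module.this.context
}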
I have it declared like so:
node_role_arn = [module.eks_node_group_main.eks_node_group_role_arn]
that is in my node group module code, and my eks cluster module is also in the same .tf file as well.
oh snaps, you’re talking about the autoscaler, not only the node group.. never mind
2024-04-12
What schema does the DDB table used by https://github.com/cloudposse/github-action-terraform-plan-storage/tree/main need to have? cc @Erik Osterman (Cloud Posse)
I ended up here - so is the plan DDB table the same as the internal state/locks table TF uses, in your case?
module "s3_bucket" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.5.0"
component = var.s3_bucket_component_name
environment = try(var.s3_bucket_environment_name, module.this.environment)
context = module.this.context
}
module "dynamodb" {
source = "cloudposse/stack-config/yaml//modules/remote-state"
version = "1.5.0"
component = var.dynamodb_component_name
environment = try(var.dynamodb_environment_name, module.this.environment)
context = module.this.context
}
tl;dr Need to stop storing the plan as an artifact. This action looks nice. Need to provision the DDB table (if it is indeed different from the locks table).
Aha, @Igor Rodionov @Dan Miller (Cloud Posse) can you get unblocked on the actions and their dependencies.
Need to stop storing the plan as an artifact.
Why do you need to stop storing the planfile as an artifact? That ensures what you approve in the Pull Request is identical to what you apply upon merge to the default branch
As a workflow artifact attached to the PR — because putting it in S3/DDB would allow finer-grained access control.
Would rather store/fetch from S3.
As a workflow artifact attached to the PR
Aha! Yes, that should be doable, but not something we’ve yet implemented. We would like that though… What we tried to do was something even more extreme, which was using artifact storage for everything; that proved challenging because of how limited the artifact API is. But only storing the planfile as an artifact, and not trying to use the artifact storage to also replace dynamodb, should be much easier.
Would rather store/fetch from S3. Oh, wait, I think we’re saying the same thing. We use S3 in our customer implementations, so that is already supported.
I thought you wanted to use artifact storage
So there must be a feature flag somewhere not set.
Already use artifact storage but would rather S3 - the above action looks like it’ll do that, right? Just not sure what schema the metadata DDB table is expecting…?
Nono, we use both dynamodb and S3.
We use dynamo to store the metadata, so we can find planfiles and invalidate planfiles across PRs.
We use S3 because I think dynamodb has limits on blob storage. Also, it serves as a permanent record, if so desired.
We didn’t want to use S3 as a database and have to scan the bucket to find planfiles.
Note that 2 PRs could merge and affect the same components therefore we need some way to invalidate one or the other. Technically, a merge queue could alleviate some of the problems, but we don’t yet support that.
Yeah makes sense. Where do you actually create that DDB table / what’s the schema? e.g. the Terraform DDB state lock table has a partitionKey of LockID of type String
Yes, so here’s how we do it. Since we do everything with reusable root modules (components) with atmos, here’s what our configuration looks like.
import:
- catalog/s3-bucket/defaults
- catalog/dynamodb/defaults
components:
terraform:
# S3 Bucket for storing Terraform Plans
gitops/s3-bucket:
metadata:
component: s3-bucket
inherits:
- s3-bucket/defaults
vars:
name: gitops-plan-storage
allow_encrypted_uploads_only: false
# DynamoDB table used to store metadata for Terraform Plans
gitops/dynamodb:
metadata:
component: dynamodb
inherits:
- dynamodb/defaults
vars:
name: gitops-plan-storage
# These keys (case-sensitive) are required for the cloudposse/github-action-terraform-plan-storage action
hash_key: id
range_key: createdAt
gitops:
vars:
enabled: true
github_actions_iam_role_enabled: true
github_actions_iam_role_attributes: ["gitops"]
github_actions_allowed_repos:
- "acmeOrg/infra"
s3_bucket_component_name: gitops/s3-bucket
dynamodb_component_name: gitops/dynamodb
The gitops component is what grants GitHub OIDC permissions to access the bucket and dynamodb table
Oh, let me get those defaults
# Deploys S3 Bucket and DynamoDB table for managing Terraform Plans
# Then deploys GitHub OIDC role for access these resources
# NOTE: If you make any changes to this file, please make sure the integration tests still pass in the
# <https://github.com/cloudposse/github-action-terraform-plan-storage> repo.
import:
- catalog/s3-bucket/defaults
- catalog/dynamodb/defaults
- catalog/github-oidc-role/gitops
components:
terraform:
gitops/s3-bucket:
metadata:
component: s3-bucket
inherits:
- s3-bucket/defaults
vars:
name: gitops
allow_encrypted_uploads_only: false
gitops/dynamodb:
metadata:
component: dynamodb
inherits:
- dynamodb/defaults
vars:
name: gitops-plan-storage
# This key (case-sensitive) is required for the cloudposse/github-action-terraform-plan-storage action
hash_key: id
range_key: ""
# Only these 2 attributes are required for creating the GSI,
# but there will be several other attributes on the table itself
dynamodb_attributes:
- name: 'createdAt'
type: 'S'
- name: 'pr'
type: 'N'
# This GSI is used to Query the latest plan file for a given PR.
global_secondary_index_map:
- name: pr-createdAt-index
hash_key: pr
range_key: createdAt
projection_type: ALL
non_key_attributes: []
read_capacity: null
write_capacity: null
# Auto delete old entries
ttl_enabled: true
ttl_attribute: ttl
@Matt Calhoun where do we describe the entire shape of the dynamodb table?
Doh, I’d seen the above components but missed the vars passing hash_key and range_key - thank you!!
aha, ok, sorry - also, this question has sparked a ton of issues on the backend here. Some docs weren’t updated, others missing.
No worries, thank you for checking
new TableV2(this, 'plan-storage-metadata', {
tableName: `app-${WORKLOAD_NAME}-terraform-plan-storage-metadata`,
billing: Billing.onDemand(),
pointInTimeRecovery: true,
timeToLiveAttribute: 'ttl',
partitionKey: {
name: 'id',
type: AttributeType.STRING,
},
sortKey: {
name: 'createdAt',
type: AttributeType.STRING,
},
globalSecondaryIndexes: [
{
indexName: 'pr-createdAt-index',
partitionKey: { name: 'pr', type: AttributeType.NUMBER },
sortKey: { name: 'createdAt', type: AttributeType.STRING },
projectionType: ProjectionType.ALL,
},
],
})
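For anyone provisioning the same thing in plain Terraform instead of CDK, a rough equivalent based on the schema above (the table name is a placeholder):
resource "aws_dynamodb_table" "plan_storage_metadata" {
  name         = "terraform-plan-storage-metadata"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"
  range_key    = "createdAt"

  attribute {
    name = "id"
    type = "S"
  }
  attribute {
    name = "createdAt"
    type = "S"
  }
  attribute {
    name = "pr"
    type = "N"
  }

  global_secondary_index {
    name            = "pr-createdAt-index"
    hash_key        = "pr"
    range_key       = "createdAt"
    projection_type = "ALL"
  }

  ttl {
    attribute_name = "ttl"
    enabled        = true
  }

  point_in_time_recovery {
    enabled = true
  }
}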
Think I got it
OK, some progress…
- name: Store Plan
if: (github.event_name == 'pull_request') || (github.event.issue.pull_request && github.event.comment.body == '/terraform plan')
uses: cloudposse/github-action-terraform-plan-storage@v1
id: store-plan
with:
action: storePlan
planPath: tfplan
component: ${{ github.event.repository.name }}
stack: ${{ steps.get_issue_number.outputs.result }}-tfplan
commitSHA: ${{ github.event.pull_request.head.sha || github.sha }}
tableName: app-playback-terraform-plan-storage-metadata
bucketName: app-playback-terraform-state-prod-us-east-1
Plan runs, can see object in S3
but nothing in DDB…no writes, can see a single read. What am I not grokking about what this thing does? I kinda expected to see some metadata about the plan/pr in DDB? Action completed successfully.
Yes, you should see something like that. Our west-coast team should be coming on line shortly and can advise.
##[debug]tableName: badgers
##[debug]bucketName: badgers
##[debug]metadataRepositoryType: dynamo
##[debug]planRepositoryType: s3
##[debug]bucketName: badgers
##[debug]Node Action run completed with exit code 0
##[debug]Finishing: Store Plan
^^ debug output
Hi Josh…just to be clear, do you have a PR open? It should write metadata to the DDB table whenever you push changes. And another sanity check, does your user/role in GHA have access to write to that table in DDB?
Hey Matt - thanks so much. Yes PR is open (internal GHE), chaining roles and yes it has access to DDB and S3. I was getting explicit permission denied writing to DDB before adding it - so something was trying…
Hmm, I wonder if the same PR meant it didn’t try to re-write on a subsequent push? Let me push another change.
Committed a new change, can see S3 plan file under the new sha…but still nothing in DDB (Item count/size etc 0) …
Actually, maybe the previous DDB IAM issue was on scan… which would make sense, as I can see reads on the table but no writes.
Yup, confirmed previous fail was trying to scan. (since succeeded)
Bah - so sorry, problem between keyboard and computer. I’m looking in the wrong account - this is working as intended
When you’re ready, we have example workflows we can share for drift detection/remediation as well.
Working nicely - thank you!
Does anyone know of, or has anyone built, a custom VSCode automation for refactoring a resource or module argument out into a variable?
As in, I want to select an argument value like in the below screenshot and be able to right click or hit a keyboard shortcut, and it’ll create a new variable block in variables.tf with the argument’s name and that value as the default value?
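Concretely, the refactor being described would take a hard-coded argument and produce something like this (hypothetical resource/variable names):
# Before (the value selected in the editor):
#   resource "aws_instance" "app" {
#     instance_type = "t3.micro"
#   }

# After: a new block generated in variables.tf ...
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

# ... and the argument rewritten to reference it
resource "aws_instance" "app" {
  instance_type = var.instance_type
}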
No but that sounds very cool.
Perhaps Copilot can be trained to do it?
Would be super useful, right?
I think I tried to use copilot to do it, but it only wanted to do it in the same file.
I’m sure this would be a no-brainer for somebody who has built a proper VSCode app.
Or even just something on the command line
For this specific ask, I am 99% sure that with the right prompt and a shell script, you could iterate over all the files with mods, because the prompt would be easy and the chances of getting it wrong are small.
AI on the command line
In your prompt, just make sure to convey that it should respond in HCL and only focus on parameterizing constants. You can give it the convention for naming variables, etc.
I’m working on this extension for the monaco editor, and will port it to a vscode plugin among other things soon
Will check out mods for this… Charmbracelet
@george.m.sedky awesome to hear! Happy to be a beta tester when you’ve got a VSCode plugin.
@Matt Gowie this is how it works now with monaco before the VS code plugin https://youtu.be/ktyXJpf36W0?si=xJaaQ5Pn1i7L0m_j
Closed - Managed to make it work by switching to a different region that supports the API Gateway that I’m trying to create
Hey guys, I’m experiencing something weird, I have a very simple AWS API Gateway definition and for some reason it’s not being created–it’s stuck in creation state:
What is the issue? It’s stuck trying to create… Is there any error you’re facing?
Hi @tamish60, I’ve managed to make it work. This region doesn’t seem to have support for the kind of API Gateway that I’m trying to create; I moved to a different region and it worked just fine.
2024-04-13
2024-04-15
2024-04-16
Hi,
i’m currently using terragrunt and want to migrate to atmos. One very convenient thing about terragrunt is that i can simply overwrite the terraform module git repo urls with a local path (https://terragrunt.gruntwork.io/docs/reference/cli-options/#terragrunt-source-map). This allows me to develop tf modules using terragrunt in an efficient way.
Is there anything like it in atmos? If not, what is the best way to develop tf modules while using atmos?
(@Stephan Helas best to use atmos)
So we’ve seen more and more similar requests like these, and I can really identify with this request as something we could/should support via the atmos vendor command.
One of our design goals in atmos is to avoid code generation / manipulation as much as possible. This ensures future compatibility with terraform.
So while we don’t support it the way terragrunt does, we support it the way terraform already supports it. :smiley:
That’s using the _override.tf pattern.
We like this approach because it keeps code as vanilla as possible while sticking with features native to Terraform.
https://developer.hashicorp.com/terraform/language/files/override
So, let’s say you have a main.tf with something like this (from the terragrunt docs you linked):
module "example" {
source = "github.com/org/modules.git//example"
// other parameters
}
To do what you want to do in native terraform, create a file called main_override.tf:
module "example" {
source = "/local/path/to/modules//example"
}
You don’t need to duplicate the rest of the definition, only the parts you want to “override”, like the source
Override files merge additional settings into existing configuration objects. Learn how to use override files and about merging behavior.
So this will work together with the atmos vendor command.
- create components/terraform/mycomponent
- create the main_override.tf file with the new source (for example)
- configure vendoring via vendor.yml or component.yml
- run atmos vendor
This works because atmos vendor will not overwrite the main_override.tf file, since that does not exist upstream. You can use this strategy to “monkey patch” anything in terraform.
https://en.wikipedia.org/wiki/Monkey_patch
In computer programming, monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. It is used to extend or modify the runtime code of dynamic languages such as Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, and Lisp without altering the original source code.
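As a concrete (hypothetical) example, a versions_override.tf dropped next to a vendored component can pin a different provider version without touching the upstream file:
# versions_override.tf — merged over versions.tf by Terraform's override mechanism
terraform {
  required_providers {
    github = {
      source  = "integrations/github"
      version = ">= 6.0"
    }
  }
}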
@Stephan Helas also consider the following: if you are developing a new version of a component and don’t want to “touch” the existing version. Let’s say you already have a TF component in components/terraform/my-component
and have an Atmos manifest like this
components:
terraform:
my-component:
metadata:
component: my-component # Point to the Terraform component (code)
vars:
...
then you want to create a new (complete diff) version of the TF component (possibly with breaking changes). You place it into another folder in components/terraform
, for example in components/terraform/my-component/v2
or in components/terraform/my-component-v2
(any folder names can be used, it’s up to you to organize it)
then suppose you want to test it only in the dev
account. You add this manifest to the top-level dev
stack, e.g. in the plat-ue2-dev
stack
components:
terraform:
my-component: # You can also use a new Atmos component name, e.g. my-component/v2:
metadata:
component: my-component/v2 # Point to the new Terraform component (code) under development
vars:
...
Wow, thx very much. i’ll try the vendor approach.
in fact, my second question was, how to develop a newer component versions
you can have many different versions of the “same” Terraform component, and point to them in Atmos stack manifests at diff scopes (orgs, tenant, account, region)
i think i got the idea, but i need to test it though. its a lot to take in. i love the deep yaml merging approach a lot. i’ll try it later and will probably come back with new questions :)
i didn’t know about the override feature of terraform. thats awesome - and yes - i’ll totally use that. So, as soon as i have mastered the handling of the remote-state (something terragrunt does for me) i think i can leave terragrunt behind
as soon as i have mastered the handling of the remote-state (something terragrunt does for me) Are you clear on the path forward here? As you might guess, we take the “terraform can handle that natively for you” approach by making components more intelligent using data sources and relying less on the tooling.
That said, if that wouldn’t work for you, would like to better understand your challenges.
in fact, my second question was, how to develop a newer component versions
We have a lot of “opinions” on this that diverge from established terragrunt patterns. @Andriy Knysh (Cloud Posse) alluded to it in his example.
This probably warrants a separate thread in atmos, if / when you need any guidance.
as a short summary, i use terragrunt (like many others i suppose) mainly for generating backend and provider config. on top of that i use the dependency feature, for example to place an ec2 instance in a generated subnet. as i don’t have to deal with remote states manually - i don’t know how it works behind the terragrunt curtain.
right now i try to understand the concept of the “context” module. after that i’ll try to create the backend config and use remote state. but i think i will at least need another day for that.
Yes, all good questions. We should probably write up a guide that maps concepts and techniques common to #terragrunt and how we accomplish them in atmos
Backends is a good one - we didn’t like that TG generates it, when the whole point is IaC. So we wrote a component for that.
The best write up on context might be by @Matt Gowie https://masterpoint.io/updates/terraform-null-label/
A post highlighting one of our favorite terraform modules: terraform-null-label. We dive into what it is, why it’s great, and some potential use cases in …
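The short version, as a sketch: every cloudposse module takes the same context and derives consistent names/tags from it, e.g.:
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "acme"
  environment = "ue2"
  stage       = "dev"
  name        = "app"
}

# label.id   => "acme-ue2-dev-app" (with the default label_order)
# label.tags => map of Namespace/Environment/Stage/Name tags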
ok. thx so much. its a real shame that i didn’t find out about atmos sooner. could have saved me from a lot of weird things i did in terragrunt.
Atmos already generates many things like the backend config for diff clouds, and the override
pattern (e.g. for providers)
https://atmos.tools/core-concepts/components/terraform-backends
Configure Terraform Backends.
the remote-state
question comes up often. We did not want to make YAML a programming language, but use it just for config, so the remote state is done in the TF module and then configured in Atmos, see https://atmos.tools/core-concepts/components/remote-state
The Terraform Component Remote State is used when we need to get the outputs of an Terraform component,
having said that, Atmos now supports Go
templating with Sprig and Gomplate functions and datasources
Atmos supports Go templates in stack manifests.
which brings YAML with Go
templates close to being a “programming” language . (w/o hardcoding anything in Atmos to work with functions in YAML, just embedding the existing template engines)
having said that, we might consider adding an Atmos-specific remote-state
datasource to get the remote state for a component in Atmos directly (YAML + Go templates) w/o going through Terraform.
we might support both approaches (no ETA on the “remote-state” template datasource yet)
@Andriy Knysh (Cloud Posse)
how would i get access to the provisioning documentation? I’ve created an account to log in.
@Stephan Helas that part of our documentation (docs.cloudposse.com/reference-architecture) is included with our commercial reference architecture.
That’s how we make money at Cloud Posse to support our open source. If you’d like more information, feel free to book a meeting with me at cloudposse.com/quiz
@Erik Osterman (Cloud Posse)
i’ve got a design question:
so i tried to use s3 backend and versioned components. for that i simply created folders in components like this:
.
└── terraform
├── infra
│ ├── account-map
│ └── vpc-flow-logs-bucket
├── tfstate-backend
│ └── v1.4.0
└── wms-base
├── develop
├── v1.0.0
└── v1.1.0
I then created the stack like this:
▶ cat stacks/KN/wms/it03/_defaults.yaml
import:
- mixins/_defaults
- mixins/tennant/it03
components:
terraform:
it03/defaults:
metadata:
type: abstract
vars:
location: Wandsbek
lang: de
base:
metadata:
type: real
component: wms-base/v1.0.0
inherits:
- base/defaults
- it03/defaults
if i include the s3 backend without a key prefix definition, the prefix would be named like the component (as described in the documentation). this would lead to a loss of state if i changed the component version (as a new s3 prefix would be created).
so i integrated a tenant mixin for the prefix like this:
import:
- catalog/component/wms/defaults
- mixins/region/eu-central-1
vars:
tenant: wms
environment: it03
tags:
instance: it03
terraform:
backend:
s3:
workspace_key_prefix: wms-it03
So that the bucket prefix name stays stable.
questions:
• is this a sound design or am i doing it wrong?
You identified and understand the gist of the problem. The workspace_key_prefix
is how to keep the terraform state stable, when versioning multiple components.
So in your example
└── wms-base
├── develop
├── v1.0.0
└── v1.1.0
This is the correct way to version components. The idea is ultimately that you want all stacks that use a given component to converge on the same one.
Switching versions is as easy as your example, with the caveat that the location in TF state would change.
base:
metadata:
type: real
component: wms-base/v1.0.0
So we need to ensure the backend.s3.workspace_key_prefix is stable for that component, but not all components.
So we allow component to vary and point to the version. And we require backend.s3.workspace_key_prefix to point to a constant: the component name without the version, like wms-base.
You’re doing part of that, but this seems wrong to me:
terraform:
backend:
s3:
workspace_key_prefix: wms-it03
@Andriy Knysh (Cloud Posse) would be best to chime in here.
@Gabriela Campana (Cloud Posse) we should have a “design pattern” that describes how to use multiple versions of a component while keeping state stable
thx. this was helpful. a design pattern would be awesome.
we’ll describe such a pattern in the docs
@Stephan Helas workspace_key_prefix is by default the terraform component name (in components/terraform/<component>), pointed to by the metadata.component attribute.
So yes, if you want to keep it the same regardless of the version, you can specify it in backend.s3.workspace_key_prefix and use it for all versions. For example:
components:
terraform:
it03/defaults:
metadata:
type: abstract
vars:
location: Wandsbek
lang: de
backend:
s3:
workspace_key_prefix: wms-base
it03/v1.0:
metadata:
type: real
component: wms-base/v1.0.0
inherits:
- base/defaults
- it03/defaults
it03/v1.1:
metadata:
type: real # type 'real' is the default and optional, you can omit it
component: wms-base/v1.1.0
inherits:
- base/defaults
- it03/defaults
don’t use globals like the following since it will be applied to ALL components in your infra
terraform:
backend:
s3:
workspace_key_prefix: wms-it03
in the example above, the TF state S3 bucket will look like this:
bucket:
wms-base: # this is the key prefix, which becomes a bucket folder for the TF component
<stack-name>-it03-v1.0: # subfolder for the `it03/v1.0` component in the stack
`tf_state` file
<stack-name>-it03-v1.1: # subfolder for the `it03/v1.1` component in the stack
`tf_state` file
<stack-2-name>-it03-v1.1: # subfolder for the `it03/v1.1` component in another stack `stack-2-name`
`tf_state` file
if you want to keep not only workspace_key_prefix the same for all versions of the component, but also the Terraform workspace, you can do the following:
components:
  terraform:
    it03/defaults:
      metadata:
        type: abstract
      vars:
        location: Wandsbek
        lang: de
      backend:
        s3:
          workspace_key_prefix: wms-base
    it03/v1.0:
      metadata:
        type: real
        component: wms-base/v1.0.0
        inherits:
          - base/defaults
          - it03/defaults
        # Override Terraform workspace
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}"
    it03/v1.1:
      metadata:
        type: real # type 'real' is the default and optional, you can omit it
        component: wms-base/v1.1.0
        inherits:
          - base/defaults
          - it03/defaults
        # Override Terraform workspace
        terraform_workspace_pattern: "{tenant}-{environment}-{stage}"
Terraform Workspaces.
note that in this case, both workspace_key_prefix
and TF workspace will be the same for it03/v1.0
and it03/v1.1
components, so you will not be able to provision both at the same time (or if you do so, one will override the other in the state since they will be using the same state file in the same folder)
bucket:
  wms-base: # this is the key prefix, which becomes a bucket folder for the TF component
    <{tenant}-{environment}-{stage}>: # this is the subfolder for the Terraform workspace
      `tf_state` file
Thx again for all the information.
@Andriy Knysh (Cloud Posse)
i can’t inherit the terraform_workspace_pattern, right? so i need to overwrite the terraform_workspace_pattern on every component in the stack
no, you can’t inherit it b/c it’s in metadata
section which is per component only and is not inheritable
so i need to overwrite the terraform_workspace_pattern on every component in the stack
let me know why you would want to do it in the first place
Hey all,
(Edit: got the initial module working, now have questions about whether I’ve set it up optimally, and how to configure other ses features)
I’m attempting to get the cloudposse/ses/aws terraform module working but am not having much luck. I’ve only really done small tweaks to terraform scripts so it’s probably just a skill issue, but hopefully someone here can point me in the right direction.
The project is using terragrunt and I’ve attempted to copy over ‘/examples/complete’ code.
To my amateur eye everything looks ok, but I’m getting this error:
│ Error: invalid value for name (must only contain alphanumeric characters, hyphens, underscores, commas, periods, @ symbols, plus and equals signs)
│
│ with module.ses.module.ses_user.aws_iam_user.default[0],
│ on .terraform/modules/ses.ses_user/main.tf line 11, in resource "aws_iam_user" "default":
│ 11: name = module.this.id
in my ./modules/ses folder I have copied the following example files exactly:
context.tf
outputs.tf
variables.tf
versions.tf
My main.tf looks like this:
module "ses" {
source = "cloudposse/ses/aws"
version = "0.25.0"
domain = var.domain
zone_id = ""
verify_dkim = false
verify_domain = false
context = module.this.context
}
I don’t want the route53 verification stuff set up (as we will need to set up the MX record ourselves) so I didn’t add that resource, or the vpc module. Is that where I’ve messed up?
and I also have a terraform.tf file with this content (matching how others have set up other modules in our project):
provider "aws" {
region = var.region
}
provider "awsutils" {
region = var.region
}
I’m really stuck on why this error is being thrown inside this ses.ses_user module. Any help at all would be greatly appreciated!
I removed the terraform.tf file and moved the providers into the main.tf file, and added in the vpc module and route53 resource to see if that was the issue, but it wasn’t.
provider "aws" {
region = var.region
}
provider "awsutils" {
region = var.region
}
module "vpc" {
source = "cloudposse/vpc/aws"
version = "2.1.1"
ipv4_primary_cidr_block = "<VPC_BLOCK - not sure if confidential>"
context = module.this.context
}
resource "aws_route53_zone" "private_dns_zone" {
name = var.domain
vpc {
vpc_id = module.vpc.vpc_id
}
tags = module.this.tags
}
module "ses" {
source = "cloudposse/ses/aws"
version = "0.25.0"
domain = var.domain
zone_id = ""
verify_dkim = false
verify_domain = false
context = module.this.context
}
Really not sure what’s going wrong here. I’ve matched the examples pretty much exactly. Is it because it’s running via terragrunt or something?
Am I supposed to set up every parameter in the context.tf file by passing in variables? It looks like there’s a “name” var in there which defaults to null - that could explain why I’m getting the error maybe?
I think that might be what I was missing, passing in the name variable. I just set it to our application name and it worked! I’ll refactor things to pass these variables down from the environment so it should be good, finally got the basic ses setup up.
Now the question is how to configure all the other SES stuff.
Is it possible to configure the ‘Monitoring your email sending’ (publish to SNS topic) via the terraform script? This requires a configuration set to be applied too; is that possible via the module? I can’t see any options in the inputs for this.
You should give it a simple name ses-user
and it should work
The name goes from the context as an input, to being used in the module.this null label, and then gets passed to the ses module using the context input. The module.this.id
is the fully qualified name composed of the namespace, environment, stage, name, and other inputs. Technically none of those are required, but at least one of them, such as name,
needs a value for the id
to be non-empty, so you don’t get that error message
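For reference, a minimal sketch of the fix described above: give the null-label context at least one identifying input. The values here are placeholders, not the poster’s actual settings:
module "ses" {
  source  = "cloudposse/ses/aws"
  version = "0.25.0"

  # Any one of the null-label inputs (namespace, environment, stage, name, ...)
  # is enough to make module.this.id non-empty; these values are examples only.
  namespace = "acme"
  stage     = "dev"
  name      = "ses"

  domain        = var.domain
  zone_id       = ""
  verify_dkim   = false
  verify_domain = false

  context = module.this.context
}
Setting the same inputs as variables on the root module (so they flow in via context) works equally well.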
Awesome thanks for confirming. I had it as “ses” to start with, then I’ve changed it to use the same iam user name that the backend uses for s3 as well so it has the perms to talk to both services. One thing I haven’t checked yet is whether it deletes the other perms on that user or not.
There’s also a deprecated warning when running the terraform job:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
╷
│ Warning: Argument is deprecated
│
│ with module.ses.module.ses_user.module.store_write[0].aws_ssm_parameter.default["/system_user/ccp-dev/access_key_id"],
│ on .terraform/modules/ses.ses_user.store_write/main.tf line 22, in resource "aws_ssm_parameter" "default":
│ 22: overwrite = each.value.overwrite
│
│ this attribute has been deprecated
│
│ (and one more similar warning elsewhere)
╵
looks like a deprecation in the ssm parameter write module
https://github.com/cloudposse/terraform-aws-ssm-parameter-store/issues/58
Describe the Bug
After a terraform apply
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
╷
│ Warning: Argument is deprecated
│
│ with module.ses.module.ses_user.module.store_write[0].aws_ssm_parameter.default["/system_user/ccp-dev/access_key_id"],
│ on .terraform/modules/ses.ses_user.store_write/main.tf line 22, in resource "aws_ssm_parameter" "default":
│ 22: overwrite = each.value.overwrite
│
│ this attribute has been deprecated
│
│ (and one more similar warning elsewhere)
From upstream docs
Warning
overwrite also makes it possible to overwrite an existing SSM Parameter that’s not created by Terraform before. This argument has been deprecated and will be removed in v6.0.0 of the provider. For more information on how this affects the behavior of this resource, see this issue comment.
Expected Behavior
No deprecation warning
Steps to Reproduce
Run terraform apply using this module
Screenshots
No response
Environment
No response
Additional Context
• https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_parameter
Hello, Have you had a chance to test the new Terraform 1.8.0 version? In my experience, the new version appears to be approximately three times slower compared to version 1.7.5. I’ve created a new GitHub Issue regarding this performance problem, but we need to reproduce the issue (we need more cases). If you upgrade to 1.8, could you gauge the performance and memory usage before and after, then share the results here: https://github.com/hashicorp/terraform/issues/34984
I know one of our customers tested it (accidentally) in their github actions by not pinning their terraform version. When 1.8 was released, their terraform started throwing exceptions and crashing. Pinning to the previous 1.7.x release fixed it.
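For anyone wanting to avoid picking up a new minor release by accident, a minimal sketch of pinning the CLI version in the configuration itself (the constraint shown is just an example):
terraform {
  # Allow only 1.7.x; init/plan will refuse to run under 1.8.x
  required_version = "~> 1.7.0"
}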
The new release - 1.8.2 - has solved the performance issue. In my case it’s even faster than 1.7.5
2024-04-17
v1.8.1 1.8.1 (April 17, 2024) BUG FIXES:
Fix crash in terraform plan when referencing a module output that does not exist within the try(…) function. (#34985) Fix crash in terraform apply when referencing a module with no planned changes. (#34985)…
This PR fixes two crashes within Terraform v1.8.0. In both cases a module expansion was being missed, and then crashing when something tried to reference the missed module. The first occurs when re…
Terraform Version
$ terraform --version
Terraform v1.8.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v5.39.1
+ provider registry.terraform.io/hashicorp/local v2.4.1
+ provider registry.terraform.io/hashicorp/null v3.2.2
+ provider registry.terraform.io/hashicorp/random v3.6.0
Terraform Configuration Files
...terraform config...
Debug Output
n/a
Expected Behavior
$ time terraform validate
Success! The configuration is valid.
real 0m1.452s
user 0m2.379s
sys 0m0.318s
Actual Behavior
$ time terraform validate
Success! The configuration is valid.
real 0m9.720s
user 0m2.701s
sys 0m0.444s
Steps to Reproduce
git clone <https://github.com/philips-labs/terraform-aws-github-runner.git>
cd terraform-aws-github-runner/examples/multi-runner/
time terraform validate
Additional Context
Terraform v1.8 is ~3x slower than v1.7 and consumes ~5x more memory.
References
No response
2024-04-18
how powerful are the provider functions? trying to understand the use cases
extremely powerful.
You can write your own function in golang within a provider and then run the function in terraform.
In OpenTofu 1.7, where provider functions are coming as well, we have some cool functional expansions on it that allow use cases like e.g. writing custom functions in Lua files side-by-side with your .tf files.
We’ll have a livestream next week on Wednesday diving into some of the details: https://www.youtube.com/watch?v=6OXBv0MYalY
opentofu 1.7 is still in beta tho, no ?
v1.7.0-beta1 Do not use this release for production workloads! It’s time for the first beta release of the 1.7.0 version! This includes a lot of major and minor new features, as well as a ton of community contributions! The highlights are:
State Encryption (docs) Provider-defined Functions (https://1-7-0-beta1.opentofu.pages.dev/docs/language/functions/#provider-defined-functions)…
Beta 1 released this morning!
writing custom functions in Lua files side-by-side with your .tf files.
function main_a( input )
local animal_sounds = {
cat = 'meow',
dog = 'woof',
cow = 'moo'
}
return animal_sounds
end
We’re working to get the experimental Lua provider on the registry as we speak, so you’ll be able to play around with it when test driving the beta!
are there other languages being supported besides Lua?
It’s not support for any specific languages per se, we’re just enabling the creation of providers that can dynamically expose custom functions based on e.g. a file passed to the provider as a config parameter.
So like here
provider "lua" {
lua = file("./main.lua")
}
So the community will be able to create providers for whatever languages they want.
However, the providers still have to be written in Go, and after investigating options for embedding both Python and JavaScript in a provider written in Go, it’s not looking very promising.
I’ll be doing a PoC for dynamically writing those in Go though, via https://github.com/traefik/yaegi
how about security side with functions?
do you mean how does opentofu prevent supply chain attacks with providers that integrate new languages ?
I don’t think that’s any different to normal providers and their resources though, is it?
Btw. Lua provider is in and ready to play with https://github.com/opentofu/terraform-provider-lua
awesome
thinking if that is an issue if the codes in lua will replace Terraform binaries etc
this is freggin awesome!
Using provider functions, it will be possible to run Lua alongside terraform / opentofu.
function main_a( input )
local animal_sounds = {
cat = 'meow',
dog = 'woof',
cow = 'moo'
}
return animal_sounds
end
Then call that from terraform.
terraform {
required_providers {
tester = {
source = "terraform.local/local/testfunctions"
version = "0.0.1"
}
}
}
provider "tester" {
lua = file("./main.lua")
}
output "test" {
value = provider::tester::main_a(tomap({"foo": {"bar": 190}}))
}
See example: https://github.com/opentofu/opentofu/pull/1491
Granted, I don’t love the way this looks: provider::tester::main_a(tomap({"foo": {"bar": 190}}))
https://sweetops.slack.com/archives/CB6GHNLG0/p1713454132172149?thread_ts=1713440027.477669&cid=CB6GHNLG0
Prior to this change a single unconfigured lazy provider instance was used per provider type to supply functions. This used the functions provided by GetSchema only.
With this change, provider function calls are detected and supplied via GraphNodeReferencer and are used in the ProviderFunctionTransformer to add dependencies between referencers and the providers that supply their functions.
With that information EvalContextBuiltin can now assume that all providers that require configuration have been configured by the time a particular scope is requested. It can then use its initialized providers to supply all requested functions.
At a high level, it allows providers to dynamically register functions based on their configurations.
main.lua
function main_a( input )
local animal_sounds = {
cat = 'meow',
dog = 'woof',
cow = 'moo'
}
return animal_sounds
end
terraform {
required_providers {
tester = {
source = "terraform.local/local/testfunctions"
version = "0.0.1"
}
}
}
provider "tester" {
lua = file("./main.lua")
}
output "test" {
value = provider::tester::main_a(tomap({"foo": {"bar": 190}}))
}
Output:
Changes to Outputs:
+ test = {
+ cat = "meow"
+ cow = "moo"
+ dog = "woof"
}
This requires some enhancements to the HCL library and is currently pointing at my personal fork, with a PR into the main HCL repository: hashicorp/hcl#676. As this may take some time for the HCL Team to review, I will likely move the forked code under the OpenTofu umbrella before this is merged [done].
This PR will be accompanied by a simple provider framework for implementing function providers similar to the example above.
Related to #1326
Target Release
1.7.0
It is awesome! To be clear: this will likely only be available for OpenTofu and not for Terraform, because the OpenTofu devs are taking the provider functions feature further with this update they came up with. I haven’t seen anything that says Terraform will do the same, which will be an interesting divergence!
Oh, I didn’t catch that.
I was more looking to see provider functions adding uniformity and improving interoperability, rather than introducing incompatibilities
This requires some enhancements to the HCL library and is currently pointing at my personal fork, with a PR into the main HCL repository: hashicorp/hcl#676. As this may take some time for the HCL Team to review, I will likely move the forked code under the OpenTofu umbrella before this is merged [done].
With the focus on functions in recent hcl releases, I thought it time to introduce the ability to inspect which functions are required to evaluate expressions. This mirrors the Variable traversal present throughout the codebase.
This allows for better error messages and optimizations to be built around supplying custom functions by consumers of HCL.
These changes are backwards compatible and are not a breaking change. I explicitly introduced hcl.ExpressionWithFunctions to allow consumers to opt into supporting function inspection. I would recommend that this be moved into hcl.Expression if a major version with API changes is ever considered.
Got it
OpenTofu will still likely add provider functions that the providers define and open up, and AFAIU those should be available across OTF + Terraform, BUT these dynamically defined functions that you can create yourself, which are project specific, will be OTF only.
This is my understanding so far, so not 100% on that.
Got a quick question about the projects that Terraform depends on, e.g. https://github.com/hashicorp/hcl/tree/main. Why weren’t the licenses of those projects also converted by HashiCorp?
2024-04-19
2024-04-20
OpenTofu fended off a HashiCorp legal threat and shipped its 1.7 beta release with the disputed feature intact, along with client-side state encryption.
Shouldn’t this be in the opentofu channel?
OpenTofu fended off a HashiCorp legal threat and shipped its 1.7 beta release with the disputed feature intact, along with client-side state encryption.
they are still quite related, aren’t they?
Learn how to use the OpenTofu CLI to migrate local or remote state to a Cloud Backend.
2024-04-22
2024-04-23
Getting this error while changing the version of the helm chart, please suggest
Looks like a potential Hashicorp buyout by IBM: https://www.reuters.com/markets/deals/ibm-nearing-buyout-deal-hashicorp-wsj-reports-2024-04-23/
International Business Machines is nearing a deal to buy cloud software provider HashiCorp , according to a person familiar with the matter.
What a shame.
International Business Machines is nearing a deal to buy cloud software provider HashiCorp , according to a person familiar with the matter.
what bad timing for an acquisition!
$6.4 billion acquisition adds suite of leading hybrid and multi-cloud lifecycle management products to help clients grappling with today’s AI-driven…
more fun
Am I just missing it, or is there no data source to return the available EKS Access Policies? I opened a ticket to request a new data source, please give it a 👍 if it would help you also! https://github.com/hashicorp/terraform-provider-aws/issues/37065
Description
I would like a data source that returns all the available EKS Access Policies, essentially the results from aws eks list-access-policies
. This would help simplify the constructs/logic around configuring EKS Access Policy Associations. It could also be used by module authors to validate user input.
Requested Resource(s) and/or Data Source(s)
• data "aws_eks_access_policies"
Potential Terraform Configuration
data "aws_eks_access_policies" "this" {}
References
• https://docs.aws.amazon.com/eks/latest/APIReference/API_ListAccessPolicies.html
Would you like to implement a fix?
None
AFAIK there is not such a data source. The Access Policies are, IMHO, only useful for bootstrapping access to a new cluster or one where you lost access to the admin IAM role. You really should be using Kubernetes RBAC for access control in most cases.
What does “using Kubernetes RBAC” look like from the Access Entry perspective? I only enabled API mode.
See AWS docs on access entries and Kubernetes docs on RBAC for details.
If an access entry’s type is STANDARD, and you want to use Kubernetes RBAC authorization, you can add one or more group names to the access entry. After you create an access entry you can add and remove group names. For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based authorization (RBAC) objects. Create Kubernetes RoleBinding or ClusterRoleBinding objects on your cluster that specify the group name as a subject for kind: Group. Kubernetes authorizes the IAM principal access to any cluster objects that you’ve specified in a Kubernetes Role or ClusterRole object that you’ve also specified in your binding’s roleRef.
Grant users and apps access to Kubernetes APIs.
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API. To enable RBAC, start the API server with the –authorization-mode flag set to a comma-separated list that includes RBAC; for example: kube-apiserver –authorization-mode=Example,RBAC –other-options –more-options API objects The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding.
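To make the two quoted docs concrete, here is a minimal sketch (all names are made up) of wiring an access entry’s Kubernetes group to a ClusterRoleBinding, using the aws and kubernetes providers:
# Access entry: maps an IAM principal to a Kubernetes group (names are hypothetical)
resource "aws_eks_access_entry" "developers" {
  cluster_name      = "my-cluster"
  principal_arn     = "arn:aws:iam::111111111111:role/developers"
  kubernetes_groups = ["eks-developers"]
}

# RBAC: grants that group read-only access via the built-in "view" ClusterRole
resource "kubernetes_cluster_role_binding" "developers_view" {
  metadata {
    name = "eks-developers-view"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "view"
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "eks-developers"
  }
}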
That sounds hard. I’m still a kubernetes noob. We have no k8s dedicated platform team for this work. Just using built-in cloud stuff as much as possible. I think I’d rather just spin up more accounts and/or clusters than try to bother with rbac at that level
I understand. In that case, I’d just leave the list of policies hard coded for now. There are only 4.
That’s what I did, yeah. A map of policy name to arn. Just would be nice to have a data source. I think there’s like 6 or 7 policies now. On my phone so don’t have my code in front of me
Oh and we do work in govcloud and iso partitions. So a hardcoded map gets cumbersome…
I see 6 policies now. But if you’re not going to lock things down, then you can just give everyone AmazonEKSClusterAdminPolicy
which is full control.
it’s kinda “cluster as a service” in this case. I’m just writing the terraform, not managing the cluster or any apps. I don’t care what they do with it afterwards. maybe they’ll use eks rbac. I’m just giving them the option to manage the eks access entries, and trying to simplify the input where I can
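For the hard-coded map approach mentioned above, a rough sketch that also copes with GovCloud/ISO partitions by deriving the ARN prefix from the aws_partition data source. The policy list is illustrative, not exhaustive; check aws eks list-access-policies for the current set:
data "aws_partition" "current" {}

locals {
  # Short policy name => full access policy ARN
  eks_access_policy_arns = {
    for name in [
      "AmazonEKSAdminPolicy",
      "AmazonEKSClusterAdminPolicy",
      "AmazonEKSEditPolicy",
      "AmazonEKSViewPolicy",
    ] : name => "arn:${data.aws_partition.current.partition}:eks::aws:cluster-access-policy/${name}"
  }
}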
You can look at what we did for guidance and inspiration
variable "access_entry_map" {
type = map(object({
# key is principal_arn
user_name = optional(string)
# Cannot assign "system:*" groups to IAM users, use ClusterAdmin and Admin instead
kubernetes_groups = optional(list(string), [])
type = optional(string, "STANDARD")
access_policy_associations = optional(map(object({
# key is policy_arn or policy_name
access_scope = optional(object({
type = optional(string, "cluster")
namespaces = optional(list(string))
}), {}) # access_scope
})), {}) # access_policy_associations
})) # access_entry_map
description = <<-EOT
Map of IAM Principal ARNs to access configuration.
Preferred over other inputs as this configuration remains stable
when elements are added or removed, but it requires that the Principal ARNs
and Policy ARNs are known at plan time.
Can be used along with other `access_*` inputs, but do not duplicate entries.
Map `access_policy_associations` keys are policy ARNs, policy
full name (AmazonEKSViewPolicy), or short name (View).
It is recommended to use the default `user_name` because the default includes
IAM role or user name and the session name for assumed roles.
As a special case in support of backwards compatibility, membership in the
`system:masters` group is translated to an association with the ClusterAdmin policy.
In all other cases, including any `system:*` group in `kubernetes_groups` is prohibited.
EOT
default = {}
nullable = false
}
variable "access_entries" {
type = list(object({
principal_arn = string
user_name = optional(string, null)
kubernetes_groups = optional(list(string), null)
}))
description = <<-EOT
List of IAM principals to allow to access the EKS cluster.
It is recommended to use the default `user_name` because the default includes
the IAM role or user name and the session name for assumed roles.
Use when Principal ARN is not known at plan time.
EOT
default = []
nullable = false
}
variable "access_policy_associations" {
type = list(object({
principal_arn = string
policy_arn = string
access_scope = object({
type = optional(string, "cluster")
namespaces = optional(list(string))
})
}))
description = <<-EOT
List of AWS managed EKS access policies to associate with IAM principals.
Use when Principal ARN or Policy ARN is not known at plan time.
`policy_arn` can be the full ARN, the full name (AmazonEKSViewPolicy) or short name (View).
EOT
default = []
nullable = false
}
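For illustration, a hypothetical value for the access_entry_map input defined above (ARNs, role names, and group names are made up):
# e.g. passed to the EKS cluster component/module call or via tfvars
access_entry_map = {
  "arn:aws:iam::111111111111:role/platform-admin" = {
    access_policy_associations = {
      # Short policy name; cluster-wide scope is the default
      ClusterAdmin = {}
    }
  }
  "arn:aws:iam::111111111111:role/developers" = {
    kubernetes_groups = ["eks-developers"]
    access_policy_associations = {
      View = {
        access_scope = {
          type       = "namespace"
          namespaces = ["dev"]
        }
      }
    }
  }
}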
@Jeremy G (Cloud Posse)
2024-04-24
v1.8.2 1.8.2 (April 24, 2024) BUG FIXES:
terraform apply: Prevent panic when a provider erroneously provides unknown values. (#35048) terraform plan: Replace panic with error message when self-referencing resources and data sources from the count and for_each meta attributes. (<a href=”https://github.com/hashicorp/terraform/pull/35047“…
This PR updates the order of validations that happen after an ApplyResourceChange operation. Previously, we restored the sensitive marks on the new value before validating that is was wholly known….
Currently Terraform will crash if a resource references itself from the count or for_each attributes. This PR updates the expansion node for resources to perform the same check that the execution n…
Yea more for #random , although this was discussed in #office-hours
if this is viewed as a protest, it is a correct place Lol
Haha, yea let’s keep #terraform focused on specific questions, news and events directly tied to Terraform.
Valkey fork discussed on April 4th.
https://sweetops.slack.com/archives/CHDR1EWNA/p1712214652259389
Links from today’s office hours:
https://boehs.org/node/everything-i-know-about-the-xz-backdoor https://docs.localstack.cloud/tutorials/replicate-aws-resources-localstack-extension/ https://blog.cloudflare.com/python-workers https://techcrunch.com/2024/03/31/why-aws-google-and-oracle-are-backing-the-valkey-redis-fork/amp/ https://www.infoworld.com/article/3714688/the-bizarre-defense-of-trillion-dollar-cabals.html https://www.reddit.com/r/Terraform/comments/1bpfjjr/is_checkov_now_paywalled_by_palo_alto/ https://github.com/clivern/lynx https://aws.amazon.com/about-aws/whats-new/2024/03/slack-connect-aws-sales-collaborate-customers-partners/
https://github.com/bridgecrewio/checkov-vscode/issues/141 https://artifacthub.io/packages/helm/codefresh-onprem/codefresh
$6.4 billion acquisition adds suite of leading hybrid and multi-cloud lifecycle management products to help clients grappling with today’s AI-driven…
So I am trying to look at it optimistically. IBM is behind OpenBao (fork of Vault), which is a vote for open source. While healthy competition is good, I don’t like the current hostilities between HashiCorp and OpenTofu. Best case is if the projects merge and move back to MPL or into CNCF as a unified project. Kind of like what happened with NodeJS and IO.js. Vault is where the money is at for them.
$6.4 billion acquisition adds suite of leading hybrid and multi-cloud lifecycle management products to help clients grappling with today’s AI-driven…
I like your perspective Erik and I hope for the best out of this. Regardless, there is a path forward here and that is what matters to me.
An interesting theory I saw on LinkedIn is that it was a contingency for the acquisition to take place; that IBM required the move to BSL. To preserve their image, it is better that HashiCorp do it than RedHat. It would also explain the “business logic” for HashiCorp, making such a drastic/sweeping change, with seemingly little regard for the downstream effects and virtually no advance notice.
It’s just confusing to me that IBM would then also be a backer of OpenBao and going to the extent it has…
Yeah those two pieces seem counter-intuitive.
A false flag operation is an act committed with the intent of disguising the actual source of responsibility and pinning blame on another party. The term “false flag” originated in the 16th century as an expression meaning an intentional misrepresentation of someone’s allegiance. The term was famously used to describe a ruse in naval warfare whereby a vessel flew the flag of a neutral or enemy country in order to hide its true identity. The tactic was originally used by pirates and privateers to deceive other ships into allowing them to move closer before attacking them. It later was deemed an acceptable practice during naval warfare according to international maritime laws, provided the attacking vessel displayed its true flag once an attack had begun. The term today extends to include countries that organize attacks on themselves and make the attacks appear to be by enemy nations or terrorists, thus giving the nation that was supposedly attacked a pretext for domestic repression or foreign military aggression. Similarly deceptive activities carried out during peacetime by individuals or nongovernmental organizations have been called false flag operations, but the more common legal term is a “frameup”, “stitch up”, or “setup”.
It’s official
Yes it is.
Happy? is it seen as positive in hashicorp @Jake Lundberg (HashiCorp)?
It’s all rather new, I can’t really speak for the company or others.
My personal stance is “wait and see”. If we continue on the trajectory we’ve set for ourselves from a strategy perspective, I think it’ll be great.
I know a lot of folks seem to think we’ve stagnated, but it’s far from the truth.
I also hope it’s for the best
I’m leaving this channel, bye bye
Sorry to see you go. We have plenty of places to debate open source and other project forks. The point is just not in this channel.
2024-04-25
Don’t mind the XXXX, I removed my account ID
Hey all, I’m trying to use the cloudposse SSM patch manager module located here: https://registry.terraform.io/modules/cloudposse/ssm-patch-manager/aws/latest However, there is an issue that I can’t seem to resolve. I am trying to get it to patch a Windows machine with the key of PatchGroup and value of Windows but can’t seem to figure out the correct syntax for it. I’m also having trouble with setting the patch baseline to Windows. By default it’s Amazon Linux 2, however when I change it to Windows it gives me an error message. Here is my code snippet and the error message I get.
this error message says that EnableNonSecurity isn’t supported with Windows. You can disable non-security with var.patch_baseline_approval_rules
, but the default option is set like this:
https://github.com/cloudposse/terraform-aws-ssm-patch-manager/blob/main/variables.tf#L191-L211
default = [
{
approve_after_days = 7
compliance_level = "HIGH"
enable_non_security = true
patch_baseline_filters = [
{
name = "PRODUCT"
values = ["AmazonLinux2", "AmazonLinux2.0"]
},
{
name = "CLASSIFICATION"
values = ["Security", "Bugfix", "Recommended"]
},
{
name = "SEVERITY"
values = ["Critical", "Important", "Medium"]
}
]
}
]
Thanks Dan. How does this work with a Windows asset? Just trying to patch one windows box. No Linux is involved.
it should work the same way, but you should change the default option to what you need for windows. for example
patch_baseline_approval_rules = [
{
approve_after_days = 7
compliance_level = "HIGH"
enable_non_security = false
patch_baseline_filters = [
{
name = "PRODUCT"
values = ["WINDOWS"]
},
{
name = "CLASSIFICATION"
values = ["Security", "Bugfix", "Recommended"]
},
{
name = "SEVERITY"
values = ["Critical", "Important", "Medium"]
}
]
}
]
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ssm_patch_baseline
https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_DescribePatchProperties.html
Lists the properties of available patches organized by product, product family, classification, severity, and other properties of available patches. You can use the reported properties in the filters you specify in requests for operations such as ,
This is what I get now.
I’d recommend trying one of the options listed under “valid values”
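A hedged sketch of what Windows-oriented approval rules might look like; the filter names and values below follow the DescribePatchProperties docs linked above, but verify them against your own account and OS versions before relying on this:
patch_baseline_approval_rules = [
  {
    approve_after_days  = 7
    compliance_level    = "HIGH"
    enable_non_security = false
    patch_baseline_filters = [
      {
        # Match the Windows Server versions you actually run
        name   = "PRODUCT"
        values = ["WindowsServer2019", "WindowsServer2022"]
      },
      {
        name   = "CLASSIFICATION"
        values = ["SecurityUpdates", "CriticalUpdates"]
      },
      {
        # Windows baselines use MSRC_SEVERITY rather than SEVERITY
        name   = "MSRC_SEVERITY"
        values = ["Critical", "Important"]
      }
    ]
  }
]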
Hi! As some of you know, I’m one of the Atlantis(runatlantis.io) maintainers, and we are seeking input from our users. The Core Atlantis Team has created an anonymous survey for Atlantis users to help us understand the community needs and prioritize our roadmap. If you have the time, please take 5 minutes to fill it out https://docs.google.com/forms/d/1fOGWkdinDV2_46CZvzQRdz8401ypZR8Z-iwkNNt3EX0
2024-04-26
2024-04-27
I have a repo, and in the root I have three directories (qa, stage and prod), one per environment, to create infrastructure in the respective environment. And I want to keep the code DRY (Don’t Repeat Yourself).
NOTE: In each directory (qa, stage and prod) we call child modules that live in a remote location; the child module configuration is not placed at the root of the module. I have a providers.tf file which is globally common, where all the providers are defined and each has an alias. But I want to place the providers.tf file in the root of the repo instead of placing the SAME file in all three qa, stage and prod directories.
Is it possible to place the common, globally defined providers.tf file in the root of the repo and build the infrastructure in all the environments?
Quick and dirty is a symlink, but you still need to figure out the backend stanza for each since they are different environments
That could be done via cli flag or custom wrapper
Hi @Joe Perez, thank you for the quick response. Do you have an example or a link to a document?
I don’t have exactly that, but I did write a terraform wrapper article on how to do this with templates https://www.taccoform.com/posts/tf_wrapper_p1/
Overview Cloud providers are complex. You’ll often ask yourself three questions: “Is it me?”, “Is it Terraform?”, and “Is it AWS?” The answer will be yes to at least one of those questions. Fighting complexity can happen at many different levels. It could be standardizing the tagging of cloud resources, creating and tuning the right abstraction points (Terraform modules) to help engineers build new services, or streamlining the IaC development process with wrappers.
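One way to read the suggestion above: keep a single providers.tf (symlinked or copied into each environment directory) with a partial backend block, and supply the per-environment differences at init time with -backend-config. A rough sketch, with made-up file and bucket names:
# providers.tf (shared across qa/, stage/, prod/)
terraform {
  backend "s3" {} # intentionally empty; filled in per environment at init time
}

provider "aws" {
  region = var.region
}

# qa/backend.hcl (one small file per environment; names are hypothetical)
# bucket = "acme-tfstate"
# key    = "qa/terraform.tfstate"
# region = "us-east-1"
Each environment directory then runs terraform init -backend-config=backend.hcl before plan/apply.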
Atmos would be the cloudposse way. There’s also terragrunt and terramate
thank you @loren. But we are trying to stick with vanilla terraform.
Good luck with that. You’ll end up with your own wrapper or script, to meet that requirement
I was trying terraform -chdir, not sure how it can help us in our scenario.
It doesn’t, really
Terraform Cloud workspaces help here as well. Stacks will go even further.
@DevOpsGuy working with vanilla terraform and no other tooling is the best and only way to cut your teeth learning terraform. In order to truly appreciate the challenges of working with terraform, there’s no substitute for “learning the hard way”. However, I really do encourage you to read our write-up on this, if for no other reason than to be aware, so that as you start to encounter all the rough edges you’ll know - you’re not alone. This is a tried and true path, and multiple solutions out there exist, paid or open source. As Jake mentions, this is one of the problems the commercial Terraform Cloud/Enterprise offering solves, but there are alternatives as well which are open source.
Overcoming Terraform Limitations with Atmos
Teams often flounder with Terraform because they’re usually starting from scratch—a costly and outdated approach. Imagine if you tried to build a modern website without frameworks like React or Angular? That’s the uphill battle you’re fighting, without standardized conventions for Terraform, which other tooling brings to the table.
2024-04-28
This additional tflint ruleset looks handy for terraform stylistic guides such as terraform_variable_order
Recommend proper order for variable blocks: variables without a default value are placed prior to those with a default value set; then the variables are sorted based on their names (alphabetic order)
Plus 1 to this. It’d definitely be helpful to have better organization in variable files
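For anyone wanting to try it, a sketch of what the .tflint.hcl could look like. The plugin name, source, and version below are assumptions (check the ruleset’s README for the real values); the rule block itself is standard tflint config:
# .tflint.hcl
plugin "basic-ext" {
  enabled = true
  version = "0.7.0"                                      # placeholder, pin to the ruleset's current release
  source  = "github.com/Azure/tflint-ruleset-basic-ext"  # assumed source, verify against the ruleset docs
}

rule "terraform_variable_order" {
  enabled = true
}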
cc @Erik Osterman (Cloud Posse) (for v2 components)
Hi Team,
Need your help with provider development using the plugin framework. I’m trying to set up a schema in the resource.go file. I have an attribute that is required only during create; it is sensitive and won’t be present in the API response, which is causing the error below. I tried multiple ways to accommodate this, like keeping required:true
; optional:true
and computed:true
; sensitive:true
etc. Nothing worked. I got the error below:
2024-04-27T15:09:36.989+0530 [ERROR] vertex "xxxx.cred" error: Provider produced inconsistent result after apply ╷ │ Error: Provider produced inconsistent result after apply │ │ When applying changes to xxxx.cred, provider │ "provider["hashicorp.com/edu/xxxx"]" produced an unexpected new value: │ .secret_key: inconsistent values for sensitive attribute.
Basically I need to handle an attribute that is required only during create but isn’t present in the response; that’s what needs to be solved.
2024-04-29
2024-04-30
Hey, OpenTofu 1.7.0 is out with state encryption, provider-defined functions, and a lot more: https://opentofu.org/blog/opentofu-1-7-0/!