#terraform (2023-03)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-03-01
2023-03-02
I’m happy to announce the beta release of Gruntwork Patcher, a tool to automatically keep your infrastructure code up-to-date, even with…
It’s a shame it’s not OSS
I’m happy to announce the beta release of Gruntwork Patcher, a tool to automatically keep your infrastructure code up-to-date, even with…
I’m pretty sure it’s tightly coupled to the architecture and modules they are selling. It wouldn’t be too usable outside of that
Oh good point
2023-03-03
Hi all, I’m struggling to attach an EFS volume to an ECS Fargate container, as far as I can make out I’ve got everything in place which should ensure the attachment… has anyone got any examples which I can cross-reference?
yeah I wish the terraform docs would have more complete examples for stuff like this, I found this medium article on efs mount with fargate. I hope this is helpful for cross-referencing with your setup https://medium.com/@bhasvanth/create-aws-efs-and-mount-in-ecs-fargate-container-using-terraform-8ce1d68b7eef
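For cross-referencing, here is a minimal, hedged sketch of the Terraform pieces that usually need to line up for EFS on Fargate. All names, the subnet variable, and the security group are hypothetical; EFS on Fargate also needs platform version 1.4.0 or later, and the EFS security group must allow NFS (2049/tcp) from the task’s security group:
resource "aws_efs_file_system" "app" {
  encrypted = true
}

resource "aws_efs_mount_target" "app" {
  for_each        = toset(var.private_subnet_ids)        # hypothetical subnet list
  file_system_id  = aws_efs_file_system.app.id
  subnet_id       = each.value
  security_groups = [aws_security_group.efs.id]          # hypothetical SG allowing 2049/tcp from the task SG
}

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([{
    name  = "app"
    image = "nginx:latest"
    mountPoints = [{
      sourceVolume  = "efs-data"
      containerPath = "/mnt/data"
    }]
  }])

  volume {
    name = "efs-data"
    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.app.id
      transit_encryption = "ENABLED"
    }
  }
}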
I’m not sure where the breakdown is for you, but I would recommend enabling and using ECS exec to exec into a running Fargate container to validate what you’re trying to do. It’s been super helpful for me
Thanks for the response! I wasn’t even aware that ECS exec on ECS Fargate was a thing, so I will definitely be checking that out! Appreciate the pointer
2023-03-04
2023-03-06
2023-03-07
2023-03-08
Hi folks, not sure if this is the right channel to ask; if not, pls let me know and I’ll move the convo to a new “home”.
My question is about TFC. I’ve seen a lot of topics around the cons of using it but little on specifics, so here are a few:
• TFC does recognise *.auto.tfvars, which will be loaded and made available to all workspaces. However, it doesn’t know anything about tfvars files themselves. Now, I’m aware of a few workarounds using TF_VAR... variables or the API to “populate” the workspace variables, but that is not an option for me. So my first q: how did folks end up dealing with this limited functionality?
• TFC plan output back to GitHub/GitLab etc. is not a native solution; what path did folks go with?
• TFC sandbox/ephemeral envs are not easy to have out of the box; what path did folks go with?
• Due to the gaps above, I think it’s fair to say we could end up with two “paths”: one when running inside TFC and one when running outside TFC, e.g. locally, in which case you need to deal with the token. So where do you store the token? Do you have one per team or one per individual developer? Based on all the above, how much better off would we be with env0 or Spacelift?
So my first q: how did folks end up dealing with this limited functionality?
Can you elaborate on your use case here? And how TF_VAR_ isn’t suitable?
sure thing, let me expand on it
The context is this:
I already have a codebase where I heavily use tfvars & auto.tfvars JSON files, and because of that I want to reuse it and not go down another path just because TFC can’t support it. Having to go the TFC way will turn things messy if folks also have to deal with local runs (see my next points)
sounds like TF_CLI_ARGS might be what you need
thanks @Alex Jurkiewicz will look into it. Any thoughts on the other points ?
sounds like TF_CLI_ARGS might be what you need
this won’t work if the workspace is configured in VCS mode, no? Or am I missing something here
Use TF_CLI_ARGS_plan to make the environment variable only visible during the plan phase
only visible during the plan phase
oh, right i need to read about that, it won’t work for me. Thx for chiming in @Fizz
Why won’t it work? TF Cloud embeds the variables into the saved plan that is used by the subsequent apply, so in the apply phase the tfvars isn’t even used or referenced
But if you make the tfvars visible during the apply phase, TF Cloud will complain that the vars are already in the saved plan and setting them as an environment variable is not allowed
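If you happen to manage TFC workspaces with the hashicorp/tfe provider, here is a hedged sketch of setting that env var on a workspace; the workspace reference and the var-file path are hypothetical:
resource "tfe_variable" "cli_args_plan" {
  key          = "TF_CLI_ARGS_plan"
  value        = "-var-file=config/dev.tfvars.json"   # hypothetical path in the repo
  category     = "env"                                # environment variable, not a Terraform variable
  workspace_id = tfe_workspace.example.id             # hypothetical workspace reference
  description  = "Load the committed tfvars file during plan only"
}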
need to play more with this tbh, there are so many edge cases to think about.
Hi Peeps, a question regarding semver.
I have a module that deploys certain resources. The module itself depends on other modules. I want to upgrade an EKS module’s major version. What’s best practice here in terms of how I should change the version of my module? If a major version of a dependency changes, should I also change the major version of my module?
I’d follow something like this:
- Make sure your terraform has 0 changes in it by running a plan
- Change the module version number
- Run a plan
- Analyze the plan results and see if you want the new changes. If not, try to see if there is a way to no-op the change
what is the new recommended way
aws_account_id = join("", data.aws_caller_identity.current[*].account_id)
or
aws_account_id = one(data.aws_caller_identity.current[*].account_id)
there’s only one output, so no need to splat on it
data.aws_caller_identity.current.account_id
that is because it has a count iterator
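A hedged sketch of the difference, with a hypothetical enable flag on the data source: join() coerces the splat to a string (empty when disabled), while one() (Terraform 0.15+) returns the single element or null when the list is empty:
data "aws_caller_identity" "current" {
  count = var.enabled ? 1 : 0   # hypothetical feature flag driving the count
}

locals {
  # Old idiom: splat + join, result is always a string ("" when count is 0)
  account_id_join = join("", data.aws_caller_identity.current[*].account_id)

  # Newer idiom: one() returns the single element, or null when the list is empty
  account_id_one = one(data.aws_caller_identity.current[*].account_id)
}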
v1.4.0 1.4.0 (March 08, 2023) UPGRADE NOTES:
config: The textencodebase64 function when called with encoding “GB18030” will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding “GBK” or “CP936” will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this…
2023-03-09
Hi folks, I have a question about Atmos as I was going over the concepts and trying to map them to what has been said in various office hours over the past year or so.
As per the Atmos docs you have a components/terraform directory, and the full list points to https://github.com/cloudposse/terraform-aws-components. Now my question is:
what was the context for keeping all those modules local instead of consuming them from your private/public registry? The other child modules do point to the registry
Opinionated, self-contained Terraform root modules that each solve one, specific problem
and btw very interesting concepts you have, like the pattern there
Opinionated, self-contained Terraform root modules that each solve one, specific problem
@DaniC (he/him) we have a few diff concepts here:
- Atmos components and stacks - defined in YAML stack config files and usually in the stacks folder (it’s configurable)
- Terraform components - usually in the components/terraform folder (it’s configurable)
this is separation of concerns by separating code/logic (terraform components) from configurations (Atmos components and stacks)
regarding https://github.com/cloudposse/terraform-aws-components - this is a collection of open source terraform components which can be used to provision your infrastructure
Opinionated, self-contained Terraform root modules that each solve one, specific problem
you def can use them, or create your own components in the components/terraform folder
terraform components are terraform root modules https://developer.hashicorp.com/terraform/language/modules#the-root-module
Modules are containers for multiple resources that are used together in a configuration. Find resources for using, developing, and publishing modules.
Every Terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory. Root modules are the terraform configuration that we actually apply and have terraform state.
each component (TF root module) defines some functionality and can be combined from terraform modules and/or terraform resources and data sources
as mentioned above, you can use any open source components and you can vendor them into your repo by using Atmos vendoring https://atmos.tools/core-concepts/components/vendoring
Use Component Vendoring to make a copy of 3rd-party components in your own repo.
(if you don’t want to copy them manually)
and you can vendor them from the public repo, or from your own public/private repos (as you asked in the question)
you create your infra config by a combination of TF components (root modules) which need to be already in your repo (not in the registry) since you have to have some initial TF code to define what TF modules are being instantiated (you can’t have empty TF code to start with )
that’s what are TF components (root modules) for, they are in your repo anyway. And as mentioned, you can vendor them using Atmos, or copy them, or create your own components (which can use TF modules from public repos or your own private repos)
please look at https://atmos.tools/category/quick-start for more details
Take 20 minutes to learn the most important atmos concepts.
hey @Andriy Knysh (Cloud Posse) much thanks for the lengthy info provided.
what was the context for keeping all those modules local instead of consuming them from your private/public registry? The other child modules do point to the registry
I’d like to address this point, because it’s so important (and a great question)
What we found from working with customers is that if the root modules (components) were all remote, it a) made it much more cumbersome for your standard developers to search for where something was defined, b) forced them to dereference a lot of git URLs to look at it, c) required multiple PRs just to test something, and d) made the upstream repo (e.g. Cloud Posse) a blocker for introducing new changes and therefore a business liability. By vendoring root modules, you get the best of everything. You can easily search the code and understand what’s going on. You can make changes to it if you need to diverge. You can still pull down the upstream components using the atmos vendor commands.
At the same time, nothing requires you to commit these components to VCS. They can be downloaded on demand using the atmos vendor commands. So you can still do things similar to Terragrunt with remote root modules, if the value proposition of having the local components is not there. To reiterate though: we don’t think remote root modules are a great DX.
thank you very much Cloud Posse team, very detailed set of information
I was hoping to get a PR reviewed for the cloudposse/redis module.. Is this the right place? https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/189
thanks, we’ll review it
you can use #pr-reviews
oh, thanks!
I need some opinions… Currently at work we have 10 accounts in AWS. For Terraform we use an IAM user in AccountA that assumes a role that can reach into all other accounts and has admin in all accounts to be able to do terraform things. We also use workspaces, one for each account. All of the state files live in an S3 bucket that is in AccountA. All of the plans and applies are run by Jenkins (BOO) but we are getting rid of Jenkins and are adopting Atlantis, which runs from an EKS cluster in a different account (AccountB). We currently use DynamoDB for locking. I should be able to just start using a new table in AccountB so I have that covered. Considering that we have hundreds of thousands of resources in all of these state files and it would be too much to manually import each one into a new statefile in AccountB, what would be the best way to move those resources to the new account statefiles? Since we have a state file per workspace per service, could I just copy that statefile over to the new bucket and have Atlantis look at that new bucket/statefile instead of the old? So I would move the statefile to the new bucket and put it under the same key as in the old bucket, then update the backend config to point to the new bucket. My plan is to move over each service one at a time and make sure everything works instead of doing one big lift and shift. Thoughts?
If you init on one remote configuration, then reconfigure the remote config, terraform will ask you if you want to migrate (copy) the state from bucket location A => bucket location B.
terraform init -migrate-state
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Do you want to copy existing state to the new backend?
Pre-existing state was found while migrating the previous "s3" backend to the
newly configured "s3" backend. No existing state was found in the newly
configured "s3" backend. Do you want to copy this state to the new "s3"
backend? Enter "yes" to copy and "no" to start with an empty state.
Enter a value:
Do the buckets need to be able to get/put from each other across the accounts for that?
- terraform init
- … change the backend configuration
- terraform init -migrate-state The TL;DR because I wasn’t entirely clear ^.
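A hedged sketch of what the backend change itself looks like before running terraform init -migrate-state; bucket, key, and table names are hypothetical. Note that whatever credentials run the init need read access to the old bucket and write access to the new one, which is the cross-account question raised above:
# Before: state in AccountA (hypothetical names)
# terraform {
#   backend "s3" {
#     bucket         = "accounta-tf-state"
#     key            = "service-x/terraform.tfstate"
#     region         = "us-east-1"
#     dynamodb_table = "accounta-tf-locks"
#   }
# }

# After: point at the bucket and lock table in AccountB, then run `terraform init -migrate-state`
terraform {
  backend "s3" {
    bucket         = "accountb-tf-state"
    key            = "service-x/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "accountb-tf-locks"
  }
}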
My trouble with this approach is that I have one user in one account and a different one in the target account…and they can’t see each other and don’t have access to each other’s buckets. So if I change the backend in AccountA it’s not going to be able to see the bucket in AccountB where it all needs to migrate to.
You should be able to copy over all the state files from one bucket to the other, then just update your backend config to point to the new bucket and DynamoDB lock table
The state file doesn’t contain references to its hosting bucket
That’s what I was thinking, but when I tested it that didn’t work. It tried to apply resources that were already there.
Did you update the location of the state file in your backend config? Are you accessing the bucket in the correct account?
You can check a state file exists in the bucket by running terraform state pull
So I have been able to move my state files and that seems to work ok. Now the issue I’m running into is that if I run a plan I get a message that basically says that the md5 in my DynamoDB table doesn’t match and I need to set it to whatever the original table has. I can do that, but I would think I shouldn’t have to. Wouldn’t terraform just create a new md5 if I’m pointing at a new table in a different account from the original? I looked through my test statefile and I don’t see a mention of that md5 in it or anything called “Digest value”, so how does it know?? Here is the error I get…
Initializing modules...
Initializing the backend...
Error refreshing state: state data in S3 does not have the expected content.
This may be caused by unusually long delays in S3 processing a previous state
update. Please wait for a minute or two and try again. If this problem
persists, and neither S3 nor DynamoDB are experiencing an outage, you may need
to manually verify the remote state and update the Digest value stored in the
DynamoDB table to the following value: 12492dff2940c03fafc2058ee7f96bd3
I think after tinkering with it some more, what I’m going to have to do is export my table and import it into the new account. This is a pain because as we slowly move TF code for services from one account to another, other devs are working on the same code. So as we move things, we remove the code from the old repo once it’s set up, working, and tested in the new repo.
I would not suggest trying to move all of your code, statefiles, and dynamodb locks from one account to another if you have the option not to do that.
You migrated the dynamodb table too?
I would think you would not migrate the dynamo table as all of the stuff in there is derived so can be regenerated if it is missing
My guess is that the md5 contains the bucket arn, which includes the account id
Yea…was the only way to get around the error. Otherwise I’d have to find the md5 in the old table and paste it into the new one. Huge pain
2023-03-10
Hey, what am I missing: I have been reading about the terraform workspace command for the CLI, i.e. without TF Cloud. I really don’t see the benefit of CLI-level workspaces over just using separate folders (to hold root module tf files and tfvars). It is just as easy to forget to select the workspace as it is to forget changing to the right folder, no? Tfstate is separate for each env / stack in both cases so no advantage there either.
I’ve benefitted from having workspaces associated with branches.
in CI, when the environment is built according to the branch, a corresponding workspace is created. The infra is built and tested.
When feature branches get merged into the staging branch, the branch, workspace, and infra are destroyed.
there are no code or folder changes, as the terraform workspace name is used in resource naming to avoid conflicts.
If you automate your plans and applies then you build the workspace into your scripts or whatever…then you don’t forget or use the wrong workspace.
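As a hedged illustration of the workspace-in-resource-names approach described above (the bucket naming scheme is hypothetical):
locals {
  # terraform.workspace is "default" unless another workspace is selected or created
  env = terraform.workspace
}

resource "aws_s3_bucket" "artifacts" {
  bucket = "myapp-artifacts-${local.env}"   # hypothetical naming scheme, unique per workspace
}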
So let’s say that the workspaces feature did not exist, would the following be equivalent to what @managedkaos describes:
1. say the repo has a folder called dev that is a root terraform module for the dev stack
2. the ci/cd builder checks out the feature branch called foo
3. it would then copy the tf code from the dev folder into a new sibling folder called foo
4. then it would cd to foo and run terraform init
5. it would run terraform apply with --var stack_name=foo
6. it would then deploy code, clone the database etc. and run tests
7. it would eventually run terraform destroy
8. the builder would remove the foo folder
With workspaces, steps 3 and 4 get replaced with terraform workspace new and step 8 with terraform workspace delete. Everything else is the same AFAICT.
Is there anything missing from that list? Any situations where using a folder copy instead of a workspace would be a major disadvantage?
not sure I like the idea of representing an environment in a “branch”, we use folders + terragrunt
I’m not a fan of the branch model either.
In this case, it works because the environments are short lived and any features are merged into the main branch.
indeed, there are other projects where development is done on branches but the environments are in folders. que sera, sera.
2023-03-13
hi folks,
been looking at semver for TF modules and I found there are different schools of thought out there, whereby:
• some are using conventional commits to cut the release, with actions like https://github.com/google-github-actions/release-please-action
• some are using repo labels (major/minor/patch) attached to a PR which, when merged to main or via issueOps, can trigger a release (draft or not)
• some TF aws modules are using this
• others use a Version file inside the repo from which a tag is created
Any trade-offs on any of the options above? I can indeed see one with conventional commits, as it’s very hard to “speak the same lang” between various teams
Personally, I prefer using the labels, but it works best for poly repos, where one app per repo.
For monorepos, the version file might work better, since you have control over the version of each component inside the monorepo
We use release-drafter at cloudposse
thank you Erik for insights
Hi, I have successfully applied the following module https://github.com/cloudposse/terraform-aws-elasticache-redis/tree/master and created the redis cluster, but it doesn’t create the aws_elasticache_user_group, which I then need to use for user_group_ids
`resource "random_password" "ksa_dev_password" {
length = 16
special = true
}
resource "aws_elasticache_user" "elasticache_user" {
user_id = var.redis_name
user_name = var.redis_name
access_string = "on ~* +@all"
engine = "REDIS"
passwords = [random_password.ksa_dev_password.result]
tags = local.tags
}
resource "aws_elasticache_user_group" "elasticache_group" {
engine = "REDIS"
user_group_id = "${var.tenant}-${var.environment}"
user_ids = [aws_elasticache_user.elasticache_user.user_id, "default"]
tags = local.tags
}
resource "aws_elasticache_user_group_association" "elasticache_user_group_association" {
user_group_id = aws_elasticache_user_group.elasticache_group.user_group_id
user_id = aws_elasticache_user.elasticache_user.user_id
}
I’m getting the following error :
│ Error: creating ElastiCache User Group Association ("...-develop,...-dev-redis"): InvalidParameterCombination: ...-dev-redis is already a member of ...-develop.
│ status code: 400, request id: 8534d445-aee0-4b40-acf8-db36a14198e6
│
Could somebody help me? Maybe I’m doing something wrong
does anyone know / can recommend a terraform module for AWS neptune please?
Does anyone know how to write the EKS KUBECONFIG as a file?
aws eks update-kubeconfig
I meant specifically via terraform. Sorry should’ve stipulated that
maybe like this https://jhowerin.medium.com/easy-way-to-use-terraform-to-update-kubeconfig-951837fb4e8e
One way to provision AWS EKS is by using Terraform and integrating EKS provisioning into your CI/CD build and pipeline workflows. When managing EKS, you may then want to use the kubectl CLI….so…
Wonder if that’ll work via Atlantis
or, you can templatize it in a file, then get all the values from the cluster, then call the templatefile function and get back the kubeconfig, then write it to a file (similar to https://stackoverflow.com/questions/64820975/how-to-retrieve-the-eks-kubeconfig)
I have defined an aws_eks_cluster and aws_eks_node_group as follows: resource "aws_eks_cluster" "example" { count = var.create_eks_cluster ? 1 : 0 name = local.cluster_n…
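A hedged sketch of that approach: the cluster name and the kubeconfig.tpl template are hypothetical, and local_file comes from the hashicorp/local provider:
data "aws_eks_cluster" "this" {
  name = "my-cluster"   # hypothetical cluster name
}

resource "local_file" "kubeconfig" {
  filename = "${path.module}/kubeconfig_my-cluster"
  content = templatefile("${path.module}/kubeconfig.tpl", {   # hypothetical template file
    cluster_name = data.aws_eks_cluster.this.name
    endpoint     = data.aws_eks_cluster.this.endpoint
    ca_data      = data.aws_eks_cluster.this.certificate_authority[0].data
  })
}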
I’ll give it a go tomorrow thanks
also, if you work with the cluster in TF, you prob don’t need the whole kubeconfig, there are other ways to access the cluster, for example https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/auth.tf#L77
# Get an authentication token to communicate with the EKS cluster.
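And for completeness, a hedged sketch of skipping the kubeconfig file entirely and wiring the kubernetes provider straight to the cluster data sources (cluster name hypothetical):
data "aws_eks_cluster" "this" {
  name = "my-cluster"   # hypothetical cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}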
2023-03-14
i’m curious if anyone has encountered something like this before: data "aws_elb_service_account" "default" in https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/master/main.tf returns
aws_elb_service_account = [
+ {
+ arn = "arn:aws-us-gov:iam::048591011584:root"
+ id = "048591011584"
+ region = null
},
]
but that account number isn’t associated with any of our accounts. where is it coming from?
data "aws_elb_service_account" "default" {
count = module.this.enabled ? 1 : 0
}
data "aws_iam_policy_document" "default" {
count = module.this.enabled ? 1 : 0
statement {
sid = ""
principals {
type = "AWS"
identifiers = [join("", data.aws_elb_service_account.default.*.arn)]
}
effect = "Allow"
actions = [
"s3:PutObject"
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::${module.this.id}/*",
]
}
statement {
sid = ""
principals {
type = "Service"
identifiers = ["delivery.logs.amazonaws.com"]
}
effect = "Allow"
actions = [
"s3:PutObject"
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::${module.this.id}/*",
]
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = ["bucket-owner-full-control"]
}
}
statement {
sid = ""
effect = "Allow"
principals {
type = "Service"
identifiers = ["delivery.logs.amazonaws.com"]
}
actions = [
"s3:GetBucketAcl"
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::${module.this.id}",
]
}
}
data "aws_partition" "current" {}
module "s3_bucket" {
source = "cloudposse/s3-log-storage/aws"
version = "1.0.0"
acl = var.acl
source_policy_documents = [join("", data.aws_iam_policy_document.default.*.json)]
force_destroy = var.force_destroy
versioning_enabled = var.versioning_enabled
allow_ssl_requests_only = var.allow_ssl_requests_only
access_log_bucket_name = var.access_log_bucket_name
access_log_bucket_prefix = var.access_log_bucket_prefix
lifecycle_configuration_rules = var.lifecycle_configuration_rules
# TODO: deprecate these inputs in favor of `lifecycle_configuration_rules`
lifecycle_rule_enabled = var.lifecycle_rule_enabled
enable_glacier_transition = var.enable_glacier_transition
expiration_days = var.expiration_days
glacier_transition_days = var.glacier_transition_days
noncurrent_version_expiration_days = var.noncurrent_version_expiration_days
noncurrent_version_transition_days = var.noncurrent_version_transition_days
standard_transition_days = var.standard_transition_days
lifecycle_prefix = var.lifecycle_prefix
context = module.this.context
}
i believe it is owned by aws… https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/elb_service_account
data "aws_elb_service_account" "default" {
count = module.this.enabled ? 1 : 0
}
data "aws_iam_policy_document" "default" {
count = module.this.enabled ? 1 : 0
statement {
sid = ""
principals {
type = "AWS"
identifiers = [join("", data.aws_elb_service_account.default.*.arn)]
}
effect = "Allow"
actions = [
"s3:PutObject"
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::${module.this.id}/*",
]
}
statement {
sid = ""
principals {
type = "Service"
identifiers = ["delivery.logs.amazonaws.com"]
}
effect = "Allow"
actions = [
"s3:PutObject"
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::${module.this.id}/*",
]
condition {
test = "StringEquals"
variable = "s3:x-amz-acl"
values = ["bucket-owner-full-control"]
}
}
statement {
sid = ""
effect = "Allow"
principals {
type = "Service"
identifiers = ["delivery.logs.amazonaws.com"]
}
actions = [
"s3:GetBucketAcl"
]
resources = [
"arn:${data.aws_partition.current.partition}:s3:::${module.this.id}",
]
}
}
data "aws_partition" "current" {}
module "s3_bucket" {
source = "cloudposse/s3-log-storage/aws"
version = "1.0.0"
acl = var.acl
source_policy_documents = [join("", data.aws_iam_policy_document.default.*.json)]
force_destroy = var.force_destroy
versioning_enabled = var.versioning_enabled
allow_ssl_requests_only = var.allow_ssl_requests_only
access_log_bucket_name = var.access_log_bucket_name
access_log_bucket_prefix = var.access_log_bucket_prefix
lifecycle_configuration_rules = var.lifecycle_configuration_rules
# TODO: deprecate these inputs in favor of `lifecycle_configuration_rules`
lifecycle_rule_enabled = var.lifecycle_rule_enabled
enable_glacier_transition = var.enable_glacier_transition
expiration_days = var.expiration_days
glacier_transition_days = var.glacier_transition_days
noncurrent_version_expiration_days = var.noncurrent_version_expiration_days
noncurrent_version_transition_days = var.noncurrent_version_transition_days
standard_transition_days = var.standard_transition_days
lifecycle_prefix = var.lifecycle_prefix
context = module.this.context
}
thanks @loren
or is the elb service account not something we would own?
Hello, is there a way we can pass a sensitive variable in the connection stanza of a remote provisioner, like passing a sensitive password variable?
2023-03-15
v1.4.1 1.4.1 (March 15, 2023) BUG FIXES: Enables overriding modules that have the depends_on attribute set, while still preventing the depends_on attribute itself from being overridden. (#32796) terraform providers mirror: when a dependency lock file is present, mirror the resolved providers versions, not the latest available based on…
Fixes #32795 Target Release
1.4.x Draft CHANGELOG entry
BUG FIXES
Enables overriding modules that have depends_on attribute set (although still doesn’t allow to override depends_on , which is th…
New version 5 of the aws provider is being prepared…. https://github.com/hashicorp/terraform-provider-aws/issues/29842
Since the last major provider release in February of 2022, we have been listening closely to the community’s feedback. The upcoming major release primarily focuses on removing caveats on the Default Tags functionality.
Summary
A major release is our opportunity to make breaking changes in a scheduled and publicized manner in an attempt to avoid unnecessary churn for our users. We attempt to limit major releases to every 12-18 months. Version v4.0.0 of the AWS provider was released in February of 2022.
Along with more significant changes in behavior detailed below, this release will also remove attributes that have been marked as deprecated. Typically these have been marked as such due to changes in the upstream API, or in some cases the use of the attribute causes confusion. Major release v5.0.0 will not make any changes in support for older versions of Terraform.
Default Tags Enhancement
The default tags functionality was released in May 2021 due to overwhelming community support. Over its existence as a feature in the provider, we have discovered other limitations which hamper its adoption. New features are now available in terraform-plugin-sdk, and the terraform-plugin-framework which means we are now able to remove these caveats and resolve any major issues with its use. Resolving these issues will solve the following:
• Inconsistent final plans that cause failures when tags are computed.
• Identical tags in both default tags and resource tags.
• Eliminate perpetual diffs within tag configurations.
While most of this work is resolving issues with the design of the feature, it does represent a significant change in behavior that we can’t be sure will not be disruptive for users with existing default tag configurations. As a result, it is considered a breaking change. Details of the Default Tags changes can be found in issue #29747
Remove EC2 Classic Functionality
In 2021 AWS announced the retirement of EC2 Classic Networking functionality. This was scheduled to occur on August 15th, 2022. Support for the functionality was extended until late September when any AWS customers who had qualified for extension finished their migration. At that time those features were marked as deprecated and it is now time to remove them as the functionality is no longer available through AWS. While this is a standard deprecation, this is a major feature removal.
Updating RDS Identifiers In-Place
Allow DB names to be updated in place. This is now supported by AWS, so we should allow its use. Practitioners will now be able to change names without a recreation. Details for this issue can be tracked in issue #507.
Remove Default Value from Engine Parameters
Removes a default value that does not have a parallel with AWS and causes unexpected behavior for end users. Practitioners will now have to specify a value. Details for this issue can be tracked in issue #27960.
Force replacement on snapshot_identifier change for DB cluster resources
Corrects the behavior to create a new database if the snapshot id is changed in line with practitioner expectations. Details for this issue can be tracked in issue #29409
Follow our Progress
The full contents of the major release and progress towards it can be viewed in the v5.0.0 milestone
Upgrading
As a major version contains breaking changes, it is considered best practice to pin a provider version to avoid bringing in potentially breaking changes automatically. To remain on v4.* versions of the provider until such time that you are able to accommodate those changes, either pin to any v4.* version:
terraform {
required_providers {
aws = {
version = "~> 4.0"
}
}
}
Or a specific version:
terraform {
required_providers {
aws = {
version = "4.56.0"
}
}
}
Full documentation on how to manage versions of a provider can be found on the Terraform Website.
Your usage patterns will dictate how much effort upgrading to v5.0 will take. We document each breaking change in our upgrade guide which will describe what changes are necessary to existing resources to preserve behavior. The upgrade guide will be available on the day of the release on the Terraform Website.
2023-03-16
v1.4.2 Version 1.4.2
v1.4.2 1.4.2 (March 16, 2023) BUG FIXES: Fix bug in which certain uses of setproduct caused Terraform to crash (#32860) Fix bug in which some provider plans were not being calculated correctly, leading to an “invalid plan” error (<a href=”https://github.com/hashicorp/terraform/issues/32860” data-hovercard-type=”pull_request”…
We inadvertently incorporated the new minor release of cty into the 1.4 branch, and that’s introduced some more refined handling of unknown values that is too much of a change to introduce in a pat…
Is there a good way to glob all the directory names rather than only file types, using something like fileset?
Example:
for_each = {
  for idx, directory in fileset(path.module, "some/location/*") :
  basename(directory) => directory
}
dir = each.key
foo = yamldecode(file("${each.value}/foo.yaml"))
bar = yamldecode(file("${each.value}/bar.yaml"))
It seems like fileset specifically only returns values that match a file, not a directory. Is there an alternative?
you’d have to get creative i think, with fileset and a ** pattern, dirname, and toset
maybe throw regex or replace in there also, depending on how deep your directory structure is
Thanks, yes I was trying out ** and figuring out how to dedup the keys
Terraform gives me an error about duplicate keys and adding an ellipsis, but I don’t want to group, I want to ignore
No fileset does not currently support it, not even double star. You can only glob files not folders.
This is discussed in github issue and I opened a feature request for this not long ago.
As a workaround, I globbed for a file that I know exists in each folder, but this won’t apply to all use cases. I was just lucky.
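A hedged sketch of that workaround, reusing the foo.yaml sentinel file from the example above (switch the glob to ** if the directories are nested deeper):
locals {
  # Glob for a file known to exist in every directory, then take the directory names
  config_dirs = toset([
    for f in fileset(path.module, "some/location/*/foo.yaml") : dirname(f)
  ])

  configs = {
    for dir in local.config_dirs :
    basename(dir) => {
      foo = yamldecode(file("${path.module}/${dir}/foo.yaml"))
      bar = yamldecode(file("${path.module}/${dir}/bar.yaml"))
    }
  }
}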
Yep that’s pretty much what I ended up doing. For the moment, the function I was creating only cared about directories that have a specific file anyway… but this might not always be the case.
2023-03-17
Hello!
I wanted to ask if the Cloudposse team has plans to add the ability to use different action types for aws_lb_listener_rule (not only “forward”, but also redirect or fixed-response) in the following module: https://github.com/cloudposse/terraform-aws-alb-ingress?
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups
No plans at the moment. Please feel free to create a branch, test it, and submit it :)
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups
I have created a PR, please check it and approve: https://github.com/cloudposse/terraform-aws-alb-ingress/pull/68 I added only the fixed-response action, since only this kind of action was needed for my case. If somebody wants to add other types of actions, feel free.
what
• Add an ability to create rules with *fixed-response* action
why
• There was no ability to create such rules
2023-03-19
This message was deleted.
2023-03-20
2023-03-21
Hi, I tried to add a VM with a disk encryption set; the OS disk is getting encrypted with no issues, but the data disk is not getting encrypted at all
Hi, make sure that Azure Disk Encryption is enabled for the data disk. Also, check the Azure Key Vault policies and permissions.
OS disk is getting encrypted without any issues
Anyone have working code for that? Do we need to run any scripts or anything for encryption? My OS disk is getting encrypted using disk_encryption_id. I’m passing disk_encryption_id for the data disk in the same way
My disk encryption key already exists; I’m using the same one to encrypt the OS disk, which works fine
On azure
Hey everyone, I have created a PR that adds support for the CAPTCHA action for WAF rules: https://github.com/cloudposse/terraform-aws-waf/pull/31 Also bumped the version of the aws provider and terraform as they were ~2 years old; none of my tests failed with that. Wanted to add the content{} block for actions but it seems like it is still not in the aws provider, so I will add it later (I need that in my rule as well). BTW I can see that CI/CD for my PR failed, could someone show me how I can fix that? https://github.com/cloudposse/terraform-aws-waf/actions/runs/4481450993/jobs/7878168555?pr=31 (seems like it tried to push something into my repo, not sure what I have missed)
Please post in #pr-reviews
(and thanks for the contribution!)
what’s the latest/greatest approach to fixing this:
The "for_each" value depends on resource attributes that cannot be determined
> until apply, so Terraform cannot predict how many instances will be created.
> To work around this, use the -target argument to first apply only the
> resources that the for_each depends on.
is it still a local with the value ref then using count
?
fundamentally, the solution is not to use attributes of resources in the key part of the for_each expression
yeah, but it is based on that resource
well, an output from it
that’s a choice. it doesn’t have to be
create a for_each expression where the key is derived entirely from user input
the value part can continue to be derived from another resource
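A hedged sketch of that pattern with a made-up SQS example: the keys come from a variable (known at plan time), while the values can stay unknown until apply:
variable "queue_names" {
  type    = set(string)
  default = ["orders", "payments"]   # hypothetical user input
}

resource "aws_sqs_queue" "this" {
  for_each = var.queue_names   # keys are user input, known at plan time
  name     = each.value
}

resource "aws_cloudwatch_metric_alarm" "depth" {
  for_each = aws_sqs_queue.this   # same keys reused; the queue attributes can stay unknown

  alarm_name          = "${each.key}-queue-depth"
  namespace           = "AWS/SQS"
  metric_name         = "ApproximateNumberOfMessagesVisible"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 100
  evaluation_periods  = 1
  period              = 300
  statistic           = "Maximum"
  dimensions          = { QueueName = each.value.name }
}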
yeah, was just wanting a more direct name there
i already adjusted it locally. was just curious if there were better solutions
would ya look at it
that’s interesting
i would be shocked if that worked with for_each expressions
someone opened a PR and added this
resource "value_unknown_proposer" "default" {}
resource "value_is_fully_known" "task_exec_policy_arns" {
value = var.task_exec_policy_arns
guid_seed = "var.task_exec_policy_arns"
proposed_unknown = value_unknown_proposer.default.value
}
interesting. will have to toy with that at some point
that’s why we have not approved the PR yet
I think if you’re looking at this provider as a serious production solution, you are missing a more obvious better solution.
What loren said is basically accurate: Terraform simply Doesn’t Support creating a dynamic number of resources with count/for_each. If you statically know the number of resources, you can use this static number instead. If the number of resources is truly dynamic, you’ll need to determine the correct number outside of Terraform and push it in as a variable. A common pattern here is two Terraform configurations which run in sequence.
If you have some sample code, we can make more concrete suggestions.
already solved it. i was just wondering if there was some new hotness to resolve it without the old school ways.
this is an old problem they talked about “fixing” years ago
they talked about fixing it… but it’s not possible to fix it for all cases
if something depends on not yet provisioned resources, no way to know it
that’s why they have not fixed it, and never will
i don’t think it’s actually possible to fix it. at least, not as long as the key part of the for_each expression ends up as part of the state address
i always thought they’d do a “(known after apply)” on the key part of the resource name as well, but it is definitely a bigger issue than that to give a full plan
so i get it…was just curious if they advanced on that front
the only “progress” i’ve seen has been some minor discussion on allowing some kind of “partial” plan/apply, or “convergence” via multiple applies… tracking issue is here: https://github.com/hashicorp/terraform/issues/30937
The idea of “unknown values” is a crucial part of how Terraform implements planning as a separate step from applying.
An unknown value is a placeholder for a value that Terraform (most often, a Terraform provider) cannot know until the apply step. Unknown values allow Terraform to still keep track of type information where possible, even if the exact values aren’t known, and allow Terraform to be explicit in its proposed plan output about which values it can predict and which values it cannot.
Internally, Terraform performs checks to ensure that the final arguments for a resource instance at the apply step conform to the arguments previously shown in the plan: known values must remain exactly equal, while unknown values must be replaced by known values matching the unknown value’s type constraint. Through this mechanism, Terraform aims to promise that the apply phase will use the same settings as were used during planning, or Terraform will return an error explaining that it could not.
(For a longer and deeper overview of what unknown values represent and how Terraform treats them, see my blog post Unknown Values: The Secret to Terraform Plan.)
The design goal for unknown values is that Terraform should always be able to produce some sort of plan, even if parts of it are not yet known, and then it’s up to the user to review the plan and decide either to accept the risk that the unknown values might not be what’s expected, or to apply changes from a smaller part of the configuration (e.g. using -target) in order to learn more final values and thus produce a plan with fewer unknowns.
However, Terraform currently falls short of that goal in a couple different situations:
• The Terraform language runtime does not allow an unknown value to be assigned to either of the two resource repetition meta-arguments, count and for_each.
In that situation, Terraform cannot even predict how many instances of a resource are being declared, and it isn't clear how exactly Terraform should explain that degenerate situation in a plan, and so currently Terraform gives up and returns an error:
│ Error: Invalid for_each argument
│
│ ...
│
│ The "for_each" value depends on resource attributes that cannot
│ be determined until apply, so Terraform cannot predict how many
│ instances will be created. To work around this, use the -target
│ argument to first apply only the resources that the for_each
│ depends on.
│ Error: Invalid count argument
│
│ ...
│
│ The "count" value depends on resource attributes that cannot be
│ determined until apply, so Terraform cannot predict how many
│ instances will be created. To work around this, use the -target
│ argument to first apply only the resources that the count depends
│ on.
• If any unknown values appear in a provider block for configuring a provider, Terraform will pass those unknown values to the provider’s “Configure” function.
Although Terraform Core handles this in an arguably-reasonable way, we've never defined how exactly a provider ought to react to crucial arguments being unknown, and so existing providers tend to fail or behave strangely in that situation.
For example, some providers (due to quirks of the old Terraform SDK) end up treating an unknown value the same as an unset value, causing the provider to try to connect to somewhere weird like a port on localhost.
Providers built using the modern Provider Framework don't run into that particular malfunction, but it still isn't really clear what a provider ought to do when a crucial argument is unknown and so e.g. the AWS Cloud Control provider -- a flagship use of the new framework -- reacts to unknown provider arguments by returning an error, causing a similar effect as we see for `count` and `for_each` above.
Although the underlying causes for the errors in these two cases are different, they both lead to a similar problem: planning is blocked entirely by the resulting error and the user has to manually puzzle out how to either change the configuration to avoid the unknown values appearing in “the wrong places”, or alternatively puzzle out what exactly to pass to -target to select a suitable subset of the configuration to cause the problematic values to be known in a subsequent untargeted plan.
Terraform should ideally treat unknown values in these locations in a similar way as it does elsewhere: it should successfully produce a plan which describes what’s certain and is explicit about what isn’t known yet. The user can then review that plan and decide whether to proceed.
Ideally in each situation where an unknown value appears there should be some clear feedback on what unknown value source it was originally derived from, so that in situations where the user doesn’t feel comfortable proceeding without further information they can more easily determine how to use -target (or some other similar capability yet to be designed) to deal with only a subset of resources at first and thus create a more complete subsequent plan.
This issue is intended as a statement of a problem to be solved and not as a particular proposed solution to that problem. However, there are some specific questions for us to consider on the path to designing a solution:
• Is it acceptable for Terraform to produce a plan which can’t even say how many instances of a particular resource will be created?
That's a line we've been loathe to cross so far because the difference between a couple instances and tens of instances can be quite an expensive bill, but the same could be said for other values that Terraform is okay with leaving unknown in the plan output, such as the "desired count" of an EC2 autoscaling group. Maybe it's okay as long as Terraform is explicit about it in the plan output?
A particularly "interesting" case to consider here is if some instances of a resource already exist and then subsequent changes to the configuration cause the `count` or `for_each` to become _retroactively_ unknown. In that case, the final result of `count` or `for_each` could mean that there should be _more_ instances of the resource (create), _fewer_ instances of the resource (destroy), or no change to the number of instances (no-op). I personally would feel uncomfortable applying a plan that can't say for certain whether it will destroy existing objects. • Conversely, is it acceptable for Terraform to _automatically_ produce a plan which explicitly covers only a subset of the configuration, leaving the user to run `terraform apply` again to pick up where it left off?
This was essence of the earlier proposal [#4149](https://github.com/hashicorp/terraform/issues/4149), which is now closed due to its age and decreasing relevance to modern Terraform. That proposal made the observation that, since we currently suggest folks work around unknown value errors by using `-target`, Terraform could effectively synthesize its own `-target` settings to carve out the maximum possible set of actions that can be taken without tripping over the two problematic situations above. • Should providers (probably with some help from the Plugin Framework) be permitted to return an _entirely-unknown_ response to the `UpgradeResourceState`, `ReadResource`, `ReadDataSource`, and `PlanResourceChange` operations for situations where the provider isn't configured completely enough to even _attempt_ these operations?
These are the four operations that Terraform needs to be able to ask a partially-configured provider to perform. If we allow a provider to signal that it isn't configured enough to even try at those, what should Terraform Core …
I see Jeremy was in there discussing too
there is one weird situation where you can kinda use an attribute list to feed for_each, but still only if the length of the list can be fully determined in advance. and, if that is true, you can do something like {for index, value in list_of_things : index => ..... }
so basically, if you give a resource/module a known list of inputs, and the length of that list directly maps to the length of an attribute of that resource/module, then you can use that trick in a for_each expression to build up a map of values using the index of the list
but i really don’t recommend it
( i mention it only because i’m using that hack right at this moment working with a module that outputs a list of route-table-ids. the ids are unknown, so i can’t use them directly. but the length of the list is deterministic based on the number of AZs input to the module. so i can depend on the length. and that means i can use this hack to add routes to the route tables…. )
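A hedged sketch of that hack, with a hypothetical vpc module and made-up output names: the keys come from the known-length AZ input, while the route table IDs can stay unknown until apply:
variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_route" "private_nat" {
  # Keys derive from the known-length input list; values (route table IDs) may be unknown
  for_each = { for i, az in var.azs : az => module.vpc.private_route_table_ids[i] }

  route_table_id         = each.value
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = module.vpc.nat_gateway_id   # hypothetical module output
}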
LOL. so long as it works.
2023-03-22
2023-03-23
hello all, when you are developing your own terraform module, what decides if a new change is minor or major? I mean, if the change forces resource recreation then it’s obviously a major one, but what about adding new / modifying existing variables or outputs? How do you approach the versioning challenge in such a case? thanks for opinions in advance
that’s pretty much it. if the change requires users of the module to modify their configs or their state, then it’s a major version. if the change is done in a way that users can simply use the new version with no changes, then it’s a minor version
logical, but what about changes in outputs? i.e. if you change an output of the module from a list of strings to a list of objects
consumers of that output would need to change their configs, so that’s backwards incompatible, and a major version change
have you heard of semver, Piotr? https://semver.org/ It includes descriptions of when an update should be considered minor or major (or patch)
Semantic Versioning spec and website
yeah, this is what I think everyone is using; however, I just wanted to hear “real life” opinions
Hey everybody, is there a way to declare an outside file path as a component dependency? I’m using a somewhat eccentric terraform provider that lets me point all of my actual configuration to a relative file path that falls outside of my atmos directories. When I run “atmos describe affected” on a PR made to that outside directory, the change doesn’t get picked up as impacting my stack and my workflow doesn’t identify that a new plan is needed
2023-03-24
hi folks, anyone around who has used TFC and tried to structure the codebase into layers: could you share how you shared the workspace state output values between them? For example, you have a vpc-prod workspace (sum of a few tf modules) and a compute-prod workspace (sum of a few tf modules) and the latter needs the output from the former. In the docs I saw https://registry.terraform.io/providers/hashicorp/tfe/latest/docs/data-sources/outputs but I’m not sure if it is a good idea to go with it
The same approach we take at Cloud Posse with Atmos and Spacelift will work for TFC. Everything we do is based on layers.
But “compute-prod” is what we would call a terralith. That would be further broken down into workspaces.
Happy to share more on #office-hours
Tfe_output or a combination of resource tagging and datasources both work. The latter would make your code more portable should you decide to move away from TFC in the future
Happy to share more on #office-hours
brilliant, thank you Erik. I’ll try to attend coming Wed.
a combination of resource tagging and datasources both work
@Fizz interesting, but I guess for this you will need to pass the workspace names as well in the output and have a standardised naming convention which is easily discoverable? If you have a pseudo example, maybe that will help with visualising the benefit
I mean don’t use the outputs or the state at all. Use the datasources from the provider e.g. aws to query the resources that got created
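A hedged sketch of that tagging plus data source approach; the tag keys and values are hypothetical. The network workspace tags what it creates, and the compute workspace looks it up without touching the other workspace’s state:
data "aws_vpc" "prod" {
  tags = {
    Environment = "prod"      # hypothetical tags applied by the vpc-prod workspace
    Layer       = "network"
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.prod.id]
  }
  tags = {
    Tier = "private"          # hypothetical tag
  }
}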
Ah sorry, i’m with you, thank you!
Hi guys, I need advice about passing variables through the command line.
terraform apply -var "myvar={\"Host\":\"$HOSTNAME\", \"headers\":\"{\"Content-type\":\"text/html\"}\"}"
value of headers will be written to Azure KeyVault, so it should be passed as is, with brackets: {"Content-type":"text/html"}
, but I get this error:
│ Error: Missing attribute separator
│
│ on <value for var.myvar> line 1:
│ (source code not available)
│
│ Expected a newline or comma to mark the beginning of the next attribute.
╵
exit status 1
If I understand correctly, Terraform recognizes curly brackets in the value as a nested map, but it should be recognized as a string. How can I overcome this issue?
variable myvar declared with type of map(string)
Though I believe it’s first interpreted as JSON?
I know it’s possible to pass maps and lists this way too. It’s not clear what your myvar variable is, but I think you have a string.
So I think you want the RHS of the = to be a string. But as it is, I think it’s getting decoded to a map.
just a hunch
@Erik Osterman (Cloud Posse) Yes, it is a string and it should be passed as a string to the module, without interpretation. I also tried to cover it with single quote, but terraform notified that I should use only double quotes. So I need to disable interpretation of the content, but looks like it is a feature to pass maps, as you mentioned
You can still escape it, the problem is it was only escaped for shell, now you need to escape it as a string for terraform. Basically, you need more backslashes :-)
The end result will look like a monstrosity
Thanks, got it!
anyone know why data.template_file.my_template.rendered comes out without the quotes…? I’m getting output with the quotes " completely nuked
data template_file "json" {
template = templatefile("${path.module}/template.json",
{
AWS_REGION = "${data.aws_region.current.name}"
}
)
}
data template_file "user_data" {
template = templatefile("${path.module}/user-data.sh",
{
JSON_TEMPLATE = data.template_file.json.rendered
)
}
# and when trying to printf ${JSON_TEMPLATE} in user-data.sh, all quotes are stripped
# instead it works if the original template starts with \"
What do you mean you had to use quotes?
I’m trying to pass the ${rendered} json file into the ec2 user-data.sh, but when I print ${rendered} in user-data.sh, it strips out every ". Instead, if my original json has \" instead of ", the ${rendered} json shows up with " when passed into user-data.sh
edit: added snippet back into main thread
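One way people often sidestep this kind of shell quoting problem is to base64-encode the rendered JSON in Terraform and decode it inside the script. A hedged sketch that mirrors the snippet above (the decode line in user-data.sh is illustrative only):
locals {
  json_config = templatefile("${path.module}/template.json", {
    AWS_REGION = data.aws_region.current.name
  })

  user_data = templatefile("${path.module}/user-data.sh", {
    # base64 keeps the quotes intact regardless of how the shell expands the value
    JSON_TEMPLATE_B64 = base64encode(local.json_config)
  })
}

# inside user-data.sh (illustration):
#   echo "${JSON_TEMPLATE_B64}" | base64 -d > /etc/app/config.json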
Are anyone else’s terraform plans failing with an x509: certificate has expired or is not yet valid: error?
Looks like its not just me : https://news.ycombinator.com/item?id=35295216
Thanks! We just saw that too but hadn’t dug into it yet
Might be related to this
https://www.bleepingcomputer.com/news/security/githubcom-rotates-its-exposed-private-ssh-key/
GitHub has rotated its private SSH key for GitHub.com after the secret was accidentally published in a public GitHub repository. The software development and version control service says the private RSA key was only “briefly” exposed, but that it took action out of “an abundance of caution.”
looks like just a coincidence that SSH key rotation and SSL cert expired happened on the same day
2023-03-27
2023-03-28
hey, I’m looking into using the following terraform module https://github.com/cloudposse/terraform-aws-glue but when I want to specify multiple partition keys I’m not sure what the best way is; from the variable I can see that it supports only one
variable "partition_keys" {
# type = object({
# comment = string
# name = string
# type = string
# })
# Using `type = map(string)` since some of the the fields are optional and we don't want to force the caller to specify all of them and set to `null` those not used
type = map(string)
description = "Configuration block of columns by which the table is partitioned. Only primitive types are supported as partition keys."
default = null
}
am I missing something?
Terraform modules for provisioning and managing AWS Glue resources
dynamic "partition_keys" {
partition_keys is a map
yes, I’ve seen it, so it can accept only one partition?
partition_keys = {
key1 = {
name =
comment =
type =
}
key2 = {
name =
comment =
type =
}
}
thanks!
2023-03-29
Anyone using karpenter in their EKS terraform stack? and if so, how did you integrate it in sanely?
Used helm_release to deploy and then the kubectl resource to provision Karpenter provisioners
And the IAM/OIDC/SQS bits?
With a module. Terraform-aws-modules/eks/aws/modules/karpenter
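A rough, hedged sketch of that combination; module inputs, outputs, and chart values vary by version, so treat all names and versions here as assumptions:
module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "~> 19.0"   # assumption: pin to the version you actually use

  cluster_name = module.eks.cluster_name   # assumption: hypothetical existing EKS module
}

resource "helm_release" "karpenter" {
  name             = "karpenter"
  namespace        = "karpenter"
  create_namespace = true
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter"
  version          = "v0.27.0"   # assumption

  set {
    name  = "settings.aws.clusterName"
    value = module.eks.cluster_name
  }
  # IRSA role, node instance profile, and the interruption SQS queue come from the
  # karpenter submodule outputs; exact output names depend on the module version.
}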
In the Atmos “Hierarchical Layout” there seem to be a lot of assumptions about the way we organize our OUs and accounts. I assume this is because it has been a working strategy for Cloud Posse.
However, it seems to be making it much more difficult to adopt into our tooling.
E.g. The hierarchical layout assumes that the accounts living directly under each OU are only separate stages of a single account.
This is because the stage variable from the name_pattern is tied to the stack living directly under an OU tenant.
You can change the name_pattern, but it won’t break the overall assumption that stacks actually cannot be per-account.
The assumption is more strict than that, because we’re limited to the following variables in the name_pattern:
• namespace
• tenant
• stage
• environment
Case: Sandbox accounts. What if we wanted to provision defaults for sandbox accounts for our developers?
These sandbox accounts might live in a Sandbox OU (tenant), but they aren’t necessarily separate stages of one another, at all.
There is no feasible strategy with the name_pattern without breaking the behavior of other stacks.
One option could be to combine our account name and region into the environment variable (possibly without side-effects?) like so: sandbox-account-1-use1.yaml
But then we would be left with several directories where nesting would be better organized like so: sandbox-account-1/use1.yaml
I can only think that we should have an additional variable in the name_pattern, for example name, to truly identify the account.
I hope I’ve missed something and Atmos does have the flexibility for this. Any advice would be much appreciated!
Please use atmos
Architecture is entirely flexible. We work with lots of enterprises with very custom requirements.
Everything can be adjusted using https://atmos.tools/cli/configuration
Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
The only thing rigid right now is our context parameters of (name, namespace, tenant, etc). We plan to eliminate these discrete parameters sometime this year.
…and support arbitrary naming conventions, while still requiring naming conventions.
Thanks @Erik Osterman (Cloud Posse)
In this case, yeah I don’t think it’s actually an issue with atmos specifically. These context parameters are used everywhere by Cloud Posse so atmos is just using them too. It seems in this case, they actually weren’t specific enough.
Another way of thinking of it is that you can only go at most 4 levels deep in your hierarchy: namespace, tenant, stage, environment, for example.
But there are cases where 5 levels might be needed: namespace, tenant, account-name, stage, environment.
Please use atmos
I may have found my answer, I’ll report back if I do.
I did not find an answer.
Here, atmos describe stacks reports only 2 stacks, but there are actually 4 unique stacks with this structure, which can be thought of as:
• acme-workloads-data-prod-us-east-1
• acme-workloads-data-test-us-east-1
• acme-workloads-jam-prod-us-east-1
• acme-workloads-jam-test-us-east-1
However, atmos only sees:
• acme-workloads-test-us-east-1
• acme-workloads-prod-us-east-1
My current solution is to combine the OU and application/account name into the tenant variable. We’ll see if that works:
tenant: workloads-data
let’s move to atmos
Would appreciate more eyes on these two AWS Provider issues:
2023-03-30
I’m new to this community, and I started using the cloudposse modules. I managed to use them to install our EKS cluster and node-group. Is there a cloudposse module available that can create the required IAM roles for EKS so I can use the ALB ingress controller? Or do you suggest another ingress controller?
Our implementation of our modules is what we call the refarch
The root modules (aka components) are defined here.
Our root modules for EKS are here https://github.com/cloudposse/terraform-aws-components/tree/master/modules/eks
All of the docs are centralized here https://docs.cloudposse.com/components/
Terraform Components
We integrate with OIDC and support RBAC. https://docs.cloudposse.com/components/catalog/aws/eks/cluster/
This component is responsible for provisioning an end-to-end EKS Cluster, including managed node groups.
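If you only need the IRSA role for the AWS Load Balancer Controller (the successor to the ALB ingress controller) and don’t want to adopt the whole refarch, one common approach is the terraform-aws-modules IRSA submodule. A minimal sketch, where the module version and the module.eks reference are assumptions:
module "lb_controller_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "~> 5.0" # assumed version constraint

  role_name                              = "aws-load-balancer-controller"
  attach_load_balancer_controller_policy = true

  oidc_providers = {
    main = {
      provider_arn               = module.eks.oidc_provider_arn # assumes an existing module.eks
      namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
    }
  }
}
The role ARN then goes on the controller’s service account annotation (eks.amazonaws.com/role-arn) when installing the chart.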
@Dan Miller (Cloud Posse) looks like an example for this is not in this doc: @Linda Pham (Cloud Posse)
v1.4.3 (March 30, 2023) BUG FIXES: Prevent sensitive values in non-root module outputs from marking the entire output as sensitive [GH-32891]; Fix the handling of planned data source objects when storin…
The outputs from non-root modules which contained nested sensitive values were being treated as entirely sensitive when evaluating them from state during apply. In order to have detailed informatio…
Hi all. I’m encountering a bug with msk-apache-kafka-cluster
(this). I’ve created a company module specifically from it w/ this code:
main.tf
module "msk-apache-kafka-cluster" {
source = "cloudposse/msk-apache-kafka-cluster/aws"
version = "1.1.1"
name = var.name
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
kafka_version = var.kafka_version
associated_security_group_ids = var.associated_security_group_ids
broker_instance_type = var.broker_instance_type
broker_per_zone = var.broker_per_zone
}
variables.tf
variable "name" {
type = string
}
variable "vpc_id" {
type = string
}
variable "subnet_ids" {
type = list(string)
}
variable "kafka_version" {
type = string
}
variable "associated_security_group_ids" {
type = list(string)
}
variable "broker_instance_type" {
type = string
}
variable "broker_per_zone" {
type = number
}
As I am using terragrunt to invoke the tf-module, my terragrunt.hcl looks like this:
terraform {
source = "[email protected]:myplace/terraform-modules.git//snowplow?ref=snowplow/v1.1.0"
}
dependency "vpc" {
config_path = "../vpc"
}
inputs = {
name = "myplace-snowplow-test"
vpc_id = dependency.vpc.outputs.vpc_id
subnet_ids = dependency.vpc.outputs.private_subnets
kafka_version = "2.8.1"
associated_security_group_ids = [dependency.vpc.outputs.default_sg_id]
broker_instance_type = "kafka.t3.small"
broker_per_zone = 1
}
It has a dependency on the outputs of the vpc module, which show up like this:
azs = tolist([
"us-east-1a",
"us-east-1b",
"us-east-1c",
])
cgw_ids = []
database_subnets = [
"subnet-05ace04da69c0a5c3",
"subnet-03f094702e6413a5c",
"subnet-0e29e3ea07b3161bd",
]
default_sg_id = "sg-019db31e6084d695b"
intra_subnets = [
"subnet-02fa8b695b63f36be",
"subnet-068a94b0fcb72c6bf",
"subnet-0edb9a2c27f57b067",
]
nat_public_ips = tolist([
"3.231.112.27",
])
private_subnets = [
"subnet-047d998dd1bb4e300",
"subnet-02627f60507ea09fb",
"subnet-00ffed109a79644da",
]
public_subnets = [
"subnet-0b82cf0a6e280600a",
"subnet-0512c45da9cac36f2",
"subnet-0588f61d9b5307245",
]
this_customer_gateway = {}
vpc_cidr = "10.2.0.0/16"
vpc_id = "vpc-0adb2021bba46a1c5"
When I try to run the snowplow module however, I’m getting the following error:
Error: creating Security Group (glorify-snowplow-test): InvalidVpcID.NotFound: The vpc ID 'vpc-0adb2021bba46a1c5' does not exist
│ status code: 400, request id: 3f7c6a9c-0025-4baa-8345-f44496a95c7f
│
│ with module.msk-apache-kafka-cluster.module.broker_security_group.aws_security_group.default[0],
│ on .terraform/modules/msk-apache-kafka-cluster.broker_security_group/main.tf line 24, in resource "aws_security_group" "default":
│ 24: resource "aws_security_group" "default" {
That vpc exists (per the outputs above), and it’s in the console as well. Even when I hardcode that variable in the terragrunt.hcl, it gives the same error.
Is this a bug that I need to open a report for?
It looks like the latest release of the terraform-aws-vpc-flow-logs-s3-bucket module is using a version of terraform-aws-s3-log-storage that is 8 versions out-of-date:
https://github.com/cloudposse/terraform-aws-vpc-flow-logs-s3-bucket/blob/master/main.tf#L165-L166
https://github.com/cloudposse/terraform-aws-s3-log-storage/releases
^ At the moment, I’ve experienced deprecation warnings due to this, such as:
Use the aws_s3_bucket_server_side_encryption_configuration resource instead
v1.4.4 1.4.4 (March 30, 2023) Due to an incident while migrating build systems for the 1.4.3 release where CGO_ENABLED=0 was not set, we are rebuilding that version as 1.4.4 with the flag set. No other changes have been made between 1.4.3 and 1.4.4.
Any tricks for adding *new tags* to *existing ec2 instances* that have already been deployed via terraform aws_autoscaling_group, *without relaunching* the instances? If I just update the tags in the .tf file, the existing instances will not get them; relaunching the instances (eg via the ASG) seems overkill just for tags, and adding tags directly to the instances via the CLI is error prone (I could easily miss some instances, tag incorrect instances, use incorrect values, etc).
Thinking perhaps I could create an output that shows the aws command to run for each ASG (ie the module that creates each ASG could do that, injecting the ASG name, tags map, etc into the aws command). The root module would just aggregate them, so I would only have to copy and paste in a shell; that mitigates most of the risks.
Just hoping there’s a trick I haven’t thought of…
Looks like detaching then re-attaching the ec2 instances of an ASG might be better; has anyone tried that? Or setting the instances of an ASG to standby, but it’s not clear from the docs whether exiting standby will update tags; seems unlikely.
hmmm I might be able to use a resource aws_ec2_tag with a for_each that is empty when var.update_asg_instances is false, and is the list of ASG instances otherwise. Then if I see that the tags of an ASG are planned for update, I abort the plan and set var.update_asg_instances=true, giving the resource the ASG’s tags; after the apply, I terraform state rm the aws_ec2_tag resources and rerun apply with var.update_asg_instances=false. Sounds complicated but it can actually be done in 3 lines of bash script (using --var). Not bad, beats using the aws cli. Something like the sketch below.
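A rough sketch of that toggle; the variable names, the aws_autoscaling_group.this reference and var.tags are illustrative, not from any actual module:
variable "update_asg_instances" {
  type    = bool
  default = false
}

# Look up the instances currently attached to the ASG (only when the toggle is on)
data "aws_instances" "asg" {
  count = var.update_asg_instances ? 1 : 0

  instance_tags = {
    "aws:autoscaling:groupName" = aws_autoscaling_group.this.name # illustrative reference
  }
}

locals {
  # One aws_ec2_tag resource per (instance, tag key) pair
  asg_instance_tags = var.update_asg_instances ? {
    for pair in setproduct(data.aws_instances.asg[0].ids, keys(var.tags)) :
    "${pair[0]}/${pair[1]}" => { instance_id = pair[0], key = pair[1] }
  } : {}
}

resource "aws_ec2_tag" "asg_instances" {
  for_each = local.asg_instance_tags

  resource_id = each.value.instance_id
  key         = each.value.key
  value       = var.tags[each.value.key]
}
Apply with --var update_asg_instances=true, then terraform state rm the aws_ec2_tag resources and re-apply with the toggle off, as described above.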
it’s a one-off task. Update the ASG configuration so new instances are tagged properly, and remediate any existing instances by hand (retag OR terminate)
I would use the Tag Editor to find the target resources (hopefully they have some other tag in common) and then add the new tag in bulk. I would only update the TF to apply the tag as needed to new resources vs trying to remediate existing resources.
https://docs.aws.amazon.com/tag-editor/latest/userguide/find-resources-to-tag.html
Use Tag Editor to find resources that you want to tag, apply tags, remove tags, or change existing tags.
I agree that the current tf + aws provider does not seem like a good place to handle this;
BUT my desired state is well defined: I want all instances of an ASG to have the tags of the ASG that manages them. A custom provider that uses the AWS SDK would have no problem ensuring that this is true, without stepping on the aws provider’s toes, since the tags of the instances created by the ASG are not tracked by the terraform aws provider.
It’s unfortunate that the AWS provider does not provide a resource that supports that notion.
I think you misunderstand the goal of the AWS provider. It’s not to provide all features to everyone, but to be a relatively thin and unopinionated wrapper around the underlying AWS API.
If you want this feature, direct your feature request to AWS to support “updating the tags of ASG instances when I update the ASG’s tags”; the Terraform provider will then implement support for it.
So I’m trying to feed CIDRs into security_group_rules from the ec2-instance module and it’s complaining about:
│ The given value is not suitable for
│ module.instance.module.box.var.security_group_rules
│ declared at
│ .terraform/modules/instance.box/variables.tf:72,1-32:
│ element types must all match for conversion to list.
I have two locals: the first is the output of formatlist("%s/32", var.ips) and the second is concat(local.first_local, ["127.0.0.1", "127.0.0.2"]). Feeding the former as cidr_blocks: local.first_local works fine, but with cidr_blocks: local.second_local it fails, and I’m boggled about which part it thinks isn’t the same type. Debug output shows:
got: {{{} [{{{} map[cidr_blocks:{{{} [{{{} %!s(cty.primitiveTypeKind=83)}}]}} description:{{{} %!s(cty.primitiveTypeKind=83)}} from_port:{{{} %!s(cty.primitiveTypeKind=78)}} protoco
l:{{{} %!s(cty.primitiveTypeKind=83)}} to_port:{{{} %!s(cty.primitiveTypeKind=78)}} type:{{{} %!s(cty.primitiveTypeKind=83)}}] map[]}} {{{} map[cidr_blocks:{{{}}} description:{{{} %!s(
cty.primitiveTypeKind=83)}} from_port:{{{} %!s(cty.primitiveTypeKind=78)}} protocol:{{{} %!s(cty.primitiveTypeKind=83)}} to_port:{{{} %!s(cty.primitiveTypeKind=78)}} type:{{{} %!s(cty.
primitiveTypeKind=83)}}] map[]}}]}}
want: {{{} {{{}}}}}
But that’s way too many brackets for me to grok.
So wrapping the concat in a flatten seemed to work, but I still dunno where the empty values are coming from
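For reference, a sketch of the locals with that workaround (variable and local names assumed from the description above). One common cause of the “element types must all match” error is that one element of the tuple ends up being a list itself, and flatten() collapses that nesting; the rule object shape here is inferred from the debug output above:
locals {
  first_local  = formatlist("%s/32", var.ips)                              # e.g. ["10.0.0.1/32", ...]
  second_local = flatten(concat(local.first_local, ["127.0.0.1", "127.0.0.2"]))
}

# Illustrative usage with the ec2-instance module's security_group_rules input
security_group_rules = [
  {
    type        = "ingress"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = local.second_local
    description = "HTTPS from allowed CIDRs"
  }
]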
2023-03-31
Hi folks, is anyone aware of what sort of data is being shared back with Mend.io when installing the Renovate GH App? I’m trying to sell it to my team but the GH Org admin requires all sorts of compliance rules around the data being transferred, even when using the free version of Renovate.