#terraform (2024-07)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2024-07-01
2024-07-02
I have created an EC2 instance in Terraform with a userdata template. In the template I install and set up WireGuard, and define a few users. But adding/removing users from the user data doesn’t redeploy the instance?! Terraform apply shows 1 change to make and the server is shut down, AWS shows the updated userdata, but when the server is back up I don’t see any change in users. I have tried to add a step in the user data to delete the config file. Still no change. Is there a way I can force Terraform to completely destroy the EC2 instance on every apply?
If user_data_replace_on_change is set, then updates to this field will trigger a destroy and recreate.
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#user_data
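Roughly like this, as a sketch (the AMI variable, template path, and user list are placeholders, not from the original config):
resource "aws_instance" "wireguard" {
  ami           = var.ami_id   # placeholder
  instance_type = "t3.micro"   # placeholder

  # Any change to the rendered template (e.g. adding/removing users) changes user_data
  user_data = templatefile("${path.module}/userdata.sh.tpl", {
    users = var.wireguard_users # hypothetical variable
  })

  # Destroy and recreate the instance when user_data changes,
  # instead of the default stop/modify/start behavior
  user_data_replace_on_change = true
}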
user data used to be a replacing change, now it is only optionally so. by default changes will only stop/start the instance
there are also ways to get user data to run on every startup, but i think that’s more of a cloud-init or ec2-config thing, and not specifically an aws or terraform thing
I didn’t know about the user_data_replace_on_change
Going to give that a go..
here’s how to set up per-boot scripts with cloud-init, if you want to go that route… https://cloudinit.readthedocs.io/en/latest/reference/modules.html#scripts-per-boot
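If you go the cloud-init route instead, a rough sketch (the script body is a placeholder; the per-boot path is from the cloud-init docs) is to have user_data write a script into the per-boot directory so it re-runs on every start:
locals {
  wireguard_user_data = <<-EOT
    #cloud-config
    write_files:
      - path: /var/lib/cloud/scripts/per-boot/sync-wireguard-users.sh
        permissions: "0755"
        content: |
          #!/bin/bash
          # Placeholder: re-render /etc/wireguard/wg0.conf from the desired user list
          systemctl restart wg-quick@wg0
  EOT
}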
Thanks a million :slightly_smiling_face: user_data_replace_on_change did the trick!
2024-07-03
v1.9.1 1.9.1 (July 3, 2024) UPGRADE NOTES:
Library used by Terraform (hashicorp/go-getter) for installing/updating modules was upgraded from v1.7.4 to v1.7.5. This addresses CVE-2024-6257. This change may have a negative effect on performance of terraform init or terraform get in case of larger git repositories. Please do file an issue if you find the performance difference noticeable.
2024-07-04
anyone here used TerraMaid before? https://github.com/RoseSecurity/Terramaid
A utility for generating Mermaid diagrams from Terraform configurations
I think @Michael has
A utility for generating Mermaid diagrams from Terraform configurations
Note, Terramaid is a very young project under active development.
@Allan Swanepoel I created the tool as a learning project, but I’m actively working on it each day, so stay tuned for it to get more mature!
that’s awesome @Michael
:wave: I am wondering if it’d make sense to add support for configuring lambda permissions (i.e. who can invoke the function) directly in the aws-lambda-function module? This is the resource we could add, with a variable (presumably a list) for configuring at least the principal and source_arn attributes for each permission entry. It kinda feels natural to be able to declare the permissions in the lambda config, but I am not sure if we could run into some circular dependency issues this way. In my use case, it’s an S3-triggered function, so the bucket source ARN would be known in advance and this pattern would work. What do you think?
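Something like this, as a rough sketch of the proposed interface (the variable shape and internal resource names are hypothetical, not the module’s current API):
variable "permissions" {
  description = "Principals allowed to invoke the function"
  type = list(object({
    principal  = string
    source_arn = optional(string)
  }))
  default = []
}

resource "aws_lambda_permission" "this" {
  for_each = { for i, p in var.permissions : tostring(i) => p }

  statement_id  = "AllowInvoke${each.key}"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.this.function_name # hypothetical internal resource name
  principal     = each.value.principal
  source_arn    = each.value.source_arn
}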
This is an internal debate we struggle with on a regular basis. Where is the line drawn? Typically, modules are used inside of other modules, and things like IAM policies are better expressed as HCL. That said, would you be open to proposing an issue with the hypothetical interface?
We can discuss that and if it looks good, and you’re willing to implement it, then go for it.
Yep. In our case, we deploy using terragrunt, so being able to deploy straight off of the terraform registry module (not having to create a wrapper module just to add one resource for the permissions) would actually help. I also kind of like how the author of the lambda config then gets to decide who can invoke it (similar to how they can define the lambda IAM role permissions). Also, this would be optional (for_eached with a default of []), so one could still declare the permissions elsewhere if a more complex setup is needed. I can whip up a simple PR with a proposal to discuss this further. Thanks for the quick response
I’ve updated the PR with readme updates. I had missed this step on my previous PR, too, so the diff now includes docs for the inline IAM policy feature as well.
@Jeremy White (Cloud Posse) or @Ben Smith (Cloud Posse) can you do the final sign off?
Looks like terratest is failing with complaints about the new s3 bucket resource.
@Jeremy White (Cloud Posse) At first glance, it looks like the lambda permission and the s3 notification resource creation are racing (creation of the notification requires the permission). I’ll have a look at fixing this, when I’m back at a computer (on vacation right now).
Hello :party_parrot:
I’ve recently become interested in Atmos and am doing a PoC on a small project within my company with Atmos.
While using it, I am satisfied with most of the features, and it is well documented so I had no problem learning it. However, I have a question about using Template Functions for data sharing between stacks.
Instead of using the native Terraform modules from Cloud Posse, I created the necessary root modules myself, so I don’t use the RemoteState method.
If an output is a list of strings rather than a simple string, then when it is referenced from another stack it gets converted to a string and comes through as [item1 item2 item3] or something like that.
How do I get it referenced as a list normally?
@Andriy Knysh (Cloud Posse)
@Junk please see the related thread here https://sweetops.slack.com/archives/C031919U8A0/p1719132460159829
Hi, I’m testing out atmos v1.81 template functions release: How do I get array element for this one?
- '{{ (atmos.Component "aws-vpc" .stack).outputs.private_subnets }}'
in short, when you use Go templates, you are manipulating text files (the fact that the files are YAML is not relevant to the templating engine)
and Go templates work with strings only
so in your files, you have to “shape” the result strings into the correct data types. For example, for lists, you can use the toJson function (since JSON is a subset of YAML), or the range function (see the thread)
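For example, a sketch along those lines (assuming the aws-vpc component exposes a private_subnets list output, and that, as described above, the template is rendered as text before the YAML is parsed), piping the output through toJson so the rendered value is a JSON/YAML list rather than Go’s default string rendering:
vars:
  # renders to e.g.  private_subnets: ["subnet-aaa", "subnet-bbb"]  which YAML parses as a list
  private_subnets: {{ (atmos.Component "aws-vpc" .stack).outputs.private_subnets | toJson }}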
I was lacking a reference, so thank you for explaining it so well. I’ll try to test it out.
2024-07-05
Hi everyone, Is there a provider function available for the equivalent of this module, by any chance? https://github.com/cloudposse/terraform-null-label I think a provider function might be easier/cleaner to use, than a module.
We use an internal module, context, for this, and I actually think it is quite nice, because we can generate naming standards internally. For example, we can use ctx.prefix_global or just get the region with ctx.region. Then we can set prefixes to contain or build namespaces automatically based on context. Best of all, we can use that across providers, so one can use the same context for both an aws provider and a postgres provider.
Interesting! do you have any examples for that, please?
I don’t directly, because it is internal, but it is something like this:
module "ctx" {
source = "gitlab.internal/foo/context/aws"
version = ">= 1"
active = var.active
env = var.env
erect = var.erect
namespace = var.namespace
}
module "core" {
source = "gitlab.internal/foo/core/aws"
version = "~> 2.4"
azs = var.azs
env = module.ctx.env
}
locals {
api_name = "${module.ctx.prefix}-api-int"
}
thanks for the details, that helps
Yes, Cloud Posse has a provider
Terraform provider for managing a context in Terraform.
With an example of using it here: https://github.com/cloudposse/atmos/tree/main/examples/demo-context
This example is with atmos
2024-07-08
Is Atlantis the best free / foss TACOS?
depends a little on whether you need a taco to provide a runner, or if you want to use runners for gitlab-ci or github-actions… for the former, probably yes. for the latter, you have cli tools like digger/terramate/atmos that integrate with the build system and the repo hosts
seems like atlantis it is for now then. thanks!
2024-07-09
2024-07-10
Hi All, I want to deploy a cloudformation stackset in parallel over multiple accounts in one region.
Currently I use:
resource "aws_cloudformation_stack_set_instance" "this" {
  operation_preferences {
    max_concurrent_percentage = 50
    region_concurrency_type   = "PARALLEL"
  }
}
but it does not scale beyond 1 deployment
Anyone know how to do this?
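Not verified against your setup, but one pattern worth checking (a sketch; the stack set reference, region, and OU variable are placeholders) is to point a single instance resource at an OU via deployment_targets, which requires a SERVICE_MANAGED stack set, so it fans out across all member accounts while operation_preferences controls the concurrency:
resource "aws_cloudformation_stack_set_instance" "this" {
  stack_set_name = aws_cloudformation_stack_set.this.name # placeholder
  region         = "eu-west-1"                            # placeholder

  # Deploy to every account under an OU instead of one account per resource
  deployment_targets {
    organizational_unit_ids = [var.target_ou_id] # hypothetical variable
  }

  operation_preferences {
    region_concurrency_type   = "PARALLEL"
    max_concurrent_percentage = 50 # % of accounts deployed at once per region
  }
}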
v1.9.2 1.9.2 (July 10, 2024) BUG FIXES:
core: Fix panic when self-referencing direct instances from count and for_each meta attributes. (#35432)
This PR fixes a crash that occurs when self-referencing direct instances from the count and for_each meta arguments. The same behaviour was also happening within the import blocks. These have been …
2024-07-11
Anyone utilizing Hashicorp Sentinel for Policy-as-Code in your pipelines? We’ve been thinking about different ways to incorporate policies into pipelines to make approval processes smoother for infra provisioning and curious if anyone had any recommendations
2024-07-12
2024-07-17
v1.10.0-alpha20240717 1.10.0-alpha20240717 (July 17, 2024) EXPERIMENTS: Experiments are only enabled in alpha releases of Terraform CLI. The following features are not yet available in stable releases.
ephemeral_values: This language experiment introduces a new special kind of value which Terraform allows to change between the plan phase and the apply phase, and between plan/apply rounds….
Hello, can anybody please point me to a simple workable solution for implementing a maintenance page for a Beanstalk application behind an ALB? Thank you
Depends a little bit on your requirements
One option is using route53 health checks and failing over to another service which could be your maintenance page
One of the requirements is that devs can turn maintenance mode on and off
It depends, though, on what level. E.g. if you want to work on the load balancer, it needs to be at a higher level.
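For the Route53 failover idea above, a rough sketch of the shape (all names, zone IDs, and the maintenance target are placeholders; the secondary could be a CloudFront/S3 static maintenance page):
resource "aws_route53_health_check" "app" {
  fqdn              = "app.example.com" # placeholder
  type              = "HTTPS"
  resource_path     = "/health"
  failure_threshold = 3
  request_interval  = 30
}

resource "aws_route53_record" "primary" {
  zone_id         = var.zone_id # placeholder
  name            = "app.example.com"
  type            = "A"
  set_identifier  = "primary"
  health_check_id = aws_route53_health_check.app.id

  failover_routing_policy {
    type = "PRIMARY"
  }

  alias {
    name                   = aws_lb.app.dns_name # the ALB in front of Beanstalk
    zone_id                = aws_lb.app.zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "maintenance" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "maintenance"

  failover_routing_policy {
    type = "SECONDARY"
  }

  alias {
    name                   = aws_cloudfront_distribution.maintenance.domain_name # placeholder
    zone_id                = aws_cloudfront_distribution.maintenance.hosted_zone_id
    evaluate_target_health = false
  }
}
To let devs flip maintenance mode on demand, one option is to give them control over the health endpoint (or toggle the health check’s invert_healthcheck argument), so failover does not depend on an actual outage.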
2024-07-18
Heyoo, it’s me again. I just published this comparing 5 LLMs on a specific Terraform code generation task (and of course included ourselves at the end)
We’re trying to figure out how to improve IaC workflows in general; code generation alone is not enough, as you all know writing Terraform is not the worst part.
I really appreciate your feedback, or if you’d like to share more edge cases where an LLM screwed up so we could add them to the benchmark we’re working on
thanks @Erik Osterman (Cloud Posse) we keep experimenting and pushing the DX to be 10x better than a code editor, I’m betting that an IDE plugin is a piece of the puzzle but so much more is needed. I’ll post more on that as things come up.
2024-07-21
On the surface it seemed like a match made in heaven.
Hi Everyone, any idea what the API rate limit is for the terrasnek cancel run or discard run APIs?
2024-07-23
Hello,
I created a PR on the terraform-aws-waf module. Can you review my PR and validate it if you have time?
Currently I have an idempotency problem and this feature will fix that.
Don’t hesitate if you have questions.
PR: https://github.com/cloudposse/terraform-aws-waf/pull/91
Thank you
@Erik Osterman (Cloud Posse)
Hello,
what
• I added the “enable_machine_learning” argument inside the “aws_managed_rules_bot_control_rule_set” config
why
• I added this argument to avoid an idempotency problem when using the “COMMON” inspection level.
regards,
Jgalais
@Andriy Knysh (Cloud Posse)
https://github.com/cloudposse/terraform-aws-waf/releases/tag/v1.8.0. @galais.jerome thanks
2024-07-24
v1.9.3 1.9.3 (July 24, 2024) ENHANCEMENTS:
Terraform now returns a more specific error message in the awkward situation where an input variable validation rule is known to have failed (condition returned false) but the error message is derived from an unknown value. (#35400)
BUG FIXES:
core: Terraform no longer performs an unnecessary…
In cases where the condition is known but the error_message is not, we were previously returning the generic error about the error message not being evaluable. This is an interim change to give bet…
How do I run atmos terraform <stuff> and capture the output without ANSI codes? atmos terraform <stuff> -no-color seems to eliminate ANSI output from terraform, but Atmos adds its own ANSI. Did a quick code search without joy. Is there a CLI option, or will I need to post-process ANSI out of the captured output?
if Atmos logs messages, it should log them to stderr if configured correctly
Use the atmos.yaml configuration file to control the behavior of the atmos CLI.
Atmos does not eliminate ANSI codes, but it should be logging to /dev/stderr
It may log to stderr, but if I run atmos terraform plan <name> -s platform-use2-dev -no-color, the Terraform output (plan info) is uncolored, but atmos itself is injecting ANSI-colored/styled text such as “Terraform has been successfully initialized!” (in green). There does not seem to be a way to turn that off (unlike grep, say, which has a --color=never option, or Terraform, which has -no-color). Atmos seems to always emit ANSI into my output, which makes it harder to copy output plans into GitHub comments, Jira tickets, etc.
Here’s an example of what that looks like when pasted into GitHub comment:
ok, so this command works: atmos terraform plan <name> -s platform-use2-dev -no-color, but -no-color is only applied to terraform plan, while it also executes terraform init, which outputs those color messages (before terraform plan, which does not output ANSI codes)
you can skip over atmos calling terraform init if you know your project is already in a good working state by using the --skip-init flag like so atmos terraform <command> <component> -s <stack> --skip-init
Gotcha. So that ANSI output is an embedded tf init’s output. I don’t generally like to post-process formatting out of a document, but in this case, a sed or similar post-cleaner seems to make sense. Don’t want to worry about whether stacks are initialized or not. Leaving that to atmos is part of the value prop.
I’m actually quite surprised how bad GitHub is at consuming ANSI. Other popular tools (thinking Jupyter notebooks here) are surprisingly good at parsing colored/styled text inline and never miss a beat.
The hivemind suggests sed -r 's/\x1B\[[0-9;]*[mK]//g' as a de-ANSI tool, and indeed atmos terraform plan <stuff> | sed -r 's/\x1B\[[0-9;]*[mK]//g' >plan seems to work as desired. Pure text, no formatting in plan
@Jonathan Eunice have you considered https://developer.hashicorp.com/terraform/cli/config/environment-variables#tf_in_automation
Learn to use environment variables to change Terraform’s default behavior. Configure log content and output, set variables, and more.
@Andriy Knysh (Cloud Posse) https://sweetops.slack.com/archives/CB6GHNLG0/p1721857756341849?thread_ts=1721854899.495519&cid=CB6GHNLG0
he can add --skip-init to avoid that
while it also executes terraform init
I did find the -no-color CLI option, but when used in an Atmos context, it only affects the last tf command, not the implicit init.
However, TF_CLI_ARGS='-no-color' atf plan <stuff> does work. That feels like a win compared to post-processing the ANSI styling out.
using TF_CLI_ARGS is a nice idea @Jonathan Eunice
Yea, good call
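For reference, both variants (the second scopes the flag to just the implicit init via Terraform’s documented TF_CLI_ARGS_name convention):
# disable color for every Terraform subcommand Atmos runs
TF_CLI_ARGS='-no-color' atmos terraform plan <component> -s <stack>

# or target only the implicit terraform init, and pass -no-color to plan explicitly
TF_CLI_ARGS_init='-no-color' atmos terraform plan <component> -s <stack> -no-color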
btw, I see you’re using shell aliases. Have you also seen the atmos aliases?
You could add this to atmos.yaml
aliases:
tp: terraform plan
Then add the shell alias alias a=atmos and run a tp if you like things short
alias atf='atmos terraform'
alias tf='terraform'
alias helm='werf helm'
alias k='kubectl'
2024-07-25
Hello,
Currently we have the following structure in Terraform
Terraform
├── main.tf
└── vars
    └── dev
        ├── default.tfvars
        ├── cluster1.tfvars
        └── cluster2.tfvars
Currently what we do in order to deploy is run the following command per cluster:
terraform apply -var-file vars/<env>/default.tfvars -var-file vars/<env>/<cluster_name>.tfvars
Can Atmos help in making that into a single command that will deploy to all the clusters in a given environment?
@Andriy Knysh (Cloud Posse)
@Ido this is exactly one of the reasons that Atmos exists - to separate code (terraform/opentofu components) from configuration (stacks), and to make the configurations DRY and reusable across many environments, accounts, regions
the config you showed can be defined in Atmos stacks, and then yes, using just one command atmos terraform apply <component> -s <stack> it can be deployed into different environments
@Andriy Knysh (Cloud Posse) I didn’t understand how to achieve that from your tutorial.
I have the following structure
.
├── atmos.yaml
├── components
│ └── terraform
│ └── weather
│ ├── README.md
│ ├── main.tf
│ ├── outputs.tf
│ ├── variables.tf
│ └── versions.tf
└── stacks
├── catalog
│ └── station.yaml
└── deploy
├── dev
├── dev.yaml
├── prod.yaml
└── staging.yaml
But this allows me to have 1 station (cluster) per environment. I need to have multiple per environment that are using the same component, and can be deployed separately or together. Is this possible?
@Ido you are prob referring to Multiple Component Instances in the same stack, please see this doc https://atmos.tools/design-patterns/multiple-component-instances
Multiple Component Instances Atmos Design Pattern
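A minimal sketch of that pattern with your layout (the instance names and vars are placeholders), pointing two Atmos components at the same weather Terraform component in one stack manifest:
components:
  terraform:
    cluster1:
      metadata:
        component: weather # reuse components/terraform/weather
      vars:
        name: cluster1
    cluster2:
      metadata:
        component: weather
      vars:
        name: cluster2
Each instance can then be deployed on its own with atmos terraform apply cluster1 -s <stack>, or together via an Atmos workflow.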
2024-07-26
2024-07-28
2024-07-29
We tested a technique called grammar prompting (e.g. https://arxiv.org/abs/2305.19234) for different open-weight models on TF code generation, and these are the preliminary results
We’re still ironing out some details, adding Llama 3.1 and closed source models to the mix then will publish a more comprehensive writeup
I personally think there’s still a lot of room for improvement, and a big opportunity for new GenAI-assisted IAC tooling
Large language models (LLMs) can learn to perform a wide range of natural language tasks from just a handful of in-context examples. However, for generating strings from highly structured languages (e.g., semantic parsing to complex domain-specific languages), it is challenging for the LLM to generalize from just a few exemplars. We propose grammar prompting, a simple approach to enable LLMs to use external knowledge and domain-specific constraints, expressed through a grammar in Backus–Naur Form (BNF), during in-context learning. Grammar prompting augments each demonstration example with a specialized grammar that is minimally sufficient for generating the particular output example, where the specialized grammar is a subset of the full DSL grammar. For inference, the LLM first predicts a BNF grammar given a test input, and then generates the output according to the rules of the grammar. Experiments demonstrate that grammar prompting can enable LLMs to perform competitively on a diverse set of DSL generation tasks, including semantic parsing (SMCalFlow, Overnight, GeoQuery), PDDL planning, and SMILES-based molecule generation.
2024-07-30
v1.10.0-alpha20240730 1.10.0-alpha20240730 (July 30, 2024) BUG FIXES:
The error message for an invalid default value for an input variable now indicates when the problem is with a nested value in a complex data type. (#35465)
Previously this error message was including only the main error message and ignoring any context about what part of a potential nested data structure it arose from. Now we’ll use our usual path for…