#terraform (2022-09)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2022-09-01
what
• Remove join splat on module.security_group_arn
why
• Fix conflict when using a custom security group in associated_security_group_ids while the create_security_group argument is false
references
• N/A
please post in #pr-reviews
Hi, I have a map object as below. I was able to go one level down and get the entire “dev” value. How do I get only the node_group_name value?
managed_node_groups = {
  "dev" = {
    eks = {
      node_group_name = "node-group-name1"
      instance_types  = ["m5.large"]
      update_config = [{
        max_unavailable_percentage = 30
      }]
    }
    mng_custom_ami = {
      node_group_name = "mng_custom_ami"
      custom_ami_id   = "ami-0e28cf2562b7b3c9d"
      capacity_type   = "ON_DEMAND"
    }
  }
  "qe" = {
    eks = {
      node_group_name = "node-group-name2"
      instance_types  = ["m5.large"]
    }
    mng_custom_ami = {
      node_group_name = "mng_custom_ami"
      custom_ami_id   = "ami-0e28cf2562b7b3c9d"
      capacity_type   = "ON_DEMAND"
      block_device_mappings = [
        {
          device_name = "/dev/xvda"
          volume_type = "gp3"
          volume_size = 150
        }
      ]
    }
  }
}
variable "env" {}
mng = var.managed_node_groups[var.env]
var.managed_node_groups[*].eks["node_group_name"]
Reference values in configurations, including resources, input variables, local and block-local values, module outputs, data sources, and workspace data.
Thank you. How do I get the node_group_name of just the first element for each environment, if I don't want to hardcode .eks below?
var.managed_node_groups[*].eks["node_group_name"]
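The [*] splat only iterates lists, not maps, so a for expression is the usual answer here. A minimal sketch against the map shown above (output names are invented; values() returns entries in lexical key order, so "eks" happens to come first in this data):

# every group's node_group_name for one environment
output "dev_node_group_names" {
  value = { for k, v in var.managed_node_groups["dev"] : k => v.node_group_name }
}

# the first group's node_group_name per environment, without hardcoding "eks"
output "first_node_group_name_per_env" {
  value = { for env, groups in var.managed_node_groups : env => values(groups)[0].node_group_name }
}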
2022-09-02
Could anyone suggest what the ideal auto-scaling setup would be for high traffic on ECS Fargate, and share a GitHub link for reference? Thanks in advance.
7 is the perfect scale
@Alex Jurkiewicz would you recommend any GitHub links for creating ideal autoscaling TF?
this slack is run by Cloudposse, who publish many Terraform modules. Check out their repos here: https://github.com/cloudposse/
DevOps Accelerator for Startups Hire Us! https://slack.cloudposse.com/
start with these resources, do a few tests,
resource "aws_appautoscaling_target" "ecs_target" {
max_capacity = 4
min_capacity = 1
resource_id = "service/${aws_ecs_cluster.example.name}/${aws_ecs_service.example.name}"
scalable_dimension = "ecs:service:DesiredCount"
service_namespace = "ecs"
}
resource "aws_appautoscaling_scheduled_action" "dynamodb" {
name = "dynamodb"
service_namespace = aws_appautoscaling_target.ecs_target.service_namespace
resource_id = aws_appautoscaling_target.ecs_target.resource_id
scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
schedule = "at(2006-01-02T15:04:05)"
scalable_target_action {
min_capacity = 1
max_capacity = 200
}
}
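For load-driven scaling (as opposed to the scheduled action above), a target-tracking policy on the same scalable target is the usual companion. A minimal sketch; the metric choice, target value, and cooldowns are assumptions to tune for your workload:

resource "aws_appautoscaling_policy" "ecs_cpu" {
  name               = "ecs-cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  resource_id        = aws_appautoscaling_target.ecs_target.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs_target.scalable_dimension
  service_namespace  = aws_appautoscaling_target.ecs_target.service_namespace

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
    # keep average CPU around 60%, scale out quickly, scale in slowly
    target_value       = 60
    scale_out_cooldown = 60
    scale_in_cooldown  = 300
  }
}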
in your case use CloudPosse’s modules as target
thank you
2022-09-03
2022-09-04
What is the best practice to install packages and configure a few settings on an EC2 instance? Do you prefer a provisioner with “remote-exec”, or Ansible, or Packer? I need to run an application on four EC2 instances with pre-configuration. I have a shell script ready but wanted to know a better approach.
I would suggest keeping the server configuration out of terraform and use something like Ansible instead.
For my projects that involve a server or two, an application installation, and a bit of configuration, I’ve found the following to be the best approach:
- Keep the application code in one repo
- Keep the TF infra code in another repo
- Keep the server and application config in another repo and use ansible to: a. Install user/service accounts b. Configure and update the server c. deploy the application
Having Ansible and config in its own repo makes it easy to manage and deploy environments in a way that doesn't require re-running TF or rebuilding the application. Also, it's much easier to track configuration changes vs app or infra changes. Yes, in some cases a big change requires coordination across all three repos, but in most cases (daily operation) the only thing that changes is the config repo, and it's much easier to track and apply changes there.
Thank you. I will revise my Ansible knowledge. I was planning to invest time to learn Packer (to build machine images) and then deploy/provision using Terraform.
Hi everyone, I'm supposed to create ECS in multiple regions using TF; right now ECS is running only in us-east-1. Could anyone help me solve this problem? Thanks in advance.
2022-09-05
Hey guys - I create an ECR repository in my TF. How do you flag the ECR part to avoid destroying it when executing terraform destroy?
You can delete the resources manually from the state file before running terraform destroy
See terraform state rm
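For example (the resource address is hypothetical; substitute the one from your configuration):

# remove the repository from state so destroy no longer manages it
terraform state rm aws_ecr_repository.this
terraform destroy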
Awesome! Thanks @Alex Jurkiewicz!
2022-09-06
I have created multiple EC2 instances using count. One of those instances was deleted using the -target option (or manually). In the subsequent deployment I want Terraform to skip recreating the manually deleted instance. How can I achieve this?
resource "aws_instance" "web" {
count = 4 # create four similar EC2 instances
ami = "ami-00785f4835c6acf64"
instance_type = "t2.micro"
tags = {
Name = "Server ${count.index}"
}
lifecycle {
ignore_changes = [
aws_instance.web[1]
]
}
}
I tried to implement this using lifecycle ignore_changes but I'm getting the error: This object has no argument, nested block, or exported attribute named “aws_instance”.
Any pointers on this?
I'm not sure that ignore_changes is compatible with what you want to achieve. You can ignore changes for a specific attribute or block of a resource, but [I THINK] not for an entire resource.
It's my own opinion; I'll let others answer whether it is possible.
Thanks @Pierre-Yves. If we reduce the count then it will be impacted across all the subnets. Is there any other option without reducing the count?
What do you mean by “reduce the count”?
For my part, I was not telling you to change your count ^^. I was just saying that I think you can't use the ignore_changes meta-argument for what you need.
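One common workaround (not from the thread, just a widely used pattern) is to drive the instances with for_each over a set instead of count, so removing a single key only removes that one instance. A minimal sketch:

variable "web_servers" {
  type    = set(string)
  default = ["0", "1", "2", "3"]
}

resource "aws_instance" "web" {
  for_each      = var.web_servers
  ami           = "ami-00785f4835c6acf64"
  instance_type = "t2.micro"

  tags = {
    Name = "Server ${each.key}"
  }
}

# dropping "1" from var.web_servers affects only aws_instance.web["1"];
# the remaining instances are left untouched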
How to make backward-compatible changes to modules already in use.
2022-09-07
Can anyone help me assign the ECS Fargate public IP to the target group? Right now the private IP is assigned to the target group.
v1.2.9 1.2.9 (September 07, 2022) ENHANCEMENTS: terraform init: add link to documentation when a checksum is missing from the lock file. (#31726)
Original PR: #31408 Backport PR: #31480 For some reason the backport process only picked up the first two commits from the original PR. This PR manually copies over the changes missed by the backpo…
2022-09-08
Hey guys,
an initial terraform apply failed due to expired AWS credentials. I updated the creds and re-ran apply; it failed once again because the resources already exist from the earlier partial apply.
How do you approach this kind of case?
I think a screen share might let me understand. If you can't rerun, something bigger is wrong, like the way the code is structured.
I don’t know what the resource is, the simple solution would be to delete it, if that is possible? Then it will be rebuilt.
I have seen it sometimes where a plan says resource will get remade, even though I think it isn’t needed.
Because your session expired while the resource was being created, and presumably your state lives in S3 or something similar (dependent on your session), the state has gone out of whack with reality.
In order to remediate you will need to perform terraform import
operations on the resources that were created and then not recorded into state.
I think when an apply fails due to expired credentials, it should save a tfstate locally; pushing this tfstate to your backend should fix the issue.
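For the import route mentioned above, the commands look roughly like this (the resource address and ID are hypothetical placeholders):

# re-attach the already-created resource to state, then verify the plan is clean
terraform import aws_s3_bucket.assets my-assets-bucket
terraform plan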
I discovered recently while I was looking at using HCL Go libraries to do our own config processing, that TF 1.3 will have some pretty awesome improvements to config defaults. And I saw in this channel a syndicated post about it just now, but it might have gotten missed, so I’m writing this.
The improvement actually goes way beyond providing the default value in the optional() function call. That improvement alone is great, because it allows for a much more natural way to declare default objects and makes the structure easier to grok (instead of using a separate default attribute in variable, or the defaults() function).
But HC also fixed a major issue with defaults merging as it exists in 1.2 (in both the default attribute and the defaults() function): 1.3 will create default nested objects to full depth based on the type spec, which the experimental support in 1.2 does not do, thus rendering the defaults() function almost useless (IMO).
There’s really only 2 use cases that these 1.3 improvements do not solve for me, but I can live without them (whereas the issues that 1.3 fixes were deal breakers for us and we were going to roll our own using hclwrite lib).
I’ll be moving our current in-house config system to use the new capabilities of 1.3 over the next few weeks (depends on client priorities, might take longer), very excited to see how far I can get.
v1.3.0-beta1 1.3.0 (Unreleased) NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…
does the defaults()
function still even exist in 1.3? i thought it was part of the optional experiment, and the experiment was removed in 1.3…
yes defaults()
has been removed entirely (the experiment_optional option has been removed altogether). Only optional()
is left (and it’s a lot better than previous, as I explained).
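To make that concrete, a minimal sketch of the 1.3 behaviour (attribute names and defaults are invented):

variable "service" {
  type = object({
    name     = string
    replicas = optional(number, 2)
    ingress = optional(object({
      enabled = optional(bool, false)
      host    = optional(string, "example.internal")
    }), {})
  })
}

# calling the module with service = { name = "api" } yields
#   { name = "api", replicas = 2, ingress = { enabled = false, host = "example.internal" } }
# i.e. the nested object is constructed to full depth from the declared defaults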
yep, tracking. long thread on its progress here, https://discuss.hashicorp.com/t/request-for-feedback-optional-object-type-attributes-with-defaults-in-v1-3-alpha/40550
Hi all , I’m the Product Manager for Terraform Core, and we’re excited to share our v1.3 alpha , which includes the ability to mark object type attributes as optional, as well as set default values (draft documentation here). With the delivery of this much requested language feature, we will conclude the existing experiment with an improved design. Below you can find some background information about this language feature, or you can read on to see how to participate in the alpha and pro…
yes that’s how I found out about it
Actually, found out about it in https://github.com/hashicorp/terraform/issues/28344 which also has interesting background about current (ie 1.2 experiment) limitations and links to that one you posted
It should be great, but I wouldn't be too quick to use Terraform betas. Some of them have done things like zeroing out state in the past.
I think a 1.x beta (or perhaps even x.0) had a bug where it would plan to remove all resources in certain conditions?
hey guys, how are you managing user creation in rds, any best practices ?
clusters?
aurora?
global?
mysql?
we need more details
Aurora/RDS MySQL clusters. I tried to search for a resource in Terraform to create generic users other than the master one, but couldn't find any.
there is a mysql user provider you can use
Not? Use IAM connected RDS user integration
you can use that too, yes I forgot about that
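For the MySQL-provider route, a rough sketch (the provider source, endpoint format, and names are assumptions; adjust for your cluster):

terraform {
  required_providers {
    mysql = {
      source = "petoju/mysql"
    }
  }
}

provider "mysql" {
  endpoint = "${aws_rds_cluster.example.endpoint}:3306"
  username = "admin"
  password = var.master_password
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = "appdb"
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}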
2022-09-09
Module development and best practices: looking for experience and opinions.
Tough not to have some of these overlap with just vanilla TF practices, but I'm doing this for my team and thought I would post here for other people's input as well:
• modules do not reinvent the wheel e.g. if there is an aws module, a cloudposse module or similar these are used instead of home-rolling
• modules have documentation and examples
• modules have terratests
• module code avoids code smells like ternaries, excessive remote state lookups
• modules avoid using shell providers as much as possible
• modules avoid reading or writing files at local or remote locations for the purpose of getting or creating effectively hard-coded information to then be used in later logic
• modules are versioned and a versions file is used to pin modules
• expose important outputs
• limited use of custom scripts
• modules follow a universally agreed-upon naming convention
• modules are integrated with environment specific code and do not rely on lookups, etc to figure out what environment specific values to get
• modules are not too specific; e.g. instead of a databricks-s3-encrypted-with-kms-and-object-replication module there should be databricks-component-a, databricks-component-b, …, kms-cm-key, and s3 modules, and all of these should be used from the TF registry via cloudposse, aws, or similar well-known publishers
• the root module should only call modules
• aws account numbers should be looked up, not hardcoded in tf files
i would add one, avoid using depends_on if at all possible, and make a special effort to avoid module-level depends_on (as opposed to resource-level depends_on). always prefer passing attributes instead, which terraform will use to construct the graph
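A minimal illustration of the "pass attributes instead" advice (resource and module names are invented):

module "queue" {
  source = "./modules/queue"
}

module "worker" {
  source = "./modules/worker"

  # referencing the output creates an implicit dependency edge in the graph,
  # so the worker is created after the queue without any depends_on
  queue_arn = module.queue.arn
}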
“the root module should only call modules”? What is a “root module”?
“A versions file is used to pin modules” Do you mean pinning providers?
I agree with most of the rest, but the list feels a bit “write clean code where possible, we won’t explain why these dot points lead to clean code or why clean code is good tho”
@Alex Jurkiewicz https://www.terraform.io/language/modules#the-root-module
Modules are containers for multiple resources that are used together in a configuration. Find resources for using, developing, and publishing modules.
I meant to say modules should be pinned in source references
i consider a “root module” to be one that owns the backend config, state, the lock file, provider block configurations, and the config inputs
basically a “module” that you have designed explicitly to support directly running the init/plan/apply/destroy workflow for one or more configurations
Hi team — hoping to get some eyes on this when someone has the time: https://github.com/cloudposse/terraform-datadog-platform/pull/71
what
• lookup function did not pull the correct value required for thresholds, and instead went to the default.
• This resulted in an error when creating an SLO of type monitor
when using more than one threshold.
why
• We are creating all of our metrics, monitors, SLOs, etc with IaC, using cloud posse’s modules (thanks!)
please post to #pr-reviews
Does the free edition of Terraform Cloud still require that each workspace hardcode AWS credentials? Or can you set up an IAM role that it can assume?
In the free version you can configure the workspace to use API mode which will then make TF cloud just a state holder. In API mode, you define the workflow and provide the hardware to run the plans. E.g. you could run it in GitHub actions with GitHub runners. This then allows you to decide how you want to provide credentials. A role on the runners? GitHub secrets configured in the pipeline that then assumes a role? Basically you have full control.
You’ll also need to set local execution mode.
@Fizz just confirming my understanding.
in that mode though, there are zero audit trails, no confirmations, and nothing represented in TFC, right? It’s only serving as the state backend (a glorified s3 bucket). To your point, you could then run terraform in conventional CI/CD, but TFC is providing no other benefit than state management.
Yes. In the paid version, you can have runners on your own infra managed by tf cloud. There you can attach a role to your runner (assuming you are on AWS)
I just find it odd that they don't support the more native integration model where you provision an IAM role that trusts their principal and allow them to assume the role. This is how free/entry-level plans of Datadog and Spacelift work. Presumably others as well.
Yep. Cross account role that can be assumed by a user, or role, in their account would be a nice feature.
It might be a deliberate omission though. I’ve heard on the paid plan they charge $50 per apply. So it seems like they really want to encourage you to run on your own hardware.
I’ve just set this up using OIDC providers in each account (deployed via stacksets).. then it’s just a matter of exposing the TFC_WORKLOAD_IDENTITY_TOKEN environment variable (i use the Epp0/environment provider) and bang.. multi-account TFC deployments using JWT
2022-09-11
2022-09-12
Hey, are you using Checkov/tfsec/KICS in CI (GitHub Actions for example)? I just wanted to ask; I just discovered https://github.com/security-alert/security-alert/tree/master/packages/sarif-to-comment/, which can effectively convert SARIF to a GH comment… But it's not working correctly, because all these tools pre-download modules and analyze them with the given input on the filesystem. So it can generate comments, but it will generate diff URLs based on the local path, instead of just pointing to the correct “upstream” module called from main.tf. Ideas?
Does anyone know why I’m getting this error? An argument named "iam_role_additional_policies" is not expected here.
In the Terraform site, it shows that this should be under the module eks section.
I’m happy to take a look, I don’t think I have enough context to do anything but a google search.
I tried to configure the following:
create_iam_role          = true
iam_role_name            = "eks-manage-nodegroup-shlomo-tf"
iam_role_use_name_prefix = false
iam_role_description     = "Self managed node group role"
iam_role_tags = {
  Purpose = "Protector of the kubelet"
}
iam_role_additional_policies = [
  "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  "arn:aws:iam::810711266228:policy/SecretsManager-CurrentValueROAccess",
  "arn:aws:iam::810711266228:policy/SecretsManager-CurrentValueROAccess"
]
https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=inputs
Thank you for the help
2022-09-13
Is it somehow possible to test the GitHub Actions pipelines of the modules locally or within the fork? I'm having some trouble passing all the pipeline steps.
@Tommy yes, answer is act
act
is awesome! Though, in most cases, for me it ended up being slower than just pushing and letting github handle it. I store logs as artifacts so I can troubleshoot better
thank you, I will take a look!
and watch out, you can do things in ACT that do not work in the actual github actions runners
I know some members on the team have tried it a couple times and given up because they didn't get any further. They'd get it working in ACT, then it wouldn't work in the runners, and vice versa.
2022-09-14
v1.3.0-rc1 1.3.0 (Unreleased) NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) # an…
Hypothetical reasons to arrest an actual Terraform founder in this thread please
For example, South Korea court reportedly issues arrest warrant for Terraform founder for AWS Provider v3 rollout.
South Korea court reportedly issues arrest warrant for Terraform founder for charges that cannot be determined until apply
South Korea court reportedly issues an arrest warrant for Terraform founder for abusing local exec’s to manipulate the stock price.
Hi Team, can someone help me with creating an IAM user in Terraform by passing a variable from a values.yml file?
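One way to do this, as a minimal sketch (the file name and key are assumptions):

locals {
  # parse the YAML file into a Terraform object
  values = yamldecode(file("${path.module}/values.yml"))
}

resource "aws_iam_user" "this" {
  name = local.values.iam_user_name
}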
2022-09-15
Has anyone tried using any of the existing EKS related TF modules to deploy a Windows EKS node group for a cluster?
@andylamp @Jeremy G (Cloud Posse) Do either of you know if the cloudposse/eks-workers/aws
module should be able to accomplish this and set the self-managed node group similar to the Linux managed node group?
@Jeremy (UnderGrid Network Services) I have never worked with a Windows EKS node group, and do not know the specifics, but I would expect cloudposse/eks-workers/aws
module should be able to launch Windows nodes by selecting the appropriate AMI via eks_worker_ami_name_filter
and eks_worker_ami_name_regex
or image_id
I thought as much. The hangup I’ve found with the eks-workers module is that it doesn’t allow me to override the user data which is obviously going to be different for windows than linux
With eks-node-group you can provide user data base64-encoded and it overrides the default, I believe.
Specifying userdata is not a requirement to launch a node; EKS supplies appropriate defaults.
Not in the case of Windows eks nodes
I don’t see anything in the AWS documentation about setting userdata. Please educate me.
This topic describes how to launch Auto Scaling groups of Windows nodes that register with your Amazon EKS cluster.
if you read through you find the cloudformation template they have (https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-windows-nodegroup.yaml) and it has a user data block that it includes in the launch template that the ASG calls
even the eks-node-group
module has a user data template for Linux managed node groups but the module has the userdata_override_base64
variable if you want to override the default. eks-workers
doesn’t have any similar mechanism and the userdata.tpl
is Linux specific
OK, I admit that I’m not completely following because TBH I believe you and don’t want to spend the time to learn it right now. Short story is that if you want to duplicate the relevant inputs from eks-workers
in eks-node-group
in a PR I will approve it.
okay… I’ll work on a PR and test it… I know this is likely a bit of a niche situation as I have our dev team asking to include a Windows EKS node group to our cluster so they can work on moving some of the application(s) that runs on Windows into EKS and off EC2
Hey @Jeremy (UnderGrid Network Services) do you have any progress or tips on this?
@johnny I have been a bit sidetracked lately wearing my firefighter hat so haven’t made the progress I wanted on it. The dev I was working with managed to get the Windows node up and running via click-ops after I’d stood up the Linux node group via TF but I haven’t gotten his steps into my TF yet.
@Jeremy (UnderGrid Network Services) That’s fair. Do you happen to know what the userdata should look like for getting the nodes into the cluster? …I’m not sure if that’s how it works but I think I’m almost to that point. I believe I should have the nodes launching soon but not sure what happens after they go up given the userdata is not windows based.
@johnny there is a user-data block passed to the node instance to enable joining the domain, there’s also the aws-auth ConfigMap change required as well to allow the nodes to join. I don’t know the specifics yet but the dev also reported they had trouble getting the Ingress to work initially but worked it out. I still need to determine what his steps to resolve that were
Anyone have an idea on which module I need to update this variable in?
module.tf_cloud_builder.module.bucket.google_storage_bucket.bucket: Destroying… [id=]
╷
│ Error: Error trying to delete bucket containing objects without force_destroy set to true
I’d start by looking at source to whatever module is used for tf_cloud_builder
as it appears to be calling the bucket
module that is creating it so may be a variable being passed along
Thanks! I started down that path but need to check again
The more you work with it, the easier it gets to read the state paths and trace things.
Is there a way to push a variable from the root module down to sub modules?
pass as variables to the module and you get outputs from the module
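A minimal sketch of that pattern (module path and variable name are invented):

# modules/network/variables.tf: the child module declares the input
variable "environment" {
  type = string
}

# root module: the value is pushed down when the module is called
module "network" {
  source      = "./modules/network"
  environment = var.environment
}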
does the code need to be re-initialized when you update variables in a module?
if the source to module is relative directory based (eg - source = ‘../modules/x’) then no, but if it’s being pulled from a repo or registry yes you will
That’s what I cannot figure out for some reason
looking at the root module I don’t see any calls to the error module
however when I look in the .terraform folder that get created I see many module directories
yes the terraform init
process generates the module directories under the .terraform directory
if your .tf code has something like:
module "blah" {
source = "../modules/my_module"
...
}
then the terraform init
does not need to be done when the code under ../modules/my_module
is updated or changed.
However if it has something like:
module "blah" {
source = "cloudposse/label/null"
...
}
or any other source that pulls from a Git repo or Terraform registry it does
I see the folder for tf_cloud_builder in .terraform directory
However I don’t see a folder for module bucket
you should not manipulate the .terraform directory manually… assuming it doesn't exist is safest
where is your module "tf_cloud_builder" { ... }
in your working directory
there is not one
This is the code I executed: https://github.com/terraform-google-modules/terraform-example-foundation/tree/master/0-bootstrap
I copied the root folder terraform-example-foundation to my local machine. I changed directories into the 0-bootstrap folder and ran the appropriate TF commands
It created the resources
Now when I’m trying to delete them is where the problem comes into play
it's in cb.tf as
module "tf_cloud_builder" {
source = "terraform-google-modules/bootstrap/google//modules/tf_cloudbuild_builder"
version = "~> 6.2"
project_id = module.tf_source.cloudbuild_project_id
dockerfile_repo_uri = module.tf_source.csr_repos[local.cloudbuilder_repo].url
gar_repo_location = var.default_region
workflow_region = var.default_region
terraform_version = local.terraform_version
cb_logs_bucket_force_destroy = var.bucket_force_destroy
}
The variable var.bucket_force_destroy is not being picked up by terraform destroy
if you’re just trying to perform a terraform destroy
and it’s complaining about not being able to delete the bucket because it is not empty then can you not go into the bucket and delete the objects stored inside it
I did that as well
still complaining
actually I think I may have found it… as I expected the variable is exposed
bucket_force_destroy = true
needs to be added to your tfvars
I know tfvars exposes variables you define in it, but if a variable is not defined in tfvars, does TF look at variables.tf at all?
it defaults to false
.. you see it passes var.bucket_force_destroy
as cb_logs_bucket_force_destroy
to the tf_cloud_builder
module which then passes it along to the bucket
module that it calls
if you don't define the variable in tfvars then it gets the default value assigned in variables.tf
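In other words, a tiny illustration (file contents shortened):

# variables.tf in the root module: the default, used when nothing else sets the value
variable "bucket_force_destroy" {
  type    = bool
  default = false
}

# terraform.tfvars: overrides the default without editing the module's code
bucket_force_destroy = true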
correct and I updated the variables.tf to be true
In theory, after doing that, shouldn't that have corrected the problem?
that's not the ideal way to do it when you're using someone else's module
understood, but just asking for better understanding as I’m still learning TF
now TF is complaining that the root module does not declare a variable named buckets_force_destroy
you mis-spelled it… it’s not plural bucket_force_destroy
fixed that, but still getting the same error from above
module.tf_cloud_builder.module.bucket.google_storage_bucket.bucket: Destroying… [id=tf-cloudbuilder-build-logs-prj-b-cicd]
╷
│ Error: Error trying to delete bucket tf-cloudbuilder-build-logs-prj-b-cicd containing objects without force_destroy set to true
That’s about the extent I can help with them as I don’t use GCE and reading the Terraform repo you gave shows that setting bucket_force_destroy = true
in tfvars passed to it should be passed through to the bucket
module when tf_cloud_builder
calls it in https://github.com/terraform-google-modules/terraform-google-bootstrap/blob/master/modules/tf_cloudbuild_builder/cb.tf#L96
module "bucket" {
https://github.com/terraform-google-modules/terraform-example-foundation/blob/master/0-bootstrap/cb.tf#L102 is where tf_cloud_builder
is called and passes the bucket_force_destroy
variable value
cb_logs_bucket_force_destroy = var.bucket_force_destroy
What is the latest and greatest on TF pipelines lately? How do you run multi-tenant/user self-serve infra with feature branches in multi-account, multi-region setups?
Interested to know how the pipeline is set up, how the input variables are passed in, and what the user flow is.
2022-09-16
Hey Team, does anyone know why account_id is not part of cloudposse/terraform-cloudflare-zone module?
resource "cloudflare_zone" "example" {
account_id = "f037e56e89293a057740de681ac9abbe"
zone = "example.com"
}
How can the account_id help in that module ?
I believe the account_id
is now implicit in the cloudflare
provider itself
so it should be optional to set the account_id
.
@Angela Zhu do you have a requirement to set an explicit account_id to each module.cloudflare_zone
instantiation ?
Hey RB, thanks for the quick response. Embedding account_id into the provider has been deprecated; it suggests using resource-specific account_id attributes instead.
I do have a requirement to set account_id in each zone
I don’t think you need to set the account_id
in either the cloudflare
provider or in any of the cloudflare terraform resources anymore.
I do have a requirement to set account_id in each zone
may I ask why you need to set this optional argument ?
The situation I’m in right now is that I’m migrating from using cloudflare/cloudflare module into using cloudposse/terraform-cloudflare-zone. After I import resources, everything works except that it’s flagging account_id ~> from whatever to null. I can’t confidently push this code because I can’t find documentation on what happens when this is removed. Would it impact member or access_group? It seems to me every zone should have an account_id and zone_id.
In their documentation, in only 1 place they mentioned It’s required that an account_id
or zone_id
is provided and in most cases using either is fine.
Everywhere else they just say this is optional
except that it’s flagging account_id ~
from whatever to null.
this should be OK but if you are uncomfortable, feel free to put in a PR to add an optional account_id
with a default value of null
I’m testing it in a lower environment right now. I might push a PR for this change. Thanks @RB
2022-09-19
i have a for_each
for an EKS_node_group resource like below:
resource "aws_eks_node_group" "nodegroup" {
for_each = var.nodegroups
...
how do i ignore all scaling configs for all of the keys?
lifecycle {
  create_before_destroy = true
  ignore_changes        = [scaling_config[0].desired_size]
}
Currently I have the above; am I right in thinking this will only affect the first loop?
Hi! Hopefully I can get some direction on my issue.
I am trying to use this module to create an AWS client VPN endpoint, and running into an issue. I cannot avoid getting this error:
│ Error: "name" isn't a valid log group name (alphanumeric characters, underscores, hyphens, slashes, hash signs and dots are allowed): ""
│
│ with module.ec2_client_vpn.module.cloudwatch_log.aws_cloudwatch_log_group.default[0],
│ on .terraform/modules/ec2_client_vpn.cloudwatch_log/main.tf line 17, in resource "aws_cloudwatch_log_group" "default":
│ 17: name = module.log_group_label.id
I have been able to prove something is wrong with this module: if I modify the above-referenced line in that file with a name directly, it works. And I am very confused about how this is working.
FWIW I have set logging_stream_name
with a value, but this always gives me this validation error.
I have tried names with and without slashes, dashes, and any other allowed chars outside alphanumeric values.
Any help is greatly appreciated. I’m pretty much at the point I will need to abandon this module usage as a result of this problem.
Can you share how you’re instantiating the module?
Yeah sure!
module "ec2_client_vpn" {
source = "cloudposse/ec2-client-vpn/aws"
ca_common_name = "vpn.mycompany.com"
root_common_name = "vpn-client.mycompany.com"
server_common_name = "vpn-server.mycompany.com"
client_cidr = "10.5.4.0/22"
vpc_id = data.aws_vpcs.mycompany-vpc.ids[0]
organization_name = "mycompany"
name = "client_vpn"
logging_enabled = true
logging_stream_name = "client-vpn/aws-sso-enabled"
id_length_limit = 0
retention_in_days = 90
associated_subnets = ["subnet-idididid"]
self_service_portal_enabled = true
authentication_type = "federated-authentication"
split_tunnel = true
self_service_saml_provider_arn = "arn:aws:iam::ACCTNUMBER:saml-provider/AWSSSOROLE"
authorization_rules = [
{
name = "grant_all"
authorize_all_groups = true
description = "Grants all groups access to the full network"
target_network_cidr = "10.0.0.0/8"
}
]
additional_routes = [
{
destination_cidr_block = "10.0.0.0/8"
description = "Local traffic Route"
target_vpc_subnet_id = "subnet-idididid"
}
]
}
https://github.com/cloudposse/terraform-aws-cloudwatch-logs/blob/master/main.tf#L17
If I edit this line in the .terraform
folder, after init and put just my log stream name, it will give me a working plan output.
name = module.log_group_label.id
Sorry, updated to the specific submodule.
I had a quick look.
I think the issue is the log group not the stream. Most of these modules assume use of context.tf so in this case, module "log_group_label"
has nothing set. You can set variables namespace
, stage
, name
etc or you can use context.tf or the null-label module in your own project and set them there, then pass the reference into module "ec2_client_vpn"
via the context
variable.
The example shows the former.
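For reference, the second option (null-label in your own project, passed in via context) would look roughly like this (names and version are placeholders):

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace = "mycompany"
  stage     = "prod"
  name      = "client-vpn"
}

module "ec2_client_vpn" {
  source = "cloudposse/ec2-client-vpn/aws"
  # ...the inputs shown above...

  # pass the label's context so nested labels (like the log group) get real names
  context = module.label.context
}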
Ah I’ll try that tomorrow morning. Thank you!
No worries, let us know how you go!
Joe, that worked! Thank you so much!
Hi all, I want to redirect https://example1.example.com to https://example.com/example1 in nginx. If anyone is familiar with nginx, please help me solve this problem.
what does this have to do with Terraform? Try #sre
but this question seems like something you can solve by googling
Ok
2022-09-21
v1.3.0 1.3.0 (September 21, 2022) NEW FEATURES:
Optional attributes for object type constraints: When declaring an input variable whose type constraint includes an object type, you can now declare individual attributes as optional, and specify a default value to use if the caller doesn’t set it. For example: variable “with_optional_attribute” { type = object({ a = string # a required attribute b = optional(string) # an optional attribute c = optional(number, 127) #…
I have multiple databases in one DB instance; how can I back up particular databases in AWS? I am using Aurora MySQL.
Hi Folks, I'm experiencing what feels like a fun bug with the Cloudposse Datadog-Lambda-Forwarder Module. For my use case, I'm deploying it to all of our accounts in a centralized workspace using provider blocks. Calling the module multiple times produces an error that calling it a single time does not. Error details and a minimally reproducible code example in :thread:. (Resolved by depends_on.)
Error Message:
Error: External Program Execution Failed
with module.datadog_staging_lambda_forwarder.module.forwarder_log_artifact[0].data.external.git[0]
on .terraform/modules/datadog_staging_lambda_forwarder.forwarder_log_artifact/main.tf line 9, in data "external" "git":
program = ["git", "-C", var.module_path, "log", "-n", "1", "--pretty=format:{\"ref\": \"%H\"}"]
The data source received an unexpected error while attempting to execute the program.
Program: /usr/bin/git
Error Message: fatal: not a git repository (or any parent up to mount point /home/tfc-agent)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
State: exit status 128
This should be referencing this line in this module which is called here in the main module.
Minimal code example:
module "datadog_prod_lambda_forwarder" {
source = "cloudposse/datadog-lambda-forwarder/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "0.12.0"
forwarder_log_enabled = true
cloudwatch_forwarder_log_groups = {
some_group = {
name = "<path to a log group>",
filter_pattern = ""
},
some_other_group = {
name = "<path to a log group>"
filter_pattern = ""
}
}
dd_api_key_source = var.prod_dd_api_key_source
dd_tags = []
providers = {
aws = aws.prod
}
}
module "datadog_staging_lambda_forwarder" {
source = "cloudposse/datadog-lambda-forwarder/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "0.12.0"
forwarder_log_enabled = true
cloudwatch_forwarder_log_groups = {
some_group = {
name = "<path to a log group>",
filter_pattern = ""
},
some_other_group = {
name = "<path to a log group>"
filter_pattern = ""
}
}
dd_api_key_source = var.staging_dd_api_key_source
dd_tags = []
providers = {
aws = aws.staging
}
}
provider "aws" {
region = "us-west-2"
alias = "prod"
assume_role {
role_arn = var.prod_role_arn
session_name = "Terraform"
external_id = var.prod_aws_external_id
}
access_key = var.prod_aws_access_key
secret_key = var.prod_aws_secret_key
}
provider "aws" {
region = "us-west-2"
alias = "staging"
assume_role {
role_arn = var.staging_role_arn
session_name = "Terraform"
external_id = var.staging_aws_external_id
}
access_key = var.staging_aws_access_key
secret_key = var.staging_aws_secret_key
}
The provider should work without assume_role if you use access/secret keys for the specific accounts; I kept it as close to my implementation as possible just on the outside chance this is related (although I doubt it).
And to note: I can get any of the modules to work if I comment out the others, I’ve attempted it with 1, 2, and 3 modules. With 1, it works (no matter which), with 2, one will fail, with 3, two will fail. I haven’t tested it with 4+, but I think it’s reasonable to assume it will be n-1 failures.
Oh, and: this is executed via terraform cloud, if that makes a big difference.
That’s pretty odd, taking a look now
It’s possible that this might be related to the -C
flag in the git command, and if it’s run multiple times. From the git documentation:
-C <path>
Run as if git was started in <path> instead of the current working directory. When multiple `-C` options are given, each subsequent non-absolute `-C <path>` is interpreted relative to the preceding `-C <path>`. If <path> is present but empty, e.g. `-C ""`, then the current working directory is left unchanged. This option affects options that expect path name like `--git-dir` and `--work-tree` in that their interpretations of the path names would be made relative to the working directory caused by the `-C` option. For example the following invocations are equivalent: git --git-dir=a.git --work-tree=b -C c status git --git-dir=c/a.git --work-tree=c/b status
I'm frankly not sure if running (with var.module_path collapsed to ${path.module} per this line) git -C ${path.module} log -n 1 --pretty=format:{"ref": "%H"} (or the properly escaped equivalent) multiple times would essentially stack deeper and deeper and be problematic, or if this is otherwise potentially related to path.module and the Terraform warning: "We do not recommend using path.module in write operations because it can produce different behavior depending on whether you use remote or local module sources. Multiple invocations of local modules use the same source directory, overwriting the data in path.module during each call. This can lead to race conditions and unexpected results." If that's the case, it's possible I may be able to avoid this by using depends_on to ensure each module fully completes before the next one attempts to run. I'm going to give that a try right now.
Yep, using depends_on to ensure each module finishes before the next starts resolved the issue. It's likely related to path.module.
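For reference, the workaround was roughly this (module names from the example above):

module "datadog_staging_lambda_forwarder" {
  source  = "cloudposse/datadog-lambda-forwarder/aws"
  version = "0.12.0"
  # ...same inputs as before...

  # serialize the two module calls so their local git data sources don't run concurrently
  depends_on = [module.datadog_prod_lambda_forwarder]
}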
Gotcha, glad that unblocked you, I’ll add this to our notes, I know we’ve been seeing some more git -C
issues recently, maybe theres a way to avoid it or clean it up
Unfortunately while running the apply (rather than just plan) this morning, it came back. depends_on
appears to resolve the plan-time error, but they don’t run properly.
Looks like s3 bucket replication of existing objects is not currently supported by latest AWS provider (4.31).
So my best option seems to be to first run terraform apply to put new-object replication in place for desired buckets, then run a Batch Replication job from CLI using aws s3control create-job ...
on each bucket (since I have a lot of buckets to replicate existing objects, and replication jobs require a replication config to already exist).
But then it is easy to forget to run that script after terraform apply, so better:
• Add a local-exec
provisioner to the bucket replication config resource in my tf code, with when=create
. But this would get skipped for buckets that already have replication config (ie already created).
• Better add that provisioner to a null_resource
that is enabled only if a variable is set to true (and no when
set). I would set it to true, apply, set it to false, push.
Any considerations I might be forgetting?
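A rough sketch of the second option (the variable, resource names, and wrapper script are all placeholders; the real aws s3control create-job call needs account, role, manifest, and report arguments):

variable "run_batch_replication" {
  type    = bool
  default = false
}

resource "null_resource" "batch_replication" {
  # only created while the toggle is true; set it back to false after the one-off apply
  count = var.run_batch_replication ? 1 : 0

  # kick off an S3 Batch Replication job for existing objects
  provisioner "local-exec" {
    command = "./scripts/replicate-existing-objects.sh ${aws_s3_bucket.this.id}"
  }

  depends_on = [aws_s3_bucket_replication_configuration.this]
}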
I just enabled replication through terraform, and used the Batch jobs to replicate the existing objects initially. After that the replication rule is resuming as expected. But I only had to do that for 10 S3 buckets so the initial manual step was not that time consuming for me.
Anyone looked at updating the terraform-aws-elasticsearch module to support OpenSearch or creating a new module for it?
Hey all, I’m trying to set up a new AWS organization and accounts with the terraform-aws-components/account
module but running into an odd issue on the atmos terraform plan
:
│ Error: error reading Organizations Policy (p-9tkedynp): AWSOrganizationsNotInUseException: Your account is not a member of an organization.
│
│ with module.organizational_units_service_control_policies["platform"].aws_organizations_policy.this[0],
│ on .terraform/modules/organizational_units_service_control_policies/main.tf line 37, in resource "aws_organizations_policy" "this":
│ 37: resource "aws_organizations_policy" "this" {
Yeah I’m not a member of an organization, my impression is the account module is supposed to create the organization no? (Resolved by terraform clean)
This is my component in the atmos stack:
components:
  terraform:
    account:
      vars:
        enabled: true
        account_email_format: aws+%[email protected]
        account_iam_user_access_to_billing: DENY
        organization_enabled: true
        aws_service_access_principals:
          - cloudtrail.amazonaws.com
          - guardduty.amazonaws.com
          - ipam.amazonaws.com
          - securityhub.amazonaws.com
          - servicequotas.amazonaws.com
          - sso.amazonaws.com
          - auditmanager.amazonaws.com
          - ram.amazonaws.com
        enabled_policy_types:
          - SERVICE_CONTROL_POLICY
          - TAG_POLICY
        service_control_policies_config_paths:
          - "../aws-service-control-policies/catalog/organization-policies.yaml"
        organization_config:
          root_account:
            name: core-root
            stage: root
            tenant: core
            tags:
              eks: false
          accounts: []
          organization:
            service_control_policies: []
          organizational_units:
            - name: platform
              accounts:
                - name: platform-dev
                  tenant: platform
                  stage: dev
                  tags:
                    eks: false
                - name: platform-staging
                  tenant: platform
                  stage: staging
                  tags:
                    eks: false
                - name: platform-prod
                  tenant: platform
                  stage: prod
                  tags:
                    eks: false
              service_control_policies:
                - DenyLeavingOrganization
            - name: core
              accounts:
                - name: core-audit
                  tenant: core
                  stage: audit
                  tags:
                    eks: false
                - name: core-data
                  tenant: core
                  stage: data
                  tags:
                    eks: false
                - name: core-dns
                  tenant: core
                  stage: dns
                  tags:
                    eks: false
                - name: core-identity
                  tenant: core
                  stage: identity
                  tags:
                    eks: false
                - name: core-network
                  tenant: core
                  stage: network
                  tags:
                    eks: false
                - name: core-security
                  tenant: core
                  stage: security
                  tags:
                    eks: false
              service_control_policies:
                - DenyLeavingOrganization
This error was magically resolved by terraform clean and deleting the state
2022-09-22
Hey all, is there any tool to convert CloudFormation to Terraform?
Yeah, but i haven’t seen any proper tool
Convert Cloudformation templates to Terraform.
2022-09-23
Is it possible to have a terraform module enforce that the aws provider it inherits is configured to a certain region? (And fail if a provider for a different region is in use)
no, I do not think it is possible
since the provider can be configured via ENV variables
it supports the same AWS ENV variables, so even if you hardcode the region in your module, someone can still set the AWS_REGION var to whatever and work around the hardcoded region
is that what you mean?
or you are asking if you can create resources in your module for another region?
I don’t want to violate the module user’s expectations and operate in a different region to what they asked - just want to let them know “you can only use this module in <X> region”
Ah, looks like configuration_aliases
in required_providers
would essentially enable me to restrict to a given provider alias (named by region), that should suffice
ohhh cool
When you use configuration_aliases
, it acts as though you have n+1 providers, as it assumes configuration_aliases = ["us-west-2"]
is equal to two providers: aws
and aws.us-west-2
. I’ve experienced strange issues when only passing in a single provider to it (providers = { aws.us-west-2=aws }
) and not passing in the aws=...
provider as well.
You may wish to look into using something like data "aws_region" "current" {}
and validate data.aws_region.current.name == myregion
. I haven’t used it in this manner myself though, so you should experiment with both methodologies and see how they work in practice.
Thanks @Julian Olsson. Worked perfectly with a lifecycle postcondition!
The data "aws_region"
variant, I assume? If so, excellent, thanks for letting me know it works, I’ll keep that one in my back pocket for another day.
Yes, exactly. And me too!
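The postcondition variant ends up looking roughly like this (the allowed region is an assumption):

data "aws_region" "current" {
  lifecycle {
    postcondition {
      # fail the plan if the inherited provider is configured for any other region
      condition     = self.name == "eu-west-1"
      error_message = "This module only supports providers configured for eu-west-1."
    }
  }
}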
I have this issue where I cannot run terraform import against a new remote state in a TFE workspace. It's a new workspace and does not have resources yet; I am trying to run an import script before merging a PR for all the TF resources. Any ideas how to solve this?
Acquiring state lock. This may take a few moments...
Failed to persist state: Error uploading state: resource not found
Create the state with one dummy resource, then run your imports?
If runner based, just upload blank TF code and let an empty plan/apply run. Then add the stuff you want to import.
Make sure you’ve got workspace configured in the cloud block.
TFC Cloud pricing question: anyone know the actual price?
I asked a few people and they said there is a cost per state (workspace), per user, and per run?
As usual the website is not very detailed…
Talking Terraform Cloud SaaS, not Enterprise.
I want to confirm whether it is really a per-user-only cost.
If you sign up (free) and look at the usage tab, it’ll give you everything you need to know.
we are on the free tier,
but we need to forecast spend, so we need to have an idea of how to calculate the price
I’m not sure re. the normal cloud one. It may be worthwhile setting up a meeting with them to establish cost estimates.
Basically it’s bespoke depending on your negotiating power. Think like new relic
I just had bad experiences with Hashicorp sales every single time
They’re not super great (cost wise for features you get), I wouldn’t recommend if you are able to use any others.
Yes I think the process is to delay giving you a quote for as long as possible until they can figure out what you can afford
per user
these guys have been useless, to say the least…
they keep asking for a number, like asking “tell me how much money you have, I will charge you that”
what kind of sale tactic is that? I do not know, but I do not like it
Depends on your negotiating power. I had one customer who had to pay $50 per apply when using a tf cloud runner. That goes away when you use your own runner when you use tfcloud solely to manage state and workspace configuration, or you use the tfcloud agent to operate your own fleet of runners on your own hardware.
I think you would want to run your own runners anyway, solely to manage the principal used to run tf better, e.g. aws access key when running in tf cloud vs aws role when running your own runner.
2022-09-24
2022-09-26
I have a stack that will consist of N tfstates. I could easily write an N-line bash script to run tf apply on each one, but I'm wondering if one of terragrunt, terramate, terraspace, or cdktf might have good support for this and aspects of such a design that I might not yet realize.
E.g. N-1 of those states will be completely independent of one another and will depend only on the first module (which is a base layer), so technically they could all be updated in parallel. Does one of these tools support describing the stack in terms of separate states and the dependencies of modules on other modules, so that it could automatically figure out the order of tf applies and run some in parallel?
terragrunt and terramate both handle that scenario. i find it rather hard to parse outputs of either though, when running against multiple stacks in parallel. easy to lose/miss something in review
any reason in particular?
better visibility of the changeset
if you must roll your own CICD automation solution, here is a new tool that attempts to help you figure out the order of operations… https://github.com/contentful-labs/terraform-diff
Always know where you need to run Terraform plan & apply!
if you’re using github and github actions, there’s also tfcmt to post plan results back to github pull requests… https://github.com/suzuki-shunsuke/tfcmt
Fork of mercari/tfnotify. tfcmt enhances tfnotify in many ways, including Terraform >= v0.15 support and advanced formatting options
Thanks @loren for the suggestions
2022-09-27
Just got to say, as someone new to terraform trying to build infrastructure quickly for a new venture, cloudposse terraform modules rule, wow. Thanks
Has cloudposse developed any module/components for AWS ipam? I’m looking into using IPAM instead of working out all the IP blocks in a spreadsheet
We don’t but we’ve created a root terraform module (component) that wrapped this module
Terraform Module for create AWS IPAM Resources
Thanks
2022-09-28
Has anyone setup centralized egress for all your VPCs through the network account, via an NAT gateway, using cloudposse terraform-aws-components? I’m using transit gateway but it looks like that would require a lot of changes to the tgw components’ route configs.
we’re building out this architecture right now for another customer
out of curiosity, why would you want to do this?
Mostly centralized control/monitoring of traffic, and cost. https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-egress-to-internet.html https://aws.amazon.com/blogs/networking-and-content-delivery/creating-a-single-internet-exit-point-from-multiple-vpcs-using-aws-transit-gateway/
As you deploy applications in your Landing Zone, many apps will require outbound only internet access (for example, downloading libraries, patches, or OS updates).
In this post, we show you how to centralize outbound internet traffic from many VPCs without compromising VPC isolation. Using AWS Transit Gateway, you can configure a single VPC with multiple NAT gateways to consolidate outbound traffic for numerous VPCs. At the same time, you can use multiple route tables within the transit gateway to […]
Any recs on apps for detecting drift in Terraform if you are NOT on Terraform Cloud? Every place I've worked we have always had an internally developed custom app. I really don't want to have to write another one for my current gig.
driftctl?
Interesting…ill check it out. Thx
Just an FYI for anyone else that pokes their head in this thread: it's found at docs.driftctl.com, not just driftctl.com <– that takes you to a WordPress login
Any recommendations for a good guide on deploying cloudposse modules into own projects?
v1.3.1 1.3.1 (September 28, 2022) NOTE: On darwin/amd64 and darwin/arm64 architectures, terraform binaries are now built with CGO enabled. This should not have any user-facing impact, except in cases where the pure Go DNS resolver causes problems on recent versions of macOS: using CGO may mitigate these issues. Please see the upstream bug…
Hey everyone, I have a question regarding terraform-null-label: I get how to use it as a module. But do I also include the context.tf in my own files if I'm writing a module myself (which I do all the time because everything in Terraform is a module)? Basically replicating what Cloud Posse is doing within their own modules.
context.tf has all the context variables used by the label module (and other things)
if you include it, you don’t have to provide all those variables
our pattern is to always include context.tf and not think about those common vars that are used by all modules and components
Okay, that helps. Thanks
Is there anyone out there interested in upgrading TF 0.12 to something more current..
We just upgraded some of our Terraform workspaces/configs to 0.13. From there on, upgrading to further versions was fairly easy (no major syntax changes). Any questions in particular?
Hello Team,
How can I remove a resource created using cloudposse/vpc-peering-multi-account/aws? We don't need VPC peering. What is the best way to do it?
Because if I delete it and then plan and apply, it fails,
and if I set enable = false then an authorization issue comes up.
@Nitin Could you create an issue for this on the module?
For now you could do a targeted destroy
terraform destroy -target module.peering