#terraform (2021-06)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-06-01
I forget who else was looking for this, but the new aws provider release has support for aws amplify… https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.43.0
Me. For way too long. Found out about this a week or two back — Stoked it finally shipped
Ditto. We gave up on it long ago and moved to self-managed S3 + CloudFront
Sucks that there’s no integration for env vars with Param Store / Secrets Manager (in Amplify), so moving to terraform would mean committing some tokens to source…
@Michael Warkentin Use sops + the sops provider. Better way of dealing with secrets than PStore or Secrets Manager IMO.
Simple and flexible tool for managing secrets. Contribute to mozilla/sops development by creating an account on GitHub.
A Terraform provider for reading Mozilla sops files - carlpett/terraform-provider-sops
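A minimal sketch of that pattern with the carlpett/sops provider (the file name and secret key are placeholders):
terraform {
  required_providers {
    sops = {
      source = "carlpett/sops"
    }
  }
}

data "sops_file" "secrets" {
  source_file = "secrets.enc.yaml" # encrypted with sops
}

# individual values come out of the decrypted data map,
# e.g. data.sops_file.secrets.data["amplify_token"]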
@Matt Gowie will you be joining us for #office-hours today?
Yeah @Erik Osterman (Cloud Posse) — I’ll be on.
Does anyone know of a good terraform module for creating an S3 bucket set up to host a static site?
Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website
oh perfect, thanks @Andriy Knysh (Cloud Posse)
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
see the examples folders
For those of you using terraform static analysis and plan verification tools (Sentinel, Snyk, tfsec, checkov, etc.), it would be great to hear your thoughts on what features are missing or what approach you see working/not working. Do you see this as something that should be coupled with the PR process, CI/CD, a TACOS platform, all of the above, or something else entirely? In full transparency, I’m the founder of https://soluble.ai which integrates a variety of static analysis tools into a (free) GitHub App. But the question is just honest discovery, useful to all. Curious what you all think and what your experiences have been.
Secure your cloud infrastructure – Infrastructure as Code (IaC) – Terraform, CloudFormation, Kubernetes
hello Rob, I am using tfsec as a pre-commit hook to validate my terraform code before pushing to Azure. I didn’t integrate it into CI/CD “yet” but will do it
Curious if it is just you authoring, one team, many teams? Does tfsec do about what you need? Are you writing a lot of custom policy?
one team, but I have split the code across several repos. The default tfsec rules fit my needs, and yes, I am authoring mainly for our infra team. For now tfsec is just taken as a warning and will not block the CI/CD; when it does become blocking, it should live in the CI/CD pipeline
2021-06-02
Does anyone have an example on how to use the “kubelet_additional_options” variable for the terraform-aws-eks-node-group module? I am testing it like this without any luck so far. Thanks
kubelet_additional_options = "--allowed-unsafe-sysctls=net.core.somaxconn,net.ipv4.ip_local_port_range=1024 65000"
v0.15.5 0.15.5 (June 02, 2021) BUG FIXES: terraform plan and terraform apply: Don’t show “Objects have changed” notification when the detected changes are only internal details related to legacy SDK quirks. (#28796) core: Prevent crash during planning when encountering a deposed instance that has been removed from the configuration. (…
hi guys, anyone have an idea how to resolve this error in our cloudflare terraform module? I first thought I should set the attribute paused: true, but it still does not seem to work. Please help
➜ staging git:(BTA-6363-Create-a-terraform-code-base-for-cloudflare) ✗ terraform plan
Acquiring state lock. This may take a few moments...
Error: Unsupported attribute
on ../../cloudflare/modules/firewall.tf line 6, in locals:
6: md5(rule.expression),
This object does not have an attribute named "expression".
@emem I’m not familiar with cloudflare resources but I’m wondering, what is the resource/variable/object/etc. named rule? Seems as though you are not referencing it correctly… :thinking_face:
Is rule a value that you are creating, or is this from a third-party module you are using?
thanks @managedkaos was able to find the issue
no problem! glad you worked it out
have you encountered this before?
nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform
i’m hitting a problem with the way some of our modules are designed now that we’re starting to switch to AWS SSO for auth. we use data "aws_caller_identity" "current" {} a bit to get the current account id rather than having to pass it in. unfortunately when using SSO it looks like this is the root account rather than the account you’re applying against. does anyone have an easy way around this or do i need to go on an adventure?
Something isn’t right. That should return the respective account’s id. I use it all the time. I also use the aws-cli implementation of the same command all the time to check the current account.
aws sts get-caller-identity --profile dev
aws sts get-caller-identity --profile prod
do you use AWS SSO?
Here is a quick test…
provider "aws" {
region = "us-east-1"
profile = "sandbox"
}
data "aws_caller_identity" "current" {}
output "account_id" {
value = data.aws_caller_identity.current.account_id
}
output "id" {
value = data.aws_caller_identity.current.id
}
Yes. I have for 2-3 years.
AWS SSO (not old school SSO via IAM).
yeh ok - i’ll do some more digging then
[profile default]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 000000000000
sso_role_name = AdministratorAccess
region = us-east-1
[profile sandbox]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 000000000000
sso_role_name = AdministratorAccess
region = us-east-1
[profile dev]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 111111111111
sso_role_name = AdministratorAccess
region = us-east-1
[profile prod]
sso_start_url = <https://yourcompany.awsapps.com/start>
sso_region = us-east-1
sso_account_id = 222222222222
sso_role_name = AdministratorAccess
region = us-east-1
# sso login (using default profile)
aws sso login
# now have access to all profiles despite only logging in to the default profile
aws sts get-caller-identity
aws sts get-caller-identity --profile dev
aws sts get-caller-identity --profile prod
you are correct - i was looking at this at 11pm last night and came to the wrong conclusion when i saw something change
it was another issue
thanks for diving so deep on this to help me out
Np
2021-06-03
hi guys, has anyone gotten around to resolving this terraform import issue before?
nil entry in ImportState results. This is always a bug with
the resource that is being imported. Please report this as
a bug to Terraform
guess this might be the right place to put this, got a contribution PR that should now be ready for review: https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/22
what Added support for the incoming (AWS provider 3.43.x) SASL/IAM auth method. why Allows access control to an MSK cluster via IAM instead of requiring SCRAM secret management. references AWS…
Hello,
when using terraform cloud, how do you provide terraform init arguments? I didn’t find a way to do it.
I am used to providing variables to connect to the remote state like this:
terraform init -reconfigure -backend-config="login=$TF_VAR_login" ...
Hey there, you should be able to pass CLI args using the TF_CLI_ARGS as a variable: https://www.terraform.io/docs/cli/config/environment-variables.html#tf_cli_args-and-tf_cli_args_name
Terraform uses environment variables to configure various aspects of its behavior.
I’m advising a customer not to use TF Cloud. The business plan costs an arm and a leg
@msharma24 We’d love for you and your customers to check out our pricing models at env0 if the TFC quotes have your head spinning
Disclaimer: I’m the DevOps Advocate at env0
Note: The Enterprise tier pricing isn’t listed because these are 100% customized agreements from top to bottom, so we don’t know what one looks like until we spec out what is needed.
Have you seen something like this where you know there are changes (made manually in the console), terraform knows there are changes, and yet there is no plan to revert the changes?
Note: Objects have changed outside of Terraform
Terraform detected the following changes made outside of Terraform since the last "terraform apply":
Unless you have made equivalent changes to your configuration, or ignored the relevant attributes using ignore_changes, the following
plan may include actions to undo or respond to these changes.
──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
No changes. Your infrastructure matches the configuration.
They’ve been tracking that, I think… Patched some instances in 0.15.5, but sounds like there are still some occasions… https://github.com/hashicorp/terraform/issues/28776
I have a configuration I've just updated to 0.15.4 and now terraform plan/apply always reports the following: Note: Objects have changed outside of Terraform Terraform detected the following ch…
my plans got worse after 0.15.5
Hello guys, is anyone using Terraform API-driven runs? curl -s --header "Content-Type: application/octet-stream" --request PUT --data-binary @${config_dir}.tar.gz "$upload_url" I am trying to understand and use this. I’d like to do this through Go or Python
Terraform Cloud, I’m assuming
Yes or Terraform Enterprise
Hello, I’m trying to update the AMI on an EKS cluster created with the terraform-aws-eks-cluster-0.38.0 module and terraform-aws-eks-node-group-0.19.0, setting create_before_destroy = true in the eks_node_group module, but pods are not relocated to the new nodes and the node group keeps modifying and times out. Anybody using this kind of rolling update with these modules? Any hint about how to orchestrate these rollouts? Thanks.
do you mean rolling update in ASG or k8s deployment?
2021-06-04
hi, anyone have any good resources or links for terraforming an AWS API Gateway?
Can I get some upvotes on this? lol for some reason it’s been sitting there for a long time, but adding S3 Replication Time Control would be very valuable from Terraform https://github.com/hashicorp/terraform-provider-aws/pull/11337
original issue I think https://github.com/hashicorp/terraform-provider-aws/issues/10974
Hey all. Is this still under review? I’m manually editing the module with this PR and it’s working well so far. Any idea on a new release? https://github.com/cloudposse/terraform-aws-sso/pull/13
what a.permission_set_arn is providing a unique value to the account_assignment name. However the permission_set_arn can not be determined until after the apply of the permission sets. Using a.per…
Any updates on this yet?
@RB and @Andriy Knysh (Cloud Posse) I think are taking a look at this right now
(we encountered this as well)
2021-06-06
on .terraform/modules/apigw_certificate/main.tf line 37, in resource "aws_route53_record" "default":
37: name = each.value.name
A reference to "each.value" has been used in a context in which it is
unavailable, such as when the configuration no longer contains the value in
its "for_each" expression. Remove this reference to each.value in your
configuration to work around this error.
Started seeing this error with cloudposse/terraform-aws-acm-request-certificate. Anyone familiar with this Terraform error? I’ve never seen it before and can’t quite understand it
Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate
did you upgrade this module from a previous version? also, is this happening on plan or apply?
on plan. I figured it out: if you pass in a hostname with uppercase letters, you get this error
ahh good to know
pr in for anyone following along at home https://github.com/cloudposse/terraform-aws-acm-request-certificate/pull/46
quite the weird error
2021-06-07
Hi All, terraform plan already does some validation, like catching duplicate variables, but what is missing is duplicate validation for the contents of maps and lists. Does anyone know of a way/tool to validate .tfvars files for duplicates, including duplicates inside maps and lists?
https://github.com/terraform-linters/tflint might be able to do what you need @Adnan
A Pluggable Terraform Linter. Contribute to terraform-linters/tflint development by creating an account on GitHub.
is it possible to pass outputs as inputs for variables?
Yes, you can pass outputs from modules as inputs to other modules
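A tiny sketch of what that looks like (the module names and the subnet_id output are hypothetical):
module "network" {
  source = "./modules/network"
}

module "app" {
  source = "./modules/app"

  # one module's output feeding another module's input variable
  subnet_id = module.network.subnet_id
}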
hey guys, this is probably a FAQ so sorry if so: What’s a good article or series for writing CI for Terraform? Specifically, I now have a small team of people all working on a project together; what’s a good resource to follow on how to test, deploy, and not step on each other’s toes?
*We use CircleCI and Terraform, no PaaS (yet)
You can use Terraform Cloud if you are looking for a paid service… https://www.hashicorp.com/blog/learn-ci-cd-automation-with-terraform-and-circleci
If you don’t want Terraform Cloud, you can try something like this:
Get started automating Terraform in CI/CD with a new tutorial that walks you through deploying a web app and its underlying infrastructure using the same CircleCI workflow.
Use our CI/CD template for Terraform to learn how you can use Infrastructure-as-Code (IaC) to improve CI/CD processes. This template will show you exactly how to implement and maintain a CI/CD pipeline with Terraform.
if you want to validate and find configuration issues in your terraform during the CI process, you can use our free product https://get.soluble.cloud/
Automated Infrastructure as Code (IaC – Terraform, CloudFormation, Kubernetes) static security testing for developers
I’d suggest against Terraform Cloud. They’re getting better, but are still fairly behind their competitors. Scalr or Spacelift are the way to go IMO:
Scalr is a remote state & operations backend for Terraform with access controls, policy as code, and many quality of life features.
Enable collaboration. Ensure control and compliance. Customize and automate your workflows.
Spacelift has quite a few disadvantages compared to Scalr and Terraform Cloud… I like the Terraform Cloud triggers, which I use a lot and which don’t exist in Scalr, but if you are more into OPA, shared modules, and custom policies, Scalr might be a good fit.
You can check out our product (disclaimer: I am CEO of env0) at www.env0.com which allows you to do much more than Terraform Cloud imho.
You can check out this video which presents all 4 solutions - Terraform Cloud, env0, Scalr, Spacelift https://youtu.be/4MLBpBqZmpM
We use the Fargate module for deploying atlantis: https://www.runatlantis.io
Atlantis: Terraform Pull Request Automation
2021-06-08
Hi all, I am using the helm provider to deploy a chart… but when adding a template in the helm chart, the tf deployment does not detect the fact that I added a yaml file… how can this be resolved?
https://registry.terraform.io/ - Anyone else having issues reaching the site?
i can access the site @Brian Ojeda
Terraform 1.0 — now generally available — marks a major milestone for interoperability, ease of upgrades, and maintenance for your automation workflows.
v1.0.0 1.0.0 (June 08, 2021) Terraform v1.0 is an unusual release in that its primary focus is on stability, and it represents the culmination of several years of work in previous major releases to make sure that the Terraform language and internal architecture will be a suitable foundation for forthcoming additions that will remain backward compatible. Terraform v1.0.0 intentionally has no significant changes compared to Terraform v0.15.5. You can consider the v1.0 series as a direct continuation…
at last
Feels unexciting as there isn’t much new being released, but at least we’ll finally stop hearing jokes about terraform not being 1.0
exactly
it’s important to know that we can now stop having to consider a version upgrade as a major activity - which is nice
wonder if terraform test is still beta in v1.0 or staying with v0.15
@Chris Fowles
it’s important to know that we can now stop having to consider a version upgrade as a major activity - which is nice
Yes/no.
Now we’re back to 0.11 and 0.12 style version upgrades - the kind that happen every year and are scary.
With regular breaking changes, we got much better at handling them.
but at least we’ll finally stop hearing jokes about terraform not being 1.0
But now I lose my excuse for why cloudposse modules are 0.x
Hi all. QQ if I may.. I’m seeing the following error
Error: "name_prefix" cannot be less than 3 characters
This is coming from the eks-workers module. Looks as though it’s then coming from the ec2-autoscale-group module and then from the label/null module.
Full Error:
│ on .terraform/modules/eks_workers.autoscale_group/main.tf line 4, in resource "aws_launch_template" "default":
│ 4: name_prefix = format("%s%s", module.this.id, module.this.delimiter)
I can’t seem to see why it’s not getting an id…
FYI, I’ve changed nothing. Just calling the eks-workers module…
module "eks_workers" {
source = "./modules/eks-workers"
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_name = module.eks_cluster.eks_cluster_id
cluster_security_group_id = module.eks_cluster.security_group_id
instance_type = "t3.medium"
max_size = 8
min_size = 4
subnet_ids = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id
associate_public_ip_address = true
}
NB: Although the module is local, it was cloned this morning so is up to date.
Anyone got any thoughts on this? Would a GH Issue be more suitable for this?
Hey all, I’m using the terraform eks community module. I’m trying to tag the managed nodes with the following:
additional_tags = {
"k8s.io/cluster-autoscaler/enabled" = "true"
"k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
"Name" = var.cluster_name
}
In addition to this I’m trying to merge the tags above with var.tags, with minimal success - does anyone know how to do that?
I tried the following with no luck
additional_tags = {
merge(var.tags,
"k8s.io/cluster-autoscaler/enabled" = "true"
"k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
"Name" = var.cluster_name
)
}
tags = merge(
{
"Name" = format("%s", var.name)
},
local.tags,
)
}
i think your issue is the { } missing around your 3 bottom tags.
let me try adding the { }
additional_tags = {
merge(var.tags, {
"k8s.io/cluster-autoscaler/enabled" = "true"
"k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
"Name" = var.cluster_name
})
}
that results in
50: additional_tags = {
51: merge(var.tags, {
52: "k8s.io/cluster-autoscaler/enabled" = "true"
53: "k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
54: "Name" = var.cluster_name
55: })
56: }
Expected an attribute value, introduced by an equals sign ("=")
= ${var.cluster_name}” ?
it shouldn’t need that. but that’s odd.
still the same error
what version is this?
Oh
you still have a syntax error
14.10
additional_tags = merge(var.tags, {
"k8s.io/cluster-autoscaler/enabled" = "true"
"k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
"Name" = var.cluster_name
})
try that.
An argument or block definition is required here.
additional_tags is a map(string) value so that should work
turn your tags into a local and see if it works then
locals {
#Instance Tagging
tags = {
"service" = var.service_name
"env" = var.environment
"stackname" = "${var.environment}-${var.application_name}"
}
}
etc
then do local.tags in the merge.
hmm I’ll try - the thing is, var.tags are picked up from various *.tfvars files
so locals might make it so i duplicate some tags
are you outputting them?
the tags? no
Threading this to reduce noise.
good call
so your vars are in multiple files?
yeah
for different environments
How exactly are you structuring your terraform?
each app/env should have its own set of terraform.tfvars files
something like
app
  terraform
    dev
      terraform.tfvars
      main.tf
      outputs.tf
      variables.tf
    stage
      terraform.tfvars
      main.tf
      outputs.tf
      variables.tf
    prod
      terraform.tfvars
      main.tf
      outputs.tf
      variables.tf
(your experience may vary this is what we use basically)
Or use something like terragrunt where you can define them all in a single place and it keeps it a bit more DRY.
you should be able to import your module in the main.tf call, and expose the locals to the module there, where it can generate the local tags.
additional_tags = merge(var.tags, {
Name = var.cluster_name
"k8s.io/cluster-autoscaler/enabled" = "true"
"k8s.io/cluster-autoscaler/${var.cluster_name}" = "true"
}, )
that seems to have worked, however my instances don’t have any of the tags after apply
the plan didn’t show a change either
Hi all, I am using the helm provider to deploy a chart… but when adding a template in the helm chart, the tf deployment does not detect the fact that I added a yaml file… how can this be resolved?
I am not sure if there is a clean way to make it work with terraform. A hack/workaround could be to determine if there are changes some other way and, if yes, taint/replace the resource. Or just handle helm separately :D
@Adnan So is this a known “side-effect” of using the helm provider? That is a big issue imo…
yes, i think it’s a known issue https://github.com/hashicorp/terraform-provider-helm/issues/372; not aware if it has been fixed somewhere
Terraform Version Terraform v0.12.12 Helm provider Version ~> 0.10 Affected Resource(s) helm_resource Terraform Configuration Files resource "helm_release" "service" { name =…
2021-06-09
I posted a question for module support yesterday and it’s lost in the scroll back. Is this the best place for module support? Or should I raise a github issue? TIA.
Here is good
Thanks, I’ve reshared.
Reshared here so it doesn’t get lost in scrollback
Hi all. QQ if I may.. I’m seeing the following error
Error: "name_prefix" cannot be less than 3 characters
This is coming from the eks-workers module. Looks as though it’s then coming from the ec2-autoscale-group module and then from the label/null module.
Full Error:
│ on .terraform/modules/eks_workers.autoscale_group/main.tf line 4, in resource "aws_launch_template" "default":
│ 4: name_prefix = format("%s%s", module.this.id, module.this.delimiter)
I can’t seem to see why it’s not getting an id…
FYI, I’ve changed nothing. Just calling the eks-workers module…
module "eks_workers" {
source = "./modules/eks-workers"
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_name = module.eks_cluster.eks_cluster_id
cluster_security_group_id = module.eks_cluster.security_group_id
instance_type = "t3.medium"
max_size = 8
min_size = 4
subnet_ids = module.vpc.public_subnets
vpc_id = module.vpc.vpc_id
associate_public_ip_address = true
}
NB: Although the module is local, it was cloned this morning so is up to date.
Does anyone know if there’s a way to ignore changes to an entire module? I’ve got this tgw module originally deployed, but it has been messed with manually a couple of times, to the point that I don’t know if I could salvage it by monkey-patching the tf code, hence this question…
Resources have the lifecycle meta-argument, with which you can use ignore_changes - I know this doesn’t answer your question, but the reason for mentioning it is https://www.terraform.io/docs/language/modules/syntax.html - this mentions that the lifecycle argument is reserved for future releases, so perhaps lifecycle is/will (be) available for modules
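For reference, a resource-level sketch (the resource and attribute are illustrative); as the error below shows, the same block is not accepted on module calls:
resource "aws_ec2_transit_gateway" "example" {
  description = "example"

  lifecycle {
    # ignore drift introduced by manual changes to these attributes
    ignore_changes = [tags]
  }
}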
│ Error: Unsupported argument
│
│ on main.tf line 19, in module "vpc":
│ 19: lifecycle = {
│
│ An argument named "lifecycle" is not expected here.
sadly just stumbled upon this.. hopefully it’ll get incorporated somehow
https://www.reddit.com/r/Terraform/comments/mrzsbg/how_to_use_lifecycle_feature_with_ec2instance/
In a terraform task, created an ec2_instance creation module module “ec2_instance” { source =…
any other shady hacks for this? or am i doomed to monkey patch this mess…?
Could you use terraform state mv to move resources into a new module which represents what is in state?
thanks, checking it out, I’m pretty much a newb when it comes to tf…
slight update: I ended up messing with the state file instead of monkey-patching the tf code… I’m not endorsing my actions in any way, shape, or form lol
Running into an issue creating an aks cluster in azure when using managed identity and private dns zones. Hoping to find anyone who has worked with AKS who could possibly provide some guidance, please
Hi Team, I would like to create a hosted zone in aws through terraform… Can you suggest a terraform module which does this? Any guidance would be helpful.
Terraform module which creates Route53 resources on AWS - terraform-aws-modules/terraform-aws-route53
Thank you both… https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/route53_zone - I tried this and it’s working…
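For the archive, the minimal resource-level version is just:
resource "aws_route53_zone" "main" {
  name = "example.com" # placeholder domain
}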
HCP Packer is a new cloud service designed to bridge the gap between image creation and deployment with image-management workflows. The service will be available for beta testing in the coming months.
While HCP Packer is not “Packer in the cloud,”
Too late, its 100% going to be branded “Packer in the cloud”
I created a module for Route 53 Resolver DNS Firewall using the cloudposse scaffolding if anyone wants to kick the tires on it https://github.com/pjaudiomv/terraform-aws-route53-resolver-dns-firewall
Terraform module to provision AWS DNS firewall resources. - pjaudiomv/terraform-aws-route53-resolver-dns-firewall
nice, in Terraform 1.0, terraform destroy -help states only that it’s an alias for terraform apply -destroy. But terraform apply -help doesn’t mention -destroy
2021-06-10
In regards to context.tf and this module… can someone tell me where module.this.id is coming from? In specific reference to the aws-ec2-autoscale-group and aws-eks-workers modules. But this seems to be a standard configuration across a lot of modules.
It’s using the null-label module. This is a module which doesn’t create infrastructure, but is designed to create a consistent name based on inputs
the module is instantiated as “this” and id is one of the null-label outputs. Specifically the one that outputs the “consistent name”
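A rough sketch of how those inputs compose into the id output (the values are placeholders):
module "this" {
  source = "cloudposse/label/null"

  namespace = "eg"
  stage     = "dev"
  name      = "app"
}

# module.this.id joins the labels with the delimiter, e.g. "eg-dev-app"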
I’m having a hard time trying to narrow down the error I’m seeing when I use the eks-workers module. It calls EC2-Autoscale-Group, which has a name prefix, which it gets from module.this.id. However the error I’m seeing suggests it’s getting no value from module.this.id
there’s no default name. Did you pass in any of the variables used by the null-label module?
Starting to see this now.. there’s no namespace, environment or stage.. which are inputs.
I haven’t done anything, I’m just using a CP module.
namespace, environment, stage, name, attributes
yes, this requirement is not well documented
simplest approach is to set name only, to specify the name you want to use for the module’s resources
I see.. So I must pass these attributes into the eks-workers module?
yes, you have to pass at least one of them
The confusion came because the offending module is nested
passing multiple of them, and the null label module’s other variables are designed for advanced workflows where you compose or nest multiple labels
Brilliant, thank you. That was a simple fix
I passed name, and now I’m onto the next error! But at least I’m past that point.
Thanks again!
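For anyone landing here later, a sketch of the fix (only the label input shown; all other inputs stay as in the original call):
module "eks_workers" {
  source = "./modules/eks-workers"

  # at least one of namespace/environment/stage/name must be set
  # so null-label can build a non-empty module.this.id
  name = "workers"

  # ...remaining inputs unchanged...
}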
I am new to terraform. I am trying to use cloudposse (git url: https://github.com/cloudposse/terraform-aws-tfstate-backend) to save the terraform state in an S3 bucket on AWS, but I keep getting this error on Jenkins:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Error: Failed to get existing workspaces: S3 bucket does not exist.
The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.
Error: NoSuchBucket: The specified bucket does not exist
status code: 404, request id: RTY8A45R6KR8G72F, host id: yEzmd9hrvPSY3MY3trWfvdtyw4VcJZ+L+hf79QpkOkbSD7GU4Xz9EViWHbDRXiHjTp8k5LgPIzM=
Any help and guidance will be appreciated, thanks.
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
Error: NoSuchBucket: The specified bucket does not exist
The bucket doesn’t exist - Create it first
when I created the bucket I got these errors:
Acquiring state lock. This may take a few moments...
Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
* ResourceNotFoundException: Requested resource not found
* ResourceNotFoundException: Requested resource not found
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
So you created the bucket and now you’re getting that error from TF?
I created it from the AWS console with the exact name that terraform expected for storing the state.
That’s an issue with DynamoDB - I’ve not used that module before, but it looks as though it creates the bucket for you if you follow the guide.
cloudposse/tfstate-backend/aws should have created the bucket for you if you followed this?
yes, that’s what it should do, but I don’t know why it’s not creating it
resource "aws_s3_bucket" "default" {
....
According to the module it does create the bucket.
resource "aws_dynamodb_table"
...
It also sorts the dynamo table.
Step one on the readme… where did you put it?
I added it in a folder called backend and then created a main.tf and added it there
this is my terraform structure
So the first step is in the main.tf in the backend folder and then the second step is in the backend.tf
That’s probably your problem.
add that module to management-site/main.tf
not sure that’s the problem; the jenkins deploy job calls it first, before the management site.
Have you tried just following the steps in the module first? To see if it works that way? Then moving things around once you know it works?
I’m not sure why you’ve got logic in a script to check whether the bucket exists, and if it doesn’t exist, run the backend module. To me that doesn’t make sense… the whole point of Terraform is that it creates things which don’t exist and doesn’t re-create things which do exist.
What you’ve done won’t work though, you’ve created a backend with no state to put into it.
You need to bin that backend directory and bring the module into your main.tf. Then run terraform init, followed by terraform apply, followed by terraform init -force-copy. The first init pulls the module down, the apply creates the S3 bucket, and the second init copies the backend to the bucket.
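A sketch of that bootstrap, loosely following the module’s README (the inputs shown are illustrative; check the README for the current ones):
module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"

  namespace  = "eg"
  stage      = "test"
  name       = "terraform"
  attributes = ["state"]

  # have the module write a backend.tf for the second init to pick up
  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}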
This has been resolved, thank you, but I didn’t use cloudposse in the end, as the issue persisted even after putting it all in the same file as you advised. Thanks for the help.
This was the one I used to achieve that. https://github.com/stavxyz/terraform-aws-backend
A Terraform module for your AWS Backend + a guide for bootstrapping your terraform managed project - stavxyz/terraform-aws-backend
hi everyone, looking for any ideas or resources that I can use to set up API Gateway with a Cognito user pool using terraform. Any help would be much appreciated; if you wanna contact me I can explain our current setup and the issues we are facing in more detail
Hi there. I ran into a weird error with the aws provider and wonder if anyone has run into this too:
resource "aws_synthetics_canary" "api" {
name = "test"
artifact_s3_location = "s3://${aws_s3_bucket.synthetic.id}"
execution_role_arn = aws_iam_policy.synthetic.arn
handler = "apiCanaryBlueprint.handler"
runtime_version = var.synthetic_runtime_version
zip_file = data.archive_file.synthetic.output_path
schedule {
expression = "rate(60 minutes)"
}
}
terraform apply and:
│ Error: error reading Synthetics Canary: InvalidParameter: 1 validation error(s) found.
│ - minimum field size of 1, GetCanaryInput.Name.
│
│
│ with aws_synthetics_canary.api,
│ on monitoring.tf line 94, in resource "aws_synthetics_canary" "api":
│ 94: resource "aws_synthetics_canary" "api" {
│
╵
I’ve got a VPC with some private subnets, and I’m passing those subnet IDs into a module to deploy instances to run an app. I’m also passing in an instance type, but not all instance types are available in all regions, and one subnet doesn’t have the instance type I need in it. I’m trying to use the aws_subnet data source to retrieve the AZs each subnet is in, then use aws_ec2_instance_type_offerings to filter the list of subnets so I only deploy in ones where the instance type is available, but I’m not sure how to create a data resource for each subnet. Can I use for_each here?
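One possible shape for this, assuming var.subnet_ids and var.instance_type as inputs (the names are placeholders):
data "aws_subnet" "selected" {
  for_each = toset(var.subnet_ids)
  id       = each.value
}

data "aws_ec2_instance_type_offerings" "by_az" {
  for_each      = data.aws_subnet.selected
  location_type = "availability-zone"

  filter {
    name   = "instance-type"
    values = [var.instance_type]
  }

  filter {
    name   = "location"
    values = [each.value.availability_zone]
  }
}

locals {
  # keep only the subnets whose AZ actually offers the instance type
  usable_subnet_ids = [
    for id, subnet in data.aws_subnet.selected : id
    if length(data.aws_ec2_instance_type_offerings.by_az[id].instance_types) > 0
  ]
}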
Does anyone know a workaround for this issue? https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/issues/129
When passing iam db role to iam_roles variable, e.g. iam_roles = [<db_role_arn>] , it fails to apply role to Aurora postgres db cluster with following error - Error: InvalidParameterValue: Th…
trying to provide iam_roles but getting this error: Error: InvalidParameterValue: The feature-name parameter must be provided with the current operation for the Aurora (PostgreSQL) engine.
@here We’re having a special edition of #office-hours next week and will be joined by @Taylor Dolezal who is a Senior Developer Advocate at HashiCorp. Please queue up any questions (or gripes) you have about Terraform on this thread and we’ll have Taylor review them live on the call, thanks!
@here we have another special edition of Office Hours next week Wednesday June 16th!
@Taylor Dolezal will be joining us! Taylor is a Senior Developer Advocate at HashiCorp and we’ll be talking to him about an array of topics including: his role, what’s it like to be a developer at HashiCorp, what we can expect next for Terraform, Nomad vs Kubernetes, security considerations with custom providers, and answering live Q&A from anyone who joins! Hope to see you there
@Taylor Dolezal has joined the channel
2021-06-11
Hi all, quick question for sanity’s sake… In the EKS-Workers module where it refers to autoscaling groups.. This is not the same as Cluster Autoscaler? Or is it?
I’ve deployed a cluster using EKS-Workers.. set max nodes to 8 and min nodes to 3.. but when I deploy 20 nginx pods the nodes don’t scale.
Perhaps there’s an input to enable autoscaling? Or do I need to look at writing something myself to enable cluster auto scaling?
I think I’ve answered this myself. I needed to deploy the autoscaler pod
2021-06-12
https://github.com/tonedefdev/terracreds allows you to store tokens for TF Cloud or similar SaaS (env0, Scalr, Spacelift) in the macOS or Windows vault instead of plain text, same as aws-vault. I use this when switching the TF CLI workflow between TF Cloud and Scalr.
A Terraform Cloud/Enterprise credentials helper. Contribute to tonedefdev/terracreds development by creating an account on GitHub.
2021-06-13
Question about terraform-aws-modules/vpc/aws: I’m switching from a single-NATGW to a multi-NATGW setup, one per AZ. The plan wants to destroy the NATGW that was originally created. This seems fishy to me, as it would basically cut off outgoing traffic while the apply is doing its thing… Anyone know a way to skip the destroy, or is there a better way to go about this?
Check if create_before_destroy is set and is true. If it is set and true, then there is little to no downtime.
https://www.terraform.io/docs/language/meta-arguments/lifecycle.html
Terraform by HashiCorp
hmm, don’t think that would work in this case, as it would try to create a NATGW with the elastic IP hooked up to the original NATGW…
the problem here is that I have whitelisted the original elastic IP somewhere else, and messing with the original NATGW in any way, shape, or form would break this link
shoulda googled the source… TL;DR = SOL
https://github.com/terraform-aws-modules/terraform-aws-vpc/issues/506
I've created a VPC with single_nat_gateway=true. When attempting to change to single_nat_gateway=false the plan shows the following: # module.vpc.aws_nat_gateway.this[0] must be replaced -/+ re…
Hi guys… Do we have any utility like tfenv for Windows, to use whichever tf version we like?
2021-06-14
Hi folks, I am starting to migrate my terraform state and stuff to Terraform Cloud. So far so good, however now I encountered the following error when migrating the module using the cloudposse eks module.
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│ on .terraform/modules/eks_cluster/auth.tf line 83, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│ 83: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
│
Any ideas/hints?
I have tried to change the kubernetes provider, checked the IAM credentials. Still no clue
looks like your kubernetes provider is not configured correctly or is expecting k8s on localhost
however before importing the state to Terraform Cloud it worked.
There is nothing in the docs saying that the provider needs a config file; you just pass the host, cluster_ca_certificate and token from the data resources
correct, the config file is not necessary if you set those properties. I was just going by the error message, where it shows that it is trying to connect to 127.0.0.1
it does not make sense, it tries to connect to localhost… the only change I made was adding the remote backend pointing to terraform cloud
Is your backend initialized? There’s a one-time step to push your s3 config.
Yes it is. I see the state file and the resources in the TFC GUI. When I run plan it executes it remotely
But then it throws the error
It’s a little hard to say, but it feels like a configuration problem. I’ve often found those out by looking at the TF_LOG=TRACE output. There’s a lot of info given about each call, including where the variables are referenced from.
Thx, will give it a try
I found the cause, the missing config path: Add this env var to fix
export KUBE_CONFIG_PATH=~/.kube/config
Nice catch!
Well I needed to go through 70k lines of trace logs
That does seem like a lot. I’ve been thinking of a tool that might help make that easier – something that parses for diagnostic info, and helps the user interpret the logs a bit better. It might be something that could be added to the utils provider, given time.
I was able to send a notification to an SNS topic when a new log event appears in a Log Group via aws_cloudwatch_log_metric_filter and aws_cloudwatch_metric_alarm, but I was wondering, how can I send the message itself and not just metric values? Thanks!
hi there, I’m a new user of the atmos workflow. Just wondering how to import existing resources using atmos, or should I do it outside of atmos and then use atmos after?
Hi all – I’m troubleshooting a specific problem in terraform-aws-components//account-map, a shared component which makes remote-state calls using modules defined in terraform-yaml-stack-config. I’ve been troubleshooting a few cases where the terraform-aws-provider seems to hang up for various reasons during the remote-state call. The reasons aren’t always clear, but they result in terraform errors such as: Error: rpc error: code = Unavailable desc = transport is closing
Would any of you have an idea what the provider might be giving up on here? Are there techniques that might pull more debugging info out of the utils provider?
Here’s a TF_LOG=TRACE output. I’ve found this particular issue to be more difficult than most.
This line seems to be central to the issue:
path=.terraform/providers/registry.terraform.io/cloudposse/utils/0.8.0/linux_amd64/terraform-provider-utils_v0.8.0 pid=16307 error="exit status 2"
The provider seems to fail without a lot of additional info. In this case, I’ve linted the yaml, checked the variable dependencies, etc. I might try looking for version compatibilities next – or perhaps rebuild the module to add a bit more debugging, if available.
I figured this out. There were pieces of terraform-yaml-stack-config I didn’t understand. Once I found the TF_LOG_PROVIDER flag and the terraform-provider-utils source, I realized the merge tool could actually be a powerful scanning/diagnostic/debug tool to help with bad configurations like my own. Might be a win for other people down the road. I’ll try and post a few example diagnostics – like maybe something that warns about bad remote-state configs.
Hello guys,
I just pushed a new PR to the Beanstalk env module. Could somebody take a look at it when possible? This feature would close some old issues and PRs at the same time. https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/182
what Add NLB support in the module Set default protocol to TCP in case that loadbalancer_type == "network" S3 logs and Security Groups are not valid for Network ELB. HealthCheckPath appl…
Anyone here using default_tags with the AWS provider? Seen any gotchas? A few open issues around perpetual diff/conflicting resource tag problems. Looks like it’s maybe not fully baked yet…
Only issue I had was with ecs on govcloud
Perpetual diffs when changing a resource not related to the tag change? Are you using terraform-null-label at the mo on all resources and passing them around into other modules etc?
Description I have been looking forward to the default tagging support and tested it on a project yesterday which uses https://github.com/terraform-aws-modules/terraform-aws-vpc/ — this immediately…
Provider version 3.38.0 Terraform 0.15.1 when using default_tags feature apply fails as it tries to create tags on ecs resource. Community Note Please vote on this issue by adding a reaction to t…
Yup that’s the one
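For context, the feature under discussion is the provider-level block (the tag values are placeholders):
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "dev"
      ManagedBy   = "terraform"
    }
  }
}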
2021-06-15
Hi all, I’m fairly new to Terraform and I’m still getting to grips with the best practices…
I’m currently in the process of creating a simple environment which includes a newly created - VPC, IGW, Public Subnet and a EC2 Instance.
However, at the point of applying the config I receive the error message below, has anyone seen anything like this before? Any help/advice would be greatly appreciated
terraform apply --auto-approve
module.vpc.aws_subnet.main_subnet: Creating...
module.vpc.aws_vpc.vpc: Creating...
module.vpc.aws_vpc.vpc: Creation complete after 4s [id=vpc-09da0001c2b98a15f]
module.network.aws_internet_gateway.igw: Creating...
module.network.aws_internet_gateway.igw: Creation complete after 1s [id=igw-0e922b721b610639f]
╷
│ Error: error creating subnet: InvalidVpcID.NotFound: The vpc ID 'aws_vpc.vpc.id' does not exist
│ status code: 400, request id: 5b4df02a-6826-45a4-a3ca-1e7fcaff4920
│
│ with module.vpc.aws_subnet.main_subnet,
│ on modules/vpc/main.tf line 11, in resource "aws_subnet" "main_subnet":
│ 11: resource "aws_subnet" "main_subnet" {
Code snippet?
What are you supplying as the value for the availability_zone, if anything? That looks like your problem.
resource "aws_subnet" "main_subnet" {
vpc_id = aws_vpc.main.id
cidr_block = var.subnet_cidr
map_public_ip_on_launch = “true”
availability_zone = var.availability_zone
tags = { name = “main Subnet” } }
variable “availability_zone” { type = string default = “eu-west-2a” }
are you passing any vars when calling terraform apply? do you have a terraform.tfvars file setting the availability_zone?
the error is coming from the aws api, it’s not really a terraform error, exactly. might inspect the console to see what’s up…
actually, the error is saying you are passing the literal string "var.availability_zone" as the value for var.availability_zone…
InvalidParameterValue: Value (var.availability_zone)
so i’d take another look at your aws_subnet block, and make sure it’s not actually this:
availability_zone = "var.availability_zone"
I haven’t got a .tfvars file
Any ideas why the variable I have set isn’t being picked up?
module "vpc" {
source = “./modules/vpc”
vpc_id = “var.vpc_id”
vpc_cidr = “10.0.0.0/24”
subnet_cidr = “10.0.1.0/24”
availability_zone = “eu-west-2a”
}
this is incorrect syntax:
vpc_id = "var.vpc_id"
(at least, it is incorrect if you mean to pass the value of the variable vpc_id, instead of the literal string "var.vpc_id")
remove the quotes:
vpc_id = var.vpc_id
quotes have been removed, but error still occurs
New/different error message received -
module.network.aws_internet_gateway.igw: Creation complete after 1s [id=igw-065812ef10aa22484]
╷
│ Error: error creating subnet: InvalidVpcID.NotFound: The vpc ID 'aws_vpc.vpc.id' does not exist
│   status code: 400, request id: b5a36b6d-fe2f-4f46-b2c5-eb6452e7b23e
│
│ with module.vpc.aws_subnet.main_subnet,
│ on modules/vpc/main.tf line 11, in resource "aws_subnet" "main_subnet":
│ 11: resource "aws_subnet" "main_subnet" {
resource "aws_subnet" "main_subnet" {
vpc_id = var.vpc_id
cidr_block = var.subnet_cidr
map_public_ip_on_launch = "true"
availability_zone = var.availability_zone
tags = {
  name = "main Subnet"
}
}
variable "vpc_id" {
type = string
default = "aws_vpc.vpc.id"
}
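That default strongly suggests the root cause: the subnet receives the literal string "aws_vpc.vpc.id" instead of a real VPC ID. Since the VPC and subnet live in the same module, the subnet can reference the resource directly, along these lines:
resource "aws_subnet" "main_subnet" {
  vpc_id            = aws_vpc.vpc.id # a reference, not a quoted string
  cidr_block        = var.subnet_cidr
  availability_zone = var.availability_zone
}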
Hello folks, I’m trying to play with terraform-aws-ecs-web-app, launching examples/complete with all defaults. terraform apply went well, but the task eg-test-ecs-web-app keeps dying because of “Task failed ELB health checks”, maybe because of fargate. Someone have an idea?
FYI, if anybody here was using my gitpod-terraform image for Gitpod, I moved it to ECR Public as Dockerhub annoyed me: https://github.com/Vlaaaaaaad/gitpod-terraform/pull/11 AKA https://gallery.ecr.aws/vlaaaaaaad/gitpod-terraform
DockerHub got… lazy and user-hostile so I am moving this image to Amazon ECR Public. Also doing some long-needed cleanup
Amazon ECR Public Gallery is a website that allows anyone to browse and search for public container images, view developer-provided details, and see pull commands
Question on using multiple workspaces and graphs
We currently use terragrunt and its dependency block to pull outputs from other workspaces (example: one workspace is for VPC config and most other resources pull subnet IDs from it). It seems we could be doing this with the terraform_remote_state data source, but we would miss out on terragrunt’s ability to understand the graph of dependencies (the run-all commands are smart about ordering based on the graph).
How do folks handle the graph without a tool like Terragrunt? Some form of pipeline which understands dependencies? Avoid having deeply nested graphs to begin with?
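(For reference, the terraform_remote_state pattern in question looks roughly like this; the backend settings are placeholders:)
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# e.g. subnet_ids = data.terraform_remote_state.vpc.outputs.subnet_ids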
we label all of our components and then make other components dependencies
plus spacelift provides graph visualization across root modules
checkout env0, scalr, and TFC too
I am using this module (https://registry.terraform.io/modules/cloudposse/config/aws/latest) to deploy AWS Config using the CIS 1.2 AWS benchmark with this submodule (https://registry.terraform.io/modules/cloudposse/config/aws/latest/submodules/cis-1-2-rules). I get an error on the terraform plan though:
│ Error: Invalid index
│
│ on .terraform/modules/aws_config/main.tf line 99, in module "iam_role":
│ 99: data.aws_iam_policy_document.config_sns_policy[0].json
│ ├────────────────
│ │ data.aws_iam_policy_document.config_sns_policy is empty tuple
│
│ The given key does not identify an element in this collection value.
The error goes away when I set create_sns_topic to true. Any insights on how to get rid of this error? It seems like the module expects the sns policy to exist
there’s probably a parameter to pass in an existing SNS topic
I guess the module expects you to either specify create_sns_topic = true or to specify a pre-existing SNS topic
Ahh okay, saw the readme and it says to add the var findings_notification_arn if I pass false for create_sns_topic. Thanks for the insight!
cc: @matt
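Summarizing the fix as a sketch (only the relevant inputs shown; the ARN is a placeholder):
module "config" {
  source = "cloudposse/config/aws"

  # either let the module create the topic...
  create_sns_topic = true

  # ...or point it at a pre-existing one:
  # create_sns_topic          = false
  # findings_notification_arn = "arn:aws:sns:us-east-1:111111111111:config-findings"
}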
2021-06-16
PSA please queue your questions
https://sweetops.slack.com/archives/CB6GHNLG0/p1623371713331100
@here We’re having a special edition of #office-hours next week and will be joined by @Taylor Dolezal who is a Senior Developer Advocate at HashiCorp. Please queue up any questions (or gripes) you have about Terraform on this thread and we’ll have Taylor review them live on the call, thanks!
that’s this week, now
Ideas…. what do you not like about terraform?
What are the hardest parts of getting terraform adopted by your organization?
What’s it like behind the scenes running such a popular open source project with thousands of contributors?
What are the benefits of using CDK for Terraform over vanilla Terraform? I understand the case for using a programming language a developer might be more familiar with compared to Terraform, but is there any other huge benefit to the CDK that might be overlooked by someone already using Terraform? Also, are there any anti-patterns that can be avoided by someone just starting out with the CDK? Thank you!
Good one
v1.1.0-alpha20210616 1.1.0 (Unreleased) NEW FEATURES: lang/funcs: add a new type() function, only available in terraform console (#28501) ENHANCEMENTS: configs: Terraform now checks the syntax of and normalizes module source addresses (the source argument in module blocks) during configuration decoding rather than only at module installation time. This…
The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it's handy to be abl…
When using https://github.com/cloudposse/terraform-aws-eks-iam-role?ref=tags/0.3.1 how can I add an annotation to the serviceaccount to use the IAM role I created? I had to do it manually after the serviceaccount and IAM role were created. Is there a way I can automate this?
Command I used: kubectl annotate serviceaccount -n app service eks.amazonaws.com/role-arn=arn:aws:iam::xxxxx:role/rolename@app
Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role
do you mean this variable - service_account_namespace?
also see the comment here https://github.com/cloudposse/terraform-aws-eks-iam-role/blob/debd970108254a59950c7e5f98dcfffc1719270e/main.tf#L8
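One way to automate the annotation from the same Terraform run is via the kubernetes provider; the module output name below is hypothetical, so check what the eks-iam-role module actually exports:
resource "kubernetes_service_account" "app" {
  metadata {
    name      = "service"
    namespace = "app"

    annotations = {
      # hypothetical output name; use the module's real IAM role ARN output
      "eks.amazonaws.com/role-arn" = module.eks_iam_role.service_account_role_arn
    }
  }
}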
This is more of an annoyance with AWS than with Terraform but…
Is there a way to deregister all task definitions for a given task family on destroy?
TLDR: I’m finding that when I work with ECS, each service creates a “family” of task definitions. Once I’m done with the service I can terraform destroy and it goes away, but the task family and the task definitions hang around. I can clean them up from the console and/or CLI, but is there a way to nudge TF to do it for me? It would be one less thing to have to do to keep my “unused-resources-in-the-AWS-console” OCD in check.
This is the kind of thing we want to add to a forthcoming terraform-aws-utils provider (nothing there yet, but we’re pretty close to pulling the trigger on it)
anyone know why during a terraform destroy terraform would still try to resolve the data sources?
for example, i have a blah.tf file that has some data sources for resources that no longer exist. This makes my terraform destroy error out. Why does terraform care about those data sources? shouldn’t it just try to destroy everything within the state?
im starting out a new greenfield terraform project, and I am curious how people are structuring their projects? Most recently, in the last few years, I have been using a workspaces based approach (OSS workspaces, not tf cloud workspaces), but I found a lot of issues with this approach (the project and state grew to the point of needing to be moved to separate repos, which led to issues of how to handle changes that crossed repos, and also how to handle promotion of changes, etc), so I’m looking around to see other ways of structuring a TF project. Are people still using terragrunt, and structuring their project accordingly? Thanks!
Multi environment AWS Terraform demo. Contribute to msharma24/multi-env-aws-terraform development by creating an account on GitHub.
Using Terraform workspaces sucks, so I figured creating a wrapper script to do multi-env deployment was the way to go; here is the reference I posted on my github
I think workspaces are on the way out. The Terraform 1.0 compatibility guarantees explicitly call out workspaces as not being guaranteed.
The recommended approach from Hashicorp is one directory per environment.
Personally, I use a single directory and a different backend configuration per environment – so like Terragrunt, but we’re using Spacelift.
how do you handle promotion of changes with the different backends?
Generally, we continuously deploy our infra, so once you merge a PR, it will automatically deploy to dev, then stg, then prod. So there’s no real promotion in the usual case.
If we are making a major change which does require promotion, generally we rely on logic in the Terraform configuration which creates something only if environment is “dev” or whatever.
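A sketch of that conditional-creation idiom (the resource and variable are illustrative):
resource "aws_sns_topic" "dev_only" {
  # created only in dev; other environments get count = 0
  count = var.environment == "dev" ? 1 : 0
  name  = "dev-only-notifications"
}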
awesome, thanks!
in my team we are doing the same as @Alex Jurkiewicz
This is an interesting discussion!
@Alex Jurkiewicz @Pierre-Yves I have followed a similar approach, but only auto-deploying feature/* branches to dev; then when a PR is opened we run a TF plan against prod, so that we can review the plan + code review in the PR, and when merged, auto-deploy to Prod
hm, you deploy feature branches? What if the PR is not later merged?
@Alex Jurkiewicz can you comment on this?
The Terraform 1.0 compatibility guarantees explicitly call our workspaces as not being guaranteed.
I just started a project that is using TF 1.0 and workspaces so reading this gives me pause
Typically, I have one environment per directory but I’m setting up a project in automation that uses workspaces as it simplifies the project structure. The project will have multiple (>5) environments at a time and they will come and go randomly as devs work on them so I figured using workspaces would be the best approach.
There is a section in the Terraform docs about their compatibility guarantees for Terraform 1.x. Search that for “workspace”
I think workspaces are the only viable native-terraform approach to dynamic environments like you describe, and I still use them for this purpose. But I’m trying to move away from them for static environments
thanks
I have static environments, stage and production, but no workspaces. I think I’ll need workspaces to be able to test my modules and destroy everything after the test, but I have not found other usages…
@Ian Bartholomew Which direction did you end up going? I’m about to hit the same question.
I’m going with terragrunt and the folder per environment, the nested folder per domain, and using external modules to handle promotion of changes
Which external modules?
our own. basically having a separate repo for a given domain, versioning it using git tags, and referencing it in the primary infrastructure repo using the ref option to point to a specific release. that way you can promote changes in your infra with PRs, changing the version to point to a new one. This section in their docs does a good job at explaining it
Learn how to start with Terragrunt.
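A minimal sketch of the pinning pattern described above (the repo URL and tag are hypothetical); promoting a change is then just a PR that bumps the ref:
module "networking" {
  # pinned to a tagged release of the domain repo
  source = "git::https://github.com/example-org/terraform-networking.git?ref=v1.2.0"
}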
Thanks. I’m a bit wary of creating a separate repo for each domain, but the flow makes sense otherwise.
yah, i feel you. The reason i am is that in the past, when we have put all the resources into a single state, it gets hard to manage pretty quickly, both in terms of actual state size, as well as just understanding what all is changing. like on one project, doing a plan required redirecting the plan output to a separate file to inspect, since the output would exceed the console line limit
so, for me, breaking it up makes it easier to manage
I’ve always separated out state by env/module. Is that not the norm?
i think it is. we might be talking about the same thing. i just mean domain as in area of concern
I agree with @Ian Bartholomew: splitting the tfstate amongst repos = lightweight tfstate + better maintainability + improved access rights
2021-06-17
Can someone share with me good terraform examples for eks+fargate?
this is an awesome reference https://github.com/maddevsio/aws-eks-base I highly recommend it
This boilerplate contains the know-how of the Mad Devs team for the rapid deployment of a Kubernetes cluster, supporting services, and the underlying infrastructure in the Amazon cloud. - maddevsio…
@Mohammed Yahya Thank you!
just to learn
How can I suppress Checkov failures coming from upstream modules pls? Putting suppression comments in the module call doesn’t seem to work.
module "api" {
#checkov:skip=CKV_AWS_76:Do not enable logging for now
#checkov:skip=CKV_AWS_73:Do not enable x ray tracing for now
source = "[email protected]:XXXXXXXX/terraform-common-modules.git//api-gateway?ref=main"
<snip>
}
Skipping directories: to skip a whole directory, use the environment variable CKV_IGNORED_DIRECTORIES. Default is CKV_IGNORED_DIRECTORIES=node_modules,.terraform,.serverless
you know where the modules path is, right? in the .terraform folder > modules. you can skip the whole folder or pass down the specific module.
also you can skip the check itself.
another solution would be to exit with non-zero
perfect! thank you
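For reference, a hedged example of the two suggestions above (the check IDs are the ones from the snippet; paths are illustrative):
# skip whole vendored module directories
CKV_IGNORED_DIRECTORIES=node_modules,.terraform,.serverless checkov -d .
# or skip specific checks globally
checkov -d . --skip-check CKV_AWS_76,CKV_AWS_73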
Hi, I don't know if this is the right place to ask, but I'm trying to run the Elastic Beanstalk example at https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/master/examples/complete and it is not working. I get the following output. The only thing I changed was the value of the "source" and "version" fields in the "elastic_beanstalk_environment" module, to "cloudposse/elastic-beanstalk-environment/aws" and "0.41.0". What am I missing?
Did you update the variables file?
yes
Actually I fixed it. It’s a version issue. I will do a PR to the repo
what Updated the versions of dependencies in the example so that it works with terraform 1.0 why Previous version returned "deprecated" errors on vpc module. Example didn't run out…
Hello. I am trying to learn how to use the for_each meta argument and am really hoping someone can help me out. I am trying to create 4 subnets each with a different name & cidr, like so:
resource "aws_subnet" "public_a" {
for_each = {
"public_subnet_a" = "10.10.0.0/28"
"public_subnet_b" = "10.10.0.16/28"
}
vpc_id = aws_vpc.this.id
cidr_block = each.value
availability_zone = "us-west-1a"
tags = merge(
var.tags,
{
"Name" = each.key
},
)
}
I need to use the resultant subnet ids in several other resources like acls and route tables but am having issues because everything then seems to require I add a for_each argument to each resource so I can then refer to the aws_subnet.public_a[each.key].id. Questions: 1) Is there a way around doing this such as splitting these into individual elements and then not having to add a for_each to every resource that references the subnet id? 2) Even if I add the for_each to something like a route table I still get errors and am not sure what the for_each should reference since if I put something like for_each = aws.subnet.public_a.id I would have to add the [each.key]. Assuming I do have to use a for_each for every resource that references the subnets, what is the proper way to handle this? 3) Is my code for the subnet ok or should I have handled it differently - perhaps it is inherently faulty?
I have tried element, flatten, using a data source block, using [*], etc.. I appreciate any help but please explain in terms someone who is learning can understand as I really want to progress in my understanding. Thank you.
Can you show the complete module code?
Separate out the values: "public_subnet_a" = "10.10.0.0/28", "public_subnet_b" = "10.10.0.16/28" into variables.tf and then use: for_each = var.{name of variable}
For what it’s worth not that it solves your immediate issue, but might help… I wrote up how to use for-each a bit more on my blog and it’s to date got a ton of traffic as many people get confused on this.
Your mileage may vary, but maybe you'll find something useful there, and if not add a comment and let me know. I had to write that down as it's not super intuitive.
I feel like the for each design reflects the Go developers that built it, but not normal users so behavior isn’t similar to many other tools I’ve used. Once I started learning Go it made a lot more sense.
While iterating through a map has been the main way I’ve handled this, I finally ironed out how to use expressions with Terraform to allow an object list to be the source of a for_each operation. This makes feeding Terraform plans from yaml or other collection input much easier to work with.
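To address question 2 directly, a minimal sketch, assuming the subnet resource above (the route table itself is hypothetical): you can hand the whole resource map to for_each and take the ids from each.value:
resource "aws_route_table_association" "public" {
  # one association per subnet, keyed by the same map keys
  for_each       = aws_subnet.public_a
  subnet_id      = each.value.id
  route_table_id = aws_route_table.public.id
}
And where a plain list of ids is accepted: subnet_ids = values(aws_subnet.public_a)[*].id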
Hey thanks for the responses - I will read your blog and see if I can figure it out @sheldonh and if not I will post the code after I sanitize it @Joe Niland
does the cloudposse “terraform-aws-ecs-cloudwatch-autoscaling” module support target tracking scaling strategy?
or is it just the step scaling strategy
2021-06-18
Hey everyone!
I got a problem, can someone please help me with this?
i) What I am getting - I am creating prometheus by using a helm release of prometheus and this repo "<https://prometheus-community.github.io/helm-charts>",
so I am getting service names like: 1) prometheus-kube-prometheus-operator 2) prometheus-kube-state-metrics 3) prometheus-grafana, etc.
ii) What I am expecting - 1) prometheus-operator 2) state-metrics 3) grafana, etc.
iii) Where I am stuck - As I am using this repo I am only giving a name in the helm release of prometheus, name="prometheus", so I think the suffix is coming from the repo. How can I change the names to what I want?
Prometheus community Helm charts
If you have a look at the kube-state-metrics chart, you can see that the name is composed of the release name and chart name, if you don't provide a fullnameOverride value. In your case that would be prometheus (release name) and kube-state-metrics (chart name), which results in prometheus-kube-state-metrics. If you don't want that you will have to set the fullnameOverride in your values file.
Prometheus community Helm charts. Contribute to prometheus-community/helm-charts development by creating an account on GitHub.
resource "helm_release" "prometheus" {
chart = "kube-prometheus-stack"
name = "prometheus"
namespace = "monitoring"
create_namespace = true
repository = "<https://prometheus-community.github.io/helm-charts>"
# When you want to directly specify the value of an element in a map you need to escape the dot with \\.
set {
name = "podSecurityPolicy\\.enabled"
value = true
}
set {
name = "server\\.persistentVolume\\.enabled"
value = false
}
set {
name = "server\\.resources"
# You can provide a map of value using yamlencode
value = yamlencode({
limits = {
cpu = "200m"
memory = "50Mi"
}
requests = {
cpu = "100m"
memory = "30Mi"
}
})
}
}
where in this can I pass it?
I haven't worked with the helm Terraform provider yet, but it would be something like
set {
name = "fullnameOverride"
value = "my-full-name"
}
I reckon
The stack resource does use a suffix (see here for example), so you might be out of luck there.
Prometheus community Helm charts. Contribute to prometheus-community/helm-charts development by creating an account on GitHub.
the value should be what I want to name the prometheus service, right?
Yes, in case of kube-state-metrics it would be the full name; for the stack it would probably be prometheus (to get prometheus-operator).
but the single helm chart is installing all the things
all the services
i tried by passing like this
set {
name = "fullnameOverride"
value = "my-full-name"
}
but it did not work
Is it possible or not?
Like without changing anything in repo
Hi, I'm using the module terraform-aws-config from https://github.com/cloudposse/terraform-aws-config and it seems you can't create the resources when using "create_sns_topic = false"; you get this error:
╷
│ Error: Invalid index
│
│ on main.tf line 99, in module "iam_role":
│ 99: data.aws_iam_policy_document.config_sns_policy[0].json
│ ├────────────────
│ │ data.aws_iam_policy_document.config_sns_policy is empty tuple
│
│ The given key does not identify an element in this collection value.
Just letting you guys know.. no breaking issue, using terraform 0.15.5 btw
did you also see findings_notification_arn?
someone else reported this same confusion recently, sounds like the docs need an improvement here
Hi, I didn’t understand how to set the value of the map_additional_iam_roles variable in the terraform-aws-eks-cluster module
I tried it this way and was unsuccessful:
map_additional_iam_roles = {"rolearn":"arn:aws:iam::xxxxxxxx:role/JenkinsRoleForTerraform"}
In the README, the type for this variable is defined as:
list(object({
rolearn = string
username = string
groups = list(string)
}))
You need to provide values in that format; eg, you are missing the username and groups values
I tried that way, but it failed too.
I’m checking the hashicorp documentation to see if I can get a light
how did it fail? Did you get the same error message, or a different one?
the error is different
Error: Variables not allowed
on vars.tf line 137, in variable "map_additional_iam_roles":
137: rolearn = "arn:aws:iam::xxxxx:role/JenkinsRoleForTerraform"
Variables may not be used here.
Error: Missing item separator
on vars.tf line 137, in variable "map_additional_iam_roles":
136: default = [
137: rolearn = "arn:aws:iam::xxxxxxx:role/JenkinsRoleForTerraform"
Expected a comma to mark the beginning of the next item.
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exit status 1
sounds like you have some syntax errors then. Can you post your full code for this module block?
I’m guessing you are trying to wrap this module and pass in this value as a variable. And that you have some syntax errors in the wrapping part.
To start with, I suggest you hardcode the values in your module definition.
module "eks_cluster" {
source = "cloudposse/eks-cluster/aws"
# ...
map_additional_iam_roles = [
{
rolearn = "x"
username = "y"
groups = []
}
]
}
2021-06-19
2021-06-20
Hi all, I've raised the same issue before. i) What I am getting - I am creating prometheus by using a helm release of prometheus and this repo "<https://prometheus-community.github.io/helm-charts>", so I am getting service names like: 1) prometheus-kube-prometheus-operator 2) prometheus-kube-state-metrics 3) prometheus-grafana, etc. ii) What I am expecting - 1) prometheus-operator 2) state-metrics 3) grafana, etc. iii) Where I am stuck - As I am using this repo I am only giving a name in the helm release of prometheus, name="prometheus", so I think the suffix is coming from the repo. This time I was able to rename the prometheus operator using the fullnameOverride set, but not the other services like node exporter, etc.
Prometheus community Helm charts
2021-06-21
Anyone thought of a way to achieve this without using null_data_source, which is deprecated? https://github.com/cloudposse/terraform-aws-eks-cluster/blob/c25940a8172fac9f37bc2a74c99acf4c21ef12b0/examples/complete/main.tf#L89
I tried moving it to locals but kept seeing the aws-auth configmap error.
it was just moved to a dedicated "null" provider resource, so I guess you just have to load the provider to get it working. https://registry.terraform.io/providers/hashicorp/null/latest/docs https://registry.terraform.io/providers/hashicorp/null/latest/docs/data-sources/data_source
Am I right in saying that in theory, we should be able to move this into locals? Since all we’re doing here is waiting until the cluster & config map is up before we deploy the node group?
Where do you see that it's deprecated? That isn't mentioned in the docs. However the documentation says that locals can achieve the same effect. https://registry.terraform.io/providers/hashicorp/null/latest/docs/data-sources/data_source
In certain versions of terraform a warning is displayed saying it’s deprecated when using it
Provides constructs that intentionally do nothing, useful in various situations to help orchestrate tricky behavior or work around limitations. This provider is maintained by the HashiCorp Terrafor…
Huh, weird that it doesn’t show in the docs
Somewhere I believe I read that there would be no more development in ways of enhancements or feature requests too
Yes, TF is complaining that it’s deprecated. I’ve tried using locals to get the cluster name, but it does not have the same effect. Which is strange.
If I add a local cluster_name to pull from the eks_cluster module, and then set the relevant field in the worker nodes module.. I get the config map error
what is “the config map error”?
I wonder if there's a difference with where a local would get its value from and where a data source would? Presumably a data source gets its inputs from the API, which means the resource has to be up, and a local gets its value from the module… so in theory a local variable could be populated before the cluster is fully up?
can you paste what you’re defining for the local
locals {
cluster_name = module.eks_cluster.cluster_id
}
Then using local.cluster_name in eks_node_group …
module "eks_node_group" {
...
cluster_name = local.cluster_name
...
}
Ah, thats kind of what I was suspecting. The null data source was using both the cluster name AND the config map attribute, so that everything was synchronized. So you’d need a local defined that uses both those values
I did add a local for the config map ID
they need to be joined in single local though
Oh, that will be my problem.
locals {
cluster_name = module.eks_cluster.cluster_id
kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
}
Not this then?
yah join those into 1 string basically
that will cause the local to be undefined until both those values are available
Not completely sure what you mean?
You mean join using join()?
locals {
wait_on_thing = "${module.eks_cluster.cluster_id}- ${module.eks_cluster.kubernetes_config_map_id}"
}
I see.
Got it, excellent, thank you. I’ll test that now
That’ll only help you part-way though
you’ll have to then somehow use that variable in the other resource, so that it is dependent on it
might mean you do some ugly string splitting to get the cluster-id (or whatever it is you need)
Ah yeah, that’s true…
Seems logical to use null_data_source really, since it gives what we want. But having the deprecated warning is annoying.
Something like this I guess? element(split("-", local.wait_on_thing), 0)
yah
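Pulling the thread together, a minimal sketch of the workaround under discussion (module names taken from the snippets above; the | delimiter is an assumption, chosen so the split doesn't collide with dashes inside the cluster id):
locals {
  # unknown until both module outputs exist, so anything referencing
  # cluster_name implicitly waits on the aws-auth configmap too
  cluster_gate = "${module.eks_cluster.cluster_id}|${module.eks_cluster.kubernetes_config_map_id}"
  cluster_name = element(split("|", local.cluster_gate), 0)
}
module "eks_node_group" {
  # ...
  cluster_name = local.cluster_name
}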
can anyone help me with the below please …
resource "aws_acm_certificate" "cert" {
count = var.enable_ingress_alb ? 1 : 0
domain_name = "alb.platform.${var.team_prefix}.${var.platform}.${var.root_domain}"
validation_method = "DNS"
tags = {
CreatedBy = "Terraform" }
lifecycle {
create_before_destroy = true
}
}
resource "aws_route53_record" "cert_validation" {
for_each = { for domain in aws_acm_certificate.cert.*.domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } }
name = each.value["name"]
type = "CNAME"
records = [each.value["record_value"]]
zone_id = data.terraform_remote_state.dns.outputs.zone
ttl = 60
}
when enable_ingress_alb = true I am getting the following error
Error: Unsupported attribute
on .terraform/modules/kubernetes/modules/kubernetes-bottlerocket/ingress-alb-certs.tf line 17, in resource "aws_route53_record" "cert_validation":
17: for_each = { for domain in aws_acm_certificate.cert.*.domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } }
This value does not have any attributes.
I think it has something to do with the * in aws_acm_certificate.cert.*.domain_validation_options
try:
for_each = var.enable_ingress_alb ? { for domain in aws_acm_certificate.cert[0].domain_validation_options : domain.domain_name => { name = domain.resource_record_name, record_value = domain.resource_record_value } } : {}
?
though note, if you start using subject alternative names, the validation record for root.zone and for *.root.zone are the same, which can lead to a race condition… here’s how we handled that: https://github.com/plus3it/terraform-aws-tardigrade-acm/blob/master/main.tf#L40-L46
Interesting thanks man I think the [0] doesn’t work so looking at options
Error: Unexpected resource instance key
on .terraform/modules/kubernetes/modules/kubernetes-bottlerocket/ingress-alb-certs.tf line 25, in resource "aws_acm_certificate_validation" "cert":
25: certificate_arn = aws_acm_certificate.cert.0.arn
Because aws_acm_certificate.cert does not have "count" or "for_each" set,
references to it must not include an index key. Remove the bracketed index to
refer to the single instance of this resource.
did you change something?
resource "aws_acm_certificate" "cert" {
count = var.enable_ingress_alb ? 1 : 0
count is right there
I removed it by accident
Schoolboy error
Hey guys. Quick question. Can we create a terraform module conditionally?
I’d love to know this as well. I believe people have asked for the functionality, but I’m not sure if it’s been added lately. CloudPosse modules allow you to pass variables in like “enabled” or “<resource>_enabled”.
A common pattern in many Terraform modules is to add some sort of “create” or “enabled” parameter as mentioned above, but in recent Terraform versions it’s possible to use for_each and count on modules: https://www.terraform.io/docs/language/meta-arguments/for_each.html
Terraform by HashiCorp
I found a solution. We can use count on modules for Terraform versions 0.13+. So I was able to do something like count = var.myflag ? 0 : 1 in the module definition.
https://www.terraform.io/docs/language/meta-arguments/count.html The very first section in the above link mentions this
Terraform by HashiCorp
also consider for_each to preserve index position when applying the same module multiple times
here is an example using count to trigger a module for a given environment variable
module "servers" {
  count    = var.env == "stage" ? 0 : 1
  source   = "[email protected]:v3/your_module?ref=v1.67"
  location = var.location
  env      = var.env
}
2021-06-22
hey guys, I just found this: https://github.com/iann0036/cfn-tf-custom-types custom types for CloudFormation. You can now also add Terraform resources; this is awesome to me
CloudFormation Custom Types for Terraform resources. - iann0036/cfn-tf-custom-types
does anyone have any useful resources for path based DENY rules on WAF v2?
i am reading the docs and getting myself totally confused
hi all
im having some weirdness with the key-pair/aws module
so i have this
module "key_pair" {
source = "cloudposse/key-pair/aws"
namespace = var.app_name
stage = var.environment
name = "key"
ssh_public_key_path = "./.secrets"
generate_ssh_key = "true"
private_key_extension = ".pem"
public_key_extension = ".pub"
}
I have the .pem files it generates on one machine, but i want to transfer this to another machine. I put the same key files in the same folder. But tf apply wants to create new private/public key pairs for some reason
# module.key_pair.local_file.private_key_pem[0] will be created
+ resource "local_file" "private_key_pem" {
+ directory_permission = "0777"
+ file_permission = "0600"
+ filename = "./.secrets/xx-development-key.pem"
+ id = (known after apply)
+ sensitive_content = (sensitive value)
}
# module.key_pair.local_file.public_key_openssh[0] will be created
+ resource "local_file" "public_key_openssh" {
+ content = <<-EOT
ssh-rsa xxx
EOT
+ directory_permission = "0777"
+ file_permission = "0777"
+ filename = "./.secrets/xx-development-key.pub"
+ id = (known after apply)
}
Are you using remote state?
use this module instead: https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair
2021-06-23
is there a way to only perform a remote state lookup if a value is true?
put a count on the data resource?
or does it not support that?
oh it looks like it does
easy then
100% inserts embarrassed face
i guess this is what i get for working for a medical procedure this morning
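For reference, a minimal sketch of that conditional lookup (the backend details and variable are hypothetical):
data "terraform_remote_state" "dns" {
  count   = var.lookup_dns ? 1 : 0
  backend = "s3"
  config = {
    bucket = "example-state-bucket"
    key    = "dns/terraform.tfstate"
    region = "us-east-1"
  }
}
# downstream references then need the index:
# zone_id = data.terraform_remote_state.dns[0].outputs.zone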
I think I'm stuck on a past decision to include the stage option inside my backend declaration for my application x.
# Backend ----------------------------------------------------------------------
module "terraform_state_backend" {
source = "cloudposse/tfstate-backend/aws"
version = "~> 0.32"
namespace = var.application
stage = terraform.workspace
name = "terraform"
profile = var.aws_credentials_profile_name
terraform_backend_config_file_path = "."
terraform_backend_config_file_name = "backend.tf"
force_destroy = false
}
Since my workspace was dev at the time, my remote backend bucket, as I now realize, has unfortunately been called x-*dev*-terraform instead of what I think should have been, from the beginning, just x-terraform.
Now, when I add my new terraform workspace called prod and do a simple terraform plan, I see that it would create an additional state bucket, dynamodb table etc. All of that backend-y stuff shouldn't really be added, since my unfortunate x-dev-terraform state bucket already has subfolders for each of my workspaces, right?
So now, I’m stuck. There is a prod/ folder inside my state bucket, but the state is empty, so it wants to create everything including the backend (which I guess should not be added). If i edit this module declaration from the top, and remove the stage line, it cannot just edit resources but must replace them, which I think would break in half as it tries to keep state but also replace state bucket. How do i escape this?
In short, my idea is to recreate everything from dev back on prod, in the same state bucket, by using workspaces.
@Nikola Milic the tfstate-backend module is typically (and as recommended) invoked by itself as a single root module, in isolation, once for all of your other root modules. You can store all state files in a single bucket and utilize workspace_key_prefix in your backend configuration to properly separate root modules. Your workspaces will then create the various folders for each environment that you create.
My suggestion is to create a new root module to invoke tfstate-backend, transition your state for your root module to that new backend, and then remove + destroy the tfstate-backend module usage in your environment root module.
How do you isolate a root module?
Root modules are terraform projects. They store state.
Child modules are modules you consume in root modules.
So what I’m saying is you’ll create a separate terraform project alongside your existing one called bootstrap
or whatever you want to call it that then creates your state bucket in isolation from the rest of your terraform resources.
I see. Let's say that I create that brand new folder for creating the new state bucket (which will be correctly called x-terraform). If I follow the bootstrapping guide on the repo, there will be a backend.tf file as a result of that process. Should that file replace the current backend.tf in this original folder?
Yeah.
Then do a tf init
and it’ll ask you to transition the state to the newly configured bucket.
At what point do i delete the module declaration from my original main.tf?
When doing this method, be sure to look into the workspace_key_prefix argument for the s3 backend configuration. You'll need that to manage multiple root modules in the same bucket.
After you transition state to the new bucket. Once you do that, your legacy state bucket is no longer necessary.
You can keep it around for historical purposes if you want by just removing it from state, but everything should get dup’d over to the new bucket so that’s up to you.
workspace_key_prefix argument does not exist on the cloudposse module? https://github.com/cloudposse/terraform-aws-tfstate-backend
No, you'll need to supply that yourself in each root module's backend.tf file.
The other way is to create a tfstate-backend for each root module that you have, but we’re moving away from that as it’s unnecessary.
Hm I’m kind of confused by the terminology, let me see if we are on the same page. When you say “each root module that you have” what do you exactly mean?
If i understand you correctly i should have this:
/infra
  bootstrap/
    main.tf       <- backend config
  main/
    dev/
      dev.tfvars
    prod/
      prod.tfvars
    backend.tf    <- copied from bootstrap
    main.tf       <- declaration of app resources
what are those root modules in this scenario
main/ and bootstrap/ are the root modules.
so there should be two backend.tf files in those two folders, each of them having an additional workspace_key_prefix value, same as the name of the folder
In smaller projects, having one root module can work. But in larger environments where you’re managing 1000s of resources it quickly becomes a huge headache so the community best practice is to separate root modules for areas of concern to decrease blast radius (think of it as having a root module for your various tiers of infra: Network, Data, Application Cluster, Monitoring, etc.)
Cloud Posse themselves goes with very fine grained root modules where they create one for each type of AWS service (check out terraform-aws-components for that), but that isn’t necessary for all projects.
Opinionated, self-contained Terraform root modules that each solve one, specific problem - cloudposse/terraform-aws-components
so there should be two backend.tf files in those two folders, each of them having an additional workspace_key_prefix value, same as the name of the folder
Yeah
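A minimal sketch of one such backend.tf, assuming the x-terraform bucket from this thread (the lock table name is hypothetical):
terraform {
  backend "s3" {
    bucket               = "x-terraform"
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "x-terraform-lock"
    encrypt              = true
    # one prefix per root module, e.g. "main" or "bootstrap"
    workspace_key_prefix = "main"
  }
}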
Gotcha. Thanks for the really well explained solution.
I’ll try and make it work, if I get stuck, expect me here for more answers
Np. This is confusing stuff for anybody newer to Terraform. Hashi doesn’t do a good job in pushing best practices as well as they should.
Btw, one more quick question, do i need to worry about workspaces in my bootstrap project? I guess not?
@Matt Gowie All went smoothly, thanks again!
Related to the prior question on backend declarations. I want a dynamic backend creation in s3/dynamo, like how terragrunt does the project initialization. However, I want to keep things as native terraform as possible.
I know Go. Should I just look at some code for initializing the backend and write it myself, or is there some Go tool I'm missing out there that creates the remote backend, dynamic s3 creation, and policies? Something like tfbackend init, so I can use native terraform for the remaining tasks without more frameworks? (I looked at Terraspace, promising, but, like Terragrunt, another abstraction layer to troubleshoot)
Ideally I’d use the cloudposse backend config module, except I’m not ok with having to run that first to init then generate the tf file. I’m half tempted to just flip back to using terraform cloud for remote runs and be done with it.
I could use Go, but more code to write and tear down, which feels like a stubborn refusal to then benefit from pulumi/terragrunt at that point
Terragrunt output is so messy it's hard to debug at times, so I'm considering backing out some of the terragrunt for native terraform stacks. I have tf var files already and a wrapper for handling this… but I don't have backend s3 creation handled.
Even if I’m using PR workflow with Azure pipelines, it might just make sense to just leverage terraform cloud and be done with it i guess.
The pattern nowadays with the tfstate backend module is to just create it once and use the one bucket for all root modules using the workspace_key_prefix. I and others are digging it as you only need to create the backend once and then it's a fairly untouched root module going forward.
Does that not work for you?
Ok, so one "stack" = backend creation, and just use that going forward. I thought it would cause locks due to the single dynamo table provisioned, but I'm guessing the dynamo part is per backend state path instead, so I'm not locked into a single non-parallel run for all stacks using the bucket
“stack” in the SweetOps sense or stack in some other sense? Damn terminology is overused so I have to check
But in general, one tfstate backend creation period.
Dynamo locking still works the way you would want it to as long as you're utilizing workspace_key_prefix.
Hmm. One directory containing this action and then that's it for that aws account. Everything else is purely path changes. Back to using backend config file/vars to set this at runtime.
thanks for confirming the key prefix is used with dynamo. Thought I’d be locking everything up with a single run at a time. Wasn’t aware the key prefix was the way it was locked. thanks!
Has anyone in here used this pattern before? What speaks against using tfvars instead of yaml here?
https://github.com/concourse/governance/blob/master/github.tf https://github.com/concourse/governance/blob/master/locals.tf https://github.com/concourse/governance/tree/master/contributors
Documentation and automation for the Concourse project governance model. - concourse/governance
Documentation and automation for the Concourse project governance model. - concourse/governance
Documentation and automation for the Concourse project governance model. - concourse/governance
just curious what others think, any 2c welcome
Documentation and automation for the Concourse project governance model. - concourse/governance
Documentation and automation for the Concourse project governance model. - concourse/governance
Documentation and automation for the Concourse project governance model. - concourse/governance
Confused on what pattern you’re asking exactly? Concourse overall or some tf pattern within the files you shared?
check out the inputs. it’s all yaml driven.
not a common pattern.
at least haven’t seen this in the wild before.
locals {
contributors = {
for f in fileset(path.module, "contributors/*.yml") :
trimsuffix(basename(f), ".yml") => yamldecode(file(f))
}
teams = {
for f in fileset(path.module, "teams/*.yml") :
trimsuffix(basename(f), ".yml") => yamldecode(file(f))
}
...
resource "github_membership" "contributors" {
for_each = local.contributors
username = each.value.github
role = "member"
}
resource "github_team" "teams" {
for_each = local.teams
name = each.value.name
description = trimspace(join(" ", split("\n", each.value.purpose)))
privacy = "closed"
create_default_maintainer = false
}
...
It’s not uncommon I’d think. Actually moved towards using yaml for a bit and merging but there’s some gotchas and it adds an additional abstraction that can be problematic at times to troubleshoot. I ended up trying to stick mostly with TFVars when possible.
Ah using YAML as a datasource is what you mean. Yeah this is becoming a more common approach IMO. I use it a lot for the same purpose: team.yaml, repos.yaml, accounts.yaml, etc.
(ya, tfvars cannot be loaded selectively at run time, which makes yaml better plus it’s portable)
We have a module for this pattern https://github.com/cloudposse/terraform-yaml-config
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
and we employ it all over the place:
• https://github.com/cloudposse/terraform-datadog-monitor
• https://github.com/cloudposse/terraform-aws-config
• https://github.com/cloudposse/terraform-opsgenie-incident-management
Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor
This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config
Terraform module to provision Opsgenie resources from YAML configurations using the Opsgenie provider,, complete with automated tests - cloudposse/terraform-opsgenie-incident-management
thanks for the pointers there folks.
I ended up trying to stick mostly with TFVars when possible.
i feel like this should still be the default, so as not to throw away the "typing" checks and the potential vars validation. if granular loading at runtime is not needed, i guess this is not a pattern to adopt then. very cool though for when needed.
what we do is use native tf variables in our open source modules, but leverage YAML for the configuration in our components (aka root modules)
that way we get both.
our modules validate the types, while our configuration is separate from the code.
can anyone recommend a WAF module with kinesis firehose setup?
the wafv2 resources in Terraform are quite poor. We actually stopped using them because they were so slow
interesting
do you still not stream it to firehose?
we had firehose streaming for a while, but dropped it. We use access logs from cloudfront level now instead
we don’t use cloudwatch unfortunately
We actually stopped using them because they were so slow
Slow to plan / apply?
yes
cloudfront access logs write to s3 fwiw
We are implementing the terraform-aws-firewall-manager with WAFv2. We stopped short of kinesis only due to time, but will probably add it.
Terraform module to configure AWS Firewall Manager - cloudposse/terraform-aws-firewall-manager
adds firehose
cc: @Ben Smith (Cloud Posse)
2021-06-24
https://twitter.com/mazen160/status/1408041406195699715 - if anyone’s interested!
I am excited to be speaking in Bsides Amman about my ongoing research on cloud security, starting with: Attack Vectors on Terraform Environments!
Save the date: July 3rd More details to come! https://pbs.twimg.com/media/E4peMkzX0AIWWPP.jpg
Hello everyone. I have a question. I want to use this module to create my organisation, workspaces and variables required by those workspaces https://registry.terraform.io/modules/cloudposse/cloud-infrastructure-automation/tfe/latest Below are the points where I am confused.
- Do I need to setup a separate repository (or even the same repository with different path) and place all the TF cloud related infra setup code there.
- Create a workspace for that repository in Terraform Cloud
- Create this specific workspace and variables related to it manually in Terraform Cloud. That's what I can think of. Is there any other way? I want to know how the community is using this module.
Thanks.
v1.0.1 1.0.1 (June 24, 2021) ENHANCEMENTS: json-output: The JSON plan output now indicates which state values are sensitive. (#28889) cli: The darwin builds can now make use of the host DNS resolver, which will fix many network related issues on MacOS. BUG FIXES: backend/remote: Fix faulty Terraform Cloud version check when migrating…
A sensitive_values field has been added to the resource in state and planned values which is a map of all sensitive attributes with the values set to true. To achieve this, I stole and exported the…
https://github.com/hashicorp/envconsul very nice tool to pass env variables generated on the fly from Consul (configuration) or Vault (secrets)
Launch a subprocess with environment variables using data from @HashiCorp Consul and Vault. - hashicorp/envconsul
As companies deliver code ever faster, they need tooling to provide some semblance of control and governance over the cloud resources being used to deliver it. Env0, a startup that is helping companies do just that, announced a $17 million Series A today. M12, Microsoft’s Venture Fund, led the roun…
Cc: @ohad congrats!
Thanks @Erik Osterman (Cloud Posse)
Thanks a lot @Erik Osterman (Cloud Posse)!!
hi guys long time no talk can someone please merge https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/45
what Instead of requiring the user to define arn_format for gov or china regions, lookup the partition in this module why Makes using this module easier Fixes #44
Released as 0.19.0
— Thanks @Ryan Ryke!
what Instead of requiring the user to define arn_format for gov or china regions, lookup the partition in this module why Makes using this module easier Fixes #44
Perform aws partition lookup for arn @bwmetcalf (#45) what Instead of requiring the user to define arn_format for gov or china regions, lookup the partition in this module why Makes using this m…
trying to use it in gov cloud
cc @Erik Osterman (Cloud Posse)
i lied… my problem is a flow logs issue in gov cloud…
What is the practice followed to grant Terraform IAM access to multiple AWS Accounts - In the past I have just created one IAM user in the SharedServices which can assume a “Terraform Deploy IAM Role with Admin Policy” in all other accounts where I wish to create resources with terraform and I would just use the IAM Access Keys in the CICD Configuration securely.
i usually set assume role in the provider section
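i.e. something along these lines (the account id and role name are hypothetical):
provider "aws" {
  alias  = "target"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/TerraformDeploy"
  }
}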
hi @Matt Gowie i have a pr for you https://github.com/cloudposse/terraform-aws-vpc-flow-logs-s3-bucket/pull/27
currently getting: Error: MalformedPolicyDocumentException: Policy contains a statement with one or more invalid principals. on .terraform/modules/flow_logs.kms_key/main.tf line 1, in resource &quo…
i have an issue in gov cloud with the kms key being cranky at me
looks like the bucket policy was updated to add the arn:aws-gov-cloud option but the kms key does not
cc @Erik Osterman (Cloud Posse) again (sorry)
@Andriy Knysh (Cloud Posse)
@Ryan Ryke thanks for the PR, it looks good, @Matt Gowie and myself made some comments
updated, i added iam in (missed that, thanks); not sure what you meant about changing the format
@Ryan Ryke shipped as 0.12.1
Enhancements add arn format to the kms policy @rryke (#27) currently getting: Error: MalformedPolicyDocumentException: Policy contains a statement with one or more invalid principals. on .terr…
thanks a bunch
thanks @Ryan Ryke
by changing the format I meant you can provide your own value in the variable
since the code now uses the var, it will work
Anybody know where I can find info about module.this? https://github.com/cloudposse/terraform-aws-vpc/blob/master/main.tf#L13 I haven't been able to find anything in the Terraform docs, and have only seen this in CloudPosse modules so far.
it’s a cloudposse convention to use https://github.com/cloudposse/terraform-null-label as module "this"
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
essentially, it’s a module with no resources that exports a consistent name you can use for other resources
I guess that's what I'm confused about. I expected to find something like module "this" in the .tf files, but I searched through all the code and can't find that. I see module "label" and all kinds of other references, but not the reference to this. Sorry if I'm a bit slow on the uptake.
there is, in context.tf
we include this file in all our modules https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/context.tf
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
and then you have module.this, which is also, by convention, the same file & filename. You can see the original in the null-label repo
haha… wow… it’s right there. I guess GitHub search failed me. Sorry about that.
Thanks for taking the time!
this way, we don’t have to specify all the common vars in each module, the file just provides all the vars we use in ALL modules, so we have consistent naming convention for the common vars
Yeah makes perfect sense actually.
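For anyone following along, a minimal sketch of how the convention gets consumed (the inputs and the SNS topic are illustrative):
module "this" {
  source = "cloudposse/label/null"
  # pin a version in real usage

  namespace = "eg"
  stage     = "dev"
  name      = "app"
}
resource "aws_sns_topic" "example" {
  # both derived from the label module, e.g. "eg-dev-app"
  name = module.this.id
  tags = module.this.tags
}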
2021-06-25
Folks, I was wondering how you deal with aws_ecs_task_definition and continuous delivery to ECS. Do you keep the task-definition.yml within the application repository or do you manage it with Terraform?
I’m stuck with being able to build my app and deploy the latest release tag to ECS within the pipeline however, I have environment variables and configs which are dependent on other terraform resources outputs.
We use Terraform to deploy our ECS services as part of our CI/CD pipeline. To be honest, Terraform isn't exactly designed for the deployment of apps (Waypoint may be better for this). That said, we update the ECS task definition (in HCL) with the new service tag number, which is passed in as a variable during runtime. This then updates the service and a new one is rolled out.
A downside of this approach is that it will delete the previous task definition, which means a roll back would require a redeployment of the previous version.
@Bruce Thanks for sharing your approach! Indeed, that’s definitely a downside. It would be nice to keep the task definition revisions.
Seems like some good discussion about just this has been happening https://github.com/hashicorp/terraform-provider-aws/issues/258#issuecomment-655764975
This issue was originally opened by @dimahavrylevych as hashicorp/terraform#8740. It was migrated here as part of the provider split. The original body of the issue is below. Hello community, I fac…
Hello, everyone. Hope all’s well. Before I submit an issue on Github, I wanted to make sure I wasn’t doing something “dumb”.
I am attempting to use the rds proxy module located at “cloudposse/rds-db-proxy/aws”. I’ve filled in most of the values, and I want the module to create an IAM role for accessing the RDS authentication Secret (rather than providing my own). I’m getting the following errors when I try a “terraform plan”:
Error: expected length of name to be in the range (1 - 128), got
on .terraform/modules/catalog_aurora_proxy/iam.tf line 78, in resource "aws_iam_policy" "this":
78: name = module.role_label.id
Error: expected length of name to be in the range (1 - 64), got
on .terraform/modules/catalog_aurora_proxy/iam.tf line 84, in resource "aws_iam_role" "this":
84: name = module.role_label.id
I have tried:
• terraform init
• terraform get
• terraform get inside the module (the “cloudposse/label/null” module didn’t appear to download automatically)
I’m using version 0.2.0 of the module, though I believe there weren’t any changes since 0.1.0?
I’m currently on terraform 0.14.11.
Ah!! I figured it out
it was because the “name” parameter for the RDS proxy module wasn’t set yet
I narrowed it down by setting the iam_role_attributes field to insert a letter, which got past the iam role creation issue and then gave me an error about the length of the name in the "module.this.id"
after I commented out the iam_role_attributes parameter line and set the name for the module, everything was fine
has anyone in here integrated https://driftctl.com into their workflows somehow? just curious.
driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.
I’ve just started last week, but still experimenting with it. I love what it does so far, but I’m struggling a bit to get the multi-region issues worked out.
driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.
How are teams handling terraform destroy of managed s3 buckets that have 500K+ objects? We have sometimes resorted to emptying the bucket via the management portal. We have been looking at using the on-destroy provisioning step, but passing the correct creds down into the script is problematic in our case.
add force_destroy = true to the config, run the apply, then destroy: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket#force_destroy ?
We have force_destroy set to true but it still takes a long long long long long time to complete.
oh yeah, it is sloooooow
i guess you could terraform state rm <bucket> and then destroy? and handle the bucket removal out-of-band?
not ideal, but it is a work around
aws knowledge center says to use a lifecycle policy to expire/delete all objects and versions after one day
Yea for really large versioned buckets I end up running a boto script to empty the bucket before running the destroy, this is a lot quicker but certainly not ideal and have yet to find a better way
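A minimal sketch of that lifecycle approach using the inline aws provider 3.x syntax (the bucket name is illustrative):
resource "aws_s3_bucket" "big" {
  bucket        = "example-big-bucket"
  force_destroy = true

  lifecycle_rule {
    id      = "expire-everything"
    enabled = true

    expiration {
      days = 1
    }
    noncurrent_version_expiration {
      days = 1
    }
    abort_incomplete_multipart_upload_days = 1
  }
}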
Not sure if it'll help, but Terraform has this feature: https://registry.terraform.io/providers/hashicorp/time/latest/docs/resources/sleep
But if the S3 command itself times out, not sure that’ll help.
Anyone play with this tool yet? Claiming to be an open source alternative to Sentinel… https://github.com/terraform-compliance/cli
a lightweight, security focused, BDD test framework against terraform. - terraform-compliance/cli
that seems like it provides human readable definitions of rules, which Sentinel doesn’t provide, right?
a lightweight, security focused, BDD test framework against terraform. - terraform-compliance/cli
we’ve decided to go with OPA for writing rules directly against Terraform. It’s not the easiest language to work with. But I don’t have a lot of confidence there’s a clear winner in this space yet
Me either, not investing time in anything just yet
Probably worth looking at Checkov too. Yeah OPA + Conftest is a decent shout.
2021-06-26
2021-06-27
Hi guys did anyone encounter this and can assist ? https://github.com/cloudposse/terraform-aws-eks-cluster/issues/117 thanks!
On an existing EKS cluster that was created with this module, I'm am unable to add cluster_log_types to the cluster. module "eks_cluster" { source = "cloudposse/eks-cluster/aws&q…
2021-06-28
Anyone encountered this issue
│ Error: Error creating ElastiCache Replication Group (cosmos-test-elasticache): InvalidParameterValue: When specifying preferred availability zones, the number of cache clusters must be specified and must match the number of preferred availability zones.
│ status code: 400, request id: a29ff76d-dad3-4775-b1cf-6b265a37dbe4
│
│ with module.redis["cluster-2"].aws_elasticache_replication_group.redis_cluster,
│ on ../redis/main.tf line 3, in resource "aws_elasticache_replication_group" "redis_cluster":
│ 3: resource "aws_elasticache_replication_group" "redis_cluster" {
│
╵
this is my main.tf file
data "aws_availability_zones" "available" {}
resource "aws_elasticache_replication_group" "redis_cluster" {
automatic_failover_enabled = true
availability_zones = data.aws_availability_zones.available.names
replication_group_id = "${var.name}-elasticache"
replication_group_description = "redis replication group"
node_type = var.node_type
number_cache_clusters = 2
parameter_group_name = "default.redis6.x"
port = 6379
subnet_group_name = aws_elasticache_subnet_group.redis_subnets.name
}
resource "aws_elasticache_subnet_group" "redis_subnets" {
name = "tf-test-cache-subnet"
subnet_ids = var.redis_subnets
}
Well… What region are you using? How many AZ names are returned by the data resource? Is that the same number as the number off cache clusters?
ap-south-1
multi-az-disabled
should i enable it ?
I am facing the same issue, and this says adding element() works, but it does not:
what Allows not defining availability_zones Can create more nodes than you have defined AZs (once AWS provider is fixed hashicorp/terraform-provider-aws#14070 (comment)) why availability_zones p…
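If the AZ count mismatch is the cause, a minimal sketch of one fix: slice the AZ list down to match the cluster count.
resource "aws_elasticache_replication_group" "redis_cluster" {
  # the number of preferred AZs must equal number_cache_clusters
  availability_zones    = slice(data.aws_availability_zones.available.names, 0, 2)
  number_cache_clusters = 2
  # ... rest as in the snippet above
}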
Hello. I'm wondering about this ECS module. How exactly can I work EFS into the container definition? https://registry.terraform.io/modules/cloudposse/ecs-container-definition/aws/latest
Because as it seems, I can only really use a mount_points argument; once I involve volume it naturally doesn't work, since this seems not supported. Am I missing something?
Okay, so found that it should be adjusted in the task definition. But there I seem to run in to other issues..
Error: Invalid value for module argument
on main.tf line 90, in module "thehive_service_task":
90: volumes = [
91: {
92: host_path = "/",
93: name = "efs-ecs"
94: efs_volume_configuration = [
95: {
96: file_system_id = "fs-XXXXX"
97: root_directory = "/"
98: transit_encryption = "ENABLED"
99: transit_encryption_port = null
100: authorization_config = []
101: }
102: ]
103: # docker_volume_configuration = null
104: },
105: ]
The given value is not suitable for child module variable "volumes" defined at
.terraform/modules/thehive_service_task/variables.tf:205,1-19: element 0:
attribute “docker_volume_configuration” is required.
So I need to enable that of the docker_volume_configuration - and that gets me into this issue:
Plan: 1 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.test_service_task.aws_ecs_task_definition.default[0]: Creating...
Error: ClientException: When the volume parameter is specified, only one volume configuration type should be used.
on .terraform/modules/test_service_task/main.tf line 36, in resource "aws_ecs_task_definition" "default":
36: resource "aws_ecs_task_definition" "default" {
Sooo.. kind of deadlocked there right now as I cannot null the for_each neither.
Okay.. So this apparently works.. since it's still implicitly applying source-dest mapping even though you apply EFS config.. It's weird, and not the nicest way, or am I mistaken? Empty vars needing to be set even though the code already implied them to be empty? Feels nasty and cluttered at least.
module "test_service_task" {
source = "cloudposse/ecs-alb-service-task/aws"
version = "0.57.0"
...
volumes = [
{
host_path = null
name = "efs-ecs"
efs_volume_configuration = [
{
file_system_id = "fs-XXXX"
root_directory = "/"
transit_encryption = "ENABLED"
transit_encryption_port = null
authorization_config = []
}
]
docker_volume_configuration = []
...
Anyone know if there is an open issue / discussion within the TF community around terraform.lock.hcl files not supporting multiple operating systems? Or what to do about that hangup? I'm starting to think about checking in lock files… but if they don't work cross-platform then I'm unsure how folks make em work for their whole team.
Will post to #office-hours if no real good answers.
terraform providers lock -platform=windows_amd64 -platform=darwin_amd64 -platform=linux_amd64
will generate a lock.hcl for all platforms
I had the issue too while using TFE; it sounds like it was because I was using the plugin_cache_dir configuration, which was somehow forcing the platform of my providers to my local terraform and not the remote one.
I resolved my issue by using a .terraformignore file specifying to exclude .terraform.lock.hcl* for remote execution.
I’m not using it for now, I’m adding it to gitignore
lot of issues when using CI
until it mature I guess.
Huh interesting. I will try out the providers lock CMD. Maybe that’ll help…
My issue also could be plugin_cache_dir as I of course use that as well.
You also need to lock Darwin arm64 now
@Matt Gowie this is the closest I know https://github.com/hashicorp/terraform/issues/27769
Plugin caching is unusable since it fails when verifies checksum in dependency lock file. Is there way to disable this locking feature? Tbh I can see that caching feature is more needed than this s…
Good find. That sheds some more light.
upvote plz https://github.com/hashicorp/terraform/pull/28700
that will eventually add the ability to templatize strings instead of being stuck creating a file just to feed it into templatefile
This is a design prototype of one possible way to meet the need of defining a template in a different location from where it will ultimately be evaluated, as discussed in #26838. For example, a gen…
I revisited using native terraform with terraform cloud instead of terragrunt and was annoyingly reminded of the limitations when I tried to pass my env variables file with -var-file and it complained :laughing:
I think that’s probably my biggest annoyance right now.
If I could rely on env.auto.tfvars working I'd do that.
Otherwise I’d have to use a CLI/Go SDK to set all the variables in the workspace in Terraform Cloud itself.
Otherwise I feel I’m back to terragrunt if I don’t want to use my own wrappers to pass environment based configurations. I do like the yaml config module, but it’s too abstracted right for for easy debugging so I’m sticking with environment files.
I have been down this road last month and sticking with native TF + env files in a BB Cloud Pipeline.
what doesn't work about -var-file or auto.tfvars?
my exact annoying problem; you can solve this by using an environment variable:
TF_CLI_ARGS_plan=-var-file=../../env/prod/us-east-1/prod.tfvars
using this env var instead of a flag will allow running in both TFC and TF CLI workflows
@sheldonh you’re overdue for an update on atmos and what we are doing with stack configs.
Sneaky! I’ll check it out soon then. I did explore stack configs a few months back and it wasn’t the right fit for me at the time.
I will say while I get the general appeal of Variant2, i’ve struggled to find use for it rather than just writing Go/PowerShell as it’s another DSL and very verbose. For you working across many tools it probably makes sense, but my last go at it didn’t provide the right fit. Always willing to recheck this stuff out though!
Oh, and one thing that I found I really missed with vanilla terraform was the dependency being pulled from local outputs. For instance I use the Cloudposse label module as an input for context for every other item in terragrunt, but with native terraform I found it more complicated to use the remote state in s3 that is also dynamically set in backend config. Felt like I was adding more complexity… though that's just my reaction as I tried to convert back 2-3 simple modules.
Be nice if terraform had a bit more opinionated workflow options built in to simplify inherited env, such as terraform -env staging that would automatically load env/staging.tfvars…. I could wish right!
@sheldonh When I first started into terraform I created something along those lines by bash scripting around it locally by keying off of the selected workspace name: https://github.com/Gowiem/DotFiles/blob/master/terraform/functions.zsh#L1-L30
Problem of course is that it doesn’t scale. You can write scripts / make targets at the root of your terraform repo that do the same and then push your team to always use those scripts / targets… but yeah not great.
Gowiem DotFiles Repo. Contribute to Gowiem/DotFiles development by creating an account on GitHub.
I like Mohammed’s solution though — That’s a good one
I wrote a go wrapper for a project that does goyek -env dev -stack 'vpc,security-groups' -runall and loops through the directories to avoid using run-all at a parent level. Of course, since I also built the folder structure based on the cloudposse docs, goyek -env staging -stack '02-foundation' -runall does a run-all in each directory.
I’m not saying this is perfect, but with streams of stdout/stderr it’s pretty reliable.
I like the CLI args concept; I’ll have to think on it to figure out if it does what I want, as feeding outputs from one small piece into another is a strength of terragrunt. Great ideas, and thank you for sharing it all!
2021-06-29
anyone encounter this ? https://github.com/cloudposse/terraform-aws-eks-cluster/issues/117
On an existing EKS cluster that was created with this module, I'm unable to add cluster_log_types to the cluster. module "eks_cluster" { source = "cloudposse/eks-cluster/aws&q…
Hi everyone, could you give me a hint on how to pass a JSON object as an input variable to a module? E.g. the module contains heredoc (<<POLICY … POLICY or <<ITEM … ITEM) syntax. Can I pass JSON into a variable and then use jsonencode in the module? If yes, how do you pass JSON as an input? Perhaps as a string?
if the JSON has a known static structure, you can pass it as native Terraform object. If the JSON is freeform, you can pass it as a string
Hi Alex, yes, I am trying to figure out how to pass a dynamodb item as a variable..
The docs only show an example with an inline <<ITEM >>
variable "user_provided_json" {
type = object({
name = string
age = number
alive = bool
friends = list(string)
})
}
or
variable "user_provided_json_string" {
type = string
}
locals {
user_provided_json = jsondecode(var.user_provided_json_string)
}
if I pass it as a string, do I need to escape things? I’m guessing so…
not sure what you mean exactly but maybe
thank you Alex
One short question still: if I pass it as an object, how do I pass it to the item or policy property?
Do I just say policy = var.user_provided_json ?
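For what it’s worth, the usual pattern looks like this (a sketch; the resource is hypothetical and this assumes the object variable holds a valid policy document): arguments like policy and item expect JSON strings, so the object gets wrapped in jsonencode.
resource "aws_iam_policy" "example" {
  name   = "example-policy"                   # hypothetical name
  policy = jsonencode(var.user_provided_json) # object -> JSON string
}
For a DynamoDB item the same jsonencode call applies, but the object must already follow DynamoDB’s attribute-value shape, e.g. { name = { S = "foo" } }.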
Hi folks, I need your help in understanding the folder structure I need to have for different environments (dev, stage, prod) when building an infra very similar to https://github.com/cloudposse/terraform-aws-ecs-web-app
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
It’s up to you how you want to do this layout. If you follow the Cloud Posse approach, check this https://docs.cloudposse.com/tutorials/first-aws-environment/
if not, you can create workspaces using https://www.terraform.io/docs/language/state/workspaces.html
Workspaces allow the use of multiple states with a single configuration directory.
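For example (a minimal sketch; the per-env var-file path is hypothetical):
terraform workspace new staging      # create a separate state for staging
terraform workspace select staging   # switch to it
terraform apply -var-file=env/staging.tfvars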
Thank you for the link @Mohammed Yahya. Actually it doesn’t matter which approach; I’m just trying to understand how to structure things so I can deploy to 3 different environments with slight differences, choosing the best approach for my case. I currently have the following folder structure. I’m unsure how to manage tfvars files for each environment with the Cloud Posse modules structure.
for the modules folder:
@Mohammed Yahya yes?
I thought of it like this. But I’m not sure whether this works with how Cloud Posse has structured the modules etc.
The .tf files found in root are the ECS https://github.com/cloudposse/terraform-aws-ecs-web-app
I see, this is my approach for layout https://github.com/mhmdio/terraform-templates-base
Terraform Templates Base - monoRepo. Contribute to mhmdio/terraform-templates-base development by creating an account on GitHub.
I think there are tons of ways to do the layout; just test and see what matches your use case.
Thanks for sharing, the template looks very clean. You just earned yourself a follower and a ^^
Hey, anyone ever had to do a TF-based interview test… looking to pull one together and thought this would be a good place for some ideas
I think it can be easy to set the bar too high with a test of a specific tool. I tend to ask candidates if they know how to create a resource that will sometimes be deployed and sometimes not, e.g. count 0/1.
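For reference, that pattern looks like this (a minimal sketch; variable and bucket names are made up):
variable "create_bucket" {
  type    = bool
  default = false
}
resource "aws_s3_bucket" "this" {
  count  = var.create_bucket ? 1 : 0    # created only when the flag is true
  bucket = "example-conditional-bucket"
}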
Agreed. The ability to learn and show adaptability with infra as code matters more than a specific domain language.
Before I had to do it recently, I had never deployed a load balancer due to the nature of my work, and VPC/subnets were new to me due to limitations at my last company. Now I’ve done full stack deployment.
Eagerness to demonstrate some area of infra as code makes sense, but maybe let them pick what they excel at so they can shine?
If they have only used the console… that’s a different story, and will answer whether they are starting from scratch on infra as code.
I’d also suggest offering part of it as show and tell, and letting them show something they’re enthusiastic about.
Review this; it helped me pass the exam: https://medium.com/bb-tutorials-and-thoughts/250-practice-questions-for-terraform-associate-certification-7a3ccebe6a1a
Read and Practice these questions before your exam
Also understand this very well (automation for Terraform), since this is the topic you’ll most probably work with in real life, besides normal TF dev tasks: https://learn.hashicorp.com/collections/terraform/automation
Automate Terraform by running Terraform in Automation with CircleCI, or following guidelines for other CI/CD platforms.
Folks, anyone here got into Terraform module testing for infrastructure code? Source: https://www.terraform.io/docs/language/modules/testing-experiment.html#writing-tests-for-a-module
I took a minor swing at it with Terratest, as I’m working with Go now. I haven’t tried the new testing framework in the current experiment; I’m kinda waiting for this work to stabilize before I mess around with it.
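For anyone curious, a minimal Terratest sketch looks roughly like this (the module path and output name are hypothetical):
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
)

func TestExampleModule(t *testing.T) {
	opts := &terraform.Options{
		TerraformDir: "../examples/complete", // hypothetical example root
	}
	defer terraform.Destroy(t, opts) // clean up resources after the test
	terraform.InitAndApply(t, opts)

	// assert on a (hypothetical) module output
	if out := terraform.Output(t, opts, "vpc_id"); out == "" {
		t.Fatal("expected a non-empty vpc_id output")
	}
}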
Thanks Sheldon
2021-06-30
v1.1.0-alpha20210630 1.1.0 (Unreleased) NEW FEATURES: cli: terraform add generates resource configuration templates (#28874) config: a new type() function, only available in terraform console (#28501)
terraform add generates resource configuration templates which can be filled out and used to create resources. The template is output in stdout unless the -out flag is used. By default, only requir…
The type() function, which is only available for terraform console, prints out a string representation of the type of a given value. This is mainly intended for debugging - it's handy to be abl…
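For illustration, type() in the console looks roughly like this (output format approximated from the release notes above):
$ terraform console
> type("hello")
string
> type(["a", "b"])
tuple([string, string])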
Hmm, weird workflow to import and then add
I guess it is aimed at teams who are importing legacy stuff
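Based on the release notes above, usage looks roughly like this (a sketch; the alpha’s flags may change before the final release):
terraform add aws_instance.example                    # print a config skeleton to stdout
terraform add -out=generated.tf aws_instance.example  # write it to a file instead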