#terraform (2020-07)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-07-01
What’s a recommended way to manage the provider versions across modules?
Requirements:
• Ability to test modules independently
Please feel free to direct me to some reading if necessary.
These days, I believe it's the required_providers block, an attribute of the terraform block…
https://www.terraform.io/docs/configuration/terraform.html#specifying-required-provider-versions
The terraform configuration section is used to configure some behaviors of Terraform itself.
I’ll look into it, thank you!
I thought that pinning provider versions in modules was advised against?
Yes, there may be breaking changes.
This allows you to roll out updated provider versions per module.
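fwiw, in the 0.12-era syntax that looks roughly like this (the version range is just an illustration):
terraform {
  required_version = ">= 0.12"

  required_providers {
    # hypothetical range: allow minor updates, block the next major
    aws = ">= 2.50, < 4.0"
  }
}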
v0.13.0-beta3 (July 01, 2020) BUG FIXES: backend/azurerm: support for snapshotting the blob used for remote state storage prior to change (#24069) backend/remote: Prevent panic when there’s a connection error (#25341)…
Rebased PR with some small fixes. Originally by @pmarques (2018) and @rahdjoudj (2019). Fixes #18284 Closes #18512 Closes #21888
Ensure that the *http.Response is not nil before checking the status. This can happen when retrying transport errors multiple times.
2020-07-02
Is there a way to deploy lambdas to lambda@edge via tf? Or is the only way to do it via api calls?
im using tf to do this now
look at resource aws_cloudfront_distribution
and look at its lambda_function_association
lambda_function_association {
event_type = "origin-request"
include_body = false
lambda_arn = "${aws_lambda_function.request.arn}:${aws_lambda_function.request.version}"
}
Cool
Just need that and the cloudfront trigger?
you need the cloudfront distribution resource, you need the lambda resource, and then you can use that block with the cf distribution resource to tie it in
i slightly cheated tho. i created my cf distribution and lambda via the aws console, then i backported it into terraform. i went through a couple iterations of updating the lambda code in my repo and redeploying with terraform and it worked as expected
Yeah fair. I did the same with my cf distribution
I just discovered this though which is pretty cool
data "archive_file" "this" {
for_each = local.lambdas
type = "zip"
source_file = "${path.root}/lambdas/${each.key}_viewer_request.py"
output_path = "${path.root}/lambdas/${each.key}_viewer_request.zip"
}
yep that’s what im using to zip up lambda
data "archive_file" "request" {
type = "zip"
source_file = "${path.module}/lambda_request.js"
output_path = "${path.module}/lambda_request.zip"
}
resource "aws_lambda_function" "request" {
provider = aws.us-east-1
role = data.aws_iam_role.lambda_exec.arn
function_name = "CloudFrontRewriteToIndex"
handler = "lambda_request.handler"
runtime = "nodejs12.x"
publish = true
source_code_hash = data.archive_file.request.output_base64sha256
filename = "${path.module}/lambda_request.zip"
tags = local.tags
}
nice and easy
Is it possible to add the cloudfront trigger? None of the lambda resources seem right? Perhaps this isn’t needed if you’re telling the cloudfront dist about the lambda in the tf code?
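fwiw, the lambda_function_association inside the distribution appears to be the trigger itself - publishing a function version and referencing it there is all that's needed, no separate trigger resource. A minimal untested sketch (origin details hypothetical):
resource "aws_cloudfront_distribution" "this" {
  enabled = true

  origin {
    domain_name = "origin.example.com" # hypothetical origin
    origin_id   = "primary"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "primary"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    # this block is the "trigger": CloudFront invokes the published
    # lambda version on the chosen event type
    lambda_function_association {
      event_type = "origin-request"
      lambda_arn = "${aws_lambda_function.request.arn}:${aws_lambda_function.request.version}"
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}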
2020-07-03
Hi All, anyone facing issues with Terraform v0.12.24 running inside an EKS Pod? Somehow Terraform is assuming the EKS worker node’s role rather than the Pod’s ServiceAccount; the worker node’s role doesn’t have the admin policy, so it’s failing. Terraform v0.12.20 works fine with the same setup. Any leads?
OK, I have hit a little issue with AzureRM and Vault where the token issued from Vault is not being accepted by Azure because AD has not replicated. Everything I have read suggests using a bash script to insert an artificial delay of 120 seconds into the authentication process.
I have this script that I nicked
subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"
data "external" "subscription_id" {
  program = ["./install.sh", "<subscription_id>", "120"]
}
(edited)
and according to the post I was reading, I replace the line subscription_id = <subscription_id> with subscription_id = “data.external.subscription_id.result[“subscription_id”]”, however when I issue a terraform plan against that I receive:
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that Terraform can determine which modules and providers need to be installed.
Error: Missing newline after argument
on test.tf line 3, in provider “azurerm”: 3: subscription_id = “data.external.subscription_id.result[“subscription_id”]”
An argument definition must end with a newline.
I know I am missing something simple but I just can’t see it.
(please use code blocks) e.g.
subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"
data "external" "subscription_id" {
program = ["./install.sh", "<subscription_id>", "120"]
}
Could you maybe post the full bit of appropriate code in a code block? It’s hard to understand above… Is it literally:
subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"
data "external" "subscription_id" {
program = ["./install.sh", "<subscription_id>", "120"]
}
Or is:
subscription_id=$1
sleep $2
echo "{ \"subscription_id\": \"$subscription_id\" }"
The contents of install.sh?
that is the script.
this is the code:
provider “azurerm” {
  version         = “~>2.0”
  subscription_id = “data.external.subscription_id.result[“subscription_id”]”
  tenant_id       = “tenant_id”
  client_id       = “data.vault_generic_secret.azure.data[“client_id”]”
  client_secret   = “data.vault_generic_secret.azure.data[“client_secret”]”
  features {}
}

provider “vault” {
  address = “vault_address:8200/”
  auth_login {
    path = “auth/approle/login”
    parameters = {
      role_id   = “role_id”
      secret_id = “secret_id”
    }
  }
}

data “vault_generic_secret” “azure” {
  path = “azure/creds/Azure-Terraform”
}

resource “azurerm_resource_group” “rg” {
  name     = “myRemoteAmazicTest-rg”
  location = “northeurope”
}
:wave: FWIW it helps everyone to first run terraform fmt in the source folder, as it makes it a bit more readable. Secondly, you can put code into Slack code blocks by either typing 3 backticks or finding the code block option in the comment menu at the ellipsis.
You could also use a local-exec on a null resource to sleep for a bit in your terraform code like:
resource "null_resource" "pause_a_bit" {
provisioner "local-exec" {
command = "sleep 120"
}
}
I might try that cheers
Nope, that did not work. The delay needs to be in the provider authentication, not a pause in the resource code.
The issue is that the Vault-generated tokens have not been replicated around Azure AD, so when they are presented back to Azure they are not seen as valid.
Error: Error building account: Error getting authenticated object ID: Error listing Service Principals: autorest.DetailedError{Original: adal: Refresh request failed. Status Code = ‘400’. Response body: {"error":"unauthorized_client","error_description":"AADSTS700016: Application with identifier ‘data.vault_generic_secret.azure.data[“client_id”]’ was not found in the directory ‘7aeb5a8a-a7d2-40c1-8019-859b3549e7f1’. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant.\r\nTrace ID: 5073fbfa-8f18-4ba9-a0c7-6f0934bf6c00\r\nCorrelation ID: f640271c-6e6f-49e2-8ede-5d24d6b46bb2\r\nTimestamp: 2020-07-03 1044Z","error_codes":[700016],"timestamp":"2020-07-03 1044Z","trace_id":"5073fbfa-8f18-4ba9-a0c7-6f0934bf6c00","correlation_id":"f640271c-6e6f-49e2-8ede-5d24d6b46bb2","error_uri":"https://login.microsoftonline.com/error?code=700016"}”, resp:(*http.Response)(0xc000449950)}, PackageType:”azure.BearerAuthorizer”, Method:”WithAuthorization”, StatusCode:400, Message:”Failed to refresh the Token for request to https://graph.windows.net/7aeb5a8a-a7d2-40c1-8019-859b3549e7f1/servicePrincipals?%24filter=appId+eq+%27data.vault_generic_secret.azure.data%5B%E2%80%9Cclient_id%E2%80%9D%5D%27&api-version=1.6”, ServiceError:[]uint8(nil), Response:(*http.Response)(0xc000449950)}
on test.tf line 1, in provider “azurerm”: 1: provider “azurerm” {
this is the generated output; the bolded section shows an AD error code saying the account details are not valid.
@Tom Howarth you have this in quotes
client_id = "data.vault_generic_secret.azure.data["client_id"]"
client_secret = "data.vault_generic_secret.azure.data["client_secret"]"
you need to remove those double quotes like so:
client_id = data.vault_generic_secret.azure.data["client_id"]
client_secret = data.vault_generic_secret.azure.data["client_secret"]
Error: Invalid character
on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[“client_id”]
This character is not used within the language.
Error: Invalid expression
on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[“client_id”]
Expected the start of an expression, but found an invalid expression token.
Error: Invalid character
on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[“client_id”]
This character is not used within the language.
Error: Invalid character
on test.tf line 9, in provider “azurerm”: 9: client_secret = data.vault_generic_secret.azure.data[“client_secret”]
This character is not used within the language.
Error: Invalid character
on test.tf line 9, in provider “azurerm”: 9: client_secret = data.vault_generic_secret.azure.data[“client_secret”]
This character is not used within the language.
If you look closely at the double quotes you see something is off. I’m sure you’re on a Mac, and sometimes if you copy-paste double quotes from online, your Mac makes it a weird double quote which has a direction, left or right, like “ or ”. It needs to be the straight double quote: "
I have looked at those and replaced them all. Now it looks like my script is not being read, so it will not load the external data stanza. Is there a special way for this to be called?
TBH I think you should take a little time to go through a few of the online terraform courses, that will help you a lot
that is after removing the quotes
removing the quotes for the value in the brackets results in this:
Error: Reference to undeclared resource
on test.tf line 3, in provider “azurerm”: 3: subscription_id =data.external.subscription_id.result[subscription_id]
A data resource “external” “subscription_id” has not been declared in the root module.
Error: Invalid reference
on test.tf line 3, in provider “azurerm”: 3: subscription_id =data.external.subscription_id.result[subscription_id]
A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
Error: Invalid reference
on test.tf line 8, in provider “azurerm”: 8: client_id = data.vault_generic_secret.azure.data[client_id]
A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
Error: Invalid reference
on test.tf line 9, in provider “azurerm”: 9: client_secret = data.vault_generic_secret.azure.data[client_secret]
A reference to a resource type must be followed by at least one attribute access, specifying the resource name.
did you remove the quotes around client_secret and client_id ?
Yes see later response
can you post me the current code ?
Give me 10 minutes, just not at my machine at the moment.
Removing those quotes is wrong as they aren’t a variable, the previous code was right, I’ll respond there
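For reference, the working form keeps the straight double quotes only around the map keys (per the earlier suggestion):
subscription_id = data.external.subscription_id.result["subscription_id"]
client_id       = data.vault_generic_secret.azure.data["client_id"]
client_secret   = data.vault_generic_secret.azure.data["client_secret"]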
hi all, I used the cloudposse/terraform-aws-documentdb-cluster repo to create a DocumentDB instance in AWS. How can I configure the DocDB instance to send logs to CloudWatch?
Enabling enabled_cloudwatch_logs_exports only enables the cluster logging. However, it does not enable the parameter group’s audit_logs variable.
Any ideas?
Possibly not supported yet - PRs welcome! we’ve ramped up our review capacity.
How do I make a load balancer that, after a successful health check, doesn’t switch back to the old version of the container?
Are you talking about ECS?
Yes. ECS Fargate
I guess you’re using the default ECS deployment mode which is a rolling update. That results in old and new containers running at the same time during the deployment.
Yes
There is a CodeDeploy blue green deployment option that should help
An Amazon ECS deployment type determines the deployment strategy that your service uses. There are three deployment types: rolling update, blue/green, and external.
The “but” to the Blue Green using CodeDeploy is that service autoscaling is not supported.
Amazon ECS service auto scaling is not supported when using the blue/green deployment type. As a workaround, you can suspend scaling processes on the Amazon EC2 Auto Scaling groups created for your service before the service deployment, then resume the processes once the deployment has completed. For more information, see Suspending and resuming scaling processes in the Amazon EC2 Auto Scaling User Guide.
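In terraform terms that option is the deployment_controller block on aws_ecs_service - a sketch, assuming the CodeDeploy app and deployment group are wired up separately:
resource "aws_ecs_service" "this" {
  name            = "example-service" # hypothetical
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.td.arn
  launch_type     = "FARGATE"
  desired_count   = 2

  deployment_controller {
    type = "CODE_DEPLOY" # default is "ECS" (rolling update)
  }
}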
Ok. Thanks
For about a minute I see both the new and the old version.
How do I fix that?
2020-07-04
2020-07-06
Hi guys! I am new to terraform and trying to get my hands on terraform functions. I have created main.tf file:
resource "aws_lb_listener" "backend_alb_listener" {
load_balancer_arn = aws_lb.backend_alb.arn
port = lookup(var.alb_http_listeners, "port")
protocol = lookup(var.alb_http_listeners, "protocol", null)
# default_action - (Required) An Action block.
dynamic "default_action" {
for_each = var.alb_http_listeners
content {
type = lookup(default_action.value, "action_type", "forward")
target_group_arn = aws_lb_target_group.backend_alb_target_group.arn
}
}
}
and variables.tf file:
variable "alb_http_listeners" {
default = {
"block 1" = {
port = 443
protocol = "HTTPS"
default_action = {
action_type = "forward"
}
}
}
type = any
description = "A list of maps describing the HTTP listeners or TCP ports for this ALB."
}
It seems like the lookup function is not able to read from the variables.tf file: when I run terraform plan, it takes the default values, i.e. port = 80 and protocol = HTTP, not the ones I set in variables.tf. Can anyone help me write the variables.tf file correctly?
Thanks in advance.
When you get into situations like this with terraform, a good technique is to make a temporary folder and add the bare minimum to get an example working, preferably with no remote resources. Usually I’ll just declare a locals block and a few output blocks to show me what’s going on.
In your case here, the problem you’re having is that you have a complex map defined, and lookup expects the key to be at the top level. So to look up the port you’d actually need to pass the ‘block 1’ map to the function.
@Zach thanks for your valuable input. Can you give me a quick fix for the problem, meaning how do I pass the block 1 map to the function?
locals {
alb_http_listeners = {
default = {
"block 1" = {
port = 443
}
}
}
}
output "port" {
value = lookup(local.alb_http_listeners["default"]["block 1"], "port" ,80)
}
you can stick that into a .tf and run an apply, you’ll get ‘443’ as the output.
So do I have to do this for every lookup, such as for protocol?
Anywhere you have nested maps, yes
great thanks:)
or figure out if there’s a way to do it using the ‘key’ part of the lookup function, but either way its going to mean you have to know the structure of the map
You may just want to step back and think if you’re taking the right approach in that case
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 15, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2020-07-07
Hey everyone huge fan of what yall do. Been using your terraform modules for a long time.
Question: I’m switching our worker modules from terraform-aws-eks-workers to terraform-aws-eks-node-group and I noticed the node group module is missing the bootstrap_extra_args
parameter that the workers module has. This is a blocker for us so I wanted to see if there was something I was missing or if maybe this was on the road map to add?
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Looks like this isn’t actually possible with managed node groups. Nvm can ignore.
exactly - this is a fundamental limitation
we wanted this too - for using jenkins (docker in docker), but had to stick with workers for that node group.
note, it’s possible to mix and match. you can use both modules in concert under the same EKS cluster
Got it makes sense. Yeah thats too bad. I love the simplicity of managed node groups but not being able to add kubelet args is a blocker for us
what flags in particular?
Specifically I need to be able to set the --cluster-dns
flag so we can use node-local-dns caching for the cluster
aha, ok
fwiw, for others following along, this is the feature request issue: https://github.com/aws/containers-roadmap/issues/596
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
Here’s a question for the crowd. I’m conditionally creating an NLB based off of a variable, and I need to conditionally add said NLB as an additional load balancer to an ECS cluster. I know this example doesn’t work, but this is basically what I’m trying to do, and wondering if anyone has done something similar
resource "aws_ecs_service" "service" {
name = "${var.namespace}-${var.stage}-${var.service}"
cluster = aws_ecs_cluster.cluster.id
task_definition = aws_ecs_task_definition.td.arn
launch_type = "FARGATE"
desired_count = var.desired_count
network_configuration {
security_groups = [aws_security_group.ecs_tasks_sg.id]
subnets = var.internal == true ? tolist(var.private_subnets_ids) : tolist(var.public_subnets_ids)
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.alb_tg.arn
container_name = var.container_name
container_port = var.public_container_port
}
load_balancer {
count = var.nlb_enabled == true ? 1 : 0
target_group_arn = aws_lb_target_group.nlb_tg[0].arn
container_name = var.container_name
container_port = var.private_container_port
}
}
You can use a for loop on the attributes, no?
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
more examples here https://github.com/search?q=org%3Acloudposse+for_each&type=Code
GitHub is where people build software. More than 50 million people use GitHub to discover, fork, and contribute to over 100 million projects.
I’ll give that a try now!
Thanks Erik, I was able to use the examples to create a dynamic block with a for_each. First time I’ve used a dynamic
.
Excellent! glad that worked out for what you needed.
HCL2 for the win!
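For anyone following along, the resulting pattern presumably looks something like this (untested sketch based on the example above):
locals {
  # build the list of load balancer attachments conditionally
  load_balancers = concat(
    [{
      target_group_arn = aws_lb_target_group.alb_tg.arn
      container_port   = var.public_container_port
    }],
    var.nlb_enabled ? [{
      target_group_arn = aws_lb_target_group.nlb_tg[0].arn
      container_port   = var.private_container_port
    }] : []
  )
}

resource "aws_ecs_service" "service" {
  # ... name, cluster, task_definition, etc. as in the original example ...

  dynamic "load_balancer" {
    for_each = local.load_balancers
    content {
      target_group_arn = load_balancer.value.target_group_arn
      container_name   = var.container_name
      container_port   = load_balancer.value.container_port
    }
  }
}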
Hello, I had a question about configuring parameter group settings using cloudposse/terraform-aws-documentdb-cluster
. Is there currently no way to update parameter group settings (such as enabling tls, ttl_monitor, and profiler logs) using that terraform template?
quite possibly - but we’d accept PRs and are actively reviewing contributions
Does it take the parameter group settings of the default parameter group (docdb3.6) then?
Terraform module to provision a DocumentDB cluster on AWS - cloudposse/terraform-aws-documentdb-cluster
when terraforming EKS with node groups, how do you add ingress for the automatically provisioned security group to the cluster SG and vice versa? I don’t see a SG attribute that is exported
im a little late to the party, but we can’t tag the underlying ec2 instances w/ Name, etc?
• Stop using alb-ingress?
• Stop using the alb.ingress.kubernetes.io/security-groups annotation?
Instead, provision the security groups and set up the trust relationships, then pass the new blessed security group to the ingress.
sorry, I’m not talking about Kubernetes ingress. I was talking about the cluster security group ingress rule and the node group security group ingress rule. When provisioning the node group, I don’t seem to have a reference to its security group in terraform? Going the traditional worker ASG route, the security group is provisioned by us in terraform.
Maybe I’m still misunderstanding…
You don’t have control over the rules themselves added by AWS. That’s what you’re paying them for
but you have the security groups that you can add/remove rules to
forgive me @Erik Osterman (Cloud Posse) for not being specific enough. I don’t believe the EKS node group returns the automatically provisioned security group for the node group.
So, you’re right this does not appear readily available, but it seems there is a way:
Manages an EKS Node Group
Get information on an Amazon EC2 Autoscaling Group.
Provides a Launch Configuration data source.
terraform exposes the autoscale group of the managed nodes
look up the autoscale group using the aws_autoscaling_group data provider
then from there, look up the launch configuration using the aws_launch_configuration data provider
then from there you can access the security_groups
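Stitched together, that chain might look like this (untested sketch, assuming a node group resource named aws_eks_node_group.default with a single ASG; if the node group uses a launch template rather than a launch configuration, the last hop would differ):
data "aws_autoscaling_group" "ng" {
  name = aws_eks_node_group.default.resources[0].autoscaling_groups[0].name
}

data "aws_launch_configuration" "ng" {
  name = data.aws_autoscaling_group.ng.launch_configuration
}

output "node_group_security_groups" {
  value = data.aws_launch_configuration.ng.security_groups
}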
This seems like a nice addition to our module - if you end up implementing it, we’d accept it
@Erik Osterman (Cloud Posse) I may revisit node groups in the future if there are new features. Right now, for those of us that had been doing things the traditional ASG way, I feel like the terraform module for eks workers is basically just as “managed” (because of its automated provisioning) but allows for more configurability.
obviously if you were implementing the EKS workers terraform from scratch today without any prior context, doing it via node groups would’ve been way faster
2020-07-08
Partial Outage of Workspace Updates, some Runs may not complete Jul 8, 19:47 UTC Identified - We are currently working on a fix for an issue that affects workspace changes. The underlying problem affects the ability for some runs to start or complete.
HashiCorp Services’s Status Page - Partial Outage of Workspace Updates, some Runs may not complete.
Partial Outage of Workspace Updates, some Runs may not complete Jul 8, 20:08 UTC Resolved - We’ve tested and rolled out a fix for this issue. Runs that aren’t completing can be discarded and re-queued. Locked workspaces can now be force-unlocked (via a new run or via settings -> locking -> force unlock). https://www.terraform.io/docs/cloud/workspaces/settings.html#locking Jul 8, 19:47 UTC Identified - We are currently working on a fix for an issue that affects workspace changes. The underlying problem affects the ability for some runs to start or…
Does the terraform registry’s version-constraint support give you enough cause to stop using GitHub tag-based sources and instead use the private registry that Terraform Cloud offers? Seems like I could pick up non-breaking updates this way, while pinned GitHub tags wouldn’t.
2020-07-09
The local-exec below was able to update the .bashrc file on macOS, but it never works: my execution fails with “aws credentials are not configured”. The same works fine on Ubuntu. Any idea why it is failing, and any solution to make it work? I can’t configure credentials using the aws configure command since I am doing this at run time. This is my approach, but I need to work out why it is not working on macOS. Please suggest.
resource "null_resource" "aws_configure" {
provisioner "local-exec" {
command = "grep -qwF 'export AWS_ACCESS_KEY_ID' ~/.bashrc || echo 'export AWS_ACCESS_KEY_ID=${module.globals.aws_details["access_key"]}' >> ~/.bashrc;grep -qwF 'export AWS_SECRET_ACCESS_KEY' ~/.bashrc || echo 'export AWS_SECRET_ACCESS_KEY=${module.globals.aws_details["secret_key"]}' >> ~/.bashrc;grep -qwF 'export AWS_DEFAULT_REGION' ~/.bashrc || echo 'export AWS_DEFAULT_REGION=${module.globals.aws_details["region"]}' >> ~/.bashrc;"
interpreter = ["bash", "-c"]
}
}
What are you trying to achieve here?
Actually, I am trying to set up AWS credentials directly as env values during terraform execution; these would be used by terraform for AWS resource creation. My plan is to be able to run these terraform files anywhere, do the AWS credential configuration, and then initiate AWS resource creation.
Can you use aws-vault exec?
Till now I’ve been running it on Ubuntu machines, where the .bashrc file update made sure the aws credentials were used during runtime.
Oh ok, I never knew about this. Any link you would suggest?
You could then use tfenv
Transform environment variables for use with Terraform (e.g. HOSTNAME ⇨ TF_VAR_hostname) - cloudposse/tfenv
A vault for securely storing and accessing AWS credentials in development environments - 99designs/aws-vault
There’s also cloudposse/geodesic if you want to use a docker image as your standard environment for running Terraform things
Oh ok, I never thought about running it as a docker image. Interesting, let me check it out.
That would solve all my problems if I am able to run it as a docker image.
It works well
You can try this repo https://github.com/cloudposse/testing.cloudposse.co
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
Pass your account id, etc as a docker run env var
See dockerfile
Then docker exec and run assume-role
Sure, it really looks great. I never knew there were so many repos on cloudposse - really cool stuff. Let me go over it and see if I can set it up.
Thank you very much @Joe Niland
Good luck!
You’re welcome
I am trying to update a CodeBuild project with EFS settings, since terraform’s aws_codebuild_project does not have the option to do it during initial setup. I am getting the following error. I am sure it’s something simple that I am missing; let me know if anyone can point out what it is.
null_resource.output-id: Provisioning with 'local-exec'...
null_resource.output-id (local-exec): Executing: ["/bin/sh" "-c" "aws codebuild update-project --name supercell-shared-infra --file-system-locations [type=EFS,location=fs-865b7b05.efs.us-east-1.amazonaws.com,mountPoint=mount-point,identifier=efs-identifier]"]
null_resource.output-id (local-exec): Expecting value: line 1 column 2 (char 1)
2020-07-10
Hi Everyone - I have a question on terraform.
I wrote an AWS pipeline setup script in terraform that gets all the config values from variables, with the intention of reusing the script for creating multiple pipelines. But if I update my variables to create a new pipeline, then because the state file has information on the previous terraform builds, it overrides the existing pipeline with the new values. How do I handle this situation?
Sounds like your pipeline resources should be a module and you should invoke usage of that module for your X many pipelines and pass the corresponding variables at that level.
Generally, it sounds like reading up on Terraform state and lifecycle would be useful to you as terraform doesn’t act like a typical bash script would. You invoke terraform once to make your resources and then upon any subsequent invoking it’s either updating, adding, or destroying those same resources.
Thanks @Matt Gowie! So if I have a module for pipelines, I can then create X number of pipelines by changing the variables, is that right?
Yeah, you can create your project’s main.tf file, enumerate each pipeline you want by including a module block for it, and then pass your associated pipeline variables to their corresponding module.
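Something along these lines (module path and variables hypothetical):
module "pipeline_app" {
  source = "./modules/pipeline" # your extracted pipeline module
  name   = "app"
  repo   = "org/app"
}

module "pipeline_api" {
  source = "./modules/pipeline"
  name   = "api"
  repo   = "org/api"
}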
Got it thanks again @Matt Gowie appreciate the help!
Np, good luck with it!
How about using terraform workspaces and having multiple state files for multiple executions?
I typically use workspaces for environments i.e. dev, stage, prod. You could use them for what you’re looking to accomplish, but I’d suggest against it considering what I know so far.
Yes, best practice is to use workspaces for env variations, but not for different deployments of a unique stack. Good advice Gowiem.
2020-07-11
This manual-install situation should hopefully gradually become uncommon after the Terraform 0.13.0 release in a few weeks, as more providers…
Does this mean it becomes easier to publish/use custom providers in tf then? The hackery involved in using a community provider found out in the wild is a little painful to automate.
yep, this should be easy - as long as the maintainers register their providers
it will continue to be a pain for legacy providers distributed via github.
I wrote up on my experience here if it’s of interest….https://www.sheldonhull.com/blog/compiling-a-custom-provider-and-including-for-terraform-cloud/
Assumptions You are familiar with the basics of setting up Go and can run basic Go commands like go build and go install and don’t need much guidance on that specific part. You have a good familiarity with Terraform and the concept of providers. You need to include a custom provider which isn’t included in the current registry (or perhaps you’ve geeked out and modified one yourself ). You want to run things in Terraform Enterprise .
Turns out I wasted some time on this. I totally missed that creating a bundle was only for Terraform Enterprise, so I had to backtrack at the end of my exploration.
Excited to see that third party providers will be as easy to leverage as modules
2020-07-12
I hope they make it easy to host your own registry, or proxy/cache the terraform registry at your own endpoint
Hi guys, how would you try to resolve this issue? I dove deeper into this through this workshop https://www.techcrumble.net/2020/01/how-to-configure-terraform-aws-backend-with-s3-and-dynamodb-table/
terraform apply -auto-approve
Acquiring state lock. This may take a few moments...
Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
* ResourceNotFoundException: Requested resource not found
* ResourceNotFoundException: Requested resource not found
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Managing state with terraform is quite crucial, when we are working with multiple developers in a project, with remote operation and sensitive data, let’s see how to use AWS Backend with S3 and DynamoDB table
I’d verify the S3 bucket and DynamoDB table were actually created via the AWS Console just to rule the simple stuff out.
This is our way https://github.com/cloudposse/terraform-aws-tfstate-backend
Terraform module that provisions an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
@Erik Osterman (Cloud Posse) here we go mate. Should I rely on the conditional operator?
terraform apply -auto-approve
var.region
AWS Region the S3 bucket should reside in
Enter a value: yes
provider.aws.region
The region where AWS operations will take place. Examples
are us-east-1, us-west-2, etc.
Enter a value: us-west-2
data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
module.terraform_state_backend.data.aws_iam_policy_document.prevent_unencrypted_uploads[0]: Refreshing state...
module.terraform_state_backend.aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=eg-test-terraform-state-lock]
aws_dynamodb_table.with_server_side_encryption[0]: Refreshing state... [id=terraform-state-lock]
Error: Error in function call
on main.tf line 255, in data "template_file" "terraform_backend_config":
255: coalescelist(
256:
257:
258:
|----------------
| aws_dynamodb_table.with_server_side_encryption is empty tuple
| aws_dynamodb_table.without_server_side_encryption is empty tuple
Call to function "coalescelist" failed: no non-null arguments.
Error: Error in function call
on .terraform/modules/terraform_state_backend/main.tf line 234, in data "template_file" "terraform_backend_config":
234: coalescelist(
235:
236:
237:
|----------------
| aws_dynamodb_table.with_server_side_encryption is empty tuple
| aws_dynamodb_table.without_server_side_encryption is empty tuple
Call to function "coalescelist" failed: no non-null arguments.
Sorry about the stupid question, guys. Hopefully you don’t mind too much. I’m still on a learning curve and quite a bit in my slow-mode regime lol
2020-07-13
Join HashiCorp and Microsoft to talk about workload migration from VM’s to Kubernetes using HCS on Azure with Consul Cluster Management. In this session, Ray Kao, Cloud Native Specialist at Microsoft…
Join local practitioners for an overview of the HashiCorp toolset and a hands-on virtual workshop for Terraform on AWS on Tuesday, July 14th. http://events.hashicorp.com/workshops/terraform-july14
I went to this, was very disappointed.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 22, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
can someone help me understand the difference between, and why one should use, .tfvars vs auto.tfvars? Reading several blogs/docs, they seem to be used in the same manner
Files called terraform.tfvars are loaded first.
*.auto.tfvars are loaded in alphabetical order after terraform.tfvars so they can be useful as overrides.
Files specified by -var-file are loaded last and can override values from the other two.
https://www.terraform.io/docs/commands/apply.html#var-file-foo
The terraform apply command is used to apply the changes required to reach the desired state of the configuration, or the pre-determined set of actions generated by a terraform plan execution plan.
Also tfvars doesn’t work with Terraform Cloud. You have to use auto.tfvars or load the input values via TFC variables
ah thx @Chris Wahl that’s what i was looking for. I should have mentioned we are using TFC but this is good to know too Joe. thx you both
Take a look at https://www.terraform.io/docs/configuration/variables.html#variable-definition-precedence
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
It is possible to use tfvars with TFC/TFE
Make an environment variable called TF_CLI_ARGS_plan with a value of -var-file=./environments/prod.tfvars
This client used open source workspaces with named tfvars for each workspace, and when moving to TFE they didn’t want to start managing variables in the UI and wanted to keep using tfvars for variables.
I don’t think so. Unfortunately.
It would be awesome if you could. What you can do though is use yaml configuration and parse it in tf
This is what I’m currently doing instead of tfvars
whats the best way to run a loop on a module? I need to be able to build a small environment automatically based on a list variable. I see that looping over a module is going to be available in 0.13, but since it’s not available yet, is there any other way I can accomplish this? I am looking at for_each, but I am confused whether the cross-resource dependencies will work, as I need to pass other resource outputs. Any help would be greatly appreciated.
Basically you’ll want to do something like this: https://github.com/cloudposse/terraform-aws-ecr/blob/master/main.tf#L24-L34
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
It gets complicated quickly though if you depend on a lot of resources
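The linked pattern boils down to pushing for_each into the module’s resources; cross-resource dependencies keep working because everything is keyed by the same value. A rough sketch (resource types are just an illustration):
variable "environments" {
  type    = set(string)
  default = ["dev", "qa"]
}

resource "aws_sqs_queue" "this" {
  for_each = var.environments
  name     = "myapp-${each.value}"
}

# a dependent resource can reference the first by the same key
resource "aws_ssm_parameter" "queue_arn" {
  for_each = var.environments
  name     = "/myapp/${each.value}/queue-arn"
  type     = "String"
  value    = aws_sqs_queue.this[each.value].arn
}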
thanks @Erik Osterman (Cloud Posse) I only have one resource dependency so I think this should work perfectly …
2020-07-14
Hey everyone. I’m using terraform-aws-eks-workers 0.7.1, still on terraform 0.11. I’m looking for a way to add the availability zone to the tags so I can spread my pods evenly across all nodes with topologySpreadConstraints. Is there an easy way of doing this?
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
hi Team, I want to create IAM users for my team so that they can access the AWS console and perform operations on DynamoDB and SQS. I want to do it using terraform. Any good pointers on that? I checked these 2 links but they are very basic: https://www.terraform.io/docs/providers/aws/r/iam_user.html https://www.terraform.io/docs/providers/aws/r/iam_access_key.html I need some advanced options and multiple policies which I can enforce for those IAM users
Provides an IAM user.
Provides an IAM access key. This is a set of credentials that allow API requests to be made as an IAM user.
have a look at these modules: https://github.com/cloudposse?q=iam&type=source&language=
you will pretty much just need roles, policies and users
ok. let me have a look
Did you create your queues and dynamodb with Terraform too?
yes
In the link you shared, there are 3 different project folders for policies, roles and users. How do they connect to create users?
You create policies that are attached to roles and you allow individual users to assume those roles.
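A rough sketch of that pattern (names and the managed policies are placeholders; scope the permissions down for real use):
resource "aws_iam_user" "dev" {
  name = "dev-user"
}

# trust policy: let this user assume the role
data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = [aws_iam_user.dev.arn]
    }
  }
}

resource "aws_iam_role" "dynamo_sqs" {
  name               = "dynamo-sqs-access"
  assume_role_policy = data.aws_iam_policy_document.assume.json
}

# attach the permissions the role grants
resource "aws_iam_role_policy_attachment" "dynamo" {
  role       = aws_iam_role.dynamo_sqs.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}

resource "aws_iam_role_policy_attachment" "sqs" {
  role       = aws_iam_role.dynamo_sqs.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSQSFullAccess"
}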
If you already use something like Okta (or other IdP), I’d strongly consider setting up Federated SSO. Managing individual users in AWS with Terraform gets annoying and being too granular can also cause you trouble as there are limits to numbers and sizes of policies that can be attached to a user, and the user will inevitably want more and more.
Of course, it depends how many users you are talking about. If there’s 10, then Terraform will be fine. If there’s 1000 across 220+ AWS accounts then SSO / Self-Serve access request is a must.
ok.. we have less users so terraform is fine for us
hi Team,
I created code for user account creation, but once I run terraform plan it also wants to destroy many things on its own. This destruction is not needed. How do I avoid the destroys, or am I doing something wrong in my new code for user account creation?
2020-07-15
Hi Team - I have my terraform scripts bundled as modules. Now I have a main.tf under the root directory and multiple module configurations reusing the same module but with different variables, with the intent of reusing the module code for multiple configurations. But when I do a terraform plan, instead of creating 2 resources it’s basically overriding the 1st one with the 2nd configuration. Why does this happen, and what is the way to create multiple resources?
This is related to your using local state (assumption). The state of the managed objects is kept in local state files (terraform.tfstate) or in a remote data store, such as S3. Running TF in the same dir as the state files for a previous run of TF will operate on the same objects as the last run, because it gets its state from the state files.
I have a similar set-up and I was advised to look into Terraform Cloud, to manage the state.
You could use workspaces to separate out the state, regardless of where the state resides, but i’m not really up on that approach.
Has anyone run into the error below when using a dynamic block on tags?
on main.tf line 50, in resource "aws_launch_template" "default":
50: dynamic tags {
Blocks of type "tags" are not expected here
here’s what i’m inserting
dynamic "tags" {
for_each = local.common_tags
content {
key = tags.key
value = tags.value
}
}
In all of my code AWS tags have an equals sign, tags = {}, and the Terraform AWS tagging documentation backs that up by saying resource tags are arguments that accept a key-value map (not blocks that support for_each). I wanted to verify that, so I googled it and found the Terraform 0.12 Preview for Dynamic Nested Blocks gives a contradictory example showing exactly what you’re trying to do. Finally I found official documentation clarifying this:
This is required in Terraform 0.12 since tags is an argument rather than a block, which would not use =. In contrast, we do not include = when specifying the network_interface block of the EC2 instance, since this is a block.
It is not easy to distinguish blocks from arguments of type map when looking at pre-0.12 Terraform code. But if you look at the documentation for a resource, all blocks have their own sub-topic describing the block. So, there is a Network Interfaces sub-topic for the network_interface block of the aws_instance resource, but there is no sub-topic for the tags argument of the same resource.
For more on the difference between arguments and blocks, see Arguments and Blocks.
the equivalent of “dynamic” for a map, as tags is, would be to use a for expression…
tags = { for k, v in <map> : k => v }
something like that, anyway, not able to test it at the moment…
tags = flatten([
for key in keys(local.common_tags) : merge(
{
key = key
value = local.common_tags[key]
}, var.additional_tag_map)
])
You can replace var.additional_tag_map with {} if you don’t have other tags
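For a plain map argument - which tags is on most resources, including aws_launch_template - a simple merge may be all you need (sketch, reusing the locals/vars from above):
tags = merge(local.common_tags, var.additional_tag_map)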
sweet thank you all for the info i will test this out and let you know if it works
How can I narrow down my filter to just “id”? Here is what I am running: terraform state show module.controlplane.aws_security_group.worker
resource "aws_security_group" "worker" {
arn = "arn:aws:ec2:us-west-2:000000000:security-group/sg-000000000000"
id = "sg-000000000000"
ingress = []
name = "foobar"
owner_id = "000000000"
revoke_rules_on_delete = false
vpc_id = "vpc-000000000000"
}
hey there, I have legacy terraform 0.11 and aws provider 1.32.0
goal - manage credentials across 3 aws accounts (environments dev, test, prod) and 5 logical domains per environment in SSM
the question is: am i overcomplicating this, or is there an easier way to handle it?
here’s the module code
note:
• yes, i know about the security concerns
• state is locally managed
2020-07-16
Trying to use a private GitLab repository as a Terraform module.
It works fine when I hardcode the token like this:
module "resource_name" {
source = "git::<https://oauth2>:<GITLAB_TOKEN>@gitlab.com/user/repo.git?ref=tags/v0.1.2"
...
}
It also works like this:
module "resource_name" {
source = "git::<https://gitlab.com/user/repo.git?ref=tags/v0.1.2>"
...
}
When I extend my ~/.gitconfig
with:
[url "<https://oauth2>:<GITLAB_TOKEN>@gitlab.com"]
insteadOf = <https://gitlab.com>
Is there a way I could provide the GITLAB_TOKEN via an environment variable?
Sounds like the 2nd way you have there is a good way to do it. I don’t believe you can feed a terraform variable into a module source, since the source is needed up front when you run terraform init
https://www.terraform.io/docs/modules/sources.html#generic-git-repository give some good ideas. You can use ssh instead of HTTPS, and it will use your SSH key, or you can set up your git credential helper to already be logged in
The source argument within a module block specifies the location of the source code of a child module.
You are right - I tried a Terraform variable but it didn’t work:
There are some problems with the configuration, described below.
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
Error: Unsuitable value type
on main.tf line 2, in module "resource_name":
2: source = "git::<https://oauth2>:${var.gitlab_token}@gitlab.com/user/repo.git?ref=tags/v0.1.2"
Unsuitable value: value must be known
Error: Variables not allowed
on main.tf line 2, in module "resource_name":
2: source = "git::<https://oauth2>:${var.gitlab_token}@gitlab.com/user/repo.git?ref=tags/v0.1.2"
Variables may not be used here.
need to read more about “git credential helper”, thx @roth.andy
Used the following:
• Have a simple script at /usr/local/bin/git-credential-helper
#!/bin/sh
# Script used to authenticate against GitLab by using the GitLab token
# When using the GitLab token the assumed user is oauth2
# Example: git clone https://oauth2@gitlab.com/user/project.git
# Return the GitLab token
echo "${GITLAB_TOKEN}"
• create environment variable with the token:
export GITLAB_TOKEN='<MY SECRET GITLAB TOKEN>'
• then just download the module with:
GIT_ASKPASS=/usr/local/bin/git-credential-helper terraform init
The main.tf looks like this now:
module "resource_name" {
source = "git::<https://oauth2>@gitlab.com/user/project.git?ref=tags/v0.1.2"
...
}
@roth.andy thx again for pointing me in the right direction
you’re welcome
I ran into a similar need and saw this thread. Really appreciate the content / ideas here, I built upon it to solve my problem. Shared the details here! https://wahlnetwork.com/2020/08/11/using-private-git-repositories-as-terraform-modules/
Learn how to quickly and efficiently setup private git repositories as Terraform modules using a dynamic access token and continuous integration!
CDK for Terraform: Enabling Python & TypeScript Support: https://www.hashicorp.com/blog/cdk-for-terraform-enabling-python-and-typescript-support/
Cloud Development Kit for Terraform, a collaboration with AWS Cloud Development Kit (CDK) team. CDK for Terraform allows users to define infrastructure using TypeScript and Python while leveraging the hundreds of providers and thousands of module definitions provided by Terraform and the Terraform ecosystem.
so does that mean I just write infrastructure and app both in typescript ?
Cloud Development Kit for Terraform, a collaboration with AWS Cloud Development Kit (CDK) team. CDK for Terraform allows users to define infrastructure using TypeScript and Python while leveraging the hundreds of providers and thousands of module definitions provided by Terraform and the Terraform ecosystem.
so, pulumi out of business yet?
damn damn damn, this is good: https://registry.terraform.io/modules/cloudposse/iam-policy-document-aggregator/aws/0.1.0. We’re doing it a different, crappier way where I’m at. Going to suggest we swap over to this.
btw, up to 0.3.1 now
This is neat, but I still can’t understand why someone would aggregate policies into one.
What are some use cases ?
I’m working at a large media company right now that is clearly pushing the limits of what IAM is capable of, for better or worse. We need every byte of IAM policy possible, so it’s useful to concat many small policies into a larger one when attaching to a role.
2020-07-17
Hi! anyone facing this issue when destroying/creating an AWS EKS cluster? https://github.com/cloudposse/terraform-aws-eks-cluster/issues/67
what Error: Creating CloudWatch Log Group failed: ResourceAlreadyExistsException: The specified log group already exists: The CloudWatch Log Group '/aws/eks/eg-test-eks-cluster/cluster' alr…
Please see the supporting details in that issue. It’s not a module problem, it’s a terraform problem.
Terraform Cloud Outage Jul 17, 21:31 UTC Investigating - Due to a failure in a third-party DNS provider, Terraform Cloud runs are failing and the Terraform Cloud web interface is unavailable.
Terraform Cloud Outage Jul 17, 21:41 UTC Monitoring - Terraform Cloud is currently back to normal functionality. We’re continuing to monitor DNS functionality and communicate with our provider.Jul 17, 21:31 UTC Investigating - Due to a failure in a third-party DNS provider, Terraform Cloud runs are failing and the Terraform Cloud web interface is unavailable.
Terraform Cloud Outage Jul 17, 22:35 UTC Resolved - The upstream DNS provider has fixed the issue. Terraform Cloud is operational again - if a run failed during this outage, please re-queue it. If you have problems queueing runs, please reach out to support.Jul 17, 21:41 UTC Monitoring - Terraform Cloud is currently back to normal functionality. We’re continuing to monitor DNS functionality and communicate with our provider.Jul 17, 21:31 UTC Investigating - Due to a failure in a third-party DNS provider, Terraform…
HashiCorp Services’s Status Page - Terraform Cloud Outage.
2020-07-18
Hi everyone, I have a problem with Terraform when I add more than 6 tags to AWS services.
It didn’t write the “kubernetes.io/role/elb” = “1” tag to the state file (edited). I cannot find the reason for that. Can you help me find out?
Are slashes valid for key? Never tried that
I think it is not a problem, because I also have other tags with slashes.
What is the issue? What is the error message? What type of resource? You need to provide more details to get help
I cannot set 7 tags on a resource via Terraform.
Terraform just saves 6 tags in the state.
Are you saying that you believe your code should set 7 tags, but you are only getting 6 and no errors? If so, I suspect it is an issue in your code. You would need to share all of the related code for anyone to verify this
What’s the latest from 0.13 hands-on? Has it saved a lot of repeated code for you so far? I haven’t tried it, as I’m using Terraform Cloud primarily. Overall reactions to the improvements would be great.
My thoughts from about a month ago, the module-level count/for_each will definitely simplify things for many modules… https://sweetops.slack.com/archives/CB6GHNLG0/p1592345793219500?thread_ts=1592345793.219500&cid=CB6GHNLG0
oh yeah, playing with tf 0.13 today, and the ability to disable a module using count = 0 eliminates so much of the cruft in advanced modules
Also makes it easier to work with community modules and integrate them into your own work
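e.g. something like this (0.13 sketch; module path and flag hypothetical):
variable "monitoring_enabled" {
  type    = bool
  default = false
}

module "monitoring" {
  source = "./modules/monitoring" # hypothetical local module
  count  = var.monitoring_enabled ? 1 : 0
}

# references then become a list: module.monitoring[0].some_output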
2020-07-19
Hey, a question about the aws elastic beanstalk environment module. Is it possible to implement WAF using that module for the EB environment? I didn’t see any inputs for WAF, or any other modules available for it.
2020-07-20
Hi, I was using EKS worker nodes in the past for our staging env, and now I would like to switch to terraform-aws-eks-node-group. My questions:
- if I use terraform-aws-eks-node-group, is there a way to encrypt the disk and also set a scaling policy (CPU limit)?
- if I use EKS worker nodes, is there a way to automatically drain nodes before removing them? At the moment I’m using
termination_policies = ["OldestInstance", "OldestLaunchConfiguration", "Default"]
Thanks.
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
We have a PR for encryption that was contributed, but it needs rebasing by @Virender Khatri
what Enable optional encryption_config Create optional Cluster KMS Key if one is not provided why To enable eks cluster resources (e.g. secrets) encryption references https://aws.amazon.com/bl…
You can also enable https://www.terraform.io/docs/providers/aws/r/ebs_encryption_by_default.html
Manages whether default EBS encryption is enabled for your AWS account in the current AWS region.
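That one is a single resource per region (sketch):
resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}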
For draining, you might want to have a look at https://github.com/aws-samples/amazon-k8s-node-drainer
Gracefully drain Kubernetes pods from EKS worker nodes during autoscaling scale-in events. - aws-samples/amazon-k8s-node-drainer
(haven’t used it)
A Kubernetes Daemonset to gracefully handle EC2 instance shutdown - aws/aws-node-termination-handler
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 29, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2020-07-21
Hi, has anyone here used terraform with helmfile together?
I know @mumoshu has been working on a helmfile provider
Yes he has, and I’ve happily downloaded the latest release and built it locally: https://github.com/mumoshu/terraform-provider-helmfile
Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.
But in his example he mentions external helmfile_release_set which I use
But I also need to send env variables with it
And sending env variables is shown only with inline releases
For release sets it should work. We are doing this the following way:
resource "helmfile_release_set" "common_stack" {
...
environment_variables = {
EXTERNAL_IP = data.google_compute_address.dev_ip_address.address
}
...
}
ohhhh
using release 0.3 ?
Not yet, we are facing some issues with the most recent release
which one you have so far?
terraform apply just started failing with no visible reason. I’m investigating this right now
same here…
so 0.2.0 until then?
This is the issue I opened: https://github.com/mumoshu/terraform-provider-helmfile/issues/22
I've tried the latest version of this provider (taken from the master branch) and found out that some terraform apply failed without any reason. I also noticed that the output had changed. It b…
for me it just ran endlessly
creating on and on
There is something similar
Steps to Reproduce : Set export TF_LOG=TRACE which is the most verbose logging. Run terraform plan …. In the log, I got the root cause of the issue and it was : dag/walk: vertex "module.kube…
interesting
should i revert to 0.2.0 until further notice?
You can give it a try. But it also has some troubles like the spoiled state problem I described here: https://github.com/mumoshu/terraform-provider-helmfile/issues/16
In #9 we decided that taking file approach like resource "helmfile_release_set" "mystack" { content = file("./helmfile.yaml") … } would fix all the troubles with the…
so 0.1.0 is safest
probably a reason why it’s labeled as latest
It has almost the same issues with the state:)
Btw you might find this useful:
https://github.com/roboll/helmfile/issues/505#issuecomment-653684082
Thoughts on adding support for terraform as a temptable values data source? Opening this to think through design and integration. An opposite option would be to have a terraform helmfile provider w…
huh, seems broken a lot …
unfortunately, helmfile is great for the helm/kustomize combo + patches…
Regarding the spoiled state problem, you might not face this. We’ve been using this provider successfully for quite some time. But when you encounter the problem it will bother you a lot)).
Did I understand correctly that there are two approaches? One is using the tf plugin to execute the release AND pass variable data, and the other is just to output variable data from the tf infrastructure that you can later use with helmfile?
<ref+tfstate://path/to/some.tfstate/RESOURCE_NAME> and helmfile gets it from tf automatically by using output variables?
I would like to see a tf variant2 provider instead :X
Helmfile does not handle post-delete actions/hooks for deleting CRDs. So I have to use taskfile/variant2 for pre and post actions/hooks, but then I will lose the chance to use the helmfile tf provider.
Did I understand correctly that there are two approaches? One is using the tf plugin to execute the release AND pass variable data, and the other is just to output variable data from the tf infrastructure that you can later use with helmfile?
I think so, haven’t tried the latter yet though:)
interesting nonetheless, happy i got new info thanks a lot for that!
What if tfstate is stored in AWS S3, Azure Storage ?
Helm-like configuration values loader with support for various sources - variantdev/vals
Remote backends like S3 are also supported…
Oh no, Azure is not there…
Would be nice to support AWS SSO and Azure MSI auth/authz ( + Azure Storage Backend ) like terraform does.
Would be nice to support AWS v2 CLI SSO ( terraform-providers/terraform-provider-aws#10851 ) and Azure CLI ( https://www.terraform.io/docs/providers/azurerm/guides/azure_cli.html ) in the future. T…
Would be nice to support Azure Storage ( as alternative to AWS S3 ) via #6 ( Same should apply for AWS S3. Thanks
for me it just ran endlessly
This is super interesting. I thought I had not changed any part of the provider that might end up in something like that
So I fixed a potential deadlock issue https://github.com/mumoshu/terraform-provider-helmfile/issues/21#issuecomment-662178012
Steps to Reproduce : Set export TF_LOG=TRACE which is the most verbose logging. Run terraform plan …. In the log, I got the root cause of the issue and it was : dag/walk: vertex "module.kube…
My issue was also resolved
Thanks a lot for the support and fixes @mumoshu, shall I grab the latest release and try again?
I might be doing something really wrong but when I do terraform plan with 0.3.2 I get Error: mkdir : no such file or directory
It has just: content = file("../helmfile/helmfile.yaml")
Sry my bad! Fixed in 0.3.3
ohh np. I just noticed your timezone, yikes sorry for bothering you
ah no worry! much better than leaving it until tomorrow morning. thx for testing
on it now
seems like the path to the binary is hardcoded to /usr/local/bin/helmfile but I have it installed via snap, even though the helm binary works on cli. And I think previous versions worked
I also tried helm_binary = "/snap/bin/helm" but no success
wow really?
i hope i’m not doing something wrong though
actually, it's not hard-coded. the default value for helm_bin is helm; for bin it's helmfile
beginner with terraform and new to helmfile
do you have terraform.tfstate in your work dir?
yes
if you don’t have any kind of secrets in it, could you share it so that we can see what’s happening in the state
or perhaps your .tf file, if seeing the state file doesn't help
ohh, i had a file with a long hash generated there (probably from previous tests). after removing it I just get an error that it doesn't find some file, but that's normal because the paths are relative to the helmfile, not terraform
so apart from this it seems to be working, I just need to change the structure a bit I assume
after removing it I just get error that it doesn’t find some
are you talking about a file named like helmfile-8003d363e667d0a8cbc898f0866f02756c24328f56045aad1745a59deca0a14e.yaml generated by the provider?
and I just saw the working directory directive, and after changing that it works, wow you thought of everything
yes sir, that's the one I was referring to
glad to hear it worked for you!
and I can get values from terraform like ip address, sql instance address, domains from uptime checks and pass them on
thanks so much for your help and support
i'll do my best to push it into production, test it and provide feedback
awesome! please feel free to ask me anything towards that, here or in gh issues, whatever works for you
@mumoshu hello again
I went ahead and tested the latest release, and the diff generated from terraform is indeed correct, but it ends with This is a bug in the provider, which should be reported in the provider's own issue tracker.
Everything seems fine and running the command manually directly with helmfile makes the deployment
Error: Provider produced inconsistent final plan
I'm moving forward from the previous version to the most recent and the latest fails for all our environment with Error: rpc error: code = Unavailable desc = transport is closing during terrafo…
The initial one was: https://github.com/mumoshu/terraform-provider-helmfile/issues/22
I've tried the latest version of this provider (taken from the master branch) and found out that some terraform apply failed without any reason. I also noticed that the output had changed. It b…
Thanks for the update Andrey
I also tried the approach you suggested with tfstate and such
Under values I added
- "ref+tfstate://../terraform/terraform.tfstate"
no provider registered for scheme “tfstate”
oh wait, maybe I need to update my helmfile
oh that was it
now I get
panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1172ece]
PANIC!
not sure what I’m doing wrong
@Andrew Nazarov are you using this by any chance?
Ah think I got it, I need to name variables separately, i included them like a .yaml, my bad
Sorry, haven’t tried yet this ref+tfstate approach
Would have gone with the first one, but it seems it doesn't go through
@Andrew Nazarov for me updating to the latest helmfile seems to have worked
ah nevermind it came back once I changed something else
Any luck with this Andrey or did you revert back to an older version?
Thanks. Error: Provider produced inconsistent final plan is really interesting. I'm now wondering why it doesn't reproduce on my machine while I'm developing it
Maybe it’s only a warning rather than an error on my env?
2020/07/27 09:52:31 [WARN] Provider "helmfile" produced an unexpected new value for helmfile_release_set.mystack2, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .diff_output: was cty.StringVal("Comparing release=myapp2, chart=sp/podinfo\ndefault, myapp2-podinfo, Deployment (apps) has changed:\n...\n prometheus.io/port: \"9898\"\n spec:\n terminationGracePeriodSeconds: 30\n containers:\n - name: podinfo\n- image: \"stefanprodan/podinfo:1234\"\n+ image: \"stefanprodan/podinfo:12345\"\n imagePullPolicy: IfNotPresent\n command:\n - ./podinfo\n - --port=9898\n - --port-metrics=9797\n...\n\nin ./helmfile-3adb8a9ba929668a0ea4b06dbeddd4cba9f0283b926b55c1c31f5e7176c497e1.yaml: failed processing release myapp2: helm3 exited with status 2:\n Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)\n Error: plugin \"diff\" exited with error\n"), but now cty.StringVal("")
Maybe my tf is outdated?
$ terraform version
Terraform v0.12.13
Still unable to reproduce it even with tf v0.12.29
@Andrew Nazarov @Paul Catinean Could you share the exact config and steps to reproduce the error you're seeing? I've tried this for an hour with various configs but have had no luck so far. It works fine on my machine
also, which resource type are you using when you get the error: helmfile_release_set or helmfile_release?
Hi @mumoshu, yes sure I will try to give as much data as I can
• Terraform v0.12.26
• helmfile version v0.125.0
resource "helmfile_release_set" "staging" {
  content           = file("../helmfile/helmfile.yaml")
  working_directory = "../helmfile"
  environment_variables = {
    TERRAFORM-TEST = "This came from terraform!"
  }
  environment = "staging"
}
output "staging" { value = helmfile_release_set.odoo-staging.diff_output }
That’s the full output of the terraform apply command
Let me know if I can help with any extra details
@Paul Catinean Thank you so much! I’ll try reproducing the issue once again with it.
In the meantime, would you mind trying 0.3.5 which I’ve just released, to see if it gives you any difference?
For me, it did fix the warning I was seeing(https://sweetops.slack.com/archives/CB6GHNLG0/p1595811243157400?thread_ts=1595351319.054400&cid=CB6GHNLG0) which might relate to yours
Maybe it’s only a warning rather than an error on my env?
2020/07/27 09:52:31 [WARN] Provider "helmfile" produced an unexpected new value for helmfile_release_set.mystack2, but we are tolerating it because it is using the legacy plugin SDK.
The following problems may be the cause of any confusing errors from downstream operations:
- .diff_output: was cty.StringVal("Comparing release=myapp2, chart=sp/podinfo\ndefault, myapp2-podinfo, Deployment (apps) has changed:\n...\n prometheus.io/port: \"9898\"\n spec:\n terminationGracePeriodSeconds: 30\n containers:\n - name: podinfo\n- image: \"stefanprodan/podinfo:1234\"\n+ image: \"stefanprodan/podinfo:12345\"\n imagePullPolicy: IfNotPresent\n command:\n - ./podinfo\n - --port=9898\n - --port-metrics=9797\n...\n\nin ./helmfile-3adb8a9ba929668a0ea4b06dbeddd4cba9f0283b926b55c1c31f5e7176c497e1.yaml: failed processing release myapp2: helm3 exited with status 2:\n Error: identified at least one change, exiting with non-zero exit code (detailed-exitcode parameter enabled)\n Error: plugin \"diff\" exited with error\n"), but now cty.StringVal("")
Just wondering but how are you upgrading terraform-provider-helmfile?
That’s also a good question, since this is the first time I’ve been doing this and first time using terraform
I think it's the 0.3.6 provider that I was using
I download the latest release available from github, unpack it
I installed go with snap
go version go1.14.6 linux/amd64
inside the unarchived release I do go build
and the resulting executable I place directly into the .tgplugins directory
and replace the other one, then run terraform init
I had no instructions on how to do it from the page so i assumed this would be the way
Same error i’m afraid using https://github.com/mumoshu/terraform-provider-helmfile/releases/tag/v0.3.5
Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.
and the process described above
@mumoshu
@Paul Catinean Thanks a lot! The process looks fine. Probably there should be some unseen bug I’m not aware of. Will keep investigating..
If I can help with anything else like a screencast, helmfile description, even a call if needed I can do just let me know
Would you mind sharing the terraform.tfstate file that should have been created by terraform?
Was that not included in the paste I gave with terraform apply?
I have the .tfstate, but not sure how to get the one that should have been created by terraform, if it's not in the paste
Ah, also i omitted my helm version, if that has anything to do with it: version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
I thought terraform apply creates it on the first successful run
it never went through I guess, so every time I do terraform apply I get the same diff
the tfstate diff and helmfile diff inside it
Got it! Thanks
Maybe I should try a simpler helmfile with less data inside? see if it’s from that or idk
Maybe. JFYI, this is what I’m using..
provider "helmfile" {}
resource "helmfile_release_set" "mystack" {
path = "./helmfile.yaml"
helm_binary = "helm3"
working_directory = path.module
environment = "default"
environment_variables = {
FOO = "foo"
}
values = [
<<EOF
{"name": "myapp"}
EOF
]
selector = {
labelkey1 = "value1"
}
}
resource "helmfile_release_set" "mystack2" {
content = <<EOF
releases:
- name: myapp2
chart: sp/podinfo
values:
- image:
tag: "12345"
labels:
labelkey1: value1
- name: myapp3
chart: sp/podinfo
values:
- image:
tag: "2345"
EOF
helm_binary = "helm3"
// working_directory = path.module
working_directory = "mystack2"
environment = "default"
environment_variables = {
FOO = "foo"
}
values = [
<<EOF
{"name": "myapp"}
EOF
]
selector = {
labelkey1 = "value1"
}
}
output "mystack_diff" {
value = helmfile_release_set.mystack.diff_output
}
output "mystack_apply" {
value = helmfile_release_set.mystack.apply_output
}
output "mystack2_diff" {
value = helmfile_release_set.mystack2.diff_output
}
output "mystack2_apply" {
value = helmfile_release_set.mystack2.apply_output
}
resource "helmfile_release" "myapp" {
name = "myapp"
namespace = "default"
chart = "sp/podinfo"
helm_binary = "helm3"
// working_directory = path.module
// working_directory = "myapp"
values = [
<<EOF
{ "image": {"tag": "3.14" } }
EOF
]
}
output "myapp_diff" {
value = helmfile_release.myapp.diff_output
}
output "myapp_apply" {
value = helmfile_release.myapp.apply_output
}
helmfile.yaml
releases:
- name: {{ .Values.name }}-{{ requiredEnv "FOO" }}
chart: sp/podinfo
values:
- image:
tag: foobar2abcda
labels:
labelkey1: value1
is this part of the automated test cases?
Not yet. Just that I'm still not sure how this can be (easily) automated. I'm running terraform init, plan, and apply on this every time I change code (and manually tag/publish a new release)
could this have something to do with me trying multiple times to do a release through this that failed with previous versions?
Possibly. Maybe you have helm releases installed on your cluster even though terraform + tf-provider-helmfile considers it a fresh install on every terraform apply run?
i just removed the file from terraform and it killed the instance
So that’s working
Thank god the env variable works and killed only staging
Are you saying terraform apply after removing some helmfile_release from your tf file resulted in deletion? :slightly_smiling_face: you may already know, but you'd better run terraform plan beforehand
ah yes, that was the plan to remove it from tf state and also the cluster so it’s “reset”
and now doing an apply from scratch
ah, could you share your ../helmfile/helmfile.yaml?
Sure, i’ll just remove some sensitive data here and there
but it’s multi-tiered with some subfiles
it's not a must if it takes too much effort! my current theory is that this issue has something to do with complex helmfile.yaml and/or first installs, so i just wanted to have more samples if i can get them
Now that you mention, cert-manager has the installCrds hook which can add a lot of data and I did have issues in the past regarding the gitlab-runner
But at the same time I went to the respective file and did helmfile -e staging apply and it went through
I’ll check again
much appreciated! i need some sleep now but i’ll definitely report back tomorrow
thanks for your time!
I’ll also post feedback if I get anything new
Let me know if you need any more info
It seems that if I change the main helmfile and remove entire conditions it works
As soon as I change a value in the values.tmpl file, and the diff shows a diff of values vs a diff of the helmfile, that's when it breaks
so the thing to try is to have a helmfile which has an external gotmpl values file
Quite some talk:) Don't know where to start. Yeah, I've got a huge helmfile (about 2k lines of code) injected into a fairly small helmfile
Indeed
This is the only distinction I could find. If I remove an entire release it works and it shows just the few lines of the release
If I change anything in the values.yaml.gotmpl and I change 1 char then I get that
I tried in multiple releases
I have one suspicion. Could it be these dots in diffs?
...
bit, office365-editor, ConfigMap (v1) has changed:
...
These are taken from diff_output
could very well be, i also have changes in the configmap as well
How can we test?
Or this is the change I’ve got in the diff for the failed run
- path = "helmfile.yaml" -> null
But actually these look like just a red herring
maybe there’s a standard go function that parses this where we can feed the diff into?
Found this string in the output. There is a mess in logs after this error, I’ll try to grab something sensible
but now cty.StringVal("Adding repo stable
I’ve tried to reconstruct the message from the mess
When expanding the plan for helmfile_release_set.mystack to include new values
learned so far during apply, provider "registry.terraform.io/-/helmfile"
produced an invalid new value for .diff_output: was cty.StringVal("Adding repo
...
a lot of text
...
but now cty.StringVal("Adding repo stable
...
a lot of text
...
So probably the aforementioned dots matter))))
I’ll try to decipher more
think I have to start learning go so I can add some breakpoints and do some tests myself
I would like to debug too, but don’t have time right now at all
I've done some work to decipher the output, and to me the value pointed at in was cty.StringVal() is the same as the value from but now cty.StringVal()
so it’s essentially the same?
"registry.terraform.io/-/helmfile" produced an invalid new value for
.diff_output: was cty.StringVal("Adding repo stable
But that would happen in any diff
Doing changes in the main helmfile has no issue whatsoever
I can also see
- error = (known after apply)
Thanks! I think it’s fixed in v0.3.6 https://github.com/mumoshu/terraform-provider-helmfile/releases/tag/v0.3.6
// I’ll add some explanation on why it (may) fix the issue later but I don’t have time right now!
Testing right away
Seems to be working indeed, but the problem is I don't see the diff for changes to the helmfile. Just the main one
While this works I won’t be able to sign off on the final changes though
Works for me too. @mumoshu huge thanks. And yes the diff_output is missing. Is this expected behaviour?
Was wondering the same
I’ve managed to fix it in v0.3.7. Sorry for back and forth but could you try it once again?
Same error i’m afraid
I think reproducing the error is key here to fixing the issue, and possibly adding it to automation later. Did you try having an external gotmpl values file referenced from the main helmfile.yaml?
Sad
So I did reproduce the issue and managed to fix it on my env
The important point was that terraform calls plan twice and the two plan results must be equivalent
twice, interesting
actually you are right it does call it twice
I see twice the same output
Yeah. The first plan can optionally be done beforehand by calling terraform plan and storing the planfile to be reused later by terraform apply. If you didn't store/reuse the plan on apply, terraform apply seems to run plan twice, once for the initial planning and a second time for verification
just to be 100% sure that nothing changed by the time you called plan to apply?
yeah
I guess that makes sense, I do plan once to check, and apply right after since it’s a small infrastructure that only I am managing
to me the two plan results weren't really equivalent
helmfile diff runs a number of helm repo add and helm repo update commands at the very beginning, and the result of helm repo update isn't reliable
ohhhhhhhhhhhh
I was just looking at that now
maybe an option on helmfile to output directly just the diff? that way it’s also backwards compatible idk
so i've tried to "erase" the messages from helm repo up in the last few commits, which solved the issue for me
yeah that might be an option!
Strangely enough when I don’t change the vals yaml it always works
that's odd. but if it's the same issue, you should see the error after a few more trials
what do you mean?
maybe i’ve misunderstood what you’ve said in
Strangely enough when I don’t change the vals yaml it always works
so you just said that terraform apply works when you have no change on your helmfile.yaml, right?
does the issue disappear if you set concurrency = 1 on your helmfile_release_set resource in your tf file?
let me check
nope, same error
not sure what else to try
Could you share the full log obtained by running terraform apply with TF_LOG=TRACE TF_LOG_PATH=tf.log terraform apply?
sure
Thank you so much for your cooperation. It really helps!
more than happy to do so, it’s in my own interest as well, will help me a great deal in my future deployments
hmm I see there is also private info there like public keys and such
I can try to remove them just not sure if I get it all
you need it all or just the helmfile part?
I’ll join soon too. Not sure what to check though. I’ll grab the latest version
I was a bit worried that either gitlab runner or cert-manager with installCrds set to true might be the problem but seems not since it’s not included in the diff
Hey, I saw the discussion on the github issue, how has the color diff been resolved?
@Paul Catinean Thanks for sharing the TF_LOG!!
Apparently the helm-diff result is unstable… From your log, I can see that the first helmfile-diff run shows changes on the configmap and then the deployment, whereas the second run shows changes on the deployment and then the configmap…
ohhh
I am using the latest helm diff plugin (which you contributed to in the upstream project)
I was even going to open a PR to propose updating the diff version on helmfile to a more recent one
As for me, having the installCRDs parameter set to true on cert-manager resulted in huge diffs of essentially the exact same data, just organized differently
what version of helm diff are you using yourself sir?
yeah. i thought helm-diff used a go map(hashmap) to collect resources to be diffed. afaik map entry order isn’t guaranteed/stable
ouch hmmmm
I’m using the latest version of helm-diff
number?
I have 3.1.2
3.1.2
hmmmm
then it’s related to my chart and how it produces the output or how helm diff parses that output
yeah probably… and it may even be that I'm just lucky that helm-diff somehow prints the changes in a stable order on my machine
probably i'd get the same error if i add more releases and k8s resources to be diffed
It's a plausible scenario
not sure how to proceed here
maybe i can “enhance” the provider to skip running helmfile-diff on the second plan
as it seems, if this is the case, there is so much volatility between the repos added and the diff plugin version that it can be wildly inconsistent
on the surface - yes
what about switching to kubectl diff as it was proposed in one helmfile issue? Will this help?
I’ve just faced the issue with the fixed version (https://sweetops.slack.com/archives/CB6GHNLG0/p1595423095095200?thread_ts=1595351319.054400&cid=CB6GHNLG0).
For me, the most stable version is the one from after it became two times slower. I'm sorry for finding this:))))))))
My issue was also resolved
It seems this is the commit from which I built the image and I’ve been using it successfully since then
But yes, it’s slow
Thanks! So I’ve cut v0.3.8 https://github.com/mumoshu/terraform-provider-helmfile/releases/tag/v0.3.8
Starting with this version, the provider should run helmfile-diff only once during apply, which should fix the issue…
Also, since it reuses the output from the previous helmfile-diff run against the same resource attrs + helmfile build result, terraform apply results in one less helmfile-diff run, which should make it a bit faster to run
it seems to be working for me (again).. the nature of this issue makes it very hard for me to reliably test. your confirmation is much appreciated as usual
It now creates snapshots of helmfile-diff outputs under .terraform/helmfile/diff-*.txt. Ideally the provider should clean them up after a successful apply. I'll add that later if we confirm it's actually working
testing now sorry work came in
HAPPY DAYS IT WORKED!
And on top of that it was indeed faster
Thanks for sticking with me on this one @mumoshu
I’ll do some more testing on the staging environments and after some extensive testing will try to move them into production
woop woop it also sends environment variables properly so this is great
It did give a strange diff with some strange distribution of removed/added (a lot of red) but it did go through
Works for me too. I’ll keep an eye on it and test more cases
@mumoshu you rock)
true that
Just as info for a future version, not urgent since this does seem to work. The terraform diff shows additions/removals on top of the helmfile diff, which can be confusing
Should I create an issue with this?
awesome! thanks a lot for your patience and support
re: confusing diff output, i got to think that it's unavoidable due to the nature of terraform plan. it shows the diff for the latest helmfile diff output against the previous helmfile diff stored in the tfstate. maybe we'd better use helmfile template instead so that terraform can use that to show changes in manifests? diff_output can be just removed, or renamed to changes so that it is clear that terraform is showing a "diff of changes", rather than changes in the manifests
after asking the question myself it did make sense that it shows this indeed
that’s not a bad idea, I didn’t even use helmfile template until now but it does make sense indeed
I'm facing a strange issue where tf apply for the env ends up with the error map has no entry for key "securityService". Like there is no such key in values. But actually it's there, and all prior runs were successful. Cannot understand what went wrong. Probably I'll file an issue when I gather more information
It seems that it doesn't see the values file somehow
Found it. A values file was malformed: a colon was missing after one key, not related to the key from the error at all.
Re: diff_output, I was able to make the helmfile diff output be the whole diff shown in plan, not a "diff of diffs". I'll cut a release later
Re: the "map has no entry" error, glad to see you fixed it. so, to be extra sure, we don't need to fix helmfile/the provider for that?
no, everything is working great so far:)
Awesome! Thanks for all the feedback.
FYI: v0.3.9 has been released with the diff fix
Removed the initial release and starting a new one
Quick question while I run this. How would one be able to transfer data structures such as hashes/maps/dicts or lists from terraform to helmfile ? some decode on the helmfile part in the gotemplate?
I'd suggest using string interpolation w/ jsonencode on the terraform side
Like:
content = <<EOH
values:
- whatever: ${jsonencode(local.whatever)}
EOH
and then on the helmfile side jsondecode, I assume we have a gotemplate function to transform maps/lists passed from tf to helmfile?
yeah
I assume you can only pass values as env variables? with an external helmfile?
How about this?
content = <<EOH
values:
- whatever: ${jsonencode(local.whatever)}
---
{{ range $_, $item := .Values.whatever }}
...
{{ end }}
EOH
or
values = [
<<EOF
{"whatever": ${jsonencode(locals.whatever)}}
EOF
]
content = <<EOH
{{ range $_, $item := .Values.whatever }}
...
{{ end }}
EOH
have to try both
the latter would work with external ones
so I think 0.3.9 works: I see the initial output with the helmfile repo adds etc., the main helmfile, and then the diff in the actual values.yaml within the terraform diff
Doesn’t show with colors or just the original diff but seems to accurately show the difference
ah, so I disabled coloring recently cuz i thought it would conflict with terraform's own coloring
I’ll have to sit and think a bit what’s going on, today is one of the slowest days ever, need more coffee
But I’m inclined to (re)enable coloring by default, and add an option to disable it instead
https://github.com/mumoshu/terraform-provider-helmfile/issues/24
terraform-provider-helmfile version: 3df004fb52ba7c30f65d80eb0c3b537e0c4df7eb helmfile version: helmfile-0.125.0.high_sierra.bottle.tar.gz Overview We are using helmfile for managing Jenkins helm d…
So when changing a value in the values.yaml file
So it somehow shows the diff before this operation, the version now, and then the diff from now
The last part is totally accurate just not sure why I see the diff from before this deploy, or maybe I’m just confused
maybe you're saying that apply_output is confusing?
Actually apply output that’s at the very end (all in green but np) that’s perfectly accurate
apply_output is showing the previous apply_output (which contains the previously applied "diff") to be recomputed after apply
So I made that deployment, now all I'm changing is strictly LIMIT_REQUEST: "" and the configmap will be changed along with it
just changing 0 to 1 and backward
And i see the diff with 0 to 1 and now the 1 to 0
In theory shouldn’t it show just the latest change? - - - the current value + the changed value
ahhh….
I believe that's due to a stale cache. I used the sha256 hash of the helmfile build output in the names of the cache files, which are used to run helmfile diff only once to avoid the error we had before
"registry.terraform.io/-/helmfile" produced an invalid new value for
.diff_output: was cty.StringVal("Adding repo stable
any changes to values.yaml won’t change the hash value so it would definitely happen
(working on the fix
but this is more than great and functional, thank you so much
This is just feedback if you want to push it to another level it’s more than helpful
when changing a value in the values.yaml file
should be fixed in v0.3.10
🥰🥰
Not exactly sure why I keep getting "Error: the lock file (Chart.lock) is out of sync with the dependencies file (Chart.yaml). Please update the dependencies"
Could be an issue in my chart but I didn’t change it between tests
Running helmfile alone seems to be working though
So maybe something related to the provider
Thanks! Interesting: could you try running helmfile diff, template, build and apply alone?
The provider just delegates everything to those 4 helmfile commands. So as long as those commands work, the provider should work…
it seems helmfile template does fail indeed, i have to find out why
it’s most likely because the chart is being downloaded and needs to have the dependencies updated
yeah that may be relevant. helmfile template updates repos, runs helm dep build on local charts only, runs helm fetch on helm 2 only, and finally runs helm template.
this is a remote chart so not sure why it refers to a /tmp/ downloaded version of the chart
just to be extra sure - you are using helm3, right?
perhaps helmfile v0.125.1 should work differently https://github.com/roboll/helmfile/issues/1377
but not sure why it doesn’t fail on other helmfile commands for you
When running helmfile –environment –selector chart=<MY_CHART> template, all of the charts are fetched beforehand and not just the charts in the selector. Running the same command but with l…
helm 3 yes sir
is it a public chart, or private one?
helmfile version v0.125.0
public
could you share the chart location? i'll try reproducing it myself
it has a few dependencies which, granted, I need to update if I use it locally, but remotely i did not have to, not sure
Updated to the latest helmfile and still the same thing
Thanks. I reproduced it!
So the root cause seems to be that the chart has outdated dependencies stored in Chart.lock, which needs to be updated by running helm dep up before publishing the chart, i think
but this is a chart issue, so Helmfile should anyway have a way to gracefully handle it
Maybe helmfile can just warn about the outdated chart deps and proceed, instead of failing and forcing you to block until the chart is fixed upstream
rethinking it for a while, i’ve fixed it as a part of https://github.com/roboll/helmfile/pull/1400
In #1172, we accidentally changed the meaning of prepare hook that is intended to be called BEFORE the pathExists check. It broke the scenario where one used a prepare hook for generating the local…
so it doesn't run helm dep build on fetched charts anymore, which turns out to be unnecessary and safely skipped
wow that sounds great, also thanks for the help on the chart that’s outside of helmfile and the provider. I’m getting free consultation now
I need to document how to handle those dependencies, maybe remove the .lock, or update the dependencies as you said, or pin an exact version
It's hard to follow this thread. But it seems that with the most recent version, for subsequent runs where nothing is changed, tf apply prints out the apply result from the previous run, not one related to the current run. Will investigate further
@Andrew Nazarov AFAIK terraform apply outputs are computed from the latest tfstate. The helmfile provider does not "reset" the apply_output when there is no run, so the next terraform apply would still show the previous apply_output. Is this what you've observed?
Maybe I can try to make the provider "reset" apply_output on read, so that it's still shown in terraform output on a successful apply, but resets to an empty string on the next terraform apply.
Yeah. For the vast majority of outputs this behaviour seems natural. But I'm still not sure about apply_output though. Maybe we can keep it as it is. Let me file an issue and we can think about it for a while and continue discussing there
It seems diff output is also shown
ah, that's too bad. the provider does have some code to "reset" diff_output on read, so this strategy won't work for apply_output either
the next best workaround should be to let the provider show a diff that sets diff_output and apply_output to empty on terraform plan after the successful apply
https://github.com/mumoshu/terraform-provider-helmfile/issues/26
Not an urgent thing though:)
When there are no changes in manifests, apply_output and diff_output that are shown to the user contain information from the previous successful run. This could be misleading. Terraform apply says No …
thx!
Maybe I’ll find time to play with it by myself. But I need to understand how this provider works now since you’ve made a lot of changes recently. And I’m not quite sure I understand all tf internals right.
im still learning it, too. i'll try to answer questions if you have any. also, i recently started adding comments on code where i find it difficult to understand at a glance, so that may help
I'm on holiday for the time being, but I've seen the commit message of the latest helmfile, and it works along with the latest provider. thanks so much @mumoshu
Would like clarification so I can expand on the actual expression syntax used in this draft blog post I wrote on using expressions with Terraform for iteration with for_each. PR#40 - Iteration through list of objects
If you are up for it, I'd love any comments on the pull request itself, as I'm a bit unclear about whether this is a Terraform foreach type of construct. It seems to be using syntax that is partially go, with the for key,val in list syntax.
I'd like to understand this better, as I've seen the flatten function used before with some more complex cases, and I can't find any reference on the for each syntax itself explaining its schema, such as for <itemvariable> in <Collection>: <object> => <propertyforkey>, and as a result I'm guessing too much on this stuff.
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
Yes. I’ve read that before. I’m going to reread it again for fresh insight
Very Go inspired it seems, but its own functionality I guess.
The source value can also be an object or map value, in which case two temporary variable names can be provided to access the keys and values respectively:
[for k, v in var.map : length(k) + length(v)]
Finally, if the result type is an object (using { and } delimiters) then the value result expression can be followed by the ... symbol to group together results that have a common key: {for s in var.list : substr(s, 0, 1) => s... if s != ""}
the => operator in my .net world is typically a lambda, so I was also a bit confused by that here
The type of brackets around the for expression decide what type of result it produces. The above example uses [ and ], which produces a tuple. If { and } are used instead, the result is an object, and two result expressions must be provided, separated by the => symbol: {for s in var.list : s => upper(s)}
the flatten and setproduct pages have detailed examples on how to use them with for_each…
IMO, the most important paragraph in all the docs is this:
However, unlike most resource arguments, the for_each value must be known before Terraform performs any remote resource actions. This means for_each can't refer to any resource attributes that aren't known until after a configuration is applied (such as a unique ID generated by the remote API when an object is created)
you can get yourself into all sorts of trouble if you do not carefully consider how all the attributes in the for_each expression objects may be specified; see the sketch below
Definitely aware of that
Gone down that rabbit hole. I'm just more confused about advanced expressions to manipulate an object's structure
Basically, per that blog draft I showed, I previously had to do key-based lists, which is clunky. I figured out how to change a list of objects by using this to provide the key, but I'm still not fully versed in using flatten, nested for, etc. trying to wrap my head around it, as there is no "schema" doc + examples; it's kinda all mixed into a general doc like what you showed, which i find a bit confusing
the flatten example in the docs is doing exactly that
for subnet in local.network_subnets : "${subnet.network_key}.${subnet.subnet_key}" => subnet
the left side of => is the key, and becomes the ID for the resource. it must be unique or you have duplicate resource IDs, which does not work
you can dynamically construct that key, exactly as they show there
your yaml is a bit confusing to me, it's a list of maps of maps, whereas with terraform it would be cleaner if it were just a list of maps. i.e.
users:
- name: foobar1
email: [email protected]
- name: foobar2
email: [email protected]
- name: foobar3
email: [email protected]
and you’d then loop over it with for_each:
for_each = { for user in <users> : user.name => user }
you could of course mimic that exact structure in your data source, and just give for_each the map of maps… note there are no lists here, so no need to use for to build up the map
users:
foobar1:
name: foobar1
email: [email protected]
foobar2:
name: foobar2
email: [email protected]
foobar3:
name: foobar3
email: [email protected]
and the for_each:
for_each = <users>
i tend to prefer the former, a simple list of maps
in both cases, each.key is the user name, and each.value is the map of user attributes. and you can dot-index into the map, e.g. each.value.name and each.value.email
Yes, that's what I already do, the key. however, it adds an additional layer, and I was hoping to keep working with an object list and tell it inline what the key is, which my blog post showed works. I'm just not far beyond that with the more advanced flatten with nested objects and all
in your case, you can use a set also, because the name is your unique key… then you don’t need to construct the map
users:
- name: foobar1
email: [email protected]
- name: foobar2
email: [email protected]
- name: foobar3
email: [email protected]
and the for_each:
for_each = <users>[*].name
well, no, then you can’t get to the user attributes… hmm…
@loren the PR for the blog post showed it seemed to work fine with what I did, it’s more the advanced usage that has me a bit stumped beyond this.
The foreach transformation i blogged about seemed to work, are you seeing that syntax fail for you too?
no i understand what’s working and what’s not, based on the data structures passed in and how the expressions are evaluated. with my prior comment, i was only speculating some on how to maybe simplify the expression, but decided it wouldn’t work for this use case
hi, hoping someone has run into this and was able to come up with a solution.
I have the following
resource "tfe_variable" "lt_vpc_security_group_ids" {
category = "terraform"
key = "lt_vpc_security_group_ids"
value = var.lt_vpc_security_group_ids
hcl = true
workspace_id = tfe_workspace.id
}
If i run terraform plan, this error is thrown
Inappropriate value for attribute "value": string required.
how can i use a variable in the value w/out running into this? Maybe some way of escaping the value?
I think you just need to use join()
thx Joe, doesn't seem to work, as it's expecting a list(string) type
The docs seem to show a string for value
Or am I misunderstanding you?
Oh you’re trying to reference the variable and not the value?
correct sorry for the confusion
here’s my variable.tf
variable "lt_vpc_security_group_ids" {
type = list(string)
description = "A list of security group to associate with"
#default = []
}
So my understanding is you need to join that list into a single string so it can be used in the value field of tfe_variable
hmm, possibly. can you give an example?
not sure i understand
the variable type is set to use list(string)
maybe i can change it to use value = "[var.lt_vpc_security_group_ids]"
and the final result if i use the above.
2020-07-22
hello, is there a terraform command that allows fetching and inspecting data output? for example, I would like to get and print the values available at data.terraform_remote_state.core_state.outputs.mymodule.* in the terraform console
you can do data.terraform_remote_state.core_state.outputs.mymodule and it should show you all values
haaa perfect ! thanks
when I fetch it, terraform console prints an id value for the given data.terraform_remote_state key, and the output value is the expected one.
data.terraform_remote_state.core_state.outputs.placement_groups[0].entry_point.id
/subscriptions/07xyz/resourceGroups/tf_stage_placement_groups/providers/Microsoft.Compute/proximityPlacementGroups/tf_stage_placement_group_entry_point
but when I give it to a module
module "haproxy" {
placement_group_id = data.terraform_remote_state.core_state.outputs.placement_groups[0].entry_point.id
}
I have the error
on main.tf line 74, in module "haproxy":
74: placement_group_id = data.terraform_remote_state.core_state.outputs.placement_groups[0].entry_point.id
|----------------
| data.terraform_remote_state.core_state.outputs.placement_groups[0] is tuple with 1 element
This value does not have any attributes.
what should I change to give the id to my module ?
solved! little shenanigan: it seems terraform console removed one [] from the output
data.terraform_remote_state.core_state.outputs.placement_groups[0][0].entry_point.id
solves it ..
Scheduled Maintenance | Terraform Cloud THIS IS A SCHEDULED EVENT Jul 26, 07:00 - 09:00 UTCJul 22, 08:30 UTC Scheduled - We will be undergoing a scheduled maintenance for Terraform Cloud on July 26th at 7:00am UTC. During this window, there may be interruptions to terraform run output, and some runs might be delayed.
HashiCorp Services’s Status Page - Scheduled Maintenance | Terraform Cloud. |
Hi, I want to create a cross-account peering. I followed this module https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account
When I run tf apply, I get an error like this
Error creating VPC Peering Connection: UnauthorizedOperation: You are not authorized to perform this operation. Encoded authoriza
tion failure message: f0JL_4uWp-Mwhq3z3IXzmRBpgU1j5tAqDBCqcAadPglsZUj221QT_jFXXJiZU4Ff--t_mdBRNLntwBgWBUvbLS8Z_MGQAMbmRg07sLwu66nJas330iV5tosDVC1RVPsW07ooR9M2nr2zyqcz8QTIe0m1dKCJ1MNrBJNS980XtIpmuvv6Zurajip2-3GAyXaRxM6eQj3IYz-rI5seHfoSdiA34k3Tm4rFEx7ITP2aIHgc5tmsH-OMltrn0Nr6z-vgAtxq4SYYFyNNOVLEL9wxXMn1JDfEGKqxVaN88cw4KbuErUPTwwquTR6p9PkfBv_Z9ADm8xcKuzde
f3t9i9o_WxF2_Y01ybW1I-Avb9wBhU38RJ7WAaT-meVRqF0iJMrvg0ZAsaFcAl44J98XItv1Jr0xUozJNQmWQbYwvAOEcdkRvtfOlElUhUsqVdGDMCfDtmCTFdDqQWAgR-KqjZLJpPHqMpyd6g5YF1wRtZkm9IrLg8L5ZXuCuoURvR8Q4
AvCRPNuTHDhfSxhotKP9-D_rgr3T1YixQOwwppw1u6BuXIWTsF0GkshxxP55i7xMecabyop1T7yUyWhkfBOvFCGgDAwfddHMOT_7l-o_qmm7z-iiZRpsRo2cF4HbBauzcQbOKC2RO1CS5M5HtiXx29YoOmo272EhNL7fUl2N3PQ9QEfPnjfRAG_xlf4CnBT6jzohOYEn7NoLFhhJyZLtj3HwFYIQcoXzhtJu7s
So I tried to decode this message with aws sts decode-authorization-message ...
{
"allowed": false,
"explicitDeny": false,
"matchedStatements": {
"items": []
},
"failures": {
"items": []
},
"context": {
"principal": {
"id": "AROARX57SNBJI7LD7TL5Q:1111111111111111",
"arn": "arn:aws:sts::22222222222:assumed-role/r_ops_peering_access/1111111111111111"
},
"action": "ec2:CreateVpcPeeringConnection",
"resource": "arn:aws:ec2:us-west-2:22222222222:vpc/vpc-33333333333",
"conditions": {
"items": [
{
"key": "22222222222:Env",
"values": {
"items": [
{
"value": "Prod"
}
]
}
},
{
"key": "ec2:ResourceTag/Env",
"values": {
"items": [
{
"value": "Prod"
}
]
}
},
{
...
This is my IAM policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:CreateRoute",
"ec2:DeleteRoute"
],
"Resource": "arn:aws:ec2:*:XXXXXXXX:route-table/*"
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcPeeringConnections",
"ec2:DescribeVpcs",
"ec2:ModifyVpcPeeringConnectionOptions",
"ec2:DescribeSubnets",
"ec2:DescribeVpcAttribute",
"ec2:DescribeRouteTables"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AcceptVpcPeeringConnection",
"ec2:DeleteVpcPeeringConnection",
"ec2:CreateVpcPeeringConnection",
"ec2:RejectVpcPeeringConnection"
],
"Resource": [
"arn:aws:ec2:*:XXXXXXXX:vpc-peering-connection/*",
"arn:aws:ec2:*:XXXXXXXX:vpc/*"
]
},
{
"Effect": "Allow",
"Action": [
"ec2:DeleteTags",
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:XXXXXXXX:vpc-peering-connection/*"
}
]
}
Very confused now. The error message says I don't have permission to create a Peering Connection, but I have this permission in my Policy. Any idea?
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
v0.13.0-rc1 0.13.0-rc1 (July 22, 2020) BUG FIXES: command/init: Fix confusing error message for locally-installed providers with invalid package structure (#25504) core: Prevent outputs from being evaluated during destroy (<a href=”https://github.com/hashicorp/terraform/issues/25500” data-hovercard-type=”pull_request”…
When installing a provider which has an invalid package structure (e.g. a missing or misnamed executable), the previous error message was confusing: This PR adds support for a post-install provide…
If we're adding a node to remove a root output from the state, the output itself does not need to be re-evaluated. The exception for root outputs caused them to be missed when we refactored res…
v0.12.29 Version 0.12.29
v0.12.29 0.12.29 (July 22, 2020) BUG FIXES: core: core: Prevent quadratic memory usage with large numbers of instances by not storing the complete resource state in each instance (#25633)
This backports #25544, but is not a direct cherry-pick of the commit due to significant changes in the types between 0.12 and 0.13. The AbstractResourceInstance type was storing the entire Resource…
I am looking for an EKS module
Look no further! https://github.com/cloudposse/terraform-aws-eks-cluster
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
2020-07-23
Hello everyone. A colleague of mine asked me to present this pull request https://github.com/cloudposse/terraform-aws-elasticsearch/pull/61 adding the possibility to insert a different aws ec2 service identifier
what Adds possibility to insert different aws ec2 service identifier. why Insert aws ec2 service identifier different than ["ec2.amazonaws.com"] is necessary, for example in china accou…
Review done. Not far from acceptance if it passes our code checks.
Thanks for the feedback. We’ll keep working on it and get back to you when it is updated
#terraform I have the following error on ....\modules\BaseInfrastructure\main.tf line 225, in module "diagnostic_settings": 225: resource_id = azurerm_virtual_network.this[each.key].id
The "each" object can be used only in "resource" blocks, and only when the "for_each" argument is set.
#terraform Here is the snippet of the resource and the module call I am using: module "diagnostic_settings" { source = "../DiagnosticSettings" resource_id = azurerm_virtual_network.this[each.key].id
resource "azurerm_network_security_group" "this" {
  for_each = var.network_security_groups
  name     = each.value["name"]
You need a for_each on the block where you are trying to use each (your module diagnostic_settings in this case).
yes, I have it in the resource. if you see the resource details I have pasted below
resource "azurerm_network_security_group" "this" {
  for_each = var.network_security_groups
  name     = each.value["name"]
you are using each in your module "diagnostic_settings", which has no for_each:
module "diagnostic_settings" {
source = "../DiagnosticSettings"
resource_id = azurerm_virtual_network.this[each.key].id
can you tell me how to use it
Can you describe what you are trying to achieve?
I am trying to enable diagnostics logging for all network security groups
resource "azurerm_network_security_group" "this" {
  for_each            = var.network_security_groups
  name                = each.value["name"]
  location            = local.location
  resource_group_name = azurerm_resource_group.this.name
module "diagnostic_settings" {
  source             = "../DiagnosticSettings"
  resource_id        = lookup(azurerm_virtual_network.this[each.key].id)
  retention_days     = var.retention_days
  storage_account_id = var.diagnostics_storage_account_name
}
azurerm_network_security_group is the resource to which I am calling diagnostic_settings module to enable diag settings
I believe something like this would work:
module "diagnostic_settings" {
for_each = var.network_security_groups
source = "../DiagnosticSettings"
resource_id = lookup(azurerm_virtual_network.this[each.key].id)
retention_days = var.retention_days
storage_account_id = var.diagnostics_storage_account_name
}
that may not work: https://github.com/hashicorp/terraform/issues/17519
Is it possible to dynamically select map variable, e.g? Currently I am doing this: vars.tf locals { map1 = { name1 = "foo" name2 = "bar" } } main.tf module "x1" { sour…
may require 0.13 per that issue
hi, I may help but would need the content of "var.network_security_groups" to check that it's a map etc ..
and I get the following error
Error: Reference to "each" in context without for_each
on ....\modules\BaseInfrastructure\main.tf line 225, in module "diagnostic_settings": 225: resource_id = azurerm_virtual_network.this[each.key].id
The "each" object can be used only in "resource" blocks, and only when the "for_each" argument is set.
i guess you need azurerm_virtual_network.this[${each.key}].id
Hi i have MFA setup on certain AWS accounts. With the AWS CLI, i get prompted to enter the serial when executing commands in those accounts. How can i use Terraform to provision infrastructure there with MFA enabled?
aws-vault
thx
Hello, I am trying to use the cloudposse/kms-key/aws
public module. On the Terraform Registry, I do not see the option to configure a custom KMS key policy, but when I click the link to go to the GitHub repo, I see that as an available input. Unfortunately, I haven't been able to set up a custom policy. Is this possible to do using this module? Thanks in advance!
Unfortunately, I haven’t been able to setup a custom policy.
can you elaborate?
what happens when you try
Whenever I grab a known working policy that I have that is not using this module and I try to inject it into this module, Terraform v0.12.24 outputs an error saying:
Error: Unsupported argument
on kms.tf line 87, in module "cloudtrail_CMK":
87: policy = <<JSON
An argument named "policy" is not expected here
@Erik Osterman (Cloud Posse)
can you share the actual invocation of the module?
sure
module "cloudtrail_CMK" {
source = "cloudposse/kms-key/aws"
version = "0.2.0"
name = "${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}_CMK"
description = "KMS key for S3 buckets and objects"
deletion_window_in_days = 10
enable_key_rotation = true
alias = "alias/${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}_CMK"
policy = <<JSON
{
"Version": "2012-10-17",
"Id": "key-default-1",
"Statement": [
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::xxxxxxxxxxxx:root"
},
"Action": "kms:*",
"Resource": "*"
}
]
}
JSON
tags = "${merge(
var.DEFAULT_TAGS,
map(
"Name", "${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}",
"Environment", terraform.workspace
)
)}"
}
Do you know what I could be doing wrong? @Erik Osterman (Cloud Posse)
I appreciate the help btw.. I’ve been scratching my head on this for a little while now..
hrmm.. nothing jumps out at me immediately. Some other feedback though:
tags = "${merge(
var.DEFAULT_TAGS,
map(
"Name", "${var.APPLICATION}_${terraform.workspace}_${var.CUSTOMER_MANAGED_KEYS[4]}",
"Environment", terraform.workspace
)
)}"
this is pre-0.12 syntax
it might work, but it’s not leveraging the full capabilities of HCL2
You're using 0.2.0 of the terraform-aws-kms module. The problem is the policy argument was not in that version.
Try:
source = "cloudposse/kms-key/aws"
version = "0.5.0"
if you’re on terraform 0.11, then there might not be support for this in our module.
Thank you, sir. I'll try again with the updated version. I'm using Terraform 0.12.24
I’ll be sure to grab the latest and greatest from GitHub next time. I think in the Terraform Registry it still referenced the older version so I didn’t think to check if there was a newer version
Our usage example might erroneously refer to 0.2.0. Typically our examples just pin to master and warn the user to pin to a version.
If I don’t specify a version, will it automatically pull the latest and greatest? I’d personally rather do that..
Oh, that’s not advisable though.
The problem is with so many modules, we can never ensure we do not break backwards compatibility.
So pin to the working version when you provision the infrastructure.
2020-07-24
I'm having a difficult time getting my providers set up right in the caller and called modules i'm using. For the most part, i'm trying to have my top-level mod providers look like this, where infra is the profile name of the master account:
provider "aws" {
region = var.aws_region
profile = "infra"
assume_role {
role_arn = "arn:aws:iam::${var.aws_account_id}:role/OrganizationAccountAccessRole"
}
forbidden_account_ids = local.forbidden_account_ids
}
Then, i pass them down – all but one of course has an alias:
module "stack_install" {
source = "../../../../../application-stack"
providers = {
aws = aws
aws.infra = aws.infra
aws.cf = aws
aws.cf-us-east-1 = aws.cf-us-east-1
aws.client-us-east-1 = aws.cf-us-east-1
aws.route53 = aws.infra
}
And in the called module, i just stub out the provider like this:
provider "aws" {
region = local.default_region
forbidden_account_ids = [local.master_account_id]
}
But i’m getting this error:
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
So, what am i doing wrong? I had full configs for some providers in the called modules, but moving that to the top-level mods I believe is the right thing to do, since TF seems to be subject to really weird dependencies, like one module relying on the provider from another module that was called as well.
why so many providers ?
providers = {
aws = aws
aws.infra = aws.infra
aws.cf = aws
aws.cf-us-east-1 = aws.cf-us-east-1
aws.client-us-east-1 = aws.cf-us-east-1
aws.route53 = aws.infra
}
what’s the difference b/w them?
region and role to assume?
check the regions in the child modules. They should be the same as in the top-level providers.
and you prob should not specify regions in the child modules' providers
I'm spinning up new accounts, with most stuff in us-east-2, but Cloudfront (cf) and some other stuff (ACM) has to be in us-east-1, and we have a master account that has our DNS as well as a few other global resources that we need to hit as well. So aws is just the new account in us-east-2, aws.infra is the master account, and cf-us-east-1 is the new account in east. Region is whatever region we are deploying this stack to; now, us-east-2.
Regarding the role assumption, i’m spinning up this stack in separate accounts, and i’m using role assumption to access them.
they will be taken from the top-level providers
I think you have to specify regions in the child mod providers, or you can’t stub it out. I’ll confirm that.
I think this should not be in a child module
provider "aws" {
region = local.default_region
forbidden_account_ids = [local.master_account_id]
}
in each resource in child modules, you just use provider = "aws" (or other providers)
perhaps not, but with all this cross-account access, i’ve hit that wall and avoided polluting our master account, so i think it’s valuable.
I was hoping to avoid explicitly setting the provider for the +150 resources in this build. Obviously, I use the provider alias to reference everything else.
terraform recommends not to declare providers in child modules, for many diff reasons. One of the reasons is that if you define a provider in a child module and then update it, you will not be able to destroy the resources created by that provider
define everything at top-level and just provide the providers to all child modules
wait… it was my understanding that you need to stub out the provider. maybe it's because pycharm barfs if you don't have a "local" provider,
So, just completely remove the provider defs from the submods?
When I remove the provider defs from the submods, I get errors like this:
Error: missing provider module.stack_install.provider.aws.cf
Also, my IDE (pycharm) does not recognize provider refs, unless the provider is defined in the module. I know that’s not exactly a TF issue, but one that impacts my workflow
I’m going to see what happens, when i completely clear out the state and start over.
we’ve found we need to stub the provider alias in a module, in order to use it, e.g.:
provider "aws" {}
provider "aws" {
alias = "cf"
}
but every other piece of the provider config goes in the top-level root config
per terraform, and what we saw many times, you should not define providers in child modules, nor mix the definitions at top-level and in child modules
but that doesn’t entirely work… if you use an “alias” in a resource in the module, then you must stub the provider with the alias name
@loren something like that is ok, you just should not define any other properties of those providers
roger, yeah, that tracks what we’ve seen also
Awesome. that generally jibes with what I’ve been trying to move to. I kicked this can down the road a few months ago and it’s time to pay the piper.
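To summarize the pattern this thread converges on, here is a minimal sketch (module path, regions, and alias names are illustrative): the root configuration owns the full provider definitions and passes them down, while the child module contains only empty stubs for the aliases its resources reference.
# Root configuration: the only place providers are fully configured
provider "aws" {
  region = "us-east-2"
}
provider "aws" {
  alias  = "cf"
  region = "us-east-1"
}
module "stack_install" {
  source = "./modules/stack" # illustrative path
  providers = {
    aws    = aws
    aws.cf = aws.cf
  }
}
# Child module (./modules/stack): stubs only, no regions or other properties
provider "aws" {}
provider "aws" {
  alias = "cf"
}
# Resources in the child module can then select the aliased provider
resource "aws_acm_certificate" "cert" {
  provider          = aws.cf
  domain_name       = "example.com" # illustrative
  validation_method = "DNS"
}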
Anyone familiar with the TGW for terraform? Why does it create 2x transit gateway route table for me?
https://github.com/terraform-aws-modules/terraform-aws-transit-gateway/blob/master/main.tf#L39-L51
Terraform module which creates Transit Gateway resources on AWS - terraform-aws-modules/terraform-aws-transit-gateway
you can use the default route table, but it has a lot of restrictions
e.g. you can’t use the default route table when you provision TGW in multi-account env, share the TGW with the organization using AWS RAM and attach VPCs from different accounts to the TGW
so, by creating a separate route table, you can use it in all different scenarios (single account and multi-account sharing)
thanks for the response, i guess terraform created the 2 by default? and im guessing the one with empty name is the default?
yes, it says “Default” in the header
ah ok “default association….”
by the way, looking at the template, is there a way to disable the creation of RAM? my org hasnt been properly set up sharing yet.
(those terraform-aws-modules are not CloudPosse modules, but I guess you can open a PR if you need any new feature)
and the module has a var var.share_tgw
to enable/disable sharing
i set share_tgw
to false but somehow it still tried to create ram
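For reference, a minimal sketch of the intended usage (module version and name are illustrative; per the thread, share_tgw is the module’s flag for the RAM sharing resources, even though it reportedly still created them here):
module "tgw" {
  source  = "terraform-aws-modules/transit-gateway/aws"
  version = "~> 1.0" # illustrative pin

  name = "my-tgw" # illustrative

  # Intended to skip the aws_ram_resource_share / principal association resources
  share_tgw = false
}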
and i found https://github.com/cloudposse/terraform-aws-transit-gateway, but it looks kind of empty. If you could tweak it with my feedback that would be great.
Contribute to cloudposse/terraform-aws-transit-gateway development by creating an account on GitHub.
yea, we started the module, but implementing it for a single account is easy (but not too useful), implementing it for multi-account is not simple (and we use TGW for multi-account environments). We’ll get back to it soon
hey @Andriy Knysh (Cloud Posse) sorry for bugging, but have you tried spinning up TGW with the complete example from https://github.com/terraform-aws-modules/terraform-aws-transit-gateway? To me that doesn’t spin up the vpc attachment nor the routes within the tgw route table. I’ve been banging my head on this for a while now…
no, we did not try that module
Hi, I’m trying to use terraform-aws-ecs-codepipeline to pull from a personal github repo, but the module thinks my gh username is a gh org. I get an error:
GET https://api.github.com/orgs/brietsparks: 404 Not Found []
on .terraform/modules/ecs_push_pipeline.github_webhooks/main.tf line 7, in provider "github":
7: provider "github" {
Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline
@Briet Sparks can you share the entire module block?
module "ecs_push_pipeline" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-codepipeline.git?ref=master>"
name = "guestbook-ci"
region = var.region
repo_owner = "brietsparks"
github_webhooks_token = var.github_webhooks_token
repo_name = "guestbook"
image_repo_name = "guestbook"
branch = "ci-pract"
service_name = "guestbook"
ecs_cluster_name = "guestbook"
privileged_mode = "true"
}
2020-07-25
Anyone else observing Windows EC2 instances and Route53 records taking more time than normal to create? I notice these two resources take a considerable amount of time even though I can see them created through the AWS Console already. A Windows EC2 instance is taking ~7 minutes
2020-07-26
Scheduled Maintenance | Terraform Cloud Jul 26, 07:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jul 22, 08:30 UTC Scheduled - We will be undergoing a scheduled maintenance for Terraform Cloud on July 26th at 7:00am UTC. During this window, there may be interruptions to terraform run output, and some runs might be delayed.
Scheduled Maintenance | Terraform Cloud Jul 26, 07:49 UTC Completed - The scheduled maintenance finished successfully. The system is fully operational again.Jul 26, 07:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jul 22, 08:30 UTC Scheduled - We will be undergoing a scheduled maintenance for Terraform Cloud on July 26th at 7:00am UTC. During this window, there may be interruptions to terraform run output, and some runs might be delayed.
2020-07-27
hello, I manage terraform code in multiple repos. Is there a way to add a resource tag recording the repos name ? alternatively I can manually set it.
Assuming the folder the repo is checked out into is the same as the repo you can use an inline command to resolve the git repo and pass it to terraform command-line
terraform [...] -var "reponame=$(basename $(git rev-parse --show-toplevel))"
It’s not very resilient, and I think I would prefer to use a vars file instead as it only needs setting once per repo…
We use an internal module to run a similar command. Hoping to open source it eventually. For now Henry’s method would work
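A sketch of the receiving side, assuming a reponame variable fed by Henry’s command (the variable, local, and tag key are illustrative):
variable "reponame" {
  type        = string
  description = "Name of the git repo this configuration lives in"
  default     = "unknown"
}
locals {
  common_tags = {
    Repository = var.reponame
  }
}
resource "aws_s3_bucket" "example" { # illustrative resource
  bucket = "my-example-bucket"
  tags   = local.common_tags
}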
Hi, any thoughts about Terraform modules coming from registry vs git? I can’t find any benefit of the registry other than centralizing modules and abstracting access to a different layer…
It doesn’t seem like there’s a way to use modules from the TF registry that’s free for private modules, but it does allow you to use the version
argument of a module.
whereas the git/ssh method requires you to use the ?ref=tags/x.y.z
in order to pin down a version
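Concretely, the two pinning styles look like this (module names and versions are illustrative):
# Registry source: pinned with the version argument
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"
}
# Git source: pinned with a ?ref query string
module "vpc_from_git" {
  source = "git::https://github.com/terraform-aws-modules/terraform-aws-vpc.git?ref=tags/v2.44.0"
}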
There was another thread about this. Some differences with registry:
• Can only get released versions
• Faster: downloads a tarball instead of cloning the repo
• A version is a release, versus a tag that could be changed
There are private registries available, including TF Enterprise and some open source tools
@Steven git tags can be re-pushed (with force) and this isn’t ideal from the security perspective. But couldn’t one use a similar attack vector by removing and recreating a specific Terraform registry version?
Terraform registry is based on github releases. I don’t think you can overwrite a release, but you can delete a release then recreate the same version. So from a security perspective, it is only a little harder to change. For me the first 2 reasons are the biggest wins
@Steven going through the code, I found that one could use the depth=1
parameter: https://github.com/hashicorp/go-getter/blob/3db8f8b08debed6f7d341e147435feefc2d3def3/get_git.go#L169
Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter
and here is a funny thing about tar vs clone:
$ time curl -O https://codeload.github.com/gatsbyjs/gatsby/tar.gz/gatsby%402.24.11
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 380M 0 380M 0 0 1195k 0 --:--:-- 0:05:25 --:--:-- 2493k
real 5m 25.97s
user 0m 11.64s
sys 0m 15.40s
$ time git clone https://github.com/gatsbyjs/gatsby.git --depth 1 --branch gatsby@2.24.11
Cloning into 'gatsby'...
remote: Enumerating objects: 7360, done.
remote: Counting objects: 100% (7360/7360), done.
remote: Compressing objects: 100% (6224/6224), done.
remote: Total 7360 (delta 715), reused 4268 (delta 459), pack-reused 0
Receiving objects: 100% (7360/7360), 367.08 MiB | 1.49 MiB/s, done.
Resolving deltas: 100% (715/715), done.
Note: switching to '0f2be968c0e3b889650f5fa5dd05692b50cd7b2a'.
...
Updating files: 100% (6347/6347), done.
real 4m 13.89s
user 0m 43.44s
sys 0m 26.42s
Might clone be downloading things in parallel?
Can only get released versions -> I think this could also be seen as a limitation, because one can get only releases. With git, one can get releases (tags), branches, or even specific commits
Currently I personally don’t see any technical benefits of the registry :neutral_face:
The only benefit from the user perspective is the clear interface for authentication: terraform login.
The git alternative looks more complex and one would definitely need a wrapper.
@Steven I really appreciate your feedback so far!
I see that there are issues with the depth
parameter: https://github.com/hashicorp/terraform/issues/23641
Terraform Version Terraform v0.12.17 Terraform Configuration Files module "test" { source = "git://github.com/terraform-providers/terraform-provider-azurerm.git//examples/app->…
hello,
is it possible to override the default_actions config in terraform-aws-modules/alb/aws
v5.6.0? I notice it automatically creates an lb listener with default_actions, but I’d like to tweak it a bit
resolved.
anyone use a module to create scheduled ecs tasks ? looking at this module, but open to other modules too.
- https://github.com/cloudposse/terraform-aws-ecs-alb-service-task (56 stars but not as applicable)
- https://github.com/turnerlabs/terraform-ecs-fargate-scheduled-task (33 stars, seems applicable but not possible to disable iam resources) - https://github.com/dxw/terraform-aws-ecs-scheduled-task (21 stars, same, cannot disable iam resources) - https://github.com/tmknom/terraform-aws-ecs-scheduled-task (8 stars, can disable iam resources)
i dont believe there is a cloudposse one except for the cloudposse alb one.
leaning on the last one since it has variables to turn on and off the iam roles by passing existing ones. would love to hear more thoughts on this.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Aug 05, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
How do I properly get outputs working for the S3 module?
source = "cloudposse/s3-bucket/aws"
version = "0.14.0"
I’m trying to do this but it isn’t working..
output "this_s3_bucket_id" {
description = "The name of the bucket."
value = module.cloudtrail_s3_bucket.aws_s3_bucket.default.*.bucket_id
}
I tried to copy the outputs from the GitHub repo thinking that would help but it didn’t..
never mind. Found the examples directory with the outputs. Sorry about that
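For the record, resources inside a module can’t be referenced from the calling configuration; only the module’s declared outputs can. Assuming the cloudposse/s3-bucket module exposes a bucket_id output (its examples suggest it does), the output would look like:
output "s3_bucket_id" {
  description = "The name of the bucket"
  value       = module.cloudtrail_s3_bucket.bucket_id
}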
all Cloud Posse modules have an examples directory with a working example, which is deployed to AWS using terratest in the test directory
appreciate all the hard work getting these modules created and available to the public!
Been looking more into how to deny CRUD on AWS resources outside of Terraform.
It seems like it may be possible with a conditional policy using StringLike on the UserAgent, since the Terraform user agent contains the word terraform.
Anyone used this approach before?
I know @Jan implemented this before. His implementation included 2 accounts per user. One account permitted read-only webconsole access, and the other account admin access, but no web console access.
thats very cool. glad it’s been tried and tested. would love to hear more input from him
Ah yea that was fun
in the end through the simple truth though is that if you have a power user (CLI intended use) that user can just open the console via cli if they really wanted to
@sweetops171 do you mind sharing your comparison for the aws:useragent
? I haven’t tried this yet but was thinking about testing/implementing this…
"Statement":[
{
"Sid": "Allow kms:PutKeyPolicy only if executed using Terraform",
"Effect": "Allow",
"Principal": "*",
"Action": ["kms:PutKeyPolicy"],
"Resource": [
"arn:aws:kms:us-east-1:012345678:key/guidguid-guid-guid-guid-guidguidguid"
],
"Condition": {
"StringLike": {"aws:UserAgent": "*terraform*"}
}
}
]
where the full user agent is something like this
aws-sdk-go/1.32.12 (go1.13.7; darwin; amd64) APN/1.0 HashiCorp/1.0 Terraform/0.12.26 (+https://www.terraform.io)
2020-07-28
Hi everybody
Anyone have use the terraform-aws-key-pair
module on Terraform Cloud?
I’ve used it in a new workspace, and every time I queue a plan, it gives me the following error:
what did you set for the ssh_public_key_path
variable ?
secrets
I guess I’ve just used the default value
ok try with an absolute path or “./secrets”
you’re trying to use an existing key right?
yes, correct Joe
Will try with an absolute path
Same error Joe
Error: Error in function call
on .terraform/modules/aws_key_pair/main.tf line 30, in resource "aws_key_pair" "imported":
30: public_key = file(local.public_key_filename)
|----------------
| local.public_key_filename is "./secrets/acme-dev-myapp.pub"
Call to function "file" failed: no file exists at secrets/acme-dev-myapp.pub.
hmm the tests run with these values: https://github.com/cloudposse/terraform-aws-key-pair/blob/master/examples/import-key/fixtures.us-east-2.tfvars
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
./ will be relative to your module path which can vary a bit depending on how you’re running terraform
yeah, I saw that as well. Unfortunately, I’m having some issues using this module.. won’t use it in future projects
Basically the workaround I did now, is to leave generate_ssh_key = true
this way it doesn’t mess with the key pair
that is strange - is it generating the files for you?
if you can, please create an issue in github
also you could try the previous release 0.11.0
Error: Error in function call
on .terraform/modules/aws_key_pair/main.tf line 30, in resource "aws_key_pair" "imported":
30: public_key = file(local.public_key_filename)
|----------------
| local.public_key_filename is "secrets/acme-dev-myapp.pub"
Call to function "file" failed: no file exists at secrets/acme-dev-myapp.pub.
hello, you should use path.root
like this:
public_key = file("${path.root}/blah/${local.public_key_filename}")
Hi @Pierre-Yves, is path.root available for Terraform Cloud?
I guess so, it’s in Terraform 0.12.
Also you may want to consider storing the ssh key in vault and not in the repos
Now I can’t queue any plans because it will force a new key pair to be created..
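For reference, a minimal sketch of that workaround, with generate_ssh_key = true so the module creates the key material itself (the ref and the namespace/stage/name values are illustrative; variable names follow the module’s example fixtures):
module "aws_key_pair" {
  source = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=tags/0.12.0" # illustrative ref

  namespace           = "acme"
  stage               = "dev"
  name                = "myapp"
  ssh_public_key_path = "${path.root}/secrets"
  generate_ssh_key    = true
}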
hello, I am looking for a way to simplify this code that generates the Azure LB config with a for or for_each loop. Maybe the data structure needs to be changed; the point is that I would like to loop over the public IPs (ip1 and ip2) and then over each remote_port and lb_port. Can you help?
locals {
lbconfig = {
ip1 = {
remote_port = {
http = ["Tcp", "80"]
https = ["Tcp", "443"]
}
lb_port = {
http = ["80", "Tcp", "80"]
http = ["443", "Tcp", "443"]
}
}
ip2 = {
remote_port = {
http = ["Tcp", "80"]
https = ["Tcp", "443"]
}
lb_port = {
http = ["80", "Tcp", "80"]
http = ["443", "Tcp", "443"]
}
}
}
}
resource "azurerm_lb_rule" "azlb" {
count = length(local.lbconfig["ip1"]["lb_port"])
resource_group_name = var.resource_group_name
loadbalancer_id = azurerm_lb.azlb.id
name = "${var.prefix}-${var.env}-${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"
protocol = element(local.lbconfig["ip1"]["lb_port"]["${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"], 1)
frontend_port = element(local.lbconfig["ip1"]["lb_port"]["${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"], 0)
backend_port = element(local.lbconfig["ip1"]["lb_port"]["${element(keys(local.lbconfig["ip1"]["lb_port"]), count.index)}"], 2)
frontend_ip_configuration_name = var.frontend_name
enable_floating_ip = false
backend_address_pool_id = azurerm_lb_backend_address_pool.azlb.id
idle_timeout_in_minutes = 5
probe_id = element(azurerm_lb_probe.azlb.*.id, count.index)
depends_on = [azurerm_lb_probe.azlb]
}
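One way to answer the original question is to flatten the nested map into a single map and drive the resource with for_each instead of count. A sketch, assuming the lbconfig structure above (only the looping parts are shown; the probe and remaining arguments would stay as in the original resource):
locals {
  # Keys become e.g. "ip1-http", "ip2-https"
  lb_rules = merge([
    for ip, cfg in local.lbconfig : {
      for name, rule in cfg.lb_port :
      "${ip}-${name}" => {
        frontend_port = rule[0]
        protocol      = rule[1]
        backend_port  = rule[2]
      }
    }
  ]...)
}
resource "azurerm_lb_rule" "azlb" {
  for_each                       = local.lb_rules
  name                           = "${var.prefix}-${var.env}-${each.key}"
  protocol                       = each.value.protocol
  frontend_port                  = each.value.frontend_port
  backend_port                   = each.value.backend_port
  resource_group_name            = var.resource_group_name
  loadbalancer_id                = azurerm_lb.azlb.id
  frontend_ip_configuration_name = var.frontend_name
  backend_address_pool_id        = azurerm_lb_backend_address_pool.azlb.id
}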
done something similar, i hope this helps
dev_env_apps = { zeppelin = 8890, spark = 18080, master = 50070 }
resource "aws_alb_target_group" "this" {
for_each = var.dev_env_apps
name = "${var.tags.environment}-${each.key}-alb-tg"
port = each.value
protocol = "HTTP"
vpc_id = var.vpc_id
deregistration_delay = 30
target_type = "instance"
tags = var.tags
}
resource "aws_alb_listener_rule" "this" {
for_each = var.dev_env_apps
listener_arn = var.alb_lstnr_arn
action {
target_group_arn = aws_alb_target_group.this[each.key].arn
type = "forward"
}
condition {
field = "host-header"
values = ["${each.key}.*"]
}
}
resource "aws_lb_target_group_attachment" "this" {
depends_on = [aws_emr_cluster.this]
for_each = var.dev_env_apps
target_group_arn = aws_alb_target_group.this[each.key].arn
target_id = data.aws_instance.master.id
port = each.value
}
resource "aws_route53_record" "this" {
for_each = var.dev_env_apps
zone_id = var.zone_id
name = "${each.key}.${var.zone_name}"
type = "A"
alias {
name = var.alb_dns_name
zone_id = var.alb_zone_id
evaluate_target_health = true
}
}
anyone come up with a terraform method of switching between launch_type=EC2
and launch_type=FARGATE
with zero downtime ? looking for a terraform-y way to do this.
im wondering if the code deploy method would work
Hi all, can anyone direct me on how to use multiple_definitions from terraform-aws-ecs-container-definition with the module terraform-aws-ecs-alb-service-task?
for the container_definition_json
argument of the ecs_alb_service_task
module
simply put in a list of container json
container_definition_json = jsonencode([
module.my_container.json_map_object,
module.my_other_container.json_map_object,
module.my_other_other_container.json_map_object,
])
Hi thanks for the tip, I was trying that earlier and I get “An output value with the name “json_map” has not been declared”
the new container definition now uses a different output
what version of the container definition are you using
try using this output instead json_map_encoded
I have just switched to 0.38.0, going to try your suggestion now, will update you shortly.
I now see the following error: Error: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go value of type ecs.ContainerDefinition
on .terraform/modules/ecs_alb_service_task/main.tf line 34, in resource "aws_ecs_task_definition" "default":
34: resource "aws_ecs_task_definition" "default" {
Maybe try "[${module.container.json_map_encoded}]"
to see if it works
Then add additional containers to that one at a time
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
Guys, sorry for the delay, I will try your suggestion @RB and I will check the updated entries from that link @Aleksandr Fofanov and let you know if they work, will probably be tomorrow before you hear back from me, appreciate the help so far.
np. this person asked a similar question to you.
Multi-container task definition: The current module implementation assumes one container per task definition. AWS however allows multiple container definitions per single task. terraform-aws-ecs-alb…
2020-07-29
Hello, I am looking for resources and examples on how to use terraform console.
There is not much information around. Can you point me to videos or web pages?
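For what it’s worth, terraform console is an interactive REPL for evaluating Terraform expressions (and, in an initialized workspace, references to your state). A quick illustrative session; exact output formatting varies by Terraform version:
$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 2)
10.0.2.0/24
> [for s in ["a", "b"] : upper(s)]
[
  "A",
  "B",
]
> exit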
Hey all! Does anyone here use TFE? If so, do you know if it supports submodules? Example:
module "iam" {
# this can and should be pinned to a release tag using ?ref=tags/x.y.z
source = "[email protected]:terraform-aws-modules/terraform-aws-iam.git//modules/iam-assumable-role"
}
example for https://github.com/terraform-aws-modules/terraform-aws-iam/tree/master/modules/iam-assumable-role
terraform enterprise and terraform free should support both
Anyone got a recommended full fledged terraform template to run an app in ECS? (specifically on ec2)
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
set the launch_type to ec2 if it doesn’t default to it already
2020-07-30
Hello. Is anyone using the terraform-aws-ecs-web-app module with EFS Volumes on Fargate? I’m looking for an example on how to configure it
you can create your own container definition with any mounted volumes if you like and then pass it into the ecs-web-app module using var.container_definition
looks like there is a new EFSVolumeConfiguration directive in the task definition
This parameter is specified when you are using an Amazon Elastic File System file system for task storage. For more information, see Amazon EFS Volumes in the Amazon Elastic Container Service Developer Guide .
Ah via container_definition
would be another way of doing that worth investigating. Thus far I’ve been trying to get it to work using the efs_volume_configuration
, but for some reason it’s not working
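For reference, the raw resource side of that looks roughly like this (a sketch, assuming an existing EFS file system and a container-definition module as discussed earlier in this channel; support for efs_volume_configuration arrived around AWS provider 2.64):
resource "aws_ecs_task_definition" "example" {
  family                   = "example"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  container_definitions    = jsonencode([module.container.json_map_object]) # illustrative

  volume {
    name = "service-storage"

    efs_volume_configuration {
      file_system_id = aws_efs_file_system.this.id # assumes an existing EFS resource
      root_directory = "/"
    }
  }
}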
anyone else notice a slowdown with terraform plan/apply targeting AWS? I only started noticing it a couple of days ago, or maybe I’m imagining things
I’ve noticed this as well actually..
us-east-1 is on fire the last 2 weeks
so that might be it
https://status.aws.amazon.com/ there are a couple of stories around ec2 api error rates this week for us-east-1
several of which weren’t even noted there but a bunch of folks in the hangops slack all confirmed we were seeing the same problems around the same times
i’m currently targeting us-east-2
I’m trying to set cors_rule
in an aws_s3_bucket
resource, using a dynamic block. the data looks like this:
cors_rules = {
cdn = {
allowed_headers = ["*"]
allowed_methods = ["POST", "GET"]
allowed_origins = concat([
"brace.ai",
"brace.ai"
],
lookup(local.extra_bucket_origins, var.name, [])
)
expose_headers = ["ETag"]
max_age_seconds = 3000
},
borrower = {
allowed_headers = ["*"]
allowed_methods = ["POST", "GET"]
allowed_origins = concat([
"brace.ai",
"brace.ai"
],
lookup(local.extra_bucket_origins, var.name, [])
)
expose_headers = ["ETag"]
max_age_seconds = 3000
},
servicer = {
}
}
I’m just not able to visualize how to reference this in the dynamic block. Anybody have any ideas?
I tried this:
dynamic "cors_rule" {
for_each = lookup(local.cors_rules, var.service)
content {
allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
allowed_methods = lookup(cors_rule.value, "allowed_methods", null)
allowed_origins = lookup(cors_rule.value, "allowed_origins", null)
expose_headers = lookup(cors_rule.value, "expose_headers", null)
max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
}
}
Thanks for any help you can provide.
im unsure what the exact issue is but here’s a good example of its use
https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L138
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
great! that’s just the kind of example what i’ve been looking for. Thanks
Hmm…that’s not really my use case. They’re passing in separate variables for the various fields, whereas I’ve got a data structure that I want to use to populate the cors_rule
values.
So, the question is how I access specific values in that local data structure.
If I could do this, it would solve my problem:
cors_rule {
allowed_headers = local.cors_rules[var.service]["allowed_headers"]
allowed_methods = local.cors_rules[var.service]["allowed_methods"]
allowed_origins = local.cors_rules[var.service]["allowed_origins"]
expose_headers = local.cors_rules[var.service]["expose_headers"]
max_age_seconds = local.cors_rules[var.service]["max_age_seconds"]
}
So, the basic question is how to access elements of the local.cors_rules here.
I tried a double lookup()
, but that didn’t work. This works, though I had to add empty/blank values to the formerly-empty servicer
attribute in the data:
cors_rule {
allowed_methods = lookup(local.cors_rules, var.service)["allowed_methods"]
allowed_origins = lookup(local.cors_rules, var.service).allowed_origins
expose_headers = lookup(local.cors_rules, var.service).expose_headers
max_age_seconds = lookup(local.cors_rules, var.service).max_age_seconds
}
Notice that both the [<key>] and . syntaxes work.
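For completeness, the earlier dynamic block likely failed because for_each iterated over the attributes of the selected rule, so cors_rule.value was each individual list rather than a whole rule. Wrapping the selected rule in a one-element list makes each iteration a full rule map; a sketch, assuming the cors_rules local above:
dynamic "cors_rule" {
  # One iteration, with the whole rule object as cors_rule.value
  for_each = [local.cors_rules[var.service]]

  content {
    allowed_headers = lookup(cors_rule.value, "allowed_headers", null)
    allowed_methods = lookup(cors_rule.value, "allowed_methods", null)
    allowed_origins = lookup(cors_rule.value, "allowed_origins", null)
    expose_headers  = lookup(cors_rule.value, "expose_headers", null)
    max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
  }
}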
NOTES: provider: This version is built using Go 1.14.5, including security fixes to the crypto/x509 and net/http packages. BREAKING CHANGES provider: New versions of the provider can only be aut…
interesting : resource/aws_codepipeline: Removes GITHUB_TOKEN
environment variable
and this too
provider: Add assume_role configuration block duration_seconds, policy_arns, tags, and transitive_tag_keys arguments (#14077)
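A sketch of those new arguments in a provider block (the role ARN and values are illustrative):
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn            = "arn:aws:iam::111111111111:role/terraform" # illustrative
    duration_seconds    = 3600
    policy_arns         = ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
    transitive_tag_keys = ["Team"]

    tags = {
      Team = "platform"
    }
  }
}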
So, what’s the recommended adoption schedule for updates like this?
recommended by who ? aws ? hashicorp ?
you
“Us”
heres aws providers migration guide https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade
My guess is that it’ll be a few months, before there’s significant movement to deploying to production environments with this release.
probably
i usually wait for some patches to come out before adopting
2020-07-31
Hi all - terraform’s explanation of self
is rather confusing for someone using terraform for the first time like myself. Can anyone ELI5 it for me please, thanks in advance.
This is a common programming concept: https://en.wikipedia.org/wiki/This_(computer_programming)
For example
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo The server's IP address is ${self.private_ip}"
}
}
self.private_ip means the private_ip attribute of the resource containing the reference, i.e. aws_instance.web.private_ip (any of the attributes listed here can be used: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance)
Terraform cannot use a direct reference to aws_instance.web.private_ip inside the provisioner because it would create a dependency cycle
Hi Denys, thanks for the ping. That does make sense, taking a look at the below example:
data "aws_ami" "selected" {
most_recent = true
owners = ["self"]
filter {
name = "foo"
values = ["bar-*"]
}
}
Would I be correct in saying it’ll use the owner ID of the individual executing the terraform apply to retrieve that data source?
Oh, no. This has another meaning: the owner is the current AWS account.
It finds all AMIs matching the filter foo=bar-* that were created by the same AWS account you are running Terraform from: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ami
Ok got it, would it be possible to run and see the output of self
in terraform console for instance, I’m just curious to see what sort of stuff it returns. Or is it just the aws account id
actually there is most_recent = true, which means only the latest one
I suppose https://www.terraform.io/docs/commands/show.html is what you are looking for
The terraform show
command is used to provide human-readable output from a state or plan file. This can be used to inspect a plan to ensure that the planned operations are expected, or to inspect the current state as Terraform sees it.
Appreciate the help Denys, I’ll go grab lunch and read it when i get back to my workstation. Thanks again
Random question. During a TF run, I want to pull in a JSON file from another github repo….can’t use git submodules as a) they are kinda nasty b) (upstream) Atlantis doesn’t support cloning submodules. Anything bad about (ab)using the terraform module source code to pull in the repo?
module "badgers" {
source = "git::[email protected]:foo/bar"
}
output "schema" {
value = jsondecode(file("${path.module}/badgers/db_schema/table_account_schema.json"))
}
works…but I’m not sure if it is a terrible idea
to be clear, git@github.com:foo/bar
is not a Terraform codebase.
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
schema = {
"AttributeDefinitions" = [
{
"AttributeName" = "id"
"AttributeType" = "S"
},
]
"KeySchema" = [
{
"AttributeName" = "id"
"KeyType" = "HASH"
},
]
"ProvisionedThroughput" = {
"ReadCapacityUnits" = 1
"WriteCapacityUnits" = 1
}
"TableName" = "foobar"
}
¯\_(ツ)_/¯
and you can’t add a little tf to that remote git repo to just output the file, so you can reference module outputs?
Could, but rather not, there are a lot of services
the only “bad” thing about it i can think of is that you need to embed a lot of info in this module about how the remote repo is structured
can protect against changes in the remote using a ref, of course
Aye, this will be at the top level wrapper module (terraform-root-module style) and I actually want to get a few JSON files from there
Aye
Cool, thanks people
im using atlantis 0.14.0 with submodules without any issue
I kinda like it too, but it feels too easy hah
¯\_(ツ)_/¯
cloudposse/atlantis @RB?
nope, using the official one
hmm, are you doing something special in your atlantis.yml ?
nah
“it just works”
Support git submodules: It appears that clones are not recursive. atlantis/server/events/working_dir.go line 113 in f057d6d: cloneCmd := exec.Command("git", "clone", clon…
hmm
oh oh wait a minute…
i was thinking “submodules” were specific directories (modules) in a git repo
that is for “git submodules”
yes, i dont use “git submodules”
OK, makes sense, cheeers
TIL during a terraform init for a git source it will also pull in submodules if there are any.
Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter
Nice, thanks @contact871