#terraform (2020-06)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-06-01
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jun 10, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Is there a public conftest bundle for terraform? I figure there is some customisation for every system but there must be a core revolving around common practices the community uses?
Regarding https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
i just had a quick question about that. why is icmp ingress enabled on the security group resource ?
@RB For using ping
I would assume.
ya but why would you need ping
its 2020. what use does this have ?
a monitor or something like that
FTP is still a thing
It should definitely be optional, but it is still a debugging tool.
Also the average joe is going to lean on ‘ping’ for “is this thing up”
2020-06-02
Hi folks… is there an easier way to get a list of the required AWS permissions for a Terraform task? When we had to upload and modify certificates we didn’t have a policy allowing DescribeCertificate and had some failures. We tried doing a trace to see if there are any insights but haven’t been successful.
What’s the error you’ve encountered?
anyone hear of a project to build the terraform website in a way that supports specific versions? for example, looking to browse the docs for a specific version of the azurerm provider… (and i do know how to use tags on the github repo to view specific pages, but it’s not very navigable…)
Not really what you’re asking for, but just reminded me of @antonbabenko project to turn docs into PDFs https://github.com/antonbabenko/terraform-docs-as-pdf
Complete Terraform documentation (core + all official providers) as PDF files. Updating nightly. - antonbabenko/terraform-docs-as-pdf
dang, too bad it’s not per-version
I’m late coming here, but I’ve also been asking for versioned docs since I started here. Apparently it’s on the way, but that doesn’t help today. Unfortunately, checking out the version in VCS is the best way to get to the version you want. Sorry.
thanks @Jake Lundberg (HashiCorp)
i am irrationally excited about this pr: https://github.com/hashicorp/aws-sdk-go-base/pull/38
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
oh great! why was it ever this way in the first place - it’s always been such a pain
sooooo many tools have problems with aws credentials, regardless of the sdk/language, i don’t really blame them
i’m just excited it’s been getting some good attention today
Hoping to find some best practices here with module creations/tree structure or maybe some use cases if anyone has out there.
Here’s what I have going:
I’ve created a separate repo for a module that contains both aws_launch_template AND aws_autoscaling_group resources (all in main.tf). I would also like to attach/create a load balancer. Would it make sense for me to create a separate module for the aws_lb resource, or incorporate that into the main.tf file?
2020-06-03
Hey I am using CloudPosse’s awsome terraform-aws-eks-cluster and node-group modules. The cluster module merges credentials info into my kube config, very nice. I have setup my terraform main.tf to also create a service account for my pods to integrate with IAM, using kubectl apply called by terraform. Sometimes this fails, apparently because the kube API server is not ready. Details at https://stackoverflow.com/questions/62173213/intermittent-kubectl-apply-error-when-run-from-terraform-after-aws-eks-cluster-c. If I can make it to the office hours today I will ask there, but I thought I should ask here first.
Had a similar issue while trying to create the aws-auth config-map with a kubectl command after EKS cluster deployment. As you said, I worked around it with a couple of minutes of sleep using null_resource, and I’m looking for a better solution. One thing I noticed is that the EKS nodes (EC2 instances) deployed in the VPC are not yet green on status checks; once they reach green, kubeconfig starts to work. Maybe we need to find a way to verify the status check before firing the kubectl command. I’m not able to get status check info for an EC2 instance in Terraform, or at least I don’t know how to get it.
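For reference, a minimal sketch of that sleep workaround in 0.12 syntax (the module output name and the manifest path are illustrative assumptions, not from the module docs):
resource "null_resource" "apply_service_account" {
  # Referencing a cluster output creates the dependency and re-runs this if the endpoint changes.
  triggers = {
    cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
  }
  provisioner "local-exec" {
    # Crude wait for the Kubernetes API to start accepting requests.
    command = "sleep 120 && kubectl apply -f service-account.yaml"
  }
}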
Hey, I am trying to use the elasticache module on 0.11. Despite
transit_encryption_enabled = "false"
at_rest_encryption_enabled = "false"
Terraform still insists on having an auth_token:
Error: module.eu-aws-nft-main-redis.aws_elasticache_replication_group.default: “auth_token” must contain from 16 to 128 alphanumeric characters or symbols (excluding @, ", and /)
Error: module.eu-aws-nft-main-redis.aws_elasticache_replication_group.default: “replication_group_id” must contain from 1 to 20 alphanumeric characters or hyphens
Error: module.eu-aws-nft-main-redis.aws_elasticache_replication_group.default: only alphanumeric characters or symbols (excluding @, ", and /) allowed in “auth_token”
Am I doing something silly?
And then if you try to set it to shut up the error, the deployment fails:
* aws_elasticache_replication_group.default: Error creating Elasticache Replication Group: InvalidParameterValue: The AUTH token is only supported when encryption-in-transit is enabled
status code: 400, request id: ec8aa5fa-f5d6-48cd-b1fa-2dea064cc8e2
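For what it’s worth, the usual workaround pattern is to only pass the token when encryption-in-transit is enabled: a hedged sketch in 0.12 syntax (on 0.11, where null isn’t available, an empty-string fallback is needed instead):
resource "aws_elasticache_replication_group" "default" {
  # ... other required arguments omitted ...
  transit_encryption_enabled = var.transit_encryption_enabled
  # AWS rejects an AUTH token when encryption-in-transit is off.
  auth_token = var.transit_encryption_enabled ? var.auth_token : null
}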
v0.13.0-beta1 0.13.0-beta1 (June 03, 2020) NEW FEATURES: count and for_each for modules: Similar to the arguments of the same name in resource and data blocks, these create multiple instances of a module from a single module block. (#24461) depends_on for modules: Modules can now use the depends_on argument to ensure that all module resource…
This turns on module expansion in the config, with most basic functionality working. A major exception is that while providers cannot be configured within an expanded module, there is no validation…
Hey, folks. I’m trying to grant Organization-level access to an S3 bucket. I have only 3 accounts that need this cross-account access, but I can’t seem to get it to work. Here’s my bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowOrgAccountsReadOnly",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::123412341234:root"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-artifacts",
"arn:aws:s3:::my-artifacts/*"
]
}
]
}
With this in place, I can do an aws s3 ls on that bucket, but aws s3 cp returns an (AccessDenied) when calling the GetObject operation error.
The policy is definitely what’s providing ls access (s3:ListObject, I believe), which I have tested by removing it. So, why can’t I access s3:GetObject on that bucket from another account?
Thanks for taking a look.
Replace
"Resource": [
"arn:aws:s3:::artifacts/*",
"arn:aws:s3:::artifacts""
],
yeah, tried that, thanks. That ain’t it.
this is the OU id on organizations ?
anyhow you can’t do cp with this
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
you need Put* or PutObject
Hey, @jose.amengual. That’s the root (principal) org ID, redacted. I don’t use the OU id.
Here’s the really odd thing: I set up a policy that sets the Principal to the AWS account ARN of my test account and gave it s3:*. At that point, I can do an ls on objects in the bucket, but cp does not work for read or write.
When I remove that account from Resources, I lose the ability to do an ls, which indicates that the policy is in effect for this role. (BTW, I’m using an assumed role from that same master account. I’m setting up an artifacts bucket that each account should be able to pull from.)
is the bucket encrypted using KMS by any chance ?
yep. That occurred to me yesterday at some point… that I’d need to assign the same access to the key. Is that where you’re going?
or just disable that encryption…different type?
AES256 is safe enough and simpler
if your goal is encryption at rest
otherwise you will need to share CMKs
but with AES256, it will be transparent, with no policy fiddling?
if there is any object in the bucket and KMS is enabled using an AWS-managed KMS key, you will have the problems described
correct, it is transparent
it is server side
That’s great. Thanks so much!
np
Thanks again for the help. It works, after switching to AES256 and using this policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowOrgAccountsReadOnly",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:List*"
],
"Resource": [
"arn:aws:s3:::my-artifacts",
"arn:aws:s3:::my-artifacts/*"
],
"Condition": {
"StringEquals": {
"aws:PrincipalOrgID": "o-xxxxxx"
}
}
}
]
}
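If the bucket itself is managed in Terraform, the same policy can be attached roughly like this (a sketch; the resource names and org ID are placeholders):
resource "aws_s3_bucket_policy" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowOrgAccountsReadOnly"
      Effect    = "Allow"
      Principal = "*"
      Action    = ["s3:GetObject", "s3:List*"]
      Resource  = [aws_s3_bucket.artifacts.arn, "${aws_s3_bucket.artifacts.arn}/*"]
      Condition = { StringEquals = { "aws:PrincipalOrgID" = "o-xxxxxx" } }
    }]
  })
}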
2020-06-04
Hi guys.
When deploying terraform, I first run terraform plan, then terraform apply based on the plan output. If there is no plan output then apply does not run. A few days ago I noticed that terraform plan includes “reading data”:
data.aws_availability_zones.available: Reading... [id=2020-06-04 22:18:37.236953382 +0000 UTC]
data.aws_availability_zones.available: Read complete after 0s [id=2020-06-04 22:18:57.193921254 +0000 UTC]
This is causing the terraform apply to be triggered. I know it is not doing anything, but it is kind of annoying. Is there any way to ignore the “Reading” output from terraform plan? Or is this a new way plan works and we have to live with it?
@Gui Paiva, I was just noticing that the plan info is not printed when you run apply either with a plan file or with the --auto-approve param. I like having that.
You can run terraform show <plan file> to see the plan.
Showing the plan is fine when there are resources to be created, deleted, or modified; it is just that “Reading” now looks like part of the plan, and then the apply job in my pipeline gets triggered because the plan output was not empty.
So, you have an empty plan, but apply still runs…stuff, right? It gets the data
yes, before the plan would be empty but now, there is this reading
as you can see, there are no changes
I only find it annoying because now the apply job gets triggered even though it is not going to do anything
I see. that seems to be a maintainer-level issue.
it must have been a recent change… I can’t remember seeing it happening last week
try using the detailed exit code flag on your plan
-detailed-exitcode - Return a detailed exit code when the command exits. When provided, this argument changes the exit codes and their meanings to provide more granular information about what the resulting plan contains:
0 = Succeeded with empty diff (no changes)
1 = Error
2 = Succeeded with non-empty diff (changes present)
that’s probably the best way to determine if you need to apply in CI
I am already using the detailed exitcode
oh really? that’s super odd then - probably a bug
but let me review my script…. I might be checking for the exit error only
I am going to test showing the output exit code
just want to see what it shows
actually my script already does that
yeah, looks like a bug
now I wonder if it is with terraform itself or the aws provider
are you literally just using something like this:
data "aws_availability_zones" "available" {
state = "available"
}
or have you got some filters in there?
found the issue
at least I believe so
the terraform binary upgrade had bad logic and downloaded the new 0.13 beta version
I am rolling back the terraform version to 0.12.26 and it should be fine now
hate having to hack the terraform state files… but finally got the rollback done… you can’t just change the terraform version; you also have to change the provider address, from the v0.13 beta form
provider["registry.terraform.io/hashicorp/aws"]
to the v0.12 form
provider.aws
Oh that’s an interesting change
lesson learnt hahahaha
roll back fixed it? if so that’s probably still a bug
yeah, roll back fixed.
0.13 is still beta, let’s see how it behaves when the final version is released
Hi all, I’m trying to use the complete example of the terraform-aws-eks-workers project.
I thought that I could copy the files under https://github.com/cloudposse/terraform-aws-eks-workers/tree/master/examples/complete and then point the source for the eks_cluster module to git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.14.0 but I am getting a bunch of errors
Error: Missing required argument
on main.tf line 64, in module "eks_cluster":
64: module "eks_cluster" {
The argument "cluster_certificate_authority_data" is required, but no
definition was found.
Error: Missing required argument
on main.tf line 64, in module "eks_cluster":
64: module "eks_cluster" {
The argument "cluster_endpoint" is required, but no definition was found.
Error: Missing required argument
on main.tf line 64, in module "eks_cluster":
64: module "eks_cluster" {
The argument "cluster_security_group_id" is required, but no definition was
found.
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
module "eks_cluster" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.14.0>"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = var.tags
region = var.region
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
kubernetes_version = var.kubernetes_version
local_exec_interpreter = var.local_exec_interpreter
oidc_provider_enabled = var.oidc_provider_enabled
enabled_cluster_log_types = var.enabled_cluster_log_types
cluster_log_retention_period = var.cluster_log_retention_period
}
Hey folks… how does everyone have their TF repos setup? I’ve seen some do 1:1 per AWS account, but others would do a repo per service that would go across all accounts (VPC, S3, etc). What worked for you?
We have it all in one git repo (SRE, providing a platform etc). We use layers of directories to limit blast radius, with a tfstate file per directory. This enables easy symlinking of Makefiles and comparisons across environments or regions. Everything has the same controls from a git perspective. Simplifies CI.
Modules separate repo.
It depends… I’ve used a monorepo for all environment and module code, modules in different repos, modules vendored into the repo, a separate repo for everything with Terragrunt / Terraform.
In terms of blast radius I’ve worked with orgs that classed the environment as the blast radius in Terraform and orgs that take a more modular approach (layers).
I wouldn’t say that there is a magic repo layout that fits ALL situations. It depends on complexity, how much you share code across teams and the scope of the infrastructure you’re delivering.
modules in separate repos and all the different implementations in a single repo, each in their own subdirectory. I use terraform workspaces (per environment) so the monorepo is only nested one level deep directory-wise.
wow… quite a diversity of layouts!
I’m somewhat apprehensive of a monorepo… but I can see it making things simpler, and having tfstate per directory seems good. But the complexity of maintaining that seems high.
I’m thinking from the perspective of an organization that is new to TF and IaC in general… so I think separate repos per AWS account would be cleaner. This is assuming a separate AWS account per app/environment. Also, I think using TF workspaces would be key here too.
Have you seen Charity Majors’ blog post about separate tfstate? I feel those in this thread might be a little ahead of that, but it’s a good read about the journey. Whilst we have separate tfstate per directory, it’s a given that it’s using a remote backend like S3. For us it’s 3 layers (of directories):
- AWS Account (or other)
- Region (we also have a global region)
- Services boundary (base, k8s, other ec2, rds etc, …)
Variable and auto tfvars files live at each level, symlinked as necessary into layer 3.
Separate repos are useful if merge/master access control or CI differs significantly; otherwise it’ll be a pain IMHO. I can’t speak to workspaces as I haven’t used them.
2020-06-05
Hello guys, we need to use lifecycle hooks in the EKS worker node ASG. We’re currently using the Terraform module below for EKS workers. Is it achievable?
https://github.com/cloudposse/terraform-aws-eks-workers?files=1
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
2020-06-06
Any suggestions for automatically populating a tag with the date when the resource was created? E.g. I’d like to have a function
resource aws_.... {
tags {
CreationDate = creation_date()
}
}
where the value returned by creation_date() is constant once the resource has been created, so that future runs of terraform apply do not modify the value. Similarly, an update_date() would only change on resources that have otherwise been flagged as needing a change; otherwise, all resources that have that tag would get a new update date whenever I run terraform apply, even resources that have not changed!
I suspect these do not exist and cannot be easily implemented from within terraform (possibly as post-processing, if terraform has a hook for that which would give the list of resources created and updated; a script could then loop over those and set the tags). What do people do? The same question arises for git commits: it would be nice to have a tag that identifies the git commit of the terraform root module used to produce a resource. What do you do for this kind of tracking/auditing?
Hi @OliverS, what is it you want to achieve with this kind of tagging ?
I want to be able to look at a resource in AWS and know who created it and when, who updated it last and when, and what the git commit of the repo was when terraform apply was run. I’d rather not depend on AWS CloudTrail for that.
Hi @OliverS, did you try formatdate and https://www.terraform.io/docs/configuration/functions/timestamp.html ?
The timestamp function returns a string representation of the current date and time.
Problem with timestamp is that it will get a different value every time you terraform apply. This will update all resources that have var.tags, not just the ones that are changing in the apply. You can’t configure tf to ignore tags, because sometimes you really want to update them; in fact, you want to update one of the tags on every resource that terraform is changing.
But why ? Things don’t break when resources are created but when they are deleted and this scenario is not captured ?
If you want to have more control then have a CI/CD with git review setup.
data "null_data_source" "start_time" {
inputs = {
timestamp = timestamp()
}
}
then reference with:
data.null_data_source.start_time.inputs.timestamp
parsing examples:
date_ymd = "${substr(data.null_data_source.start_time.inputs.timestamp, 0, 4)}${substr(data.null_data_source.start_time.inputs.timestamp, 5, 2)}${substr(data.null_data_source.start_time.inputs.timestamp, 8, 2)}" #equivalent of $(date +'%Y%m%d')
date_hm = "${substr(data.null_data_source.start_time.inputs.timestamp, 11, 2)}${substr(data.null_data_source.start_time.inputs.timestamp, 14, 2)}" #equivalent of $(date +'%H%M')
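One pattern that approximates creation_date() (a sketch, not something from this thread): stamp the tag with timestamp() once, then tell Terraform to ignore later drift on that key:
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"
  tags = {
    CreationDate = timestamp()
  }
  lifecycle {
    # Keeps the first-apply value. Per-key ignores may depend on your
    # Terraform version; ignore_changes = [tags] also works, at the
    # cost of ignoring all tag changes.
    ignore_changes = [tags["CreationDate"]]
  }
}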
Hey guys, I’m kind of new to terraform (and devops in general). I’m wondering what the difference is between this: https://registry.terraform.io/modules/cloudposse/elastic-beanstalk-environment/aws/0.3.10
and this: https://www.terraform.io/docs/providers/aws/r/elastic_beanstalk_environment.html
Both APIs look pretty similar; is there an advantage to using one over the other?
Provides an Elastic Beanstalk Environment Resource
one is a resource and the other is a module
read a bit more about modules and you will see the difference
Modules encapsulate more than just one resource; usually modules create all the resources necessary to run something. In the case of Beanstalk, there are a bunch of configs to get one app running
if you look at the module code and the tf files you will see many resources being created
oh okay, thanks, that clears up a lot
2020-06-07
Hey everyone, do you have an example of an elastic beanstalk config for a single instance only? I imagine I just set environment_type = single instance, but I can’t find anything to know for sure.
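If you end up on the plain resource instead of the module, the underlying Beanstalk option is EnvironmentType in the aws:elasticbeanstalk:environment namespace. A sketch (the names and solution stack are placeholders):
resource "aws_elastic_beanstalk_environment" "single" {
  name                = "my-single-env"
  application         = "my-app"
  solution_stack_name = "64bit Amazon Linux 2018.03 v2.9.3 running Docker 18.06.1-ce"
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "EnvironmentType"
    value     = "SingleInstance"
  }
}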
Hello guys, while using the cloudposse terraform for EKS worker nodes, can we create an ASG with a custom name? Currently it comes up with a suffix (the date of creation). The name prefix we can modify, but what about the suffix?
The reason is I need to create a lifecycle hook for the ASG when it’s spun up through terraform.
Here we have to explicitly provide the ASG name.
Hey guys, I have a pipeline situation and need some suggestions:
- create an EC2 instance and run an Ansible command (I can do that with user data)
- after that, make an AMI from it
- create a launch group for autoscaling from that AMI
2020-06-08
@sahil kamboj I do that all the time. I use python to handle the overall process, gather information, etc. Python also runs the ansible command and creates the ami.
can we do it in terraform, or does python handle terraform?
If the only reason for 1 is to create the AMI in 2, then you should look at packer to do this. Then terraform can do step 3
https://www.packer.io/intro/getting-started/build-image/ - i have not used packer but it is interesting to think about.
With Packer installed, let’s just dive right into it and build our first image. Our first image will be an Amazon EC2 AMI with Redis pre-installed. This is just an example. Packer can create images for many platforms with anything pre-installed.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jun 17, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
how can I call packer from terraform to create the AMI and resume, like (depends on packer-ami)?
Is it possible to do or comparisons in a terraform if? Example: name = var.branch == "master" || "sandbox" || "demo" ? "service-prod" : (var.branch == "staging" ? "service-staging" : "service-dev")
you have to perform the comparison for each value, when taking this approach…
name = var.branch == "master" || var.branch == "sandbox" || var.branch == "demo" ? "service-prod" : (var.branch == "staging" ? "service-staging" : "service-dev")
but you can also use contains() for your use case:
name = contains(["master", "sandbox", "demo"], var.branch) ? "service-prod" : (var.branch == "staging" ? "service-staging" : "service-dev")
or you can just use a map for a more declarative approach:
locals {
name = {
master = "service-prod"
sandbox = "service-prod"
demo = "service-prod"
staging = "service-dev"
}
}
name = local.name[var.branch]
I see || listed under the logical operators, but terraform is throwing an error on a plan with this line: Unsuitable value for right operand: a bool is required.
is there a way to use something like replace on an entire list? like replace(var.atlantis_repo_whitelist[*], "/github.com/ORG//", "")
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
e.g.
[for s in var.list : upper(s)]
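Applied to the list from the question, with a plain (non-regex) replace, that would be something like:
[for s in var.atlantis_repo_whitelist : replace(s, "github.com/ORG/", "")]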
awesome, I have used for with dynamic blocks but never had to use it this way, thanks
2020-06-09
is anyone using the https://github.com/cloudposse/terraform-aws-eks-workers to bring up windows workers?
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Haven’t heard any reports of it
Can we write the EFS id or other resource ids to a file that is automatically uploaded to S3 (all in terraform), so I can use that file for EC2 provisioning within terraform? Or is there a better way?
Yes you could construct a string with the efs_id and create an object in s3 with the information. That said, we have used EFS for years and never had to do that, so I think this might be a case of http://xyproblem.info/
Asking about your attempted solution rather than your actual problem
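If you do go that route, a minimal sketch (the bucket, key, and EFS resource names are placeholders):
resource "aws_s3_bucket_object" "efs_id" {
  bucket  = "my-config-bucket"
  key     = "shared/efs-id"
  content = aws_efs_file_system.default.id
}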
Hello everyone. Using https://github.com/cloudposse/terraform-aws-elasticsearch, is it possible to define “Require HTTPS”?
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
There’s an open PR already https://github.com/cloudposse/terraform-aws-elasticsearch/pull/54
what Add domain_endpoint_options variable. why To be able to configure enforce https in elasticsearch_domain resource. references https://www.terraform.io/docs/providers/aws/r/elasticsearch_domain….
The Terraform 0.13 beta is out with module for_each, count, and depends_on. These are complex features so please give it a shot and report any feedback. https://buff.ly/2Y9Wya9
This is very exciting. We have many places where being able to loop modules will dramatically simplify our code.
ya totally
no kidding. can’t wait to see how moving loops out of the modules changes the patterns we use to address the “cannot be computed”/”value not known until apply” style of errors
anyone using AWS Service Catalog? My company wants us to start using service catalog to let teams deploy things (RDS, s3 buckets, etc) w/ sane defaults/best practices.
Wondering if there’s some way to allow teams to provision terraform templates through it. i try to avoid writing CF if at all possible
googled “service catalog cloudformation terraform” and came up with this… https://aws-service-catalog-factory.readthedocs.io/en/latest/factory/terraform_support.html
Apply Terraform configurations using CloudFormation through a proxy lambda - aws-samples/aws-service-catalog-terraform-reference-architecture
ahah, i was actually looking for this, remembered reading an aws blog… https://aws.amazon.com/blogs/apn/using-terraform-to-manage-aws-programmable-infrastructures/
Terraform and AWS CloudFormation allow you to express infrastructure resources as code and manage them programmatically. Each has its advantages, but some enterprises already have expertise in Terraform and prefer using it to manage their AWS resources. To accommodate that preference, CloudFormation allows you to use non-AWS resources to manage AWS infrastructure. Learn the steps to create a CloudFormation registry resource type for Terraform and deploy it as an AWS Service Catalog product.
I was checking the docs of the terraform label module and I saw this : terraform-terraform-label is a fork of terraform-null-label which uses only the core Terraform provider
what is the benefit of using one or the other?
SweetOps Slack archive of #terraform-0_12 for June, 2019.
I was trying to find that exact chat
awesome thanks
but is there a preference over one or the other ?
that conversation does not really go over that much
Sounds like null-label is preferred again from that conversation and the terraform-label module is now just ‘supported’
Ya null label is back in favor - though I don’t think there’s anything null about it anymore
We should probably archive the other one to reduce confusion
Thanks guys. So my guess is that for Terraform 0.13.0 we will have something similar
something similar?
terraform-terraform-label was the first to be 0.12 compatible, and terraform-null-label was not upgraded until other modules were upgraded, so I guess the upgrade path for 0.13 will be similar: a new terraform-terraform-label for 0.13 will be created, and then terraform-null-label will be upgraded, etc…
that is the history I have in my head about this
ohhhh I see now, I reread the old post, well I guess I have a lot of imagination
for some reason I thought that the upgrade to 0.12 was the reason
it was mostly about HCL1 -> HCL2 (not terraform 0.11 to 0.12)
so 0.13 is still HCL2, so I don’t imagine too much pain
I see ok, thanks for clarifying
2020-06-10
hey there … spinning up an EKS cluster with terraform-aws-eks, I get:
Error: the server has asked for the client to provide credentials (post configmaps)
on .terraform/modules/eks.eks/terraform-aws-eks-12.0.0/aws_auth.tf line 62, in resource "kubernetes_config_map" "aws_auth":
62: resource "kubernetes_config_map" "aws_auth" {
using the wrong module mate https://github.com/cloudposse/terraform-aws-eks-cluster
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
lol
not the answer i was looking for but very well
when I run KUBECONFIG=<my local kubeconfig, generated by the module> terraform apply for a second time, it works
any idea how I can avoid running TF a second time?
Howdy y’all
I’ve got an existing AWS stack managed by terraform with a RDS Postgres db and replica inside a vpc with developer access provided by a bastion host and ssh forwarding. I’ve been tasked with adding an aurora postgres replica, and shortly thereafter replacing the pg primary/replica with aurora completely. I’m running into a few issues utilizing the terraform-aws-rds-cluster module (https://github.com/cloudposse/terraform-aws-rds-cluster/tree/0.15.0) and trying to get the steps correct.
Is there anyone here who has done this on terraform 11.x?
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
The Terraform VS Code extension version 2.0.0 has been released with support for 0.12 syntax and autocompletion! Behind the scenes its using the new Terraform language server (https://github.com/hashicorp/terraform-ls) which is compatible with most editors. https://www.hashicorp.com/blog/announcing-the-terraform-visual-studio-code-extension-v2-0-0/
who else had gotten used to seeing a bunch of 0.12 syntax errors in their VSCode :P
Saw that too, I had to remove some ‘legacy’ terraform extension from my settings which seemed to fix those errors.
However the bigger WTF is that the extension/server seems to expect that the workspace root == the terraform root. So it errors anyways if you’re using any sort of sub-folders in the path that you opened in VSCode
I switched to IntelliJ months ago and never looked back. The TF extension is really stable on 0.12.
was very happy with the intelliJ extension, but recent update(s) really screwed up the .tf files auto-formatting =(
Same, I’ve been using pycharm for Terraform development for about a year now
2020-06-11
Is it possible to show “Output” after the destroy in terraform?
you can use the terraform output command to display the output key values available in the state
Hey guys
Follow this thread for a temporary solution if you use Terragrunt with VSCode using the popular vscode-terraform extension: https://github.com/hashicorp/vscode-terraform/issues/372
Not sure this is the right place, please correct me if I'm wrong. Trying to pin to an older version of the plugin as linked in the Readme leads to VS Code failing with a 404 error. Steps to rep…
worked well for me
hello, is there a way to conditionally create a block in a terraform resource ?
please provide more context
what kind of block?
Terragrunt generate can do this for you :)
but so can a for_each
below I would like to set site config = true only when tier = Free and Size = F1.
resource "azurerm_app_service_plan" "app_service_plan" {
name = "${var.prefix}-${local.environment}-service-plan"
location = azurerm_resource_group.web_app_services_rg.location
resource_group_name = azurerm_resource_group.web_app_services_rg.name
sku {
tier = local.sku_tier
size = local.sku_size
}
site_config {
always_on = true
}
}
haha, I just found out how to do it with a map. Maybe you have a better or more creative way to do it? Here I switch the variable value and not the block…
variable "always_on_enabled" {
type = map(string)
default = {
stage = true
prod = false
}
}
locals {
always_on = "${lookup(var.always_on_enabled, var.env, "stage")}"
}
and having
site_config {
always_on = local.always_on
}
Hmmm… That doesn’t look sensible to me… Sure it works but couldn’t it just be:
always_on = local.sku_tier == "Free" && local.sku_size == "F1" ? true : false
Which makes it clear that:
I would like to set site config = true only when tier = Free and Size = F1.
ha nice! it looks far better like this, thanks @Tim Birkett
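For the original question of conditionally creating the block itself (rather than a value inside it), a dynamic block inside the resource also works. A sketch, assuming the same locals:
dynamic "site_config" {
  # One-element list when the condition holds, empty list otherwise.
  for_each = local.sku_tier == "Free" && local.sku_size == "F1" ? [1] : []
  content {
    always_on = true
  }
}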
More specific question: is there a way to create an RDS Aurora cluster with a cluster_identifier field and not have it force a new resource? Why tf does terraform think I need a new thing every time simply because I want to name it?!
Terraform Cloud can give an estimate of cost before deploying. Is there a way we can do this from the command line (with regular Terraform)?
give this a try? https://github.com/antonbabenko/terraform-cost-estimation
Anonymized, secure, and free Terraform cost estimation based on Terraform plan (0.12+) or Terraform state (any version) - antonbabenko/terraform-cost-estimation
Yeah.. Saw that .. But not sure if that is accurate . Anyone here used it ?
it’s aws pricing. will never be accurate.
Also it does not cover all resources; only 5 resources according to the docs. It does not cover RDS, for example.
Below are the supported resources
@loren ^^
it’s not my project, and i haven’t used it myself. i honestly don’t think it is possible to estimate aws costs with an accuracy worth making the effort. so, i’m just trying to answer the question you posed, and this is the only tool i’m aware of… feel free to keep looking, or maybe contribute to it until it addresses your needs
ok. Thanks Loren.. Will research further on this topic
I worked it out using lifecycle ignore_changes, but what about this: when creating an aurora cluster replicating from RDS postgres, the instance created is in writer mode, when it should be a reader, if I’m not much mistaken. Any pearls of wisdom? Here’s the relevant tf config:
resource "aws_db_instance" "rds_postgres" {
identifier = "${var.environment}-postgres-database"
allocated_storage = "${var.allocated_storage}"
engine = "postgres"
engine_version = "${var.db_version}"
instance_class = "${var.database_instance_class}"
multi_az = "${var.multi_az}"
name = "${var.database_name}"
username = "${var.database_username}"
password = "${var.database_password}"
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.id}"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
skip_final_snapshot = true
backup_retention_period = 30
performance_insights_enabled = true
storage_encrypted = true
tags {
Environment = "${var.environment}"
Application = "company-api-postgres"
}
}
resource "aws_db_instance" "rds_postgres_r1" {
identifier = "${var.environment}-postgres-database-r1"
allocated_storage = "${var.allocated_storage}"
engine = "postgres"
engine_version = "${var.db_version}"
instance_class = "${var.database_instance_class}"
multi_az = "${var.multi_az}"
name = "${var.database_name}"
username = "${var.database_username}"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
replicate_source_db = "${aws_db_instance.rds_postgres.identifier}"
skip_final_snapshot = true
performance_insights_enabled = true
storage_encrypted = true
tags {
Environment = "${var.environment}"
Application = "company-api-postgres"
}
}
resource "aws_rds_cluster" "default" {
cluster_identifier = "${var.environment}-aurora-cluster"
database_name = "${var.database_name}"
master_username = "${var.database_username}"
master_password = "${var.database_password}"
backup_retention_period = 30
preferred_backup_window = "03:00-07:00"
preferred_maintenance_window = "sat:04:00-wed:04:30"
skip_final_snapshot = true
storage_encrypted = true
kms_key_id = "${local.db_kms_key}"
vpc_security_group_ids = ["${aws_security_group.rds_sg.id}"]
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.id}"
engine = "aurora-postgresql"
engine_version = "${var.db_version}"
replication_source_identifier = "${aws_db_instance.rds_postgres.arn}"
port = 5432
lifecycle {
ignore_changes = [
"id",
"kms_key_id",
"cluster_identifier"
]
}
tags {
Environment = "${var.environment}"
Application = "company-api-aurora"
}
}
resource "aws_rds_cluster_instance" "default" {
identifier = "${var.environment}-aurora-r1"
cluster_identifier = "${aws_rds_cluster.default.id}"
instance_class = "${var.aurora_instance_class}"
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.id}"
publicly_accessible = false
engine = "aurora-postgresql"
engine_version = "11.6"
performance_insights_enabled = true
lifecycle {
ignore_changes = [
"identifier",
"cluster_identifier"
]
}
tags {
Environment = "${var.environment}"
Application = "company-api-aurora"
}
}
2020-06-12
Hey guys, after attempting to test my terraform script many times I am facing this issue now: module.vpc-1.aws_iam_role.vpc_flow_log_cloudwatch[0]: Still creating… [57m20s elapsed] aws_iam_role.node-policy: Still creating… [57m30s elapsed] aws_iam_role.ssm-role: Still creating… [57m30s elapsed] module.vpc-1.aws_iam_role.vpc_flow_log_cloudwatch[0]: Still creating… [57m30s elapsed] aws_iam_role.node-policy: Still creating… [57m40s elapsed] aws_iam_role.ssm-role: Still creating… [57m40s elapsed] module.vpc-1.aws_iam_role.vpc_flow_log_cloudwatch[0]: Still creating… [57m40s elapsed]
What could possibly be going wrong here?
same thing is happening to me in terraform destroy: module.db.module.db_option_group.aws_db_option_group.this[0]: Still destroying… [id=frappe-db-20200612073045588700000002, 9m10s elapsed]
@sahil kamboj - aws are having issues with IAM role creation and related resources atm
Terraform Cloud outage Jun 12, 19:12 UTC Monitoring - Terraform Cloud and our public registry underwent a 21-minute outage (1804 UTC) due to a TLS failure in a cloud service provider on which the application depends. We’re monitoring for further problems related to this outage and following up with the service provider.
Any Terraform runs that failed during this period can safely be re-queued.
HashiCorp Services’s Status Page - Terraform Cloud outage.
Hi, I filed an issue about map_additional_iam_users at https://github.com/cloudposse/terraform-aws-eks-cluster/issues/63. I created the EKS cluster, then added the following variable, but terraform plan does not detect the change:
map_additional_iam_users = [
{
userarn = "arn:aws:iam::xyz:user/myuser"
username = "myuser"
groups = ["system:masters"]
}
]
Describe the Bug First Created a eks cluster without any map_additional_iam_users variable then added the following lines into terraform.tfvars and run map_additional_iam_users = [ { userarn = &quo…
$ terraform plan
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
2020-06-13
Terraform Visual is a simple tool to visualize your Terraform plan - hieven/terraform-visual
Neat… live example here: https://hieven.github.io/terraform-visual/examples/aws-s3
2020-06-15
Hello,
did you manage to install the new terraform vscode language package on Windows?
https://marketplace.visualstudio.com/items?itemName=HashiCorp.terraform
from the github link it seems that the absolute path has to be set manually in the extension json, but I still got the error “Expected absolute path for Terraform binary” https://github.com/hashicorp/vscode-terraform/wiki/Manually-Setting-the-Terraform-Executable-Path
Extension for Visual Studio Code - Syntax highlighting, linting, formatting, and validation for Hashicorp’s Terraform
A Visual Studio Code extension for Hashicorp Terraform - hashicorp/vscode-terraform
While using this terraform module https://github.com/cloudposse/terraform-aws-eks-node-group/blob/master/README.md, can we attach existing IAM roles to node groups? The autoscaling describe policies are not attached to the newly created IAM role, and these policies need to be attached to the node IAM role for autoscaling to work.
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
@rahulm4444 do you mean you need to add another permission to the policy besides what we have here https://github.com/cloudposse/terraform-aws-eks-node-group/pull/13/files?file-filters%5B%5D=.md&file-filters%5B%5D=.tf
Hi, it would be nice to allow the node group to autoscale his ASG, besides adding the required tags to it. In this PR I added a boolean-based flag to "enable" the cluster autoscaler in or…
@Andriy Knysh (Cloud Posse) I am now able to add extra policies to the EC2 role by using existing_workers_role_policy_arns = ["my policy"] and existing_workers_role_policy_arns_count = 1, so this is working now.
One more thing I need help with is incorporating ASG lifecycle hooks.
I only get the ASG name after node group creation, so I’m not able to incorporate an ASG lifecycle hook with this script.
Is there a way to predefine the ASG name at node group creation time?
we use the label module to define all the names for all resources, so the name should be known in advance
also, you can open a PR to add a variable to specify the ASG name; if the variable is not set (null), then fall back to the old functionality
PR created to add asg_name as variable https://github.com/cloudposse/terraform-aws-eks-node-group/pull/18
what Describe high-level what changed as a result of these commits (i.e. in plain-english, what do these changes mean?) Use bullet points to be concise and to the point. why Provide the justific…
Please have a look
One more thing to note: we can have a predefined name for the node group created by this terraform. The issue is with naming the backing ASG of that node group.
That backing ASG is not a resource; it is created as part of the node group.
thanks for the PR @rahulm4444
I see you defined variable "asg_name" but it’s never used
Yes, I was trying to incorporate that into main.tf, but I found out the ASG is not created as a resource. The node group is created as a resource and the ASG is created behind it, and that ASG is not defined anywhere in terraform.
Not sure this is possible. I am now trying the eks-workers terraform module.
ah, yes
Managed Node Group creates the ASG for you
If it’s not possible you can close that PR; I will raise a new PR against the eks-workers terraform module for the same purpose.
(closed the PR, thanks)
I have created a PR to have custom asg name
what Describe high-level what changed as a result of these commits (i.e. in plain-english, what do these changes mean?) Use bullet points to be concise and to the point. why Provide the justific…
Could you please have a look
Terraform Cloud outage Jun 15, 13:38 UTC Resolved - This incident has been resolved.Jun 12, 19:12 UTC Monitoring - Terraform Cloud and our public registry underwent a 21-minute outage (1804 UTC) due to a TLS failure in a cloud service provider on which the application depends. We’re monitoring for further problems related to this outage and following up with the service provider.
Any Terraform runs that failed during this period can safely be re-queued.
HashiCorp Services’s Status Page - Terraform Cloud outage.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jun 24, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Scheduled Maintenance | Terraform Cloud THIS IS A SCHEDULED EVENT Jun 21, 07:00 - 08:00 UTCJun 15, 16:58 UTC Scheduled - Terraform Cloud will undergo scheduled maintenance on Sunday, June 21st beginning at 7:00 UTC. We expect this maintenance to complete in an hour. During this time, terraform runs themselves will be operational but output from the runs may be impacted for a short period of time.
I just started using terraform today. I am seeing “(remote-exec): Catalog is not reachable. Try again later” when creating an ec2 instance. I can ssh into the server using the key pair. Any ideas?
(remote-exec): 12: Timeout on http://amazonlinux.us-east-1.amazonaws.com/2/core/latest/x86_64/mirror.list: (28, ‘Connection timed out after 5001 milliseconds’)
Has anyone heard anything about if / when Hashicorp is going to push a fix to the VS code extension? The language server constantly crashes and opens a terminal every time I save if there’s an error (I believe?)
you check issues and prs? https://github.com/hashicorp/vscode-terraform/issues?q=sort%3Aupdated-desc+
A Visual Studio Code extension for Hashicorp Terraform - hashicorp/vscode-terraform
Yeah, had a look, but didn’t see much aside from people complaining. No official statement so far as I know
their announcement of the extension just says to file an issue
doesn’t mention any other avenue for discussions… maybe their discourse site tho
here’s the language server repo… https://github.com/hashicorp/terraform-ls/issues?q=sort%3Aupdated-desc+
Terraform Language Server. Contribute to hashicorp/terraform-ls development by creating an account on GitHub.
no discussions on discourse… https://discuss.hashicorp.com/c/terraform-core/terraform-editor-integrations/46
Discussion and Q&A for the Terraform Language Server, Visual Studio Code extension, and other editor integrations for Terraform.
there was some discussion about this in the hangops terraform channel ~3 days ago. nothing promising though: https://hangops.slack.com/archives/C0Z93TPFX/p1591973636201800
Surprised I wasn’t in that channel. Ah well, at least Hashicorp has to have heard somewhere that things are not in good shape
Has anyone used TF’s new plugin for VS Code? Currently I’m using IntelliJ and their TF plugin has been awesome (https://plugins.jetbrains.com/plugin/7808-hashicorp-terraform--hcl-language-support)
This plugin adds support for HashiCorp Configuration Language (HCL) and HashiCorp Interpolation Language (HIL), as well as their combination used in Terraform…
hello, on VSCode it seems that the plugin doesn’t find the terraform.exe file, and giving the exact PATH doesn’t work
https://sweetops.slack.com/archives/CB6GHNLG0/p1592211857186400
It’s broken for me from the last update.
I had to switch back to using intellij , the tf plugin in vscode wasn’t working too well for me.
I removed the new version and manually installed an older one
on VSCode I am using the language support written by “Anton Kulikov”, version v0.1.10, which supports 0.12 well; extension ID: 4ops.terraform
2020-06-16
I wanted to create a PR for one of the Cloudposse modules (the terraform-aws-ses one to be exact), but for some reason make readme is changing “bla_bla” into “bla_bla” and adding in <pre> and <br> tags.
Anything I’m doing wrong here? I simply forked the repo, made the required changes and executed make init and make readme.
That’s intentional. Previously we ran an OOOOOOOOLD version of terraform-docs and an awk script to rewrite HCLv2 to HCLv1 (this was before they added support). In the past week we fixed that.
See here for more context.
Generate documentation from Terraform modules in various output formats - segmentio/terraform-docs
Ah good to know, thanks!
is there a way to display <computed> values in terraform 0.11.14 when running plan or apply?
pretty sure a “computed” value is not known until after the apply… you can add an output for the value, then apply, then you’ll see the output…
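e.g. (0.11 syntax; the resource and output names are illustrative):
output "instance_subnet_id" {
  value = "${aws_instance.example.subnet_id}"
}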
I’m unable to understand the significance of the terraform.tfstate created in the .terraform directory, along with the plugins directory, when we run terraform init. I initially wasn’t aware a state file existed in the .terraform directory until I ran into an issue when I changed my backend.
oh yeah, playing with tf 0.13 today, and the ability to disable a module using count = 0 is the … it eliminates so much of the cruft in advanced modules
@RB
no need for “create/enable” variables, and no need for lists of complex objects in order to create multiples of things. remove all associated junk from the module, and just count/for_each on the module itself when you call it. so easy!
plus the tests are simpler, because all that logic is gone and there are fewer edge cases of user input
Yes that is something i would love. However, 0.12 was a mess when it was released and 0.13 is not even out of beta test so I’m going to wait on this one
I have played with it and liked it so far. Maybe this release will be much better than 0.12
Yeah, same. I just was updating a module today to be less opinionated and create arbitrary numbers of things… Generally easy enough with for_each, but this module has a nested community module that doesn’t support multiples, and I didn’t want to rewrite that also, with module-level support around the corner! So, decided to play with 0.13 and get some experience with new patterns. Now, I’m super excited for the release!
This work will just sit in a branch until 0.13 is released and we decide to make a big push to convert things
Hey! Just a quick one hopefully. How does everyone deploy a new version of an ECS service when a new image version has been uploaded to ECR?
Context: I am trying to replace a python version that is being used. However, I have read some varying approaches. The workflow is: a new image is tagged in ECR > a new task definition is created > the service is then deployed with the updated task definition.
Your workflow looks good; what are your concerns or issues?
If you deploy from CI/CD pipelines, you could use the aws cli, or third-party tools like https://github.com/fabfuel/ecs-deploy or https://github.com/silinternational/ecs-deploy. The aws cli and the first third-party tool are written in Python, so they may not work for you and are not very CI friendly as they require a Python runtime. The second is pure shell.
That is the general pattern. Our CI builds the images and pushes them to ECR. Then I have a Jenkins job that someone can run with the desired tag. It then gets the current task def, changes the image tag, creates new task def, and updates the service
Does your Terraform script do this @Steven or is that through aws cli?
Jenkins pipeline. Mix of AWS CLI and Groovy.
Has anyone used Terraform to do the updates?
Terraform manages the services, but not image deploys. I’ve had terraform do the deploys at a different job. That can work, but it is much slower
Good to know. Thanks @Steven
Just depends what your environment and release process is like. Where I am now, we deploy many services at once and do a number of environments around the same time. So speed was important
Speed is critical to us as well. It doesn’t sound like Terraform for deployments is the way to go then. Except to create the infra.
2020-06-17
Hey, I’m looking at the cloudposse terraform modules today to see if I can use some of them to replace our own work, and I see a difference in SSH key behaviour between two modules. We have placed an SSH key in our AWS environment, and we are able to reuse it in the single EC2 instance module without the need to generate a new SSH key.
module "public_node" {
source = "git::<https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=master>"
ssh_key_pair = "id_rsa_test"
vpc_id = module.vpc.vpc_id
security_groups = [module.vpc.vpc_default_security_group_id]
subnet = module.subnets.public_subnet_ids[0]
associate_public_ip_address = true
name = "public"
namespace = var.namespace
stage = var.stage
ebs_volume_count = 2
allowed_ports = [22, 80, 443]
}
This works: the ssh key is found in the existing AWS environment and a plan is made. But when a similar setup is used with the ec2 instance group module, like this:
module "public_nodes" {
source = "git::<https://github.com/cloudposse/terraform-aws-ec2-instance-group.git?ref=master>"
namespace = var.namespace
stage = var.stage
name = var.name
region = var.region
ami = var.public_ami
ami_owner = var.owner_id
vpc_id = module.vpc.vpc_id
subnet = module.subnets.public_subnet_ids[0]
security_groups = [module.vpc.vpc_default_security_group_id]
assign_eip_address = "true"
associate_public_ip_address = "true"
instance_count = var.public_instances_count
root_volume_type = var.root_volume_type
root_volume_size = var.root_volume_size
delete_on_termination = "false"
ssh_key_pair = "id_rsa_test"
}
The result is this:
Error: Error in function call
on .terraform/modules/public_nodes.ssh_key_pair/main.tf line 30, in resource "aws_key_pair" "imported":
30: public_key = file(local.public_key_filename)
|----------------
| local.public_key_filename is "/Users/rogierd/Git/platform/terraform/infrastructure/eu-c1-core-test-cluster.pub"
Call to function "file" failed: no file exists at
/Users/rogierd/Git/platform/terraform/infrastructure/eu-c1-core-test-cluster.pub.
I found the difference between the two modules in the ec2 instance resource. In terraform-aws-ec2-instance:
key_name = var.ssh_key_pair
In terraform-aws-ec2-instance-group:
key_name = signum(length(var.ssh_key_pair)) == 1 ? var.ssh_key_pair : module.ssh_key_pair.key_name
Is this last the intended behaviour?
Solved it by cloning the repo and removing the module.
Hi all - does anyone know how to perform a state rm using the CP tooling on TF 0.12? I believe I am running into a variant of https://github.com/hashicorp/terraform/issues/17300.
Terraform Version Terraform v0.11.3 + provider.aws v1.8.0 Terraform Configuration Files # aws-stack/backend.tf terraform { backend "s3" { bucket = "my-project" key = "state…
I had worked through this previously with output, but my notes don’t seem to be accurate with respect to a procedure for running within .module, with proper envvars set
I hope this patch I created to tfmask will be useful, https://github.com/cloudposse/tfmask/pull/17
This change will mask lines that match the pattern of "" = "" which usually shows up in property that are maps. For example: When passing secrets to the AWS lambda environment v…
v0.13.0-beta2 0.13.0-beta2 (June 17, 2020) NOTES: backend/s3: Deprecated lock_table, skip_get_ec2_platforms, and skip_requesting_account_id arguments have been removed (#25134) backend/s3: Credential ordering has changed from static, environment, shared credentials, EC2 metadata, default AWS Go SDK (shared configuration, web identity, ECS, EC2…
Closes #13410 Closes #18774 Closes #19482 Closes #20062 Closes #20599 Closes #22103 Closes #22161 Closes #22601 Closes #22992 Closes #24252 Closes #24253 Closes #24480 Closes #25056 Changes: NOTES …
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
Here is a fragment of my terraform code. The issue is: how does my ansible script know about the EIP that is created after the instance? I hope there is a way. Alternatively, can I run the ansible script after the EIP is created? I’m shutting down tonight. Hopefully, someone can answer while I am sleeping! Thanks.
resource "aws_instance" "openrmf" {
ami = var.ami
... details ...
# this provisioner waits for the instance to accept SSH.
provisioner "remote-exec" {
inline = [
"sudo yum install -y python3"
]
}
# this provisioner runs the ansible script. however, the eip is not available yet?
provisioner "local-exec" {
command = "ansible-playbook --extra-vars \"rmf_admin_password=${var.rmf_admin_password}\" -u ${var.ssh_user} -i '${self.public_ip},' --private-key ${var.pki_private_key} playbook.openrmf.yml"
environment = {
ANSIBLE_HOST_KEY_CHECKING = "False"
}
}
# here is the eip creation and association.
resource "aws_eip" "openrmf" {
instance = aws_instance.openrmf.id
vpc = true
tags = {
Name = "openrmf"
}
}
Hello David, you need to use depends_on to create resources sequentially https://www.terraform.io/docs/configuration/resources.html#depends_on-explicit-resource-dependencies
to get information on the EIP created, the documentation chapter “attributes reference” lists the variables that are exposed after the resource is created
Resources are the most important element in a Terraform configuration. Each resource corresponds to an infrastructure object, such as a virtual network or compute instance.
Provides details about a specific Elastic IP
How and where does the local exec get called?
local-exec runs from your machine; remote-exec runs on the server
My understanding is this timeline:
- create instance
- remote exec - install python
- local exec - run ansible playbook
- create and attach eip
With the above timeline, when the ansible playbook is executed the EIP does not exist.
I am considering using a load balancer which will get created first so the ansible playbook can know about it.
(also consider using the terraform-ansible-provider)
2020-06-18
I was able to run the local-exec (to execute the ansible playbook) inside the aws_eip resource.
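For anyone following along, a minimal sketch of that approach (playbook flags trimmed for brevity; self here refers to the EIP, which exists by the time the provisioner runs):
resource "aws_eip" "openrmf" {
  instance = aws_instance.openrmf.id
  vpc      = true

  # the EIP's public_ip is known here, so ansible gets the right address
  provisioner "local-exec" {
    command = "ansible-playbook -u ${var.ssh_user} -i '${self.public_ip},' --private-key ${var.pki_private_key} playbook.openrmf.yml"
  }
}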
also I have found this video helpful, especially at minute 12:56, which is on your topic: https://www.hashicorp.com/resources/ansible-terraform-better-together
Learn how users of the HashiCorp stack can use Ansible to achieve their goals of an automated enterprise—through complimentary security, image management, post provisioning configuration, and integrated end to end automation solutions.
Hi Everyone, has anyone updated to the Terraform 0.13 beta? Would you please explain how to build 0.13 beta?
You can download it from here
that’s what i did, though beta2 seems to have introduced some regressions in the init code. i went back to beta1
seems something to do with the new provider/registry logic… https://github.com/hashicorp/terraform/issues/25282
Terraform Version Terraform v0.13.0-beta2 Terraform Configuration Files terraform { required_providers { local = { source = "hashicorp/local" version = ">=1.4.0" } null = { …
have a few questions around backend/tfstate and migrating things around (a thread):
TL;DR I have some state in an S3 backend, and some local state saved alongside .tf files. How do I separate it for now, then merge it later?
Background: I inherited a TF setup that was done a year or so ago. It’s set up to use an S3 backend. Currently it contains a bunch of company-wide stuff and some random S3 buckets etc. Also in there is the tf for a couple fargate clusters, and a redshift DB, all unrelated to each other.
Additionally I have a new fargate cluster setup along with a couple S3 hosted websites, all with local terraform.tfstate files.
Plans: I would like to keep all of the company-wide things in their own github repo, and the terraform that’s related to all these apps inside the repos of the respective projects they belong to.. generally in a .terraform directory.
Step 1. Break out all the stuff that’s co-mingled in the S3 backend, put it into the project repos it belongs to, and get it all up to date and working with local terraform.tfstate files.
Step 2. Merge them back with either an S3 backend or terraform cloud, so I can push changes and make updates only to project-level things.
This is so that a developer can make changes to a projects resources, but not see other projects or have the ability to change other projects.
would appreciate any tips / suggestions!
Our approach has been to centralize all terraform code into a single repository that we (Ops/DevOps) control. This allows us to have a single source of truth for all deployed infrastructure.
To limit a developer’s ability to modify the overall infrastructure, you could look into Terraform Cloud. We use it and the paid version has the ability to require approvals before changes are deployed.
Is spreading the terraform out across the projects where it belongs an anti-pattern?
Perhaps… I think there are pros and cons both ways.
But having all infrastructure code in one place makes management of the infrastructure simpler.
You can allow folks to make changes to the TF code, but require code review/approval in TF Cloud
I really like the idea of keeping it localized inside the projects repo. we have very little “overarching” code that all projects use.
plus it allows devs of that project to see ( and modify ) the resources for only that project, and not get bogged down wading thru thousands of lines of other stuff for other projects.
Yeah… I can see the appeal
We have a small devops team (me) and almost no one else knows how to use TF.
Most of our apps are just dockerized and deployed via a TF module…so each app is encapsulated by 1 file generally in the main TF repo
so right now in the S3 backend I have code that provisions a redshift DB. How can I yank that out of that backend and put it into local tfstate files ?
while leaving the other stuff in there intact
I would try to import the resources
import into local tfstate, then how to remove it from the other backend without deleting it ?
humm I see the issue
so you want to move the resources to a new TF repo without deleting them?
Maybe try terraform state rm
and import into the other repo. https://www.terraform.io/docs/commands/state/rm.html
The terraform state rm command removes items from the Terraform state.
Sounds dreadfully manual
But probably the only way to do it
so yeah, this worked pretty well.. installed tfenv for switching back and forth between versions, did all the imports on the new one first, then state rm’d them from the old backend and it’s all sorted out.
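For anyone repeating this, the per-resource flow looks roughly like this (the resource address and identifier are hypothetical):
# in the new project directory (local state)
terraform import aws_redshift_cluster.main my-cluster-identifier
# after verifying the import, in the old repo (S3 backend)
terraform state rm aws_redshift_cluster.main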
to answer a point about terraform project layout: I use multiple repos and a remote tfstate for each. Then I use data terraform_remote_state to include state from the needed repos.
The pros are:
• access and changes are limited to the cloned repos
• state is split, so users working in different repos can run terraform plan in parallel
• smaller tfstate, quicker terraform plan execution
The cons are:
• take care to output the required values in each repo
For example I have a repo per project: network, core, kubernetes cluster, web … (one per “product” or perimeter)
also if you have a remote backend, switching to a local backend will copy it.. then you can change things and push to another backend.
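A minimal sketch of that terraform_remote_state pattern (bucket, key, and output names are illustrative):
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# consume an output exported by the network repo:
# vpc_id = data.terraform_remote_state.network.outputs.vpc_id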
awesome ! thanks @Pierre-Yves this sounds exactly how I want to organize everything.
2020-06-19
Hello, I’m getting an error running the following in terraform plan. From everything I read, this seems like it should work. If anyone can see what I’m doing wrong, I would appreciate the help.
variable "restrict_to_az" {
  type        = list
  description = "Optional list of Availability Zone IDs (not names) to restrict the deployment to."
  default     = []
}

locals {
  subnet_az = length(var.restrict_to_az) > 0 ? var.restrict_to_az : []
}

data "aws_subnet_ids" "public" {
  vpc_id = var.vpc_id
  tags = {
    Visibility = "public"
  }

  # Optionally filter the subnets we deploy to by a list of availability zone IDs
  dynamic "filter" {
    for_each = local.subnet_az
    name     = "availabilityZoneId"
    values   = [filter.value]
  }
}
The error is:
Error: Unsupported argument
on .terraform/modules/…/subnets.tf line 31, in data “aws_subnet_ids” “public”:
31: name = “availabilityZoneId”
An argument named “name” is not expected here.
Error: Unsupported argument
on .terraform/modules/…/subnets.tf line 32, in data “aws_subnet_ids” “public”:
32: values = [filter.value]
An argument named “values” is not expected here.
@DJ You want to wrap name and values in a content {} block
You are absolutely right! Thanks! How did I miss that?
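For readers hitting the same error, the corrected data source looks like this (same filter name as above):
data "aws_subnet_ids" "public" {
  vpc_id = var.vpc_id

  tags = {
    Visibility = "public"
  }

  dynamic "filter" {
    for_each = local.subnet_az
    # name/values must live inside content {} in a dynamic block
    content {
      name   = "availabilityZoneId"
      values = [filter.value]
    }
  }
}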
Hello SweetOps!
I’m hitting this one: https://github.com/hashicorp/terraform/issues/17300
Terraform Version Terraform v0.11.3 + provider.aws v1.8.0 Terraform Configuration Files # aws-stack/backend.tf terraform { backend "s3" { bucket = "my-project" key = "state…
Ya, we have that issue too
Kind of given up on finding a fix. Closest we’ve come is using symlinks
Thanks for the quick answer
Ya, it’s annoying that terraform output doesn’t behave like all the other commands
for this reason, we had to give up on using terraform init -from-module=... directly, but using something like #terragrunt will work around it
Will give a look to terragrunt, thanks!
Any other workaround than moving everything to the config dir that has all the tf files?
I just need to terraform output
btw, @zadkiel any insights on what we can do about this? https://github.com/aslafy-z/helm-git/issues/9
The plugin fails when ref refers to an annotated tag, which is the usual case for GitHub release tags. For example: helm repo add istio git+https://github.com/istio/istio@install/kubernetes/helm?re…
Right :smile: Had no time to look more into this for now.. Best I can say is using sparse=0, which is definitely not ideal
ok - ya, seems more like a git problem
@Jeremy G (Cloud Posse)
@Erik Osterman (Cloud Posse) Not sure what you are asking. Between the problem statement and the further details in the comments, the Git issue explains pretty much everything.
More of an FYI
@zadkiel We are having another problem with helm-git: when we run helmfile with helm v2, we get
Error: unknown shorthand flag: 'c' in -c
Which most likely comes from helmfile auto-detection of helm3. Do you have a workaround for that?
Works with helm-git version 0.4.2 but not 0.7.0
Looks like the problem is https://github.com/ms32035/helm-git/blob/fd7f37634d3d1d0d964a37653a9ef7f2ad92ecd2/helm-git-plugin.sh#L30
Helm plugin to fetch charts from Git repositories. Contribute to ms32035/helm-git development by creating an account on GitHub.
@mumoshu :point_up: This is an interaction among helmfile, helm diff, and helm-git. Seems something sets HELM_BIN to helm diff
Can you have a try with the last release?
@jaroslaw-osmanski Allow using git+http (#105) @jaroslaw-osmanski fix: Change HELM_BIN when some plugins breaks it (#106) @aslafy-z fix test for helm3
@Jeremy G (Cloud Posse)
@zadkiel This is going to be problematic for us, as we use helm2 and helm3 for Helm binaries so we can have both versions installed at the same time.
I suggest you look for an environment variable called HELM_GIT_HELM_BIN and use that if set:
- If HELM_GIT_HELM_BIN is set, set HELM_BIN="$HELM_GIT_HELM_BIN"
- Otherwise, check HELM_BIN for sanity and use it if it passes tests. Include in the sanity tests that helm version -c returns without an error, although that could be a bit tricky because you will want a portable way to enforce a timeout on that command, since it appears the terraform provider causes a hang.
- Set HELM_BIN to “helm” if the sanity check fails
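A rough shell sketch of that selection logic (hedged: timeout here is GNU coreutils and not fully portable, which is exactly the caveat above):
# pick the helm binary for helm-git to use
if [ -n "${HELM_GIT_HELM_BIN:-}" ]; then
  HELM_BIN="$HELM_GIT_HELM_BIN"
elif timeout 5 "${HELM_BIN:-helm}" version -c >/dev/null 2>&1; then
  HELM_BIN="${HELM_BIN:-helm}"   # existing HELM_BIN passed the sanity check
else
  HELM_BIN="helm"                # fall back to plain helm
fi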
Hi all! Just saw this and was wondering for any GitLab shops or greenfield stuff, has anybody tried out “GitLab managed Terraform State” https://docs.gitlab.com/ee/user/infrastructure/#gitlab-managed-terraform-state
Documentation for GitLab Community Edition, GitLab Enterprise Edition, Omnibus GitLab, and GitLab Runner.
We’ve been using Terraform Cloud (free-tier) linked to GitLab. Works well for us.
nice! Seems they both have encryption, locking, and remote execution, wondering what the upside of GitLab managed is – just one less 3rd party service integration to maintain?
I think the argument for using GitLab would be total control. In our case, we were looking to have a more managed service that let us get up and going quickly. GitLab option would be more work upfront but may give you greater ability to really fine tune things.
I’ve been using it for a few projects hosted in GitLab that were previously using TF Cloud and S3 backends. It works fine with no fuss. You have to use a PAT to interact with the backend, though.
This is neat - didn’t know gitlab offered this. Fits nicely into their devops positioning.
From the looks of it, seems you’d need to create individual gitlab projects/subprojects for each TF state file you’d init? (Not simply a matter of specifying a backend “key”)?
I believe you’re correct, @vicken - one state file per project. If your project structure aligns to that, it should work well. If you’re doing a monorepo style of coding, though, it may not be the right solution.
thanks for clarifying!
Actually, I’m wrong. The projects do support multiple state files. The documentation is just behind on that. You just change the name of the file path after /terraform/state/
Issue: https://gitlab.com/gitlab-org/gitlab/-/issues/220559
Docs MR: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/34797
Our documentation for using Terraform backend doesn’t include an example of using multiple state files, however, our blog post references support for multiple named state files. Our customer’s recommendation is…
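For reference, a hedged sketch of what such a backend block can look like (host, project ID, and state name are placeholders; the PAT is passed via -backend-config or environment variables at init time):
terraform {
  backend "http" {
    address        = "https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/<state-name>"
    lock_address   = "https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/<state-name>/lock"
    unlock_address = "https://gitlab.example.com/api/v4/projects/<project-id>/terraform/state/<state-name>/lock"
    lock_method    = "POST"
    unlock_method  = "DELETE"
    retry_wait_min = 5
  }
}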
Ah nice find! cutting edge stuff
Gitlab says the state is encrypted and I imagine the durability is good - does anyone have concerns with storing Terraform State in Gitlab as opposed to say s3 + dynamodb?
Does the typical “TACOS” store the state files / provide options to store in your cloud provider account of choice?
Some questions came up within our team regarding the loss or compromise of the statefile by the provider when storing statefiles with Gitlab - I reached out to our TAM to get more clarification on durability and security guarantees.
Thoughts @Erik Osterman (Cloud Posse) ?
in case ya’ll haven’t switched to the templatefile() function, away from the template_file provider yet, the provider was just archived… https://github.com/hashicorp/terraform-provider-template/issues/85
This Terraform provider is archived, per our provider archiving process. While Terraform configurations using this provider will continue to work, we recommend that usages of this provider's re…
eek. I think we have a few template_file references
no bueno
i expect the provider will continue to be available, but i’d definitely not use it in new work, and move away from it in anything that i’m touching…
worst case, keep a copy of the binary handy and drop it alongside the terraform binary when you need it?
2020-06-21
Scheduled Maintenance | Terraform Cloud Jun 21, 07:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 15, 16:58 UTC Scheduled - Terraform Cloud will undergo scheduled maintenance on Sunday, June 21st beginning at 7:00 UTC. We expect this maintenance to complete in an hour. During this time, terraform runs themselves will be operational but output from the runs may be impacted for a short period of time.
HashiCorp Services’s Status Page - Scheduled Maintenance | Terraform Cloud.
Scheduled Maintenance | Terraform Cloud Jun 21, 08:05 UTC Completed - The scheduled maintenance has been completed.Jun 21, 07:01 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 15, 16:58 UTC Scheduled - Terraform Cloud will undergo scheduled maintenance on Sunday, June 21st beginning at 7:00 UTC. We expect this maintenance to complete in an hour. During this time, terraform runs themselves will be operational but output from the runs may be impacted for a short period of time.
2020-06-22
Scheduled Maintenance - Terraform Cloud THIS IS A SCHEDULED EVENT Jun 25, 08:00 - 09:00 UTCJun 22, 10:17 UTC Scheduled - Terraform Cloud will undergo scheduled maintenance on Thursday, June 25th 2020 beginning at 8:00 UTC. We anticipate the maintenance will take no longer than an hour. During this time, some Terraform Cloud plans or runs may be delayed.
HashiCorp Services’s Status Page - Scheduled Maintenance - Terraform Cloud.
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 01, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
Hi! is anyone else getting
Error: unexpected plugin checksum terraform
in their plans? i’m getting that locally and from atlantis
ME
ME
Us too.
Just came here to check.
the status page shows nothing
failing on AWS provider CI tests as well: https://github.com/terraform-providers/terraform-provider-aws/runs/796393230
Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.
They are fixing it now: https://github.com/terraform-providers/terraform-provider-aws/issues/13877
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
yeah; was just saying to myself that this is one of most fast moving GH issues I’ve ever seen!
thanks guys
maybe I should pin the providers……
How many people pin their providers? Does it depend on the provider? Personally I never pin the AWS one, and it hasn’t bitten me very often. But some on my team pin providers all the time.
We’ve started pinning providers as part of upgrading everything to HCL2.
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
example, but we’re using ~> so it’s not too strict.
Guess that would help in this situation.
~> is not a pin, it is a pessimistic constraint that has an upper bound
FWIW, I use ~> too
I wouldn’t make this issue the reason to start pinning the AWS provider. It is generally extremely stable. The recommended practice is to use the pessimistic constraint so that you never jump to another major version.
provider "aws" {
version = "~> 2.67.0"
}
Is functionally identical to version = ">= 2.67.0, < 3.0.0"
we also use ~>, but we also use terraform-bundle to pre-stage the providers, so we only use versions we’ve actually run through tests with the bundle
@Robert Horrox just pointed out to me something that’s going to be a big problem we need to address ASAP in anticipation of 0.13.
All cloud posse mods will fail to init
Makes testing 0.13 really hard
Here I was thinking 0.13 was no big deal.
what ?
What’s going to cause them to fail?
In the versions.tf there is a constraint for 0.12: ~> 0.12
Ah. That’s an intentional thing though right? Same as they were from 11 to 12. Each one needs to be tested to see if it causes any problems on 13, and if not the constraint would just be updated
I dont see that as a problem, just a step that needs to be taken in order to support 13.
Yeah, seems legit. Luckily, if we do we want to upgrade that across all the repos it’d be easy to automate that.
It makes perfect sense, unfortunately there is no way to override the version constraint. I tried override.tf and it didn’t work
@Erik Osterman (Cloud Posse) maybe you can cut a 0.13 branch for testing on the repos
Or I have a bad idea that no one would like
Not even I like
ya, maybe we need to do the 0.13/master branches like we did for 0.11/master
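For concreteness, a sketch of the kind of versions.tf relaxation being discussed (the exact upper bound was still undecided at the time):
terraform {
  required_version = ">= 0.12.0, < 0.14.0"
}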
Has anyone overcome the issue with CloudFront not accepting a Lambda@Edge version of $latest? I’ve tried creating an alias and referencing that, but still get $LATEST as the version number.
did you try to use two $$?
I haven’t yet @Andriy Knysh (Cloud Posse), how would I do that?
$$LATEST - maybe try that
Ahh ok. thanks @Andriy Knysh (Cloud Posse) I will give it a go. Because CloudFront needs a version number e.g. 1, when I reference the lambda in cloudfront config it errors.
The other workaround is to create an alias with a specific version number.
But that is still manual if I want to bump the version
let me know if it fixes it. We had something similar in other resources (not related to CF and Lambda@Edge), but something to check
Managed to find a solution. I needed to have the publish attribute on my lambda resource, which then would publish the new lambda and give it a version.
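In other words (a sketch; names and packaging details are hypothetical):
resource "aws_lambda_function" "edge" {
  function_name = "my-edge-fn"                 # hypothetical name
  filename      = "lambda.zip"                 # hypothetical package
  handler       = "index.handler"
  runtime       = "nodejs12.x"
  role          = aws_iam_role.lambda_edge.arn # hypothetical role
  publish       = true                         # publishes a numbered version on each change
}

# CloudFront wants the versioned ARN, which qualified_arn carries:
# lambda_arn = aws_lambda_function.edge.qualified_arn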
if any volunteers are interested in helping out in maintaining our modules, please reach out to me. we have a private contributors channel where we coordinate efforts. Typical things we need help with are terraform code reviews, fixing terratest (golang) integration tests, helping other contributors get their PRs merged, etc.
2020-06-23
Hello, I am trying to dynamically find the NAT gateways in my various env VPCs using datasources for both subnets and nat gateways. However the nat gateway datasource expects a single value as an output. Our subnets are across multiple azs and each has a NAT. Is there a way to still dynamically retrieve NAT gateway IPs using TF?
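No reply in the archive, but one possible approach (a hedged sketch: assumes TF 0.12.6+ for for_each on data blocks, and that you can tag or otherwise identify the subnets that actually contain the NAT gateways):
data "aws_subnet_ids" "nat_subnets" {
  vpc_id = var.vpc_id

  tags = {
    Tier = "public"   # illustrative tag
  }
}

data "aws_nat_gateway" "this" {
  for_each  = data.aws_subnet_ids.nat_subnets.ids
  subnet_id = each.value
}

output "nat_gateway_public_ips" {
  value = [for ngw in data.aws_nat_gateway.this : ngw.public_ip]
}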
for this module https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
it creates an aws_iam_role.ecs_service task role, but we already have the AWSServiceRoleForECS:
aws iam list-roles --path-prefix /aws-service-role/ecs.amazonaws.com/
Is there a difference between the role that it creates and the AWS provided role?
If there isn’t a difference, we could omit the creation of that role, use a data source to retrieve the AWS one, and save ourselves a service role per task.
Describe the Feature Use AWS service role instead of creating our own https://console.aws.amazon.com/iam/home?region=us-west-2#/roles/AWSServiceRoleForECS Expected Behavior Reuse an existing role i…
I think (not sure) that AWS only creates the AWSServiceRoleForECS role when you create your first cluster. So it wouldn’t be guaranteed to be around when the module tries to associate it.
ah interesting
i also realized that if you’re using fargate this service role is no longer created
cause fargate requires awsvpc network mode
(this goes beyond my knowledge of ECS - so maybe double check in #terraform or other trusted contributor) =0
Morning! I’m using this module: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group and it’s working great, except I’m trying to figure out how to lifecycle-ignore latest_version, because it increases every time I do a terraform apply and I’d like to prevent doing an update in-place on every run.
Anyone know how to get around this?
I’m having a stupid issue with lists: I’m passing a list as an argument [aws_iam_role.AuroraAccessToDataBuckets.arn, aws_iam_role.AuroraIamAuth[*].arn] but this does not work, since it produces an error
so I tried join: join(", ", aws_iam_role.AuroraIamAuth[*].arn) but that does not work because it will look like this:
iam_roles = [
  "arn:aws:iam::1111111111111111:role/AuroraAccessToDataBuckets",
  + "arn:aws:iam::1111111111111111:role/RDSIamAuth-xxxxx, arn:aws:iam::1111111111111111:role/RDSIamAuth-bbbb",
]
This is a guess… have you tried: concat([aws_iam_role.AuroraAccessToDataBuckets.arn], aws_iam_role.AuroraIamAuth.*.arn)
mmmm let me try that
that did it
thanks a lot
Guesses FTW!
In my variables.tf file, I set an AMI value. Instead of hard-coding it I’d like to get the value from a script execution. Is there a way to do that? This is the equivalent of $(./get-current-centos7-ami.sh) in a shell script.
Use the aws ami data source
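Something like this (a sketch; the owner and name filter are illustrative and should match whichever image publisher you trust):
data "aws_ami" "centos7" {
  most_recent = true
  owners      = ["aws-marketplace"]   # illustrative owner

  filter {
    name   = "name"
    values = ["CentOS Linux 7*"]      # illustrative name pattern
  }
}

# then: ami = data.aws_ami.centos7.id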
Thanks.
2020-06-24
what interpolation can I use to detect the dir terragrunt is run from? like I am using terragrunt --terragrunt-working-dir
It depends… You can try Terraform’s path.root, path.cwd etc… https://www.terraform.io/docs/configuration/expressions.html#references-to-named-values
If you want to detect the dir where the terragrunt.hcl is, you’ve got Terragrunt’s get_terragrunt_dir() https://terragrunt.gruntwork.io/docs/reference/built-in-functions/#get_terragrunt_dir
Terragrunt allows you to use built-in functions anywhere in terragrunt.hcl, just like Terraform.
No, I want to detect the dir where terragrunt is run from. Like:
/
/projects/project1/terragrunt.hcl
/templates/terragrunt.hcl
I am running terragrunt from the root path with --terragrunt-working-dir /projects/project1/terragrunt.hcl, and terragrunt.hcl should contain something like include = $pwd/templates/terragrunt.hcl
interesting… Have you tried anything yet?
how can I use a terraform func in terragrunt? Like in inputs?
inputs = {
  foo = path.cwd()
}
won’t work
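For the include question above, one possible approach (a hedged sketch using Terragrunt built-ins; the relative path is an assumption about the layout):
# in /projects/project1/terragrunt.hcl
include {
  # get_terragrunt_dir() resolves to the directory of this terragrunt.hcl
  path = "${get_terragrunt_dir()}/../../templates/terragrunt.hcl"
}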
Hello, did you use https://github.com/eerkunt/terraform-compliance ?
I have installed it, but 1.2.7 uses the latest terraform version 0.13.0-beta1 and doesn’t run against 0.12. Is there a way to find a version compatible with the stable terraform 0.12.26?
a lightweight, security focused, BDD test framework against terraform. - eerkunt/terraform-compliance
it looks like I have to convert a terraform plan to json before parsing it: terraform show -json /tmp/plan.out > /tmp/plan.out.js
then I can run terraform-compliance -f compliance -p /tmp/plan.out.js
and it failed with every version with an error
which tool do you use to check your terraform code ?
Sorry, just saw this. terraform-compliance doesn’t use any terraform executable unless you are using the docker image. Apart from that, if you already have a terraform executable on your disk, you can use the -t parameter to pass that terraform executable.
Apart from those, the only reason why it needs terraform is to convert the plan.out to a plan.out.json. If you already converted that somehow, you can directly use the plan.out.json file with -p, which won’t check for any terraform executable.
Currently terraform-compliance only supports 0.12.*; we didn’t test with 0.13 yet. Whenever it’s released, we will add support for that version asap.
I need to check again, terraform-compliance :)
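For reference, the flow described above end to end (paths are illustrative):
terraform plan -out=plan.out
terraform show -json plan.out > plan.out.json
terraform-compliance -f compliance/ -p plan.out.json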
qq on https://github.com/cloudposse/terraform-aws-jenkins a required option is var.github_branch:
GitHub repository branch, e.g. 'master'. By default, this module will deploy '<https://github.com/cloudposse/jenkins>' master branch
it’s the name of the branch
see this example https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/complete/fixtures.us-east-2.tfvars#L45
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
ty!
is this asking for the branch name or the url to the github repo?
v0.12.27 Version 0.12.27
v0.12.27 0.12.27 (June 24, 2020) BUG FIXES: backend/remote: fix panic when there’s a connection error to the remote backend (#25341)
Ensure that the *http.Response is not nil before checking the status. This can happen when retrying transport errors multiple times.
For the devs who create and maintain company specific terraform modules, how do you maintain your modules? do you have pre-commit hooks? automatic terraform fmt? terraform-docs? chatops?
would love to hear what you do to initialize your repository and what you use to maintain your internal modules.
thread start
im trying to create a nice checklist so i can follow a plan that’s been vetted. so far ive been thinking of following the cloudposse method. except we dont use github actions.
plan so far
• .github for pull request templates
• docs for module docs
• examples for full examples
• test for tests that run using bats
• .precommit to use anton’s precommit hooks (see the sketch after this list)
• then the standard terraform files: main, outputs, variables, and versions
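For the precommit bullet, a sketch of a .pre-commit-config.yaml wired to antonbabenko/pre-commit-terraform (the rev pin is hypothetical; use a real release tag):
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.31.0   # hypothetical pin
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
      - id: terraform_validate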
To get full coverage (all regions) in an account with AWS Config and AWS Guardduty, am I correct in thinking that I have to add AWS providers for every active region in the account, and add one of each of the following for each region/provider?:
aws_config_configuration_recorder
aws_config_configuration_recorder_status { is_enabled = true }
aws_guardduty_detector
Does this change if I am using AWS Organizations features together with these services? E.g., if I create aws_guardduty_organization_configuration { auto_enable = true } in every region in the guardduty master-account, can I skip creating aws_guardduty_detector in all of the member accounts?
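For the per-region half of the question, a sketch of the provider-alias pattern (repeated once per active region; names are illustrative):
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

resource "aws_guardduty_detector" "use1" {
  provider = aws.use1
  enable   = true
}

# repeat the alias plus the detector/recorder resources for each region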
I just made a mistake ….. I was using resource “random_string” and I realized that terraform plan and apply are showing the password in the terminal. I thought there was a sensitive clause or something like that to obscure them?
for sensitive random values please use random_password.
Identical to random_string with the exception that the result is treated as sensitive and, thus, not displayed in console output.
There’s a sensitive flag on outputs, but I don’t think that will prevent it from showing up in a plan, so I’m not sure what use it is.
just found the random_password
this was not an issue until we started working with Atlantis
thanks guys
2020-06-25
Scheduled Maintenance - Terraform Cloud Jun 25, 08:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 22, 10:17 UTC Scheduled - Terraform Cloud will undergo scheduled maintenance on Thursday, June 25th 2020 beginning at 8:00 UTC. We anticipate the maintenance will take no longer than an hour. During this time, some Terraform Cloud plans or runs may be delayed.
HashiCorp Services’s Status Page - Scheduled Maintenance - Terraform Cloud.
Scheduled Maintenance - Terraform Cloud Jun 25, 09:00 UTC Completed - The scheduled maintenance has been completed.Jun 25, 08:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Jun 22, 10:17 UTC Scheduled - Terraform Cloud will undergo scheduled maintenance on Thursday, June 25th 2020 beginning at 8:00 UTC. We anticipate the maintenance will take no longer than an hour. During this time, some Terraform Cloud plans or runs may be delayed.
HashiCorp Services’s Status Page - Scheduled Maintenance - Terraform Cloud.
v0.12.28 Version 0.12.28
v0.12.28 0.12.28 (June 25, 2020) BUG FIXES: build: build the 0.12 version of Terraform with Go 1.12.13, rather than the Go 1.14.2 used for 0.13 (#25386)
Terraform Version $ terraform version runtime: netpoll: break fd ready for -2 fatal error: runtime: netpoll: break fd ready for something unexpected runtime stack: runtime.throw(0x2a2e35a, 0x39) /u…
I was hoping to use interpolation in the following way but it is not allowed. What is the better way?
variable "tag_name" {
default = "centos-${formatdate("YYYYMMDDhhmmss", timestamp())}"
}
I can create tags using interpolation, but wanted to centralize the value.
tags = { Name = "centos-${formatdate("YYYYMMDDhhmmss", timestamp())}" }
@David Medinets Instead of input variables, which accept just static input, please use locals for “run-time” variables: https://www.terraform.io/docs/configuration/locals.html
Local values assign a name to an expression that can then be used multiple times within a module.
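In this case that would look something like (a minimal sketch):
locals {
  tag_name = "centos-${formatdate("YYYYMMDDhhmmss", timestamp())}"
}

# then: tags = { Name = local.tag_name }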
How come https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password does not say how to find the password for later use?
It’s stored in the state; you reference it by the name of the resource
ie in the docs it shows password = random_string.password.result
If you mean why isn’t there a ‘data’ lookup for it, it’s because this isn’t backed by some external resource such as AWS SecretsManager.
Thanks. I did not read carefully enough to see that .result was the password.
I am creating a password as resource
resource "random_password" "centos_user_password" {
length = 16
special = true
override_special = "_%@"
}
Then trying to use inside a template_file so that I can push it to Ansible.
data "template_file" "tf_ansible_vars_file" {
template = "${file("./tr_ansible_vars_file.yml.tpl")}"
vars = {
centos_password = random_password.centos_user_password.result
}
}
Terraform does not like the interpolation. The message is There is no variable named "centos_user_password"
did you set the var name correctly in the template, centos_password vs centos_user_password?
also, i would recommend using templatefile() instead of template_file, since the latter has been archived and at some point will probably be deprecated, https://github.com/hashicorp/terraform-provider-template/issues/85
This Terraform provider is archived, per our provider archiving process. While Terraform configurations using this provider will continue to work, we recommend that usages of this provider's re…
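For the record, a minimal sketch of the templatefile() equivalent of the data source above (same file and variable names):
locals {
  tf_ansible_vars_file = templatefile("./tr_ansible_vars_file.yml.tpl", {
    centos_password = random_password.centos_user_password.result
  })
}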
That’s just embarrassing. Thanks.
eh, it’s easy to miss. it gets easier to spot when you’ve done the exact same thing yourself before, mebbe several times…….
Learn about some of the tentative milestones that the HashiCorp Terraform engineering team wants to meet before they scope out a 1.0 release of Terraform.
I watched this the day of because the title got me. Honestly, I didn’t think it gave too much concrete information. Just their philosophy. Which was cool, but not sure if it was what I was looking for haha.
Is there a terraform equivalent to ansible’s password_hash? I know about the random_password resource. I want to do something like "my_password | password_hash('sha512')"
I’m not familiar with ansible, so I don’t know what the password_hash function does, but there is a sha512 function… https://www.terraform.io/docs/configuration/functions/sha512.html
The sha512 function computes the SHA512 hash of a given string and encodes it with hexadecimal digits.
How come it is only after I ask the question that I realize the answer? Push the clear password into ansible and hash the password in the playbook. The terraform side only needs the clear text.
password: "{{ centos_user_password | password_hash('sha512') }}"
2020-06-26
Hello, it seems the terraform resource "azurerm_template_deployment" to deploy ARM templates doesn’t print out the output values, unlike what is stated in https://www.terraform.io/docs/providers/azurerm/r/template_deployment.html
Manages a template deployment of resources.
This would make vetting terraform modules a lot easier
Is your feature request related to a problem? Please describe. I'm often missing the list of resources that are being created by a module. It would help to get a feeling of what a module provid…
i like the idea, but maybe a “control variable” or even the entire “control expression” instead of specifically enable/disable. not sure how to differentiate whether it is a single enable/disable variable, or a more complex expression consisting of multiple variables/locals that influence the count
Is your feature request related to a problem? Please describe. I'm often missing the list of resources that are being created by a module. It would help to get a feeling of what a module provid…
this is always a good module for getting a feel for how complex a count expression can be… https://github.com/terraform-aws-modules/terraform-aws-vpc/blob/master/main.tf#L932
Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc
Perhaps checking whether it has a count expression would be enough to presume that it may have logic to enable it
Hi - I’d like to use the ecr module, but can the version-range be changed to allow for 0.13? I want to iterate over a collection in the module - which is a TF0.13 feature
Hey @David J. M. Karlsen — Contributor team is in talks about supporting 0.13 still but no final plan yet right now. I think it’ll likely be tackled at some point in the coming weeks across all the repos as we’ve discussed just updating the terraform_version constraint to be >= 0.12 && <= 0.14, but haven’t confirmed how we’re going to accomplish it.
I’d say submit a PR that updates that constraint and we can chat through it there if you’re only looking to do this for one module.
I think you want this: https://github.com/cloudposse/terraform-example-module/blob/master/versions.tf#L2
Example Terraform Module Scaffolding. Contribute to cloudposse/terraform-example-module development by creating an account on GitHub.
Signed-off-by: David Karlsen [email protected] what Change TF version range why Support iteration on modules with for_each references https://sweetops.slack.com/archives/CB6GHNLG0/p159319…
We’ve just pushed a cloudposse reference module example. This is the scaffolding we use to create new modules following all the best practices I know of
@Josh Duffney woot woot
2020-06-27
Terraform v0.12.20 launched some sweet helper functions that I am just now finding out about: can() and try().
variable_validation + can is pretty rad — definitely could use that in CP modules for better input validation / messaging to consumers at some point, once it’s more widely supported and more folks are on v0.12.20+.
https://levelup.gitconnected.com/using-terraforms-try-can-and-input-validation-eb45037af2b2
Learn how to use Terraform’s can() and try() functions.
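A quick sketch of the pattern (the variable name is illustrative; note that on 0.12.x this still required the variable_validation experiment to be enabled):
variable "ami_id" {
  type = string

  validation {
    # can() turns the regex failure into a clean true/false
    condition     = can(regex("^ami-", var.ami_id))
    error_message = "The ami_id value must start with \"ami-\"."
  }
}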
I remember seeing those functions but haven’t tried them yet. Thanks for the reminder.
Also, why are there so many spelling errors in that post? Even though it makes a great point about using these functions, the errors distract from it.
great thanks @Matt Gowie for sharing your work , I was looking for a way to do variables validation !
2020-06-28
During remote execution (remote-exec) my script is supposed to run to completion; however, it got terminated at the halfway mark and that made my deployment unusable. If I am not wrong, I guess it got terminated because some string from the script output made terraform assume the script execution was complete. If this is the case, any idea how to make terraform wait until the script is completed? Is there a way to overcome this?
hello @Haroon Rasheed, have you looked at https://www.terraform.io/docs/providers/time/r/sleep.html ?
Manages a static time resource.
Looks interesting let me check it out..Thanks mate!!
2020-06-29
Hi Guys I am using https://github.com/cloudposse/terraform-aws-cloudtrail to setup cloudtrail I am currently facing issues with:
The given value is not suitable for child module variable "event_selector"
defined at ../modules/terraform-aws-cloudtrail/variables.tf:90,1-26: list of
object required.
And here is my tf file
event_selector = {
  read_write_type           = "All"
  include_management_events = true

  data_resource = {
    type   = "AWS::S3::Object"
    values = ["arn:aws:s3:::"]
  }
}
Can’t figure out what is wrong here… Can someone please suggest something…
Thanks!
It expects a list, you gave it a map
Just add [ ] around the map and I think you’re ok
Watch out on that selector; if you have tons of activity with S3 you could be looking at a big bill for cloudtrail
I did this last year and we had to pay 3.5k for one month of cloud trail
event_selector [
include_management_events = true
read_write_type = "All"
data_resource [
type = "AWS::S3::Object"
values = [
"arn:aws:s3:::",
]
]
]
@Zach did u mean it like this…?
No it needs to be a list with that map in it
@jose.amengual Thanks for the heads up!
event_selector = [{
<stuff>
}]
presumably the module is using this as a quick selector where if the list is empty it doesn’t create that block on the resource
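Applied to the original snippet, that would be (a sketch; whether data_resource also needs to be a list depends on the module’s type definition):
event_selector = [{
  read_write_type           = "All"
  include_management_events = true

  data_resource = [{
    type   = "AWS::S3::Object"
    values = ["arn:aws:s3:::"]
  }]
}]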
Just one more thing…
How do u manage Insights events using terraform while creating cloudtrail?
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jul 08, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
I am trying to create two redis caches on azure, one with a standard profile and another with a premium profile. I have copied this code from Microsoft, but I believe that it is for 0.11 or lower, not 0.12
resource "azurerm_redis_cache" "standard" {
  count = var.redissku == "Standard" ? 1 : 0

  name                = "${var.prefix}.Redis${random_id.redis.hex}"
  location            = "$azurerm_resource_group.main.location"
  resource_group_name = "$azurerm_resource_group.main.name"
  capacity            = var.redisvmcapacity
  family              = var.redisvmfamily
  sku_name            = var.redissku
}
when I run a terraform plan or validate I get an error here:
Error: “resource_group_name” may only contain alphanumeric characters, dash, underscores, parentheses and periods
on Lamp-Azure.tf line 274, in resource “azurerm_redis_cache” “standard”: 274: resource “azurerm_redis_cache” “standard” {
I think it has to do with the format of the name option. None of the outputs from my variables have any banned characters, so I suspect it is something to do with the way my variable is built up, and that terraform is reading it literally as ${var.prefix}Redis${random_id.redis.hex} rather than “myValueRedis6d” (or some other random hex value)
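There is no reply in the archive, but the diagnosis above looks close: "$azurerm_resource_group.main.name" is a literal string (the ${ } interpolation is missing), so the literal "$" trips the resource_group_name character check. In 0.12 you can drop the interpolation syntax entirely for plain references; a sketch:
resource "azurerm_redis_cache" "standard" {
  count = var.redissku == "Standard" ? 1 : 0

  name                = "${var.prefix}.Redis${random_id.redis.hex}"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  capacity            = var.redisvmcapacity
  family              = var.redisvmfamily
  sku_name            = var.redissku
}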
Anyone using Dependabot with Terraform updates…? I’ve been trying to find some real-life examples and haven’t had a moment to try out yet anything, just wanted to check if someone has experience already. I might ask GitHub as well, they had their KB-forum…
any particular question? we do it, but we have to run dependabot with a custom patch with support for tf 0.12
easy enough with a github action… https://github.com/patrickjahns/dependabot-terraform-action
Github action for running dependabot on terraform repositories with HCL 2.0 - patrickjahns/dependabot-terraform-action
We use https://github.com/renovatebot/renovate to update our TF dependencies
Universal dependency update tool that fits into your workflows. - renovatebot/renovate
@loren where do you store your modules…? Ours are in TFE… To my understanding, the private module registries are not (yet at least) supported - only (public?) GitHub repository module sources…
@Tyrone Meijn any issues…? Are you able to handle all of them, like, core version bumps, provider bumps and module bumps with Renovatebot?
using dependabot via the github action should work for private modules also
Ok thanks @loren - I’ll test this
Reposting one last time:
Morning! I’m using this module: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group and it’s working great except I’m trying to figure out how to lifecycle ignore latest_version because it increases every time I do a terraform apply and I’d like to prevent doing an update in-place on every run.
Anyone know how to get around this?
@Erik Osterman (Cloud Posse) not sure if you caught my comment, but it looks like the ecs-web-app module failure is due to https://github.com/terraform-providers/terraform-provider-github/commit/5e9d0756b071efa05a909fcff6428010ad661181#diff-9d0b8010dc61a844fd5ffde3f9a8c38e. The webhooks module is going to start breaking again for people.
Terraform GitHub provider. Contribute to terraform-providers/terraform-provider-github development by creating an account on GitHub.
Very frustrating - that was deprecated in one release, on a minor bump
we’re playing wackamole trying to keep tests stable
Yeah, understood. Apologies - I think one of the original breakages was my change. But this is a bit disappointing for the official provider, IMO
I was just looking before you posted this, it is pretty disappointing and it was not documented in the provider docs
Not even in the changelog
nope
FYI, I tried submitting an upstream issue with TF, but I can’t find the magic formula to let me submit the issue (submit is disabled).
This issue is definitely doing its best to make me feel dumb
lol
anyone know what the difference is between environment and stage in the label module: https://github.com/cloudposse/terraform-null-label/blob/master/variables.tf#L7-L16
They are very similar.
real world use cases would be nice to see
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
their variable descriptions look similar
only difference that i can see is that the environment is shown before the stage according to the label order
just choose one and leave the other undefined
exactly, very similar. hence my confusion!
A label follows the following convention: {namespace}-{environment}-{stage}-{name}-{attributes}. The delimiter (e.g. -) is interchangeable. The label items are all optional. So if you prefer the term stage to environment, you can exclude environment and the label id will look like {namespace}-{stage}-{name}-{attributes}. If attributes are excluded but stage and environment are included, id will look like {namespace}-{environment}-{stage}-{name}
from the readme
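To make that concrete, a usage sketch (the ref and values are illustrative):
module "label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/x.y.z"   # pick a real release tag
  namespace   = "acme"
  environment = "ue1"
  stage       = "prod"
  name        = "app"
}

# module.label.id => "acme-ue1-prod-app"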
looks like stage came first and then environment. but yea, looks like you can provide either, so if you don’t like one, keep it undefined
good catch!
thanks!
ya, it was really to appease more people
our usage of stage is pretty opinionated. so adding environment appeals to more! now, IMO a stage might have multiple environments, so this is a nice way to disambiguate.
note, not all of our modules have been updated to support it. if you find it lacking, PR it and we’ll promptly approve and merge.
ooo thats good to know. i havent adopted null label but if/when i do, i’ll use stage to prevent limitations
Has anyone toyed around with https://www.terraform.io/docs/providers/aws/r/servicequotas_service_quota.html
Manages an individual Service Quota
only trough the console…I wonder how that will work
what’s the best way for us to monitor limits? have you seen a lambda for this? the one by AWS labs requires trusted advisor
I’ve used trusted advisor and cloud custodian for this
We recently moved to aws service limit checker which does this in a smoother way
this is the lambda by aws labs?
Yes
?
2020-06-30
Anyone using Open Policy Agent/Conftest with Terraform?
haven’t tried it but it looks useful. i’d love to use it, but i think CICD is a prerequisite for this, so it would be interesting to see it with atlantis.
just found this: https://marcyoung.us/post/atlantis-opa/
We really like Atlantis and smart pre-flight checks. If you see a trend in my latest blogs it’s likely you’ll guess i really like OPA. We …
Ah, yup, doing something similar
Can anybody help with what I think is a basic Terraform issue.
I am having issues with parsing derived values with the azurerm provider.
it seems to be related to the resource.example.id value
I am using 0.12.16 on windows