#terraform (2020-08)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-08-02
Hey everyone, have been using this module: https://github.com/cloudposse/terraform-aws-rds-cluster - and didn’t get around to sorting out our upgrade strategy :face_palm:
Now looking to update our minor version via the engine_version variable. Changing this and doing a plan shows:
- rds_cluster to be updated in place.
- rds_cluster_instance to be recreated « downtime event I guess (RDS instances can take many minutes to be created).
Thoughts and questions:
- The apply_immediately variable is set to true (default). We could change it.
- auto_minor_version_upgrade for instances defaults to true; we would want to change it (add to module) if managing the version in TF.
- I guess ZDP (Zero Downtime Patching) isn’t available via Terraform? Or might it happen if we disable apply_immediately and let the update run during the maintenance window?
- We could use -target=cluster|instance[0-2] to limit the actions a plan does, but this means babysitting the upgrade and could result in downtime if we did an apply without the -target flag.
- How are others doing this?
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
Our plan is:
- version in tf - no surprises
- disable auto updates - no surprises - PR added
- apply immediately false - do when ready via maintenance window/console/target
we do this manually and then we update terraform so it matches the version
I have not found a clean way to do this without taking down the cluster or hitting a timeout
released #74 as 0.29.0
Thanks PePe. Thanks Erik for the quick turnaround!
Indeed. So my latest attempt following this: https://github.com/terraform-providers/terraform-provider-aws/issues/9401#issuecomment-551350474
With this change to instance in module:
# engine_version = "" # commented out, leave versioning in cluster only
lifecycle {
create_before_destroy = true
ignore_changes = [engine_version]
}
Then version bump in TF, then terraform apply with:
- apply_immediately = false « resulted in no change, or even pending maintenance in Console
- apply_immediately = true (cluster only, not instance) « resulted in no change I think?
- apply_immediately = true (cluster & instances) « see below…
Test results:
- upgrade to cluster happening straight away in place
- upgrade to instances happening straight away in place without creating new instances
- ~23m taken to complete according to Terraform
- ~10s shutdown and restart events in Console. << presumably hard outage
- No failover events in Console - total cluster outage?
- versioning in Terraform matching AWS Console
To be confirmed:
- actual app downtime during this - reboots are at unknown/unscheduled time, didn’t have any apps connected to RDS.
- whether this is still worth doing rather than just updating manually in Console or splitting TF module
Terraform Version Terraform v0.12.3 provider.aws v2.16.0 provider.template v2.1.2 Affected Resource(s) aws_rds_cluster aws_rds_cluster_instance Terraform Configuration Files resource "aws_rds_…
Overall I think this might be the sanest way to do in-place upgrades with the module as is.
Curious whether we:
- PR this, or
- Give up, split out of using the module, and try upgrading instances one at a time. We already need to blue/green from Aurora MySQL 5.6 to 5.7 as that can’t be done in place. However I’d like to have a plan for future version bumps.
Hey team, we’ve reached a consensus that internally we are going to continue with the RDS module modified per above so we can do in-place minor version upgrades of our Aurora MySQL clusters.
Question I have is whether this makes sense PR’d back to the upstream module or not.
• Because regular RDS and Postgres support major version upgrades, and it’s only Aurora MySQL that (currently) doesn’t, the changes may not make sense.
• I don’t have capacity to test such changes with the other database types. I can only confirm it’s good for Aurora MySQL, and doing the related logic in the module seems overly complex.
but maybe adding something to the docs would be a good idea?
@Andriy Knysh (Cloud Posse) can you evaluate this when you have a chance? No rush.
Thanks everyone. It’s only a small diff and we’ll still be rebasing on upstream and contributing back as we go.
thanks for understanding… 0.13 is taking all our spare time right now
You’re welcome. Very grateful to you and team with all the modules. Cheers
2020-08-03
Hello everyone, I am working on a multi availability zones terraform template on aws. I am fairly new to terraform. Can anyone help me with it please? Any advice or sample template to start with my project?
what AWS resources are you terraforming?
Hello @Andriy Knysh (Cloud Posse), Sorry for the late reply! EC2 Instances resource
take a look at these modules, they all related to EC2 https://github.com/cloudposse?q=ec2&type=&language=
each module has a working example in the examples/complete folder https://github.com/cloudposse/terraform-aws-ec2-instance/tree/master/examples/complete
Terraform Module for providing a general EC2 instance provisioned by Ansible - cloudposse/terraform-aws-ec2-instance
Terraform Module for provisioning multiple general purpose EC2 hosts for stateful applications. - cloudposse/terraform-aws-ec2-instance-group
Thanks @Andriy Knysh (Cloud Posse) Very helpful!
Anyone using AWS SSO and configuring it via Terraform? I don’t believe there are resources to do so from a quick google search, but just want to confirm. Really dig AWS SSO, but suggesting it to a client without having Terraform support is making me hesitant as I’m trying to get everything for them onto IaC.
i don’t think there is yet much in the way of an api for aws sso, for terraform to interact with… https://docs.aws.amazon.com/singlesignon/latest/PortalAPIReference/API_GetRoleCredentials.html
Returns the STS short-term credentials for a given role name that is assigned to the user.
this issue appears to discuss the same problem… https://github.com/terraform-providers/terraform-provider-aws/issues/13755
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
i had tried to use aws sso previously, but found permission sets far too limiting, compared to managing policies on roles directly… posted a couple times to the aws sso forum, but haven’t come up with a viable approach yet…
• https://forums.aws.amazon.com/thread.jspa?threadID=312303&tstart=0
• https://forums.aws.amazon.com/thread.jspa?threadID=282793&tstart=0
@loren Awesome — you confirmed my findings / fear. Thanks man!
2020-08-04
Does anyone know what properties I can use in aws_cloudwatch_event_target.input when used with an ECS target? I’m trying to get "PropagateTags": "TASK_DEFINITION" to work. The example shows only the usage of containerOverrides.
I see this is NOT yet possible: https://github.com/aws/containers-roadmap/issues/89
Tell us about your request Support for tagging a task started through CloudWatch Events. Which service(s) is this request for? Fargate, ECS Tell us about the problem you're trying to solve. Wha…
Hi, facing an issue with dynamic values in CodeDeploy appspec.
With CodeBuild, you can set environment variables from Terraform:
resource "aws_codebuild_project" "myapp" {
// ...
environment_variable {
name = "MY_VARIABLE"
value = var.my_variable
}
// ...
}
Which you can conveniently reference in a buildspec.yml
phases:
  build:
    commands:
      - echo $MY_VARIABLE
However with CodeDeploy, you can’t set environment variables from Terraform, or at least, I have not found such an argument in the codedeploy_app, codedeploy_deployment_config, and codedeploy_deployment_group resources.
So, I’m having to manually sync values between my TF file and appspec.yml. For example, I’m using ECS, so appspec.yml wants:
• TaskDefinition*
• ContainerName*
• Port*
• PlatformVersion
• NetworkConfiguration: { Subnets, SecurityGroups, AssignPublicIp }
These are either TF variables or generated at TF runtime. It would be great if I could inject them into the appspec / CodeDeploy runtime environment and reference them as variables, but again it doesn’t seem possible. What might be a good workaround?
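One possible workaround (a sketch, not from the thread): render appspec.yml from Terraform with templatefile(), so the TF-known values are baked in at apply time. The template path, resource references, and values here are all hypothetical:

# Hypothetical sketch: render appspec.yaml with values known to Terraform.
# The template references ${task_definition_arn}, ${container_name}, etc.
resource "local_file" "appspec" {
  filename = "${path.module}/rendered/appspec.yaml"
  content = templatefile("${path.module}/templates/appspec.yaml.tpl", {
    task_definition_arn = aws_ecs_task_definition.app.arn # assumption: your task def resource
    container_name      = "app"
    container_port      = 8080
    subnets             = var.subnets
    security_groups     = var.security_groups
  })
}

The rendered file would then have to be shipped into the deploy bundle (e.g. the S3 revision) outside Terraform.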
Hi! I am using terraform 0.12 with workspaces and modules and everything WAS fine. I had prod and qa as workspaces, mainly for “namespacing” in the state and to use as a variable around the code. Everything WAS fine because I had the exact same components with light variations that could easily be configured in their respective .tfvars files.
Now I need to add a new prod environment in europe and it won’t have all the things deployed over there, so my setup is kinda screwed
I was thinking moving to a structure such as:
.
├── environments
│   ├── prod
│   │   ├── america
│   │   │   ├── main.tf
│   │   │   └── prod-america.tfvars
│   │   └── europe
│   │       ├── main.tf
│   │       └── prod-europe.tfvars
│   ├── qa
│   │   ├── main.tf
│   │   └── qa.tfvars
│   └── staging
│       ├── america
│       │   ├── main.tf
│       │   └── staging-america.tfvars
│       └── europe
│           ├── main.tf
│           └── staging-europe.tfvars
└── modules
    ├── dns
    │   └── main.tf
    ├── gcp
    │   ├── gke.tf
    │   ├── main.tf
    │   └── network.tf
    └── releases
        ├── product-a-infra
        │   └── main.tf
        ├── product-b-infra
        │   └── main.tf
        └── shared-infra
            └── main.tf
This would allow me to reference the modules I want in my various environments main.tf…
What would you guys do in this case ?
Also, not sure the workspaces still make sense, since each environment’s main.tf will need to define its own terraform config block and will thus have a separate tfstate
i’ve often seen folks use the exact <region> for the directory name, as it’s more of a unique key in the hierarchy… e.g. eu-central-1, because there are multiple aws regions in europe. of course, you can also get around that with multiple providers in a main.tf…
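For reference, a minimal sketch of that multi-provider approach (the alias and module names are hypothetical):

provider "aws" {
  alias  = "eu_central_1"
  region = "eu-central-1"
}

module "dns_europe" {
  source = "../modules/dns"
  # pass the aliased provider into the module instead of the default one
  providers = {
    aws = aws.eu_central_1
  }
}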
yep ok, makes sense, thanks
im currently creating a similar project.. however i’ve gone down the terragrunt route.. this is how my repo is currently structured:
├── non-prod
│   ├── account.hcl
│   └── amer
│       └── us-west1
│           ├── mgmt
│           │   ├── env.hcl
│           │   ├── networking
│           │   │   ├── firewall_rules
│           │   │   │   └── ingress
│           │   │   │       └── terragrunt.hcl
│           │   │   ├── subnetworks
│           │   │   │   ├── subnet_data
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   ├── subnet_dev
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   ├── subnet_internaldmz
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   ├── subnet_services
│           │   │   │   │   └── terragrunt.hcl
│           │   │   │   └── subnet_stage
│           │   │   │       └── terragrunt.hcl
│           │   │   └── vpc
│           │   │       └── terragrunt.hcl
│           │   └── security
│           └── region.hcl
└── terragrunt.hcl
because of the way vars are passed in, if there is a requirement to go into other regions it will simply be a case of copying the directory and updating a few vars.. as it follows DRY code practices it helps reduce repeated code.
this then pulls in the required modules
this is obviously at the start of the project… 2 days into the first sprint so will grow out over the next week or so with additional resources…
Terraform gurus
Using locals and built-in string functions in Terraform, is it possible to change or alter the label name not the value?
task_logging = [
for k, v in var.task_logging_options : {
name = trimprefix(k,"TASK_LOGGING_")
value = v
}
]
So I can do the following in the module instantiation:
task_logging_options = {
TASK_LOGGING_Name = "es"
// TASK_LOGGING_Host
}
So basically, strip the prefix from each argument to build a logging options object to pass down to the ECS task?
has anyone created an EKS cluster using Terraform Cloud? How does one retrieve the kubeconfig file?
aws eks update-kubeconfig
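e.g. (the cluster name and region are placeholders):

aws eks update-kubeconfig --name my-cluster --region us-west-2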
oh wow, nice
Hey folks, has anyone come across a tool like scenery that supports terraform 0.12+? I’m more interested in cleaning up the output to obtain just the diff..
| grep '#'
Command line utility and JavaScript API for parsing stdout from “terraform plan” and converting it to JSON. - lifeomic/terraform-plan-parser
Oh yea, disregard outdated npm
terraform show -json <PLAN FILE>
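A rough sketch of filtering the JSON plan down to just the changes (assumes jq is installed; the plan file name is a placeholder):

terraform plan -out=plan.tfplan
terraform show -json plan.tfplan \
  | jq '.resource_changes[] | select(.change.actions != ["no-op"]) | {address, actions: .change.actions}'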
2020-08-05
Hey, how are you doing?
I’m using cloudposse/eks-cluster/aws
version 0.24.0
and I’m always experiencing the issue:
Error: configmaps "aws-auth" already exists
on .terraform/modules/eks_cluster/terraform-aws-eks-cluster-0.24.0/auth.tf line 84, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
84: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
Has someone already experienced this config map issue?
I manually solved it with:
terragrunt import --terragrunt-iam-role "arn:..." module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0] kube-system/aws-auth
Then running terragrunt apply
again o/
This should not happen on a clean EKS deployment (from scratch). If you’re upgrading an existing EKS cluster from an older version of the module, I would expect that to happen.
Our automated tests run on this module all the time and do not encounter this error.
Will let you know in case this happens again. But basically it was a clean EKS deployment.
@Andriy Knysh (Cloud Posse) any thoughts on what’s going on?
will deploy this module again in a few to see if will explode
yes I know, 1 sec
@ayr-ton you need to add the same data source as here https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L71
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
and cluster name should be from that data source https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L91
ahh, that’s interesting because I did this yesterday when I was removing the fargate profile
that will eliminate a race condition
and then I didn’t see this error
We’re in sync, nice.
Thanks!
same thing for fargate profile https://github.com/cloudposse/terraform-aws-eks-fargate-profile/blob/master/examples/complete/main.tf#L98
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
I’m running EKS 1.17 right now, and when I use terraform destroy to get rid of a kubernetes_deployment, the replicaset is not deleted in cascade and the pods stay. Anyone else experienced this?
Sounds like a conversation I was having this morning with a pal of mine - he was talking about using null_resources with triggers to handle that, maybe? https://www.terraform.io/docs/provisioners/null_resource.html
The null_resource is a resource that allows you to configure provisioners that are not directly associated with a single existing resource.
But it used to work before ?
Or at least the leftover replicaset was downscaled to zero
Ah - probably not then!
Now the containers stay up
Hello guys. I updated to EKS 1.17, so I think it's pretty much related to that, but it seems terraform is not able to delete my deployments anymore. When I terraform destroy a kubernetes_deploy…
Over the past few weeks, i’ve had to describe pods or namespaces, update the JSON to remove the finalizers, and push that change back. not sure if a) this is asking for trouble at some point, or b) if there’s a better way…or if it’s directly related to your issue, but it’s helped me out quite a few times, recently.
Is there a tf module for aws workspaces in the cloudposse repo? I was not able to find any examples
There are still some features lacking in the terraform aws provider that probably prevent a full IaC module, but you could certainly handle the directory and create workspaces from a list of users
The module could handle creation of vpc, subnets etc, workspace directory and users
I see thx for the info.
Yea there’s some issue tickets in the aws provider tracking new and upcoming workspace features
I think there’s a lack of aws api calls too, so without those there’s nothing the provider can do
Ya, this was the state of it last year when we looked into it. The next best thing we found was this: https://github.com/eeg3/workspaces-portal
Amazon WorkSpaces Self-Service Portal. Contribute to eeg3/workspaces-portal development by creating an account on GitHub.
(CFT)
No apis for creating bundles
2020-08-06
Does anyone manage to view the terraform docs for the AWS provider? It stays blank forever for me: https://registry.terraform.io/providers/hashicorp/aws/latest/docs
loads for me
Loads perfectly for me, try clearing your cookies, as there could be a stale one
Thanks Tom & RB, still cannot figure out the issue. I have tried clearing cookies and tried in private browsers, still no use, and weirdly I can’t even access it using a mobile network. I got this message when looking in Chrome developer tools:
Failed to find a valid digest in the 'integrity' attribute for resource '<https://registry.terraform.io/assets/terraform-registry-3c3897b8880537ab9759d2e91a1a39c5.js>' with computed SHA-256 integrity 'NL3YiUcpnfSMYU99vstLHDhVzYi63JZfABW7NIQVZmQ='. The resource has been blocked.
let me know if i missed anything
• try firefox or another browser
• create a new profile in chrome and try that
if you’re on a vpn, try toggling it
I am not in VPN, will try firefox.
It’s still the same.
can you try a vpn if you have one? if not, you can use free protonvpn. see if that works.
brew cask install protonvpn
if that works, then I’d run a traceroute to the link on and off the vpn and diff them side by side
see where the connection is failing. it’s possible that an ISP has blocked your IP address.
will try on a VPN, and about the IP address block – i wonder how they managed to even block my mobile network.
hey all, just a general Terraform question: I have an EC2 instance that I need to add some userdata to. I want to put a file on the node and run a command. The file is a CA cert:
Error: Invalid expression
on main.tf line 35, in module "eks":
35: pre_userdata = << EOF
Expected the start of an expression, but found an invalid expression token.
<<EOF? i’ve only seen it with no space, or <<-EOF to allow indentation of the heredoc
also, I recommend avoiding all forms of HEREDOC and sticking it in a file and reading that file in.
e.g. I rather edit a shell script that I can run locally during development, rather than editing a shell script inside of terraform that I have to deploy with every change.
definitely find it easier to manage userdata as a file, though if vars need to be passed from terraform, then you’re probably using templatefile() and running that locally probably won’t work anyway
haha, possibly
but still, yeah, set the template values at the top of script, then at least can swap them out easily-ish
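A minimal sketch of that pattern, assuming the eks module input from the error above; the file names are hypothetical:

# Hypothetical: keep the script in files/pre_userdata.sh.tpl and render it,
# injecting the CA cert contents as a template variable.
pre_userdata = templatefile("${path.module}/files/pre_userdata.sh.tpl", {
  ca_cert = file("${path.module}/files/ca.pem")
})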
I am using terraform to create and manage most of my teams repos.
• I want to configure all the managed repos to have a hook integration with Microsoft Teams so pull notifications come through, anyone done this? (see the sketch below)
• The branch protection policy is empty. I want to set it to enforce the branch protection policy IF anything is placed in the branch hooks. Is that possible, or is this just something I’m going to have to do manually after creation of the repo? PR follow-ups are a pain to configure in Teams with GitHub so I was hoping for a simple way to integrate
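For the Teams hook, a sketch using the github provider’s github_repository_webhook resource. The repo map and the Teams URL variable are assumptions (Teams receives GitHub events via an incoming webhook/connector URL):

resource "github_repository_webhook" "teams" {
  # Assumption: github_repository.managed is a for_each map of the managed repos
  for_each   = github_repository.managed
  repository = each.value.name
  events     = ["pull_request"]

  configuration {
    url          = var.teams_webhook_url # assumption: MS Teams incoming webhook URL
    content_type = "json"
  }
}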
2020-08-07
Anyone have experience implementing Azure resource locks in tandem with terraform (i.e. https://www.terraform.io/docs/providers/azurerm/r/management_lock.html)? Specifically I am wondering how folks deal with cases when terraform wants to recreate resources.
Manages a Management Lock which is scoped to a Subscription, Resource Group or Resource.
Is anyone using TF to build auto-scaling groups with capacity-optimized spot instances? I’m not finding a good resource that demonstrates how this can be done.
i hope this helps
resource "aws_emr_instance_group" "task" {
cluster_id = aws_emr_cluster.this.id
instance_count = var.task_instance_count
bid_price = var.bid_price
instance_type = var.task_instance_type
name = "${var.cluster_name}-task-grp"
ebs_config {
size = var.core_volume_size
type = var.core_volume_type
volumes_per_instance = var.volumes_per_instance
}
autoscaling_policy = data.template_file.task_autoscaling_policy.rendered
}
data "template_file" "task_autoscaling_policy" {
template = file("${path.module}/templates/autoscaling_policy.json.tpl")
vars = {
min_capacity = var.task_instance_count_min
max_capacity = var.task_instance_count_max
}
}
done it for emr
I’ll check it out. Thanks!
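For a plain ASG (non-EMR), capacity-optimized spot goes through mixed_instances_policy; a minimal sketch, with the launch template, subnets, and instance types as placeholders:

resource "aws_autoscaling_group" "spot" {
  name                = "capacity-optimized-spot"
  min_size            = 1
  max_size            = 10
  vpc_zone_identifier = var.subnet_ids # placeholder

  mixed_instances_policy {
    instances_distribution {
      on_demand_base_capacity                  = 0
      on_demand_percentage_above_base_capacity = 0
      spot_allocation_strategy                 = "capacity-optimized"
    }

    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.this.id # placeholder
        version            = "$Latest"
      }

      # multiple instance types give the allocation strategy pools to choose from
      override {
        instance_type = "m5.large"
      }
      override {
        instance_type = "m5a.large"
      }
    }
  }
}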
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: RequestError: send request failed caused by: Post https://sts.amazonaws.com/: dial tcp: lookup sts.amazonaws.com on 10.x.x.x:x: write udp 172.31.176.x:54764 >10.x.x.x:53: write: no buffer space available
Does anyone know the cause please
maybe you ran out of space on your laptop?
Yes, some unknown issue that got sorted with a system restart, but not out of space. thanks
what terraform version are you using? according to this it looks like it might be fixed with the latest 0.12
https://github.com/terraform-providers/terraform-provider-aws/issues/4709#issuecomment-453554068
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
Using TF 12, I’d like to define a block for tags that I reuse everywhere, but I’m having a little trouble figuring out how to do this in a way that doesn’t feel hacky.
locals {
tags = {
application = "banana"
}
}
resource "aws_iam_role" "bananas" {
...
tags = local.tags
}
I’ve got a ticket to update my tags to allow for both a standard set of tags as well as a resource-specific set. Goes something like this, using @RB’s local.tags, above:
tags = merge(local.tags,
  { "Name" = "Resource Name" }
)
Has anyone seen this kind of thing before?
on main.tf line 168, in resource “aws_security_group” “lb_adv2_sg”:
168: tags = locals.tags
A managed resource “locals” “tags” has not been declared in the root module.
locals {
tags = {
env = var.environment
owner = "DevOps"
product = "adv2.0"
managed_by = "terraform"
}
}
is defined above (before) the aws_security_group resource in the same file.
This is when running terraform validate
for TF 12. … Ah. I think I added an ‘s’ to local.
Cross promoting this here: https://sweetops.slack.com/archives/CBVK43B6W/p1596820995133400
On a recent #office-hours we talked about opsgenie automation. Today we released :tada: the first version of our module to manage it with terraform; we currently use that to manage most of our opsgenie setup.
https://github.com/cloudposse/terraform-opsgenie-incident-management
See recording: https://cloudposse.wistia.com/medias/9d4ase4qjy
Is there a way we can destroy a specific resource from a workspace in terraform enterprise? When I comment out the block of code, it says the block of code is missing.
You will need to use either terraform taint pointed at the requisite resource (this will mark it for replacement), or terraform destroy targeted at the relevant resource. Use terraform state list to find the correct name of the resource in your state file for deletion or replacement.
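e.g. (the resource address is hypothetical):

terraform state list
terraform taint aws_instance.example            # mark for replacement on the next apply
terraform destroy -target=aws_instance.example  # or destroy just that resource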
2020-08-08
I am using the kubernetes provider’s kubernetes_namespace resource, successfully, but I’m trying to get the LB URL, which the docs for kubernetes_namespace say should output self_url, but I get this error when I try to use it:
Error: Unsupported attribute
on ../../../../../application-stack/helm.tf line 19, in output "k-ns-url2":
19: value = kubernetes_namespace.borrower[0].self_url
This object has no argument, nested block, or exported attribute named
"self_url".
Instead, we’re using a data.kubernetes_service to pull out the elb name, to use that to create R53 CNAMEs. This results in never having a clean plan, because data.kubernetes_service.nginx-ingress has to be read and resource.aws_lb_ssl_negotiation_policy has a dependency on it for the id.
How do I get the ELB from helm.helm_release for nginx-ingress, maybe?
Thanks for any ideas.
I see self_link in the docs, not self_url?
Sorry if I wasn’t clear. Neither self_url nor self_link work. Pycharm doesn’t see any of the documented attributes for auto-completion, either. It feels like I’m using a different kubernetes_namespace, but I don’t think that’s even possible. My kubernetes provider is pinned to “~> 1.12”.
if the attribute isn’t there, all i can think of is that it is null due to a count/for_each issue… the error there looks to be from an output, and outputs do not support count/for_each. so it is not recommended to use indexing of an “optional” resource… instead use the old “join” trick
output = join("", kubernetes_namespace.borrower.*.self_link)
I created an output of kubernetes_namespace.datadog and got this, which shows that self_link is an attribute of the metadata attribute, not a direct attribute of the namespace:
test = {
"id" = "datadog"
"metadata" = [
{
"annotations" = {}
"generate_name" = ""
"generation" = 0
"labels" = {}
"name" = "datadog"
"resource_version" = "637"
"self_link" = "/api/v1/namespaces/datadog"
"uid" = "1c61de6b-9dc3-4ee7-9b71-b85a78a54655"
},
]
}
In any case, self_link doesn’t give me what I wanted.
The question remains: how do I get the URL of the ELB of the EKS cluster’s nginx-ingress?
shoot, was hoping by resolving the tf issue that would get you there. i don’t know EKS well enough to help with that part… maybe ask in #kubernetes ?
I don’t think there’s anyplace to get that URL, other than using a data block, like this:
data "kubernetes_service" "nginx_ingress" {...
and then referencing it like this:
data "aws_elb" "nginx-ingress" {
name = split("-", data.kubernetes_service.nginx_ingress.load_balancer_ingress[0].hostname)[0]
}
The only problem with this way – the way we’ve been doing it – is that TF reads some data blocks even though nothing has actually changed; I recall someone else getting unclean plans from this too.
So, pivoting from finding another way to get the ELB for nginx-ingress running on an EKS cluster, to finding out how to do this, so that you get clean plans, rather than the following, is the goal:
# module.stack_install.data.aws_elb.nginx-ingress will be read during apply
# (config refers to values not yet known)
<= data "aws_elb" "nginx-ingress" {
+ access_logs = (known after apply)
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ connection_draining = (known after apply)
+ connection_draining_timeout = (known after apply)
+ cross_zone_load_balancing = (known after apply)
+ dns_name = (known after apply)
+ health_check = (known after apply)
+ id = (known after apply)
+ idle_timeout = (known after apply)
+ instances = (known after apply)
+ internal = (known after apply)
+ listener = (known after apply)
+ name = (known after apply)
+ security_groups = (known after apply)
+ source_security_group = (known after apply)
+ source_security_group_id = (known after apply)
+ subnets = (known after apply)
+ tags = (known after apply)
+ zone_id = (known after apply)
}
# module.stack_install.data.kubernetes_service.nginx_ingress will be read during apply
# (config refers to values not yet known)
<= data "kubernetes_service" "nginx_ingress" {
+ id = (known after apply)
+ load_balancer_ingress = (known after apply)
+ spec = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "nginx-ingress-singleton-controller"
+ namespace = "nginx-ingress"
+ resource_version = (known after apply)
+ self_link = (known after apply)
+ uid = (known after apply)
}
}
# module.stack_install.aws_lb_ssl_negotiation_policy.external_tls must be replaced
-/+ resource "aws_lb_ssl_negotiation_policy" "external_tls" {
~ id = "abcdabcd...:443:external-tls" -> (known after apply)
lb_port = 443
~ load_balancer = "abcdabcd..." -> (known after apply) # forces replacement
name = "external-tls"
....
# module.stack_install.aws_route53_record.borrower_api_a_record will be updated in-place
~ resource "aws_route53_record" "borrower_api_a_record" {
fqdn = "borrower-api-demo-brace.brace.ai"
id = "Z1WJ47LG7V031G_borrower-api-demo-brace.brace.ai_A"
name = "borrower-api-demo-brace.brace.ai"
records = []
ttl = 0
type = "A"
zone_id = "Z1WJ47LG7V031G"
- alias {
- evaluate_target_health = false -> null
- name = "ab73319dbbd184637aed2ae9b56b85a6-1595539325.us-east-2.elb.amazonaws.com" -> null
- zone_id = "Z3AADJGX6KTTL2" -> null
}
+ alias {
+ evaluate_target_health = false
+ name = (known after apply)
+ zone_id = (known after apply)
}
}
# module.stack_install.aws_route53_record.servicer_api_a_record will be updated in-place
~ resource "aws_route53_record" "servicer_api_a_record" {
fqdn = "servicer-api-demo-brace.brace.ai"
id = "Z1WJ47LG7V031G_servicer-api-demo-brace.brace.ai_A"
name = "servicer-api-demo-brace.brace.ai"
records = []
ttl = 0
type = "A"
zone_id = "XYZ" - alias {
- evaluate_target_health = false -> null
- name = "ab73319dbbd184637aed2ae9b56b85a6-1595539325.us-east-2.elb.amazonaws.com" -> null
- zone_id = "Z3AADJGX6KTTL2" -> null
}
+ alias {
+ evaluate_target_health = false
+ name = (known after apply)
+ zone_id = (known after apply)
}
}
Understanding that it has to run before it gets the info for the load balancer, how do I set this up to get a clean plan? Every time, it recreates the aws_route53 records for those API servers and the aws_lb_ssl_negotiation_policy. I need this to produce a clean plan.
i presume the elb is created from a different terraform config? can you just feed in the elb name as a variable, instead of a data source?
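i.e., something like this sketch, where the variable plumbing is hypothetical:

variable "nginx_ingress_elb_name" {
  type        = string
  description = "Name of the ELB fronting nginx-ingress, passed in instead of discovered via data sources"
}

resource "aws_lb_ssl_negotiation_policy" "external_tls" {
  name          = "external-tls"
  load_balancer = var.nginx_ingress_elb_name
  lb_port       = 443
}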
2020-08-09
2020-08-10
Hi there! I’m trying to use https://github.com/cloudposse/terraform-aws-cicd and the README example seems really outdated. So much that it becomes a pain to try and use the project out-of-the-box.
The example:
- says app but it should be elastic_beanstalk_application_name
- says env but it should be elastic_beanstalk_environment_name
- says aws_region but it should be region
Not sure if there are any other misses, but maybe it would be nice that someone who knows the project can revise this
Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd
Hi @Almog Cohen if you’re looking at it in detail with the intention of using it, it’s a good opportunity for a PR!
Hi guys, I am very new to terraform, I need your help.
I have been asked to conduct an audit and tighten a considerable list of IAM Roles. The problem is as follows:
The previous DevOps guys manually added the AmazonEC2RoleforSSM policy to the IAM Roles (the change is not written in TF).
SOC Auditors consider that AmazonEC2RoleforSSM is too wide open, and it needs to be replaced by the following managed policies instead: SSMMaintenanceWindowRole and SSMManagedInstanceCore.
I don’t want to do this task manually, I would like to automate it using terraform. Could you please give me some guidance on how to proceed?
I already have the IAM roles list in locals { roles_list = list(role1, role2, role3) }, but I would like to:
1 - Check if AmazonEC2RoleforSSM exists in that role.
2 - If the policy exists, remove it and add SSMManagedInstanceCore and SSMMaintenanceWindowRole.
3 - If AmazonEC2RoleforSSM doesn’t exist, add SSMManagedInstanceCore and SSMMaintenanceWindowRole.
I am not quite sure how to deal with loops. Thanks!
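If the end state is simply “every role in the list has the two SSM policies attached”, here is a sketch of the attachment side (role names are placeholders, and note the AWS-managed policy is actually named AmazonSSMManagedInstanceCore). As discussed below, Terraform can attach the new policies, but it cannot detach AmazonEC2RoleforSSM unless the existing attachment is imported first:

locals {
  roles_list = ["role1", "role2", "role3"] # placeholders: your existing role names
  ssm_policy_arns = [
    "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
    "arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole",
  ]
}

# one attachment per (role, policy) pair
resource "aws_iam_role_policy_attachment" "ssm" {
  for_each = {
    for pair in setproduct(local.roles_list, local.ssm_policy_arns) :
    "${pair[0]}:${pair[1]}" => pair
  }
  role       = each.value[0]
  policy_arn = each.value[1]
}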
If the infrastructure is not already in terraform, then using terraform to “patch” things in AWS won’t really work well. E.g. terraform cannot remove something it didn’t provision.
Should I import the managed policy first in each IAM role?
Yea you would have to first import what’s there into state using terraform import and then change it
That process should be run carefully and you should run a plan until it’s a noop and then make your changes
Yep that’s def the easier option
cloudposse has a module for creating roles https://github.com/cloudposse/terraform-aws-iam-role
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
thanks!
Look at Cloudsplaining to help fix this.
Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report. - salesforce/cloudsplaining
then either bridgecrew/AirIAM or duo-labs/parliament to fix your IAM configs going forward
cool never heard of parliament, I’ma check that out
err the terraform aws docs seem to be down https://registry.terraform.io/providers/hashicorp/aws/latest/docs
this comes in handy https://kapeli.com/dash
Dash is an API Documentation Browser and Code Snippet Manager. Dash searches offline documentation of 200+ APIs and stores snippets of code. You can also generate your own documentation sets.
terraform docset does exist even tho it’s not listed on their website
nice
well technically they aren’t down, just a lot harder to read
that’s a paddlin
v0.13.0 (August 10, 2020)
This is a list of changes relative to Terraform v0.12.29. To see the incremental changelogs for the v0.13.0 prereleases, see the v0.13.0-rc1 changelog.
This section contains details about various changes in the v0.13 major release. If you are upgrading from Terraform v0.12, we recommend first referring to the upgrade guide: https://www.terraform.io/upgrade-guides/0-13.html
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
think i’ll give that a few weeks, kind of hard to roll back as terraform upgrades pollute your remote state
I feel that. I’ve been using the beta / rc versions on a few projects pointed at test regions that deploy in parallel with production. Haven’t hit any stumbling blocks yet, but bake time is always good.
I gave it a try and was getting some really weird errors and rolled back haha
2020-08-11
Hi guys, I have a use case like this: I want to create a module that receives a custom variable of type "list", e.g.:
modules "example" {
services = [ "a", "b", "c"]
.
.
.
}
As after that variable is input, a matching resource for each element in the list will be created.
E.g.: if the list contains ["a", "b"] then only the matching resources "a" and "b" are created. If the list is empty, then no resources are created. Additionally, the matching code will set a local value as I define:
locals {
  # can locals loop over the services list, pick up each element, and assign the matching value?
  a = 1 if "a" exists in the list, otherwise 0
  b = ...
  c = ...
}
resource "aws_resoure_type" "service a" {
count = local.a
.
.
.
}
resource "aws_resoure_type" "service b" {
count = local.b
.
.
.
}
I know terraform is powerful but I need to ask you guys if that logic can be done. Thanks a lot
locals {
services = ["a", "b", "c"]
}
resource "aws_instance" "this" {
for_each = toset(local.services)
...
}
This will create
aws_instance.this["a"]
aws_instance.this["b"]
aws_instance.this["c"]
Thanks for your reply. Unfortunately, it’s not the case I want. These aren’t replicated resources; each resource is different depending on the case, like IAM policies, where each service has its own. And the condition to trigger creation follows what I described.
maybe you are using the wrong type of variable, try to use a map (hash), not a list
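A sketch of the conditional-count logic from the question (aws_resource_type is the question’s placeholder, not a real resource); note contains() also works directly on the list variable:

variable "services" {
  type    = list(string)
  default = []
}

locals {
  # 1 if the service name appears in the list, else 0
  a = contains(var.services, "a") ? 1 : 0
  b = contains(var.services, "b") ? 1 : 0
}

resource "aws_resource_type" "service_a" {
  count = local.a
  # ...
}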
any chance for a review on https://github.com/cloudposse/terraform-aws-ecr/pull/61 ?
Signed-off-by: David Karlsen [email protected] what Upgrade terraform-null-label to TF 0.13 compat version why Support TF 0.13 references #51
Checking it out now.
@David J. M. Karlsen Shipped! https://github.com/cloudposse/terraform-aws-ecr/releases/tag/0.23.0
awesome - thanks!
Seems we need https://github.com/cloudposse/terraform-aws-iam-user/issues/6 too
When using either master or the terraform-0.12 branches, Terraform v0.12.20 complains about the terraform-null-label module, as the present module references terraform-null-label module.git?ref=0.1…
is this module still maintained?
I guess it stopped at https://github.com/cloudposse/terraform-aws-iam-user/pull/3
Updated to be compatible with Terraform v0.12
Ah yeah @David J. M. Karlsen… that module is a good bit outdated. There’s a 12.x branch but no tests, which is why it isn’t merged. We should get that back into the fold, as I’ve definitely used the terraform-aws-modules equivalent in the past instead of using this. To provide consistency to the community and my own future tf codebases I’d like to fix that.
Seems like it’d be an easy module to add tests to, honestly. Unless you’re interested in tackling it, I can add that to my queue for when I have a spare hour or so.
I switched to vanilla aws_* resources in the meantime, which seems to have me covered; I need some tweaking anyways. Thanks for responding though!
Terraform Cloud Outage Aug 11, 18:40 UTC Investigating - We are currently experiencing issues with Terraform Cloud. The UI is down and terraform runs and plans will not complete at this time. We are investigating the issue.
HashiCorp Services’s Status Page - Terraform Cloud Outage.
how many outages has it been this year?
6 outages since june 1
Hello, I’m using https://github.com/cloudposse/terraform-aws-dynamic-subnets and I set enabled = false and I’m getting this error:
Error: Error in function call
on .terraform/modules/haystack.dynamic_subnets/nat-gateway.tf line 42, in resource "aws_nat_gateway" "default":
42: subnet_id = element(aws_subnet.public.*.id, count.index)
|----------------
| aws_subnet.public is empty tuple
| count.index is 0
Call to function "element" failed: cannot use element function with an empty
list.
I think someone else had this issue before?
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
I think I saw this in other modules and we wrapped element in coalesce?
this is not going to work
resource "aws_nat_gateway" "default" {
count = local.nat_gateways_count
even if the module is set to false it will try to create the resource
PR comming
Terraform Cloud Outage Aug 11, 18:48 UTC Monitoring - We have identified the issue and plans and applies are currently succeeding, the UI is back up. We are continuing to monitor.Aug 11, 18:40 UTC Investigating - We are currently experiencing issues with Terraform Cloud. The UI is down and terraform runs and plans will not complete at this time. We are investigating the issue.
Terraform Cloud Outage Aug 11, 19:05 UTC Resolved - Terraform Cloud is operational again. If a run failed during this outage, please re-queue it. If you have problems queueing runs, please reach out to support.Aug 11, 18:48 UTC Monitoring - We have identified the issue and plans and applies are currently succeeding, the UI is back up. We are continuing to monitor.Aug 11, 18:40 UTC Investigating - We are currently experiencing issues with Terraform Cloud. The UI is down and terraform runs and plans will not complete…
Hi All, got sort of an opinion question here. Does anyone have a preference on using remote state for looking up resources, or using data sources to look them up at runtime? Been trying to research, and not having much luck finding which is recommended or preferred
I assume it might be related to a size thing, that is, the size of terraform repos, or the amount of resources under management
I don’t think there’s a black/white rule on this. Off the cuff, here’s my recommendation:
• Use SSM for sharing values across toolchains (e.g. #terraform and helmfile) or where you need to access the values outside of terraform
• Use remote state between terraform projects. E.g. everything you provision that you are in control over.
• Use data sources for things which you might not have control over but depend on.
Thanks @Erik Osterman (Cloud Posse), this pretty much aligns with at least my personal opinion. Though i never thought about the first bullet, that is a really cool idea
a side benefit of using SSM is the permissions can be controlled with more granularity than tfstate, when needing to control access to specific values
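e.g., a sketch of the SSM pattern with a hypothetical parameter name, where one project writes and another reads:

# writer project
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/shared/network/vpc_id" # hypothetical naming convention
  type  = "String"
  value = aws_vpc.main.id
}

# reader project (or another toolchain entirely)
data "aws_ssm_parameter" "vpc_id" {
  name = "/shared/network/vpc_id"
}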
Hi, has anyone successfully used https://github.com/cloudposse/terraform-aws-ecs-alb-service-task with ECS+EC2 autoscaling and capacity providers?
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
I’m literally poc’ing them this week
I’ve had success with combo of cpu/mem/request count for the container metrics, and leveraging cps for the instance scaling
Granted all against a crappy node/express app I mocked up with faker
And I just crush it with artillery
I just started playing with it but it complains that I have no instances in my capacity group
New or existing cluster?
It doesn’t play well with an existing cluster, but after cycling instances in the asg, it worked
I deleted the asg and service, I will play with it a bit more
So it looks like you can NOT create a capacity provider on a service
oh, apologies i misunderstood, no the cp is on the cluster
you need to create it and attach it to the cluster and then attach it to the ecs service
that tracks the capacity cpu/mem of the instances in the asg
no worries, the console showed some weird empty selector and then I was wondering
so I just added it in TF to the cluster and it worked
funny that TF lets you do it and shows broken in the console
well the aws API lets you do it
2020-08-12
Problems setting workspace execution mode in Terraform Cloud Aug 12, 10:35 UTC Investigating - We are currently investigating an issue where customers may be unable to set workspace execution mode in Terraform Cloud via the web interface. The API for this feature is still functional and can be used while we investigate. Please contact the support team if you need further assistance with this feature.
HashiCorp Services’s Status Page - Problems setting workspace execution mode in Terraform Cloud.
How does one create an Azure Service Principal with sdk auth?
Problems setting workspace execution mode in Terraform Cloud Aug 12, 14:42 UTC Resolved - We have deployed a fix for this issue. Customers should now be able to set workspace execution modes successfully in the web interface as well as via the API.Aug 12, 10:35 UTC Investigating - We are currently investigating an issue where customers may be unable to set workspace execution mode in Terraform Cloud via the web interface. The API for this feature is still functional and can be used while we investigate. Please contact the support team if you need…
HashiCorp Services’s Status Page - Problems setting workspace execution mode in Terraform Cloud.
Hey for folks using the CP key-pair module (https://github.com/cloudposse/terraform-aws-key-pair) — How do you manage not checking that the pem file into git? Removing the pem file from the location the module writes it to obviously causes the module to want to recreate that file which I don’t want. Any tips to deal with that?
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
could you put the pem in an s3 bucket and use a data source to bring it down in order to feed it into the module ?
Yeah definitely. Sadly I was just trying to make it work so that I didn’t need to pre-generate it or set up something like that.
I tried:
# If keypair's public key exists then no need to generate the key again.
generate_ssh_key = fileexists("../pub_keys/${var.project}-${var.environment}-keypair.pub") ? false : true
But then it ends up deleting the pub key on the 2nd apply which I don’t want.
Going to put it down for now unless somebody shouts in here: “Here’s the way to do it!”
Parameter Store
we use that for keys, licenses, etc
I lol’d at that dude, well done.
I created and documented a manual process for now. My team can deal.
Use our SSM module instead
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair
Unfortunately, this is not the popular module but it’s the superior one.
Hahah I’m the last contributor
Didn’t even know about this. Did the most recent release as part of the ChatOps mass-update.
next time use my username
Ya, @Matt Gowie is now probably the #1 contributor in terms of PRs
haha
I need more dark green in my github activity graph
Yeah, I don’t mind my “74 contributions on Aug. 5th 2020” day.
Anyway — Thanks for pointing that out @Erik Osterman (Cloud Posse). Will definitely check that!
I do not care
Haha @jose.amengual you can do the microplane update for bumping the version of terraform-null-label across all the repos. That’ll leave you with some nice dark green in your graph!
HAHAHAHAHA
no worries, if someone asks who this @Matt Gowie is, I will say he is a machine bot user
Look, I even documented it for you!
How do I slice a map, given an array of keys? I have a map containing monitor definitions, but I only need some of them for each invocation of the module. I pass in the keys to the monitor defs that I want, but I have not been able to create a map slice containing just the keys that were passed in.
So, it looks something like this, where log_errors_high_volume and oom are keys that get passed in:
locals {
monitor_defs = {
log_errors_high_volume = {
type = "log alert",
recipients = local.recipients,
query = "....
},
oom = {
type = "log alert",
recipients = local.recipients,
query = "....
If I only passed in oom, I’d need a map with one element. I intend to use the resulting map as the target for a for_each.
Ok. Got it. I could have just referenced the larger data structure everywhere in the module call, using each.value as the key, but this worked nicely and got rid of a lot of text.
Given a map of definitions (local.monitor_defs), keyed on name, and a list of keys (local.monitor_def_keys), the following gives just the entries from the map that correspond to the specified keys:
md = {
for key, value in local.monitor_defs: key => value if contains(local.monitor_def_keys, key)
}
@Eric Berg, what version of TF is that?
that was 0.12.29, but we’ve upgraded most things to 0.13.2.
Can you link a reference you used to figure out the above? I can’t quite tell how you solved the issue – what is the structure of monitor_def_keys ?
I don’t have the refs I used to figure this out anymore, @Jaeson. monitor_def_keys is just a list of strings, representing keys in the monitor_defs map. Here, I’m returning (key, value) for each entry in monitor_defs for which there is an entry in monitor_def_keys.
Terraform Cloud Plan creates 3 IAM Service Account Users & sets the permissions inline for a group called “infra-service-accounts”…..
Would you:
• One Workspace Per Account/Stage: Create a workspace, ie separate terraform job in the cloud, for each account so each runs independently and looks up the credentials based on a variable called “account_alias”.
• One Single Plan With Aliased Providers: Create a single plan that uses aliased providers and just repeat the code/module call, credential lookup, and all, 8 times in a single file. I’ve tended to keep things separate, so each plan reports its success or failure back, but wondering if the provider-alias-all-in-one plan is more common. Fewer workspaces to review as well.
Hello, I have a terraform repo that has grown beyond any usefulness and I need to separate it into different repos and hopefully different state files. What would be the best way to import the resources into the new states? By manually doing terraform import or something else?
you can move resources between state files
terraform state mv -state-out=PATH
Path to the destination state file to write to. If this
isn’t specified, the source state file will be used. This
can be a new or existing path
best to lock and disconnect your remote state before undertaking this task
Ok so I will have to go from S3 to local and then back into another S3 backend I guess
Ahhh but I need to move/import certain resources
Usage: terraform state mv [options] SOURCE DESTINATION
This command will move an item matched by the address given to the
destination address. This command can also move to a destination address
in a completely different state file.
This can be used for simple resource renaming, moving items to and from
a module, moving entire modules, and more. And because this command can also
move data to a completely new state, it can also be used for refactoring
one configuration into multiple separately managed Terraform configurations.
This command will output a backup copy of the state prior to saving any
changes. The backup cannot be disabled. Due to the destructive nature
of this command, backups are required.
If you're moving an item to a different state file, a backup will be created
for each state file.
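For reference, a hypothetical move (the module address and path are placeholders): run from the old repo, with its S3 backend as the source, writing the moved resources to a local state file that you then migrate into the new repo’s backend via terraform init:

terraform state mv -state-out=../new-repo/terraform.tfstate module.foo module.foo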
ahhhhhhhh cool
ok awesome , thanks
2020-08-13
That’s pretty cool. I didn’t know this. I wonder if you can simply provide the same s3 key argument in the s3 backend from another module. I also wonder how this works with workspaces
We’re a little paranoid in my office about giving terraform roles to an external service like Terraform Cloud, or if we had a GitHub Action that ran terraform on some trigger, to the point that we’ve resisted using any of them. … how do you guys scope/limit the role for a terraform ‘actor’ given that its potentially touching a lot of different types of resources and API Actions?
Learn how policies can be used to set the permissions boundary for a user or role.
plus proper use of the Resource:
block
I find those so confusing … it’s an IAM policy that governs another IAM policy that governs a role/user, right?
permission boundaries set the absolute maximum limit of what the entity is able to do. Even if someone screws up and gives them every permission under the sun the permission boundary will still block anything that isn’t allowed in it
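A sketch of how a boundary attaches to a terraform runner role; the policy documents are assumed to be defined elsewhere:

resource "aws_iam_policy" "boundary" {
  name   = "terraform-boundary"
  policy = data.aws_iam_policy_document.boundary.json # assumption: your boundary doc
}

resource "aws_iam_role" "terraform" {
  name                 = "terraform-runner"
  assume_role_policy   = data.aws_iam_policy_document.assume.json # assumption: trust policy
  permissions_boundary = aws_iam_policy.boundary.arn
}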
They’re a big pain IMO, but yeah permission boundaries is the way I’ve dealt with this in the past as well.
Ok thanks for confirming I find this aspect of IAM very confusing but my junior guy says he’s got a good grasp of it
Also use branch protection and required reviews, and when running in a pull request use a credential that is limited to read-only. plan
will work, but not anything that actually makes changes… Gives you the ability to inspect and approve, before doing anything crazy
and on prem runners are now supported
(announced yesterday)
Whoah
Hi with this cloudfront S3 module https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn it recommends creating the ACM certificate using the cli. Why isn’t the certificate terraformed? It looks like it’s possible: https://github.com/cloudposse/terraform-root-modules/blob/master/aws/acm-cloudfront/main.tf
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
iirc, due to cloudfront limitations, the cert has to be created in us-east-1. If your provider is defaulted to another region, you have to create another provider and use that one specifically for us-east-1, and I don’t think cloudposse repos generally pin providers?
i created mine outside of the module using a targeted apply, then verified the record, then applied the module, which takes the ACM cert as an input argument
if you reuse an acm cert, you can use a data source instead
This works if you want to do it as part of one pipeline, you can just depend on both of the validations:
resource "aws_route53_zone" "delegate" {
name = var.domain
}
resource "aws_acm_certificate" "cert" {
provider = aws.us-east-1 # Forced to use us-east-1 due to cloudfront limitations
domain_name = var.domain
subject_alternative_names = ["*.${var.domain}"]
validation_method = "DNS"
lifecycle {
create_before_destroy = true
}
}
resource "aws_route53_record" "cert_validation" {
# It took me TWO HOURS to figure out that Terraform converted this from a list to a set in 0.13, grrrr!
name = element(tolist(aws_acm_certificate.cert.domain_validation_options), 0).resource_record_name
type = element(tolist(aws_acm_certificate.cert.domain_validation_options), 0).resource_record_type
zone_id = aws_route53_zone.delegate.zone_id
records = [element(tolist(aws_acm_certificate.cert.domain_validation_options), 0).resource_record_value]
ttl = 60
}
resource "aws_acm_certificate_validation" "cert" {
count = 0
provider = aws.us-east-1
certificate_arn = aws_acm_certificate.cert.arn
validation_record_fqdns = [aws_route53_record.cert_validation.fqdn]
}
(note specifically the provider us-east-1 alias)
The “grrrr” comment is
Haha, came across that one today, can you tell?
Omg I ran into same issue probably took me about the same amount of time to figure out
I finally looked over the change log and found it
It makes like, literally no difference operationally, it’s JUST there to break random tf files
My coworker said to pin aws version at 2 but I was determined to upgrade to 3
ah i was not aware of the aws_acm_certificate_validation resource. very cool
and good to know about the aws 3.x provider with that breaking change. we’re not currently pinning but will be on the lookout for that same issue
I ended up figuring it out by looking at the state file, and nestled in between a bunch of lists was a lonely set…
I’m still pessimistically pinning the AWS provider to avoid a 3.0 surprise.
iirc, changing to a set was necessary because the aws api returns it as a set, which meant the order was constantly changing. order matters in a list, so when using multiple SANs and dealing with multiple validation records as a result, the constantly changing order would cause perpetual diffs in a plan
Ah, that’s a fairly good reason, I rescind my comment about it being pointless then
And also, a record with the domain and its wildcard subdomain use the same validation record… With a list, there would be a duplicate entry, but with a set all the duplicates get removed automatically
Ran into some of these “fun” problems working this module… https://www.github.com/plus3it/terraform-aws-tardigrade-acm/tree/master/main.tf
Contribute to plus3it/terraform-aws-tardigrade-acm development by creating an account on GitHub.
here’s the reference to the change in the upgrade guide, https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade#domain_validation_options-changed-from-list-to-set
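The pattern the upgrade guide recommends is roughly the following, adapted to the resource names in the snippet above; it for_eachs over the set (needs TF 0.12.6+), which also copes with the dedup behavior:

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  zone_id = aws_route53_zone.delegate.zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 60
}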
Thanks for the advice . Got it all Terraformed now (with Terragrunt). Also noticed Erik has an issue to update the docs to use their ACM module: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/issues/26
what Use terraform-aws-acm-request-certificate instead of aws cli why 100% native terraform, no cli necessary references https://github.com/cloudposse/terraform-aws-acm-request-certificate
i’m working on updating our test infrastructure for 0.13. not much to it if we just want to say, “screw it, we’re going to 0.13”. i’m definitely leaning towards that, but it means that support for 0.12 will be only for hotfixes.
I guess you could do something like run the test twice in parallel stages, once for each version?
But at some point they diverge. What happens if it passes 0.13 and fails 0.12?
If it’s a contributor, do we push back and ask them to fix it? Make it work for both? What if it can’t work for both?
Well that’s the decision, right? To use tf 0.13 features, or not
The ability to do for_each and count on modules is likely going to simplify and reduce a lot of code in the cloudposse repos. I’d really prefer not to disallow 0.13
Personally I’d just cutoff 0.12 and go all in on 0.13
The cloudposse repos all need to work on 0.13 before that’s possible though
I’d enforce a minimum version known to be required based on features used, e.g. >= not ~=. Enforcing the upper bound is too much, too hard
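For core that might look like this (a floor-only constraint; the version number is just illustrative):
terraform {
  required_version = ">= 0.12.26"
}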
The pinned versions for commonly used modules are a big issue tbh. For example, almost every single repo is currently broken (on 0.13) due to being pinned at <=0.16.0 of null-label, which is pinned to tf 0.12
@loren I think you’re right. mea culpa on this one. At least for the terraform core version, I think we should only use a minimum version because upgrading across minor versions is basically impossible in the current setup. the other challenge is our tests for examples/complete usually pull in many other modules to bring up a stack. so even simple modules with only 1 dependency can have many dependencies in the examples.
@Makeshift (Connor Bell) so we’re using git ref sources, so it’s technically ==
I think it’s time to switch our modules away from git ref sources too and use module registry sources
@Andriy Knysh (Cloud Posse)
@Matt Gowie
i’m still wary of moving away from git refs, just because they are more portable and easier to override… maybe with the new registry features i’ll be able to set aside that concern, but i’d need to get some experience with it first
Hm, how do releases work on the registry? Do they just copy directly from github releases?
you register the module with the tf registry, and then any git tags are published as a version in the registry
all automatic at that point, no further interaction required
I guess the only difference is the syntax looks slightly cleaner then?
you also can use all the version constraints on the version field…
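e.g. a registry source plus a constraint instead of a pinned git ref (module and version chosen just for illustration):
module "label" {
  source  = "cloudposse/label/null"
  version = "~> 0.19"

  name = "example"
}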
Modules allow multiple resources to be grouped together and encapsulated.
Something to keep in mind is that large-reaching changes, e.g. via microplane, may be (slightly) more difficult using that method, since a multiline sed find/replace is slightly more of a pain in the backside
Huh, switching to the terraform registry would be interesting. I could see the benefit for hurdles like this where git refs are causing us to do manual updates, and using the module version input would provide us a cleaner way to say “use any upcoming release”. But that might mean that our always increasing minor version could break module dependencies out in the wild pretty easily. Or are we thinking we would want to only allow patch version increases for module deps?
@Erik Osterman (Cloud Posse) I wonder if we would want to do a test run or two with a couple popular modules to see how that would turn out?
we have hundreds of PRs we gotta open and merge to bump versions everywhere.
This change will forcefully require 0.13 for merge to master - so I don’t want to merge it until we decide how to proceed with 0.12 support.
https://github.com/cloudposse/actions/pull/42/files
GitHub Use terraform 0.13 by osterman · Pull Request #42 · cloudposse/actions what Use terraform 0.13 for tests why Latest release
Interesting terraform module generator: https://github.com/sudokar/generator-tf-module
Very similar to the Cloud Posse example module: https://github.com/cloudposse/terraform-example-module
Project scaffolding for Terraform. Contribute to sudokar/generator-tf-module development by creating an account on GitHub.
Example Terraform Module Scaffolding. Contribute to cloudposse/terraform-example-module development by creating an account on GitHub.
– TERRAFORM 0.13 –
Here’s my plan for tonight.
- Push a 0.12/master branch up to all modules, cut from current master
- Update test automation to use 0.13 for PRs against master
- Update test automation to use 0.12 for PRs against 0.12/master
During this time, I’m going to change the default branch of cloudposse/actions to my development branch and chatops will likely be broken for a few hours. If I get that working, then I’ll merge that and restore the default branch. Then we’re set to test the onslaught of PRs.
Test infra has been updated. Unfortunately, the scope of this change is only now apparent to me. See the thread here: https://sweetops.slack.com/archives/CB6GHNLG0/p1597383937062900?thread_ts=1597346608.048900&cid=CB6GHNLG0
I’ll put a plan together tomorrow, but we’re likely going to need some help opening PRs to expedite this upgrade. Not sure how much of this we can automate.
hey all. I’m currently a bit stuck. I’ve successfully made and provisioned a server, set up s3 buckets, etc. How do I then use the docker provider to set up my containers on the newly created server?
I tried moving the docker provisioning to a new module, but got this message, and can’t work out the syntax:
This module can be made compatible with depends_on by changing it to receive all of its provider configurations from the calling module, by using the "providers" argument in the calling module block.
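What that message is asking for looks roughly like this (a sketch; the module path, the docker host, and the resource names are all hypothetical):
# root module: configure the provider here, not inside the child module
provider "docker" {
  host = "ssh://ubuntu@${aws_instance.server.public_ip}:22"
}

module "containers" {
  source = "./modules/containers"

  # pass the configuration down explicitly so depends_on is allowed
  providers = {
    docker = docker
  }

  depends_on = [aws_instance.server] # module-level depends_on needs terraform 0.13+
}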
2020-08-14
Hey guys,
I have a question regarding TF state as set from different GitHub branches
Let me explain
So I am working on a branch, seeking to create a new terraform resource. My branch creates this resource in a staging environment. Once I am done with all tests, I will merge to master and allow deployment to production
My colleague is doing the same thing - a different resource, but deploying to staging, pending tests, upon whose success the resource will then be deployed to production
So whenever I push my changes to my branch, CI/CD performs checks and deletes my colleague’s resources because they are in the shared state file, but absent from my branch
Same thing happens when he pushes his commits to the remote branch. Checks are triggered on GH, removing my resource(s) which is/are missing from his branch but present in the remote state file. This is really slowing us down as one has to wait for the other to be finished with their testing and so forth. I have a feeling that there should be a way to prevent this from happening, but I am just not sure how
Your assistance is appreciated
I’m not sure if this is necessarily a good idea, but Terragrunt can generate remote state configuration files when running. It may be possible to make branch-specific remote states.
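Roughly like this (a sketch; the bucket and the BRANCH_NAME env var your CI would export are hypothetical):
# terragrunt.hcl
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket = "my-tf-state"
    key    = "${get_env("BRANCH_NAME", "master")}/terraform.tfstate"
    region = "us-east-1"
  }
}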
Sounds like it would work… Let me throw that to my lead and hear what he thinks. Thank you
Alternatively, if this is for testing anyway, I would spin up a new stack per developer when needed with a completely fresh state. This does assume your TF is modularised enough to be able to spin up test instances without costing a fortune though
You could use a different s3 key in the backend
Or you could merge the branches and both of you can work off the same branch
Merging the branches is super unlikely because we work at different paces, different timezones…
I like the idea of having branch-specific state files better
I wonder if it would be possible to merge state files like you merge branches
That sounds… unpleasant
What? The different paces of work or merging state files?
Merging of state files. I think you’d be better off doing development in entirely separate states, merging the TF changes into a master branch, then deploying that to the master state file
Ya different states make more sense to me too. Seems cleaner. If you ever want to combine, you can always reimport resources from another state and then remove that state.
what is a safe way to migrate from route table/security group/etc in-line routes/rules to individual routes/rules terraform resources ?
e.g. aws_route routes vs setting all the routes in-line in aws_route_table as there is this warning
Terraform currently provides both a standalone Route resource and a Route Table resource with routes defined in-line. At this time you cannot use a Route Table with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings and will overwrite rules.
thread
what i’ve done is separate all the in-line to separate aws_route resources and then imported each aws_route from the aws_route_table
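for anyone following along, each one looks something like this (names and IDs hypothetical; the import ID for aws_route is the route table ID and the destination joined by an underscore):
resource "aws_route" "private_default" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}
# then, per route:
#   terraform import aws_route.private_default rtb-0123456789abcdef_0.0.0.0/0
# and finally delete the in-line route blocks from the aws_route_table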
2020-08-16
Hello All, I’m new to terraform. I’m trying to convert the terraform kubernetes ingress resource into a module (https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/ingress#backend), but when I call the module I get the error below. Can someone help me fix it?
resource "kubernetes_ingress" "example_ingress" {
  count = var.ingress ? length(var.spec) : 0
  metadata {
    name        = var.name
    namespace   = var.namespace
    labels      = var.labels
    annotations = var.annotations
  }
  dynamic "spec" {
    for_each = length(keys(var.spec[count.index])) == 0 ? [] : [var.spec[count.index]]
    content {
      rule = lookup(spec.value, "rule", null)
      dynamic "rule" {
        for_each = length(keys(lookup(spec.value, "rule", {}))) == 0 ? [] : [lookup(spec.value, "rule", {})]
        content {
          host = lookup(rule.value, "host", null)
          http {
            path {
              path = lookup(rule.value, "path", "*/")
              backend {
                service_name = lookup(rule.value, "service_name", null)
                service_port = lookup(rule.value, "service_port", null)
              }
            }
          }
        }
      }
    }
  }
}
Error: Missing item separator
on ingress.tf line 17, in module "ingress":
  16:
  17: rule = {
Expected a comma to mark the beginning of the next item.
[terragrunt] 2020/08/17 0539 Hit multiple errors: exit status 1
2020-08-17
Those of you using chamber to store secrets in AWS Parameter Store - do you store the secrets encrypted in a git repository as well as a backup?
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.
I use chamber but keep it in AWS only.
CLI for managing secrets. Contribute to segmentio/chamber development by creating an account on GitHub.
@Andy I’ve done purely storing in PStore before. I’m also more recently using Mozilla Sops + the Sops provider alongside that.
Hi there - we are looking to import our Route53 hosted zones (that are currently managed by hand) into TF Cloud. The ideal layout (at least the way I see it now) is that we have the following GitHub layout, which can be managed by CODEOWNERS to approve specific files. Does it make sense to put it all in one workspace, or should each hosted zone get its own workspace? I know ideally we’d want each lifecycle to get its own workspace, but that isn’t a reality yet as many things are old and spun up by hand and may never exist in terraform.
### main.tf
provider "aws" {
region = "us-east-1"
access_key = KEY
secret_key = SECRET
}
terraform {
backend "remote" {
hostname = "app.terraform.io"
organization = "Org"
workspaces {
name = "aws-prod-route53"
}
}
# <https://github.com/terraform-providers/terraform-provider-aws/issues/13626>
required_providers {
aws = "~> 2.64.0"
}
}
resource "aws_route53_record" "staging_record" {
zone_id = aws_route53_zone.org.zone_id
name = "staging.org.com"
type = "A"
ttl = 300
records = ["55.82.222.111"]
}
### network-team.tf
resource "aws_route53_record" "cisco_record" {
zone_id = aws_route53_zone.org.zone_id
name = "cisco.org.com"
type = "A"
ttl = 300
records = ["55.82.222.112"]
}
### CODEOWNERS
## <https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners>
* @global-owner1 @global-owner2
network-team.tf @network-owner
I’m actually in the middle of doing this as well and interested to find out what people recommend. Right now, because my network team oversees more than one client/environment, I created a repo infra-networking that is a single workspace vs one per environment (dev, stage, prod), because the same core team would likely be in control of all of that and it doesn’t change as often as the application or operations code does
if you have the dns stuff in the infra-networking repo managed by the network team, what you can do is use data sources to query those resources and use them in your environments
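e.g. something like this in the consuming environment (zone and record names hypothetical):
data "aws_route53_zone" "org" {
  name = "org.com."
}

resource "aws_route53_record" "app" {
  zone_id = data.aws_route53_zone.org.zone_id
  name    = "app.org.com"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}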
How did your project go with this? Any lessons learned you would mind sharing with me?
We had to put this on hold to set up some azure k8s clusters using tf. But if anyone else has success please share.
Anyone get a basic quote on Terraform Business Tier? Trying to figure general price to put a placeholder in the budget, but no phone number to call and not sure how long before I get an email from sales.
Ballpark would be helpful. Team tier $70 per person, and business is “contact sales”
Hi, I noticed that capital letters in the value for module.ssh_key_pair.name are automatically converted to lowercase!
module "ssh_key_pair" {
source = "git::<https://github.com/cloudposse/terraform-aws-key-pair.git?ref=0.11.0>"
name = "KEYPAIR"
ssh_public_key_path = "${path.module}/"
generate_ssh_key = "true"
private_key_extension = ".pem"
public_key_extension = ".pub"
}
Is this a known thing or am I doing something wrong ?
The entire cloud posse ecosystem of modules uses the terraform-null-label module to enforce consistency
thanks for letting me know, i’ll follow the convention
2020-08-18
hello all, I’m trying to point a list of record sets to a single load balancer. I’ve got this tf setup
#---------------------------------------------------
# CREATE ALIAS RECORDS
#---------------------------------------------------
resource "aws_route53_record" "alias_route53_record" {
zone_id = data.aws_route53_zone.selected.zone_id
name = values(var.record_sets)
type = "A"
alias {
name = data.aws_lb.selected.dns_name
zone_id = data.aws_lb.selected.zone_id
evaluate_target_health = true
}
}
#tfvars
record_sets = {
record_set0 = "foo"
record_set1 = "bar"
}
Error output
Error: Incorrect attribute value type
on main.tf line 39, in resource "aws_route53_record" "alias_route53_record":
39: name = values(merge(var.record_sets))
|----------------
| var.record_sets is object with 2 attributes
Inappropriate value for attribute "name": string required.
Was wondering if this is possible or must I create two aws_route53_record resources. Thank you
Resolved. Forgot for_each existed
resource "aws_route53_record" "alias_route53_record" {
zone_id = data.aws_route53_zone.selected.zone_id
for_each = var.record_sets
name = each.value
type = "A"
alias {
name = data.aws_lb.selected.dns_name
zone_id = data.aws_lb.selected.zone_id
evaluate_target_health = true
}
}
Hashicorp Consul Service (HCS) for Azure affected Aug 18, 15:43 UTC Resolved - At approximately 14:26UTC we began experiencing errors with the creation of consul clusters for HashiCorp Consul Service (HCS) on Azure.
Engineers quickly identified the issue and implemented corrective action.
Services are now fully operational.
HashiCorp Services’s Status Page - Hashicorp Consul Service (HCS) for Azure affected.
I’m facing a strange issue. Cannot figure out what’s happened
We just did some modifications to one resource in our .tf file, nothing else, and from that point terraform plan started showing logs as if it were run with TF_LOG=TRACE. There is no env var set.
Terraform 0.12.24, it’s a resource of some external provider.
Are you sure it’s TRACE-level output and not tf dumping state?
It might be, indeed.
So initially we faced an issue with TF locks during terraform plan, but there couldn’t be any other pipeline or person triggering the same terraform apply. The error was:
2020/08/18 22:12:14 [TRACE] backend/local: requesting state manager for workspace "my-workspace"
2020/08/18 22:12:15 [TRACE] backend/local: requesting state lock for workspace "my-workspace"
o:Acquiring state lock. This may take a few moments...
e:
Error: Error locking state: Error acquiring the state lock: writing "<gs://my-bucket/terraform/my-workspace.tflock>" failed: googleapi: Error 412: Precondition Failed, conditionNotMet
Lock Info:
ID: 1597788628081747
Path: <gs://my-bucket/terraform/my-workspace.tflock>
Operation: OperationTypePlan
Who: runner@runner-urx-q8js-project-102-concurrent-0nbfb2
Version: 0.12.24
Created: 2020-08-18 22:10:27.940237341 +0000 UTC
Info:
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Note the TRACE output, there are other TRACE, DEBUG and INFO lines prior to this.
If we do terraform plan -lock=false it shows tons of debug output like this:
...
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile: labels:
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile: app: prometheus-operator-prometheus
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile:
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile: chart: prometheus-operator-8.7.0
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile: release: "prom"
2020-08-18T17:35:05.431Z [DEBUG] plugin.terraform-provider-helmfile: heritage: "Helm"
2020-08-18T17:35:05.432Z [DEBUG] plugin.terraform-provider-helmfile: roleRef:
...
And it takes ages to proceed and usually fails afterwards without a noticeable error. But sometimes, quite rarely now, it works ok and shows normal output. We can’t figure out the reason. We started getting the same thing on another environment where there were no modifications of tf resources and no tools were updated or anything.
What a cryptic case.
Filed an issue in helmfile-provider repo
Hey! Would I be able to get a review on this PR to add in kms_key_id to terraform-aws-elasticache-redis - https://github.com/cloudposse/terraform-aws-elasticache-redis/pull/75
Change-Id: I23d1288851301328afaa61686b42d8376d303415 what This change allows a user to supply their own kms_key_id from a previously created kms key when at rest encryption is enabled why Securi…
Thanks!
https://github.com/aws/containers-roadmap/issues/56 is killing me. Anybody got a better alternative to config files in Fargate, other than a sidecar container&base64?
I was thinking maybe EFS & local-exec to copy a file but that’s even worse
Tell us about your request Would be very nice to be able to mount strings (secrets/configurations) defined in the task definition into the container as a file. Which service(s) is this request for?…
Sorry! missed this. will add to agenda next week.
I wouldn’t add it. There really is no better way. But hey, it might lead to an interesting discussion
oh, haha thought I was in the #office-hours channel
What I ended up doing is using templatefile() + AWS SSM and overriding the entrypoint to the container. I’d rather not change the app or modify the code / image, so this was the easiest way.
secrets = [
# Hacky hack to get the config file in the Fargate container
# see <https://github.com/aws/containers-roadmap/issues/56>
# TODO: move this to Secrets Manager for extra 2KB size
{
name = "ENCODED_CONFIG"
valueFrom = aws_ssm_parameter.config.arn
},
{
name = "ENCODED_RULES"
valueFrom = aws_ssm_parameter.rules.arn
},
]
entrypoint = [
"bash",
"-c",
"set -ueo pipefail; unset AWS_CONTAINER_CREDENTIALS_RELATIVE_URI; unset AWS_EXECUTION_ENV; mkdir /etc/samproxy; echo $ENCODED_CONFIG | base64 -d > /etc/samproxy/samproxy.toml; echo $ENCODED_RULES | base64 -d > /etc/samproxy/rules.toml; /usr/bin/samproxy"
]
Posting the link here in case any poor souls find this reference
A Terraform module for running Honeycomb.io’s Samproxy on AWS in Fargate - Vlaaaaaaad/terraform-aws-fargate-samproxy
2020-08-19
https://github.com/aliscott/infracost looks interesting, but early days
Get cost estimates from a Terraform project. Contribute to aliscott/infracost development by creating an account on GitHub.
Hey guys, I have a workflow with a loophole in my terraform script. I use a provisioning server to build an AMI, and that AMI is used by autoscaling groups. After that I don’t need the provisioning server, but if I force-remove it manually, it creates an issue: whenever I do terraform apply it recreates that provisioning server and wants to update the AMI. I don’t want to do the provisioning manually.
Added to list for next week
Does anybody know if Terraform has a state file size limit? Is there any recommendations that it shouldn’t be, say, more than nMb? All I could find is this discussion: https://discuss.hashicorp.com/t/getting-error-when-tfstate-is-larger-than-4mb/6121
We’re storing our tfstate files in our documentDB (AWS mongodb compatible database) with a HTTP REST API in front of it using the HTTP backend, which has been fine thus far but it seems the size of the state file has crossed the 4MB threshold and the terraform apply worked fine but now I can’t do a terraform destroy because I’m getting the error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5429624 vs. 4194304) Is there a way we can increase this limit for t…
I don’t know about actual limits but I think people tend to keep their state files small to reduce the blast radius and make it not unbearably slow to run (big state file = lots of resources = lots of API calls = slow).
Agree with @randomy - but will also bring up next week
On the other hand, it is pretty nice when you can run terraform and it handles everything. Fewer chicken-and-egg situations, no need to update various dependent stacks afterwards.
@randomy ya i’m torn on it
Multiple projects is ideal for reducing blast radius, but it complicates cold-starts and moves the responsibility of managing the complete DAG to you.
Using a single project is great because it’s all handled for you, but plans/applies can take hours. Using -target is not ideal either, because you need to know which things to target or you risk inconsistent state as well.
I’ve been working with CodePipeline lately to deploy to multiple AWS accounts. It’s pretty tempting to make a single TF stack that creates resources in every target environment/account + pipeline resources. It’s got a fairly small blast radius but still mixes nonprod with prod. I’m writing the module example this way and mostly prefer it over the actual implementation that involves separate stacks per account/environment using remote states etc.
To add context, this is an auto scaling group in multiple environments + 2 pipelines to deploy new AMIs and app artifacts to them. So it would be a stack per “service” but it covers all environments. It is pretty quick to run TF so the dilemma is mostly around blast radius/separation. In my current case the ASG is stateless (web servers) so I’m comfortable with it. Would be less comfortable otherwise.
Bringing this back to the original question, I struggle to decide on how to slice and dice state files but it’s always a long way off hitting 4mb.
ello peoples, anyone know of any good resources on implementing ci/cd on aws with terraform. In particular best practices on managing the plan and apply commands in the build phase using codebuild and interacting with s3 state files?
Is there any solution to using the Terraform Cloud module registry for my team without everyone needing a login? I mean, I like the browsing, the ability to use version syntax without tags and all, but I realized that if I wanted to roll out consuming those modules, NOT running plans, I’m stuck once the 5 free users limit is hit. I basically need to allow “read-only” users for the registry, for example.
Any known solution to this?
Anyone know how to resolve this issue? https://github.com/cloudposse/terraform-aws-ecs-web-app/issues/63
I’m trying the complete example using EC2 with codepipeline_enabled = false, webhook_enabled = false, ecs_alarms_enabled = false, codepipeline_badge_enabled = false. I get:
Error: If `individual` is false, `organization` is required.
on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
7: provider "github" {
If I define codepipeline_repo_owner = "hashicorp" like the issue describes, I get:
Error: If `anonymous` is false, `token` is required.
on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
7: provider "github" {
I looked at the provider "github" definition in the subdep and there does not seem to be a direct way of defining it from this repo
Found a bug? Maybe our Slack Community can help. Describe the Bug Here is my terraform code module "ecs_web_app" { source = "git://github.com/cloudposse/terraform-aws-ecs-web->…
Set the GITHUB_TOKEN environment variable and try again
Hm, no go, tried a few ways
• export GITHUB_TOKEN=test
• GITHUB_TOKEN=test terraform apply --var-file=eng.tfvars
• terraform apply --var-file=eng.tfvars -var 'GITHUB_TOKEN=test'
Oh, I noticed I get 401 Bad credentials []
but I don’t even want to use anything related to codepipeline
The terraform-aws-ecs-web-app module is designed to be an opinionated module that shows how to use all the other modules together. Since we write very small composable modules, this lets you pick and choose the best pieces. If the web-app module doesn’t fit that use-case, check out the main.tf file and see how it uses the other modules. Rip out what you don’t need.
But as @RB points out, there might be an easier alternative.
maybe this module isn’t for me - maybe I should use terraform-aws-ecs-alb-service-task instead
thanks for your help, I’m going to use the other module
This looks like a good conclusion.
Just set the repo_owner = "hashicorp" and it should work. It’s mentioned in the steps to reproduce.
i tried at the time (also stated it in the original message), that did not work
setting the repo_owner is not the same as setting the codepipeline_repo_owner
I only see one reference to repo_owner in the example main.tf:
repo_owner = var.codepipeline_repo_owner
have you tried setting repo_owner = "hashicorp" without codepipeline ?
hm i see
ok, i’ll try it out
yeah, it is not working
Warning: Value for undeclared variable
The root module does not declare a variable named "repo_owner" but a value was
found in file "eng/gp-view.tfvars". To use this value, add a "variable" block
to the configuration.
Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.
Error: If `anonymous` is false, `token` is required.
on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
7: provider "github" {
I went to the main.tf in master, and that corresponds to
https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L175
which is the same in the example as
https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/examples/complete/main.tf#L138
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
My source is terraform-aws-ecs-web-app.git?ref=tags/0.39.1
btw
try planning this entire block
module "ecs_web_app" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.36.0>"
name = local.name
vpc_id = local.vpc_id
alb_ingress_unauthenticated_listener_arns = [data.aws_lb_listener.selected.arn]
alb_ingress_unauthenticated_listener_arns_count = 1
aws_logs_region = "us-east-2"
region = "us-east-2"
ecs_cluster_arn = data.aws_ecs_cluster.selected.arn
ecs_cluster_name = data.aws_ecs_cluster.selected.cluster_name
ecs_security_group_ids = [data.aws_security_group.selected.id]
ecs_private_subnet_ids = local.subnet_ids
alb_ingress_healthcheck_path = "/healthz"
alb_ingress_unauthenticated_paths = ["/*"]
codepipeline_enabled = false
cloudwatch_log_group_enabled = false
webhook_enabled = false
alb_security_group = "sg-11112222"
repo_owner = "hashicorp"
}
see if this block works for you and then modify it to your liking
still no go
Error: If `anonymous` is false, `token` is required.
on .terraform/modules/ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 7, in provider "github":
7: provider "github" {
I replaced the ecs_web_app block from the complete example with yours, and replaced some variables
module "ecs_web_app" {
source = "git::<https://github.com/cloudposse/terraform-aws-ecs-web-app.git?ref=tags/0.36.0>"
name = "test-web-app"
vpc_id = module.vpc.vpc_id
alb_ingress_unauthenticated_listener_arns = [module.alb.alb_arn]
alb_ingress_unauthenticated_listener_arns_count = 1
aws_logs_region = "us-west-2"
region = "us-west-2"
ecs_cluster_arn = aws_ecs_cluster.default.arn
ecs_cluster_name = aws_ecs_cluster.default.name
ecs_security_group_ids = [module.vpc.vpc_default_security_group_id]
ecs_private_subnet_ids = module.subnets.private_subnet_ids
alb_ingress_healthcheck_path = "/healthz"
alb_ingress_unauthenticated_paths = ["/*"]
codepipeline_enabled = false
cloudwatch_log_group_enabled = false
webhook_enabled = false
alb_security_group = "sg-11112222"
repo_owner = "hashicorp"
}
also made sure to terraform init
that’s really weird
it looks like you’re using an old version
can you try using 0.39.1, which is the latest tag in the repo
i’d also try a terraform init -upgrade
or better yet, wipe rm -rf .terraform/ && terraform init -upgrade
if that doesn’t work then idk
2020-08-20
hoping someone can help. I created an individual environment module which sources out to my main module.
module "test" {
source = "../../terraform-aws-ec2-auto-scale/"
region = var.region
aws_vpc = var.aws_vpc
subnet_ids = var.subnet_ids
}
If i execute a plan in my environment module I get the following error
2020/08/20 11:00:57 [ERROR] eval: *terraform.EvalSequence, err: Your query returned no results. Please change your search criteria and try again.
2020/08/20 11:00:58 [WARN] ReferenceTransformer: reference not found: "var.subnet_ids"
module.prtg.data.aws_security_group.default: Refreshing state...
module.prtg.data.aws_subnet_ids.default: Refreshing state...
Error: Your query returned no results. Please change your search criteria and try again.
But if i execute a plan within my source module, terraform plan and apply works.
link?
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
Hi all, I am using this module https://registry.terraform.io/modules/terraform-aws-modules/atlantis/aws/2.23.0, thanks to @antonbabenko, for setting up Atlantis with Gitlab. It works fine when the load balancer is external, but as soon as I change it to be internal it is not reachable from the internet and therefore from Gitlab servers. Do you know if this module supports this, or what I can do to be able to use the internal ALB while staying reachable from gitlab servers / the internet? Many thanks in advance.
If you’re using public subnets, you’ll have to also set the public ip variable to true
If your elb is publicly facing, make sure to allow only the GitLab and your office IP CIDRs
Otherwise anyone in the world will be able to hit it
correct, that is how I have it right now: open but restricted to my office and the gitlab server
I am using both public and private subnets; basically I am following this https://github.com/terraform-aws-modules/terraform-aws-atlantis#run-atlantis-as-a-terraform-module
I am going to set ecs_service_assign_public_ip = true as you mentioned
Terraform configurations for running Atlantis on AWS Fargate. Github, Gitlab and BitBucket are supported - terraform-aws-modules/terraform-aws-atlantis
2020-08-21
Hi All, I’m trying to setup a EKS cluster and have the following config on a brand new AWS account. I want the private subnets to use a NAT gateway
module "vpc" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.16.1>"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
cidr_block = "172.16.0.0/16"
tags = local.tags
enable_internet_gateway = true
}
module "subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.19.0>"
availability_zones = var.availability_zones
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = true
nat_instance_enabled = true
tags = local.tags
vpc_default_route_table_id = ""
}
module "eks_cluster" {
source = "git::<https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.24.0>"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = var.tags
region = var.region
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
kubernetes_version = var.kubernetes_version
local_exec_interpreter = var.local_exec_interpreter
oidc_provider_enabled = var.oidc_provider_enabled
enabled_cluster_log_types = var.enabled_cluster_log_types
cluster_log_retention_period = var.cluster_log_retention_period
kubernetes_config_map_ignore_role_changes = false
}
When I apply this, there is some weird race condition which results in this error every time, even if I delete the resources and start again:
Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: xxx
on .terraform/modules/subnets/nat-gateway.tf line 62, in resource "aws_route" "default":
62: resource "aws_route" "default" {
Error: Error creating route: RouteAlreadyExists: The route identified by 0.0.0.0/0 already exists.
status code: 400, request id: xxx
on .terraform/modules/subnets/nat-gateway.tf line 62, in resource "aws_route" "default":
62: resource "aws_route" "default" {
I only see this error when
nat_gateway_enabled = true
nat_instance_enabled = true
in the subnets module. Can someone please help me track why this occurs or help me debug this before I raise it as a bug?
yes, either one, not both
we use nat gateways in staging and prod
we use nat instances (micro) in dev and test and sandbox to save some money (EC2 instances are cheaper than NAT Gateways)
@Andriy Knysh (Cloud Posse) btw, is there any special reason to create EKS workers in a public subnet? Isn’t it more secure to put them in a private subnet by default? at https://github.com/cloudposse/terraform-aws-eks-cluster examples
module "eks_workers" {
...
subnet_ids = module.subnets.public_subnet_ids
thanks for the advice guys, the comments make it clear, at the moment I’m just testing around but the advice to use one instead of both worked
@ismail yenigul in production you should put the worker nodes into private subnets
Yes, that would be great to update example to make it safer
the cluster itself should be given all subnets, private and public. k8s will use the public subnets to create load balancers in
also it could be better to add subnet tagging with elb and internal-elb by default
# <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html>
locals {
public_subnets_additional_tags = {
"kubernetes.io/role/elb" : 1
}
private_subnets_additional_tags = {
"kubernetes.io/role/internal-elb" : 1
}
}
module "subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.22.0>"
namespace = var.namespace
stage = var.stage
environment = var.environment
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = local.tags
availability_zones = var.availability_zones
cidr_block = module.vpc.vpc_cidr_block
igw_id = module.vpc.igw_id
map_public_ip_on_launch = var.map_public_ip_on_launch
max_subnet_count = var.max_subnet_count
nat_gateway_enabled = var.nat_gateway_enabled
nat_instance_enabled = var.nat_instance_enabled
nat_instance_type = var.nat_instance_type
public_subnets_additional_tags = local.public_subnets_additional_tags
private_subnets_additional_tags = local.private_subnets_additional_tags
subnet_type_tag_key = var.subnet_type_tag_key
subnet_type_tag_value_format = var.subnet_type_tag_value_format
vpc_id = module.vpc.vpc_id
}
Ah where is that? I was looking at https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
the example is outdated and was not updated to use the subnet tags
we will update it asap when have time
I can send a PR if you’re happy with those changes, but it seems README.md is auto-generated
we have GitHub action for that, don’t worry, we’ll run the command on your PR
Enabled subnet tags for ALB ingress controller in the example Create NAT gateway for private subnets Use private subnet for EKS nodes
2020-08-22
is a .hcl file the same as a .tf file?
They are both HashiCorp configuration language but one uses the specific terraform syntax
Is there a way to convert between tf and hcl and back?
What’s the use case
I was looking at https://github.com/minamijoyo/hcledit and wondering how to programmatically remove a resource from terraform code while maintaining file structure and comments
A command line editor for HCL. Contribute to minamijoyo/hcledit development by creating an account on GitHub.
.tf is just a file extension. The syntax of the contents is HCL
Looks like hcledit should be able to handle that
Nifty project, nice find
I know .hcl is used for vault config and policies
A schema inspector for Terraform providers. Contribute to minamijoyo/tfschema development by creating an account on GitHub.
it’s like policy_sentry querying of iam perms but instead it queries arguments for terraform resources
this guy minamijoyo is killing it with the tf tools
minamijoyo has 65 repositories available. Follow their code on GitHub.
This is cool!
2020-08-23
Hey Guys, I’ve started using the EKS cluster terraform module. I was running a few tests with the release tag 0.26.1. I see that there is no kubeconfig path to store it locally, nor is its content shown; all I see is the "kubernetes_config_map_id". I’ve seen people using kubeconfig_path = var.kubeconfig_path in earlier versions. Not sure if I’m missing something, or what the best way is to display the contents of the kubeconfig from the outputs or store it on a local path.
Thanks for your help in advance.
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
I’m currently working on a Terraform module that needs to create a Kubernetes cluster as well as deploy some helm charts to it. I need it to be as “production-ready” as possible. What’s the best approach right now for using Terraform to deploy things to Kubernetes?
For further context, the module will spin up AWS resources (EC2 instances, security groups, etc), then use the Terraform RKE provider to create the k8s cluster. Here’s an example from Rancher that is close to what I want to do, but they clearly say that it is not meant for production. Here’s my repo if you want to follow along with my progress. I’m working in the feature/initial_dev branch.
While not a set-in-stone requirement, if at all possible I would like to avoid requiring any local-exec or dependencies on any locally installed tools other than Terraform. If it does require something local, using Docker to do it would be nice.
Terraform Helm Provider? I don’t know much about it, though it looks to have decently good support
- Does it require helm to be installed on the machine running Terraform?
- Is it being used anywhere successfully in production?
Terraform Helmfile Provider? Probably not much more than an honorable mention since it is so new, but I do :heart: pretty much anything @mumoshu touches :grin:
- Does it require helm, helmfile, helm-diff, helm-git, etc to be installed on the machine running Terraform? (If I am reading correctly, the answer is yes)
Local-exec using helm/helmfile in an idempotent way? Some of my colleagues do this, but I believe it is just too crude to use in production
Terraform Shell Provider? This feels like a souped-up version of local-exec that at least gives me better lifecycle management (thanks @mumoshu for linking to it in the helmfile provider docs)
Flux Helm Operator? the Flux project has a Helm operator that looks really nice. I’d need to get the operator installed, and then need to figure out the best way to get the CRDs applied, but it looks like it has nice potential
https://github.com/minamijoyo/hcledit - a commandline hcl2 attribute editor
A command line editor for HCL. Contribute to minamijoyo/hcledit development by creating an account on GitHub.
Ya. Minamijoyo has some great tools. Check out tfschema too
Update version constraints in your Terraform configurations - minamijoyo/tfupdate
Niiiiiice!
which one of those is better, resource "foo_resource" "example" {} or resource foo_resource "example" {} ?
i’ve been going with the policy of quotes only when absolutely necessary
so resource foo_resource example {}
whoah hold on. The quotes aren’t required on the resource type and name, for terraform?
Nope, maybe once upon a time they were, but not since hcl2 and tf 0.12 iirc
omg
2020-08-24
Has someone gotten 0.13 working with locally installed providers that are not yet in the terraform registry?
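In 0.13 you give the provider a source address and drop the binary into the new local mirror layout, roughly like this (provider address and version hypothetical):
terraform {
  required_providers {
    myprovider = {
      source  = "example.local/me/myprovider"
      version = "1.0.0"
    }
  }
}
# binary goes under (linux example):
#   ~/.terraform.d/plugins/example.local/me/myprovider/1.0.0/linux_amd64/terraform-provider-myprovider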
Hello, I want to build a bootstrapping for a new SM IaaS process so that I can deploy SM resources in a standardized and repeatable fashion. I need some help and guidance to start my project. Any advice please?
If you are getting your hands dirty with Terraform for the first time, this book is a must for starters:
https://www.terraformupandrunning.com/
P.S. I am not related to the book or authors in any way.
This book is the fastest way to get up and running with Terraform, an open source tool that allows you to manage your infrastructure as code across a variety of cloud providers.
Hi, How do I output a resource created with for each?
resource "aws_route53_record" "cloudfront" {
for_each = toset(var.domain_name)
name = each.key
}
output "dns_record" {
value = aws_route53_record.cloudfront[each.key].fqdn
}
Gives error :
The "each" object can be used only in "resource" blocks, and only when the
"for_each" argument is set.
you can output the entire resource:
output "dns_record" {
value = aws_route53_record.cloudfront
}
or a map of specific attributes:
output "dns_record" {
value = { for key, resource in aws_route53_record.cloudfront : key => { fqdn = resource.fqdn } }
}
or a list of a single attribute:
output "dns_record" {
value = [ for resource in aws_route53_record.cloudfront : resource.fqdn ]
}
docs on for expressions: https://www.terraform.io/docs/configuration/expressions.html#for-expressions
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
Can also do aws_route53_record.cloudfront.*.fqdn iirc
Or does that only work on counts?
That syntax only works on count
… When using for_each the object is a map, not a list
Ah, my apologies
hello peoples - anyone know of any good terraform slack notification tool which shows plan/destroy in slack?
https://github.com/terraform-aws-modules/terraform-aws-notify-slack
Usually, notifications are part of the CI/CD pipeline if you are using it for infra delivery.
Terraform module which creates SNS topic and Lambda function which sends notifications to Slack - terraform-aws-modules/terraform-aws-notify-slack
Thanks for the ping. Will this send plan and/or destroy output to slack? Is there an example slack msg I can look at to see if that’s what I want?
Does anybody know if there’s a way to force a recreate on an instance when userdata changes? I’ve found a lot of documentation on people having the opposite problem a few years ago (that it used to do that, but now it doesn’t)
In general though, where we’ve had to control these sorts of events we use the null_resource with triggers
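a minimal sketch of that pattern (names hypothetical); the null_resource is replaced whenever the rendered userdata changes, and anything hung off it re-runs:
resource "null_resource" "on_user_data_change" {
  # keyed on a hash of the userdata
  triggers = {
    user_data_sha = sha256(var.user_data)
  }

  provisioner "local-exec" {
    command = "echo userdata changed, rotate the instance"
  }
}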
https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/commit/19b680d0d21796fa21bf97aef40ebd8f1acc84c4 It looks like this is a breaking change for the way we’re using this module which maybe I wasn’t using correctly. Why is aws_iam_role.ecs_service disabled when using awsvpc as the network_mode?
- Use the ecs service role Fixes https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/51 * Update main.tf Co-authored-by: Andriy Knysh <[email protected]…
Ref: original issue: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/51
Ref: original pr: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/52/files
- Use the ecs service role Fixes https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/51 * Update main.tf Co-authored-by: Andriy Knysh <[email protected]…
i believe the awsvpc network mode is required for fargate
so the code is basically saying, if the network mode is not awsvpc (not for fargate), then disable creation of the ecs service role, the service policy, the role policy, and remove the iam role from the ecs service
could you create an issue ticket of what is breaking, how it’s breaking, and what your use case is ?
Our Terraform plan is attempting to disable this role for our fargate instances..
I’ll create a ticket
fyi, for now, can you pin to the tag version prior to that pr ?
please also include your arguments for the terraform module like the var.network_mode
yeah, we’re pinning to a tagged version. that’s why we’re just now seeing an issue
oh ok cool so at least 0.25.0 works for now until we can come up with a solution upstream
Sure thing! thanks for your help
then it will be easier to move the pin forward
of course, np!
i was the one that put the change in so i should probably fix the mistake wherever it may be
seems like region is not a field on the bucket resource anymore? how does it decide which region to provision in? using the provider region?
yes, it uses the provider region
kewl i see
https://github.com/cloudposse/terraform-aws-elasticsearch/pull/68 this blocks migration to 0.13
…raform 0.13 what Update referenced module version so that we don't get a version conflict when using TF 0.13 why Currently I get the following error message when running terraform init: Er…
This new, opinionated Terraform wrapper / framework just launched: https://terraspace.cloud/
I wouldn’t use it, but would be interested in hearing from others if they would.
The Terraform Framework
There is a need for a nice template system, I think. I’d be much more interested if it was a cross-platform Go app and as easy as git-town or a similar tool.
The Terraform Framework
Yeah, I mentally took points off because it was written in Ruby as well. I’m no great Go programmer by any means, but any ops / infra related tooling creator should realize by now that you need to write your tool in Go for the community to get behind you.
Lol. Maybe not required to be in Go, but saying no Windows support is kinda leaving a huge gaping hole in coverage. I work on macOS, but any tooling I use for a team needs cross-platform support or at least a docker workflow at the minimum. It has to be super easy to get going or there’s no chance of adoption in such a busy world imo.
We are releasing a library built around variant2 by @mumoshu that extends this kind of functionality for terraform and helmfile. The rad thing about variant2 is it cross compiles to multiple platforms and is written in go. But all the workflows are written in native HCL2.
The ETA is “any day” now, but there are a couple of bug fixes we need before it works as a remote module. Btw, variant2 supports remote variants just like terraform supports remote modules. So it’s an insanely reusable cli workflow.
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
Stoked
Hah. Uses Ruby/Thor. Looks just like my opinionated Puppet tool from ages ago: https://github.com/tehmaspc/puppet-magnum
Agree - wouldn’t want to use Ruby anymore myself either.
2020-08-25
anyone have an example of disabling/enabling replication configuration in an s3 bucket via a flag?
here’s my config for the main bucket that isn’t currently working:
replication_configuration {
role = aws_iam_role.replication.*.arn
rules {
status = local.enable_replica_bucket ? "Enabled" : "Disabled"
destination {
bucket = aws_s3_bucket.replica_bucket.*.arn
storage_class = "STANDARD"
}
}
}
The thing is the resources for the iam role and the replica bucket only exist if enable_replica_bucket is true
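One way to gate the whole block is to make it dynamic so it disappears when the flag is off (a sketch inside the aws_s3_bucket resource, reusing the names above; note the [0] indexing since those resources are counted):
dynamic "replication_configuration" {
  # zero iterations when replication is disabled, one when enabled
  for_each = local.enable_replica_bucket ? [1] : []

  content {
    role = aws_iam_role.replication[0].arn

    rules {
      id     = "replica"
      status = "Enabled"

      destination {
        bucket        = aws_s3_bucket.replica_bucket[0].arn
        storage_class = "STANDARD"
      }
    }
  }
}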
What do folks here do with respect to AWS Subnets and using data lookups to find Subnet IDs to use in other modules - more importantly, need a pattern for finding Subnet IDs with free address space while ensuring future state doesn’t change for already deployed infrastructure? Basically, our network team doesn’t give us a lot of network/IP space, so we’ve carved up what we do have and created this concept of Subnet Pools across AZs. The issue is how to fan across each Pool while being properly deterministic? I usually don’t have this problem since most places I’ve worked aren’t so stingy w/ IP space in AWS
we use a data source to find the vpc using an application tag
we have application=oregon for a single vpc so a data source uses that to get that vpc.
then we split that vpc up in public and private subnets
so we then use a data source using the vpc id from the first vpc data source and look for public=true
and we get a list of public subnets
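so roughly (tag keys from above, values illustrative):
data "aws_vpc" "selected" {
  tags = {
    application = "oregon"
  }
}

data "aws_subnet_ids" "public" {
  vpc_id = data.aws_vpc.selected.id

  tags = {
    public = "true"
  }
}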
And then you just make sure you leverage all those subnet ids across the AZs; do you do anything more? We’re doing the same with respect to looking up the subnets via tags.
I’m thinking now maybe my devs haven’t been iterating through the list of subnets, and that’s probably why we’re exhausting the little address space we do have in one of the 4 subnets, which is our current issue
@RB - appreciate it; needed to spitball out ideas. Cheers. Think we’ve found some culprits.
ah yes that makes sense. yes we leverage all those subnet ids across the azs so amazon will balance between them
we dont do anything more but we use /18s and /20s so we havent run out of address space in our vpcs
we do have a balancing issue tho. we recently added a new public and private subnet in the 2nd az and now we have to re-apply the terraform that uses the data sources in order to take advantage of the new subnet
2020-08-26
Hi all, I am using this module https://registry.terraform.io/modules/terraform-aws-modules/atlantis/aws/2.23.0, thanks to @antonbabenko, for setting up Atlantis with Gitlab. It works fine when the load balancer is external, but as soon as I change it to be internal it is not reachable from the internet and therefore from Gitlab servers. Do you know if this module supports this, or what I can do to be able to use the internal ALB while staying reachable from gitlab servers / the internet? Many thanks in advance.
Isn’t that the whole idea of having a public vs. private load balancer?
HCS Azure Marketplace Integration Affected Aug 26, 15:38 UTC Investigating - We are currently experiencing a potential disruption of service regarding our Azure Marketplace Application offering. Our incident handlers and engineering teams are investigating this matter to provide a timely mitigation of impact.
As our teams work on this issue we will provide further ongoing information as it becomes available. If you have questions or are experiencing difficulties with this service please reach out to our customer support team (Needs…
HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.
HCS Azure Marketplace Integration Affected Aug 26, 16:55 UTC Update - We have confirmed a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide further updates soon.
As our teams work on this issue we will provide further ongoing information as it becomes available. If you have questions or are experiencing difficulties with this service please reach out to your customer support team….
HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.
Hello, I am using terraform with azure and I have a strange issue: I create a cluster with two node pools and everything works, but when I remove one of the node pools, the cluster gets recreated
v0.13.1 0.13.1 (August 26, 2020) ENHANCEMENTS: config: cidrsubnet and cidrhost now support address extensions of more than 32 bits (#25517) cli: The directories that Terraform searches by default for provider plugins can now be symlinks to directories elsewhere. (This applies only to the top-level directory, not to nested directories…
The cidrsubnet and cidrhost functions were limited to supporting 32-bit systems. This PR: updates the "github.com/apparentlymart/go-cidr" library, which was recently extended with Subnet…
Anyone work with SSM document using schema 2.x and get an error when associating them with an instance? I get the error
Error creating SSM association: InvalidDocument: Document schema version, 2.2, is not supported by association that is created with instance id
Oh, i think i need to use the targets property on aws_ssm_association
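something like this, for anyone hitting the same error (document and instance names hypothetical):
resource "aws_ssm_association" "example" {
  name = aws_ssm_document.example.name

  # schema 2.x documents want targets instead of instance_id
  targets {
    key    = "InstanceIds"
    values = [aws_instance.example.id]
  }
}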
HCS Azure Marketplace Integration Affected Aug 26, 20:04 UTC Update - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.
If you have questions or are experiencing difficulties with this service please reach out to your customer support team.
IMPACT: Updating, creating, or deleting HashiCorp Consul Service on Azure clusters may be delayed…
HashiCorp Services’s Status Page - HCS Azure Marketplace Integration Affected.
HCS Azure Marketplace Integration Affected Aug 26, 20:55 UTC Resolved - Our Azure partners have mitigated the issue and we are seeing recovery from our tests. We are considering this incident resolved. If you see further issues please contact HashiCorp Support.Aug 26, 20:04 UTC Update - We are continuing to see a disruption of service regarding Azure Marketplace Application integration. Our incident handlers and engineering teams are continuing to address this matter with our Azure partners and hope to provide another update soon.
If…
heads up! we’ve made some significant improvements to the terraform-null-label module and its handling of context passing between modules. we’ve introduced a new concept of context.tf, which is the standard set of variable inputs for each module. we’ll be slowly rolling this out to all our modules. this change preserves backwards compatibility, but adds the ability to very tersely pass context between our modules. this change was spearheaded by @Jeremy G (Cloud Posse) who updated it.
now with the [context.tf](http://context.tf)
we can plop that into every module so we keep our variables consistent. as many of you have probably realized, we inconsistently support things like environment
, label_order
, etc. because it was a manual, error prone process of copying distributing them to all the modules.
Here’s an example of the file: https://github.com/cloudposse/terraform-null-label/blob/master/exports/context.tf
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
You can start using this today in your projects.
Note, this also adds a this
module
so you can pass module.this.context
between modules.
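A minimal sketch of what that usage might look like from a calling configuration, assuming a module that has already adopted context.tf (the module source here is illustrative):
module "s3_bucket" {
  source = "cloudposse/s3-bucket/aws" # illustrative module source

  # module.this is the label instance that context.tf provides;
  # passing its context replaces repeating namespace/stage/name/etc.
  context = module.this.context
}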
@Joe Hosteny we also fixed the issue you raised.
The delimiter is calculated with the coalesce function, which considers an empty string and a null string to be the same thing. This prevents the delimiter from being set as "", which is us…
2020-08-27
Terraform Cloud scheduled maintenance THIS IS A SCHEDULED EVENT Aug 30, 07:30 - 08:30 UTCAug 27, 10:28 UTC Scheduled - We will be undergoing a scheduled maintenance to make some network upgrades to Terraform Cloud. We don’t anticipate any customer-facing impact to our services during this window.
HashiCorp Services’s Status Page - Terraform Cloud scheduled maintenance.
Good morning, I am running into a dynamic grants issue with the S3 module version 0.17.1 and was wondering if anyone had any recommendations. I’m using Terraform v0.12.24. I have ACL set to private and do not wish to use grants at this time, so it should be defaulting to null.
Error: Unsupported block type
on .terraform/modules/terraform_s3_bucket/terraform-aws-s3-bucket-0.17.1/main.tf line 92, in resource "aws_s3_bucket" "default":
92: dynamic "grant" {
Blocks of type "grant" are not expected here.
looks like it’s happening to me on versions 0.17.0 and 0.16.0 as well. I probably did something wrong
These error messages are very cryptic. I ran into something similar using dynamic blocks for copy_action
for the AWS Backup service. In my case, the issue was that AWS provider 2.58
had been updated to include support for it, and even though my code was using 2.70
, I was seeing this error until I pinned the AWS provider version in code explicitly to 2.70
instead of saying >2.11
etc.
The reason I am telling you this is that your code could be correct, but maybe one of the provider versions is breaking it. I would be very interested to know how you fix this error in your case.
also, I saw this error with TF versions 0.12.19, 0.12.23, and 0.12.29
very interesting, I currently have my AWS provider set to v2.28.1
try setting it to something more recent, 2.70
for example
the provider changelog typically indicates when support for a particular feature was introduced. But testing it like this will save the trouble of going through the versions.
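In 0.12-era syntax, that exact pin goes in the provider block, something like this (version numbers illustrative):
provider "aws" {
  # exact pin instead of a loose constraint like ">2.11"
  version = "2.70"
  region  = "us-east-1"
}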
also I found this -> https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/issues/20
Hi there, We have a requirement to implement Bucket ACLs on a few buckets in S3 and have been using this module for other buckets we have created, so we'd like to keep some consistency if possi…
weird, I keep getting an error trying to terraform init --upgrade
with any of the newer providers.. I wonder if my Terraform version needs to be bumped up. Even if I put it back the way I had it, or just completely remove the requirement, Terraform still errors out. :confused:
Error: no suitable version is available
and my ~/.terraform.d/
directory only has a checkpoint_cache and checkpoint_signature file in it
No provider “aws” plugins meet the constraint “< 4.0, >= 3.0, ~> 2.0, ~> 2.0, ~> 2.0, ~> 2.0”.
The version constraint is derived from the “version” argument within the provider “aws” block in configuration. Child modules may also apply provider version constraints. To view the provider versions requested by each module in the current configuration, run “terraform providers”.
To proceed, the version constraints for this provider must be relaxed by either adjusting or removing the “version” argument in the provider blocks throughout the configuration.
Error: no suitable version is available
hmm looks to be only for this one repo. Terraform init still works in my other repos.
fixed it.
hey..awesome ! what was the issue ?
I ended up pulling in a different TF module for S3 and bumped the version to the latest which sourced in the AWS 3.0 provider. So when I ran terraform providers
I saw the 3.0 being sourced from that S3 module.
still haven’t fixed the initial issue I had with the original module
Another neat one:
Terraform versioned modules in AWS CodeCommit with federated access? Has someone had success with it?
Hey, all! I see in the Terraform documentation they recommend using the provider in the module name, for example terraform-aws-xxxx
for a module that contains all aws resources. I’ve got a situation where I technically have two providers in a module of mine - helm and aws. Are there guidelines on this? Or do I need to split this up as
terraform-aws-helm-xxxx
?
is that the terraform recommendation? or a suggestion?
not sure there is an official recommendation
there is no “rule” here, just guidelines. When I do something like this I do use the terraform-prov1-prov2-xxx
syntax
The recommended naming that you’re looking at is a requirement for releasing on the terraform registry. If you don’t plan on doing that, then naming is completely up to you. Example: we use that naming for public modules, but for internal modules we use tf-<thing being managed>. Helps people know what is public and what is not
yeah it would be going into a registry (terraform enterprise…or is it called cloud now) which is why I’m trying to figure it out
Good to know, thanks @Steven. I’ve never published one on the registry, so that was news to me
right, I should’ve mentioned that this was for a registry
this might sound like an odd request, but I am attempting to automatically configure kubectl after deploying an EKS cluster. I have captured the generated cluster name as a Terraform output and I want to feed that to kubectl as an input.
I was thinking of using a local-exec provisioner to configure kubectl in the script, but I know that using provisioners is frowned upon.
You could use local_file
and make sure the directory that the file gets added to is set up in your KUBECONFIG
environment variable
@Tom Howarth This is how I’m doing this on a project:
resource "null_resource" "eks_kubeconfig" {
triggers = {
eks_endpoint = module.eks_cluster.eks_cluster_id
}
provisioner "local-exec" {
command = "aws eks --region ${var.region} update-kubeconfig --name ${module.eks_cluster.eks_cluster_id}"
}
}
That works too, though I prefer to keep different files when possible so $HOME/.kube/config
file doesn’t get super cluttered
aws eks update-kubeconfig
has a --kubeconfig
flag to specify the file that gets modified/created
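e.g. something like this, with hypothetical names and paths:
aws eks update-kubeconfig \
  --region us-west-2 \
  --name my-cluster \
  --kubeconfig ~/.kube/my-cluster.config

# then point kubectl at that file
export KUBECONFIG=~/.kube/my-cluster.config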
@roth.andy Ah TIL. I like that idea. This is my first major k8s project, so I haven’t hit that level of .kube/config clutter, but I can see that being annoying with a bunch of projects / environments.
so I was wondering if there was a better way of doing it.
anyone regularly using tflint? I’m surprised I hadn’t run across it sooner. I don’t have Sentinel right now as I’m on the free Terraform Cloud tier. tflint seems pretty cool to plug into my GitHub Actions checks and all.
pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform
is what I use frequently. includes tflint
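For anyone curious, a minimal .pre-commit-config.yaml wiring in those hooks might look like this (the rev tag is illustrative; pin to a real release):
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.31.0 # illustrative tag
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
      - id: terraform_tflint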
Very cool!!!! I just ran it in a cloudposse template I used and it flagged a module pinned at master, recommending better practice. Love it. Going to check out this repo too
first time I set up pre-commit successfully. Love it. Terraform markdown docs + formatting + whitespace trim now enabled.
I’m a fan of trying to do this in GitHub Actions when possible to eliminate any local dependency on someone, but I imagine it would be really easy to run the same triggers in GitHub
tflint and checkov are very useful
caught a lot of things before committing
How can I properly set my own CMK using the dynamodb module version 0.18.0
? I see the enable_encryption
parameter that I can set to true. That works. I want to specify a kms_key_arn
. I see it’s normally available through the resource, but when I try to add it, it says it’s an invalid parameter. When I look at the main.tf
in the repo, I see the server_side_encryption
block without the ability to add a CMK. I want to set up Terraform DynamoDB locking, so I’d like to use a CMK. Just curious if the module by default creates a separate key, uses a default one, uses the one associated with my S3 bucket, or what.. Thank you in advance!
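For context, the underlying aws_dynamodb_table resource does accept a CMK in its server_side_encryption block per the aws provider docs, so the module would need to expose something like this (the table values and KMS key resource here are hypothetical):
resource "aws_dynamodb_table" "example" {
  name         = "terraform-locks" # illustrative
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the key Terraform's DynamoDB locking expects

  attribute {
    name = "LockID"
    type = "S"
  }

  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.example.arn # hypothetical CMK resource
  }
}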
Wow. What a setup experience.
Check this out: https://www.gitpod.io/docs/self-hosted/latest/install/install-on-aws-script/
Super smooth docker container that downloaded all the required terraform internally and persisted to local volume when done + asked for input from user. Super impressed
Documentation site for Gitpod.
If anyone is curious how much effort goes into code review for new modules, check this one out! https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/pull/1
what: Initial Version of Module. why: Needed a way to setup AWS MSK via terraform.
AWS recently released Private CA sharing.. I wonder if this would work with MSK now.. Crazy how you’re required to use Private ACM for client auth.
Heh, we just built this internally lol
When writing code, if you notice that you are copy-pasting from place to place, you think of the DRY (“don’t repeat yourself”) principle and move the shared code into a class or function so you don’t need to repeat it again and again. But what if you want to apply the same principle in Terraform? Here are some tips that I follow when I write a Terraform script; check it out and let me know what you think: https://www.dailytask.co/task/tips-that-i-follow-when-i-provision-my-aws-resources-using-terraform-ahmed-zidan
Tips that I follow when I provision my aws resources using terraform written by Ahmed Zidan
Nice work. This is definitely an issue with a lot of terraform i see
I’m surprised you didn’t mention the terraform registry. The registry modules, like the vpc module or the cloud posse modules, have reduced our code a lot
But it’s better to use workspaces to divide envs
I’ve recently moved away from using workspaces to divide my environments after about 4 years.. I was using workspaces to create resources based off of my AWS accounts (dev,staging,prod). Things started to break whenever I wanted to introduce a sub-environment into the mix (e.g. qa, aat, uat). Maybe I was using workspaces too broadly? Anyway, I’ve migrated to environment specific tfvars recently and so far it seems to be working better for me and has reduced a lot if not most of the complex variable maps I used to have.
I agree. I don’t like using workspaces when I could use a module with different tfvars input or a simple module reference instead
I didn’t get the point of the sub-environment. I like to avoid having a directory per env so I don’t duplicate my code and directory management. Could you provide an example where workspaces don’t work?
@RB I am using modules with workspaces; I love the idea of having a state file per environment
Read my example. If you use workspaces to separate your code based on your AWS accounts (or VPCs, if you’re doing that) by dev/staging/prod, go deploy an MSK (Kafka) cluster or something. Now try to deploy another cluster for a QA or UAT environment. Assuming you are using variable maps per environment/workspace, it gets annoying.
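For the tfvars-per-environment approach mentioned above, the workflow is roughly this (file names illustrative):
# one var file per environment instead of one workspace per environment
terraform plan  -var-file=envs/qa.tfvars
terraform apply -var-file=envs/qa.tfvars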
@RB yes, online modules are the first place to look into it, check the standard that I follow I have mentioned that too https://www.dailytask.co/task/a-standard-that-i-follow-when-i-write-terraform-script-ahmed-zidan
A standard that I follow when I write Terraform script written by Ahmed Zidan
2020-08-28
gday everyone. I’ve got a quick question about the cloudposse module. I am looking to set up a static (restricted access) website, and decided to go down the s3 + cloudfront + ACM + Lambda@edge route (for authentication and basic routing). I was wondering if https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn does a lot of that heavy lifting already?
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
Yes it does
It does s3 and cloudfront but your lambda has to be created separately
yeah, okay, that makes sense. I can do the ACM and lambda separately. I’ll give it a shot and will look at the docs to see how to connect the lambda.
Thanks heaps for replying.
No problem at all. My company might be open sourcing their viewer response lambda as a module so that would make creating secure static sites even easier
Essentially it just adds headers to the lambda dynamically, from a terraform map
Each of our sites use a unique csp so each static s3 site requires a different lambda@edge
ohhh, that’d be nice. I just need the lambda@edge to do 2 things, basic auth + a redirect rule (*/ -> */index.html)
fairly easy to throw that into a python file.
You may also want to check out AWS Amplify Console which is basically managed S3+CloudFront+Lambda@Edge+CI/CD from AWS.
It supports static websites and password protection
Build, deploy, and host static web apps and websites using frameworks like React, Gatsby, Vue, Angular, Ember, Jekyll, and Hugo.
oh wow, i did not know of the above ^
i wonder what the pricing differences are between amplify vs s3+cf+lambda
fyi if you are going to stick with the lambda@edge method, be sure to check out the other tf modules in the area.
https://github.com/search?o=desc&q=lambda+edge+terraform&s=stars&type=Repositories
Hm… I did not compare pricing at all. Even if it’s more expensive, setting up the per-PR preview is reason enough to pay for it
Also, you don’t really get CloudWatch Metrics with Amplify Console, which is a shame. And you don’t get access to any of the underlying AWS resources. There’s a feature request open for metrics IIRC
hmmm, looks interesting. I only want to host a docs site, so I am just looking for the simplest option possible. I got cloudfront + s3 working, tho configuring the cloudfront resource is a bit of a pain, so I am looking at a module to simplify that process a bit.
you already found one in the original post regarding both cf and s3
yeah, which led me here.
I’ve got a question about for_each
. What I am trying to do is to create an SSL certificate via ACM. Following the example from hashicorp, https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/acm_certificate_validation, they have DNS validation and have an example resource. However, when grouping all the resources together, I get the (somewhat famous) error: The "for_each" value depends on resource attributes that cannot be determined until apply
.
Are there any common practices to get around this issue, or should I just do a terraform apply -target && terraform apply?
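The usual workaround is to make the for_each keys values that are known at plan time; a sketch following the linked docs pattern, with a hypothetical domain and zone variable (the error typically appears when the keys themselves are computed):
variable "zone_id" {} # hypothetical hosted zone ID

resource "aws_acm_certificate" "cert" {
  domain_name       = "example.com" # a literal, so the keys below are known at plan time
  validation_method = "DNS"
}

resource "aws_route53_record" "validation" {
  # keyed by domain name (known), not by computed record attributes
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options :
    dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = var.zone_id
  name    = each.value.name
  type    = each.value.type
  records = [each.value.record]
  ttl     = 60
}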
here you go, not yet updated for tf 0.13 though, nor aws provider v3… https://github.com/plus3it/terraform-aws-tardigrade-acm/blob/master/main.tf
Contribute to plus3it/terraform-aws-tardigrade-acm development by creating an account on GitHub.
This seems very hacky. Is this due to the aws provider v3 update?
it’s due to funny business in tf 0.12 and v2 of the aws provider
and because it is attempting to address create and destroy lifecycles, multiple SANs, etc… modules are hard
i haven’t gotten around to updating it for tf 0.13 and v3 of the aws provider, which i think will reduce quite a bit of the hackiness, but will also be totally backwards incompatible
Thanks heaps.
Hey there SweetOps community!!! We appreciate a :+1: if someone came across this :terraform: tf-0.13.1
issue -> https://github.com/hashicorp/terraform/issues/26038
Your collaboration would be truly useful
Thanks in advance!
Terraform Version $ terraform version Terraform v0.13.1 + provider registry.terraform.io/-/aws v3.4.0 + provider registry.terraform.io/hashicorp/aws v2.70.0 Terraform Configuration Files Terraform …
We can now accept PRs for 0.13 and test them
2020-08-29
Hoping this is the correct channel. I’m trying to figure out how to get a json object with jsonencode
from a list of maps. But I’m not getting the desired outcome.
What I need the json to be is…
{
"bob.role":"arn:aws:iam::1234:role/access-role-bob",
"bob.path":"*",
"dave.role":"arn:aws:iam::1234:role/access-role-dave",
"dave.path":"prod/files"
}
Here’s an example list
locals {
users = [
{ username : "bob", path : "*" },
{ username : "dave", path : "prod/files" },
]
secret = [
for index, data in local.users :
map(
"${data["username"]}.role", "arn:aws:iam::1234:role/access-role-${data["username"]}",
"${data["username"]}.path", "${data["path"]}"
)
]
}
And I get the correct map that I want for each, but it’s still a list(map). I’ve tried merge
and a few other functions.
$ terraform console
> local.secret
[
{
"bob.path" = "*"
"bob.role" = "arn:aws:iam::1234:role/access-role-bob"
},
{
"dave.path" = "prod/files"
"dave.role" = "arn:aws:iam::1234:role/access-role-dave"
},
]
if you’re a little flexible on the data structure, you can get there pretty easily using the “map” syntax of the for
loop, i.e. { for ... }
(instead of the list syntax you have now, [ for ... ]
)
something like this:
secret = {
for data in local.users : data.username => {
role = "arn:aws:iam::1234:role/access-role-${data.username}"
path = data.path
}
}
ought to give you a data structure like this:
{
bob = {
path = "*"
role = "arn:aws:iam::1234:role/access-role-bob"
}
dave = {
path = "prod/files"
role = "arn:aws:iam::1234:role/access-role-dave"
}
}
so you can then get the role or the path for any given user, using the username as the map lookup, e.g. secret["bob"].path
and secret["bob"].role
Great! Thank you. Made some progress with this and I’m confident it will work.
Ok, I got what I wanted. By using the original logic, I used the ...
operator like so.
merge(local.secret...)
And I got… (pun intended?)
{
"bob.path" = "*"
"bob.role" = "arn:aws:iam::1234:role/access-role-bob"
"dave.path" = "prod/files"
"dave.role" = "arn:aws:iam::1234:role/access-role-dave"
}
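And to close the loop on the original question, that merged map can then be fed to jsonencode, e.g.:
output "secret_json" {
  # renders the flat map above as a single JSON object
  value = jsonencode(merge(local.secret...))
}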
Hello, I have a module that creates an SSL certificate. Now, I am running in a different region to us-east-1
, so I declared two provider "aws"
blocks, and the us-east-1 one has an alias.
Things are looking fine until I run terraform plan, and I get this error,
To work with module.docs-site.aws_acm_certificate.ssl its original
provider configuration at
module.docs-site.provider["registry.terraform.io/hashicorp/aws"].us-east-1
is required, but it has been removed. This occurs when a provider
configuration is removed while objects created by that provider still exist in
the state. Re-add the provider configuration to destroy
module.docs-site.aws_acm_certificate.ssl, after which you can
remove the provider configuration again.
Is there a way for me to pass the provider reference that is defined in the root module into the child module?
I have checked my state file and it’s empty, which is quite confusing, because I’ve just restructured my code but haven’t run anything, so I didn’t expect to have objects created by that provider still exist in the state
.
Have you checked out these docs https://www.terraform.io/docs/configuration/modules.html
Modules allow multiple resources to be grouped together and encapsulated.
You can pass the provider to the module; you’ll need to use an alias when using two aws providers
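A minimal sketch of that, reusing the docs-site module name from the error message above (the source path and regions are hypothetical):
provider "aws" {
  region = "eu-west-1" # hypothetical default region
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1" # ACM certs for CloudFront must live here
}

module "docs-site" {
  source = "./modules/docs-site" # hypothetical path

  # map the child module's aws provider to the aliased one
  providers = {
    aws = aws.us-east-1
  }
}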
Turned out I needed to do something similar to the module "tunnel"
example. Thanks for pointing me in the right direction.
2020-08-30
hey, when I rename the local reference to a module, it destroys the stack previously referenced and builds a new one. In this scenario, I am wondering:
• is there a way for me to migrate the reference to the new name? The resources are the same; only the references in the state file change.
• when terraform deletes and creates the same resource (e.g. an S3 bucket with the same name), it throws an error. Is there a way to handle this situation nicely?
- Sounds like
terraform state mv
- If it really deletes it first, this should be fine. What’s the error? If it’s trying to create the new one first and failing because of the old one, terraform has a lifecycle hook to control that:
lifecycle { create_before_destroy = true }
check if you accidentally have this set somehow
the problem is that when you change the resource name, you break any ability for terraform to track the dependency between the “old” resource and the “new” one. so you have a race condition on the destroy/create. i.e. it is not destroying and then creating the s3 bucket. terraform works in parallel, so it is destroying and creating the s3 bucket, at the same time
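For the rename itself (the first bullet), the move is one command per address, e.g. with hypothetical module names:
# rename module "site" to module "docs_site" in the state only
terraform state mv 'module.site' 'module.docs_site'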
Thanks James and Loren. I’ll try terraform state mv
next time.
Terraform Cloud scheduled maintenance Aug 30, 08:00 UTC Completed - The scheduled maintenance has been completed.Aug 30, 07:00 UTC In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.Aug 27, 10:28 UTC Scheduled - We will be undergoing a scheduled maintenance to make some network upgrades to Terraform Cloud. We don’t anticipate any customer-facing impact to our services during this window.
warning: the combination of terraform cloud + terraform 0.13.0 has completely deadlocked one of our workspaces:
- one of my team upgraded to terraform 0.13.0, applied a change
- now nobody can touch that state with 0.12
- BUT somehow the state has something inside that 0.13.0 doesn’t like:
Error: Invalid resource instance data in state
Instance module.public_api01.aws_instance.instance data could not be decoded from the state: unsupported attribute “network_interface_id”.
- https://github.com/hashicorp/terraform/issues/25752 - previously the only fix was to manually edit the state back to say 0.12, but manually editing this state file is not possible in terraform cloud (
state push
with 0.13 will override the terraform version to 0.13, and 0.12 refuses to push)
…but now after writing all this I see that a fix was included in 0.13.1
Terraform Version v0.13.0-rc1 Although its also being reported with v0.12.29. Terraform Configuration Files main.tf terraform { required_providers { aws = { source = "hashicorp/aws" versi…
still, terraform’s handling of upgrades is really obnoxious IMO
working in a team, if anyone uses a newer version, everyone is forced to upgrade and there’s often no way back
@james highly recommend a strict pin of the terraform version, to prevent such accidental upgrades… put this in your root config:
terraform {
required_version = "0.13.1"
}
Yep. My team does this with all projects. This combined with Terraform installed using asdf and having a .tool-versions
file makes things pretty painless
Extendable version manager with support for Ruby, Node.js, Elixir, Erlang & more - asdf-vm/asdf
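e.g. a .tool-versions at the repo root (version illustrative):
# asdf reads this file and selects the pinned terraform automatically
terraform 0.13.1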
2020-08-31
hello, when writing a composite module for terraform, is it worth splitting logical components into separate files? My main.tf
is growing a bit, and I was wondering if there are best practices around this scenario. Thanks in advance.
I always structure my modules the same as my main terraform
main.tf
variables.tf
outputs.tf
iam.tf
etc
makes it a lot easier to manage
That’s what I did with one of my projects. I think there’s a bit too much going on at this point. For this new project I think we’re going to stick to main.tf containing most of the resources (vpc, iam, etc…), whereas any ecs app with r53 entries, its dependent ecr, etc. would go under app1.tf or example.tf.
I see, thanks for the inputs.
My initial thought was to split it by components, such as a lambda, an archive resource, and the iam role for the lambda. Together these make up a standalone part of the application, and it doesn’t depend on other components.
From what’s mentioned above, it seems it is preferred that the resources are grouped by type, eg iam, s3 etc?
Yeah, I typically group by logical types (either application layers such as secrets.tf, network.tf, data.tf or AWS services such as route53.tf, kms.tf, etc.) and had success with that.
This is one of the places where TF doesn’t have much structure or convention and it’s painful.
I do the logical types too, but I keep them grouped by “resource affiliation”, meaning if I have an instance.tf file and it needs an instance profile, then those roles and policies will live there rather than in the iam.tf file
I will treat iam.tf as a global group of resources that are needed for the whole project to work
Yeah similar thought process for myself as well. Then more generic things that are used across various resources / modules would go into the iam.tf or other service.tf files.
for anything larger I don’t use any main.tf anymore tbh.
That’s probably a sign it needs to be organized a bit better though by me.
On those types of plans I tend to do stuff like
backend.tf
iam.tf
ec2.tf
… when I can’t use a module. I’d rather organize by types of content in that case.
For a best-practice layout for a “root master module”, I think Erik has some excellent root module repo layouts that are pretty solid and that I’d look at if I were building a new project as well.
Thanks for the insights. I’ll go have a look at the root master module
mentioned.
I keep having issues on a module I pulled in with the “failed to find provider” error due to the new source path stuff in 0.13. Anyone have a quick fix for this besides the upgrade command? I just need to tear some stuff down, but dependent modules…
Error while installing hashicorp/template v2.1.2: after installing
registry.terraform.io/hashicorp/template it is still not detected in the target directory; this is a bug in Terraform
terraform {
required_providers {
template = {
source = "hashicorp/template"
version = "~>2.1.2"
}
}
}
Can always just make a new branch in git, use the upgrade command, steal the provider code, and then reset.
hmm. i think i tried that already and still failed. I converted all the code to the new format, still failed.
I just grabbed the docker-terraform repo and built a local terraform cli container. Doing init with this succeeded without error, so I’ll try this instead. Some issue with docker vs local i think
You might try purging the local .terraform
folder if it has references to the old provider, or perhaps your terraform.d
settings if you think it’s the local profile hitting a snag. I hit that issue with one of my 0.12
configs.
I thought the template provider was deprecated?
(and replaced by the templatefile
function)
“This provider has been archived. Please use the Cloudinit provider instead.”
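For reference, a minimal sketch of the replacement function (template path and variables are illustrative):
locals {
  # templatefile() renders a template file with the given variables,
  # replacing the archived template provider's template_file data source
  rendered = templatefile("${path.module}/templates/init.sh.tpl", {
    cluster_name = "example"
  })
}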
Has anyone kicked the tires on Terraspace at all? https://terraspace.cloud/
The Terraform Framework
This new, opinionated Terraform wrapper / framework just launched: https://terraspace.cloud/
I wouldn’t use it, but would be interested in hearing from others if they would.
Why wouldn’t you use it? Asking for a friend.
It looks like the opinion is that it’s written in Ruby and lacks support for Windows systems