#terraform (2020-04)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2020-04-01
FWIW on my small team (< 5 engineers) we use mozilla’s sops with great success.
For ages we relied purely on the PGP feature, but recently switched to KMS and it works great. We also use it in ansible using a custom plugin.
I believe our technique meets all your requirements. You can only store json or yaml data, but we get around that by wrapping blobs (pems, etc) in json/yaml and then shovelin…
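A minimal sketch of that wrapping approach (file names and the KMS key ARN below are purely illustrative):
# wrap a PEM blob in YAML so sops can encrypt it
cat > certs.yaml <<EOF
server_key: |
$(sed 's/^/  /' server.key)
EOF
# encrypt with a KMS key instead of PGP
sops --encrypt --kms arn:aws:kms:us-east-1:111111111111:key/00000000-0000-0000-0000-000000000000 certs.yaml > certs.enc.yaml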
Curious how others are feeding output from terraform deployments into their pipeline as code
You mean chaining it ? What I used was to store certain results in SSM, everything else can read from that.
I know of SSM but haven’t used it, my assumption is that consul would be the non-cloud specific alternative to it right?
correct
Thanks @maarten, I appreciate the quick feedback
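For reference, a minimal sketch of that SSM chaining pattern (parameter names and resource names here are made up): one stack writes the value, the other reads it.
# Stack A: write a result into SSM Parameter Store
resource "aws_ssm_parameter" "alb_dns_name" {
  name  = "/shared/app/alb_dns_name"
  type  = "String"
  value = aws_lb.app.dns_name
}

# Stack B: read it back
data "aws_ssm_parameter" "alb_dns_name" {
  name = "/shared/app/alb_dns_name"
}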
Hi all. Hope you are all well during the covid outbreak. I’ve been attempting to use this great terraform module https://github.com/cloudposse/terraform-aws-jenkins
However my use case is slightly different from how this module has been set up. At the moment Jenkins is running on Elastic Beanstalk with public-facing load balancers. I want these load balancers to be private-facing, only accessible by VPNing into the specific VPC it is running in
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
i.e. place the load balancers on a private subnet per region and each instance on a private subnet per region
wondering if anyone had done this before with this module and how they got it to work?
@Aaron R take a look at this example https://github.com/cloudposse/terraform-aws-jenkins/blob/master/examples/complete/main.tf#L45
I’m basing it off this
the complete
example gets automatically provisioned on AWS when we run tests
so I’m basing it off this
however I presume I need to create additional private subnets to place the elastic load balancers
as I also want the elastic load balancers to be private
loadbalancer_subnets = module.subnets.private_subnet_ids
then place it in private
can the load balancer and instance be placed in the same subnet though?
you can place it in any subnet you want
depends on your use-case
if you need to place LBs in separate subnets (for any reasons), you can create more private subnets, using any subnet strategy you want
so I’ve used this complete main example to stand up a working Jenkins instance on AWS (public-facing load balancers). I then modified it to place the load balancers in the same private subnet as the instances. However, on the Elastic Beanstalk console, attempting to set the application as internal fails (even though everything is private). I connected a client VPN to this VPC and attempted to connect via the load balancer - couldn’t manage to do it. This led me here: https://forums.aws.amazon.com/thread.jspa?messageID=415184
thanks for your help by the way
I think it’s something to do with trying to set the Elastic Beanstalk application from public to internal
as it’s set to public by default
when then attempting to set it as an internal app (once terraformed up) it fails to do so
therefore I cannot hit the load balancer when in a VPN attached to the VPC but I can directly hit the instance
Anyone else with this pet peeve? When using a data source, for example to get the current IP addresses of elastic network interfaces attached to an NLB… any resource which then makes use of this data will always show a diff (“known after apply”) even though the data is actually the same every time… Any way around this except just not using it as data, but converting it to a static variable after the first creation is done? …
out of curiosity, are you looking up an EIP you generated and assigned to the NLB, or is it the managed IPs that AWS assigns to the NLB?
If the latter, I wonder if the provider can distinguish the difference for that resource and assumes that the IP can change (similar to how it does in ALBs) and just doesn’t keep the IP static
since the resource itself is known to have changing IPs out of band of the provisioning process
I let AWS assign those IPs in this case.
The problem is the resource (aws_lb
) doesn’t supply the data at all - so I have to go via the data source.
But you are on to something.
or hmm maybe not.
yeah if you’re letting AWS assign them, there’s no guarantee they’ll remain that IP, I don’t think
The problem is the internal IPs…
I’m not sure I can explicitly decide them?
maybe I can create two elastic interfaces. let me check
no, they can only be auto-assigned it seems.
Amazon creates those elastic network interfaces, and “owns” them, but they are created “in” my account
yeah I think you’re right on NLB, no ability in API to specify anything beyond the subnet
I can live with this… but anyone else is going to freak out when things show up as changes.
I wonder if you can use lifecycle ignore_changes on that scope
I wonder if maybe it’s mostly depends_on which makes this an issue…
Currently depends_on for data resources forces the read to always be deferred until apply time, meaning the results are always unknown during planning. depends_on for data resources is therefore useful only in some very specialized situations where that doesn't cause a problem, as discussed in the documentation.
Yeah, it’s that.
And I use depends_on here for a good reason…
Because a data source cannot be “optional” I have to make sure the aws_lb resource gets created first. So data source is used second. Otherwise, the first terraform run won’t work fully. dangit.
Background Info Back in #6598 we introduced the idea of data sources, allowing us to model reading data from external sources as a first-class concept. This has generally been a successful addition…
ah boo
well, I normally wouldn’t suggest this but in this kind of scenario you could replace your eni/eip datasource with an external datasource. Write a botoscript or something to lookup the IP and handle it gracefully.
because it seems more of an issue with the provider and resource than with the API itself
Amazon needs to add these IPs in their API responses in order for Terraform to be able to fix it properly I guess.
Because going via the network interface list is a bit of a kludge.
I’ll see if I can do some coalesce trick with an external program like you said.
I have the same pain with ecs task definitions… because they can be updated from terraform - or from CodeDeploy …
hope this data source issue on plan can be fixed soon .
yeah, for that it looks like airship.tf has a nice approach with drift management via lambdas
And my current way of running terraform doesn’t even support executing an external program right off the bat … since I use AWS profiles.
But I tested it, and it works.
output "nlb_ingress_lb" {
  value = merge(
    aws_lb.nlb_ingress,
    {
      proxy_ips = coalescelist(
        flatten([data.aws_network_interface.nlb_ifs.*.private_ips]),
        jsondecode(data.external.get_nlb_ips.result.private_ips),
      )
    }
  )
}
airlift yeah this one had some nice things. “cron” jobs etc…
# lookup_type sets the type of lookup, either
# * lambda - works during bootstrap and after bootstrap
# * datasource - uses terraform datasources ( aws_ecs_service ) which won't work during bootstrap
variable "lookup_type" {
default = "lambda"
}
clever indeed.
CFN stacks with custom resources can help sometimes. I used one here to do ECS deployments that don’t interfere with Terraform. It’s just using ECS rolling updates, not CodeDeploy though, but we might extend it to support CodeDeploy. https://github.com/claranet/terraform-aws-fargate-service (it’s in early stages of being taken from a production system and turned into generic module)
Terraform module for creating a Fargate service that can be updated with a Lambda function call - claranet/terraform-aws-fargate-service
Also you can use 2 separate stacks and output data source values from one, use it in the other, if you want to “save” values. Not always ideal of course.
Yep, considered the two stacks..
I adapted my module to use the tricks from the airlift.tf modules now and have solved the task definition issues Now I’m building a similar lambda to solve the NLB lookup .
does airlift require setting the lookup variable to one value during bootstrap, and then another afterwards?
i mean airship (guess you did too?)
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service
Has anyone seen Terraform useful for a situation where you want about 500 non-technical users create their own prepackaged resources in the cloud? For example, everyone gets the same account setup with predefined VM instance? My instinct is that Terraform is not the best tool for this, but I’ve seen people start with the idea that Terraform could run the backend.
If it’s non-technical users, does it make sense for them to do the deployment themselves? From an Administrator POV, Terraform would be perfect to create 500 predefined instances, for example.
We wouldn’t want them to do the deployment, but they would sign up for the package that would trigger a deployment. So think of a case where a member of the public signs up for a service that deploys a server for their use. The end user would be a data scientist. So it may be 200. It may be 500. It may be a 1000.
I’m asking for a general pattern, but the couple of use cases I’ve seen is with an organization that wants to allow teams of data scientists from the public to use a common dataset with jupyterlab instances stood up, etc.
Ah ok, in that case Terraform could for sure work out. In AWS’ case I think CloudFormation would maybe be more straightforward as no tooling would be necessary.
What’s the pattern you would use for Terraform? Would it be something like a webhook trigger on Jenkins to run a Terraform module that accepts different inputs? Is that a better option than, say, using a language SDK to create a GUI?
I was on a demo call with Scalr a while back for a different use case. I understand they use Terraform on the backend, but I don’t see how Terraform can be used in a normal GitOps way for them. Do you know what pattern they use?
@Joe Presley I’d leave Terraform out of it as it’s not an API, and with many automated deployments error handling of Terraform will be an issue. Maybe you can look into using the provided IaaS for the cloud provider and its native deployment mechanisms like CloudFormation.
That makes sense. Thanks for the feedback.
2020-04-02
Does anyone have an example of using log_configuration
with the terraform-aws-ecs-container-definition module? I’m trying to update from 0.12.0 to latest (0.23.0) and it looks like the logging configuration has changed. But I can’t find an example of how to implement it now.
"logConfiguration" = {
"logDriver" = "awslogs",
"options" = {
"awslogs-region" = var.region,
"awslogs-group" = "/${terraform.workspace}/service/${var.service_name}",
"awslogs-stream-prefix" = var.service_name
}
}
oh sorry, I didn’t see the terraform-aws-ecs-container-definition module part
nm that
@sweetops I’m using v0.23.0 and the following works for me:
locals {
  log_config = {
    logDriver = "awslogs"
    options = {
      awslogs-create-group  = true
      awslogs-group         = "${module.label.id}-logs",
      awslogs-region        = var.region,
      awslogs-stream-prefix = module.label.id
    }
    secretOptions = null
  }
  # ...
}

# ...
log_configuration = local.log_config
You having trouble beyond that?
Do I need to escape any characters in the following terraform?
coredns_patch_cmd = "kubectl --kubeconfig=<(echo '${data.template_file.kubeconfig.rendered}') patch deployment coredns --namespace kube-system --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations", "value": "eks.amazonaws.com/compute-type"}]'"
I get the following error and I’m not sure why it’s asking for a newline
Error: Missing newline after argument
on variables.tf line 101:
(source code not available)
An argument definition must end with a newline.
nevermind, got it
Has anyone here ever had a lambda in one account that needed to be triggered by SNS in another account and used the aws_sns_topic_subscription
resource?
I keep getting an error on plan that the SNS account is not the owner of the lambda in the lambda account
Maybe https://jimmythompson.co.uk/blog/sns-and-lambda/ is helpful describing the permissions policy that will need to be setup in a cross account scenario ?
A guide on how to link together Lambda functions and SNS topics that belong in different AWS accounts.
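Roughly what the cross-account wiring looks like (a sketch; account IDs, names, and the aliased provider are placeholders): the Lambda account grants SNS permission to invoke the function, and the subscription is created against the SNS account.
# In the Lambda account: allow the topic in the other account to invoke the function
resource "aws_lambda_permission" "from_sns" {
  statement_id  = "AllowExecutionFromCrossAccountSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.handler.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = "arn:aws:sns:us-east-1:111111111111:my-topic"
}

# In the SNS account (note the aliased provider): subscribe the Lambda to the topic
resource "aws_sns_topic_subscription" "lambda" {
  provider  = aws.sns_account
  topic_arn = "arn:aws:sns:us-east-1:111111111111:my-topic"
  protocol  = "lambda"
  endpoint  = aws_lambda_function.handler.arn
}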
Question regarding launch templates, block devices and instance types: Do you always use the same device_name
for the root volume? Or change it based on instance type? For example, do you always use /dev/sda1
or /dev/xvda
?
changes based on instance type, and on OS
What was the terraform provisioner used to collect script output into state again?
thanks!
github has fallen over, in case that’s not responding for you… https://www.githubstatus.com/
Welcome to GitHub’s home for real-time and historical data on system performance.
that link is for an actual custom provider… there is also a module that abuses null resources terribly to get the outputs into state… but can’t find it right now cuz it’s on github
Yeah, I know they are all dirty hacks and all. I’m not super proud to have to be using it
or you can use the external provider, which is easy too, and pretty clean. only extra downside to me is that the stdout is not output which can make it hard to troubleshoot. i compensate by using the python logger to write to a file
I just need one dumb airflow fernet key
its such a small python script you can oneline the thing
I’ve been avoiding doing so for a while now but I’m seeing no good way to generate an airflow fernet key via tf and its such a short python script and all..
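For what it’s worth, a sketch of the external data source approach for the fernet key (the script name is a hypothetical helper; note a data source re-runs on every plan, so you’d still want to persist the key somewhere stable):
data "external" "fernet_key" {
  # generate_fernet_key.py is a hypothetical helper that prints JSON, e.g.:
  #   import json
  #   from cryptography.fernet import Fernet
  #   print(json.dumps({"key": Fernet.generate_key().decode()}))
  program = ["python3", "${path.module}/generate_fernet_key.py"]
}

# data.external.fernet_key.result.key can then be fed into the airflow config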
btw, for all you terraformers out there, I think you’ll dig what @mumoshu has been cooking up with #variant v2. He has created a declarative way to describe any “cli” tool using HCL that can be compiled down to a single binary. It means anyone on your team who can write HCL can help write the cli tool for your company (e.g. ./acme eks up
) Maybe you use #terragrunt today. Maybe you wish it did things that it doesn’t do today, but you’re not a go programmer. With #variant , that’s less the case because you can define arbitrary workflows like you would in a Makefile
(only with a Makefile
it doesn’t compile down to a binary for easy distribution) and it’s all in HCL so it easy for everyone to grok.
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
What! That’s so cool. Going to check it out for sure. And I can figure out a way to shoehorn some power shell in there
@mumoshu has joined the channel
2020-04-03
hey guys, how do I update map values while using them?
tags = { Department = "cabs" OS = "ms" Application = "cabs" Purpose = "app" } - when I use tags = var.tags, I just need to update a few values, such as OS = "ms" to OS = "linux", Purpose = "app" to Purpose = "db"
The merge function takes an arbitrary number of maps and returns a single map after merging the keys from each argument.
tags = merge(
  var.tags,
  {
    Purpose = "baz"
  }
)
Dear all, is there a way how to define locals for the file scope only? Not for the whole module?
negatory. terraform loads from all .tf files in the directory. only option is to create separate modules
okay, thanks!
https://github.com/cloudposse/terraform-aws-eks-cluster/commit/162d71e2bd503d328e20e023e09564a58ecee139 removed kubeconfig_path which I was using to ensure the kubecfg was available to apply the haproxy ingress after setup. Looking at the changes to the outputs etc I can’t see a way to still get my grubby mitts on the cfg file.
Use kubernetes provider to apply Auth ConfigMap …
kubeconfig path was used only for the module’s internal use, to get kubeconfig from the cluster and apply the auth config map
you can use the same command aws eks update-kubeconfig
as the prev version of the module was using
to get kubeconfig from the cluster
but outside of the module since it does not need it now, and using CLI commands in the module is not a good idea anyway - not portable across platforms, does not work on TF Cloud (you have to install kubectl and AWS CLI)
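i.e. something along these lines outside the module (cluster name and region are placeholders):
aws eks update-kubeconfig --name my-eks-cluster --region us-east-1 --kubeconfig ./kubeconfig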
Ta its all in geodesic so its been fine. I just tripped up leaving it on “rel=master” .
The .11 version had a kubectl file which applied the configmap auth; we were hooking on to that thread to bootstrap rancher.
yes, if you want to continue using the prev functionality, don’t pin to master
in fact, never pin to master, always pin to a release
(fewer surprises since any module will evolve and introduce changes)
yep yep; Totally down to me rushing to test migrating to the 0.12 version. I’ll file it alongside my decision to upgrade EKS to 1.15 on a Friday and it murdering the cluster in the eyes of Rancher (but running fine in EKS)
hey guys, I created an ami from an ec2 instance I had configured and am trying to now deploy a new ec2 via terraform using that AMI while also joining a managed AD domain in AWS. For some reason when I use my AMI and not say the default win2019 amazon AMI to build this EC2 it fails to join my domain upon creation. Any thoughts? Do I need to prepare the machine in any way prior to creating an AMI out of it so that Terraform can do the domain joining?
Need some help - we’re about to begin rolling out our modules to production and need to decide whether to break out the modules like CloudPosse has it (a module per github repo) OR just make a monolith repo that contains all of our modules (making it easier to develop against since all dependencies are in one repo). Likely using TF Cloud. I’m in the boat of break them out - my teammates are against me. Need help! lol Thoughts?
Break them out!
Agreed - now how do I defend it…
Tell them it’s how terraform module versioning and integration works.
You increase the reliability by separating, as you tag and release versions. This powers the terraform modules. You can’t git tag folders
I’ll probably write up a blog post on this soon. I ended up building an Azure DevOps pipeline that I just copy and pasted into different terraform module projects and it automatically versions them using Gitversion.
I showed this logic and my team is starting to do the same thing now, not a monolith repo. The very nature of terraform modules, and being able to version breaking changes and so on, really requires a single repo per terraform module.
I’m sure you can get some workaround, but it’s definitely an anti-pattern in my opinion to try to do a monolith repo for Terraform modules. And let me clarify, I mean this for Terraform Cloud. I’m doing all of my pipelines in Terraform Cloud and I don’t even know how it could work reasonably with their terraform module registry if you are trying to use a monolith repo. It’s not really even a question of an uphill battle, in my opinion; it’s a question of whether they’re trying to force an anti-pattern based on how they prefer to do it rather than how Terraform Cloud is structured to do it
Lastly I’ll mention that you should create your git repos from a simple yaml file and a Terraform plan. Then you won’t have any problems setting up and managing your repos. I have 90; I think the closest team has like 10 because they’re working on more traditional applications. Small-purpose repos just work better in my opinion with terraform as well as CI/CD tools
Hope that helps
Terraform docs back this up
you can publish a new module by specifying a properly formatted VCS repository (one module per repo, with an expected name and tag format; see below for details). The registry automatically detects the rest of the information it needs, including the module’s name and its available versions.
Cheers
Since we are using GitHub, they are arguing you could just pin it to a commit hash…
Of course, ie tags
I really appreciate the input! I’ll give you a summary in a bit of what happens…
Ask them how they plan on ensuring:
- one version of a module in one folder is easily versioned against another that has pinned changes from a different time.
- Terraform doesn’t download the entire repo for each module use… which it would do by default.
To me it seems they need to get comfortable with the terraform recommendation and not think “more repos, more problems”. It makes it easier.
And solve the management of them by spending an hour or so writing a yaml file that creates all your repos and even adds GitHub hooks for Slack and more. It’s actually not much extra effort at all, and you’re actually improving your source control system management too
Throwing everything into one unwieldy configuration can be troublesome. The solution: modules.
And if you choose to not use the terraform registry then you have to start managing each job’s GitHub authorization instead of having that handled by the built-in registry OAuth connection. It’s really, to me, a lot more work to try to not use their recommendation IF you are using Terraform Cloud.
for some background, @Mike Martin are you practicing monorepos for other things you’re doing?
(i find the monorepo / polyrepo argument to be more of an engineering cultural holy war with everyone on both sides wounded and fighting on principle)
also, one repo per module is not necessarily required either. we do that as large distributors of open source. there’s also a hybrid mode.
Terraform by HashiCorp
This talks about doing something like creating a terraform-networking
repo and then sticking modules for that in there. Since this is for your organization it can be highly opinionated and grouped accordingly.
100% agree. The folks I’m working with don’t do everything through code, and so using one git repo for everything miscellaneous has typically been the approach. They get nervous about new repos cuz they’re not used to it. Making it easier to manage and create them consistently I think is a big big win that helps that move forward
Hi, I’m trying to use a feature flag/toggle in Terraform with for_each
previously I have used a count
for the toggle but that does not work with for_each
Does anyone know how I can do this?
resource "google_project_service" "compute_gcp_services" {
for_each = {
service_compute = "compute.googleapis.com" # Compute Engine API
service_oslogin = "oslogin.googleapis.com" # Cloud OS Login API
}
project = google_project.project.project_id
# count = "${var.compute_gcp_services_enable == "true" ? 1 : 0}"
service = each.value
disable_dependent_services = true
disable_on_destroy = false
depends_on = [
google_project_service.minimal_gcp_services
]
}
@Patrick M. Slattery You need to create a condition and provide an empty value and a non-empty value. Here’s an example:
dynamic "action" {
for_each = var.require_deploy_approval ? [1] : []
content {
name = "ProductionDeployApproval"
category = "Approval"
owner = "AWS"
provider = "Manual"
version = "1"
run_order = 1
}
}
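Applied to the original question, one sketch is to filter the map down to empty when the flag is off (this assumes compute_gcp_services_enable is a bool; the `if` filter avoids the inconsistent-conditional-types gotcha):
resource "google_project_service" "compute_gcp_services" {
  # the `if` filter yields an empty map when the flag is off, so nothing gets created
  for_each = {
    for name, svc in {
      service_compute = "compute.googleapis.com" # Compute Engine API
      service_oslogin = "oslogin.googleapis.com" # Cloud OS Login API
    } : name => svc if var.compute_gcp_services_enable
  }

  project                    = google_project.project.project_id
  service                    = each.value
  disable_dependent_services = true
  disable_on_destroy         = false

  depends_on = [google_project_service.minimal_gcp_services]
}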
2020-04-04
Anyone else using pycharm and the HCL plugin for terraform work? Is there a way to solve the “Can’t locate module locally” error when using remote sourced modules from github? The only workaround I’ve found is to declare a ‘wrapper’ module around the remote module that passes vars to it … which is pretty silly. So if I try to directly use a CloudPosse module or even one from the official terraform registry, it can’t do any code completion or validations.
Failing that, what are people using these days for ‘full featured’ terraform authoring?
I know a lot of folks use IDEA, which is just a more generic PyCharm. I use VS Code, but as I move to TF12 it doesn’t handle the mix of 11 and 12 very well, and the experimental language server is not the best, but it’s been improving. I think the IntelliJ plugins are more featureful and supporting of 12. I could just have my stuff set up poorly, I haven’t spent much time on it.
Yes I used VS Code originally but TF12 broke it, and they never really got a good plugin after that
Yeah. Besides lack of time and just having everything already built in TF11 so path of least resistance, that’s probably the biggest thing holding me back is that it’s not well supported in my dev environment
hmm so IDEA is just the java IDE from JetBrains, they make PyCharm too. I’d have to assume its the same plugin in fact.
2020-04-05
is someone here working on this module ? https://github.com/bitflight-public/terraform-aws-app-mesh
Terraform module for creating the app mesh resources - bitflight-public/terraform-aws-app-mesh
Deploying Kube Apps via the terraform provider, a quick blog I whipped up. More on the beginner side of things but with some interesting tools and a pretty comprehensive example terraform example module for a full deployment of an AKS cluster with app deployment: https://zacharyloeber.com/blog/2020/04/02/kubernetes-app-deployments-with-terraform/
Kubernetes App Deployments with Terraform - Zachary Loeber’s Personal Site
My first introduction to finalizers were with installing rancher
OMG what a PIA to uninstall an app with dozens of finalizers
I am so glad to get validation from someone like yourself on those stupid things
2020-04-06
Hi, any idea why this https://github.com/cloudposse/terraform-aws-s3-bucket/blob/master/main.tf has only lifecycle rules for versioned buckets?
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Apr 15, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
@here https://github.com/cloudposse/terraform-aws-s3-bucket/pull/21 when you have a chance
This is a copy from : https://github.com/cloudposse/terraform-aws-s3-log-storage/blob/master/main.tf to enable additional options for the s3 lifecycle expirations
@Maxim Mironenko (Cloud Posse) is back, so we can take a look at some things
let’s move PR requests to #pr-reviews
ohh cool, sorry I did not know that channel existed
it’s pretty new
Another terraform best practices question.. i have a multi account setup wherein I create one environment type (dev, test, stage) per AWS Account. What could be a preferred strategy for storing remote state in S3 backend ? I am currently using 1 bucket /env to store state and the state bucket resides in the same account as the infrastructure being spun by terraform. Someone on the team recommended using one state bucket in the shared service account for all environments. They just want to be able to see all state files in one bucket. While it’s technically feasible, I am thinking this adds additional complexity (cross account setup) without any real benefit. What do folks think ?
I don’t know if its considered best practice but I use one bucket per account and use workspaces for environments. I switch between accounts with -backend-config
I’ve got a shell function to switch back and forth:
tfswitch() {
  export AWS_PROFILE=$1
  terraform init -reconfigure -backend-config=backends/$1.tf
}
I guess I’d also be hesitant to have the prod s3 state bucket share a space with the dev s3 state bucket and the IAM policies for both state writing and cross-account access get complex to manage
I agree… those are my concerns too.
additionally if you’re operating as a team you’ll still probably need separate dynamodb lock tables for each env
ok sure.. I haven’t used them thus far but I can take a look to see how that would work out.
I’m using one single s3 bucket, one single dynamo table for maybe 30 or so state files (30 “stacks”)
how do you use 1 dynamo table for multiple states?
is it just an item per state in the same table and you specify it explicitly in the backend config?
my backend.config is the same in ALL stack folders:
terraform {
  backend "s3" {
    profile        = "aws-profile"
    region         = "aws-region"
    bucket         = "s3-bucket"
    dynamodb_table = "my-infra-terraform-locks"
  }
}
then person B making a change to state B has to wait until person A making a change to state A is finished?
if they’re all using the same lockfile?
then I have a little Makefile
by which I do everything … make plan
for example.
That sets the key for each folder based on the current path I’m in:
so if I’m in repo/terraform/infra/battlefield/bl-prod-mesos
:
it will do:
echo key = "infra/terraform/b9d-infra/infra/battlefield/bl-prod-mesos/terraform.tfstate" > .terraform/backend.conf
I could also have hardcoded the above in all individual folders… but so far I never did. Makefile does everything the same every time
They just want to be able to see all state files in one bucket.
I question that.. why do you need to see the state files? Smells fishy.
We have one account per environment generally speaking, and one state bucket per account. Sometimes we end up with a second or third state bucket in account for a different client project, that’s usually just for small projects and only in the dev/test account.
i have a syntax question. How do I use a colon ‘:’ inside a conditional expression? I want to append a second variable (var.env) to the end of either option, like this: value = "${var.one != "" ? var.one:var.env : var.two:var.env}" - what am i missing?
regular string interpolation:
- 0.12:
value = var.one != "" ? "${var.one}:${var.env}" : "${var.two}:${var.env}"
ah perfect. thanks a ton!
0.11 looks off to me on first sight. I would create a local first with : and then reference it for value here
something like this:
locals {
  trueValue  = "${var.one}:${var.env}"
  falseValue = ...
}

value = "${var.one != "" ? local.trueValue : local.falseValue}"
thanks guys, i used v0.12
2020-04-07
question, I have a dir/repo setup like this:
/repo/
-- terraform.tfvars
-- .envrc
• .envrc has:
export TF_CLI_INIT_FROM_MODULE="git::https://github.com/***/terraform-root-modules.git//aws/tfstate-backend?ref=master"
export TF_CLI_PLAN_PARALLELISM=2
export TF_BUCKET="devops-dev-terraform-state"
export TF_BUCKET_REGION="us-east-1"
export TF_DYNAMODB_TABLE="devops-dev-terraform-state-lock"
source <(tfenv)
• terraform.tfvars has:
namespace="devops"
region="us-east-1"
stage="dev"
force_destroy="false"
attributes=["state"]
name="terraform-tfstate-backend"
But when i run terraform init
it complains about a non-empty directory. i am trying to learn this before jumping to geodesic, but i don’t know how to get the root module copied to my repo above. am i doing something incorrectly?
error:
❯ terraform init
Copying configuration from "git::https://github.com/***/terraform-root-modules.git//aws/tfstate-backend?ref=master"...
Error: Can't populate non-empty directory
The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.
^bump
can you post the specific error
looking at what you’ve supplied it doesn’t look like there’s anything for terraform to init
do you have a main or any other kind of .tf with provider resources defined?
@androogle i updated the original post
The behavior of terraform changed in 0.12
0.11 used to allow initialization of a directory with dot files
We have a long thread discussing this here: https://sweetops.slack.com/archives/CB84E9V54/p1582930086018800
I’ve run into the
The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.
issue trying to use terraform 12 on a geodesic container. Anyone know of a work around? For the meantime I’m going to create a temp dir and init into there and move the files the the dir I want to use.
@Erik Osterman (Cloud Posse) aww, damn. i was looking for anything in the changelogs yest about this specific behavior. maybe just init into a new dir, and then copy the tfvars/envrc into the copied-module dir?
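One rough workaround for that error (a sketch; the paths assume the layout described above): init the module into an empty temp dir and pull the files back.
tmpdir=$(mktemp -d)
terraform init -from-module="git::https://github.com/***/terraform-root-modules.git//aws/tfstate-backend?ref=master" "$tmpdir"
cp "$tmpdir"/*.tf .   # bring the root module files back next to terraform.tfvars/.envrc
rm -rf "$tmpdir"
# re-init in place; you may need to unset the from-module env var from .envrc first
terraform init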
How do you all work with a situation where you want a terraform module to spin up resources (instances in this case) in multiple regions?
I’ve got my terraform root module (remote state in s3) and i want to create an app server in several regions, ideally in one terraform invocation.
This is a design decision that comes down to what you want to achieve with multi-region support
One consideration is to move your S3 state out of AWS so that it’s decoupled from AWS failures.
Otherwise, if you want to use something like the terraform-aws-s3-statebackend
module, best-practice would be to have one state bucket per region to decouple state operations so they aren’t affected by regional failures.
Note that terraform cloud is not hosted on AWS, so using terraform cloud as the statebackend is an alternative.
Ignoring whether that’s a good idea or not, if you want it in 1 terraform invocation and state file then you can:
• create a module with your instances
• call your module for each region, passing a region-specific aws provider into the module
See the first example of https://www.terraform.io/docs/configuration/modules.html#passing-providers-explicitly where they call a module and pass a region-specific provider into the module as the default aws provider for that module
Modules allow multiple resources to be grouped together and encapsulated.
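A rough sketch of that pattern (module name, source path, and aliases are placeholders):
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "euw1"
  region = "eu-west-1"
}

module "app_us_east_1" {
  source    = "./modules/app-server"
  providers = { aws = aws.use1 }
}

module "app_eu_west_1" {
  source    = "./modules/app-server"
  providers = { aws = aws.euw1 }
}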
Thanks! I remember now I’ve used explicit providers before when using letsencrypt staging and production api servers at the same time
Ignoring whether that’s a good idea or not,
I’d be happy to hear your thoughts on why it’s not a good idea.
I wouldn’t say it’s not a good idea, just that there are trade offs. It’s mainly to do with “blast radius” of changes, and what happens if the main region fails. It is probably fine to do what you’ve proposed though.
Terraform Cloud Wrapper. Contribute to mvisonneau/tfcw development by creating an account on GitHub.
2020-04-08
Hey guys, just started working with terraform. Can we make terraform store its output in S3 so my services fetch details from there, like ELB name, ID, etc.? Don’t want to use the AWS CLI
for other terraform you can use this datasource to query a remote state
Accesses state meta data from a remote backend.
for things other than terraform - you probably want to write values to something like parameter store or something like that
if you’re really set on writing to s3 you could use this resource to write a file out to s3 as part of your terraform module https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html
Provides a S3 bucket object resource.
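A quick sketch of both suggestions (bucket, parameter, and resource names are made up): write the ELB details to Parameter Store, or drop a JSON file into S3 for services to read.
resource "aws_ssm_parameter" "elb_name" {
  name  = "/myapp/elb_name"
  type  = "String"
  value = aws_elb.app.name
}

resource "aws_s3_bucket_object" "outputs" {
  bucket       = "my-outputs-bucket"
  key          = "myapp/outputs.json"
  content      = jsonencode({ elb_name = aws_elb.app.name, elb_id = aws_elb.app.id })
  content_type = "application/json"
}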
thnx Chris
Does anyone know of a way that you can take an output and set a CircleCI env var as that output prepended with TF_VAR_
? Other than a script possibly… I’m really more wondering if there is a way in CCI, but thought I’d ask here since it’s TF related too.
I don’t know in CCi however I’d use terraform-docs json & jq to prepare it.
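In CircleCI specifically, the usual trick (a sketch; the output name is hypothetical) is to append the export to $BASH_ENV so later steps in the same job pick it up:
# in a CircleCI step, once terraform output is available
echo "export TF_VAR_elb_name=$(terraform output elb_name)" >> "$BASH_ENV"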
I have a question about terraform-null-label; It seems like the tags support maps. However, some Azurerm resources like azurerm_kubernetes_cluster
support maps while others like azuread_service_principal
support only lists. Is there any way to output (or input) lists of tags from null-label?
I don’t know for certain, but I guess likely not since most CP modules are AWS focused and tags in AWS are exclusively (I think) key value pairs. This would probably be a good contribution to that module though.
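If it helps, a sketch of converting the module’s tags map into a list of {key, value} objects in your own code (assuming module.label is a null-label instance):
locals {
  tags_as_list = [
    for key, value in module.label.tags : {
      key   = key
      value = value
    }
  ]
}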
Right now, when we add new devs to our team, we add their name, email, github handle, etc. to a terraform input file, and then we have Atlantis provision their Datadog account, github invite to our org, AWS IAM User, etc.
I am looking into Okta so that our Ops side of things can create users across all our accounts, but have some concerns where it seems like support for AWS users and some other orgs would become harder to work with (SAML seems quite annoying compared to SSO, for example, and we like having IAM Users as we already have strategies for eliminating long-lasting credentials)
For those who have faced similar issues before, how did you decide which accounts to provision through Okta/terraform? If you have a well-oiled terraform/atlantis setup, do you feel that Okta is still worth pouring some money into?
I definitely do not have a well oiled setup yet, but what I do like is that Okta lets me add MFA to things that otherwise don’t really support it. And have different MFA policies depending on what application they’re accessing.
The other great advantage of SAML is a single point to de-provision access quickly. Auditing login events also becomes a lot easier.
I’d look at the decision by prioritized pragmatism - are things that Okta is going to do part of the current major priority of the business? Usually I see this kind of priority come around during a compliance event, like targeting ISO 27001 or PCI DSS compliance or an IPO.
2020-04-09
Hello, I am using VSCode and was looking for syntax highlighting and linting; I just want to share that the syntax highlight extension to use is “terraform” 0.1.8 from Anton Kulikov (25k downloads), latest commit in December 2019.
the extension “terraform” version 1.4.0 from “Mikael Olenfalk” (520k downloads) is not working with the latest terraform syntax: latest commit October 2019.
Yah ~ansible~ vscode is pretty bad for terraform now due to the lack of support on the plugin.
JetBrains (IDEA or PyCharm) has a better plugin although its suffering from the same sort of lack of updates and doesn’t recognize some of the newer features in 0.12
ansible ?
err sorry, vscode. I was editing ansible IN vscode when I wrote that
i’ve been using the vscode with the language server for tf 0.12 support. it’s still a work in progress, but works for a good 90% of my use cases
I kept trying the language server and it never seemed to do anything (at the time) so I eventually gave up
been using it for months now, no problem
Ok I can give it a shot again.
The nice thing with Jet Brains IDE plugin is you get full featured ‘where is this used’ and refactoring, and you can cmd/ctrl-click to jump to the definition of a var/local or into a module
i never use that kind of feature anyway can’t keep track of the shortcuts
same usually, but since this is mostly ‘click and go’ its the one I find helpful
I have switched to IntelliJ because of this, and I am a big fan of their plugin
Anyone know if it’s possible to use the new tf12 syntax to add optional parameters to a resource? I want to pass through a string for redrive_policy on an sqs_queue but it doesn’t treat an empty string as “not set”. Trying to figure out if i can use a for loop to skip adding the redrive_policy parameter at all if the variable is empty?
yes, you can use null
for that
variable "redrive_policy" {
type = string
default = null
}
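Then in the resource the argument is simply passed through; when the variable is null Terraform treats the argument as unset (a sketch, the queue name variable is made up):
resource "aws_sqs_queue" "this" {
  name           = var.queue_name
  redrive_policy = var.redrive_policy # null means "not set", so no empty-string error
}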
Does anyone know how to setup codepipeline resource for multi-region actions? The documentation is unclear, saying I need to add “region” to the “artifact_store” block for multi-region, but every time I try that, I get “Error: region cannot be set for a single-region CodePipeline”
How would I make it mutli-region?
My colleague helped get this added to the AWS provider but I’m not very familiar with it myself. Did you see that it was only just released, and check the relevant PRs? https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md#2560-april-03-2020
Thanks man, yeah I saw it was only recently released. I’ve updated to latest AWS provider but still doesn’t seem to work. The documentation just isn’t clear enough, or else it’s still broken?
I wish there was an example in the PR with it working
Use the AWS CloudFormation AWS::Pipeline resource for CodePipeline.
See how there is ArtifactStore
and ArtifactStores
- the latter has regions
Oh, it looks like Terraform combines them into a single concept, somehow.
yeah, i tried using the plural ArtifactStores instead of ArtifactStore, but it threw another error saying: Unsupported block type
if you have 1 artifact_store
block then it maps to ArtifactStore
but if you have multiple it will map to ArtifactStores
have you got multiple blocks and did you set region in them all?
my goal is to run the pipeline with a single artifact in one region (CodeBuild) and deploy to multi regions (ECS)
so i only need the single S3 bucket artifact in one region
Let me see if I can convince my colleague to join this channel. It sounds like what he is currently working on (which is why he worked on the region feature).
cool, great
I don’t think he’s online at the moment.
ok
it might be that it doesn’t support my use case yet
the alternative seems to build in each region, then deploy from there, but that just seems redundant
I tried to figure it out myself but I can’t tell why multiple artifact stores are necessary. Will wait and see what he says. It might be a while though because it’s the Easter weekend.
sure, no worries. I really appreciate you jumping in to try and help though. thanks
I might be on to something. I created a single region pipeline using TF, and then manually added the other Deploy regions, and ran terraform plan again to see the differences. It looks like it might need multiple artifact_store blocks for each region, but the location isn’t the S3 bucket, it’s a codepipeline ID for that region, which I’m trying to figure out…. The diff shows this:
- artifact_store {
- location = "codepipeline-eu-west-1-221120437065" -> null
- region = "eu-west-1" -> null
- type = "S3" -> null
}
- artifact_store {
- location = "codepipeline-us-west-2-423457879940" -> null
- region = "us-west-2" -> null
- type = "S3" -> null
}
are they your account ids? maybe they are real buckets
yeah those aren’t my AWS account numbers, and those aren’t S3 buckets in my account
looks like AWS-owned IDs?
https://stelligent.com/2018/09/06/troubleshooting-aws-codepipeline-artifacts/
When you first use the CodePipeline console in a region to create a pipeline, CodePipeline automatically generates this S3 bucket in the AWS region.
oh wait, you are right.. it did create those S3 buckets for me
this is starting to make more sense. It might be required to have an artifact_store (S3 bucket) in each region I plan to deploy to
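So a multi-region pipeline would presumably end up with something along these lines (a sketch only; bucket references, regions, and the elided stages are placeholders, and I haven’t verified this against that provider version):
resource "aws_codepipeline" "multi_region" {
  name     = "my-pipeline"
  role_arn = aws_iam_role.codepipeline.arn

  # one artifact store per region you deploy to
  artifact_store {
    location = aws_s3_bucket.artifacts_eu_west_1.bucket
    type     = "S3"
    region   = "eu-west-1"
  }

  artifact_store {
    location = aws_s3_bucket.artifacts_us_west_2.bucket
    type     = "S3"
    region   = "us-west-2"
  }

  # ... stages, with `region` set on the cross-region deploy actions ...
}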
Is there a way for terraform to associate an instance with a launch template / ASG or wait for the ASG / launch template to create an instance? I’d like to include the launched instance’s details in the output of my Terraform module.
Specifically I’m looking for a way to provision a Launch Template + ASG and then get the resulting instance IDs that are spun up from those resources.
@Matt Gowie not sure if TF lets you do this. I think you could use a terraform local-exec and run an AWS CLI script which will query the created ASG and print the instance IDs.
Yeah @msharma24, was afraid of that. Will have to look into that and see if I can make something like that work.
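A rough sketch of that local-exec idea (the ASG resource name is whatever your module exposes; this only prints the IDs, it can’t feed them back into Terraform outputs):
resource "null_resource" "print_asg_instances" {
  triggers = {
    asg_name = aws_autoscaling_group.default.name
  }

  provisioner "local-exec" {
    command = "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names ${aws_autoscaling_group.default.name} --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text"
  }
}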
What do people think about Atlantis doing terraform apply
before merging the pull request? They explain why on this page https://www.runatlantis.io/docs/locking.html
my gut reaction is
Same, but then I thought about it and the alternatives seem worse.
Does the apply happen in a separate CI/CD system? Do you have Slack notifications to bring you into the approval page? Then another Slack notification if the apply fails? This seems messy.
it’s messy either way i suppose. we consider pr approval as “go” and the merge then just is the ci trigger. the ci job status needs to be monitored in either case.
I’m trying to build something similar to Atlantis using GitHub Actions, and having all of the human interaction in the pull request page is really tempting.
apply before merge makes it so much simpler
really? github-actions works fine on branches, right?
yes they do
we typically operate around several events… prs, branches, and tags. the ci does different things in all cases
external CI system?
usually all the same CI system, but it can be several. depends on the project
i mean do you have a separate CI system or just github?
we haven’t done much with github-actions ourselves. we’ve toyed with it here and there, but haven’t had a reason to move things actively
we’re mostly still on travis-ci, but also codecommit, appveyor, azure pipelines, and now a little github actions
i’m trying to build a terraform workflow that’s all in github
also toyed with a github repo linked to gitlab-ci…
so no logging into separate CI systems
yeah, if you have private github access, and github actions logs are private as well, i can see that
and no extra infrastructure to run
i’m always super cautious with lettings prs have access to credentials
yeah, it checks the user’s identity before running a plan, and pull requests from forks won’t work
are the logs private? i mean, what if an authorized user is debugging something and changes the pr to log sensitive values?
i may move the actual terraform apply
part into codebuild to reduce the AWS access that github needs
i think the logs will be as private as the repo itself, so a user could potentially reveal sensitive values that way
if the terraform apply
part is moved to codebuild, access to logs could be restricted to
but i’m more stuck on the user interface and workflow side of it, hence the question about applying before merging
heh
if you want something else to compare to, here’s one of our public projects that implements the pr/branch/tag workflow using travis-ci…. https://github.com/plus3it/wrangler-watchmaker/blob/master/.travis.yml#L31-L91
Manages buckets and files needed for the public/default watchmaker configuration - plus3it/wrangler-watchmaker
i’d like to get a plan in the pr, but exposing the credentials is a no go on the travis-ci side…
you could use a github webhook to trigger codebuild, run the plan, and post the status check back to the commit/pr with a link to the job. we’ve done that also, for other projects
what is watchmaker?
here’s what i’m working on https://github.com/pretf/example-project/pull/1#issuecomment-611670654
the travis stuff looks probably good but i want to keep the entire workflow within github, and preferably within pull requests
and i want to avoid: plan, approve, merge, apply fails, confusion
it’s very early stages. still trying to figure out how i want it to work.
Yeah, totally, was just providing another reference, maybe spark some ideas around the interface/config…
Thanks, it is helpful
I’ll do this iteratively. Start entirely in GitHub, then look at moving pieces into external systems (like CodeBuild). But first I need to nail the interface.
Atlantis seems to have it about right, but apply-before-merge is radical.
Does anyone know why using -detailed-exitcode
for terraform plan returns 2
shell exit code ?
It’s explained here https://www.terraform.io/docs/commands/plan.html#detailed-exitcode
The terraform plan
command is used to create an execution plan. Terraform performs a refresh, unless explicitly disabled, and then determines what actions are necessary to achieve the desired state specified in the configuration files. The plan can be saved using -out
, and then provided to terraform apply
to ensure only the pre-planned actions are executed.
Thanks @randomy I just RTD
I’m just working on making our CI build fail and send a Slack notification when the terraform plan stage for X env fails. Using the Azure Pipelines Slack notification and an AzDO YAML pipeline to build CI/CD.
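For the CI side, a small sketch of handling the three exit codes so that “changes present” isn’t treated as a failure:
set +e
terraform plan -detailed-exitcode -out=tfplan
code=$?
set -e

case $code in
  0) echo "No changes" ;;
  2) echo "Changes detected, plan saved to tfplan" ;;
  *) echo "terraform plan failed" && exit 1 ;;
esac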
2020-04-10
Is it possible in terraform 12 to dynamically define provider blocks based on a variable?
I tried something like this:
locals {
  satellite_regions = {
    eu  = "eu-central-1"
    us1 = "us-east-2"
  }
}

provider "aws" {
  for_each = local.satellite_regions
  alias    = each.key
  region   = each.value
}
But terraform complains
Error: Reserved argument name in provider block
on main.tf line 17, in provider "aws":
17: for_each = local.satellite_regions
The provider argument name "for_each" is reserved for use by Terraform in a
future version.
I’m not fully up to speed on HCL2, so maybe my syntax is wrong? (FWIW Terraform v0.12.24)
Like the error says, you can’t use for_each in some blocks yet. HashiCorp is expanding the use of for_each soon, but you can’t use it everywhere.
is there another workaround to achieve variable-defined providers?
I’m not sure its capable of it yet
i don’t think so… https://github.com/hashicorp/terraform/issues/19932
Current Terraform Version Terraform v0.11.11 Use-cases In my current situation, I am using the AWS provider so I will scope this feature request to that specific provider, although this may extend …
You can do it with https://github.com/raymondbutcher/pretf
Generate Terraform code with Python. Contribute to raymondbutcher/pretf development by creating an account on GitHub.
Or manually define every possible region provider, then for each region call a module and pass the region-specific provider in, along with an “enabled” flag for that module based on whether that region was included in your list.
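A sketch of that manual approach (the region list variable, module name, and source path are illustrative):
provider "aws" {
  alias  = "eu"
  region = "eu-central-1"
}

provider "aws" {
  alias  = "us1"
  region = "us-east-2"
}

module "satellite_eu" {
  source    = "./modules/satellite"
  enabled   = contains(var.satellite_regions, "eu-central-1")
  providers = { aws = aws.eu }
}

module "satellite_us1" {
  source    = "./modules/satellite"
  enabled   = contains(var.satellite_regions, "us-east-2")
  providers = { aws = aws.us1 }
}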
2020-04-11
did anyone use this module? if yes, can you let me know your experience with it
I’m interested as well. I’ve heard some say for serverless just use the Serverless cli or SAM, as terraform overcomplicates this one aspect and all toolchains are driven by cli for this. I’d like to also know if anyone has used that or equivalent and prefers it over the other methods.
yeah i would like to hear about it too
2020-04-12
2020-04-13
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Apr 22, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
hello - can someone possibly explain the difference between https://github.com/cloudposse/terraform-terraform-label and https://github.com/cloudposse/terraform-null-label. It’s not obvious from their README. It seems the latter is more active and has more inputs and examples. Also confused as the doc says The null
in the name refers to the primary Terraform null provider used, but looks like as of v0.15.0
use of null_resource has been removed, but probably the name remains for backwards compatibility. Is the former terraform-terraform-label
deprecated in favor of terraform-null-label
? Thanks.
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
@Doug Lethin we should have this documented, but for now here’s the quick answer
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
(check out some of the responses there)
2020-04-14
I am attempting to build out my first autoscaling group, and as part of the launch instance I need to install some programs and upload some config files. When I was just creating a single instance I would use host = self.public_ip to log in to the externally facing IP address and it would run its magic; however host = self.public_ip is an unsupported attribute in aws_launch_configuration, so how will I handle this now?
You would want to look into: https://docs.aws.amazon.com/batch/latest/userguide/launch-templates.html and the Cloud-Init section of it; you can provide this also via the terraform resource afaik.
AWS Batch supports using Amazon EC2 launch templates with your compute environments. Launch template support allows you to modify the default configuration of your AWS Batch compute resources without requiring you to create customized AMIs.
I would suggest looking at TF data templates to pass custom scripts as user data to the launch templates
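Something along these lines (a sketch; the template path, variables, and AMI are hypothetical):
data "template_file" "user_data" {
  template = file("${path.module}/templates/bootstrap.sh.tpl")

  vars = {
    environment = var.environment
  }
}

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.medium"

  # launch templates expect user data to be base64 encoded
  user_data = base64encode(data.template_file.user_data.rendered)
}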
I’m trying to use emr-cluster module but I don’t see step component defined in the aws_emr_cluster resource. The only reference I see, applies ignore_changes meta-argument on the lifecycle block. Is there any way to provision custom jar with map/reduce jobs via terraform with this setup?
Hello I have a terraform modules + CircleCI situation that I am pretty sure others must have resolved for. Basically CircleCI is unable to download/clone terraform modules in a repo that references them, both hosted in our GitHub. From what I have researched looks like it may be a security feature in CircleCI. How should this be solved for ? Also let me know if there’s another channel I should post this query to.
@curious deviant If the TF modules are hosted in a private repo, your CI agent will need clone access to your repository. You may create a bot user in your GitHub org with read-only access to the terraform modules repo and configure the access token in the CI pipeline with some sort of encryption, so when the pipeline executes the CI agent authenticates into your GitHub org to clone the TF repos
thanks !
if I use a module, say one of the awesome ones you guys created, can I on my local run of terraform use a terraform.tfvars file to input into the module that I’m using via git? or am I not understanding how the module creation and download works?
yes, you instantiate a module and provide the variables for it. The variables’ values can come from variables.tf or from any .tfvars files
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
OK cool so I just put all the variables I want defined in the variables.tf and go from there. Cool thank you
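i.e. roughly like this (variable names and the pinned version are just illustrative):
# variables.tf in your root module
variable "namespace" {}
variable "stage" {}
variable "name" {}

# main.tf - pass them through to the remote module
module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name
}

# terraform.tfvars
# namespace = "acme"
# stage     = "dev"
# name      = "app"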
How do people here who use lambdas, deploy lambda config changes (with no code change) and get a version deployed and the alias updated? I dont think Terraform handles this well
It’s been a while since I had to manage lambdas but my go to was the Serverless Framework. Terraform isn’t good at managing the lifecycle of a lambda function.
terraform does suck for lambdas.
one of the best modules (or so I’ve heard) is https://github.com/claranet/terraform-aws-lambda if you do want to go this route.
Terraform module for AWS Lambda functions. Contribute to claranet/terraform-aws-lambda development by creating an account on GitHub.
Im using SAM from AWS
2020-04-15
When using https://github.com/cloudposse/terraform-aws-ec2-autoscale-group if I want to create an ASG of size=1 , and attach an EBS volume that persists and is always mounted onto the active instance, should I use the block_device_mappings
input.. or create the ebs volume separately and use userdata to attach it on boot?
Reading the docs of aws_launch_template doesn’t clear it up for me.
If I include a config such as the following, will it create a new volume for each ASG instance created or re-use the same one?
block_device_mappings = [
  {
    device_name  = "/dev/sda1"
    virtual_name = "root"
    ebs = {
      encrypted             = true
      volume_size           = 50
      delete_on_termination = false
      volume_type           = "gp2"
    }
  }
]
or create the ebs volume separately and use userdata to attach it on boot?
This is correct
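A rough sketch of the separate-volume approach (device name, AZ handling, and the IAM permission for ec2:AttachVolume on the instance role are left as assumptions):
resource "aws_ebs_volume" "data" {
  availability_zone = var.availability_zone # must match the AZ the ASG launches into
  size              = 50
  type              = "gp2"
  encrypted         = true
}

locals {
  attach_volume_userdata = <<-EOT
    #!/bin/bash
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws ec2 attach-volume \
      --volume-id ${aws_ebs_volume.data.id} \
      --instance-id $INSTANCE_ID \
      --device /dev/xvdf \
      --region ${var.region}
    # ...wait for the device to appear, then mount it...
  EOT
}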
Hello, I am noticing issues with https://github.com/cloudposse/terraform-aws-eks-cluster whether I am adding/removing managed node groups or just making a minor change to an existing one (example: increasing desired number of nodes). In all circumstances, I am able to run an initial terraform plan/apply
but any future run returns the following error
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Creating...
Error: configmaps "aws-auth" already exists
on .terraform/modules/eks_cluster/auth.tf line 84, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
84: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
This includes even setting the variable kubernetes_config_map_ignore_role_changes = false
as recommended. Using the latest stable version of the plugin (0.22.0) as well as the latest stable terraform. Thoughts?
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
the issue happens even when I run the example from the examples/complete directory verbatim. I would expect that I could run terraform apply
seemingly over and over if there are no changes to state, but the 2nd run always results in the error.
can't confirm the problem right now, but the gist of the issue is a fundamental problem with how the auth config map is managed. Basically, managed EKS node groups create this config map automatically if it doesn't exist and update it if it does. The hack is to create an interstitial dependency in terraform using a null resource, so we first create the config map with terraform (so terraform is aware of the resource) and only then create the managed node group.
not ruling out that the behavior might have changed as is sometimes the case.
@johncblandii have you noticed anything lately?
@Alex Friedrichsen manually delete the config map from the cluster
Or you can import it
Thanks @Erik Osterman (Cloud Posse) and @Andriy Knysh (Cloud Posse).
Seems like the issue was in fact what was described about the configmap not being in state.
Equal parts working with an existing cluster (provisioned previously with an older version of the same module) and equal parts network timeout on my end when trying before with new clusters. Thanks again.
@Alex Friedrichsen if you don’t do it already, you should pin the module to a release (either the last one or the prev one if you used it before). Don’t use master b/c it could introduce breaking changes.
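For example (the tag shown is illustrative; pin to whatever release you have tested against):
module "eks_cluster" {
  # pinned to a tagged release instead of master
  source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.22.0"
  # ... other inputs unchanged
}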
Ahhh cool, glad you got to the bottom of it. Yes, makes sense that it would have been a migration problem from the old version of the module to the new one.
2020-04-16
@channel I am thinking of using a remote storage backend for storing terraform state files. The way I use it, the state often contains sensitive info like ssh keys etc. What is a good solution, S3 or Consul, from a cost and security perspective? Thanks in advance
We’ve always stored state encrypted in S3. Consul has too much overhead, unless perhaps you’re already using it
Thanks @Abel Luck. For encryption, do you suggest server-side or client-side?
The terraform remote s3 backend only supports server side encryption (using AWS KMS keys).
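A sketch of that backend configuration (bucket, key and table names are placeholders):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true                    # server-side encryption of the state object
    kms_key_id     = "alias/terraform-state" # optional: use SSE-KMS instead of SSE-S3
    dynamodb_table = "terraform-locks"       # optional but recommended: state locking
  }
}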
Noted thanks for the help
That’s for the storage of remote state.
However we use client side encryption for the storage of input variables. We do this with mozilla’s sops tool.
So our inputs to terraform are committed to git, encrypted.
There is a terraform sops provider that makes loading encrypted secrets from a sops file easy
We find it much easier to handle than a full HA vault deployment (we're a small team, so vault is overkill).
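For reference, usage of that sops provider typically looks something like this (a sketch from memory; check the provider's docs for the exact attribute names):
data "sops_file" "secrets" {
  source_file = "secrets.enc.yaml"   # encrypted with sops using KMS or PGP
}

module "db" {
  source   = "./modules/db"                              # hypothetical module
  password = data.sops_file.secrets.data["db_password"]  # decrypted at plan/apply time
}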
Got it, I was confused too about whether we should use vault or not; it's definitely overkill for small teams. That was some good insight
Hello, I am new here. SweetOps looks like a strong community!
I am thinking about using Terraform and I am wondering if it would easily allow me to specify that one of the resources needed is a helm chart in a private helm repo on gcs and it would create (and push) it if it does not exist, and another resource needed would be a docker image on google container registry and if it does not exist it would create (and push) it… So far I've seen resources being "ip address, vnc, vm" but not what I just explained. Did I miss something?
Thanks
I've never used terraform to push a helm repo. The helm provider only has data sources for repositories, and resources that allow you to define and create releases from charts within them.
if you have the entire chart locally that you would push for a deployment you can just refer directly to the chart location and bypass the entire repo creation process
resource "helm_release" "local" {
name = "my-local-chart"
chart = "./charts/example"
}
Hello. I have trouble with aws_appautoscaling_policy. When only one container is left, this alarm does not go away.
https://take.ms/5cyu8
resource "aws_appautoscaling_target" "target" {
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.main.name}"
scalable_dimension = "ecs:service:DesiredCount"
min_capacity = 1
max_capacity = 5
}
resource "aws_appautoscaling_policy" "down" {
name = "${var.project}_scale_down"
service_namespace = "ecs"
resource_id = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.main.name}"
scalable_dimension = "ecs:service:DesiredCount"
step_scaling_policy_configuration {
adjustment_type = "ChangeInCapacity"
cooldown = 60
metric_aggregation_type = "Maximum"
step_adjustment {
metric_interval_lower_bound = 0
scaling_adjustment = -1
}
}
depends_on = [aws_appautoscaling_target.target]
}
resource "aws_cloudwatch_metric_alarm" "service_cpu_low" {
alarm_name = "${var.project}_cpu_utilization_low"
comparison_operator = "LessThanOrEqualToThreshold"
evaluation_periods = "2"
metric_name = "CPUUtilization"
namespace = "AWS/ECS"
period = "60"
statistic = "Average"
threshold = "10"
dimensions = {
ClusterName = aws_ecs_cluster.main.name
ServiceName = aws_ecs_service.main.name
}
alarm_actions = [aws_appautoscaling_policy.down.arn]
}
So, sounds like the terraform helm provider would need something called helm_chart, which would be linked to a helm_repository and some files (local or git), that would run helm package ${DIR} and then handle the different ways to "add" a chart to a repo, of which I think there are many. I use gcs, so for me it's helm gcs push ${chart}.tgz ${reponame} …
right, a resource called helm_chart specifically that would run all the package-and-push stuff. Typically if you are doing this it would be from another pipeline though (simply using the helm command). The gcs repo would be created by terraform in some other pipeline at some other level
Thank you
2020-04-17
So I'm trying to think of ways to simplify running in Terraform Cloud + later maybe some Azure DevOps. The terraform-root-modules repo Cloudposse has made me think… while I need separate repos for modules, would it be better practice to set up all my root module plans from one root repo in the same manner? The checks list in GitHub would be huge after a while, but it would probably be easier to contribute new plans at that point, if I required no modules in there, just root plans.
I don't use Make, but would probably set up some InvokeBuild powershell helper for local ops, and then have a terraform plan in there that sets up each folder added automatically as workspaces with version control hooks and all (already built this).
Any thoughts to the contrary?
I want something like this in my slack room. Please tell me they need more beta users, because I would love a bot that gave me suggestions from confluence and previous history. The only thing I've seen before cost a fortune and I couldn't get something like that through at all.
You've seen #variant (variant2)? It comes with a Slackbot built-in
Now i just want to figure out how to make it answer questions completely wrong by putting the wrong keywords in… Anyone know how to stop robots from taking over the world?
Maybe you didn’t all see it, but Foqal prompted me with some prior message history about make files. Pretty cool concept!
2020-04-18
Hi! Someone help, I'm dying here. Does Terraform have a way to create EventBridge Event Buses? I just can't find how. Every search points to aws_cloudwatch_event_permission but that's out of scope here. Anyone?
Terraform v0.11.14
not yet, check this issue, and the prs linked to it… https://github.com/terraform-providers/terraform-provider-aws/issues/9330
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
2020-04-19
2020-04-20
Am I right that I cannot use this module for a default VPC? It REQUIRES entering all the CIDR info, which the Terraform "standard" module can do without in a default VPC environment. By the way, specifying in the docs which arguments are mandatory would help…
Oops, okay, I've found a place where the required inputs are marked as such (but not on the main page, only when I click Inputs)
And none of the examples basic/complete validate, missing required arguments or having unsupported arguments… (tf 0.12.24)
How would one convert a map of key=value to a string of “key=value,key=value…”
probably using the string templating and a for loop… https://www.terraform.io/docs/configuration/expressions.html#string-templates
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
think ive got it
join(",", formatlist("%s=%s", keys(var.common_tags), values(var.common_tags)))
I’m surprised this works. Looking at it makes think it would produce a list like ['key1=key2', 'key3=key4', 'value1=value2', 'value3=value4']
you made me doubt myself so i went back to check it worked right:
Outputs:
trevor = bob=fred,dan=true,default_tag=default_value
and can confirm they go through my annotations of my ingress controller and then back as tags to the elb it sets up
Thanks for confirming! Interesting behavior of formatlist
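An equivalent 0.12 for-expression, which some find easier to read than relying on formatlist zipping two lists:
join(",", [for k, v in var.common_tags : "${k}=${v}"])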
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Apr 29, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
does anyone know if terraform-enterprise supports submodules? Similar to this https://registry.terraform.io/modules/terraform-aws-modules/iam/aws/0.0.4
enterprise or cloud?
oh, i suppose enterprise was renamed to cloud
2020-04-21
Hi all, can anyone help me fetch the alerting details or data from Elasticsearch and send them to the OTRS ticketing tool using Terraform? I read about AWS Kinesis with Terraform but I'm still a bit confused, so I am asking here. If anyone has worked on this, please help.
Hello is there a way to reference multiple attributes for a partition key in dynamodb?
Hey folks — Anybody know of a way to add tags to an existing resource? Specifically the main route table which is generated implicitly when creating a VPC?
Is the answer to create a new route table with the same routes and then do an aws_main_route_table_association?
Did you try importing the route table into your state then editing via specific TF code?
I'm writing an open source module, so I don't think that would work because it needs to be easily repeatable across projects.
I would normally recommend avoiding the default route table and leaving it unused, but you should be able to use this resource in tf to adopt it and then tag it.
https://www.terraform.io/docs/providers/aws/r/default_route_table.html
Provides a resource to manage a Default VPC Routing Table.
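A sketch of adopting and tagging it (the VPC reference is hypothetical; note that this resource also manages the table's routes, so declare any routes you want to keep):
resource "aws_default_route_table" "main" {
  default_route_table_id = aws_vpc.this.default_route_table_id

  tags = {
    Name = "main-route-table"
  }
}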
Hello - I'm having a bit of a brain fart, hoping someone can help.
Within my resource "aws_launch_configuration" I've set my image_id to reference var.image_id (depending on OS type, e.g. Windows or CentOS). If I set var.image_id to CentOS, how can I tell user_data to execute a bash template file, as opposed to setting var.image_id to Windows and executing a powershell template file?
hope that makes sense.
or should I use the template_cloudinit_config data source and include both bash and powershell scripts as a multi-part script?
i got what i’m expecting to work - seems a bit hacky.. Any feedback would be appreciated too :)
user_data = var.ami_name == "CentOS" ? file("${path.module}/templates/install_xxx.sh") : data.template_file.windows_userdata.rendered
If there are just two options then maybe make the incoming variable a binary “linux” or “windows” and do a map lookup on that within the module?
Or maybe use an ami data source to lookup the platform value and similarly map it out?
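A sketch of the map-lookup idea (variable names, the template path, and the instance type are illustrative; data.template_file.windows_userdata is the one from the question):
variable "os_type" {
  description = "one of: linux, windows"
  default     = "linux"
}

locals {
  user_data_by_os = {
    linux   = file("${path.module}/templates/install_xxx.sh")
    windows = data.template_file.windows_userdata.rendered
  }
}

resource "aws_launch_configuration" "this" {
  image_id      = var.image_id
  instance_type = "t3.medium"                        # illustrative
  user_data     = local.user_data_by_os[var.os_type] # picks the right script per OS
}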
2020-04-22
can anyone help me?
Hello. I have trouble with aws_appautoscaling_policy. When only one container is left, this alarm does not go away.
https://take.ms/5cyu8
^^ this looks okay to me, the alarm definition is CPUUtil <= 10, therefore it is in ALARM state
So. There was one container and the alarm does not disappear.
hello,
I use count = var.env == "prod" ? 1 : 0
to trigger the creation of a resource only if env = “prod”. Is there a way to do it at the module level ?
module "something" {
source = "somewhere"
enabled = var.env == "prod"
}
resource something "something" {
count = var.enabled ? 1 : 0
}
thanks I will try it
basically, pass through an enabled (or similar) variable and use that for count. there isn't a terraform feature for enabling/disabling a whole module.
HashiCorp says they’re going to add for_each to the module level soon
thanks for your input, it did work, but then I had to handle many arrays, so I removed the exception that needed this customization
Introducing the HashiCorp Cloud Engineering Certification program, for cloud engineers to demonstrate their knowledge, capability, and expertise working with the de facto standard …
For those using Terraform Cloud
How do you simplify deploying the same work to multiple regions?
The way I see it is repeated code in the project or creating a workspace per region (can do through terraform provider so it’s not a pain with some initial legwork).
And no terragrunt or make files involved here… all terraform cloud please
Ya, programmatic creation of workspaces
Consider defining them all in a .yaml config and then terraforming it out.
But you can’t do a for_each style approach on providers so what would yaml give you?
creating the workspaces uses a single provider with terraform cloud, no?
the provider config can reference variables, right? so per workspace, toggle the region variable, reference it in the provider config?
then each workspace is associated with a region which terraform runs against
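A sketch of that approach with the tfe provider (the org name, workspace naming, and the YAML file are assumptions):
locals {
  # regions.yaml: a simple list such as ["us-east-1", "us-west-2", "eu-west-1"]
  regions = yamldecode(file("${path.module}/regions.yaml"))
}

resource "tfe_workspace" "region" {
  for_each     = toset(local.regions)
  organization = "my-org"
  name         = "my-app-${each.value}"
}

# per-workspace variable the provider config can read
resource "tfe_variable" "region" {
  for_each     = tfe_workspace.region
  workspace_id = each.value.id
  key          = "region"
  value        = each.key
  category     = "terraform"
}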
Right, that’s what I said in post, workspace per region. I was wondering if there was anything more elegant or that’s what I’d have to do.
Not that I am aware of
@johncblandii
well, it’s a workspace per deployment, so it could be a matrix of region+whatever-else. half of why i avoided workspaces entirely, just too inflexible
Nice. I’ve actually been wanting to pilot doing this in Azure DevOps since this would be a very very simple thing to just flip into a matrix build and then have it run plans for every region and even account all in one plan.
It's not prebuilt, there's plumbing to do, but if I can convince folks to give me a chance to do it, I think it's more scalable
yeah, we basically use terragrunt to give us a lot of that flexibility, while retaining deployments in independent remote-states
also allows us to test locally more easily
but if you don’t want terragrunt or some other wrapper, there’s a lot more legwork IMO
So it comes back to if I’m NOT using terraform cloud that terragrunt provides the most flexibility to rapidly deploy compared to doing all that in the Azure DevOps pipeline.
I’m still mixed on approach. Trying to keep it simple, but investing in a little more complication with something like terragrunt might provide initial complications while simplifying cross region/account deployments a lot
Decisions decisions
you can still use terragrunt in the pipeline
For a team early in automated pipeline adoption, do you feel terragrunt will overcomplicate things, or would it be reasonable to adopt as a team?
i think if you’re looking at multi-account, multi-region then the complexity will be there with any solution, and everyone will need to learn something new and dig pretty deep to understand the abstractions
And terragrunt is established enough that it’s a solid approach vs trying to do it all in pipelines etc. Leverage terragrunt to simplify while using pipelines to run what already was simplified
Makes sense. I tried it last year but still wasn’t 100% on cloud pipelines with terraform so it over complicated it for me. I’m pretty solid now so probably would make more sense
latest terragrunt has gotten pretty powerful with its dependency and file/code generation features, and access in terragrunt.hcl to tf 0.12 features and functions (e.g. for loops and templatefile(), etc)
sounds good. I’ll review for a new “root-module” repo and see how it goes then
#terragrunt has obviously proven very effective for this. Also, our latest project is using #variant and it’s hands down the coolest cli approach I’ve seen and it’s all written in pure HCL and compiled down to a single binary. We can define our cli exactly the way we want it to work and it’s not limited to terraform.
we’re using it with helmfile as well.
i’d love to see an implementation using variant to compare to terragrunt
I'm in a windows environment so didn't look at variant more. I'll have to re-examine.
I work with PowerShell and it seemed bash focused
ya, i want to discuss on office hours soon. just need to get approval from client.
just use wsl+bash instead of powershell, problem solved!
variant cross compiles (it’s in go)
variant gives you a modern cli interface to wrap any and all commands, workflows, pipelines, etc you need.
That way you define it in variant, run it locally for development, then stick those commands in your Ci/CD pipelines for automation.
this gives you the best of both worlds. simpler CI/CD pipelines and local execution for development.
Ok, so basically an alternative to something like act to run your github actions locally.
Instead now we remove the build system and have the commands run by this executable; pipelines just run that and don't do more logic/processing
ya, sorta. however, with variant, there’s a focus on the cli interface
e.g. this can be done with make, ansible, etc.
I’m already doing that with InvokeBuild in powershell for most things, but would be interested in learning more on this. Trying to learn Golang a bit too so i like cross platform/single exe concept
but you end up with ugly as sin commands
in our case, we literally run:
mycli deploy stack
Got you. I love trying new things. Just trying to see what value it brings for my use case.
In my case, I just run Invoke-Build deploy -Configuration qa and it would do the same thing. It's just PowerShell Core tasks that can be defined
I’ll have to experiment, sounds pretty cool
with variant, you have full control over the positional arguments (e.g. mycli a b c) and options (e.g. mycli a b -foo=2)
it compiles down to a single binary that is easily shared
And as a plus i can easily run in vscode the same commands for example
built-in testing framework
built-in slack bot
That’s a plus for sure. Dependencies.
Nice! So I'm just chatting, not saying my solution is better. All of those things except dependencies are easy in PowerShell Core, as I have a function for New-BuildSlackMessage and I've designed these functions to update rather than post another completion message.
Argument parsing in powershell is arguably better than any cli, but definitely isn’t a single binary
After build completes it posts with the humanized timing. I hate build notification defaults as so noisy.
Definitely will look at variant when I get some time as I’m loving a single binary, just have to figure out the flexibility compared to what I’ve got right now
https://github.com/raymondbutcher/pretf has a lot of feature overlap with Terragrunt but with other features like support for multiple AWS credentials, and writing dynamic resources in Python if you feel the need.
Generate Terraform code with Python. Contribute to raymondbutcher/pretf development by creating an account on GitHub.
In particular it's good for structuring projects with tfvars in environment/region-specific subdirectories; then you run pretf from there without CLI arguments and it all works.
I used terraform CLI workspaces + terraform cloud workspaces to achieve that. My terraform CLI workspaces follow the format {environment}_{region}, i.e. dev_us-west-2. Creating a new terraform CLI workspace auto-creates the TF Cloud workspace with the correct naming scheme using the name prefix in the remote backend config. The one caveat is that TF Cloud doesn't understand TF CLI workspaces (terraform.workspace is "default" to it currently). There have been a bunch of complaints about that, so I'm assuming it will be fixed in the future. The workaround I have is adding a workspace variable in TF Cloud, and in my terraform code I just have this ternary operation that allows me to do local tf applies as well as remote tf applies:
var.workspace != "" ? var.workspace : terraform.workspace
How I get my region from my TF CLI workspace, i.e. prod_us-east-1:
output "region" {
description = "the environment provided in the Terraform CLI workspace (`<environment>_<region>[_<unique-name>]`)"
value = element(
split(
"_",
var.workspace != "" ? var.workspace : terraform.workspace,
),
1,
)
}
I honestly do not think TFC using default is a bug they'll fix. It is just an overuse of the word workspace, where anyone using the CLI thinks of workspaces differently.
In this case, there is a single place (tf workspace) being used, so it is default.
but…to your point, yes…just set a var for the region, or AWS_REGION (if you're using AWS) to simply get it
This issue comment shows promise but I haven't verified it https://github.com/hashicorp/terraform/issues/22802#issuecomment-617642088
Terraform Version Terraform v0.12.8 Terraform Configuration Files resource "aws_instance" "example" { count = "${terraform.workspace == "default" ? 5 : 1}" #…
I'm doing something very similar. I don't do name parsing, and everything is remote. I just use 2 workspace variables in remote:
• region
• stage
The stage lets me get the remote credentials appropriate for the service account rather than including it in the workspace name. The region variable lets me set the region as well. I am trying to set up workspaces with code as much as I can, but only a few projects have that fully done. If I could link the github repo when doing tf init, that would be great!
Thanks for sharing this info! Cool to see different ways people are doing it. Sounds like while I’m on terraform cloud I should use just one workspace with region name like I was approaching.
In the meantime, evaluating terragrunt is promising, albeit it might be hard to build the entire workflow of plan/wait for approval and all that terraform cloud offers already. Makes me think I’ll have a harder time selling that
@sheldonh in TFC, that is unfortunately going to be the TFC workspace (default) as opposed to the TF CLI workspace name. I'm personally curious if it is the full TFC workspace name <prefix><tf_cli_workspace>, in which case you could probably cut the prefix out in your TF code.
I haven't tested it but this HashiCorp member said as much:
https://github.com/hashicorp/terraform/issues/22802#issuecomment-618583576
Terraform Version Terraform v0.12.8 Terraform Configuration Files resource "aws_instance" "example" { count = "${terraform.workspace == "default" ? 5 : 1}" #…
I just stopped a while back even trying to use it and instead configure based on a var if required. Thanks for clarifying that variable doesn’t do what expected
Not sure if this is the right channel for this, but would someone at CP mind tagging a new version of https://github.com/cloudposse/terraform-github-repository-webhooks? I had a PR that got merged, but it looks like that one is not setup to tag automatically on merge to master.
Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks
will check for it right now
Terraform module to provision webhooks on a set of GitHub repositories - cloudposse/terraform-github-repository-webhooks
Thanks @Maxim Mironenko (Cloud Posse)! Sorry for commenting on the closed PR - there didn’t look to be an issue template I could use to open one for that.
here it is: <https://github.com/cloudposse/terraform-github-repository-webhooks/releases/tag/0.6.0>
thanks for your contribution
Thanks for taking care of this, appreciate it!
For anyone using the terraform-aws-vpc module, I noticed it doesn't create a VPC endpoint policy. Is this intentional? Does anyone have a work-around to add a policy to a vpce created by the module?
Just haven’t gotten around to it.
PR’s welcome
oh great, i’ll take a look
Are you working towards CIS compliance?
no, we are in the middle of a cloud security audit and some things came up which made me think about the resources I'm currently deploying and making sure they don't show up in the audit in the future
I realized that the vpce is created with a public policy so I wanted the option to tighten it down a bit
Ok, makes sense
For terraform cloud, are there any name-matching rules for auto.tfvars loading? The docs were confusing. I was hoping to preload a few of these auto files for different regions/names and just set one workspace variable that would ensure the right file of default values gets loaded.
I do this in one project via yaml, but I'm trying to explore the native built-in offering for merging default values from files.
2020-04-23
Someone has graciously updated tfmask to support 0.12 output
I can’t seem to get it to work on redacting the output body. I defined an output with the name secret and the value still shows.
it doesn't seem to work with map/object output. For example, if you take the lambda environment variables config and pass in a secret to the map, the entry in the map is not masked. Looking at the source code, it seems it may need to look for <string> = <string> and replace the right-hand side.
I’ll see if I can add support for that
@zeid.derhally it's just a very crude regex on the body that looks for the obvious ways they are expressed. If you get into objects, I think it gets hairy quickly.
We could possibly create a new env variable called SECRET_STRINGS, and anything that is in SECRET_STRINGS gets obfuscated.
but then you still need to write the secrets to that ENV… might be a deal breaker
the code checks line by line, so having a regex that looks for a key/value pattern would work; it wouldn't be perfect but I think it would catch the majority of cases. Unfortunately it has been a while since I've written anything in golang
Terraform utility to mask select output from terraform plan and terraform apply - cloudposse/tfmask
0.4.0 adds support.
2020-04-24
Thing 1: I want to improve my terraform IAM service account security. What’s the best cross platform way to ensure the credentials are encrypted in state but can be accessed by other modules, and not output to console any longer.
Thing 2: I want to give a very simplistic process for updating ssm parameter store values and eventually lock down console. I already have format, and was planning on environment folders with tfvars and single resource call for parameters.
The catch is I can’t lock down drift yet. I want to make it automerge to master for this one project (will use github action/probot) after it passes all checks. I don’t want it to allow destruction of resources though without approval.
Would setting a dynamic block for lifecycle work, requiring an explicit "allow destruction" in the input array? And do you think I could trigger the pull request to NOT auto apply if it detects destruction?
That might be an enhancement with terraform cloud. Autoapprove except with destroy, or autoapprove new but not changed/destroy.
2020-04-27
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is May 06, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
How are folks testing their terraform code? Are there any recommended tools/frameworks out there ? Appreciate any pointers.
Terratest is a Go library that makes it easier to write automated tests for your infrastructure code. - gruntwork-io/terratest
Most of the recently updated CloudPosse modules have tests which are good examples.
thank you..I’ll take a look
i always highly recommend these slides, also from the gruntwork team… https://www.infoq.com/presentations/automated-testing-terraform-docker-packer/
Yevgeniy Brikman talks about how to write automated tests for infrastructure code, including the code written for use with tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include: unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing and CI / CD for infrastructure code.
Yeah, Loren’s link is a good one. Here is the talk: https://www.youtube.com/watch?v=xhHOW0EF5u8&feature=emb_title
I’ve been testing with Python. It’s not documented yet but here are some examples: https://github.com/raymondbutcher/terraform-aws-lambda-builder/tree/master/tests https://github.com/claranet/terraform-aws-fargate-service/tree/master/test https://github.com/raymondbutcher/terraform-archive-stable/tree/master/tests
Thank you everyone for your help . Really appreciate it.
so for some reason, when I ran a terraform destroy, terraform decided to attempt to delete my vpc module components first, before my EKS cluster built on top of it, causing it to error out. (terraform needs module dependency IMO) It also deleted my kube auth config map for EKS along with it, so now neither terraform nor I can auth to the cluster. Anyone know of a workaround to fix this (recreating the auth config map)? Other than manually deleting all the AWS resources.
why not run terraform plan/apply again and see what it will try to create?
did you deploy anything to EKS/Kubernetes like cert-manager or external-dns before you destroyed the cluster?
for example, if you deploy nginx-ingress to Kubernetes and then try to destroy the EKS module and the VPC, it will error out, because nginx-ingress deploys a load balancer into the public subnets
I just did that @Andriy Knysh (Cloud Posse). First I needed to get access back into my EKS cluster by using the user that provisioned the cluster (in my case, my terraform bot account)
if you don't delete the nginx-ingress release and try to destroy EKS/VPC, it will be stuck deleting the subnets b/c the load balancer and ENIs will still be attached to them
@Andriy Knysh (Cloud Posse) that was it
I feel like the helm release was deleted first, but the load balancer that got provisioned might not be deleted w/it
If anything like that happens, we just manually delete the load balancer using AWS console, and then destroy/plan/apply work again
that’s what I did which irritates me. But good to know that’s the best we can get sometimes.
if you forget to delete Kubernetes releases and destroy the EKS cluster, then yes, you can delete the leftovers manually. It applies to nginx-ingress and a bunch of other releases like cert-manager and external-dns
I’ve got a question about the EMR Cluster module.
I would like to create a DNS record that points to all of the master instances, for HA purposes
It seems that TF doesn’t allow a way to do that, though? https://www.terraform.io/docs/providers/aws/r/emr_cluster.html#master_public_dns
Provides an Elastic MapReduce Cluster
even though the master group can be 3 nodes for HA, I'm wondering how I could use TF to assign a DNS name to all three …
did you try to create 3? what is the value of the master_public_dns output in this case?
It might be an array if you create 3 masters, so you can add a DNS record for all of them. (I don't remember if it was a single value or an array, although we did not test with 3 masters)
also, what AWS console shows for the master DNS if you create 3? Does it show all three domains, or just 1?
Just 1.
master_public_dns is a CNAME pointing to a single compute.internal ip address/record
2020-04-28
Hey there!
How would you set instance_market_options of aws_launch_template only for the stage environment? Define two similar launch templates with a count condition?
using a dynamic block
The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.
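A sketch of that dynamic block (the spot price and the env comparison are illustrative):
resource "aws_launch_template" "this" {
  name_prefix = "example-"
  image_id    = var.image_id

  # emit the block only on stage; produce zero blocks elsewhere
  dynamic "instance_market_options" {
    for_each = var.env == "stage" ? [1] : []
    content {
      market_type = "spot"
      spot_options {
        max_price = "0.05"
      }
    }
  }
}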
looks great! thanks
well …not great, it looks ugly, but works
hah, that is a verbose dummy value. that is how i'd do it except the for_each would fit on one line.
thanks again! multi line is because of pycharm auto-format
hey all, if you had to give a talk to a bunch of ppl for 30 min to an hour on infrastructure as code (terraform) and policy as code (open policy / cloud custodian), what are some things you’d mention ?
automated testing
how do you do automated testing with terraform? we recently started terratest for internal modules coming from the devops team but haven't pipelined it
terratest
we’re a few degrees behind testing but ill definitely add more information for testing, ty
code quality/validation. Sonarqube isn’t much help but terraform has lots of linting/etc
I’d make some kind of joke about ClickOps to start. Then I’d roll right into ‘this craze of DevOps’ and how that has taken over our industry, then I’d bounce right into the fact that infrastructure was jealous so now we have Infra As Code.
Then I’d go over how it speeds up deployments, allows for closer integration into the same tools that the developers use for version control, and why it rocks
or something like that
I also generally am not allowed to talk too much for a reason so take what I say with several grains of caution…
I love the term ClickOps. I’ll definitely use that. I have some deployment speed slides, version control, and other bennies
lol wuttt what did you do Zachary
I’m way too into this stuff, it can be great for the right crowd, but scary to others
Sounds like you have a fun task regardless.
If you get it recorded or something you should share with the rest of us to live vicariously through ya
haha yea definitely fun! i’m trying to make it less company specific so in the future the deck can at least be released
If you want to be a total nutcase about it you can create your deck with hugo, markdown, and reveal.js (my own example here: https://github.com/zloeber/deck.loeber.live)
Zachary Loeber’s Presentations. Contribute to zloeber/deck.loeber.live development by creating an account on GitHub.
not quite the same as a pptx file…
you should add that link to the repo description at the top
that's beautiful looking! I will definitely have to convert this over and put it in my own repo called presentations or similar. Great portfolio booster too
ha, I just haven’t created a professional powerpoint in like 20 years and found this as a geeky alternative
Nice!
2020-04-29
Does someone know if I can import an existing rds instance into my terraform stack ?
into this module: https://github.com/terraform-aws-modules/terraform-aws-rds
Terraform module which creates RDS resources on AWS - terraform-aws-modules/terraform-aws-rds
It needs to follow the import example
terraform import module.my.terraform.reference.aws_db_instance.this my-rds-name-in-aws
so maybe something like this
terraform import module.db.module.db_instance.aws_db_instance.this my-rds-name-in-aws
where my-rds-name-in-aws is your RDS instance name
ouch !
Anyone else run into this? If so, would appreciate upvotes
https://github.com/terraform-providers/terraform-provider-aws/issues/11801
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
@RB I have not, but obviously the ordering problem in Terraform sucks. I did just notice yesterday that CP calls this out in their kms-key module:
https://github.com/cloudposse/terraform-aws-kms-key/blob/master/variables.tf#L76
A valid KMS policy JSON document. Note that if the policy document is not specific enough (but still valid), Terraform may view the policy as constantly changing in a terraform plan. In this case, please make sure you use the verbose/specific version of the policy.
So maybe you need to provide all policy key / values?
Terraform module to provision a KMS key with alias - cloudposse/terraform-aws-kms-key
In this case, please make sure you use the verbose/specific version of the policy
i will try this!
It looks like you are doing that though after reading your issue more closely… so I’m not sure what is up there.
I supply a policy like so in my module and I don’t have that issue:
https://github.com/masterpointio/terraform-aws-ssm-agent/blob/master/main.tf#L248
Maybe you need the top level version and id? That would be odd though..
Terraform module to create an autoscaled SSM Agent instance. - masterpointio/terraform-aws-ssm-agent
the https://www.terraform.io/docs/providers/aws/d/iam_policy_document.html data source provides the version argument for you, and I've already set the policy_id which maps to Id in JSON
Generates an IAM policy document in JSON format
you make a good point tho, perhaps I should try it with the raw JSON to see if it still has the issue… i did see this reordering happen on another usage of this data source for an iam policy but once applied, and modified, i no longer saw the reordering in the plan
¯\_(ツ)_/¯
Haha yeah — I dunno. The ordering issue is a PITA wherever it rears its ugly head.
i have a gross python script for it now as a workaround… trying to get this put into our atlantis builds so we can use it in custom workflows
Does anyone know the right git repo or forum to provide feedback on terraform cloud? I have some general things as a user I want to provide feedback on regarding usability but can’t figure out the right place to get that heard.
or, ask on twitter for a poc. hashicorp seems pretty responsive on twitter…
You can also use the HashiCorp community forum on Discuss: https://discuss.hashicorp.com/c/terraform-core/27
For terraform cloud ? I’m specifically referring to the new website service. Some usability issues I’ve wanted to give feedback on and see if it helps.
Yes, Terraform Cloud & Enterprise questions can be categorized under the "Terraform Cloud & Enterprise" subcategory.
Cool. That makes it easy. I'm so used to forums having low priority with staff, and them wanting github issues or the like. I'll put some initial feedback there and keep up to date over there then
Yeah, this is kind of a grey area though as the Enterprise platform isn’t open source, so it’s hard to reach via Github. Try there first.
I’m going crazy!!!! No matter what I do, this thing keeps trying to create webhooks!!!!!!!!! https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/0.14.0/main.tf#L54
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
Error: GET <https://api.github.com/orgs/xxxxxx>: 401 Bad credentials []
on .terraform/modules/atlantis.ecs_web_app.ecs_codepipeline.github_webhooks/main.tf line 1, in provider "github":
1: provider "github" {
Error: GET <https://api.github.com/orgs/xxxx>: 401 Bad credentials []
on .terraform/modules/atlantis.webhooks/main.tf line 1, in provider "github":
1: provider "github" {
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
I’m trying to make it so that I created the webhooks after the fact
terraform apply -var-file="us-east-2.tfvars" -var='webhook_enabled=false' even if I do that it just keeps trying to enable it
I must be getting old
Is it just the github provider being invoked/initialized? Did you provide correct creds to that provider?
Well the module is being invoked even when the enabled flag is not true
I think that happens with providers unfortunately. There is no count param so you can't turn them off.
At least I think I remember running into that and it being an issue.
Ohhh so they always get evaluated
Yeah. So maybe passing a valid github_organization and github_token will do the trick? Regardless, that's a pain.
I will
Pass it and see
yep, that worked
it's not creating any webhooks
so it is the provider that always gets evaluated first
annoying
I'm using https://github.com/cloudposse/terraform-github-repository-webhooks.git?ref=0.6.0 to create webhooks, but once the webhook is created and the user is changed from admin to read, terraform tries to read the /hook url for that repo and gets a 403. Is there a workaround for this so that terraform does not try to create the webhook every single time?
the bot needs to be an admin to create webhooks
after that, we usually switch it to Read
tested the module many months ago, did not see the issues with webhooks at the time
did you look at the tests for the atlantis module?
but then if I switch the bot user to read for that repo, it tries to create the webhook again
is this maybe different in github enterprise ?
after changing to read, this will happen
Error: POST <https://api.github.com/repos/xxxxxx/terraform-xxxxxxs-ecs-atlantis/hooks>: 404 Not Found []
on .terraform/modules/atlantis.webhooks/main.tf line 6, in resource "github_repository_webhook" "default":
6: resource "github_repository_webhook" "default" {
but I can pull with that user no problem
the bot/atlantis user I have is a Member in the org
not sure, never tested with GH enterprise
but yes, the bot needs to be an Admin at the time of provisioning
there is no way around that since to work with webhooks you have to be admin
I understand, the bot user needs Admin for provisioning, and I do not have a problem at the provisioning stage; webhooks get created and all. But after it is provisioned and I change the user's permissions to Member (can't create webhooks but can read the repo), that is when, if I run terraform plan, it fails with a 404 because it can't access the /hooks endpoint.
I do not have GitHub Enterprise, just Organizations (I think that is what it's called now)
the bot needs to be Admin for terraform plan as well
(that's not good, but we did not find a way out)
ok, let's set this straight: for terraform plan, the bot user needs admin; after webhooks are created, IF I need to run plan again, it needs admin AGAIN
looks like yes
(but we tested it many months ago, things could change, and GitHub has made a lot of changes in the last few months, especially for security and permissions)
Terraform project folks are looking for backend / provisioner maintainers: https://discuss.hashicorp.com/t/seeking-terraform-open-source-backend-maintainers/8113
Terraform is actively seeking maintainers for some of our remote state backends. As a maintainer, you’ll participate in pull requests and issues for a given area of Terraform. For example, you could be working on the Postgres Backend. There is no expectation regarding time commitments or response frequency at this point. In addition to pull requests and issues, you’ll be helping us establish guidance and grow our contribution program. If you’d like to participate in the Terraform project, hea…
Hi everyone, I'm trying to use the eks_fargate_profile module from github on an EKS cluster in which I will define 2 namespaces, so I need 2 fargate profiles (I could use one profile with 2 selectors, but this is not really practical from an automation point of view – if I want to create/delete a namespace, I have to modify the one profile rather than create/delete a separate one – plus there is an AWS-imposed limit of 5 selectors per profile, so 5 namespaces – not likely I would hit that limit but not unlikely either).
Unfortunately the fargate profile name generated by the module does not include the kubernetes_namespace value, so I end up with duplicate resources (like the IAM role):
Error: Error creating IAM Role rnd-poc-kim-fargate: EntityAlreadyExists: Role with name rnd-poc-kim-fargate already exists.
status code: 409, request id: 12f2e0cb-9028-43ac-8791-b03bf5398f2a
on .terraform/modules/eks_fargate_profile_default/main.tf line 35, in resource "aws_iam_role" "default":
35: resource "aws_iam_role" "default" {
Error: Error creating IAM Role rnd-poc-kim-fargate: EntityAlreadyExists: Role with name rnd-poc-kim-fargate already exists.
status code: 409, request id: cbe7771a-a866-45d1-a35e-b82727e84f15
on .terraform/modules/eks_fargate_profile_staging/main.tf line 35, in resource "aws_iam_role" "default":
35: resource "aws_iam_role" "default" {
I could of course clone the module code since it is standalone, but then I lose the benefit of fixes made by CloudPosse, so I'm wondering if there is a better way; maybe there is a workaround I'm not thinking of.
each module has namespace, stage, name and attributes. You can add var.attributes=["something"] to one of the modules so all the generated names/IDs will be in the format namespace-stage-name-attributes, which will be unique
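A sketch of what that looks like for the two profiles (the module source, omitted inputs like cluster name/subnets, and exact variable names should be checked against the module's README):
module "eks_fargate_profile_default" {
  source               = "git::https://github.com/cloudposse/terraform-aws-eks-fargate-profile.git" # pin to a release tag in practice
  namespace            = "rnd"
  stage                = "poc"
  name                 = "kim"
  attributes           = ["default"]   # generated names become rnd-poc-kim-default-...
  kubernetes_namespace = "default"
  # ... cluster name, subnet IDs, etc.
}

module "eks_fargate_profile_staging" {
  source               = "git::https://github.com/cloudposse/terraform-aws-eks-fargate-profile.git"
  namespace            = "rnd"
  stage                = "poc"
  name                 = "kim"
  attributes           = ["staging"]
  kubernetes_namespace = "staging"
  # ... cluster name, subnet IDs, etc.
}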
That's one of the first things I tried but it didn't work. However, given the other errors I had, it is possible I had other stuff incorrect that I have since fixed, so I will try again.
2020-04-30
Hello, I have built same-region cross-account VPC peering with Terraform. I added one of the accepter VPC's SGs to one of the requester VPC's SGs as an inbound rule with account-number/sg-id, and I have applied the TF resources. Now, on every TF plan run, TF wants to recreate this cross-account SG rule, and I never get to see the happy "Your infrastructure is up-to-date" message. It's annoying me
Anyone else faced this issue? I'm using the latest TF and latest aws provider versions
are you by any chance using both inline SG rules in your aws_security_group resource, and aws_security_group_rule to add additional rules to that SG?
@loren I'm using aws_security_group and adding a list of values to security_groups in the ingress {} block
can you share the .tf and the exact error output?
The SG config is in the main.tf of the module, and when I create the module template, I just pass var.redshift_ingress_security_group_ids as a list from tfvars
nothing really jumps out at me there… are you sure this is cross-account?
seems it's fixed in today's provider release, gonna test out https://github.com/terraform-providers/terraform-provider-aws/pull/11809
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
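For reference, the standalone-rule pattern loren was asking about looks roughly like this (a sketch, not the poster's actual config; the port and the cross-account reference format are assumptions worth verifying):
resource "aws_security_group" "redshift" {
  name   = "redshift"
  vpc_id = var.vpc_id
  # no inline ingress/egress blocks here - mixing inline rules with
  # aws_security_group_rule on the same SG is a common cause of perpetual diffs
}

resource "aws_security_group_rule" "redshift_ingress" {
  for_each                 = toset(var.redshift_ingress_security_group_ids)
  type                     = "ingress"
  from_port                = 5439
  to_port                  = 5439
  protocol                 = "tcp"
  security_group_id        = aws_security_group.redshift.id
  source_security_group_id = each.value
}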
Trying to create an S3 bucket for state, and a dynamodb with some tables. Works fine in one region; got this error trying in another one. What should I do next? Googled a lot.
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
Are you using AWS CLI creds from environment variables, configuration, or aws-vault?
@AugustasV — I’d run the following and try again:
function aws_reset_session() {
  for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN VAULT_PROFILE AWS_VAULT AWS_DEFAULT_REGION AWS_REGION AWS_SECURITY_TOKEN AWS_SESSION_EXPIRATION
  do
    unset ${var}
  done
}
Does anyone know why the standard is to put quotes around block labels (resource and module identifiers and their associated names)? It seems the whole community does so and I didn't question it… but I do not know why it's done when the quotes can be omitted, which seems cleaner to me.
This is what I’m talking about btw:
# What everyone does:
data "aws_iam_policy_document" "default" {
...
}
# What no one does, but is valid (and I personally like more):
data aws_iam_policy_document default {
...
}
The Terraform language has its own syntax, intended to combine declarative structure with expressions in a way that is easy for humans to read and understand.
was the latter valid before tf 0.12 / hcl 2?
Yeah, that was one possible answer I was expecting from this question: 0.11 enforced the quotes and 0.12 does not, but now everybody is in the mode of quoting the labels.
I guess because the Terraform docs do it.
And auto-complete – in my case in PyCharm – adds them.
Yeah — Just seems superfluous. I’m likely going to start doing away with it since it doesn’t actually serve a purpose.
does terraform fmt add them?
Let me check that.
tf fmt does not complain.
yeh i’ve started removing them and not quoting where it’s unneeded
@Chris Fowles Cool, somebody who is on-board!
where it’s unneeded
Is there anywhere where it is needed?
around strings
Hahaha okay, gotcha.
Hi, anyone facing trouble with terraform init for the aws provider?
terraform init
Initializing provider plugins...
- Checking for available provider plugins...
Registry service unreachable.
This may indicate a network issue, or an issue with the requested Terraform Registry.
Registry service unreachable.
This may indicate a network issue, or an issue with the requested Terraform Registry.
Warning: Skipping backend initialization pending configuration upgrade
The root module configuration contains errors that may be fixed by running the
configuration upgrade tool, so Terraform is skipping backend initialization.
See below for more information.
Error: registry service is unreachable, check <https://status.hashicorp.com/> for status updates
Error: registry service is unreachable, check <https://status.hashicorp.com/> for status updates
@sarkis you break terraform cloud?
@msharma24 looks like there was a partial outage today https://status.hashicorp.com/
Welcome to HashiCorp Services’s home for real-time and historical data on system performance.
@Erik Osterman (Cloud Posse) Terraform init is still being a PITA for me, I'm doing cp -r .terraform/plugins/ . around my templates lol as a workaround
Is it possible to exclude a tf file from being picked up by TF plan ?
I rename the files to .tf.bak to make TF plan ignore the file.
The default behaviour is that terraform will attempt to consume and process any file with the extension .tf. You can use the module pattern with conditionals to ignore or include certain resources.
I wish Terraform had a Puppet-like pattern - Modules, Roles and Profiles - but both are built for different purposes.
Agree, I’m just going to change the structure
I wanted to use the same backend since it's part of the same thing, but the provider/service has an annoying behavior that makes it really inconvenient
lol
Yeah it's F* 4am for me and I'm trying to finish a project and terraform init won't download the providers and plugins for me :(
disk full ?
rm -rf .terraform and try again ?
maybe hashicorp CDN is having issues affecting your region
do you have a server on the other side?
Yeah seems like it's a regional issue, I scp'ed .terraform from a us-east-1 ec2 finally
terraform cloud ?
Nopes
weird
instead of excluding a .tf file, use count/for_each and vars to disable the resources in the .tf…?
I did that first, but this is a webhook in github that requires an admin user to check and create the hook. Once the user is a member, it can't read the /hooks url, so it tries to create it again
gonna have to throw an XY problem flag on that one
lol
Anyone facing issues with terraform init taking ages to download the provider and plugins ?
No, but you can configure centralized provider/plugins so you don’t have to keep fetching them
# Save all terraform provider plugins to a single location
export TF_PLUGIN_CACHE_DIR="${HOME}/.terraform.d/plugin-cache"
Thanks Zach this is amazing
I recall reading about this. I need to do this! much better
Link to docs on this. Note they recommend, for whatever reason, using the CLI configuration file for this value
Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.
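The CLI-config equivalent of that environment variable (this is the documented setting name):
# ~/.terraformrc (or terraform.rc on Windows)
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"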
Hi all, is it possible to dynamically populate a list of resources inside an "aws_iam_policy_document"? I'm trying to do something like this, with a list of arns passed into the module.
s3_bucket_arn = [arn1, arn2, arn3]
resources = ["${var.s3_bucket_arn}", "${var.s3_bucket_arn}/*"]
but I keep running into the following error:
"is tuple with 3 elements Cannot include the given value in a string template: string required."
Any ideas?
formatlist()
resources = flatten(var.s3_bucket_arn, formatlist("%s/*", var.s3_bucket_arn))
hmmm, something isn’t right still. I’m getting the following error with that
220: resources = flatten(var.s3_bucket_arn, formatlist("%s/*", var.s3_bucket_arn))
Function “flatten” expects only 1 argument(s).
the whole policy statement looks like this right now:
statement {
  effect = "Allow"

  actions = [
    "s3:PutObject",
    "s3:GetObject",
    "s3:GetObjectVersion",
  ]

  resources = flatten(var.s3_bucket_arn, formatlist("%s/*", var.s3_bucket_arn))
}
oh yeh sorry
missed some square brackets
should be flatten([ blah, blah ])
it needs a list of lists
stylistically i’d recommend changing your variable name to something like s3_bucket_arns or something that indicates that it’s a collection of arns not just one
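Putting the bracket fix and the rename together, the working line would look roughly like:
resources = flatten([var.s3_bucket_arns, formatlist("%s/*", var.s3_bucket_arns)])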
ah sweet, that worked. and yes, i shortened the name for this example.
thanks so much for the help man!!
no worries