#terraform (2021-03)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-03-01
Before I went off to develop a method I thought I would reach out and ask for some help/guidance. Here's the situation:
I have a unique scenario where the IAM role needs to have the "Name" tag value be all capitalized to match the AD naming schema requirement (i.e., SERVICE-TEAM-ROLE).
I am using terraform-null-label module for everything and have the name = module.custom_role_label where I can …
does anyone know how you can run terraform state show
on module.gitlab_repository_webhook.gitlab_project_hook.this[15]
?
terraform state show "module.gitlab_repository_webhook.gitlab_project_hook.this[15]"
should work
i was missing the quotes thanks @RB
it’s not obvious but any time you use square brackets, you’ll need quotations
best to default to using quotations all the time
makes sense
in tf 0.13 can you do a for_each
on a module?
yes
the issue i have is each time it needs to execute against a different account
I think for loops are totally random
so every time you execute them it will pick a different account
I do not know if there is a way to force an order
maybe @loren knows?
i’m guessing getting the provider right is the issue, more than order?
there is not yet a way to use an expression on a module’s “providers” block… it is static, even if the module is using for_each
so i don’t think you can do what you are thinking yet…
you could maybe template the tf files from outside terraform, not using for_each, but creating an instance of the module per account and setting the provider
good point I was assuming he had that working
i want to loop around each account in a list and execute a module
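A minimal sketch of the static per-account workaround mentioned above (account IDs, role names and the module path are made up):
provider "aws" {
  alias  = "account_a"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform" # hypothetical role
  }
}

provider "aws" {
  alias  = "account_b"
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform" # hypothetical role
  }
}

# One module block per account, since the "providers" argument must be static
module "thing_account_a" {
  source    = "./modules/thing" # hypothetical module
  providers = { aws = aws.account_a }
}

module "thing_account_b" {
  source    = "./modules/thing"
  providers = { aws = aws.account_b }
}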
if I have an aws_iam_policy_document and a JSON policy statement, what’s the best way to append the latter to the former?
On the aws_iam_policy_document, use source_json and/or override_json in pre-3.28.0 of aws provider. Or with >=3.28.0, use source_policy_documents and/or override_policy_documents
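A minimal sketch of the >=3.28.0 approach, assuming a base aws_iam_policy_document named "base" and a raw JSON policy string in var.extra_policy_json (both made-up names):
data "aws_iam_policy_document" "combined" {
  # Requires AWS provider >= 3.28.0; statements from all listed documents are merged
  source_policy_documents = [
    data.aws_iam_policy_document.base.json,
    var.extra_policy_json,
  ]
}

resource "aws_iam_policy" "this" {
  name   = "combined-policy" # hypothetical
  policy = data.aws_iam_policy_document.combined.json
}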
Meant to open an issue @Erik Osterman (Cloud Posse), with 3.28.0, that aggregator module can probably be archived…
ah cool, is this related to that AWS provider release that made you cry tears of joy?
Yeah, haha, that was the first of them… The second was managing roles with exclusive policy attachments
wow. I wish I waited to rewrite this policy statement in HCL/jsonencode
Great tool for if you ever need to do that in the future @Alex Jurkiewicz: https://github.com/flosell/iam-policy-json-to-terraform
Small tool to convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document - flosell/iam-policy-json-to-terraform
I’m thinking about doing something horrible with jsonencode and jsondecode
This is an interesting use-case and it makes sense what you’re trying to do. Unfortunately, we don’t support selectively manipulating the case for an individual tag.
I suggest opening an issue to see if we can collect support for it. In the meantime, the best workaround I can suggest would be using a local and doing your own transformations there.
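A minimal sketch of that workaround, assuming a null-label instance named custom_role_label (the role and its trust policy are placeholders):
module "custom_role_label" {
  source  = "cloudposse/label/null"
  version = "0.24.1"
  name    = "service-team-role"
}

locals {
  # null-label lower-cases its outputs by default, so upper-case just the value you need
  uppercase_name = upper(module.custom_role_label.id)
}

resource "aws_iam_role" "this" {
  name = local.uppercase_name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
  tags = merge(module.custom_role_label.tags, { Name = local.uppercase_name })
}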
2021-03-02
Does anyone know how to work with dynamodb backup and restore using terraform ? Or by using aws backup with terraform?
I started using the /terraform-aws-ecs-codepipeline module yesterday and I ran into a couple of issues. The first, more immediate one is that when the Deploy to ECS stage runs it will set the desired count to 3 containers. I am not setting this anywhere in my configuration; I actually have it set to 1 as this is for dev. I am running EC2 to host my containers.
basically it's just stuck here till it times out
is there some setting in my ecs service that im missing?
i think it may have to do with ec2 capacity
the desired count being mysteriously set to 3 needs to be solved
create a ticket with a Minimum, Reproducible Example and then someone will investigate eventually
this is probably upstream from this package tbh
i’ve reviewed all the tf code and i don’t see anything that would set a desired count at all. the module doesn’t control my service or anything and the Deploy stage part of the code is pretty minimal.
There's just not enough to go on. The cloudposse/terraform-aws-ecs-codepipeline module doesn't even create the tasks. You must be using other modules. Provide as much context as possible. We have hundreds of modules.
(and use threads)
i think i found the culprit actually, im running a next js deployment, and im doing a build on the container once it starts. that build appears to trigger the autoscale based on cpu usage
the reason the deployment isn’t removing the previous deployment looks like a host capacity issue
that makes sense
i gotta figure out how to get my app autoscale policy to scale up more ec2s to fit the service needs
Generate documentation from Terraform modules in various output formats.
Yes! this is currently used in all the cloudposse tf modules
and in my company modules
Ah they launched a docs site though — that looks fresh.
nice
(didn’t know it supported a config)
Yes, and you can get the header from a file like doc.tf
and add it to README.md
Want to share some cool stuff about this today during announcements in #office-hours?
Curious if anyone has done a TG -> TF migration. I’m about to embark on my own, and if you have any info to share it would be appreciated. I started with TF, but its been a couple years, so mostly trying to get my head around replacing the abstractions, for one i’ll be using https://github.com/cloudposse/terraform-provider-utils/blob/main/examples/data-sources/utils_stack_config_yaml/data-source.tf, for a complementary solution to TG going up two levels by default. Thanks for any input. (Edited due to response in thread, posted the wrong repo)
The Cloud Posse Terraform Provider for various utilities (E.g. deep merging) - cloudposse/terraform-provider-utils
https://github.com/cloudposse/terraform-yaml-config is more or less generic module to work with YAML files and deep-merge them (and using imports to be DRY)
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
we have a more opinionated one for YAML stack configs (Terraform and helmfile vars defined in YAML files for stacks of any level of hierarchy ) https://github.com/cloudposse/terraform-yaml-stack-config
Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …
ahh, crud, thats what i meant to post.
Thanks for chiming in and correcting that.
it uses this TF provider https://github.com/cloudposse/terraform-provider-utils/blob/main/examples/data-sources/utils_stack_config_yaml/data-source.tf to deep-merge YAML configs (TF deep-merging was very slow, Go is much faster)
(having said that, we did not try to convert TG to TF )
That makes complete sense. The stack module which has the child modules seems like a great fit to the default actions of TG. It also seems like it can solve my remote_state configuration issue, which was the second thing i was concerned about.
Its all in a branch, of course, and if i can wipe enough of company data and still be useful to others, i’ll make sure i do a write up and share here.
It's all about the Terraform modules that you are using. Start with one module at a time and replace Terragrunt blocks with Terraform blocks:
• backend block >> backend.tf
• provider block >> provider.tf
• inputs block >> terraform.tfvars
• locals block >> locals.tf, and for each env in Terragrunt create an env folder in Terraform (see the sketch below)
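A minimal sketch of that mapping, using made-up bucket/key values; the Terragrunt side is shown as comments:
# terragrunt.hcl (before):
#   remote_state {
#     backend = "s3"
#     config = {
#       bucket = "my-tfstate"
#       key    = "app/dev/terraform.tfstate"
#       region = "us-east-1"
#     }
#   }
#   inputs = { instance_type = "t3.micro" }

# backend.tf (after):
terraform {
  backend "s3" {
    bucket = "my-tfstate"
    key    = "app/dev/terraform.tfstate"
    region = "us-east-1"
  }
}

# terraform.tfvars (after):
#   instance_type = "t3.micro"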
check this sample layout https://github.com/mhmdio/terraform-infracost-atlantis
so you need to map back the abstraction from Terragrunt into Terraform, let me know if you need any help
Thanks!
2021-03-03
does anyone know of a module or anything that can lock down the default security group in all regions within an account?
delete them!
can you do that via terraform though?
nope, try cloud-nuke
my favourite tool
Hi all, can anybody share their wisdom on the best way to rotate aws_iam_access_keys? Do most people taint the resource/module that created them, or do you have two resources, e.g. aws_iam_access_key.keyone and aws_iam_access_key.keytwo, each gated by a count on a var.rotate flag, with an output switching between the two? Once applied you would then roll your environment and then set keyone to false. When it comes to rolling them again in the future you'd state-move keytwo to keyone and repeat?
AWS SSO would be better alternative
I think managing credentials that rotate with terraform is a bad idea.
Thanks for taking the time to leave comments. I'm always open to learning new ways to do things, so could you expand on your answers to suggest how I'd best manage things such as SMTP authentication details within applications?
I can’t always use an instance role as the application requires a username and password to be configured or they won’t start. Typically I create a user and then user data grabs the ID and ses_smtp_password_v4 details from secret store on boot up. This allows us to change the details every time we flatten and build the environment. My challenge comes when I get to a Prod system where I need to leave the access keys in place while a new ASG spins up and replaces the old machines/keys. I can simply do this by removing the current key from state, run the TF apply again but this never feels overly right. Hence my original thoughts about two variables that I use but I’m interested to hear an alternative approach.
Anybody got further thoughts on this?
I think you should move this out of Terraform, use lambda function with boto3 to provide temp credentials to you ASG when needed, otherwise use something like HashiCorp vault to implement that.
Thanks Mohammed, interesting idea, I’ll give it some further thought as I can see how that could work.
can hashicorp vault actually produce IAM secret key & access keys? doesn't seem so with the kv engine. You'd have to re-gen the new keys and store them in the kv engine as a new set.. basically the multi-step plan @Gareth was initially thinking about
The issue I see with tainting it is stopping the removal of the current key before we’ve rolled our environment to get the new key. Guess you could target the creation or delete it from the state before applying but both options feel fudgy.
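A minimal sketch of the two-resource toggle described above (user name, variable names and the SES output are illustrative assumptions, not a recommendation):
resource "aws_iam_user" "smtp" {
  name = "app-smtp" # hypothetical user
}

variable "keyone_enabled" {
  type    = bool
  default = true
}

variable "keytwo_enabled" {
  type    = bool
  default = false
}

# Both keys can exist at once during a rotation window
resource "aws_iam_access_key" "keyone" {
  count = var.keyone_enabled ? 1 : 0
  user  = aws_iam_user.smtp.name
}

resource "aws_iam_access_key" "keytwo" {
  count = var.keytwo_enabled ? 1 : 0
  user  = aws_iam_user.smtp.name
}

# Consumers read keytwo once it exists; drop keyone only after the environment has rolled
output "ses_smtp_password_v4" {
  value     = var.keytwo_enabled ? aws_iam_access_key.keytwo[0].ses_smtp_password_v4 : aws_iam_access_key.keyone[0].ses_smtp_password_v4
  sensitive = true
}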
Hello,
In the eks-iam-role module, is there a recommended way to pass in an AWS managed policy? An example use case would be creating an OIDC role for the VPC CNI add-on as described here. Currently all I can think of is something like:
data "aws_iam_policy" "cni_policy" {
arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}
module "vpc_cni_oidc_role" {
source = "cloudposse/eks-iam-role/aws"
version = "x.x.x"
[...a bunch of vars...]
aws_iam_policy_document = data.aws_iam_policy.cni_policy.policy
}
Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role
The Amazon VPC CNI plugin for Kubernetes is the networking plugin for pod networking in Amazon EKS clusters. The CNI plugin is responsible for allocating VPC IP addresses to Kubernetes nodes and configuring the necessary networking for pods on each node. The plugin:
does anyone know what IAM permissions / RBAC is required to view workloads in the EKS cluster via the AWS console? I can’t for the life of me find it documented anywhere!
@Steve Wade (swade1987) This might help: https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting_iam.html
This topic covers some common errors that you may see while using Amazon EKS with IAM and how to work around them.
Hi all, I am wondering how you are managing the S3 state files
S3 with dynamodb locking
Versioned s3 bucket
yeah, cloudposse has a module for this
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
which works fine
but you cannot use a variable in the file it generates
backend "s3" {
region = "us-east-1"
bucket = "< the name of the S3 state bucket >"
key = "terraform.tfstate"
dynamodb_table = "< the name of the DynamoDB locking table >"
profile = ""
role_arn = ""
encrypt = true
}
I want to use a variable here because the profile can change, and the key as well
last time i checked, that module cannot reuse an s3 bucket unfortunately
what I can do is copy over the backend.tf file it generates and replace the key there
per thing that I need
I don’t want to put everything in a single state file
but I want to use a single bucket
i run this script
# bucket name
export tfstateBucket=mybucket
# get repo name
export repoName=$(git config --get remote.origin.url | cut -d '/' -f2 | cut -d '.' -f1)
# get current directory
export repoDir=$(git rev-parse --show-prefix | rev | cut -c 2- | rev)
# create backend
cat <<EOF > backend.tf
terraform {
required_version = ">=0.12"
backend "s3" {
encrypt = true
bucket = "$tfstateBucket"
dynamodb_table = "TerraformLock"
region = "us-west-2"
key = "$repoName/$repoDir/terraform.tfstate"
}
}
EOF
# see the newly created file just to be safe
cat backend.tf
it always gives me a unique key based on the repo name and repo path
youll have to change tfstateBucket
that’s easy, we have multiple customers
and I use the customername as a single identifier
hah you can use it
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
they generate the file based on variables
so a combo of init, apply and init again should work
¯_(ツ)_/¯
hmmm interesting
it wont work for me yet as the bucket is not configurable but im glad it works for you
don’t you use a single bucket to put all the state files in per project ?
sepparated by keys ?
yep
that module would have to be consumed in every one of our modules for the backend.tf to be generated, which would then create a new bucket and dynamodb table for each module
what id want is to reuse a single dynamodb and state bucket for every terraform module
yeah
same here
but i dont believe the cloudposse tf module supports that (yet)
the rest can stay as is
im sure everyone would be open to a PR. probably no one has gotten around to it.
because I get in trouble as well when I have a single AWS account that we use as a internal account and we want to add a new project
then you need to import the current bucket and dynamodb table
and then it gets a bit messy
gross
i wouldnt import a bucket and dynamodb table to more than one module
i also wouldnt import any resource to more than one module
thats an antipattern
thanks for the suggestion
I am still a bit newbie in terraform
there’s a lot to learn but basically resources should be stored in a single state. if they are stored in multiple states then multiple modules may try to modify the same resource and then have conflicting states
keep shared resources in their own modules.
I do
what I am puzzled about is what the best approach is for:
I have one AWS account
we have several machines under that account
that we use internally
for all external customers, they have their own account
which makes handling a state file easy
for the multiple projects under our own internal account
I am puzzled how to handle this the best
a bucket per internal project?
or prefixes in one big s3 bucket
this is what i do with multiple accounts.
• 1 tfstate bucket
• 1 dynamodb table
• reuse the bucket with a key of the repo name
• inside the repo name key, use a repo path key
same shared bucket across all accounts
same shared dynamodb across all accounts
the unique identifier is the key / prefix
so you dynamically create the s3state.tf that pushes the state to the backend right ?
no. i dynamically create the backend.tf using the script i shared earlier
then it magically works
nice
in the past I was the main creator of the terraform configs, now I need to share work with co-workers, that is better in one way but gives me way more work
but in the end, we will all benefit (I hope so)
divide and conquer!
We currently have the concept of a centralized aws account that manages buckets & dynamo (via terraform) as well as other things unrelated to this conversation (logging, monitoring, etc..). Using terraform workspaces & assume_role in the provider - you can do exactly what you’re looking for. This may be a bit more advanced and makes assumptions about having SSO and other policies in place already IIRC. I’ll try to provide a scrubbed example shortly.
nice thanks MattyB
Here’s an example I found for the provider portion: https://gist.github.com/RoyLDD/156d384bd33982e77630b27ff9568f63
Will try to get back to you about the terraform backend. Gotta run to a meeting..
Thanks for the convo
Back to it…using the examples above and a backend config like so:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket = "prod-terraform-state-bucket"
    key    = "terraform.tfstate"
    region = "us-west-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "prod-terraform-state-lock-table"
    encrypt        = true
  }
}
When you use terraform workspace with this config it stores the state in the bucket under env:/<workspace>/terraform.tfstate in the centralized account
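A minimal sketch of how the workspace name can drive the per-account role while the backend stays static (the role naming pattern is a made-up assumption):
# State layout with the backend above (default workspace_key_prefix is "env:"):
#   default workspace -> s3://prod-terraform-state-bucket/terraform.tfstate
#   "staging"         -> s3://prod-terraform-state-bucket/env:/staging/terraform.tfstate
#
#   terraform workspace new staging
#   terraform workspace select staging

provider "aws" {
  region = "us-west-1"
  assume_role {
    # hypothetical per-environment role in the target account
    role_arn = "arn:aws:iam::123456789012:role/terraform-${terraform.workspace}"
  }
}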
I avoided workspaces when i first started b/c they were rather opaque. Do you use these as a go to?
Yep. We use it with every account. 1 coworker copy/pasted between staging and prod directories instead…and that account is a mess. He also didn’t use CloudPosse modules, and developed his own instead. It’s seen as the red headed step child that nobody wants to touch. lol
Fair enough. I plan on using the modules plus some tips here and in #office-hours today. trying to get as many opinions as possible before i start changing everything.
Gotcha. It’s straightforward and IMO helps keep environments as close to the same as possible. If you want a smaller DB in staging than you need in prod just set a different variable. Super simple. I don’t know what copy/pasting between directories (dev/stage/prod) buys you.
Not something that drives me. I’m more interested in keeping environments clean and reduce copy/paste.
right now i have none
Nice! Bug free
yep, and that’s what i’m trying to do with plain TF, with some CP module assistance.
now all abstractions are handled via TG.
it’s a pain that terraform does not allow variables here
today’s dumb terraform code. I had some Lambda functions defined as a map for for_each purposes, like
locals {
lambda_functions = {
foo = { memory=256, iam_policies = tolist([...]) }
bar = { memory=128 }
}
}
and I wanted to loop over any custom iam policies defined to add them to the execution role. This is the simplest for_each loop I could write that worked:
dynamic "inline_policy" {
for_each = length(lookup(local.lambda_functions[each.key], "iam_policies", [])) > 0 ? each.value.iam_policies : []
content {
name = "ExtraPermissions${md5(jsonencode(inline_policy.value))}"
policy = jsonencode(inline_policy.value)
}
}
You'd think for_each = lookup(local.lambda_functions[each.key], "iam_policies", []) would work, but it doesn't, because you can't build a default value with the same type as the iam_policies values from your data type.
Sometimes, I wish Terraform never tried to add strict typing
terraform “static” typing is annoying in many cases
try(local.lambda_functions[each.key]["iam_policies"], [])
maybe that ^ will work
for example (now that i’m back at a keyboard…)
locals {
lambda_functions = {
foo = { memory=256, iam_policies = [...] }
bar = { memory=128, iam_policies = [] }
}
}
...
dynamic "inline_policy" {
for_each = local.lambda_functions[each.key].iam_policies
content {
name = "ExtraPermissions${md5(jsonencode(inline_policy.value))}"
policy = jsonencode(inline_policy.value)
}
}
My first question is about the terraform-aws-rds-cluster module. I see the input "enabled_cloudwatch_logs_exports" we can use to select which logs to send to CloudWatch, but I don't find any input related to the Log Group the logs will be sent to. Any clue?
you could set it like this
enabled_cloudwatch_logs_exports = ["audit", "error", "general", "slowquery"]
the docs are related to CloudWatch log groups for RDS
that is where you can find the info and details of the valid log groups
Thanks @jose.amengual that works great. Actually what I didn't get is that I don't have to declare any CloudWatch Log Group. So I used enabled_cloudwatch_logs_exports = ["postgresql"] and everything has been fine! Thanks again.
np
Thanks.
is there a way to integrate “cloudposse/ecs-codepipeline/aws” module with SSM Parameter Store to feed the build image environment vars
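The thread doesn't answer this, but the underlying CodeBuild feature looks roughly like the sketch below (project name, role ARN and parameter path are made up; whether the cloudposse module exposes an input for this would need checking):
resource "aws_codebuild_project" "build" {
  name         = "example-build"                                   # hypothetical
  service_role = "arn:aws:iam::123456789012:role/codebuild-example" # hypothetical role

  artifacts {
    type = "CODEPIPELINE"
  }

  source {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/standard:4.0"
    type         = "LINUX_CONTAINER"

    # CodeBuild resolves the SSM parameter at build time; value is the parameter name
    environment_variable {
      name  = "DOCKER_PASSWORD"
      value = "/myapp/docker/password"
      type  = "PARAMETER_STORE"
    }
  }
}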
FYI terraform test is WIP >> https://github.com/hashicorp/terraform/pull/27873
During subsequent release periods we're hoping to perform more research and development on the problem space of module acceptance testing. So far we've been doing this research using out-of…
2021-03-04
I created a VSCode Terraform IaC Extension Pack to help with developing Terraform templates and modules. Please test and give feedback: https://marketplace.visualstudio.com/items?itemName=mhmdio.terraform-extension-pack
Extension for Visual Studio Code - Awesome Terraform Extensions
indent-rainbow
FTW
@Andriy Knysh (Cloud Posse) my greetings. I'm checking out the terraform-aws-backup module. I also want to keep a cross-organisational copy. Would it make sense to use the module on both accounts or use a simple aws_backup_vault resource on the receiving end?
Hi @maarten
what resources do you need to backup?
dynamodb (cmk)
did you look at https://aws.amazon.com/blogs/database/cross-account-replication-with-amazon-dynamodb/ ?
Hundreds of thousands of customers use Amazon DynamoDB for mission-critical workloads. In some situations, you may want to migrate your DynamoDB tables into a different AWS account, for example, in the eventuality of a company being acquired by another company. Another use case is adopting a multi-account strategy, in which you have a dependent account […]
I don’t think this module https://github.com/cloudposse/terraform-aws-backup supports https://docs.aws.amazon.com/aws-backup/latest/devguide/cross-account-backup.html
Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services such as EBS volumes, RDS databases, Dy…
Using AWS Backup, you can back up to multiple AWS accounts on demand or automatically as part of a scheduled backup plan. Cross-account backup is valuable if you need to store backups to one or more AWS accounts. Before you can do this, you must have two accounts that belong to the same organization in the AWS Organizations service. For more information, see
this https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/backup_global_settings needs to be added (and maybe something else)
having said that, you can add the above to the top-level module
and then yes, you create a separate aws_backup_vault
and use https://github.com/cloudposse/terraform-aws-backup/blob/master/variables.tf#L42 to copy into it
Terraform module to provision AWS Backup, a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services such as EBS volumes, RDS databases, Dy…
(IAM permissions must be in place)
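A rough sketch of the pieces involved (the provider alias, vault name and settings key are assumptions; per the discussion above the cloudposse module doesn't wire this up yet):
# Enable cross-account backup on the organization/source side
resource "aws_backup_global_settings" "this" {
  global_settings = {
    "isCrossAccountBackupEnabled" = "true"
  }
}

# Second provider alias for the receiving account (credentials/assume_role go here)
provider "aws" {
  alias  = "backup_target"
  region = "us-east-1"
}

# Vault in the receiving account
resource "aws_backup_vault" "replica" {
  provider = aws.backup_target
  name     = "cross-account-replica"
}

# In the source account's backup plan rule, a copy_action would replicate
# recovery points into the remote vault, e.g.:
#   copy_action {
#     destination_vault_arn = aws_backup_vault.replica.arn
#   }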
I noticed that the github_webhook module has an inline provider; this prevents any parent modules from being used in a for_each, for example "cloudposse/ecs-codepipeline/aws" is affected by this. I was not able to spin up multiple codepipelines in a loop unless I removed all the github webhook stuff from it. A separate issue is that the whole webhook thing doesn't work with my org for some reason. When it tries to post to the github api it's missing the org, so it ends up looking like https://api.github.com/repo//repo-name/hooks where the // should be the org. Maybe there is some documentation about required parameters, but if you test the examples with 0.23 and terraform 0.14, it won't work.
Anyone familiar with the https://github.com/cloudposse/terraform-aws-efs module? I cannot figure out how it wants me to pass https://github.com/cloudposse/terraform-aws-efs/blob/master/variables.tf#L12
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs
Word.
still not sure how to pass a map of maps like that
I’ve tried tons of variations on..
access_points = {
  jenkins = {
    posix_user = {
      uid = 1000
      gid = 1000
    }
    root_directory = {
      path = "/jenkins"
      creation_info = {
        owner_gid   = 1000
        owner_uid   = 1000
        permissions = 777
      }
    }
  }
}
but they all error out with things similar to
The given value is not suitable for child module variable "access_points"
defined at .terraform/modules/efs/variables.tf:12,1-25: element "jenkins":
element "root_directory": all map elements must have the same type.
this is the issue with the latest TF versions
they introduced so-called “strict” type system
all items in a map must have the same types even for all complex items
try to make all map entries have the same fields, and set those not in use to null
posix_user = {
  uid  = 1000
  gid  = 1000
  path = null
  creation_info = {
    owner_gid   = null
    owner_uid   = null
    permissions = null
  }
}
root_directory = {
  uid  = null
  gid  = null
  path = "/jenkins"
  creation_info = {
    owner_gid   = 1000
    owner_uid   = 1000
    permissions = 777
  }
}
something like that ^
that looks horrible.
it’s not pretty (and might not work, in which case we’ll have to redesign the module)
we can separate those into two vars
PRs are always welcome
If I was able to understand what this was doing, i wouldnt be here asking
yea, but you are trying to use the feature of the module which was tested with TF 0.12
TF 0.13 and up introduced a lot of changes
so the feature needs to be updated to support TF 0.13/0.14
and the only ways around are: 1) use the ugly syntax above; 2) open a PR to separate the map into two variables
@Justin Seiser would you like to open a PR with the changes? We’ll help with it and review
I am not capable of writing the PR.
I see you opened an issue, thanks (we’ll get to it)
Given the map:
map = {
events = ["foo"]
another_key = "bar"
}
How would you go about appending "baz"
to the list in the events
key so that you end up with:
new_map = {
events = ["foo", "baz"]
another_key = "bar"
}
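One way to do it, using merge() and concat() (the local names are just for illustration):
locals {
  map = {
    events      = ["foo"]
    another_key = "bar"
  }

  # merge() replaces the "events" key with a new list built by concat()
  new_map = merge(local.map, {
    events = concat(local.map.events, ["baz"])
  })
}

output "new_map" {
  value = local.new_map
}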
Hello
Can someone please provide the upgrade steps from 0.12.29>0.13.6?
In regards to CloudPosse Terraform modules or with Terraform in general?
- Start your morning with an extra hit of caffeine.
- Be thankful it’s not as bad as 0.11 to 0.12
it should work right away
Terraform In general
including Cloudposse modules
most of the modules are 0.13-0.14 compatible
I am running the following steps:
tfenv use latest:^0.13
terraform 0.13upgrade
in your modules?
yes
that is fine
then you need to update the source of cloudposse modules if one of them does not work
Is this mandatory to run terraform 0.13upgrade?
Even if I just run these, it upgrades: tfenv use latest:^0.13, terraform init, terraform apply
in your modules probably
you need to change a few things
the providers and such
so it's better to run it
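The main 0.13 change is typically the explicit required_providers block; a minimal sketch of what 0.13upgrade tends to leave you with:
terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.0"
    }
  }
}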
For upgrading 0.13.6 -> 0.14.5: tfenv use latest:^0.14, terraform init, terraform apply
Are these steps good?
you still need to run the upgrade command I think
..and if our modules are in their own git repo, once the terraform init and terraform apply succeed, we'd have to commit in a feature branch, then merge to master, correct?
git checkout -b upgrade_to_terraform013
tfenv use latest:^0.13
terraform init
terraform apply
git commit -m 'upgrade module to terraform 0.13.6'
git tag -a "v1.5" -m "v1.5" #(or whatever your next v is)
git push --follow-tags
Upgrading to Terraform v0.14
Looks like the upgrade step is not needed for the 0.13 -> 0.14 upgrade
i use this all over the place at my current job and it works well
2021-03-05
hi guys, anyone knows how to create “trusted advisor” in terraform?
Good evening, I need to create an API Gateway => Lambda => DynamoDB setup for the first time. As far as I can tell each of these items sits outside of a VPC, and while I can see there are VPC endpoints for Lambda and DynamoDB, do I actually need to use them? Is this one of those times where you can do it either way, and one is more secure than the other but has double the operating costs?
My Lambda only needs to talk to the DynamoDB and nothing else. All requests come from the public Internet to the API. I'm used to my application always being on a private subnet; does that concept exist in this scenario? The tutorials I've watched on this from Hashicorp and AWS don't really mention VPCs; they do in an announcement about endpoint support. Which is why I think I'm overthinking this. Thanks for your time,
Leverage the serverless paradigm for all it’s worth! Skip the vpc, it is gloriously freeing
Thanks @loren you’re a fountain of knowledge as usual.
Just pay attention to your resource policies, control them tightly and actively with terraform, so you don’t expose anything you don’t mean to expose
VPC endpoints are needed only when you want to connect to your services from VPC without using public Internet (via AWS PrivateLink): https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
Use a VPC endpoint to privately connect your VPC to other AWS services and endpoint services.
Hi, I’m also having a similar “eks-cluster” module issue like in these threads. Any leads as to what might be going on? https://sweetops.slack.com/archives/CB6GHNLG0/p1612683973314300
I also have this issue, is there any solution for this? thanks!
I’m using the cloudposse/eks-cluster with the cloudposse/named-subnets module mostly configured like the examples:
I also have this issue, is there any solution for this? thanks!
module "vpc" {
source = "cloudposse/vpc/aws"
version = "0.20.4"
context = module.this.context
cidr_block = "10.0.0.0/16"
}
locals {
us_east_1a_public_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 2, 0)
us_east_1a_private_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 2, 1)
us_east_1b_public_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 2, 2)
us_east_1b_private_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 2, 3)
}
module "us_east_1a_public_subnets" {
source = "cloudposse/named-subnets/aws"
version = "0.9.2"
context = module.this.context
subnet_names = ["eks"]
vpc_id = module.vpc.vpc_id
cidr_block = local.us_east_1a_public_cidr_block
type = "public"
igw_id = module.vpc.igw_id
availability_zone = "us-east-1a"
attributes = ["us-east-1a"]
# The usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
tags = {
"kubernetes.io/cluster/cluster" : "shared"
"kubernetes.io/role/elb" : "1"
}
}
module "us_east_1a_private_subnets" {
source = "cloudposse/named-subnets/aws"
version = "0.9.2"
context = module.this.context
subnet_names = ["eks"]
vpc_id = module.vpc.vpc_id
cidr_block = local.us_east_1a_private_cidr_block
type = "private"
availability_zone = "us-east-1a"
attributes = ["us-east-1a"]
ngw_id = module.us_east_1a_public_subnets.ngw_id
# The usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
tags = {
"kubernetes.io/cluster/cluster" : "shared"
"kubernetes.io/role/internal-elb" : "1"
}
}
module "us_east_1b_public_subnets" {
source = "cloudposse/named-subnets/aws"
version = "0.9.2"
context = module.this.context
subnet_names = ["eks"]
vpc_id = module.vpc.vpc_id
cidr_block = local.us_east_1b_public_cidr_block
type = "public"
igw_id = module.vpc.igw_id
availability_zone = "us-east-1b"
attributes = ["us-east-1b"]
# The usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
tags = {
"kubernetes.io/cluster/cluster" : "shared"
"kubernetes.io/role/elb" : "1"
}
}
module "us_east_1b_private_subnets" {
source = "cloudposse/named-subnets/aws"
version = "0.9.2"
context = module.this.context
subnet_names = ["eks"]
vpc_id = module.vpc.vpc_id
cidr_block = local.us_east_1b_private_cidr_block
type = "private"
availability_zone = "us-east-1b"
attributes = ["us-east-1b"]
ngw_id = module.us_east_1b_public_subnets.ngw_id
# The usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# <https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html#vpc-subnet-tagging>
tags = {
"kubernetes.io/cluster/cluster" : "shared"
"kubernetes.io/role/internal-elb" : "1"
}
}
module "eks_cluster" {
source = "cloudposse/eks-cluster/aws"
version = "0.34.0"
context = module.this.context
region = "us-east-1"
vpc_id = module.vpc.vpc_id
subnet_ids = [
module.us_east_1a_public_subnets.named_subnet_ids["eks"],
module.us_east_1b_public_subnets.named_subnet_ids["eks"],
module.us_east_1a_private_subnets.named_subnet_ids["eks"],
module.us_east_1b_private_subnets.named_subnet_ids["eks"]
]
kubernetes_version = "1.18"
oidc_provider_enabled = true
enabled_cluster_log_types = ["api", "authenticator", "controllerManager", "scheduler"]
cluster_log_retention_period = 90
cluster_encryption_config_enabled = true
map_additional_aws_accounts = [REDACTED]
}
# Ensure ordering of resource creation to eliminate the race conditions when applying the Kubernetes Auth ConfigMap.
# Do not create Node Group before the EKS cluster is created and the `aws-auth` Kubernetes ConfigMap is applied.
# Otherwise, EKS will create the ConfigMap first and add the managed node role ARNs to it,
# and the kubernetes provider will throw an error that the ConfigMap already exists (because it can't update the map, only create it).
# If we create the ConfigMap first (to add additional roles/users/accounts), EKS will just update it by adding the managed node role ARNs.
data "null_data_source" "wait_for_cluster_and_kubernetes_configmap" {
inputs = {
cluster_name = module.eks_cluster.eks_cluster_id
kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
}
}
module "eks-node-group" {
source = "cloudposse/eks-node-group/aws"
version = "0.18.1"
context = module.this.context
subnet_ids = [
module.us_east_1a_private_subnets.named_subnet_ids["eks"],
module.us_east_1b_private_subnets.named_subnet_ids["eks"]
]
cluster_name = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
desired_size = 2
min_size = 1
max_size = 2
}
After creation, terraform plan works fine.
When I change the existing us_east_1a_private_subnets/subnet_names and us_east_1b_private_subnets/subnet_names to be ["eks", "mysql"] and do terraform plan, I see:
Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
Releasing state lock. This may take a few moments...
With the subnet_names change, the debug output contains a WARNING: Invalid provider configuration was supplied. Provider operations likely to fail: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2021-03-05T10:10:43.078-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [WARN] Invalid provider configuration was supplied. Provider operations likely to fail: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable: timestamp=2021-03-05T10:10:43.078-0800
2021-03-05T10:10:43.079-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [DEBUG] Enabling HTTP requests/responses tracing: timestamp=2021-03-05T10:10:43.078-0800
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "local.enabled"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.apply_config_map_aws_auth"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.kubernetes_config_map_ignore_role_changes"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "local.map_worker_roles"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_roles"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_users"
2021/03/05 10:10:43 [INFO] ReferenceTransformer: reference not found: "var.map_additional_aws_accounts"
2021/03/05 10:10:43 [DEBUG] ReferenceTransformer: "module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]" references: []
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Refreshing state... [id=kube-system/aws-auth]
2021-03-05T10:10:43.083-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [INFO] Checking config map aws-auth: timestamp=2021-03-05T10:10:43.083-0800
2021-03-05T10:10:43.083-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [DEBUG] Kubernetes API Request Details:
---[ REQUEST ]---------------------------------------
GET /api/v1/namespaces/kube-system/configmaps/aws-auth HTTP/1.1
Host: localhost
User-Agent: HashiCorp/1.0 Terraform/0.14.7
Accept: application/json, */*
Accept-Encoding: gzip
-----------------------------------------------------: timestamp=2021-03-05T10:10:43.083-0800
2021-03-05T10:10:43.084-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:10:43 [DEBUG] Received error: &url.Error{Op:"Get", URL:"<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>", Err:(*net.OpError)(0xc000e67cc0)}: timestamp=202
1-03-05T10:10:43.084-0800
Without the change, there is no WARNING.
2021-03-05T10:07:59.545-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:07:59 [DEBUG] Enabling HTTP requests/responses tracing: timestamp=2021-03-05T10:07:59.545-0800
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "local.enabled"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.apply_config_map_aws_auth"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.kubernetes_config_map_ignore_role_changes"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "local.map_worker_roles"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_roles"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.map_additional_iam_users"
2021/03/05 10:07:59 [INFO] ReferenceTransformer: reference not found: "var.map_additional_aws_accounts"
2021/03/05 10:07:59 [DEBUG] ReferenceTransformer: "module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]" references: []
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Refreshing state... [id=kube-system/aws-auth]
2021-03-05T10:07:59.548-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:07:59 [INFO] Checking config map aws-auth: timestamp=2021-03-05T10:07:59.548-0800
2021-03-05T10:07:59.548-0800 [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/03/05 10:07:59 [DEBUG] Kubernetes API Request Details:
---[ REQUEST ]---------------------------------------
GET /api/v1/namespaces/kube-system/configmaps/aws-auth HTTP/1.1
Host: [REDACTED]
User-Agent: HashiCorp/1.0 Terraform/0.14.7
Accept: application/json, */*
Authorization: Bearer [REDACTED]
Accept-Encoding: gzip
Versions:
Terraform v0.14.7
+ provider registry.terraform.io/hashicorp/aws v3.28.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.0.2
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/hashicorp/null v3.0.0
+ provider registry.terraform.io/hashicorp/random v3.0.1
+ provider registry.terraform.io/hashicorp/template v2.2.0
I’m not seeing any solutions out there, but lots of similar references https://github.com/cloudposse/terraform-aws-eks-cluster/issues/104#issuecomment-792520725
Describe the Bug Creating an EKS cluster fails due to bad configuration of the Kubernetes provider. This appears to be more of a problem with Terraform 0.14 than with Terraform 0.13. Error: Get &qu…
@Andriy Knysh (Cloud Posse) is still actively working on this, but no silver bullet yet.
yes, this is a race condition
Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
between TF itself and the kubernetes provider
many people see similar error, but in different cases (some on destroy, some on updating cluster params)
(if you just provision the cluster, all seems ok, only updating/destroying it causes those race condition errors)
Hey, same issue here, applying a fresh eks cluster is all good, but when I want to update or destroy:
Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp 127.0.0.1:80: connect: connection refused
I cleaned the state with terraform state rm module.eks.kubernetes_config_map.aws_auth_ignore_changes[0]
and it worked
Amazon RabbitMQ support just shipped in the AWS Provider — https://github.com/hashicorp/terraform-provider-aws/pull/16108#event-4416150216
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
2021-03-07
Get real-time IaC security and compliance scanning and inline fixes with our new Checkov VS Code extension.
2021-03-08
Any resource on how to add a shared LB to multiple Beanstalk environments? I don’t see such options on the example in the cloudposse terraform github or the official provider’s
if you do it by path or port you can do it
but you can’t use the same path and port
you could do same port as 443 and path /app1 /app2 / (default app)
Thank you
I’m going to give it a try
Hi all, as I want to migrate existing infrastructure to a module-based configuration, what's the best approach? Importing the existing configuration does not seem to be the best idea
Can you expand on the context / your migration a bit Bart? Might be able to provide some guidance, but I’m not sure what you’re referring to.
hey Matt
like, I built IAM policies with terraform before
now I used mostly the cloudposse modules, like this one for example:
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
the configuration does exist already
another example, I have existing s3 buckets
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
so when I deploy the module, you get conflicts because the resource does exist already
so what I want to do is, lay the module configuration over the existing resources
So you created those resources via ClickOps already?
yes and creating the resources in terraform myself
now I want to base my config on one codebase, mostly build by modules
If you created resources via ClickOps then you only have a few options:
- Delete the resources and let them be recreated by your Terraform code. This only works if the resources are not critical and are not in a production environment.
- Import the resources using terraform import (example below)
- Accept that those resources are managed outside of IaC, document them, and then work to never do ClickOps again so you don’t run into that issue again in the future.
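A minimal sketch of the import route for an existing bucket, assuming the cloudposse s3-bucket module (the module instance name is made up, and the exact internal resource address depends on the module version, so check terraform plan or the module source first):
module "assets_bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "x.x.x"
  name    = "assets"
  # ...other inputs matching the existing bucket's actual settings
}

# Then import the existing bucket into the module's resource address,
# which is often count-indexed, e.g.:
#   terraform import 'module.assets_bucket.aws_s3_bucket.default[0]' my-existing-bucket-name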
yeah, but I inherited quite a legacy installed base
You could look into terraformer as another option… but I haven't used it myself so I'm not sure if that will work out for you or not.
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
Looking for a reviewer https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/136
You can't modify an S3 bucket's policy & public access block at the same time, AWS API will complain: OperationAborted: A conflicting conditional operation is currently in progress agai…
Anyone with experience in using tools (like parliament) in CI/CD to catch overly privileged IAM policies?
I’m writing a blog post about how to do this and am looking to review the different methods people have used out there.
wouldn't that force some sort of code inspection? I'm familiar with tools that'll scan traffic in order to figure out the least privilege of an app, and you might be able to have that as part of a complex integration test.. or are you talking about something else?
2021-03-09
Hi guys, can anyone give me a hand? When initially deploying cloudposse/alb/aws
with cloudposse/ecs-alb-service-task/aws
I am always getting:
The target group with targetGroupArn ... does not have an associated load balancer.
On the second run it works. I guess there is a depends_on missing in cloudposse/alb/aws or am I missing sth? Thx
The load balancer takes time to become available after it gets created, so TF calls the API which can't find the LB in the "ready" state. At the time when we created the module, we could not find a workaround, except 1) using a two-stage apply with -target; 2) separating the LB into a diff folder and applying it first, then all the rest.
Can you add a depends_on argument directly to the alb service task module that depends on the alb? I believe that’s available on tf 14 and should work.
that prob would work. Can you test and open a PR? We’ll review promptly, thanks
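A minimal sketch of that suggestion (untested here; version numbers are placeholders, and module-level depends_on requires Terraform 0.13+):
module "alb" {
  source  = "cloudposse/alb/aws"
  version = "x.x.x"
  # ...alb inputs...
}

module "ecs_alb_service_task" {
  source  = "cloudposse/ecs-alb-service-task/aws"
  version = "x.x.x"
  # ...service task inputs...

  # Make the service task wait until the ALB (and its target group) exists
  depends_on = [module.alb]
}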
can anyone help me with an issue with the upstream RDS module
module "db" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
I am trying to upgrade from 5.6 to 5.7 and getting the following error …
Error: Error Deleting DB Option Group: InvalidOptionGroupStateFault: The option group 'de-qa-env-01-20210119142009861600000003' cannot be deleted because it is in use.
status code: 400, request id: e9a3c5b5-61fa-4648-bc95-183fba0fa32b
however the instance has been upgraded fine
I’m having trouble mounting an EFS volume in an ECS Fargate container. The container fails to start with ResourceInitializationError: failed to invoke EFS utils commands to set up EFS volumes: stderr: mount.nfs4: Connection timed out : unsuccessful EFS utils command execution; code: 32
Terraform config in thread
The EFS resources:
resource "aws_efs_file_system" "ecs01" {
tags = {
Name = "ecs-efs-01"
}
}
resource "aws_efs_mount_target" "main" {
file_system_id = aws_efs_file_system.ecs01.id
subnet_id = aws_subnet.private2.id
security_groups = [aws_security_group.ecs_efs.id]
}
Networking:
resource "aws_security_group" "ecs_efs" {
name = "ecs-efs"
vpc_id = aws_vpc.ecs-service-vpc2.id
egress {
from_port = 2049
to_port = 2049
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_security_group_rule" "ecs_efs_to_cluster" {
from_port = 2049
to_port = 2049
protocol = "tcp"
security_group_id = aws_security_group.ecs_efs.id
source_security_group_id = aws_security_group.ecs_cluster.id
type = "egress"
}
resource "aws_security_group" "ecs_cluster" {
name = "ecs-cluster"
vpc_id = aws_vpc.ecs-service-vpc2.id
egress {
cidr_blocks = ["0.0.0.0/0"]
from_port = 0
to_port = 0
protocol = "-1"
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.ecs_alb.id]
}
egress {
from_port = 2049
to_port = 2049
protocol = "tcp"
security_groups = [aws_security_group.ecs_efs.id]
}
ingress {
from_port = 2049
to_port = 2049
protocol = "tcp"
security_groups = [aws_security_group.ecs_efs.id]
}
lifecycle {
create_before_destroy = true
}
}
ECS task definition:
resource "aws_ecs_task_definition" "service_task" {
family = "service"
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
container_definitions = <<TASK_DEFINITION
[
{
"essential": true,
"name": "service",
"image": "${module.service_ecr.repository_url}:latest",
"command": [
"sh",
"-c",
"service serve -api-key $service_API_KEY -tls=false -server-url $service_FQDN -http-addr :80"
],
"environment": [
{
"name": "SERVICE_FQDN",
"value": "https://${aws_route53_record.ecs01.fqdn}"
},
{
"name": "SERVICE_API_KEY",
"value": "foo"
}
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${aws_cloudwatch_log_group.ecs_service.name}",
"awslogs-region": "${data.aws_region.current.name}",
"awslogs-stream-prefix": "ecs"
}
},
"mountPoints": [
{
"containerPath": "/var/db/service",
"sourceVolume": "service-db"
}
],
"portMappings": [
{
"containerPort": 80,
"protocol": "tcp",
"hostPort": 80
},
{
"containerPort": 2049,
"protocol": "tcp",
"hostPort": 2049
}
]
}
]
TASK_DEFINITION
volume {
name = "service-db"
efs_volume_configuration {
file_system_id = aws_efs_file_system.ecs01.id
# root_directory = "/opt/db/service/01/"
root_directory = "/"
# transit_encryption = "ENABLED"
# transit_encryption_port = 2999
# authorization_config {
# access_point_id = aws_efs_access_point.test.id
# iam = "ENABLED"
# }
}
}
cpu = 256
memory = 512
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
}
Have you set the platform version for the ecs service to 1.4.0?
# Set platform version to 1.4.0 otherwise mounting EFS volumes won't work with Fargate.
# Defaults to LATEST which is set to 1.3.0 which doesn't allow efs volumes.
platform_version = "1.4.0"
yeah I have this in my service hah
platform_version = "1.4.0" # 1.3.0 doesn't support EFS mounts
I was missing an ingress rule in resource "aws_security_group" "ecs_efs"
:yes
does anyone know how to get RDS version upgrades working (e.g. 5.6 to 5.7) using the upstream RDS module?
i can’t seem to figure it out as it tries to delete the old option group which is referenced by the snapshot before performing the upgrade
i have been unable to get TF to apply cleanly without deleting the existing snapshot prior to the upgrade which is just crazy as i have no way of rolling back if the upgrade borks out
I'm in somewhat the same boat. I've never heard of TF trying to delete the snapshot (I mean, terraform doesn't even control the snapshot as a resource). Do you mean delete the option group as you stated earlier? Because that HAS happened to me too. My brainstorming would be that we'd have to pre-create a second parameter group and option group for the NEW-INCOMING version. Then the upgrade path would be to change engine_version, option_group_name and parameter_group_name all at the same time from the 5.6 versions to the 5.7 equivalents:
engine_version = "5.7"
option_group_name = module.db_option_group.my_5.7_optgrp.name
parameter_group_name = module.db_parameter_group.my_5.7_paramgrp.id
???
Yeh the option group for 5.6 couldn’t be deleted as it is used by previous snapshots (which is fine)
I have no idea why it's trying to be deleted in the first place, it's too tight a coupling really
I agree - terraform is really ugly when handling RDS in my opinion. I thought i was going crazy until i happened on a youtube talk saying the same things. In theory if you switch the RDS instance to a new option group, then it won’t try to delete it, is my thoughts..
I am doing this with the upstream module. I am wondering if the issue is I'm using the option group module and passing it to the rds module rather than directly to the instance module inside the rds module
@mikesew i managed to hack this by deleting the option group and parameter group from terraform remote state so that they do not get deleted, and it works well
that's great but a non-optimal solution.. the thing is, I haven't seen this side of terraform management (stateful managed databases like RDS or AzureSQL) mentioned nearly enough in the blogs. I think I'm honestly doing it wrong.
• Do you have 1 option group per database, or 1 common option group for a group of DB’s? (we had been doing a common option group but are seeing how inflexible it has become with the snapshots being tied to it)
• how about parameter groups? 1 per db, or a common one?
I’m creating a parameter, option and subnet group per database as all our databases are different.
Module: cloudposse/elasticache-redis/aws. I got error below. Can someone help?
Error: Unsupported argument
on .terraform/modules/redis/main.tf line 92, in resource "aws_elasticache_replication_group" "default": 92: multi_az_enabled = var.multi_az_enabled
An argument named "multi_az_enabled" is not expected here.
module "redis" {
source = "cloudposse/elasticache-redis/aws"
availability_zones = data.terraform_remote_state.vpc.outputs.azs
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
enabled = var.enabled
name = var.name
tags = var.tags
allowed_security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
allowed_cidr_blocks = ["20.10.0.0/16"]
subnets = data.terraform_remote_state.vpc.outputs.elasticache_subnets
cluster_size = var.redis_cluster_size #number_cache_clusters
instance_type = var.redis_instance_type
apply_immediately = true
automatic_failover_enabled = true
multi_az_enabled = true
engine_version = var.redis_engine_version
family = var.redis_family
cluster_mode_enabled = false
replication_group_id = var.replication_group_id
at_rest_encryption_enabled = var.at_rest_encryption_enabled
transit_encryption_enabled = var.transit_encryption_enabled
cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled
cluster_mode_num_node_groups = var.cluster_mode_num_node_groups
snapshot_retention_limit = var.snapshot_retention_limit
snapshot_window = var.snapshot_window
dns_subdomain = var.dns_subdomain
cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group
parameter = [
{
name = "notify-keyspace-events"
value = "lK"
}
]
}
have you tried removing multi_az_enabled?
it does also look like a bug in the module
pretty weird because the arg multi_az_enabled exists on both the module level and on the aws_elasticache_replication_group resource
Removed multi_az_enabled. #multi_az_enabled = true. But, still got error.
Error: Unsupported argument
on .terraform/modules/redis/main.tf line 92, in resource "aws_elasticache_replication_group" "default": 92: multi_az_enabled = var.multi_az_enabled
An argument named "multi_az_enabled" is not expected here.
[terragrunt] 2021/03/09 1301 Hit multiple errors:
let’s chat in this thread
can you share your full terraform ?
cat *.tf
and dump it here ?
$ cat context.tf
module "this" {
  source  = "cloudposse/label/null"
  version = "0.24.1" # requires Terraform >= 0.13.0

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit
  label_key_case      = var.label_key_case
  label_value_case    = var.label_value_case

  context = var.context
}

variable "context" {
  type = any
  default = {
    enabled             = true
    namespace           = null
    environment         = null
    stage               = null
    name                = null
    delimiter           = null
    attributes          = []
    tags                = {}
    additional_tag_map  = {}
    regex_replace_chars = null
    label_order         = []
    id_length_limit     = null
    label_key_case      = null
    label_value_case    = null
  }
  description = <<-EOT
    Single object for setting entire context at once.
    See description of individual variables for details.
    Leave string and numeric variables as `null` to use default value.
    Individual variable settings (non-null) override settings in context object,
    except for attributes, tags, and additional_tag_map, which are merged.
  EOT

  validation {
    condition     = lookup(var.context, "label_key_case", null) == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }

  validation {
    condition     = lookup(var.context, "label_value_case", null) == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}

variable "enabled" {
  type        = bool
  default     = true
  description = "Set to false to prevent the module from creating any resources"
}

variable "namespace" {
  type        = string
  default     = null
  description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}

variable "environment" {
  type        = string
  default     = null
  description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}

variable "stage" {
  type        = string
  default     = null
  description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}

variable "name" {
  type        = string
  default     = "redis-blue-green"
  description = "Name for the cache subnet group. Elasticache converts this name to lowercase."
}

variable "delimiter" {
  type        = string
  default     = null
  description = <<-EOT
    Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
    Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
  EOT
}

variable "attributes" {
  type        = list(string)
  default     = []
  description = "Additional attributes (e.g. `1`)"
}

variable "tags" {
  type = map(string)
  default = {
    Name = "redis-blue-green"
  }
  description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`)"
}

variable "additional_tag_map" {
  type        = map(string)
  default     = {}
  description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}

variable "label_order" {
  type        = list(string)
  default     = null
  description = <<-EOT
    The naming order of the id output and Name tag.
    Defaults to ["namespace", "environment", "stage", "name", "attributes"].
    You can omit any of the 5 elements, but at least one must be present.
  EOT
}

variable "regex_replace_chars" {
  type        = string
  default     = null
  description = <<-EOT
    Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
    If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
  EOT
}

variable "id_length_limit" {
  type        = number
  default     = null
  description = <<-EOT
    Limit `id` to this many characters (minimum 6).
    Set to `0` for unlimited length.
    Set to `null` for default, which is `0`.
    Does not affect `id_full`.
  EOT
  validation {
    condition     = var.id_length_limit == null ? true : var.id_length_limit >= 6 || var.id_length_limit == 0
    error_message = "The id_length_limit must be >= 6 if supplied (not null), or 0 for unlimited length."
  }
}

variable "label_key_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
    Possible values: `lower`, `title`, `upper`.
    Default value: `title`.
  EOT
  validation {
    condition     = var.label_key_case == null ? true : contains(["lower", "title", "upper"], var.label_key_case)
    error_message = "Allowed values: `lower`, `title`, `upper`."
  }
}

variable "label_value_case" {
  type        = string
  default     = null
  description = <<-EOT
    The letter case of output label values (also used in `tags` and `id`).
    Possible values: `lower`, `title`, `upper` and `none` (no transformation).
    Default value: `lower`.
  EOT
  validation {
    condition     = var.label_value_case == null ? true : contains(["lower", "title", "upper", "none"], var.label_value_case)
    error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
  }
}
Compare to terraform-aws-elasticache-redis/examples/complete/context.tf. Changes I made:
62c84
< default = true
—
default = null
86,87c108,109
< default = “redis-blue-green”
< description = “Name for the cache subnet group. Elasticache converts this name to lowercase.”
—
default = null
description = “Solution name, e.g. ‘app’ or ‘jenkins’”
107,110c129
< default = {
< Name = “redis-blue-green”
< }
<
—
default = {}
I did not make many changes in context.tf.
could you use triple backticks to format your code ? it makes it easier to read
also could you provide a minimal viable reproducible example ?
$ cat context.tf
module "this" {
source = "cloudposse/label/null"
version = "0.24.1" # requires Terraform >= 0.13.0
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
label_key_case = var.label_key_case
label_value_case = var.label_value_case
context = var.context
}
variable "context" {
type = any
default = {
enabled = true
namespace = null
environment = null
stage = null
name = null
delimiter = null
attributes = []
tags = {}
additional_tag_map = {}
regex_replace_chars = null
label_order = []
id_length_limit = null
label_key_case = null
label_value_case = null
}
description = <<-EOT
Single object for setting entire context at once.
See description of individual variables for details.
Leave string and numeric variables as `null` to use default value.
Individual variable settings (non-null) override settings in context object,
except for attributes, tags, and additional_tag_map, which are merged.
EOT
validation {
condition = lookup(var.context, "label_key_case", null) == null ? true : contains(["lower", "title", "upper"], var.context["label_key_case"])
error_message = "Allowed values: `lower`, `title`, `upper`."
}
validation {
condition = lookup(var.context, "label_value_case", null) == null ? true : contains(["lower", "title", "upper", "none"], var.context["label_value_case"])
error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
}
}
variable "enabled" {
type = bool
default = true
description = "Set to false to prevent the module from creating any resources"
}
variable "namespace" {
type = string
default = null
description = "Namespace, which could be your organization name or abbreviation, e.g. 'eg' or 'cp'"
}
variable "environment" {
type = string
default = null
description = "Environment, e.g. 'uw2', 'us-west-2', OR 'prod', 'staging', 'dev', 'UAT'"
}
variable "stage" {
type = string
default = null
description = "Stage, e.g. 'prod', 'staging', 'dev', OR 'source', 'build', 'test', 'deploy', 'release'"
}
variable "name" {
type = string
default = "redis-blue-green"
description = "Name for the cache subnet group. Elasticache converts this name to lowercase."
}
variable "delimiter" {
type = string
default = null
description = <<-EOT
Delimiter to be used between `namespace`, `environment`, `stage`, `name` and `attributes`.
Defaults to `-` (hyphen). Set to `""` to use no delimiter at all.
EOT
}
variable "attributes" {
type = list(string)
default = []
description = "Additional attributes (e.g. `1`)"
}
variable "tags" {
type = map(string)
default = {
Name = "redis-blue-green"
}
description = "Additional tags (e.g. `map('BusinessUnit','XYZ')`"
}
variable "additional_tag_map" {
type = map(string)
default = {}
description = "Additional tags for appending to tags_as_list_of_maps. Not added to `tags`."
}
variable "label_order" {
type = list(string)
default = null
description = <<-EOT
The naming order of the id output and Name tag.
Defaults to ["namespace", "environment", "stage", "name", "attributes"].
You can omit any of the 5 elements, but at least one must be present.
EOT
}
variable "regex_replace_chars" {
type = string
default = null
description = <<-EOT
Regex to replace chars with empty string in `namespace`, `environment`, `stage` and `name`.
If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits.
EOT
}
variable "id_length_limit" {
type = number
default = null
description = <<-EOT
Limit `id` to this many characters (minimum 6).
Set to `0` for unlimited length.
Set to `null` for default, which is `0`.
Does not affect `id_full`.
EOT
validation {
condition = var.id_length_limit == null ? true : var.id_length_limit >= 6 || var.id_length_limit == 0
error_message = "The id_length_limit must be >= 6 if supplied (not null), or 0 for unlimited length."
}
}
variable "label_key_case" {
type = string
default = null
description = <<-EOT
The letter case of label keys (`tag` names) (i.e. `name`, `namespace`, `environment`, `stage`, `attributes`) to use in `tags`.
Possible values: `lower`, `title`, `upper`.
Default value: `title`.
EOT
validation {
condition = var.label_key_case == null ? true : contains(["lower", "title", "upper"], var.label_key_case)
error_message = "Allowed values: `lower`, `title`, `upper`."
}
}
variable "label_value_case" {
type = string
default = null
description = <<-EOT
The letter case of output label values (also used in `tags` and `id`).
Possible values: `lower`, `title`, `upper` and `none` (no transformation).
Default value: `lower`.
EOT
validation {
condition = var.label_value_case == null ? true : contains(["lower", "title", "upper", "none"], var.label_value_case)
error_message = "Allowed values: `lower`, `title`, `upper`, `none`."
}
}
could you provide a minimal example ?
minimal as in the minimum required to reproduce the same error message
$ cat variables.tf
variable "region" {
default = "us-west-2"
}
variable "redis_cluster_size" {
type = number
description = "Number of nodes in cluster"
default = 2
}
variable "redis_instance_type" {
type = string
description = "Elastic cache instance type"
default = "cache.t2.small"
}
variable "redis_family" {
type = string
description = "Redis family"
default = "redis5.0"
}
variable "redis_engine_version" {
type = string
description = "Redis engine version"
default = "5.0.6"
}
variable "at_rest_encryption_enabled" {
type = bool
description = "Enable encryption at rest"
default = false
}
variable "transit_encryption_enabled" {
type = bool
description = "Enable TLS"
default = false
}
variable "cloudwatch_metric_alarms_enabled" {
type = bool
description = "Boolean flag to enable/disable CloudWatch metrics alarms"
default = true
}
variable "replication_group_id" {
type = string
description = "The replication group identifier. This parameter is stored as a lowercase string."
default = "redis-blue-green"
}
#variable "replication_group_description" {
# type = string
# description = "A user-created description for the replication group."
# default = "redis-cluster-blue-green"
#}
variable "cluster_mode_num_node_groups" {
type = number
description = "Number of node groups (shards) for this Redis replication group"
default = 0
}
variable "cluster_mode_replicas_per_node_group" {
type = number
description = "Number of replica nodes in each node group. Valid values are 0 to 5."
default = 3
}
variable "automatic_failover_enabled" {
type = bool
default = true
description = "Specifies whether a read-only replica will be automatically promoted to read/write primary if the existing primary fails."
}
variable "multi_az_enabled" {
type = bool
default = true
description = "Multi AZ (Automatic Failover must also be enabled.)"
}
variable "snapshot_retention_limit" {
type = number
description = "The number of days for which ElastiCache will retain automatic cache cluster snapshots before deleting them."
default = 1
}
variable "snapshot_window" {
type = string
description = "The daily time range (in UTC) during which ElastiCache will begin taking a daily snapshot of your cache cluster."
default = "06:30-07:30"
}
variable "apply_immediately" {
type = bool
default = true
description = "Apply changes immediately"
}
variable "dns_subdomain" {
type = string
default = "redis-blue-green"
description = "The subdomain to use for the CNAME record. If not provided then the CNAME record will use var.name."
}
oof… you're doing this one at a time, one file per…
if you can create a minimal reproducible example, i’ll be able to help
but at this point, this is a lot of code to trudge through
the tests pass, for now; I'd point you to the current tf code in the example..
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
@melissa Jenner ^
$ cat main.tf
module "redis" {
source = "cloudposse/elasticache-redis/aws"
availability_zones = data.terraform_remote_state.vpc.outputs.azs
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
enabled = var.enabled
name = var.name
tags = var.tags
allowed_security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
allowed_cidr_blocks = ["20.10.0.0/16", "20.10.51.0/24", "20.10.52.0/24"]
subnets = data.terraform_remote_state.vpc.outputs.elasticache_subnets
cluster_size = var.redis_cluster_size #number_cache_clusters
instance_type = var.redis_instance_type
apply_immediately = true
automatic_failover_enabled = true
#multi_az_enabled = true
engine_version = var.redis_engine_version
family = var.redis_family
cluster_mode_enabled = false
replication_group_id = var.replication_group_id
#replication_group_description = var.replication_group_description
#at-rest encryption is to increase data security by encrypting on-disk data.
at_rest_encryption_enabled = var.at_rest_encryption_enabled
#in-transit encryption protects data when it is moving from one location to another.
transit_encryption_enabled = var.transit_encryption_enabled
cloudwatch_metric_alarms_enabled = var.cloudwatch_metric_alarms_enabled
cluster_mode_num_node_groups = var.cluster_mode_num_node_groups
snapshot_retention_limit = var.snapshot_retention_limit
snapshot_window = var.snapshot_window
dns_subdomain = var.dns_subdomain
cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group
parameter = [
{
name = "notify-keyspace-events"
value = "lK"
}
]
}
I copied code at https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/examples/complete/main.tf#L28 and added a few more lines.
@melissa Jenner - This is a bug due to the overall configuration of what’s being passed into the CloudPosse module. If you’d like to do some pair programming I can spend some time helping you out since I just went through and configured this module myself a couple of weeks ago.
if you 2 could figure out what the issue is, perhaps a PR can be submitted to make the module easier to use
@MattyB As of now, do you have ideas of how to fix it?
@MattyB You said, “I just went through and configured this module myself a couple of weeks ago.”. Did you actually fix it?
@melissa Jenner Not without seeing more of the variables that are being passed in. There are a few gotchas depending on how you set it up. Clustered mode, and other settings.
@MattyB I posted context.tf and variables.tf. Do you see anything?
good luck! let us know when you pr
Without additional context (missing variables being passed into here) I’m unable to help you.
Can you share your code?
I don’t think multi AZ is valid for Redis. It only applies to Memcached, right?
Redis instead has the concept of “cluster mode”
Unfortunately not. I can tell you we’re running in clustered mode and that to use the module in any fashion you’ll need to thoroughly understand the CloudPosse implementation and to do some reading here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elasticache_replication_group. for example:
number_cache_clusters - (Optional) The number of cache clusters (primary and replicas) this replication group will have. If Multi-AZ is enabled, the value of this parameter must be at least 2. Updates will occur before other modifications. One of number_cache_clusters or cluster_mode is required.
And I do need multi_az_enabled. Regardless, I need to be able to provision Redis. Even after removing multi_az_enabled, I am still not able to provision it.
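For what it's worth, that "Unsupported argument" error is raised from inside the module's copy of aws_elasticache_replication_group, which usually points at the pinned AWS provider being too old to know about multi_az_enabled rather than at the inputs. A minimal sketch of the kind of constraint bump that tends to resolve it (the exact minimum provider version is an assumption here and should be checked against the provider changelog):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.26.0" # assumption: pick the release that added multi_az_enabled to aws_elasticache_replication_group
    }
  }
}
followed by terraform init -upgrade so the newer provider (and any newer module release) is actually pulled.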
2021-03-10
Hi all, I am looking at terraform-aws-eks modules and have a question: what's the difference between the cloudposse and AWS ones? And what's the purpose of writing our own EKS modules rather than using the open source ones?
hey @Hank I would personally swerve the AWS one; it's trying to be everything for everyone and in my personal opinion needs a major refactor
Thanks Steve.
What I'm considering is how we handle upgrades or new feature integration from AWS. We would need to add support to our own EKS modules after each new feature comes out, right?
@Hank One important thing to note: The terraform-aws-modules organization is not built by AWS. It's built by Anton Babenko, who is an AWS Hero I believe, but that does not mean those modules are actively maintained by AWS.
The large majority of Cloud Posse modules are kept up to date with best practices and new features. Particularly a module surrounding EKS since Cloud Posse and the community here use them very heavily.
Thank u very much
at present i manage my own modules for EKS that work with bottlerocket
Can anyone recommend a managed node group module for EKS?
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
We also have managed Spot.io node groups
@Erik Osterman (Cloud Posse), im interested on how this works on spot.io
v0.14.8 Version 0.14.8
Version 0.14.8
nah, their release automation always does this… creates release, then follows up a bit later to post the description and any artifacts. the rss integration isn’t smart enough to handle that
oh nice, i assumed since it said release notes that it was going to be the release notes haha
v0.14.8 BUG FIXES: config: Update HCL package to fix panics when indexing using sensitive values (#28034) core: Fix error when using sensitive…
This fixes some panics with marked values, but also includes Unicode 13 updates in both the hcl and cty packages. The CHANGELOG update for this will mention the unicode changes as well as the bug f…
Hi - how do you run a target apply on a resource when using Terraform Cloud?
I’m not exactly sure this is natively possible with TFC as of now. It wasn’t possible before, and I don’t see any documents updating the support for it.
I know we support this directly using a variable in env0 that passes the target flag to the apply:
https://docs.env0.com/docs/additional-controls#terraform-partial-apply
Disclaimer: I’m the DevOps Advocate for env0.
Using the environment variable ENV0_TERRAFORM_TARGET, you can specify specific resources that will be targeted for apply. The value of the variable will be passed to the Terraform -target flag. Read more here. Using the environment variable ENV0_TF_VERSION, you can specify the Terraform version you would…
Hey, sorry. We did some digging, and you can do this with TFC now. Using CLI runs, you can use resource targeting. It is still not available in the UI.
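For reference, with the CLI-driven workflow this is just the normal targeting flags run against the workspace's remote backend; the resource address below is hypothetical:
terraform plan -target='module.app.aws_ecs_service.web'
terraform apply -target='module.app.aws_ecs_service.web'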
Thanks for your response. I just got introduced to a customer who has ruined their TF state with weird cyclic dependencies and deleted the resources manually. TF target and state remove are the only saviors.
Oof! Totally understandable. Hopefully you can get it all cleaned up.
I managed to fix the bad terraform state issue; what I could have fixed in an hour with the Terraform CLI took me 8 hours with Terraform Cloud
hi all, how can we revert to a previous state?
there is no such thing as revert in TF
but if you saved the previous plan you might be able to revert
if you screw up the state but you have not applied, you could maybe restore an old version of the state
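A minimal sketch of that kind of recovery with the state commands (the file names are placeholders, and whether an older version exists at all depends on your backend, e.g. S3 bucket versioning):
# back up whatever is there now
terraform state pull > current.tfstate.backup
# retrieve the older state version out-of-band from the backend, then push it back
terraform state push restored.tfstate   # may need -force if the serial/lineage no longer matches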
do you know any tools to export existing AWS resources to terraform style?
sure
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
the best there is
yes, as far as I know that one is good, but for example: how do you generate a tfstate file from the existing AWS resources with it?
there are plenty of examples in the docs
you need to do it per product manually
there is no such thing as "scan everything and give me a state and tf files"
although you can use a for loop and go through all the products supported by terraformer
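For illustration, a typical per-product terraformer run looks roughly like this (the resource list and region are just examples):
terraformer import aws --resources=vpc,subnet,sg --regions=us-east-1
# output lands under a generated/ directory as .tf files plus a terraform.tfstate per service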
2021-03-11
HI all, the documentation is a bit unclear on this module:
Terraform Module to Provide an Amazon Simple Notification Service (SNS) - cloudposse/terraform-aws-sns-topic
it says: subscribers:
(email is an option but is unsupported, see below).
but then no extra info, does this refer to:
# The endpoint to send data to, the contents will vary with the protocol. (see below for more information)
endpoint_auto_confirms = bool
is there an easy way to obtain the difference in hours between two dates?
i want to provide an expiry date to a cert module in the format yyyy-mm-dd and then work out the validity in hours
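One hedged way to get hours between dates without leaving Terraform is the hashicorp/time provider, whose time_static resource exposes a unix timestamp; the variable name and date format below are assumptions:
resource "time_static" "issued" {} # captured once at create time, not refreshed on later plans

resource "time_static" "expiry" {
  rfc3339 = "${var.expiry_date}T00:00:00Z" # expects e.g. "2022-03-16"
}

locals {
  validity_period_hours = floor((time_static.expiry.unix - time_static.issued.unix) / 3600)
}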
hmmm terraform can confuse me with this:
data "aws_sns_topic" "cloudposse-hosting" {
name = "cloudposse-hosting-isawesome"
}
alarm_actions = ["${data.aws_sns_topic.cloudposse-hosting.arn}"]
remove the interpolation syntax:
alarm_actions = [data.aws_sns_topic.cloudposse-hosting.arn]
ha ok, that worked
thx
then it complains with:
Template interpolation syntax
what’s the best way to format this ?
Hi I’m trying to use cloudposse’s elastic beanstalk module and getting this error
Error: Invalid count argument
on .terraform/modules/elastic_beanstalk.elastic_beanstalk_environment.dns_hostname/main.tf line 2, in resource "aws_route53_record" "default":
2: count = module.this.enabled ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
Anyone seen this before?
Ok looks like I won’t be using cloudposse
Can someone point me in the right direction regarding the best use of using context?
in the context.tf file itself there are comments
Thank you very much (both). I’ll watch the video now. I just need a demonstration of it’s use.
it takes a bit to get around it and understand it well, but when it clicks you are like "I should have started using this last week"
Got it sorted, looking forward to cleaning up my code with this, thanks guys!
I have two VPCs. One is the blue VPC (vpc_id = vpc-0067ff2ab41cc8a3e), the other is the shared VPC (vpc_id = vpc-076a4c26ec2217f9d). VPC peering connects these two VPCs. I provision MariaDB in the shared VPC. But I got the errors below. Error: Error creating DB Instance: InvalidParameterCombination: The DB instance and EC2 security group are in different VPCs. The DB instance is in vpc-076a4c26ec2217f9d and the EC2 security group is in vpc-0067ff2ab41cc8a3e status code: 400, request id: 75954d06-375c-4680-b8fe-df9a67f2574d
Below is the code. Can someone help?
module "master" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.master_identifier
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
storage_type = var.storage_type
storage_encrypted = var.storage_encrypted
name = var.mariadb_name
username = var.mariadb_username
password = var.mariadb_password
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_master
backup_window = var.backup_window_master
multi_az = true
tags = {
Owner = "MariaDB"
Environment = "blue-green"
}
enabled_cloudwatch_logs_exports = ["audit", "general"]
subnet_ids = data.terraform_remote_state.vpc-shared.outputs.database_subnets
create_db_option_group = true
apply_immediately = true
family = var.family
major_engine_version = var.major_engine_version
final_snapshot_identifier = var.final_snapshot_identifier
deletion_protection = false
parameters = [
{
name = "character_set_client"
value = "utf8"
},
{
name = "character_set_server"
value = "utf8"
}
]
options = [
{
option_name = "MARIADB_AUDIT_PLUGIN"
option_settings = [
{
name = "SERVER_AUDIT_EVENTS"
value = "CONNECT"
},
{
name = "SERVER_AUDIT_FILE_ROTATIONS"
value = "7"
},
]
},
]
}
module "replica" {
source = "terraform-aws-modules/rds/aws"
version = "2.20.0"
identifier = var.replica_identifier
replicate_source_db = module.master.this_db_instance_id
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class
allocated_storage = var.allocated_storage
username = ""
password = ""
port = var.mariadb_port
vpc_security_group_ids = [data.terraform_remote_state.vpc-shared.outputs.default_security_group_id,
data.terraform_remote_state.vpc-blue.outputs.default_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_general_security_group_id,
data.terraform_remote_state.eks-blue.outputs.worker_group_gitea_security_group_id,
data.terraform_remote_state.eks-blue.outputs.all_workers_security_group_id,
data.terraform_remote_state.eks-blue.outputs.cluster_security_group_id]
maintenance_window = var.maintenance_window_replica
backup_window = var.backup_window_replica
multi_az = false
backup_retention_period = 0
create_db_subnet_group = false
create_db_option_group = false
create_db_parameter_group = false
major_engine_version = var.major_engine_version
}
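For what it's worth, the error is saying that every security group attached to the DB must live in the same VPC as the DB (the shared VPC here). One sketch of a fix is to open ingress to the blue VPC by CIDR instead of referencing blue-VPC security groups; the vpc_id and cidr outputs used below are assumptions about your remote state:
resource "aws_security_group" "mariadb" {
  name   = "mariadb-shared"
  vpc_id = data.terraform_remote_state.vpc-shared.outputs.vpc_id # assumes this output exists

  ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    cidr_blocks = [data.terraform_remote_state.vpc-blue.outputs.vpc_cidr_block] # assumes this output exists
  }
}

# then pass only shared-VPC security groups to the RDS module:
#   vpc_security_group_ids = [aws_security_group.mariadb.id]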
Is anyone using Terraform to manage and provision ECS Fargate + AWS CodeDeploy and doing database migrations? We’re using GitHub Actions as our CI platform to build the docker image for a Rails app, then using a continuous delivery platform to deploy the terraform (spacelift). I’m curious how to run the rake migrations?
Coming from an EKS world, we'd just run them as a Job, but with ECS, there's no such straight equivalent (scheduled tasks don't count).
Ideas considered:
• Using the new AWS Lambda container functionality ( but still not sure how we’d trigger it relative to the ECS tasks)
• Using a CodePipeline, but also not sure how we’d trigger it in our current continuous delivery model, since right now, we’re calling terraform to deploy the ECS task and update the container definition. I don’t believe there’s any way to deploy a code pipeline and have it trigger automatically.
• Using Step Functions (haven’t really used them before). Just throwing out buzz words.
• Using ECS task on a cron schedule (but we have no way to pick the appropriate schedule)
I would split the IaC codebase pipeline and the app codebase pipeline: deploy Terraform using Spacelift, then deploy app code using GitHub CI by building the image and uploading it to ECR; the ECR push should trigger CodePipeline, which uses CodeDeploy for deployments (like blue/green)
Ya, that would be a conventional approach. What I don't like about it is that we have multiple systems modifying the ECS task. We wouldn't be able to deploy ECS task changes (e.g. new SSM parameters) alongside a new image deployment.
We’ve managed to recreate an argocd style deployment strategy for infra using spacelift. So really just want to solve the specific problem of running migrations.
Yes, true, you need to ignore the ECS task definition and LB values, a lot of pain
Ideally I prefer not coupling deployments and migrations
They need to be backwards compatible anyways
Or just have the migration task def sitting around and trigger a run-task with the task def for migration?
Not sure exactly which part of the migration is causing issues? Where to run the migration from? Coupling it with CI/CD ?
trigger a run-task
would be perfect if it was supported natively in terraform, but requires local exec
We need to run the migration as part of the CD
the CD is calling terraform
when files in a git repo change (~like argocd). By design, in this part of the CD, there’s no workflow associated with calling terraform, so there’s no way to run other steps like in a CI pipeline. Terraform’s job is simply to synchronize the git repo with AWS.
Ideally I prefer not coupling deployments and migrations
agree
trigger a run-task
would be perfect if it was supported natively in terraform, but requires local exec
Can you maybe post to an s3 object or ddb item with terraform, and have that event trigger a lambda that invokes the run-task?
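For what it's worth, the local-exec version of that run-task idea tends to look something like this sketch; the cluster name, task definition resource, and network values are all assumptions:
resource "null_resource" "db_migrate" {
  triggers = {
    task_definition_arn = aws_ecs_task_definition.migrate.arn # hypothetical migration task definition
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws ecs run-task \
        --cluster my-cluster \
        --launch-type FARGATE \
        --task-definition ${aws_ecs_task_definition.migrate.family} \
        --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222],assignPublicIp=DISABLED}'
    EOT
  }
}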
terraform-provider-aws v3.32.0 is out now with new resource ACM Private CA, and more support for AWS managed RabbitMQ https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.32.0
FEATURES: New Data Source: aws_acmpca_certificate (#10213) New Resource: aws_acmpca_certificate (#10213) New Resource: aws_acmpca_certificate_authority_certificate (#17850) ENHANCEMENTS: resourc…
2021-03-12
Anyone using Terraform Cloud able to share their Git workflow and repo structure? I RTFM'd for TF Cloud and GitHub, and what Hashi suggests is to either use a persistent branch for each stage (a dev stage branch and a prod stage branch) or else use folders for each env stage, which translate to TF Cloud workspaces, and apply everything when merged to the `main` branch. Their workflows don't appear DRY or easy to change IaC with to me.
Is there a better workflow anyone using?
one of the common questions asked here; there is no single best approach for this, but I will list them:
• repo level: each env in a standalone repo
• branch level: each env on a standalone branch
• folder level (I prefer this): each env in a standalone folder
My selection would be repo level for enterprise clients, and branch or folder level for startups and small-to-medium clients. Also, you need to split your terraform codebase into stacks:
• network stack
• data stack
• app stack
This will help you keep your state file small and fast, with a small blast radius. So IMHO my current approach for a TF Cloud project is:
• tf-xyz repo
• env/dev/us-east-1 folder contains the TF codebase
• create a workspace on TF Cloud called dev-us-east-1 that points to that folder (a sketch of that wiring is below)
I hope this gives some light
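For reference, a minimal sketch of pointing one of those folder-based configurations at its TF Cloud workspace; the organization name is an assumption:
terraform {
  backend "remote" {
    organization = "acme-org" # assumption

    workspaces {
      name = "dev-us-east-1"
    }
  }
}
With the workspace's working directory set to env/dev/us-east-1, a merge to main only plans that one environment.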
Thank you from another observer. I (a DBA) have been advocating for a separate data-layer, but my current org seems to only split between core and application, grouping the database alongside the app.
We also have the env inside folder. I like your use of the region as a sub-dir underneath.
do you only have .tfvars inside ./env/dev/us-east-1, or are there actual full-on main.tf etc.?
here all in one approach
.
├── README.md
├── Taskfile.yml
└── env
└── dev
└── eu-central-1
├── README.md
├── data.tf
├── dev.auto.tfvars
├── doc.tf
├── eks.tf
├── elasticache.tf
├── helm.tf
├── kms.tf
├── kubernetes.tf
├── locals.tf
├── mq.tf
├── outputs.tf
├── provider.tf
├── random.tf
├── rds.tf
├── sg.tf
├── ssm.tf
├── terraform.tf
├── variables.tf
└── vpc.tf
3 directories, 22 files
here separated stacks
├── Makefile
├── README.md
├── Taskfile.yml
├── app
│ ├── backend.tf
│ ├── cfn
│ │ └── mq.yaml
│ ├── codedeploy-ecs-ingest.tf
│ ├── codedeploy-ecs.tf
│ ├── codedeploy-iam.tf
│ ├── cross-account-access.tf
│ ├── data.tf
│ ├── ecs-alb.tf
│ ├── ecs-cloudwatch.tf
│ ├── ecs-cluster.tf
│ ├── ecs-container-definition-ingest.tf
│ ├── ecs-container-definition.tf
│ ├── ecs-iam.tf
│ ├── ecs-ingest-nlb.tf
│ ├── ecs-route53.tf
│ ├── ecs-service-ingest.tf
│ ├── ecs-service.tf
│ ├── ecs-sg.tf
│ ├── ecs-variables.tf
│ ├── locals.tf
│ ├── mq-nlb.tf
│ ├── mq-route53.tf
│ ├── mq-sg.tf
│ ├── mq.tf
│ ├── outputs.tf
│ ├── provider.tf
│ ├── random.tf
│ ├── secretsmanager.tf
│ ├── terraform.tfvars
│ └── vars.tf
├── data
│ ├── backend.tf
│ ├── data.tf
│ ├── provider.tf
│ ├── random.tf
│ ├── rds.tf
│ ├── reoute53.tf
│ ├── secret.tf
│ ├── sg.tf
│ ├── terraform.tfvars
│ └── vars.tf
└── network
├── acm.tf
├── backend.tf
├── cvpn-cloudwatch.tf
├── cvpn-endpoint.tf
├── cvpn-sg.tf
├── cvpn-tls-users.tf
├── cvpn-tls.tf
├── outputs.tf
├── plan.out
├── provider.tf
├── route53-resolver.tf
├── route53.tf
├── terraform.tfvars
├── vars.tf
└── vpc.tf
i rotate multiple configurations through the backend config
Thank you for the response. Did you mean you reset the backend between local and TF Cloud to rotate the config?
that’s a different topic, I have a one time set up for the remote backend states
then i have a main(global) config and a per env one
so I make conditionals because my envs are not very different
like this actually: https://github.com/ozbillwang/terraform-best-practices
Terraform Best Practices for AWS users. Contribute to ozbillwang/terraform-best-practices development by creating an account on GitHub.
Which item in that list? Not quite getting what you mean sorry. But it sounds interesting
backend config section
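For reference, the partial-backend-configuration flavour of that rotation looks roughly like this sketch; the file names and values are assumptions:
terraform {
  backend "s3" {} # left empty on purpose; values come from -backend-config files
}
and each env gets its own settings file (e.g. backends/dev.hcl containing bucket, key, region) passed at init time:
terraform init -reconfigure -backend-config=backends/dev.hcl
terraform init -reconfigure -backend-config=backends/prod.hcl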
what do you prefer, remote states or data sources ?
I tend to go for data sources when they are available
data sources
technically you can use a data source for a terraform remote state too
in a cloudwatch alarm, thinking how to best implement this:
dimensions = { InstanceId = "i-cloudposseisawesome" }
a remote state pull is the best option here I guess because the data source is a bit messy
messy in what way
remote state pull has some less than ideal side effects e.g. state versioning not being backwards compat
what does the data source of the instance id look like
ah now they don’t seem to discourage this, in the past they did
you’re not going to show us, are you ?
what a tease
hehe, cannot find the reference, I used it in the past but it was a bit messy. I should retry
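In case it helps, a sketch of the data-source route for that InstanceId dimension; the tag name/value and alarm settings are assumptions:
data "aws_instance" "app" {
  filter {
    name   = "tag:Name"
    values = ["my-app-server"]
  }
}

resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "app-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2

  dimensions = {
    InstanceId = data.aws_instance.app.id
  }
}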
2021-03-14
Hi. I would like to use terraform-aws-cloudwatch-flow-logs for Terraform >= 0.12.0, so I pulled branch 0.12/master from the git repo. I get multiple warnings about interpolation syntax. I checked other branches and all of them use the old interpolation syntax "${var.something}". Is there any branch with updated interpolation (terraform 0.12.x) for that module? I can do it myself, but there is no sense in that if it is already done and I am just blind
this module was not converted to the new syntax yet
pull requests are welcome
ok. Thanks
Hi folks, I wouldn't normally post so soon after filing a bug, but the GitHub bug template suggested joining this Slack. Please shout at me if this was bad etiquette.
Anyone run into an issue where the s3 bucket creation doesn't respect the region you set in the provider? https://github.com/cloudposse/terraform-aws-tfstate-backend/issues/88
Found a bug? Maybe our Slack Community can help. Describe the Bug I've set the AWS provider to use us-east-1 but I'm getting this error when the module tries to create the s3 bucket: Error …
Interesting. Do you have eu-central-1 used anywhere in your terraform code? I can’t find that string used in the module itself
The region in the module is using the region data source so it should use the region from the provider
Hey Rb, thanks for replying. Nope, I don’t think I’m specifying that anywhere in my code. I attached a gist to the bug.
I suppose I could create a new directory and copy bit by bit to be completely sure.
same error in a fresh directory/fresh terraform init
weird
I do see eu-west-1 explicitly defined in .terraform\modules\terraform_state_backend.dynamodb_table_label\examples\autoscalinggroup, but that's an example file and also not eu-central-1
I gotta assume the module properly creates unique bucket names, right? https://github.com/hashicorp/terraform/issues/2774
I'm using S3 remote and configuring with the following command: terraform remote config -backend=S3 -backend-config="bucket=test" -backend-config="key=terraform.tfstate" -ba…
gah… this solves it I think:
s3_bucket_name = random_string.random.result
the bucket name wasn’t unique somehow
2021-03-15
any keybase experts in the group ? We are migrating to more teamwork in terraform configurations. If I use module: https://github.com/cloudposse/terraform-aws-iam-user and I configure the login profile, the console password is encrypted as a base64 key with my own encryption key in keybase. In the workflow I decrypt the key and store it in a password vault. If I leave the company, it’s best that my co’s taint the resource and recreate it with their own key ?
Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user
You can use any public PGP key so perhaps you should use a ‘shared’ one in the first place?
Hi Bart, did you find a solution to your problem? I think a ‘shared’ private would lead to the situation that a new key would eventually be required which would also bring re-encryption of secrets to the table. You would have to implement your own rotation method to handle this regularly and without breaking stuff.
I am currently facing the same issue and would like to learn how others are dealing with this challenge.
@hkaya have a listen to the latest office hours. It’s discussed on there.
@Joe Niland thanks, will surely do.
Hi All, got a silly question. I've deployed https://github.com/cloudposse/terraform-aws-jenkins into an AWS environment, but I can't seem to find the URL to access the Jenkins server. I tried the Route53 DNS name with port 8080 and 80 in the URL, nothing seemed to work. Could anyone point me to how to access the Jenkins server?
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
From the network diagram it should be open on port 443, https
@Zach Thanks for the quick response, I've tried all of them: 443, 80, 8080 on the Route53 DNS, ALB DNS name, EFS hostname, efs_dns_name
still not hitting it.
I’ve not used this module myself but you probably just need to hit all the basic troubleshooting then - go see what the R53 record is pointed at, check any ALBs and TG health, etc
like when you say yoiu can’t reach it… whats happening? Is it timing out? (thats probably a SG issue) 404 (reaches jenkins but something is screwy so its not found)? 503 (hitting the LB but jenkins isn’t responding)?
output "elastic_beanstalk_environment_hostname" {
value = module.elastic_beanstalk_environment.hostname
description = "DNS hostname"
}
@Zach route53 zone name just returned no such endpoint I believe.
The ALB DNS however is showing this, but not the Jenkins login page nor a place where I can navigate to it.
@Mohammed Yahya Not sure what you mean by that, but I have that output and it's not responding to web requests.
@Andriy Knysh (Cloud Posse) @Maxim Mironenko (Cloud Posse), sorry for the ping, but seeing the great work from both of you, if you could provide some insight as to what is happening here, I would greatly appreciate it.
just to provide some info, I am using the complete module where I go with all the default info provided in the example tfvars file, except a different dns_zone_id and github_oauth_token, and the jenkins username and password of course.
it gets deployed to Elastic Beanstalk, so this output should work
output "elastic_beanstalk_environment_hostname" {
value = module.elastic_beanstalk_environment.hostname
description = "DNS hostname"
}
unless it did not get deployed successfully
please login to the AWS console and look at the CodePipeline
if it had any errors, it would show the failed steps in red
@Andriy Knysh (Cloud Posse) Thank you for the quick response, everything is deployed successfully.
there's an error like you said, but I have a question: would a failed build stop me from accessing the Jenkins server?
the pipeline could not access the repo or the branch
it either does not exist, or is private
Fixing this now. Thanks Andriy
for test, try to use https://github.com/cloudposse/jenkins
Contribute to cloudposse/jenkins development by creating an account on GitHub.
once working, you can switch the repo
Thanks @Andriy Knysh (Cloud Posse). Looks like all good now.
I have another question, after looking through all the root modules, is changing from github repo to Bitbucket repo possible using this module?
Bitbucket is prob not supported by the current version (but should be possible to switch to it)
Thanks for clarifying that. Appreciate it.
I create some Lambda functions like this:
resource aws_lambda_function main {
for_each = local.functions
...
}
Is it possible to add dependencies so these functions are created in serial, or so they depend on each other?
re-posting this question
i don’t think so… not literally, anyway, which i imagine would look like this:
resource aws_lambda_function main {
for_each = local.functions
<attr> = aws_lambda_function.main["<label>"].<attr>
}
only option i can think of is to split it into as many resource blocks as you need to link the dependencies
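A sketch of that splitting approach, so each batch only starts once the previous one exists; the locals and function arguments are assumptions for illustration:
resource "aws_lambda_function" "batch_one" {
  for_each = local.functions_batch_one

  function_name = each.key
  role          = aws_iam_role.lambda.arn # hypothetical shared execution role
  handler       = each.value.handler
  runtime       = each.value.runtime
  filename      = each.value.filename
}

resource "aws_lambda_function" "batch_two" {
  for_each = local.functions_batch_two

  function_name = each.key
  role          = aws_iam_role.lambda.arn
  handler       = each.value.handler
  runtime       = each.value.runtime
  filename      = each.value.filename

  depends_on = [aws_lambda_function.batch_one] # serializes the batches
}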
i'd be interested in following if you open a feature request…
Yeah. It seems like when you try and create a lot of functions at the same time, AWS will return rate limit failures for most of them as the Lambda control plane limit is 15/sec, and you can only create 1 Lambda in any one VPC at a time. So this results in Terraform creating about one Lambda every 2mins if they are all in the same VPC
oh, that should definitely be opened as a bug
terraform has retry handlers for rate limiting
Well, that’s the thing. The internal retry logic works and it eventually succeeds. It just takes forever
lulz ok
So it’s not really a Terraform bug, and I doubt Hashicorp would do anything to address this. For now we work around it with -parallelism=1
but that slows down the whole configuration
Another use-case is AppSync resources, you can only modify each AppSync API in serial, so if you have for_each on appsync resolvers or data source you get failures (AWS provider has no rate limiting retry logic as it’s a new service and this is always forgotten )
the provider might be able to be smarter about how it schedules them… is there an api to check the rate limit?
for_each is still relatively new also… i feel like this is a good issue to open either way
yeah. That would be ideal. There's no AWS API to check the rate limit. I wonder whether the AWS provider uses the AWS SDK's built-in retry logic or rolls its own. The AWS SDK retry logic is really dumb, it's plain exponential backoff w/ jitter. So you end up with cases like this where you are retrying every 120 seconds when you could be retrying every 10
That’s fair. I’ll open a bug. I’m not hopeful though.
resource aws_resource foo {
count = n
depends_on = [ aws_resource.foo[n-1] ]
}
I wonder if it works with count
heh. yeah, but if there’s some discussion it would be worth it
maybe another approach would be to expose retry logic to every resource as a core feature so you can override it some… retry_method and retriable_errors or somesuch
I hastily wrote up this feature request – per-resource parallelism https://github.com/hashicorp/terraform/issues/28152
Current Terraform Version 0.14 Use-cases Sometimes, providers have limitations, or the backend API has a limitation, regarding parallel modification of a certain resource. Adding a lifecycle { para…
fwiw, you can avoid the deadlock in your example by threading the bucket name to the public access block from the bucket policy…
resource "aws_s3_bucket" "default" {
  bucket = "mybucket"
}
resource "aws_s3_bucket_policy" "default" {
  bucket = aws_s3_bucket.default.id
  policy = ...
}
resource "aws_s3_public_access_block" "default" {
  bucket = aws_s3_bucket_policy.default.bucket
  ...
}
2021-03-16
does anyone have an example of tf variable validation to make sure a date is in the format YYYY-MM-DD?
is the below valid?
variable "expiry_date" {
description = "The date you wish the certificate to expire."
type = string
validation {
condition = regex("(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)")
error_message = "The expiry_date value must be in the format YYYY-MM-DD."
}
}
Looks almost correct — You need to specify var.expiry_date
in the condition for the regex to be run on that input variable.
i fixed using …
variable "expiry_date" {
description = "The date you wish the certificate to expire."
type = string
validation {
condition = length(regexall("(\\d\\d\\d\\d)-(\\d\\d)-(\\d\\d)", var.expiry_date)) > 0
error_message = "The expiry_date value must be in the format YYYY-MM-DD."
}
}
Hello guys! I joined this Slack recently since I'm starting to use the CloudPosse module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment. I'm very satisfied with the module, but I think a feature is missing. I've opened an issue for it and would be glad to discuss it and help push it forward if I'm doing things well.
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
@Florian SILVA thanks, your contributions are very welcome
Thank you @Andriy Knysh (Cloud Posse). The issue that I opened is the following: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues/174 I'd appreciate feedback when someone has time. I'm not sure yet how to resolve it best. :)
In case anyone is interested, I published a module on terraform registry that provides an alternative method of integrating provisioned state with external tools like helm and kustomize: https://github.com/schollii/terraform-local-gen-files. API is still alpha, although I have made use of it in a couple projects and I really like the workflow it supports. So any feedback welcome!
Nice!
This is something I have wanted to do more of; however, I have trouble reconciling it with the gitops patterns we follow
The end result needs to be that it opens a PR against the repo with the changes
(it's that last part I haven't looked into)
TIL: The terraform (cli) does some interesting caching.
Error: Failed to download module
Could not download module "pr-16696-acme-migrate"
(pr-16696-acme-migrate.tf.json:3) source code from
"git::<https://github.com/acme/repo.git?ref=>....": subdir ".terraform" not found
- Terraform downloads all the module sources from git to .terraform/
- The first clone is always a deep clone, with all files (including all dot files)
- The next time terraform encounters a module source and there's a "cache hit" on the local filesystem, it does a copy of all the files, but ignores all dot files
- If (like we did) you happen to have a .terraform directory with terraform code for a micro service repo, this "dot file" was ignored.
- Renaming .terraform to terraform resolved the problem.
2021-03-17
Hi, I am facing some issues with an output value even after using a depends_on block
I'm provisioning PrivateLink on MongoDB Atlas and require a connection string; following their GitHub example I created my script, but it fails at the output step.
I'm working on upgrading Terraform from 0.12 to 0.13 and it is telling me that it will make the following change. Also, I'm upgrading the AWS provider to >= 3
# module.redirect_cert.aws_acm_certificate_validation.cert[0] must be replaced
-/+ resource "aws_acm_certificate_validation" "cert" {
certificate_arn = "arn:aws:acm:us-east-1:000:certificate/b55ddee7-8d98-4bf2-93eb-0029cb3e8929"
~ id = "2020-10-28 18:31:37 +0000 UTC" -> (known after apply)
~ validation_record_fqdns = [ # forces replacement
+ "_2b63a2227feb97338346b0920e49818b.xxx.com",
+ "_423e90cf36285adac5ee4213289e73ab.xxx.com",
]
}
The validation records exist in both AWS and the terraform state, but not in the aws_acm_certificate_validation. I've read the documentation for upgrading the AWS provider to 3 and it mentions it should be ok.
I’m uncertain what will happen if I apply this. Can anyone help confirm what will happen if I do apply this change? My biggest concern is that it doesn’t do anything to my cert.
In case anyone has this question. The answer is no, it will not delete the certificate.
i am trying to perform client auth on the nginx ingress controller using a CA, server cert and client cert created via Terraform
does anyone know how i can get the server cert created with tls_locally_signed_cert to include the CA in the chain?
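In case it's useful, the usual trick (a sketch; the resource names are assumptions) is that tls_locally_signed_cert only returns the leaf cert_pem, so you concatenate the CA's PEM onto it yourself to hand a full chain to the ingress secret:
locals {
  server_cert_chain = join("", [
    tls_locally_signed_cert.server.cert_pem,
    tls_self_signed_cert.ca.cert_pem,
  ])
}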
Does anyone here use TFE/TFC internally? How do you manage module versions in Github and test new releases?
You can look at the way that Cloud Posse does module version tagging + testing for any of their modules as a good way to accomplish this.
tl;dr for that topic is:
- We use the release-drafter GH action to automate tagging / releases: https://github.com/cloudposse/terraform-datadog-monitor/blob/master/.github/workflows/auto-release.yml
- And we use terratest to accomplish module tests: https://github.com/cloudposse/terraform-datadog-monitor/blob/master/test/src/examples_complete_test.go
Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor
The test-harness setup can be easily reused to make that repeatable across many module repos: https://github.com/cloudposse/test-harness
Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness
oh this test harness looks interesting. How do you use it though?
There is no good docs to follow, but if you look at any cloud posse module then you can reverse engineer the setup and go from there.
I just upgraded a terraform module to TF13 by running terraform 0.13upgrade. I created a versions.tf file with the following content:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
helm = {
source = "hashicorp/helm"
}
kubernetes = {
source = "hashicorp/kubernetes"
}
}
required_version = ">= 0.13"
}
When I publish this to TFE, I get the following error:
Error: error loading module: Unsuitable value type: Unsuitable value: string required (in versions.tf line 3)
I'm not sure what this error alludes to; I've checked other public terraform modules with the same file and I don't notice anything different
what is the version of TFE?
how do I find the version? We do have several versions of terraform available
TFE v201910-1
terraform {
required_providers {
tfe = {
version = "~> 0.24.0"
}
}
}
should be similar to this ^^^
thats required for a module?
oh it should be in the root folder
I’m confused - this is a standalone terraform module
- how come I have to add the tfe provider
and it is in the 'root' folder already
tfe
not seen in the code you posted
or I missed something
yeah i’m trying to publish a module to terraform enterprise
what is in version.tf?
oh ic, tfe
is not needed
right..
sorry, we should look at version.tf
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
helm = {
source = "hashicorp/helm"
}
kubernetes = {
source = "hashicorp/kubernetes"
}
}
required_version = ">= 0.13"
}
can I run the code on my local laptop?
did you enable DEBUG?
v0.15.0-beta2 UPGRADE NOTES: The output of terraform validate -json has been extended to include a code snippet object for each diagnostic. If present, this object contains an excerpt of the source code which triggered the diagnostic. Existing fields in the JSON output remain the same as before. (#28057) ENHANCEMENTS: config: Improved type…
The motivation for this work is to unify the diagnostic rendering path, ensuring that there is one standard representation of a renderable diagnostic. This means that commands which emit JSON diagn…
What’s the best way to have terraform start tracking an s3 bucket that was created in the console (and has data in it already)? The terraform has a definition for the s3 bucket but is currently erroring because of BucketAlreadyOwnedByYou
is this something I can accomplish with the terraform state
command?
terraform import
oh nice, thanks!
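For reference, a minimal sketch of that import (the resource address and bucket name are hypothetical):
terraform import aws_s3_bucket.assets my-existing-bucket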
Anyone know how I can update a parameter for an existing object? I need to modify this:
obj = {
one = {
two = {
foo = bar
}
},
three = "four"
}
Into this:
obj = {
one = {
two = {
foo = bar,
biz = baz
}
},
three = "four"
}
Terraform data structures are immutable so you need to create a new local
using the previous object. For example:
new_data = merge(var.old_data, {
  one = merge(var.old_data.one, {
    two = merge(var.old_data.one.two, {
      biz = "baz"
    })
  })
})
I didn’t test the above so I might be off in some syntax, but you get the idea.
Also if you need true deep merging you cannot do it in terraform core with merge
The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils
Hey sometimes I get asked why I prefer TF over CF (cloudformation). I’m curious what others’ reasons are. Mine, after using CF a couple months (so not nearly as much as terraform which been using almost 3 years):
• CF difficult to modularize (nesting doesn’t cut it and IIRC nesting is discouraged)
• CF has clunky template language
• Planning is shallow and it is often difficult to know why something will be changed
• Can get stuck in messed up state eg upgrade failed then rollback failed too
• Infra upgrade-as-a-transaction (atomic, all or nothing) just feels weird
• Having to load templates to s3 is annoying
Could probably find more but that's all that comes to mind just now.
lack of data sources to lookup values from existing resources
no locals or other method for easily generating/reusing intermediate values
very limited functions and expressions available in the DSL
it’s yaml, but not really, with syntax that isn’t really yaml, and syntax that IS yaml but isn’t supported (anchors)
parameter validation is downright annoying when using the AWS types and trying to make a variable optional (try using the type AWS::EC2::KeyPair::KeyName and making the keypair an optional parameter…)
lack of data sources to lookup values from existing resources
this one blows my mind every time
terraform isn’t just for aws!
and this is pretty key. terraform brings together or covers most/all your sources not just the AWS pieces.
All good ones!
• exported outputs of stacks are not allowed to change
• lacks a central place where templates can be shared
No escape hatches to import/export/manually edit resources / statefiles
oh that’s a good one @Alex Jurkiewicz! state manipulation commands are huge. so many times i’ve wished i could just move stuff around in the cfn stack to avoid resource cycles or get myself out of a broken stack launch/update
We have some redshift clusters managed as nested stacks in cloudformation, and they are wedged in a completely inoperable state. It’s impossible to make ANY change to the resources within. We’ve had to fall back to importing them into a new Terraform configuration and setting a “deny all” stack update policy in cloudformation. But those stacks will live forever as some busted thing
These are great
I feel like we should to make a cheap website called tf-vs-cf.com that just lists these things for easily sending to clients, managers, new engineers, etc.
@Matt Gowie let’s do it! i’ve created an org and a repo to host the static site and picked up the domain tfvscf.com (no dashes ).
we can do a static site hosted on github pages.
for now maybe add these topics as issues? and then figure out the site design as we go.
all contributors are welcome!
Hosting the website, tfvscf.com. Contribute to tfvscf/tfvscf.com development by creating an account on GitHub.
Hahah you went and did it @managedkaos — cool. I’ll try to contribute over the weekend
let me know your github ID and i’ll send an invite
Would be happy to host on Amplify unless someone has a better option. I know GH pages is a good option… but I have a whole simple Amplify setup that I like and use for customer sites. That is how I cheaply host mattgowie.com + masterpoint.io:
module "masterpoint_site" {
source = "git::<https://github.com/masterpointio/terraform-aws-amplify-app.git?ref=master>"
namespace = var.namespace
stage = var.stage
name = "masterpointio"
organization = "masterpointio"
repo = "masterpoint.io"
gh_access_token = local.secrets.gh_access_token
domain_name = "masterpoint.io"
description = "The simple HTML website for Masterpoint Consulting (@masterpointio)."
build_spec_content = data.local_file.masterpoint_build_spec.content
enable_basic_auth_on_master = false
enable_basic_auth_on_develop = true
basic_auth_username = "masterpoint"
basic_auth_password = local.secrets.basic_auth_password
develop_pull_request_preview = true
custom_rules = [{
source = "<https://www.masterpoint.io>"
target = "<https://masterpoint.io>"
status = "301"
condition = null
}]
}
@managedkaos GH is @Gowiem
Terraform and AWS Consultant. AWS Community Builder. Owner @ Masterpoint Consulting. - Gowiem
yeah that would be cool! i picked this up as a project to do some static site work (which i have been meaning to ramp up on). I’ve also been wanting to try amplify so yep, I’m open!
Matt that’s awesome. Is it free? Netlify is pretty much my go to for static site hosting and has cicd and is free. I thought amplify with aws resources would cost?
Thank you guys for this, I was looking into make it based on @Matt Gowie suggestion.
@sheldonh it’s basically free. With Amplify, you pay per build I’m pretty sure, but the hosting is free? Don’t quote me. I do know that I pay somewhere between nothing to less than 50 cents a month for my two static sites that I manage on Amplify.
Got you. So it's free because it falls within the AWS free tier, not free as in a free service tier, right? Asking because Netlify is pretty much the gold standard for static-website ease of use when I looked in the past, but I'm open to trying something new for future projects if it makes sense, especially integrated with AWS.
I think it's free/next-to-free because it's just that cheap.
Static Web Hosting - Pay as you go
Build & Deploy
$0.01 per build minute
HOSTING
$0.023 per GB stored per month
$0.15 per GB served
https://aws.amazon.com/amplify/pricing/
With AWS Amplify, you pay only for what you use with no minimum fees or mandatory service usage. Get started for free with the AWS Free Tier.
I don’t even know if it’s within free tier… I just think they charge you for each build minute and for the large majority of FE sites build minutes are extremely low.
they have some good examples on the pricing page
This example:
A startup team with 5 developers have an app that has 300 daily active users. The team commits code 2 times per day.
comes out to $8/mo.
so a site that is waaaaay less than even that will be crazy cheap
i think the benefit to Netlify is you are not tied to AWS. i used it a while back and it was easy to onboard from GitHub. I can see the AWS onboarding being overwhelming if you’re not already using it.
For an everyday dev that just wants to deploy a static site, yeah use Netlify. Easy decision.
If you are already on AWS or at least know how to set up an account and maybe already have some other workloads there, perhaps maybe consider Amplify?
anyone knows how to fix the following error ?
Failed to load state: Terraform 0.14.5 does not support state version 4, please update.
Update your version of terraform
i did that with tfenv, but it still doesn't work at all
2021-03-18
Here terraform confuses me a bit, I source a module like this:
module "global-state" {
  source       = "../../modules/s3state-backend"
  profile      = "cloudposse-rocks"
  customername = "cloudposseisawesome"
}
I need to pass the values of the variables in the main Terraform file like this; is there a handier way to do it?
because the values are variable, like cloudposse-rocksforever for example
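one common pattern is to promote the changing value to a root-module variable and set it per environment in a tfvars file; a rough sketch (the variable name and tfvars contents are just examples):
variable "customername" {
  type        = string
  description = "Customer name passed through to the state backend module"
}

module "global-state" {
  source       = "../../modules/s3state-backend"
  profile      = "cloudposse-rocks"
  customername = var.customername
}

# terraform.tfvars (or a per-environment -var-file):
# customername = "cloudposseisawesome"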
There was a really well put together document on Cloudposse's GitHub a little while back that talked about Terraform modules. In a gist, it basically said "we will do our best to make a well-developed module, but in the end you might need to add your own secret sauce to make it work well for your use case." I thought I had bookmarked it but I guess not. If anyone knows what readme I was referring to and has the link handy, I'd really appreciate it if you posted it! Until then, I'll keep looking around for it.
Anybody else hit this? KMS key policy re ordering - https://github.com/hashicorp/terraform-provider-aws/issues/11801
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
FYI this just looks like a KMS issue that can be easily replicated in the UI. KMS key policies are saved in a random order no matter how they are applied/saved
nvm, done some digging and added a comment to that ticket.
Also opened an AWS support request, will see what they say.
default_tags is coming in v3.33.0 as an attribute of the aws provider… https://github.com/hashicorp/terraform-provider-aws/pull/17974
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
note it's public preview and currently limited to aws_subnet and aws_vpc
…
provider: New default_tags argument as a public preview for applying tags across all resources under a provider. Support for the functionality must be added to individual resources in the codebase and is only implemented for the aws_subnet and aws_vpc resources at this time. Until a general availability announcement, no compatibility promises are made with these provider arguments and their functionality.
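for anyone curious, usage looks roughly like this once you're on >= 3.33.0 (the tag values here are made up, and remember it only applies to aws_vpc and aws_subnet for now):
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      Environment = "dev"       # example values only
      ManagedBy   = "terraform"
    }
  }
}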
Poor @Imanuel Mizrachi just implemented a Cloudrail rule to alert if no tags were added to a resource, now he’ll need to see if default_tags is set and know which resources are impacted
How do I write the Terraform properly so that it provisions the security groups I created manually?
In the AWS console, I manually provisioned the security rules below for Elasticsearch. There are three VPCs, connected by a transit gateway. Elasticsearch is installed in VPC-A.
Inbound rules:
Type Protocol Port range Source
All traffic All All 40.10.0.0/16 (VPC-A)
All traffic All All 20.10.0.0/16 (VPC-B)
All traffic All All 30.10.0.0/16 (VPC-C)
Outbound rules:
Type Protocol Port range Destination
All traffic All All 0.0.0.0/0
But, the terraform code below is not able to provision the above security groups.
resource "aws_security_group" "shared-elasticsearch-sg" {
name = var.name_sg
vpc_id = data.terraform_remote_state.vpc-A.outputs.vpc_id
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = [data.terraform_remote_state.vpc-A.outputs.vpc_cidr_block,
data.terraform_remote_state.vpc-B.outputs.vpc_cidr_block,
data.terraform_remote_state.vpc-C.outputs.vpc_cidr_block]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = var.name_sg
}
}
module "elasticsearch" {
source = "git::<https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1>"
security_groups = [aws_security_group.shared-elasticsearch-sg.id,
data.terraform_remote_state.vpc-A.outputs.default_security_group_id]
vpc_id = data.terraform_remote_state.vpc-A.outputs.vpc_id
......
}
The above code provisions the security rules below:
Inbound rules:
Type Protocol Port range Source
All TCP TCP 0 - 65535 sg-0288988f38d2007be / shared-elasticSearch-sg
All TCP TCP 0 - 65535 sg-0893dfcdc1be34c63 / default
Outbound rules:
Type Protocol Port range Destination
All TCP TCP 0 - 65535 0.0.0.0/0
Security rules of sg-0288988f38d2007be / shared-elasticSearch-sg
Type Protocol Port range Source
All traffic All All 40.10.0.0/16 (VPC-A)
All traffic All All 20.10.0.0/16 (VPC-B)
All traffic All All 30.10.0.0/16 (VPC-C)
Outbound rules:
Type Protocol Port range Destination
All traffic All All 0.0.0.0/0
The security groups provisioned by the Terraform code do not work: from VPC-B and VPC-C, I cannot reach Elasticsearch in VPC-A. How do I write the Terraform properly so that it provisions the security groups I created manually?
What are the rules in the default SG?
I do not know. Actually, I do not need default SG.
I only need the one SG which I created manually. "data.terraform_remote_state.vpc-A.outputs.default_security_group_id" is included in the Terraform code, but I do not actually need it. The problem would be solved if the Terraform code provisioned an SG with inbound rules of "Type: All traffic, Protocol: All, Port: All", not "Type: All TCP, Protocol: TCP, Port range: 0 - 65535".
Sorry, I may have read this wrong, but if these were created in the console, were they properly imported into Terraform and the state files to the point where you could run terraform plan and get no changes? Also, when talking about VPCs, you don't mention peering. I realize these are likely example CIDRs, but you're not using random internal CIDRs. I clicked on this a couple days ago and wanted to respond before clicking another thread. Hope you've made some progress.
2021-03-19
@loren https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.33.0 with provider: New default_tags
for applying tags across all resources under a provider
NOTES: data-source/aws_vpc_endpoint_service: The service_type argument filtering has been switched from client-side to new EC2 API functionality (#17641) provider: New default_tags argument as a p…
Hi all, I would like to query an existing security group ID and assign it to an EC2 instance
for example:
Hi folks, can someone help me with an issue regarding gitlab provider? I’d like to try setting up a vanilla gitlab.com workspace from scratch. Right now the repo is completely empty, only one intial (owner) account exists. I tried using his personal access token to create a first gitlab_group resource, but I’m only getting 403 forbidden errors. Am I missing something or is there another requirement beforehand?
the exact error including the api path looks like this:
Error: POST <https://gitlab.com/api/v4/groups>: 403 {message: 403 Forbidden}
turns out, I can import a manually created group as a resource into the Terraform state when I set it to public. Looks like the free GitLab product is not exactly suitable for this purpose.
data "aws_security_groups" "cloudposse-ips" {
tags = {
Name = "cloudposse-ips"
}
}
vpc_security_group_ids = ["data.aws_security_groups.cloudposse-ips.ids"]
You do not need to quote it: vpc_security_group_ids = [data.aws_security_groups.cloudposse-ips.ids]
this does not seem to work
though the security group gets queried correctly:
data "aws_security_groups" "cloudposse-ips" {
arns = [
"arn:aws:ec2:eu-west-1:564923036937:security-group/sg-0d5e812c1bb1c471a",
]
id = "eu-west-1"
ids = [
"sg-0d5e812c1bb1c471a",
]
tags = {
"Name" = "cloudposse-ips"
}
vpc_ids = [
"vpc-0baf4791f3db9bd8c",
]
}
Should setting the following environment variables in the shell (zsh) ensure that the variables are set in the azurerm provider section?
export ARM_CLIENT_ID=aaaa
export ARM_CLIENT_SECRET=bbbb
export ARM_SUBSCRIPTION_ID=cccc
export ARM_TENANT_ID=dddd
provider "azurerm" {
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.client_secret
tenant_id = var.tenant_id
version = ">=2.51.0"
features {}
}
no. Terraform doesn’t know how to map those env vars to your Terraform variables
Terraform execution engine can’t access environment variables at all, it’s up to you to explicitly specify every input variable
should be using TF_VAR_client_id when running locally?
yes, or pass -var client_id=foo to your terraform plan/apply, or create a tfvars file
there are several ways to specify input variables https://www.terraform.io/docs/language/values/variables.html#assigning-values-to-root-module-variables
something very odd happens in my environment - it's different between the init and plan steps
if i set ENV variables in the ARM_CLIENT_ID format, the init works, but then plan doesn't
then I set TF_VAR_client_id and the others, and then the plan works
and the init stage only works if I have the ARM_XXXXX_ID ENV variables in place
i guess the azurerm provider reads those environment variables directly as a configuration source
check the docs for it
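if that's what's going on, a simpler setup is to drop the Terraform variables entirely and let the provider pick the ARM_* variables up itself; a minimal sketch:
provider "azurerm" {
  features {}
  # With no credentials set here, the azurerm provider reads ARM_CLIENT_ID,
  # ARM_CLIENT_SECRET, ARM_SUBSCRIPTION_ID and ARM_TENANT_ID from the
  # environment, so the TF_VAR_* variables are no longer needed.
}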
when I specify it as such: vpc_security_group_ids = ["${data.aws_security_group.cloudposse-ips.ids}"]
Have you tried vpc_security_group_ids = [data.aws_security_group.cloudposse-ips.ids]
that works, I was confused how to make a list
but that seems to work as well ! thx
it works
but then I get a Interpolation warning
Find security vulnerabilities, compliance issues, and infrastructure misconfigurations early in the development cycle of your infrastructure-as-code with KICS by Checkmarx. - Checkmarx/kics
2021-03-20
Hi friends, I’m creating an ec2 instance with https://github.com/cloudposse/terraform-aws-ec2-instance and the ssh keypair with https://github.com/cloudposse/terraform-aws-key-pair. the ssh connection seems to be timing out (no authorization error).
Is there some non-obvious, not-default setting I need to use to get the networking bits to work?
Terraform module for provisioning a general purpose EC2 host - cloudposse/terraform-aws-ec2-instance
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
What are the SG rules on ec2
I have all the rules listed here: https://github.com/cloudposse/terraform-aws-ec2-instance#example-with-additional-volumes-and-eip thought most importantly:
{
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
Is it in a private subnet or a public one? If private, do you have a NAT?
instance has a public ip.
Check the route table for the subnet; make sure 0.0.0.0/0 isn't a black hole for some reason
I am slightly suspicious of
100 All traffic All All 0.0.0.0/0 Allow
* All traffic All All 0.0.0.0/0 Deny
does * mean apply to all or like n/a rule applies to nothing?
Looks good to me, can you post your terraform
yeah, one minute
Contribute to discentem/mdmrocketship development by creating an account on GitHub.
I’m sure I am making some dumb mistake with the networking bits
I think the subnet has no egress
What is route 0.0.0.0 pointed to if there is one
You're using a private subnet but you have NAT set to false
ah. so should be nat_gateway_enable = true
?
Yea
I would check out this module too https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest
It is probably the best module ever written
I love cloudpossee modules but nothing beats that one for creating vpcs and subnets and nats etc
I will check it out
I usually call it like this
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "gitlab-runner-vpc"
cidr = var.vpc_cidr_block
azs = local.vpc_azs
public_subnets = local.vpc_public_subnets
private_subnets = local.vpc_private_subnets
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
enable_dns_hostnames = true
enable_dns_support = true
enable_s3_endpoint = false
enable_dynamodb_endpoint = false
manage_default_security_group = true
default_security_group_ingress = []
default_security_group_egress = []
tags = local.tags
}
then ill do something like this
locals {
tags = {
"environment" = "gitlab-runners"
}
vpc_public_subnets = [
cidrsubnet(var.vpc_cidr_block, 8, 0),
cidrsubnet(var.vpc_cidr_block, 8, 1),
cidrsubnet(var.vpc_cidr_block, 8, 2),
]
vpc_private_subnets = [
cidrsubnet(var.vpc_cidr_block, 2, 1),
cidrsubnet(var.vpc_cidr_block, 2, 2),
cidrsubnet(var.vpc_cidr_block, 2, 3),
]
vpc_azs = [
"${var.aws_region}a",
"${var.aws_region}b",
"${var.aws_region}c",
]
}
variable "aws_region" {
type = string
default = "us-east-1"
}
variable "vpc_cidr_block" {
type = string
default = "10.15.0.0/16"
}
output "public_subnets" {
value = module.vpc.public_subnets
}
output "private_subnets" {
value = module.vpc.private_subnets
}
output "vpc_id" {
value = module.vpc.vpc_id
}
output "vpc_cidr_block" {
value = module.vpc.vpc_cidr_block
}
hmm. I think there is also something else wrong. should I put the instance on the public subnet heh?
to get a single subnet you could do module.vpc.private_subnets[1]
okay it works if I put the instance on the public subnet. is that bad for security though?
yup
you should just use a nat
or dont use ssh and use ssm
okay I’m going to switch to vpc module you mentioned.
those are your two options if you want to access the ec2 in a private subnet
thanks for answering my obviously noob questions
no prob
hmm, would this be module.vpc.private_subnets[1]
or module.vpc.private_subnets[0]
0 would be in az a and 1 would be in az b
Ex us-east-1b
if the connection is timing out you have a security group problem
makes sense. so I’m still not understanding something about nat, after switching to the aws vpc module.
In this case it was definitely not having a route out to the internet; the private subnet had no routes to get out
I think i am having the same or similar problem still: https://github.com/discentem/mdmrocketship/blob/main/main.tf
Ok maybe you had two problems
probably
If there is now a route to the nat on that subnets route table then yea I would double check your SG
i could also just not use the ec2 module and use normal resource. that might be abstracting too much right now
I’m still lacking understanding on my issue (due to little networking experience), especially given the advice to keep the host on a private subnet.
I found this article suggesting a jumphost setup but.. is that required or can I do it directly somehow with an internet gateway/nat gateway? https://dev.betterdoc.org/infrastructure/2020/02/04/setting-up-a-nat-gateway-on-aws-using-terraform.html
I would use SSM instead - it installs an agent on your EC2 and you can access it far more securely.
I might have missed this - but why do you want to have SSH access? Is it just to manage the host? If so, AWS SSM would be your solution.
Opening SSH to the world on a public subnet is a big no-no.
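for reference, the SSM route mostly boils down to an instance profile with the AmazonSSMManagedInstanceCore managed policy attached; a rough sketch (resource names are made up):
resource "aws_iam_role" "ssm" {
  name = "ec2-ssm-example" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ssm.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ssm" {
  name = "ec2-ssm-example"
  role = aws_iam_role.ssm.name
}
then point the instance's IAM instance profile argument at aws_iam_instance_profile.ssm.name and you can drop port 22 entirely.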
Yeah just to manage it. Okay, I’ll go with ssm I guess. Any material you can share on why it’s more secure?
There's a bunch of it online, but the idea is this: if you have SSH open to the world, then you're only relying on the authentication mechanism. Specifically, you're hoping nobody gets your SSH key pair. So, for a hacker, they just need to get that file somehow and they can get into your system (and there are a lot of ways for them to get the file).
However, when you put the server inside a private subnet, then now they need to actually establish authentication via AWS’s IAM. In most organizations, that is far more secure, as it requires MFA and may have additional mechanisms attached to it.
Makes sense, thanks for the explanation!
2021-03-21
cdktf 0.2 release… https://github.com/hashicorp/terraform-cdk/releases/tag/v0.2.0
Breaking Changes Generated classes for modules from the registry will change - see this comment for more details Phased out testing and support for Terraform 0.12. It’s likely to still work for no…
2021-03-22
Hi All!
What is the best approach to integrate a security audit step on a Terraform pipeline in Jenkins using a third-party provider?
- Should the provider supply a Jenkins plugin that adds an extra step having access to a repo with the Terraform plan file output?
- Should the provider supply a Jenkins shared library that can be imported in any existing pipeline, calling a dedicated function with the Terraform plan output or path?
- Should the provider supply a docker image that exposes a rest API endpoint receiving the Terraform plan output?
Not sure I’m fully understanding your question, but generally I don’t want something that is locked to Jenkins. I prefer something like a CLI that I can run in Jenkins, but also in any other CI tool or even locally
But this is one of those things where if you ask 10 people you'll get ~10~ 1 different answers
I’m with @roth.andy on this one - I want to be able to run everything my CI system can run (regardless of the tool in choice) outside of CI and wrap the CI system du jour around it later/separately. As far as I’m concerned, CI is just there to run code and tell me if it worked, rather than being forced to use Jenkins or whatever to be able to make use of another tool.
I want to be able to run everything my CI system can run
Yep. We even make Jenkins use the same commands that we'd use locally. Jenkins doesn't run some long script, it literally just runs task test, task deliver, task deploy with environment variables as parameters.
Ok, that's an interesting thought. Does a CLI tool equal a running container?
Most third party providers I’ve used have a CLI tool that accepts an api token, that takes care of interfacing with the SaaS service in order to upload your scans/data/whatever so they can do their thing. They might also provide that CLI tool in a docker image as a convenience, but there’s nothing requiring that that image be used. We have our own docker image that is used as the Jenkins execution environment that includes the CLI tool
For example: fossa-cli
Fast, portable and reliable dependency analysis for any codebase. Supports license & vulnerability scanning for large monoliths. Language-agnostic; integrates with 20+ build systems. - fossas…
Introduction In addition to the automatic scans run periodically by Bridgecrew, you can run scans from a command line. This allows you to: Skip particular checks on Terraform Definition Blocks you chooseSkip particular checks on all resourcesRun only specific checks Installation Running Scans by C…
No, running from the CLI != running a container, but there shouldn’t be anything that you do that prevents containerising that tool, and bonus points for making a sane and regularly updated container available, since it is a common use case, but again, not everyone is running containers by default.
@michael.l
I thought I saw something in Cloudposse TF to limit the length of resource names (ie for when we generate a label longer than AWS allows us to name a resource).. but I can’t find it
module.this
I’m on mobile, IIRC check context.tf or the module it’s copied from.
the context.tf
ty
oic, this is only in terraform-null-label, and never made it into terraform-label
pretty sure I asked this same question here a couple months ago
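for anyone finding this later: the null-label input is id_length_limit; a rough sketch (the version pin and values here are just examples):
module "label" {
  source  = "cloudposse/label/null"
  version = "0.24.1" # example pin

  namespace       = "eg"
  stage           = "prod"
  name            = "a-very-long-component-name"
  id_length_limit = 32 # truncates module.label.id and appends a short hash to keep it unique
}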
2021-03-23
Terraform Cloud now supports the README.md file and output.tf file, and their values are shown in the UI. Pretty neat.
lol
they did. I wish they allowed installing the AWS CLI there
local_exec { curl......}
no?
not sure, we can use http provider instead.
data "http" "example" {
url = "<https://checkpoint-api.hashicorp.com/v1/check/terraform>"
# Optional request headers
request_headers = {
Accept = "application/json"
}
}
Yeah… I saw this for the first time the other day. I laughed because what a completely useless feature for them to build instead of some of the other things they could have built. They don’t touch the product in what seems like 12+ months and then they build something that just shows the README. Yeah… not that important to me to be honest.
well
null_resource.aws (local-exec): Executing: ["/bin/sh" "-c" "aws"]
null_resource.aws (local-exec): usage:
null_resource.aws (local-exec): Note: AWS CLI version 2, the latest major version of the AWS CLI, is now stable and recommended for general use. For more information, see the AWS CLI version 2 installation instructions at: <https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html>
null_resource.aws (local-exec): usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
null_resource.aws (local-exec): To see help text, you can run:
null_resource.aws (local-exec): aws help
null_resource.aws (local-exec): aws <command> help
null_resource.aws (local-exec): aws <command> <subcommand> help
null_resource.aws (local-exec): aws: error: the following arguments are required: command
null_resource.aws: Creation complete after 2s [id=7470069141691516634]
looks like the AWS CLI is installed on Terraform runners in TFC, I can rest in peace now
lol
run aws s3 ls ... and see if you see something
I did
hahaha
I would imagine they have them locked down
I guess it did not work?
null_resource.aws_v2 (local-exec): Executing: ["/bin/sh" "-c" "aws s3 ls"]
null_resource.aws_v2 (local-exec): 2020-12-01 19:09:27 xxxxx-eu-central-1-tf-state
null_resource.aws_v2 (local-exec): 2020-10-15 14:05:37 airflow-xxxxx
interesting
Masha’allah, is there a blog about this @Mohammed Yahya
here's a screenshot, I hope it helps
Our Engineering team decided to share how they’re doing CI/CD for Terraform with Jenkins, hopefully this could be helpful for any one here: https://indeni.com/blog/terraform-goes-hand-in-hand-with-ci-cd/
In today’s competitive market, our success depends on how quickly we can innovate and deliver value to customers. It’s all about speeding time to market […]
Thanks for sharing, give good vibes about devsecops
hey all - first time caller, long time listener :slightly_smiling_face: fyi on something that was vexing me in the terraform-aws-elasticsearch module: being a smart user, I also use terraform-null-label, applying it as so to a security group I’ll be using for my elasticsearch domains:
resource "aws_security_group" "es_internal" {
description = "internal traffic"
name = module.label.id
tags = module.label.tags
vpc_id = data.aws_vpc.main.id
}
Best bet is to dig through our modules for more examples
in your case, you need to add attributes for disambiguation
see the attributes argument
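something like this, roughly (the version pin and attribute value are just examples):
module "es_sg_label" {
  source  = "cloudposse/label/null"
  version = "0.24.1" # example pin

  context    = module.label.context
  attributes = ["es-internal"] # yields something like <namespace>-<stage>-<name>-es-internal
}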
in the terraform module, I use context = module.label.context. The end result? The terraform module tries to create a security group with the same name as the one I already created, so it errors out with an AWS error of "security group with that name already created!"
interesting side effect of using best practices
Quick catch-up: any progress by folks on a better Azure Pipelines Terraform workflow? I could use multistage pipelines or potentially use another tool like Terraform Cloud, env0, or Scalr, but Azure Pipelines will be the easiest with no approval required.
Any reason to push for the others right now, or is Azure Pipelines serving others well right now with multistage YAML pipelines?
Hey there! I am the DevOps Advocate with env0. We'd be glad to set up a demo for you to show you our value over ADO by itself.
Very little time right now per onboarding. Any comparison on site?
Ideally knowing modules + state management + open policy checks + PR integration with azure devops is key + reasonable free tier to get started and show value. Very small team and experience I have is with terraform cloud mostly. Azure devops is honestly most likely outcome but would love any quick value proposition to weigh more heavily when I get time.
Totally understand on the no-time thing. Unfortunately, I don't have anything direct for ADO + env0 built yet; it's on my list. Time is not on my side either. And to be honest, we have GitHub and just finished GitLab, but don't have full PR plan and service checks just yet with ADO. OPA for policy enforcement is absolutely there. We also have a strong free tier, and a very lax POC policy right now, so no worries to anyone on testing it out, proving value. I can get with product and get a date on full ADO completion. In the meantime, I have these 3 videos (10.5 mins total) that illustrate our full use cases from end to end.
https://m.youtube.com/playlist?list=PLFFBGbxfEa7ZPUvNWIAvdLpAtXpK_fjSm
Share your videos with friends, family, and the world
Thank you! Normally no time would be an excuse but I onboarded this week and so it’s literally true . I’ll keep in in mind for sure. Ty!
Congrats on the new gig! Best of luck with everything. Feel free to reach out if anything looks good, or if you ever think we could help with anything.
Back to this thread: https://sweetops.slack.com/archives/CB6GHNLG0/p1611948097136100. I figured out how to write an AWS policy that only requires MFA for human users in a group. It's pretty cool. You have to enter your MFA code when you assume the role. This closes a HUGE security risk for cloud-based companies.
Question for AWS users. Has anyone figured out how to use cli MFA with terraform?
data "aws_caller_identity" "current" {
}
data "aws_iam_policy_document" "assume_role_policy" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
resources = "${var.assume_role_arns}"
}
}
resource "aws_iam_policy" "assume_role_group" {
name = "${var.assume_role_policy_name}"
policy = "${var.enable_mfa == "true" ? data.aws_iam_policy_document.require_mfa_for_assume_role.json : data.aws_iam_policy_document.assume_role_policy.json}"
}
resource "aws_iam_policy" "require_mfa" {
count = "${var.enable_mfa == "true" ? 1 : 0}"
name = "${var.require_mfa_policy_name}"
policy = "${data.aws_iam_policy_document.mfa_policy.json}"
}
resource "aws_iam_group" "assume_role_group" {
name = "${var.assume_role_group_name}"
}
resource "aws_iam_group_policy_attachment" "assume_role_attach" {
group = "${aws_iam_group.assume_role_group.name}"
policy_arn = "${aws_iam_policy.assume_role_group.arn}"
}
resource "aws_iam_group_policy_attachment" "mfa_requirement_attach" {
count = "${var.enable_mfa == "true" ? 1 : 0}"
group = "${aws_iam_group.assume_role_group.name}"
policy_arn = "${aws_iam_policy.require_mfa.arn}"
}
data "aws_iam_policy_document" "require_mfa_for_assume_role" {
statement {
sid = "AllowAssumeRole"
effect = "Allow"
actions = ["sts:AssumeRole"]
resources = "${var.assume_role_arns}"
condition {
test = "BoolIfExists"
variable = "aws:MultiFactorAuthPresent"
values = ["true"]
}
}
}
data "aws_iam_policy_document" "mfa_policy" {
statement {
sid = "AllowManageOwnVirtualMFADevice"
effect = "Allow"
actions = [
"iam:CreateVirtualMFADevice",
"iam:DeleteVirtualMFADevice"
]
resources = [
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:mfa/$${aws:username}",
]
}
statement {
sid = "AllowManageOwnUserMFA"
effect = "Allow"
actions = [
"iam:DeactivateMFADevice",
"iam:EnableMFADevice",
"iam:GetUser",
"iam:ListMFADevices",
"iam:ResyncMFADevice"
]
resources = [
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:user/$${aws:username}",
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:mfa/$${aws:username}"
]
}
statement {
sid = "DenyAllExceptListedIfNoMFA"
effect = "Deny"
not_actions = [
"iam:CreateVirtualMFADevice",
"iam:EnableMFADevice",
"iam:GetUser",
"iam:ListMFADevices",
"iam:ListVirtualMFADevices",
"iam:ResyncMFADevice",
"sts:GetSessionToken"
]
resources = ["*"]
condition {
test = "BoolIfExists"
variable = "aws:MultiFactorAuthPresent"
values = ["false"]
}
}
}
Quiz:
If you use an aws_autoscaling_group with an aws_launch_configuration, without specifying a VPC (that is, AWS is expected to use the default VPC), and without setting associate_public_ip_address,
then do the EC2 instances generated have a public IP address or not? (answer in the thread)
Answer:
Interestingly, associate_public_ip_address defaults to false in a launch config; however, Terraform doesn't actually set the value, and the instances get a public IP anyway.
https://github.com/hashicorp/terraform-provider-aws/issues/1484
If the instance is launched into a default subnet in a default VPC, the default is true. If the instance is launched into a nondefault subnet in a VPC, the default is false
Intuitive eh?
even more
In the "Create Launch Configuration", select "Advanced Details" and look for the "IP Address Type" Section, you'll see:
IP Address Type
Only assign a public IP address to instances launched in the default VPC and subnet. (default)
Assign a public IP address to every instance.
Do not assign a public IP address to any instances. Note: this option only affects instances launched into an Amazon VPC
Yeah, so Terraform isn’t assigning anything, and then AWS follows the default behavior.
The default should be the same for both (false) as it seems to me to be the option with less security risk.
this is like the “default” in the console click-wizard where aws will gladly open your instance to tcp/22 to the world when you launch into a default subnet in the default vpc…
It’s all so secure
the combination of tftest and pytest is really feeling so much nicer and more robust/extensible than terratest… https://github.com/GoogleCloudPlatform/terraform-python-testing-helper
Simple Python test helper for Terraform. Contribute to GoogleCloudPlatform/terraform-python-testing-helper development by creating an account on GitHub.
i think i shared it before, but working with it more lately, and it’s worth sharing again
Do you run a full plan / apply / test lifecycle with this or do you just use it to statically check your TF code?
we’re just starting down this road, but the idea is to run the apply using localstack in CI. or maybe using moto in server mode. moto does more services, but localstack is a little easier to get going
so yes, we’ll have “terraform test configs” that reference the module, create dependent resources, and pass in varying arguments to cover any module logic. and we’ll use pytest/tftest to invoke the apply/destroy for each config, hitting the localstack endpoints
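roughly what the provider block in one of those test configs looks like when pointed at localstack (the endpoint port and the exact argument names are assumptions; they vary a bit across provider versions):
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test" # localstack ignores credentials
  secret_key                  = "test"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    s3  = "http://localhost:4566"
    iam = "http://localhost:4566"
    sts = "http://localhost:4566"
  }
}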
Ah wow you’re going for it.
we can do a bit more, reading back the outputs and assert'ing they match what we expect. but personally i don't find a ton of value in that, as i expect terraform's acceptance tests to validate things like that reasonably well
one “extra” bit i find valuable is being able to actually invoke a lambda this way, to confirm that the packaging is valid. say, if the lambda has dependencies not present in the lambda runtime, then it needs to be in the lambda package. it’s easy to get this packaging wrong. so it is useful to actually invoke the lambda, test the imports (for example), and report all is well or fail
we may also have tftest run a plan -detailed-exitcode subsequent to the apply, to detect persistent diffs and occasional issues with for_each logic. so it'll actually be apply/plan/destroy…
Cool. Extensive! You should write it all up. I’d read.
we’ve been doing similar for a while with terratest, but without localstack. which meant we had to manually run the tests for each pr, and hit real AWS endpoints (and costs) (and cleanup, people forget)
but testing in golang is nowhere near as nice as pytest, plus golang syntax is just not as clean as python, and seems harder for devops folks to pick up
Yeah — I’ve done the using terratests against a test AWS Account thing and then wipe that account on a schedule. Not too bad on costs, but I get your point.
The golang vs python decision is definitely an org to org thing. Neither are perfect, but ya gotta pick one. I like that this is an option that I didn’t know about before though
yeah, i like localstack because i can’t (easily) test terraform modules with a real account with real credentials on all pull requests in public projects. and pretty much all our modules are public. so we needed to do something anyway. we did get it working with terratest, but in the process we found tftest and liked it so much that now we’re planning to switch everything to use it instead
Good stuff
is there a way to enable access logging with this module? https://github.com/cloudposse/terraform-aws-tfstate-backend/blob/793d3f90c25d9f17f4a299be7b13ae5141795345/main.tf#L106
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
anyone here used the Cloudflare modules? Can you give me some backend best practices? cheers
2021-03-24
Appreciate some :thumbsup:s on this AWS provider aws_cognito_user resource addition issue: https://github.com/hashicorp/terraform-provider-aws/issues/4542
Description Currently the aws_cognito has an aws_cognito_user_group resource which represents a group of users. In the AWS IDP console there is an option to create a user, and assign it to groups. …
Simple Terraform test helper
yep, that’s the one i’m talking about here, https://sweetops.slack.com/archives/CB6GHNLG0/p1616536971126300
nice
I saw it somewhere and forgot where,
You need this tool like yesterday. Absolutely what you need when dealing with imports on tf
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
We’re running a survey on the difference between identifying cloud security issues in production, vs in code (“shift left”). It’s a short one, meant to get a high level of understanding of people’s sentiments: https://docs.google.com/forms/d/e/1FAIpQLSc7izchAxnCqkQbdwIBETYX51hGmX_GMdqO9ZnEYSx34V_20Q/viewform?usp=sf_link
We’re giving away free Visa gift cards to help incentivize people for their time. However, I can also share here, that we will be sharing the raw results (minus PII, etc.), and then the conclusions, for the benefit of everyone here who is thinking about “shift left”.
Any comments on the survey are also very welcome.
What is your way of managing security groups through Terraform? I would like to create a module where I specify the list of ports and allowed CIDR blocks multiple times. I can do for_each, but that can only be done for one thing (for example ports); I do not know how to apply for_each to both ports and CIDR blocks. Thanks!
you might need to do some sort of map and then loop over the map in your for_each.
For example your dev SG might have one CIDR and set of ports and your prod SG might have a different CIDR and ports. Put them into a map (or separate maps) and pull the values out that way.
I’m waving my hands at the moment and don’t have an example, but that’s how i would approach it. will share soon…
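something along these lines, roughly (the rule names and values are made up, and it assumes an aws_security_group.this already exists):
locals {
  # one entry per port/CIDR combination you want to allow
  ingress_rules = {
    "https-office" = { port = 443, cidr_blocks = ["10.0.0.0/16"] }
    "ssh-vpn"      = { port = 22, cidr_blocks = ["10.1.0.0/16"] }
  }
}

resource "aws_security_group_rule" "ingress" {
  for_each          = local.ingress_rules
  type              = "ingress"
  from_port         = each.value.port
  to_port           = each.value.port
  protocol          = "tcp"
  cidr_blocks       = each.value.cidr_blocks
  security_group_id = aws_security_group.this.id
}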
This is what we’re doing https://github.com/cloudposse/terraform-aws-security-group
Terraform module to provision AWS Security Group. Contribute to cloudposse/terraform-aws-security-group development by creating an account on GitHub.
Great! Thanks a lot!
v0.14.9 0.14.9 (March 24, 2021) BUG FIXES: backend/remote: Fix error when migrating existing state to a new workspace on Terraform Cloud/Enterprise. (#28093)
When migrating state to a new workspace, the version check would error due to a 404 error on fetching the workspace record. This would result in failed state migration. Instead we should look speci…
2021-03-25
which one is the best practice (tf version 0.12 / 0.13) and why
name = format("%s-something", var.my_var)
name = "${var.my_var}-something"
@Alex Jurkiewicz any example?
"${[var.my](http://var.my)_var}-something"
- Keep it simple !
it depends; use either of them while keeping consistent throughout your whole modules and templates
I'm the outlier. I prefer format() because the format template can be a variable itself.
Writing open source modules means everyone has an opinion on the format. This is why null-label is so flexible.
String interpolation forces one format.
readability
This seems similar to an echo vs printf argument? Great stuff, I hadn't even thought of the debate
Some of you might be interested in terraform-aws-multi-stack-backends
this is the first release: https://registry.terraform.io/modules/schollii/multi-stack-backends/aws. There are diagrams in the examples/simple folder. Any feedback welcome!
tl;dr?
@Alex Jurkiewicz @Erik Osterman (Cloud Posse) It makes it easy to correlate terraform states that relate to the same “stack”.
Eg if you have a stack that consists of a state for VPC/network, another for EKS, a third state for resources specific to a sandbox deployment of microservices in that cluster (eg RDS instances used by the micro-services), and a fourth state for AWS resources specific to a staging deployment in that cluster, then you will see all 4 backends in that module's tfvars file.
The module creates a bucket and stores all states mentioned in the tfvars file there. You can of course have multiple buckets if you want (say one per stack).
So the other thing this module does is enable you to never again have to worry about creating backend.tf files; it creates them for you.
Does anyone know a Kinesis Firehose Terraform Module that sends Data Streams to Redshift?
Hey guys, would appreciate your collective minds for some feedback: Our IaC security tool (Cloudrail) now has a new capability called “Mandate On New Resources Only”. If this is set on a rule, Cloudrail will only flag resources that are set to be created under the TF plan.
This brought up an interesting philosophical question: If a developer is adding new TF code that uses an existing module, is it really a new resource? Technically, it is. Actually, in many cases it's several resources generated by the module. But in reality, it's just the same module again, with different parameters.
Some of our devs said “well, technically, yes, but it’s the same module, so from an enforcement perspective, it’s not a new resource, it’s just like other uses of the same module”.
I’m adding examples in a thread on this message. Appreciate your guys’ and gals’ thoughts on this matter as we think through it.
So, for example, adding a new resource like so is clearly a new resource:
resource "aws_vpc" "myvpc" { ... }
But, adding some new code that looks like this:
module "policies-id-rules-delete" {
source = "../../modules/api-gw/method"
...
}
Is technically a new resource, but using a module that’s already used elsewhere before.
More context - the first bullet here: https://github.com/indeni/cloudrail-demo/blob/v1.0/CHANGELOG.md
The exception is a little too complex imo. I’m happy for the developer to have to fix the module and then use a newer version for their new addition
What if the developer is not an infrastructure dev, just a regular software dev who copy-pasted some code? He doesn’t know how to fix a module.
They complain to infra dev, who can exclude the resource from enforcement if they want to grandfather in one last usage of the old version
Makes sense.
heh. and you have for_each on a module, and the dev just adds a new item to the list of keys…
Oh yeah, what do you do with that?
New resource @Alex Jurkiewicz?
no new tf code at all, but you have new resources! potentially with different inputs that violate policies
Let’s say we can identify if the new resources have the same violations or new violations.
What Loren said. Adding more logic will feel like magic, and make the system less understandable
i think i’m in agreement with @Alex Jurkiewicz, basically… this is static analysis, same as code style enforcement. CI says you’re wrong, you don’t fight it, you go figure out how to fix it
i have a module that creates an ECS task that is used with a for_each. Is there a way to use the same execution role across each invocation of the module? (Only way i can think of is creating the role outside the module and passing the arn in as a var
It depends on how creative you want to get. You can't use a data source for this, because it will fail if it can't find the role. So, you could use some bash scripting, AWS CLI calls, etc.
But that’s quite a mess.
If you look through this ticket, you’ll see some terrible examples of how to achieve this: https://github.com/hashicorp/terraform/issues/16380
- Inject the execution role from outside
- Split your for_each into the first element and the rest. Create the first separately, and then add it as a dependency for the rest, which use your for_each loop.
Approach 1 sounds waaaay better
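a rough sketch of approach 1 (the module path, var.services, and the execution_role_arn input name are all hypothetical):
# shared execution role, created once outside the module
resource "aws_iam_role" "task_execution" {
  name = "ecs-task-execution-shared" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

module "task" {
  source   = "./modules/ecs-task" # hypothetical module path
  for_each = var.services         # hypothetical map of service definitions

  name               = each.key
  execution_role_arn = aws_iam_role.task_execution.arn # same role for every invocation
}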
2021-03-26
I have a list of maps (nested) - can these be collapsed down?
+ badgers = [
+ {
+ "dev" = {
+ "us-east-1" = {
+ "profile-service" = {}
}
}
},
+ {
+ "dev" = {
+ "us-west-2" = {
+ "profile-service" = {}
}
}
},
+ {
+ "qa" = {
+ "us-east-1" = {
+ "profile-service" = {}
}
}
},
+ {
+ "qa" = {
+ "us-west-2" = {
+ "profile-service" = {}
}
}
},
+ {
+ "prod" = {
+ "us-east-1" = {
+ "profile-service" = {}
}
}
},
+ {
+ "prod" = {
+ "us-west-2" = {
+ "profile-service" = {}
}
}
},
+ {
+ "dev" = {
+ "us-east-1" = {
+ "account-service" = {}
}
}
},
+ {
+ "dev" = {
+ "us-west-2" = {
+ "account-service" = {}
}
}
},
+ {
+ "qa" = {
+ "us-east-1" = {
+ "account-service" = {}
}
}
},
+ {
+ "qa" = {
+ "us-west-2" = {
+ "account-service" = {}
}
}
},
+ {
+ "dev" = {
+ "us-east-1" = {
+ "compliance-service" = {}
}
}
},
+ {
+ "dev" = {
+ "eu-west-1" = {
+ "compliance-service" = {}
}
}
},
+ {
+ "qa" = {
+ "us-east-1" = {
+ "compliance-service" = {}
}
}
},
+ {
+ "qa" = {
+ "eu-west-1" = {
+ "compliance-service" = {}
}
}
},
]
can that be collapsed down into
e.g.
tomap({
"dev" = tomap({
"us-east-1" = tomap({
"account-service" = {}
"compliance-service" = {}
})
})
"qa" = tomap({
"us-east-1" = tomap({
"account-service" = {}
"compliance-service" = {}
})
})
})
tl;dr how to merge a list of maps
I wonder if I need a deep merge…
Hmm, get Error: json: cannot unmarshal array into Go value of type map[string]interface {}
using https://github.com/cloudposse/terraform-provider-utils
The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils
You need to call jsonencode
on the maps
we operate on strings due to terraform’s handling of objects
Aye, tried that, still the same. Resorted to using a JSON file as per the example, still no dice
[
{
"dev": {
"us-east-1": {
"profile-service": {
"badgers": "foo"
}
}
}
},
{
"dev": {
"us-west-2": {
"profile-service": {
"badgers": "foo"
}
}
}
},
{
"qa": {
"us-east-1": {
"profile-service": {
"badgers": "foo"
}
}
}
},
{
"qa": {
"us-west-2": {
"profile-service": {
"badgers": "foo"
}
}
}
},
{
"prod": {
"us-east-1": {
"profile-service": {
"badgers": "foo"
}
}
}
},
{
"prod": {
"us-west-2": {
"profile-service": {
"badgers": "foo"
}
}
}
},
{
"dev": {
"us-east-1": {
"account-service": {
"badgers": "foo"
}
}
}
},
{
"dev": {
"us-west-2": {
"account-service": {
"badgers": "foo"
}
}
}
},
{
"qa": {
"us-east-1": {
"account-service": {
"badgers": "foo"
}
}
}
},
{
"qa": {
"us-west-2": {
"account-service": {
"badgers": "foo"
}
}
}
},
{
"dev": {
"us-east-1": {
"compliance-service": {
"badgers": "foo"
}
}
}
},
{
"dev": {
"eu-west-1": {
"compliance-service": {
"badgers": "foo"
}
}
}
},
{
"qa": {
"us-east-1": {
"compliance-service": {
"badgers": "foo"
}
}
}
},
{
"qa": {
"eu-west-1": {
"compliance-service": {
"badgers": "foo"
}
}
}
}
]
Still get Error: json: cannot unmarshal array into Go value of type map[string]interface {}
can you share the HCL?
@matt @Andriy Knysh (Cloud Posse)
Yup, gimme a mo, let me create a gist
merge(local.badgers...)
isn’t deep merging, so I end up with
merged_badgers = tomap({
"dev" = tomap({
"eu-west-1" = tomap({
"compliance-service" = {}
})
})
"prod" = tomap({
"us-west-2" = tomap({
"profile-service" = {}
})
})
"qa" = tomap({
"eu-west-1" = tomap({
"compliance-service" = {}
})
})
})
Ah, no, the "If more than one given map or object defines the same key or attribute, then the one that is later in the argument sequence takes precedence" behavior is happening
this provider https://github.com/cloudposse/terraform-provider-utils can deep-merge list of maps
The `deep_merge_yaml` data source accepts a list of YAML strings as input and deep merges into a single YAML string as output
Thanks @Andriy Knysh (Cloud Posse) yeah, killer feature, nice work guys!
it accepts a list of YAML strings (not terraform objects/maps) b/c of TF provider limitations
same with the outputs - it’s a string of merged contents
I’m trying to use deep_merge_json @Andriy Knysh (Cloud Posse) see https://gist.github.com/joshmyers/7e96e291a920fac77f9a7314bc3397ba
Yeah, am using that data source in the above gist, basic example, only difference is the JSON
Getting Error: json: cannot unmarshal array into Go value of type map[string]interface {}
you provide a list of JSON-encoded strings (possibly read from files), and then can convert the result from string to JSON
locals {
json_data_1 = file("${path.module}/json1.json")
json_data_2 = file("${path.module}/json2.json")
}
data "utils_deep_merge_json" "example" {
input = [
local.json_data_1,
local.json_data_2
]
}
output "deep_merge_output" {
value = jsondecode(data.utils_deep_merge_json.example.output)
}
Yes, am trying to read a JSON file, other than that, same as the example for JSON
ok, so this file(“${path.module}/badgers.json”) is already an array
and here you put it into another array
input = [
local.json_data_2
]
your json should be a map, not an array
OK, if I use input = local.json_data_2
❯ terraform plan
Error: Incorrect attribute value type
on main.tf line 96, in data "utils_deep_merge_json" "example":
96: input = local.json_data_2
|----------------
| local.json_data_2 is "[\n {\n \"dev\": {\n \"us-east-1\": {\n \"profile-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"dev\": {\n \"us-west-2\": {\n \"profile-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"qa\": {\n \"us-east-1\": {\n \"profile-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"qa\": {\n \"us-west-2\": {\n \"profile-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"prod\": {\n \"us-east-1\": {\n \"profile-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"prod\": {\n \"us-west-2\": {\n \"profile-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"dev\": {\n \"us-east-1\": {\n \"account-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"dev\": {\n \"us-west-2\": {\n \"account-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"qa\": {\n \"us-east-1\": {\n \"account-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"qa\": {\n \"us-west-2\": {\n \"account-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"dev\": {\n \"us-east-1\": {\n \"compliance-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"dev\": {\n \"eu-west-1\": {\n \"compliance-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"qa\": {\n \"us-east-1\": {\n \"compliance-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n },\n {\n \"qa\": {\n \"eu-west-1\": {\n \"compliance-service\": {\n \"badgers\": \"foo\"\n }\n }\n }\n }\n]\n"
Inappropriate value for attribute "input": list of string required.
maybe try this
{ { "dev": { "us-east-1": { "profile-service": { "badgers": "foo" } } } },
note that the provider was created for specific purposes, it's not a universal thing
input = [
local.json_data_2
]
input
should be an array of strings
each string should be json-encoded map
the provider deep-merges maps, not arrays
OK, will have a play around with that
try this
{
{
"dev": {
note the top-level { to make it a map
it will def work if you put all of these parts
{
"qa": {
"eu-west-1": {
"compliance-service": {
"badgers": "foo"
}
}
}
}
into separate files
OK, I’ll see if I can break it down and pass each in
but you can try
{
{
"dev": {
and see what happens (it will deep-merge that map, I’m just not sure what the result will be)
Nope Error: invalid character '{' looking for beginning of object key string
- will try breaking it down and passing in each
try
I don’t control the JSON either, it comes back from Terraform as a list of maps, so will try passing in each
data = {
{
"dev": {
yes, in TF you can loop thru the list of maps, convert each one to string, and add it to an array to provide as input to the provider
that will work
(maybe we can improve the provider to accept one input with a JSON/YAML-encoded string of a list of maps, instead of a list of encoded strings)
Aye, realise this may not be your use case
in TF you can loop thru the list of maps, jsonencode each one to string, and add it to an array to provide as input to the provider
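concretely, something like this should work (assuming local.badgers is the list of maps shown earlier):
locals {
  # jsonencode each element so the provider gets a list of JSON strings
  badgers_json = [for m in local.badgers : jsonencode(m)]
}

data "utils_deep_merge_json" "merged" {
  input = local.badgers_json
}

output "merged_badgers" {
  value = jsondecode(data.utils_deep_merge_json.merged.output)
}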
So
data "utils_deep_merge_json" "example" {
input = [
"{\"qa\":{\"us-east-1\":{\"profile-service\":{}}}}",
"{\"qa\":{\"us-east-1\":{\"account-service\":{}}}}"
]
}
Should that work?
it’s cryptic… but it should in principle
It is a jsonencoded map
Error: json: unsupported type: map[interface {}]interface {}
can you run the example first to confirm it’s working for you?
then add your files to the example to confirm your files are ok
Yeah, good point
So, I've cloned https://github.com/cloudposse/terraform-provider-utils
cd examples/data-sources/utils_deep_merge_json/
terraform init
terraform plan
Error: json: unsupported type: map[interface {}]interface {}
Terraform 0.14.6 …
So example isn’t working for me either…
If I drop down to 0.2.1 it works; 0.3.0 / 0.3.1 aren't working, not sure what it could be locally?
Yup, my use case works with 0.2.1 too
Changes to Outputs:
+ deep_merge_output = {
+ qa = {
+ us-east-1 = {
+ account-service = {}
+ profile-service = {}
}
}
}
deep_merge_output = {
"dev" = {
"eu-west-1" = {
"compliance-service" = {}
}
"us-east-1" = {
"account-service" = {}
"compliance-service" = {}
"profile-service" = {}
}
"us-west-2" = {
"account-service" = {}
"profile-service" = {}
}
}
"prod" = {
"us-east-1" = {
"profile-service" = {}
}
"us-west-2" = {
"profile-service" = {}
}
}
"qa" = {
"eu-west-1" = {
"compliance-service" = {}
}
"us-east-1" = {
"account-service" = {}
"compliance-service" = {}
"profile-service" = {}
}
"us-west-2" = {
"account-service" = {}
"profile-service" = {}
}
}
}
Working with 0.2.1
- thanks guys!
0.2.1 is the version of what?
The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils
terraform confuses me a bit again
I query the instance ID with a data source like this:
data "aws_instance" "instancetoapplyto" {
filter {
name = "tag:Name"
values = ["${var.instancename}"]
}
}
and then I use it in a cloudwatch alarm:
InstanceId = data.aws_instance.instancetoapplyto.id
this works but I get a warning like this:
Warning: Interpolation-only expressions are deprecated
on ../../../modules/cloudwatch/alarms.tf line 8, in data "aws_instance" "instancetoapplyto":
8: values = ["${var.instancename}"]
I know about the Interpolation-only expression but here it confuses me
in TF versions after 0.11, you don’t need to use string interpolation
try
values = [var.instancename]
that works indeed ! Thanks Andriy !
Would appreciate some :thumbsup:s on this GH provider issue: https://github.com/integrations/terraform-provider-github/issues/612
We would like to manage access for repository security alerts. This feature is documented here https://docs.github.com/en/free-pro-team@latest/github/administering-a-repository/managing-security-an…
Being able to enable security alerting is fairly useless if the majority of the team can’t see it and I need to manually click into a client’s 40 or so repos to enable them to be able to see it.
I've got a map of IAM group names and the policies that should be attached to each.
groups = {
Diagnose: [
"arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
"arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
...
], ...
I want the policies to be attached to groups with aws_iam_group_policy_attachment
But because of the group format it requires double iteration to enumerate all policies attached to a group.
locals {
groups_helper = chunklist(flatten([for key in keys(local.groups): setproduct([key], local.groups[key])]), 2)
}
resource "aws_iam_group_policy_attachment" "this" {
for_each = {
for group in local.groups_helper : "${group[0]}.${group[1]}" => group
}
group = each.value[0]
policy_arn = each.value[1]
}
I did it with the :point_up: code, but I think it should be much simpler than my hacky groups_helper.
this is how i would do it:
locals {
groups_policies = flatten([for group, policies in local.groups : [
for policy in policies : {
name = group
policy = policy
}
]])
}
resource "aws_iam_group_policy_attachment" "this" {
for_each = { for group in local.groups_policies : "${group.name}.${group.policy}" => group }
group = each.value.name
policy_arn = each.value.policy
}
fairly similar in the end, but i feel like the data model is more explicit
Yeah. It’s also a viable option, maybe a little bit better because you have a list of dictionaries instead of list of tuples/list so it’s more explicit.
Such a map should be so easy to iterate over
I’d imagine there could be some sugar syntax that I’m not aware of. Something like this
resource "aws_iam_group_policy_attachment" "this" {
for_each = local.groups
group = each.value[0]
policy_arn = [for value in each.value[1]]
}
fwiw, if exclusive managment of policy attachments is something you’re looking for, there is this feature request… it was recently implemented for roles and i think makes things much easier… https://github.com/hashicorp/terraform-provider-aws/issues/17511
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
i am trying to get my head around how much of a cluster <insert swear word here> the upgrade from 0.13.4 to 0.14.x is
mainly from a CI perspective (we use Atlantis) and a pre-commit / dev perspective
0.13.x to 0.14.x should give you no <expletive> issues
I’ve had no issues
what about the lock files?
Ok ha was just about to respond to that
That’s the question, to check them in to source control or not
how can you do it without?
A bunch of repos I do, as I want them to be locked to that version. However some I don’t
The pipeline will generate it each time on init
so if you check it in to source control, the only way you can bump a provider version is by doing an init locally and then recommitting the lock file
yeh thats a little rough
probably needs a pre-commit hook
i just gitignore the lock file, and have a pre-terraform step for CI that initializes the terraform plugin directory with versions that i manage using terraform-bundle
yup it depends on your ci and workflow. in general we don’t allow people to run terraform locally so it’s a little easier
yup i do what loren said on many repos
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
interesting never seen this
any good documentation on using this?
@loren maybe?
i am not good documentation
the readme is actually pretty great
i get you add a file and bundle it but how does that help?
the file pins the versions you want. you create the bundle out-of-band as a zip archive, host it somewhere your CI can reach it. then curl it, unzip it, and copy the plugin directory to the value of the env TF_PLUGIN_CACHE_DIR
now you have all your plugins locally, in a place terraform will look for them, cuz that’s how TF_PLUGIN_CACHE_DIR works
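For reference, a terraform-bundle config is just a small HCL file that pins the core and provider versions. A minimal sketch (file name and version numbers are placeholders; check the terraform-bundle README for the exact syntax for your Terraform version):
# terraform-bundle.hcl (hypothetical name)
terraform {
  # exact Terraform core version to include in the bundle
  version = "0.13.4"
}
providers {
  # providers to download and include in the zip
  aws = {
    versions = ["~> 3.28"]
  }
}
Running something like terraform-bundle package terraform-bundle.hcl out-of-band then produces the zip archive you host for CI to download.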
do you do this with all your tf root repos?
that’s the only place i do it, yes
so you check in the bundle?
no
the bundle file, yes, not the zip
so atlantis creates the bundle before being executed?
atlantis downloads the bundle and extracts it
or whatever your CI is, i use codebuild
what created the bundle and where do you host it?
that is done out-of-band, when we’re ready to update the tf and provider versions
i am trying to work out the path here
shall we take to DM or do it here?
¯\_(ツ)_/¯
it’s your thread
anyone happen to have a tutorial on https://github.com/cloudposse/terraform-aws-tfstate-backend ? terraform newbie here, could use some hints and best practices for implementing it here: https://github.com/0-vortex/cloudflare-terraform-infra
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
Terraform infrastructure templates for our CloudFlare instances - 0-vortex/cloudflare-terraform-infra
2021-03-27
@Erik Osterman (Cloud Posse) do you all still like the yaml_config approach? I’m building out some 2 environment pipelines and started with this. It’s elegant, but verbose.
Would you recommend using a tfvars as an argument for an environment instead of the yaml config in certain cases? I like the yaml config, but just trying to think through what I can do to avoid adding abstractions on abstractions when possible, and make it easier to collaborate.
The yaml config approach is central to what we do
Our stack config is incredibly DRY - more DRY than anything feasible in terraform
We support imports, and deep merging
If something is too verbose it’s probably your schema that is wrong :-)
However you will notice that few of our modules themselves use the yaml - they all use native HCL variables
But our root modules are what use YAML
I’d be happy to walk you through the approach
@Erik Osterman (Cloud Posse)
Is atmos the preferred way now to start terraforming infrastructure?
I have a new client that is starting from scratch (AWS, K8S, helm, istio), so it seems like atmos would be a perfect match for them
Yep, this is what we are using everywhere now. It makes it easy to separate config from business logic. Happy to give you a walk through.
Would love to see this stuff or participate in a call or the weekly session to cover a deeper dive.
I’m setting stuff up brand new right now and looking to make this as flexible as I can for duplicating environments based on some input, but struggling a little with not relying on the remote terraform backend for state and so on. I like the yaml config approach in concept, but the other fixtures.eu-west-1.tfvars feels easier to understand for a cli driven approach.
definitely wanting to be able to deal with backend + plans not being so brittle so I see some of the initial value in the yaml merging, just haven’t probably figured out the full potential yet
I like the yaml config approach in concept, but the other fixtures.eu-west-1.tfvars feels easier to understand for a cli driven approach.
It’s only easier since it’s familiar
I’m getting there. Not sure why I’m nesting vars under components, but I’m getting close.
[Separate Question] I also have to initialize my backend in a setup step ahead of time instead of being more dynamic with the tfstate-backend module, since it generates the backend tf. Is there any “pipeline” type setup you’d recommend to take advantage of your tf-state locking module, but not require the manual setup steps, backend.tf and other steps to be completed manually first?
I’m used to backend as terraform cloud which made it basically stupid simple with no setup requirement. Would like to approach something like this a bit more, but with S3.
No rush, just trying to think through a “non-terraform-cloud” oriented remote backend setup with the same ease of setup.
I usually include a “bootstrap” root module in all client project which invokes tf-state-backend either once all together OR once for each root module in the project. Then you only need to do it once and it templates out the backend.tf for each root module and you don’t worry about it going forward. I also use this to templatize my versions.tf files so they’re all consistent across all root modules. Can provide a dummy example if that is useful.
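A rough sketch of what such a bootstrap invocation of cloudposse/terraform-aws-tfstate-backend can look like (names are placeholders, and the input names below follow the module’s README conventions, so double-check them against the version you pin):
module "tfstate_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # version = "x.y.z"  # pin to a real release

  namespace  = "eg"    # placeholder org prefix
  stage      = "dev"   # placeholder environment
  name       = "terraform"
  attributes = ["state"]

  # have the module template out backend.tf for this root module
  terraform_backend_config_file_path = "."
  terraform_backend_config_file_name = "backend.tf"
  force_destroy                      = false
}
Run it once with local state, then follow the module’s README steps to terraform init again and migrate state into the newly created S3 bucket.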
I’d love to see your dummy example
Me too. I need to start making this a bit more flexible to repeat in multiple environments. I want to figure out how to do this as the backend with terraform cloud was fire and forget. With azure pipelines needing to setup the backend in each region, I need as stupid simple as possible, even if it’s a backend pipeline that runs all the state file bucket setup stuff (assuming I setup one bucket per plan).
This needs to be easy to work with and allow me to tear down and rebuild plans without destroying state buckets ideally.
@Erik Osterman (Cloud Posse) so I want to avoid having to create the backend.tf file manually using the process described in the tf-backend-state module.
https://github.com/cloudposse/terraform-yaml-stack-config/tree/main/examples/remote-state
Is this an example I could use if I’m already using yaml config to define my backend and initialize all these at once for the region? I’m a bit unclear. I’m OK with creating backend.tf if I have to, but was hoping to use the yaml config to generate all my backend buckets more dynamically and also prevent the stacks from tearing down the backend buckets when running terraform destroy
Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …
I think I’m good now. I was trying to set up a separate bucket and Dynamo table per module or plan. I found some prior conversation about this that talked about just creating a single state bucket and using prefixes. That’s a lot easier to handle and should eliminate the backend config worries. I’m assuming the locking is per prefix not global for all state files in a bucket right?
Ok, could use help on one last thing. I see patterns for tfstate used in places in the cloudposse modules for locals, but am not sure how to use them. I want to make my pipeline use a unique state file in the bucket. Am I supposed to be able to make the backend file name in the bucket a variable or environment variable to provide it?
Bump…. I’m using yaml config. I’m using the stack sorta approach with an input variable of
module "yaml_config" {
source = "cloudposse/config/yaml"
map_config_local_base_path = "../config"
map_config_paths = [
"default.config.yml"
,var.config_import
]
# context = module.label.context
}
Now I have one last piece I don’t have confidence in… the state management. I’m using tf-state-backend to deploy a state bucket per account. Now how do I make this variable based on the yaml stack I’m deploying?
terraform {
  required_version = ">= 0.12.2"
  backend "s3" {
    region         = "eu-central-1"
    bucket         = "qa-tfstate-backend"
    key            = "qa/terraform.tfstate"
    dynamodb_table = "qa-tfstate-backend-lock"
    profile        = ""
    role_arn       = ""
    encrypt        = "true"
  }
}
I don’t think those can be variables? Do I remove the key and provide a partial backend config variable input on the cli to be able to do this? Any guidance would be appreciated as this is the last thing I think I need to do some test deploys
bump see thread comment, any help appreciated
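For what it’s worth, one standard workaround here is Terraform’s partial backend configuration: leave the values you want to vary out of the backend block and pass them at init time (bucket/key names below are placeholders):
terraform {
  backend "s3" {}
}
then, per environment, run something like:
terraform init \
  -backend-config="bucket=qa-tfstate-backend" \
  -backend-config="key=qa/terraform.tfstate" \
  -backend-config="region=eu-central-1" \
  -backend-config="dynamodb_table=qa-tfstate-backend-lock"
The backend block itself still can’t reference variables; the dynamic parts have to come from -backend-config arguments or files.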
Is there a way to iterate over a list and get an index as well as the value? At the moment I am doing this in a string template which is sort of ugly:
%{for index in range(length(local.mylist))}
[item${index}]
name=${local.mylist[index].name}
%{endfor}
Not sure about in a string template, but in a regular for expression, it’s just for index, value in <list> : ...
wow, i was hoping that would work but i couldn’t find it in the docs
can you? or is it a little secret
I’ll often convert the list to a map with the index as the keys depending on what I’m trying to do
wow, i was hoping that would work but i couldn’t find it in the docs
i believe i first saw it in some examples on the discourse site. for syntax stuff like this, i go back to the hcl spec… it has several examples of how it works, though they’re a bit subtle and you still need to know what you’re wanting to find…
https://github.com/hashicorp/hcl/blob/main/hclsyntax/spec.md#for-expressions
HCL is the HashiCorp configuration language. Contribute to hashicorp/hcl development by creating an account on GitHub.
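To illustrate the two-variable form with a made-up list (hypothetical locals, just a sketch):
locals {
  mylist = [
    { name = "alpha" },
    { name = "beta" },
  ]

  # index/value pairs work in for expressions...
  ini_body = join("\n", [
    for index, item in local.mylist : "[item${index}]\nname=${item.name}"
  ])

  # ...and you can build a map keyed by index if that is easier to work with
  by_index = { for index, item in local.mylist : index => item }
}
The same index, value form also works inside %{ for } template directives.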
2021-03-28
2021-03-29
Good morning, has anybody got an idea if it is possible to override a value in a map of objects that’s set in tfvars? Docs suggest that the cmd line takes priority over tfvars and that you can override items via cmd line, but I’m struggling to get the nesting right. terraform plan -var=var.site_configs.api.lambdas_definitions.get-data={"lambda_zip_path":"/source/dahlia_v1.0.0_29.zip"}
structure is:
variable "site_configs" {
type = map(object({
lambdas_definitions = map(object({
lambda_zip_path = string
}))
}))
}
the above is a large data structure, but I’ve tried to simplify it for the purpose of this question.
you can’t perform a deep merge of data like that. You can only override the entire variable value
Thanks Alex, I feared you’d say that.
if you want to perform a deep merge of input variable data, I suggest you convert the input data to a json/yaml file, pre-process it using your own logic, and then load it with jsondecode(file(var.lambda_definition_file))
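A sketch of that pattern (the file name and variable are hypothetical):
variable "site_config_file" {
  type    = string
  default = "site_configs.json" # produced by your own pre-processing/merge step
}

locals {
  # load the already-merged definition instead of trying to override nested keys via -var
  site_configs = jsondecode(file(var.site_config_file))
}
yamldecode(file(...)) works the same way if you prefer YAML.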
Yep, understand. Guess I’ve been lucky so far that I’ve not needed to change anything in this manner before. I had your suggestion in the back of my mind originally but I was hopeful I could avoid it. Thanks for the help.
watch out for this nasty bug in RabbitMQ ( enable logging ) https://github.com/hashicorp/terraform-provider-aws/issues/18067
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
I’m looking at building out a new account structure (currently all dumped into one wild-west style account) for my company using https://github.com/cloudposse/reference-architectures, but I don’t think we’ll wind up using EKS or kubernetes in any fashion. For now our needs are pretty simple and fargate should suffice. Will I regret using the reference architecture to build out the Organizations account structure even if I don’t need all of the components that it’s made to create?
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
never mind, just reading more and finding that this has been deprecated in favor of atmos, so I’ll go play with that!
Good morning (in Belgium/Europe) to all, I am a bit confused by this module:
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
you can assign a logging target and it’s described as such:
object({
bucket_name = string
prefix = string
})
but how do I define this in the terraform config?
logging = {
  bucket_name = "foobar-bucket"
  prefix      = "foo"
}
this is a map
thx a lot !
2021-03-30
Hate bugging folks again, but I’m so close. I just need a pointer on the backend remote state with the new yaml config stuff. @Erik Osterman (Cloud Posse) anyone can point me towards a good example?
I’m unclear if I have to use cli options or if module “backend” actually works to define the remote state dynamically for me as part of the yaml config stack setup
I found this. I thought backend configs must be hard coded and can’t be variables, so can someone point me towards a post or example where the S3 remote backend is being used with the yaml config for stacks, pretty please. I want to deploy my stacks to 3 accounts, and each has custom yaml overrides from default.
Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …
Did you see the atmos command to generate the backend config?
Basically there’s no need to use variables
At least, we haven’t needed to - so I am thinking there’s a simpler way without variables. Configs are static by design.
(E.g. if we support variables we are reinventing terraform, the goal here is to define what is static and have terraform compute that which is dynamic)
So let’s say you start with a backend like this:
This might be defined in globals.yaml
terraform:
  vars: {}
  backend_type: s3 # s3, remote, vault, etc.
  backend:
    s3:
      encrypt: true
      bucket: "eg-ue2-root-tfstate"
      key: "terraform.tfstate"
      dynamodb_table: "eg-ue2-root-tfstate-lock"
      role_arn: "arn:aws:iam::123456789:role/eg-gbl-root-terraform"
      acl: "bucket-owner-full-control"
      region: "us-east-2"
    remote:
    vault:
Then you can have some-other-account.yaml with:
import:
  - globals
terraform:
  backend:
    s3:
      bucket: "eg-uw2-root-tfstate"
      region: "us-west-2"
see what I did there? You define what your standard backend configuration looks like
then you overwrite the globals with what’s specific or unique.
in this case, I’m now pointing to a bucket in us-west-2
to generate the backend configurations, I would run atmos terraform backend generate
that will drop a .tf.json file that you should commit to source control (if practicing gitops)
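To illustrate (not an exact output, just a sketch of the deep-merge result for the override above), the generated component backend file would look roughly like:
{
  "terraform": {
    "backend": {
      "s3": {
        "encrypt": true,
        "bucket": "eg-uw2-root-tfstate",
        "key": "terraform.tfstate",
        "dynamodb_table": "eg-ue2-root-tfstate-lock",
        "role_arn": "arn:aws:iam::123456789:role/eg-gbl-root-terraform",
        "acl": "bucket-owner-full-control",
        "region": "us-west-2"
      }
    }
  }
}
i.e. Terraform’s JSON syntax for the usual backend "s3" block, with the global values overridden by the per-account ones.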
Looking forward to evaluating. Ok… one quick thing. I’m trying to use mage right now. While I want to examine atmos, the CI solution I have in place would need to be gutted. Can you link me to the code for atmos so I can look at what it’s doing? Or do I stick with atmos just for backend configuration and that’s it?
Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos
Note, atmos compiles down to a binary so you can just call it from mage
it’s just a command like any other
so we use atmos for everything, way more than just backend generation
ultimately, all we do is serialize the backend config as json, after doing all the deep merging
you can implement the pattern however you want.
I know it’s most likely epic if you are involved. I think you all write some Go as well, so any 1 sentence type answer on why choose atmos over using mage and writing native go commands?
Would like to avoid nesting more tools than necessary so feedback would be welcome
trying to plug these into azure pipelines so sticking with mage might make sense for working with a bunch of Go devs for example, but open to input!
trying to stretch myself by leveraging more native tooling than my current build frameworks, so will examine your backend logic for initialization. That’s the only piece I’m sorta stuck on. thanks for helping me out today!
Mage has no dependencies (aside from go) and runs just fine on all major operating systems, whereas make generally uses bash which is not well supported on Windows. Go is superior to bash for any non-trivial task involving branching, looping, anything that’s not just straight line execution of commands. And if your project is written in Go, why introduce another language as idiosyncratic as bash? Why not use the language your contributors are already comfortable with?
I think it’s a strong argument. One of the challenges with #variant has been debugging it.
yes, that’s what I’m working on using more.
using a DSL in go makes sense.
The thing is I’m doing non-Go work too, esp with terraform. If I leverage mage to run it, it’s more verbose, but it would be easier to plug into other teams’ projects if they’re Go developers, I think
so a library of mage functions for run/init. I just have to figure out how you are using the backend config stack so I can still use your yaml config, but ensure the backend is filled.
Are you generating backend.tf files for every stack as part of cli or is this done in some other dynamic way?
The CLI generates the backend for each component.
Btw, our idea with the stack config is that it’s tool agnostic. We use it with Spacelift, Terraform Cloud, and then wrote our own on the command line called atmos
If you subscribe to the idea, then using mage would just be another tool that operates on the schema.
No one tool implements the full power of the schema. The stack config schema is just a way to express cross-tool configuration.
We wrote a terraform provider to read the configuration https://github.com/cloudposse/terraform-provider-utils
The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management) - cloudposse/terraform-provider-utils
Ok. So I’m back to the backend config has to be generated for every folder, but its core operation could be using the backend s3 config with a different key prefix for each one, right? You are just iterating through all the stacks to generate backend.tf files for each of these, but I could do that manually for the low volume I have right now?
like shown at top of thread.
ya, could just do it manually
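A hand-written per-component backend.tf, reusing the bucket and lock table from the earlier globals example and varying only the key per component, would just be (paths and names are placeholders):
# components/vpc/backend.tf
terraform {
  backend "s3" {
    region         = "us-east-2"
    bucket         = "eg-ue2-root-tfstate"
    key            = "vpc/terraform.tfstate" # unique state key per component
    dynamodb_table = "eg-ue2-root-tfstate-lock"
    encrypt        = true
  }
}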
Ok that helps, not as fancy, but it helps. I want to use it, but trying to force myself to stick with mage for right now for the sake of others adoption later. Might come back around to experiment with atmos more too Maybe I’ll try it this week if I get a chance.
thanks again!
So basically what I’m coming to is the backend.tf file needs to be a cli variable for me to change it dynamically, with a partial backend config.
Basically atmos would handle creating those configs for me; without its benefit, I have to do a partial backend config and change the terraform state file name based on staging.config.yml or qa.config.yml, as otherwise it won’t know where to look, and the remote datasource prohibits variables for the file name.
I think that’s what I was trying to confirm, but all the abstractions, as elegant as they are, are hard to parse through when I didn’t build the yaml stack stuff.
whelp. I’m seeing that i think the key for the file CAN be a variable. The bucket can’t. Trying that now.
Variables may not be used here.
oh well. Looks like back to the cli backend partial config. I guess you can’t use variables for key, though a terraform issue points towards possible improvements on that being imminent.
Terraform Version v0.9.0 Affected Resource(s) terraform backend config Terraform Configuration Files variable "azure_subscription_id" { type = "string" default = "74732435-…
Wow. That topic got me going with atmos
Part about backend should be included in documentation
I don’t get it. All the examples are using variables too lol. All I want is to provide this using yaml config and I’d be golden. I’m assuming though I can’t do this because I have to provide it for the very first step to run. Order of operations says the backend is accessed first, so I can’t leverage it.
@Matt Gowie
Hard to parse out your exact need @sheldonh — Feel like I’m missing something but can’t put my finger on it.
Regardless, I’d be happy to chat about this with you and help you get it sorted. Want to schedule some time to chat? It’s awesome that you’re pushing to adopt the approach early so would love to help if I can. I’m also going to start writing some docs that utilize and highlight the Atmos + Backend generation functionality in the coming week, so hopefully those will help you / others in the future surrounding this topic.
Awesome Matt. I will be creating documentation for bootstrapping atmos including backend generation as well. I will ask them if they would like to open source it. Keep me in the loop if you would like some help on it
Good evening all, I am trying to use the JSON input below to supply a value to a map variable I’ve defined within terraform, and it works fine on Windows
terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}
However when I try it on CentOS it fails with the error below. I assume it’s to do with the escaping within bash?
Error: Extra characters after expression
on <value for var.build_version_inputs> line 1:
(source code not available)
An expression was successfully parsed, but extra characters were found after
it.
I’ve tried a variety of escaping sequences but with no luck. Any suggestions, please?
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={"get-data":"1.0.1_1"},{"post-data":"1.0.1_1"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={\"get-data\":\"1.0.1_1\",\"post-data\":\"1.0.1_1\"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:"1.0.1_1",post-data:"1.0.1_1"}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={get-data:1.0.1_1,post-data:1.0.1_1}
/usr/local/bin/terraform_0.13.5 plan -var=build_version_inputs={"get-data":"1.0.1_1","post-data":"1.0.1_1"}
Last question: if specifying {"get-data":"1.0.1_1","post-data":"1.0.1_1"} in a json file, how do you reference the variable you are trying to supply data to?
Like this? I assume not, given the error I’ve just got
{
"variable": {
"build_version_inputs": {
"get-data": "1.0.1_1",
"post-data": "1.0.1_1"
}
}
}
/usr/local/bin/terraform_0.13.5 plan -var-file my.tf.json
The json file you have there is probably not in the format you want. As a variable file, that is creating a single variable called variable.
You could change the format to:
{
"build_version_inputs": {...}
}
to create a variable called build_version_inputs
if you want to supply that same data on the commandline, use
terraform_0.13.5 plan -var "build_version_inputs={\"get-data\":\"1.0.1_1\",\"post-data\":\"1.0.1_1\"} "
You need to quote the keys because they have - characters in them. I suggest using _ instead.
Also, if you want to load complex variable types like this via the commandline, make sure the variable’s type is accurate. If you use type=any, Terraform may get confused and try to load it as a string
Hi Alex, feels like you’re always having to come to my aid. Thank you for taking the time to do that.
In terms of my variable, I’ve already defined it as
variable "build_version_inputs" {
type = map(string)
description = "map of built assets by Jenkins"
}
the data structure I’m using, {"get-data":"1.0.1_1","post-data":"1.0.1_1"}, does work for my needs, as it’s simply accessed later via a lookup.
snippet from the resource…
for_each = var.lambdas_definitions
format("%s_v%s.zip", each.value.lambda_zip_path, var.build_version_inputs[each.key])
which gives me what I expect; here is a snippet from the Windows-based plan of
terraform_0.13.5 plan -var=build_version_inputs={get-data:\"1.0.1_1\",post-data:\"1.0.1_1\"}
~ s3_key = "my-v1.0.1.zip" -> "/source/my_v1.0.1_1.zip"
s3_bucket = "assets.mybucket.dev"
~ s3_key = "my-v1.0.1.zip" -> "/source/my_v1.0.1_1.zip"
source_code_hash = "p1slm77OpGkBvYGSyki/hItZ6lx0AVRastFep1bdoK8="
Looking at what you’ve kindly put above, I see you’ve managed to give me the correct escape sequence for the unix side. I knew it was an escaping issue, I just couldn’t find the culprit. Looks like the wrapping of the whole var string was one of my mistakes.
Thank you very much for the correct syntax. Given the issue with escaping etc, I guess it’ll be best to supply this via a file. The syntax you supplied
{
"build_version_inputs": {...}
}
will that just allow the values to be set, given the variable is created elsewhere? Could have sworn I’d tried this already but guess I screwed it up somewhere.
If you create a file myvars.tfvars.json with content
{
"build_version_inputs": {
"get-data": "foo",
"post-data": "bar"
}
}
And run terraform plan -var-file myvars.tfvars.json, it will do what you want. I think.
Thanks Alex, just giving that a go. brb
2021-03-31
Hey peeps,
How do I list resources that I can import into my statefile? In other words, I know that I can import already existing resources using terraform import <address> <ID>, but before importing, I would like to see what’s available: a list containing <ID> and probably other resources that were created outside of terraform
I have had some success with https://github.com/GoogleCloudPlatform/terraformer and https://github.com/dtan4/terraforming. Simple example using the latter https://medium.com/@yoga055/importing-existing-aws-resources-to-terraform-using-terraforming-3221b26e015.
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
Export existing AWS resources to Terraform style (tf, tfstate) - dtan4/terraforming
Yes !! you can easily import your AWS infrastructure to terraform using terraforming.
I’ve followed our past discussions on pulumi. I’m curious if anyone has actually had a great result using it if you are working with Go/python/node devs?
Since it’s an abstraction of the APIs just like terraform, for application stacks it sorta makes sense to me over learning HCL in depth for some. Was thinking about it providing value as a serverless-CLI-oriented alternative that could handle more of the resource creation that is tied specifically to the app.
I don’t find it as approachable as HCL, but in the prior roles I was working with folks that knew HCL, but not Go, now it’s the opposite. They know Go, but not HCL
v0.15.0-rc1 0.15.0-rc1 (Unreleased) ENHANCEMENTS: backend/azurerm: Dependency Update and Fixes (#28181) BUG FIXES: core: Fix crash when referencing resources with sensitive fields that may be unknown (#28180)…
Fixes #27809 Fixes #27723 Fixes #26702 Fixes #20831
When we map schema sensitivity to resource values, there may be unknowns when dealing with planned objects. Check for unknowns before iterating over block values. Fixes #28153
Hi, did anyone ever try to manually remove an EKS cluster from the state and after changing some stuff in the console, try to reimport the EKS again back into the state? I am running into a race condition when adding subnets to the cluster and was wondering if the destroy + create path could be avoided…
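For reference, the mechanics are just terraform state rm plus terraform import; a sketch with a hypothetical resource address and cluster name (your module’s address will differ, and any child resources removed from state would need to be re-imported as well):
terraform state rm "module.eks.aws_eks_cluster.this"
terraform import "module.eks.aws_eks_cluster.this" my-cluster-name
The aws_eks_cluster resource imports by cluster name, so the import itself avoids destroy/create; whether the plan afterwards is clean depends on how far the console changes have drifted from the config.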