#terraform (2021-04)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-04-30

Steve Wade avatar
Steve Wade

i am struggling to work out how to fix a circular dependency between an SQS queue and the policy it uses, because the policy needs the ARN of the SQS queue itself

loren avatar
loren

can you construct the arn, or is there randomness in the arn on creation?

Steve Wade avatar
Steve Wade

i guess i can construct the arn

Steve Wade avatar
Steve Wade

it’s quite hacky though, but it works

loren avatar
loren

that’s usually how i address these things

Steve Wade avatar
Steve Wade

why didn’t i think of that

Steve Wade avatar
Steve Wade

thanks dude, you’re the best!

loren avatar
loren

lol, you had the right default, always prefer to reference an attribute! this is just an edge case where that doesn’t work…
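A minimal sketch of the workaround (queue name and publisher are hypothetical): build the ARN from the region and account ID instead of referencing the queue resource, so the policy no longer depends on the queue:

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

locals {
  queue_name = "my-queue" # hypothetical queue name
  # Construct the ARN instead of referencing aws_sqs_queue.this.arn,
  # which breaks the queue -> policy -> queue cycle. SQS ARNs have no
  # random component, so this is safe to build by hand.
  queue_arn = "arn:aws:sqs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:${local.queue_name}"
}

resource "aws_sqs_queue" "this" {
  name   = local.queue_name
  policy = data.aws_iam_policy_document.queue.json
}

data "aws_iam_policy_document" "queue" {
  statement {
    actions   = ["sqs:SendMessage"]
    resources = [local.queue_arn]

    principals {
      type        = "Service"
      identifiers = ["sns.amazonaws.com"] # hypothetical publisher
    }
  }
}
```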

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

We had the same problem with secrets and KMS. You can have the principal be * and use a condition.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
How AWS Secrets Manager uses AWS KMS - AWS Key Management Service

Learn how AWS Secrets Manager uses AWS KMS to encrypt secrets.

2
loren avatar
loren

i’ll second that, the default aws kms policies are really great to study how resource policies work
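A sketch of the pattern Yoni describes (wildcard principal scoped by conditions), loosely following the default key policy AWS documents for Secrets Manager; the statement shape here is illustrative, not a drop-in policy:

```hcl
data "aws_caller_identity" "current" {}
data "aws_region" "current" {}

data "aws_iam_policy_document" "kms" {
  statement {
    sid       = "AllowUseViaSecretsManager"
    actions   = ["kms:Decrypt", "kms:GenerateDataKey*"]
    resources = ["*"]

    # Wildcard principal, constrained by conditions instead of a specific
    # ARN, so the key policy never has to reference the consuming resource.
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    condition {
      test     = "StringEquals"
      variable = "kms:CallerAccount"
      values   = [data.aws_caller_identity.current.account_id]
    }

    condition {
      test     = "StringEquals"
      variable = "kms:ViaService"
      values   = ["secretsmanager.${data.aws_region.current.name}.amazonaws.com"]
    }
  }
}
```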

Jeff Dyke avatar
Jeff Dyke

Hello. I am starting to build a new environment from scratch so i can migrate my old one into an area that doesn’t have the quirks of console buildout. I am using cloudposse/vpc/aws and cloudposse/dynamic-subnets/aws. Currently only two subnets have public access; the rest go through 1 of 2 NAT gateways. I also don’t need a public-facing subnet for each. I don’t think, in terms of money, this would end up costing that much; curious if others have considered this.

Jeff Dyke avatar
Jeff Dyke

It may be a big nothing burger, happy to hear that as well.

Alex Jurkiewicz avatar
Alex Jurkiewicz

NAT gateways are like $20/mo. Is it really an issue to pay for three?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Like, even if it represents a big percentage of your bill, is it that much money out of your total operational budget?

Jeff Dyke avatar
Jeff Dyke

Nope it’s not, that’s why i asked here. Thank you @. I appreciate when someone has that number in their head. Exactly what i was thinking.

1
Jeff Dyke avatar
Jeff Dyke

This place is great, especially as a one man army.

1

2021-04-29

jason einon avatar
jason einon

hey, hopefully an easy one to answer! although i cant get the correct syntax

jason einon avatar
jason einon

I have the following resource outputs: aws_efs_file_system.jenkins-efs.id aws_efs_access_point.jenkins-efs.id

jason einon avatar
jason einon

I need to string them together so they appear in the following format in the deployed resource volume_handle = aws_efs_file_system.jenkins-efs.id::aws_efs_access_point.jenkins-efs.id

Gareth avatar
Gareth

can you try: volume_handle = format("%s::%s", aws_efs_file_system.jenkins-efs.id, aws_efs_access_point.jenkins-efs.id)

jason einon avatar
jason einon

perfect cheers!

1
jason einon avatar
jason einon

but tf doesnt like the ::
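If the literal :: trips up interpolation, format() as above avoids it; plain string interpolation should also work as long as the colons stay outside the ${} markers (sketch, using the resource names from the thread):

```hcl
volume_handle = "${aws_efs_file_system.jenkins-efs.id}::${aws_efs_access_point.jenkins-efs.id}"
```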

Gareth avatar
Gareth

Has anybody done anything with the aws resource aws_transfer_server and EFS. I can see support was added in provider Release v1.36.22 https://github.com/hashicorp/terraform-provider-aws/issues/17022 but the documentation on it is non existent and I’m currently getting

Error: Unsupported argument

  on transfer_server.tf line 42, in resource "aws_transfer_server" "this":
  42:   domain       = "EFS"

An argument named "domain" is not expected here.
Steve Wade avatar
Steve Wade

i am trying to get my head around guardduty master to member relationship. is my understanding below true …

if we have account X (a member account) which uses region 1 and 2 that means in the master account we need to enable a detector in region 1 and 2

Question: In the master account do we need to setup aws_guardduty_member per region for account X?

Matt Gowie avatar
Matt Gowie

AFAIU, yes.

Matt Gowie avatar
Matt Gowie

The per region part is painful. I know that the Cloud Posse folks have automated some of that via turf: https://github.com/cloudposse/turf

cloudposse/turf attachment image

CLI Tool to help with various automation tasks (mostly all that stuff we cannot accomplish with native terraform) - cloudposse/turf

Matt Gowie avatar
Matt Gowie

Because doing so via Terraform is very painful supposedly.
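For reference, a rough sketch of the per-region wiring in the master account (provider aliases, account ID, and email are hypothetical); the detector/member pair has to be repeated for every region the member uses, which is the painful part:

```hcl
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "usw2"
  region = "us-west-2"
}

# One detector per region in the master account...
resource "aws_guardduty_detector" "use1" {
  provider = aws.use1
  enable   = true
}

resource "aws_guardduty_detector" "usw2" {
  provider = aws.usw2
  enable   = true
}

# ...and one member registration per region for account X.
resource "aws_guardduty_member" "use1" {
  provider    = aws.use1
  detector_id = aws_guardduty_detector.use1.id
  account_id  = "111111111111"           # hypothetical member account
  email       = "security@example.com"   # hypothetical contact
  invite      = true
}

resource "aws_guardduty_member" "usw2" {
  provider    = aws.usw2
  detector_id = aws_guardduty_detector.usw2.id
  account_id  = "111111111111"
  email       = "security@example.com"
  invite      = true
}
```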

Zach avatar

Does atmos have the ability to run terraform workflows in parallel? (ie, sibling root modules that aren’t dependent on each other)

Matt Gowie avatar
Matt Gowie

Good question — I don’t believe so. But might be a good one to add to the feature request list? cc @Andriy Knysh (Cloud Posse) + @Erik Osterman (Cloud Posse)

Zach avatar

We have some scripts/ansible to run multiple components in parallel, but its ugly

Matt Gowie avatar
Matt Gowie

Yeah — I can imagine so. I would think that atmos very likely could support that considering it’s a golang binary under the hood (created by Variant) and doing things in parallel like that is one of golang’s biggest selling points, but I’m pretty sure it’s not supported today.

Zach avatar

Thats what I thought, but wasn’t sure if that was just due to the documentation being very new and in-progress

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it supports running one workflow at a time with sequential steps

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I would say our primary focus is using CD platforms to parallelize the runs. Atmos is primarily focused on local execution during development.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Parallel execution is limited by policies. E.g. in spacelift, we use rego policies to determine executions. We don’t want to re-implement that in Atmos - out of scope.

managedkaos avatar
managedkaos

This made me smile
“No schema found …” warning removed, as schema is far more likely to be available now (#454)
https://github.com/hashicorp/terraform-ls/releases/tag/v0.16.0

John Martin avatar
John Martin

Hey all. If I was using this example and I added another worker group, what do i need to do to ensure some pods only deploy to worker group alpha while the others go to worker group bravo?

John Martin avatar
John Martin
cloudposse/terraform-aws-eks-cluster attachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

John Martin avatar
John Martin

is that where i’d set a kubernetes_labels?

marc slayton avatar
marc slayton

Hola, friends – Ran into an interesting issue while spinning up a multi-account architecture with atmos/terraform. The master account was left out due to a misconfiguration of the tfstate-backend component, so I’ve been trying to import it. Technically, this should be possible, but trying it with atmos, using a command like:

aws-vault exec master-root -- atmos terraform import account aws_organizations_account.organization_accounts[\"master\"] XXXXXXXXXXXX -i -s master

produces an error like this:

Error: Unsupported attribute on /modules/terraform/terraform-core.variant line 223:

This object does not have an attribute named "region".

[...]

Error: 1 error occurred:
	* step "write provider override": job "terraform write override": config "override-contents": source 0: job "terraform provider override": /modules/terraform/terraform-core.variant:223,27-34: Unsupported attribute; This object does not have an attribute named "region"., and 1 other diagnostic(s)

This error seems to come from atmos. The variant file at that location is definitely trying to add a provider with a ‘region’ variable in the config.

marc slayton avatar
marc slayton

I have two questions, really. #1 - Is it possible this is a general problem with doing imports via Atmos? #2 - Is there an easier way to work around the issue where your master account state did not make it into the multi-account tfstate file, but every other account’s did?

Joe Hosteny avatar
Joe Hosteny

@ I ran into this issue with an import, and wound up working around it by modifying atmos

Joe Hosteny avatar
Joe Hosteny

If you used the example project here: https://github.com/cloudposse/atmos/tree/master/example, then you can just change the imports at the bottom of cli/main.variant to use your own fork of the terraform module

cloudposse/atmos attachment image

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos

Joe Hosteny avatar
Joe Hosteny

I just removed the region option from the terraform-core.variant file in the “terraform provider override” step

Joe Hosteny avatar
Joe Hosteny
Terraform import fails on missing option · Issue #36 · cloudposse/atmos attachment image

Found a bug? Maybe our Slack Community can help. Describe the Bug The following error is observed when attempting a terraform import: atmos terraform import account aws_organizations_organizational…

Joe Hosteny avatar
Joe Hosteny

Perhaps tomorrow you can fill me in on how you got iam-primary-roles to apply? I am having some trouble with it, though I am not working from a fresh set of accounts. I am porting our accounts from the old reference architecture standup

Joe Hosteny avatar
Joe Hosteny

Issues with assuming roles, and I am curious how you configured things

marc slayton avatar
marc slayton

RE: iam-primary-roles – sure, happy to. Which version of terraform are you using? Might be this weekend or early next week, but I’m game. Thanks for the clue, btw. I was thinking more-or-less the same thing with regard to the variant wrapper, but I hadn’t dug in that deeply yet. Good to see the community is right there on top of these things.

Joe Hosteny avatar
Joe Hosteny

I’ll have to check exactly which patch version, but it is TF 0.14.x. Thanks! Just looking for some broad guidance if you’ve gotten further along. I’ve been making progress, but a bit slow

Joe Hosteny avatar
Joe Hosteny

I should mention that I’m standing up an identity account, and it is having difficulty assuming the correct role

marc slayton avatar
marc slayton

Honestly, it hasn’t been as bad as I’d thought. I’ve been taking a lot of notes and working on some tutorial material for friends and co-workers. That part has been a little slow, but overall I’d say the Atmos approach has given me a net savings. It takes vastly less time to configure and deploy 90%-prebuilt modules than to write custom ones yourself.

marc slayton avatar
marc slayton

When you say the identity account is having trouble assuming the proper role, I’m not sure I understand. From your earlier post, I’m assuming you mean you have something like an automated system account (e.g. ci/cd) which is trying to assume an IAM role defined within the identity account. Primary roles are basically just roles that are not designed to be delegated to another user. They are roles that an automated service might take on, e.g. to run a delivery pipeline, or build more infrastructure. These are generally things you don’t do in the same identity account, however. Instead, they are roles your system user assumes to take on work in another account that trusts your service with permissions to do the necessary task. LMK if this is all really obvious. I’ve been writing tutorials all week, so forgive me if I am belaboring trivial points. :0)

Joe Hosteny avatar
Joe Hosteny

@ thanks - I am actually not at the point of running ci/cd to do this. This is a manual bootstrap, which is perhaps a bit confused by the fact that it is in the existing accounts previously stood up with the ref arch (except that the identity and dns accounts are new). I think I am doing something stupid - do you have, even in draft form, your tutorial I could look at, or a set of example stack files that got you all the way to the VPC?

marc slayton avatar
marc slayton

Yes, I’m putting that together hopefully this weekend. It’s a side project for me to get some better reference docs going. RE: your problem with assuming iam-primary-roles: When you try to assume a role with your ci/cd user, what happens? Do you get an error message? Have you tried it manually using the awscli? Usually, the error message is a good clue as to what’s happening.

marc slayton avatar
marc slayton

Also might help to see a plan file. That can sometimes reveal the issue.

Joe Hosteny avatar
Joe Hosteny

Apologies, my comments were not very clear. It was a long day. I was doing this from the root account, with the primary_account_id set to the identity account, and the stage set to identity. I am currently in an assumed admin role (from the existing infra), called crl-root-admin. When I attempt to plan the iam-primary-roles component, the backend cannot be configured:

error configuring S3 Backend: IAM Role (arn:aws:iam:::role/crl-gbl-root-terraform) cannot be assumed
Joe Hosteny avatar
Joe Hosteny

I think I just need to read this component a bit more today

Joe Hosteny avatar
Joe Hosteny

This looks like an issue with tfstate_assume_role being defaulted to true, though I thought I had tried disabling that before and using the existing admin role

marc slayton avatar
marc slayton

One potential problem I see is your role definition has no account in it, e.g.:

arn:aws:iam:::role/crl-gbl-root-terraform
marc slayton avatar
marc slayton

To be assumable, this should be a fully qualified ARN, with the account number filled in.

marc slayton avatar
marc slayton

You might want to check your info under the stack definition.

Joe Hosteny avatar
Joe Hosteny

Right - I don’t think it should even be attempting to use that one though. I’m trying to force it to use the admin role arn (with account id and region) that I am currently assumed as. I’m not sure where the tfstate-context is being lost

Joe Hosteny avatar
Joe Hosteny

This was the proximate cause: https://sweetops.slack.com/archives/CB84E9V54/p1619810307034700?thread_ts=1614076395.005800&cid=CB84E9V54. I used the same workaround. I ran into a number of other issues that I’ve been able to resolve so far, and apply this component. I am making notes on those for tickets, or to add to any tutorial if you are sharing publicly.

@Matt Gowie this is one of the things I ran into and had a question about as well. I did the same thing as Mathieu. Is the longer term intent to do something like terraform-null-label but for the tfstate-context?

2021-04-28

Mazin Ahmed avatar
Mazin Ahmed

Open-source project release

Feedback are welcome! Thank you everyone for the support. https://github.com/mazen160/tfquery

mazen160/tfquery attachment image

tfquery: Run SQL queries on your Terraform infrastructure. Query resources and analyze its configuration using a SQL-powered framework. - mazen160/tfquery

2
cool-doge3
Florian SILVA avatar
Florian SILVA

Hello guys, I’m working on elastic beanstalk using the Cloudposse module: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment I know this repo is looking for a maintainer and it’s not a main priority to update it, but could you take a closer look at this PR? https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/pull/170. I just tested it and it looks clean to me, and it could fix an issue in the module.

cloudposse/terraform-aws-elastic-beanstalk-environment attachment image

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Matt Gowie avatar
Matt Gowie

Checking it out @

1
Matt Gowie avatar
Matt Gowie

We have a potential community member who reached out (@) who is interested in helping maintain this module so hopefully that module starts getting more attention from folks who are actually using it soon. Problem we have today is that none of the current maintainers are big into beanstalk so it’s hard to review PRs.

1
Florian SILVA avatar
Florian SILVA

Sounds good to me, it can be merged. I understand the second problem: since beanstalk is quite complex, it’s not easy to review these PRs. I’m currently working on it, making some modifications for my needs and checking whether some issues are linked, so I’m somewhat aware of things. Just trying to fix minor issues when they annoy me ^^

2
Alex Renoki avatar
Alex Renoki

@ I might as well help review some issues if needed. I’m not sure how else to help with this maintenance issue

2
Florian SILVA avatar
Florian SILVA

From what I saw, there are interesting and easy PRs that have been opened. The main issue with this module for me is that it mainly works for the application load balancer; I would not recommend using it for other cases. I had some issues with the classic load balancer, and the network type doesn’t seem to work well enough, but I saw some PRs for that if I remember well, so reviewing those could be a good beginning.

Matt Gowie avatar
Matt Gowie

@ merged. Thanks for the ping.

1
Florian SILVA avatar
Florian SILVA

Just saw it thank you for the efficiency

Matt Gowie avatar
Matt Gowie

Np — I can usually get to things if people complain loudly enough, but overall there is a continuous flood of PRs each week so it’s easy to miss em. Particularly for modules that we / I don’t actively use.

Steve Wade avatar
Steve Wade

does anyone have any recommended cloudwatch alarms for redshift?

Alex Jurkiewicz avatar
Alex Jurkiewicz

depends a lot on your use-case. Personally, we only alarm on the cluster health metric

Alex Jurkiewicz avatar
Alex Jurkiewicz

if a service wants more specific alarms, it’s done at the application level

Steve Wade avatar
Steve Wade

makes sense i am trying to work out the healthy alarm

Steve Wade avatar
Steve Wade

specifically the threshold

Steve Wade avatar
Steve Wade
resource "aws_cloudwatch_metric_alarm" "unhealthy_status" {
  alarm_actions       = [aws_sns_topic.this.arn]
  alarm_description   = "The database has been unhealthy for the last 10 minutes."
  alarm_name          = "${var.redshift_cluster_name}_reporting_unhealthy"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "1"
  metric_name         = "HealthStatus"
  namespace           = "AWS/Redshift"
  ok_actions          = [aws_sns_topic.this.arn]
  period              = "600"
  statistic           = "Maximum"
  threshold           = "0"

  dimensions = {
    ClusterIdentifier = var.redshift_cluster_id
  }
}
Steve Wade avatar
Steve Wade

is this right? it does not feel right

Alex Jurkiewicz avatar
Alex Jurkiewicz

You want > threshold, not >=

Steve Wade avatar
Steve Wade

unhealthy is zero though

Steve Wade avatar
Steve Wade


Any value below 1 for HealthStatus is reported as 0 (UNHEALTHY).

Alex Jurkiewicz avatar
Alex Jurkiewicz

oops, I had it backwards. Well 1 is healthy then. So you want Less Than a threshold of 1

Alex Jurkiewicz avatar
Alex Jurkiewicz

or Less Than Or Equal To 0
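Putting Alex’s suggestion into the alarm, the comparison would become (one possible reading; either form below should be equivalent given the 0/1 metric):

```hcl
# Alarm when HealthStatus drops below 1 (1 = healthy, 0 = unhealthy).
comparison_operator = "LessThanThreshold"
threshold           = "1"
```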

Steve Wade avatar
Steve Wade

makes sense

Steve Wade avatar
Steve Wade

thanks man

Steve Wade avatar
Steve Wade

hope you’re well too

Alex Jurkiewicz avatar
Alex Jurkiewicz

ty u2

Gareth avatar
Gareth

Afternoon, can anybody suggest why my ASG created via terraform-aws-modules/autoscaling/aws/4.1.0 doesn’t force a new ASG to be built when the launch configuration changes?

paraphrased terraform apply output…

# module.asg.aws_autoscaling_group.this[0] will be updated in-place
~ launch_configuration = "my-asg-2000001" -> (known after apply)
module.asg.aws_autoscaling_group.this[0]: Modifications complete after 2s

But at no point are the existing instances replaced. I can see the module has create_before_destroy in it. Any idea what I’m missing?

Gareth avatar
Gareth

looks like this is expected. Although, I’m unclear if this behaviour is due to a difference in functionality between launch configurations and launch templates. I vaguely remember reading that one of them can’t be updated in place. I’ve always used a custom module that changes the ASG when the template changes.

I’ll have to look at the difference between our custom module and the registry based one. would still welcome any comments or points, while I’m doing this comparison.

Gareth avatar
Gareth

Looks like my custom module uses the hack mentioned here: https://github.com/hashicorp/terraform-provider-aws/issues/4100 name = "asg-${aws_launch_configuration.this.name}" but I assume I can’t do that on the module because it will be circular.

Gareth avatar
Gareth

For anybody that has the same problem / question, I’ve found a way to do it. Basically, call the module twice: once to create the launch configuration or launch template, with the module set not to create the ASG.

module "asg_config" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "~> 4.0"

  create_lc  = true
  create_asg = false

  name = "${var.client}-${var.service}-asg"

then use a second module to create the auto scaling group

module "asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "~> 4.0"

  launch_configuration = module.asg_config.launch_configuration_name
  use_lc               = true
  create_asg           = true

I still seem to have an issue where the new ASG is created, but Terraform doesn’t wait for the node to come up before destroying the old one.

module.asg.aws_autoscaling_group.this[0]: Creation complete after 2s
module.asg.aws_autoscaling_group.this[0]: Destroying...

So not perfect but I’ll keep looking for a solution on that one. Hope this helps somebody else.

Tim Birkett avatar
Tim Birkett

Did you ever try configuring the instance_refresh block?

Gareth avatar
Gareth

Hi Tim, apologies, only just seen this. I’ve not configured the instance_refresh block on this module but have used it before. What’s your question?
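For context, an instance_refresh block on the ASG (a sketch; the preference values are illustrative) tells AWS to roll instances when the launch template or configuration changes, instead of relying on the name-change hack:

```hcl
resource "aws_autoscaling_group" "this" {
  # ... existing ASG arguments elided ...

  instance_refresh {
    strategy = "Rolling"

    preferences {
      # Keep at least 90% of capacity in service during the refresh.
      min_healthy_percentage = 90
      # Seconds to wait for a new instance before moving on.
      instance_warmup = 300
    }
  }
}
```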

loren avatar
loren

TIL, if trying to see all the validation warnings instead of the summarized “9 more similar warnings elsewhere”:

terraform validate -json | jq '.diagnostics[] | {detail: .detail, filename: .range.filename, start_line: .range.start.line}'
3
Steve Wade avatar
Steve Wade

n00b question incoming … can someone explain to me exactly what the below actually means please …
Requester VPC (vpc-03a0a62a6d1e42513) peering connection attributes:
DNS resolution from accepter VPC to private IP
Enabled
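That console attribute maps to allow_remote_vpc_dns_resolution on the requester side of the peering options: when enabled, public DNS hostnames of the requester VPC resolve to private IPs when queried from instances in the accepter VPC. A sketch in Terraform (peering connection ID hypothetical):

```hcl
resource "aws_vpc_peering_connection_options" "requester" {
  vpc_peering_connection_id = "pcx-0123456789abcdef0" # hypothetical

  requester {
    # Let instances in the accepter (peer) VPC resolve this VPC's
    # public DNS hostnames to private IP addresses.
    allow_remote_vpc_dns_resolution = true
  }
}
```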

2021-04-27

Release notes from terraform avatar
Release notes from terraform
05:03:41 PM

v0.15.1 0.15.1 (April 26, 2021) ENHANCEMENTS:

config: Various Terraform language functions now have more precise inference rules for propagating the “sensitive” characteristic values. The affected functions are chunklist, concat, flatten, keys, length, lookup, merge, setproduct, tolist, tomap, values, and zipmap. The details are a little different for each of these but the general idea is to, as far as possible, preserve the sensitive characteristic on individual element or attribute values in…

Release v0.15.1 · hashicorp/terraform attachment image

0.15.1 (April 26, 2021) ENHANCEMENTS: config: Various Terraform language functions now have more precise inference rules for propagating the “sensitive” characteristic values. The affected functi…

Gerald avatar
Gerald

Hi folks, do you have group sentinel policy here?

2021-04-26

David avatar
David

Is it possible to alter hostnames per instance in an ASG?

loren avatar
loren

if you mean the hostname of the instances, sure the ASG doesn’t care…

if you mean multiple DNS names resolving to one ASG, sure… the certificate needs to be a wildcard or have a SAN for each valid name, and you need to set up the target group rules to match on the host

David avatar
David

I would like each instance to have a unique hostname containing the AZ

managedkaos avatar
managedkaos

If all you want is a TAG then you can configure that in the AWS config (see the doc). If you want a fully qualified domain name, then you may have to set something up to read the instance data and create DNS entries. https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-tagging.html

Tagging Auto Scaling groups and instances - Amazon EC2 Auto Scaling

Add or remove tags for your Auto Scaling groups and Amazon EC2 instances to organize your resources.

managedkaos avatar
managedkaos


You can add multiple tags to each Auto Scaling group. Additionally, you can propagate the tags from the Auto Scaling group to the Amazon EC2 instances it launches.
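One hedged approach for the AZ-in-hostname case: set the hostname from user data by reading the AZ out of instance metadata (a sketch; the launch-template wrapper, variable, and naming scheme are assumptions):

```hcl
resource "aws_launch_template" "this" {
  name_prefix   = "az-hostname-" # hypothetical
  image_id      = var.ami_id     # assumed variable
  instance_type = "t3.micro"

  # Derive a unique hostname containing the AZ from instance metadata.
  # $${...} is escaped so Terraform passes the shell variables through.
  user_data = base64encode(<<-EOF
    #!/bin/bash
    AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    hostnamectl set-hostname "app-$${AZ}-$${INSTANCE_ID}"
  EOF
  )
}
```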

Steve Wade avatar
Steve Wade

When setting up guardduty with a master/member setup, do you always send the slack notifications from the master account, or have it set up to notify from each of the member accounts?

Release notes from terraform avatar
Release notes from terraform
08:33:36 PM

v0.14.11 0.14.11 (April 26, 2021) ENHANCEMENTS: cli: Update the HashiCorp public key (#28503)

Release notes from terraform avatar
Release notes from terraform
08:33:36 PM

v0.15.1 0.15.1 (April 26, 2021) ENHANCEMENTS:

config: Various Terraform language functions now have more precise inference rules for propagating the “sensitive” characteristic values. The affected functions are chunklist, concat, flatten, keys, length, lookup, merge, setproduct, tolist, tomap, values, and zipmap. The details are a little different for each of these but the general idea is to, as far as possible, preserve the sensitive characteristic on individual element or attribute values in…

Release notes from terraform avatar
Release notes from terraform
08:53:39 PM

v0.13.7 0.13.7 (April 26, 2021) ENHANCEMENTS: cli: Update the HashiCorp public key (#28500)

Release notes from terraform avatar
Release notes from terraform
09:33:39 PM

v0.12.31 0.12.31 (April 26, 2021) ENHANCEMENTS: cli: Update the HashiCorp public key (#28499)

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

For those who missed it - HashiCorp were hit by the Codecov issue, and so all TF versions starting at 0.12 had their signing key updated. Suggest you update to the most recent patch of the minor version you use. HashiCorp said they don’t foresee anyone being able to use this to deliver malicious providers, but it’s a good step to take anyway.

https://www.bleepingcomputer.com/news/security/hashicorp-is-the-latest-victim-of-codecov-supply-chain-attack/

5
2
Release notes from terraform avatar
Release notes from terraform
10:03:40 PM

v0.11.15 0.11.15 (April 26, 2021) IMPROVEMENTS: cli: Update the HashiCorp Public Key (#28498) backend/http: New options for retries on outgoing requests. (#19702)…

2021-04-25

2021-04-24

2021-04-23

Alex Renoki avatar
Alex Renoki

henlo

Alex Renoki avatar
Alex Renoki

I’m here because of the call for maintainers: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment

1
Matt Gowie avatar
Matt Gowie

Are you interested in becoming a maintainer?

Matt Gowie avatar
Matt Gowie

Cool There is some info on the docs site here: https://docs.cloudposse.com/community/faq/

I’ll kick off the conversation with the contributor team and we’ll get back to you.

Alex Renoki avatar
Alex Renoki

sure

Alex Renoki avatar
Alex Renoki

I have several workloads running on Elastic Beanstalk and I’m relying on this Terraform module.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey Alex! That’s great. Let’s talk week after next. I will DM you.

Steve Wade avatar
Steve Wade

does anyone have a recommended approach for setting up guardduty in a multi account setup from a terraform perspective?

Steve Wade avatar
Steve Wade

i have seen a lot of modules but wondered if there was any alignment

Leia Renée avatar
Leia Renée
05:30:45 PM

Hi guys, do you have a sample repository which installs the kubernetes cluster autoscaler and works properly with TF 0.15? I was using cookie labs, which broke after the upgrade. Thanks Leia https://www.linkedin.com/in/leia-renee/

Ryan Fisher avatar
Ryan Fisher

Hi all, I want to set up a greenfield AWS project using the CP resources as intended. What is the best place to start? Is it building the geodesic environment and following the instructions in cloudposse/reference-architectures? The readme says to just clone the repo, set up AWS, and run make root, but that target doesn’t exist.

Ryan Fisher avatar
Ryan Fisher

Also says:
Update the configuration for this account by editing the configs/master.tfvar file
That dir doesn’t exist in that repo

Ryan Fisher avatar
Ryan Fisher

Looking through https://github.com/cloudposse/tutorials, probably will figure it out

cloudposse/tutorials attachment image

Contribute to cloudposse/tutorials development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

@ docs.cloudposse.com is the spot you want to be. Feel free to ask me any question. Reference arch is out of date, I wouldn’t look there.

Ryan Fisher avatar
Ryan Fisher

Thanks, going through the aws bootstrapping tutorial. Makes sense so far. For some reason migrating tfstate to the s3 backend is failing though, not sure why.

Ryan Fisher avatar
Ryan Fisher
✗ . [none] tfstate-backend ⨠ aws-vault exec badger-dev -- aws s3 ls
Enter passphrase to unlock /conf/.awsvault/keys/:
2021-04-24 19:50:47 acme-ue2-tfstate-useful-satyr
Ryan Fisher avatar
Ryan Fisher
✗ . [none] tfstate-backend ⨠ aws-vault exec badger-dev -- terraform init
Enter passphrase to unlock /conf/.awsvault/keys/:
Initializing modules...

Initializing the backend...

Error: Error inspecting states in the "s3" backend:
    S3 bucket does not exist.

The referenced S3 bucket must have been previously created. If the S3 bucket
was created within the last minute, please wait for a minute or two and try
again.

Error: NoSuchBucket: The specified bucket does not exist
        status code: 404, request id: R9BK7CGDC6ADW18R, host id: OGOwOANW5lgRhvUmtI8exraeV7GBAyW45XlTtuelQLMWDFxnfNYAPlgvbNYtmCPyFknapGRRAUQ=
Ryan Fisher avatar
Ryan Fisher

TF not finding the state bucket

Ryan Fisher avatar
Ryan Fisher

I can ls, cp, rm from the bucket so it’s definitely there

Matt Gowie avatar
Matt Gowie

Huh, possibly a region issue? I’d compare the region of the bucket vs the stacks/catalog/globals.yaml backend config + backend.json.tf file.

Matt Gowie avatar
Matt Gowie

Also, I believe atmos will pass the -reconfigure flag when doing a tf init to deal with issues of the backend config changing… but I’m not 100% sure on that, so maybe try deleting your .terraform directories.

Matt Gowie avatar
Matt Gowie

If you can find out what went wrong, please let me know. I’ve gone through that workflow a few times when writing it up.

Joe Hosteny avatar
Joe Hosteny

@Matt Gowie I think the script that invokes yq sets the region to uw2, when it created the state files in ue2. I was going to submit a PR to the tutorial. I hit this too.

Matt Gowie avatar
Matt Gowie

Ah damn, maybe I missed that one in a last minute switch to show dealing more than one region. @Joe Hosteny thanks for the info.

Joe Hosteny avatar
Joe Hosteny

No problem. I think workspace_key_prefix sounds like it needs to be set too? I discussed with @jose.amengual in another thread. Haven’t checked into it this weekend any more, but I noticed the literal “env:” in the state file s3 path

Matt Gowie avatar
Matt Gowie

@Joe Hosteny — I put up a PR for this. Mind giving it a review since you should be able to approve? https://github.com/cloudposse/tutorials/pull/5

fix: updates random-pet to use ue2 > uw2 in sub by Gowiem · Pull Request #5 · cloudposse/tutorials attachment image

what Updates random-pet.sh script to properly use ue2 when doing a substitution of the bucket + dynamo table names. why This was causing issues with folks not being able to find their bucket. r…

Joe Hosteny avatar
Joe Hosteny

Thanks Approved - that’s what I did as well

Matt Gowie avatar
Matt Gowie

Thanks @Joe Hosteny!

Ryan Fisher avatar
Ryan Fisher

Thanks. I’ll test tomorrow.

Matt Gowie avatar
Matt Gowie

Thanks Ryan!

2021-04-22

Mazin Ahmed avatar
Mazin Ahmed

I made a PR to build up statistics on TFSec findings, to filter results by check type.

Should make analyzing Terraform vulnerabilities much easier

Mazin Ahmed avatar
Mazin Ahmed

If you’re into cloud security, would be happy to connect with you on Twitter https://twitter.com/mazen160 :)

Mazin Ahmed (@mazen160) | Twitter
The latest Tweets from Mazin Ahmed (@mazen160). Hacker Builder. I talk about Web Security, Security Engineering, DevSecOps, and Tech Startups. Founder @FullHunt. [email protected] Blue by Day. Red by Night
Marcin Brański avatar
Marcin Brański

Looks amazing!

Amit Karpe avatar
Amit Karpe

Hi, I am using the elasticsearch module, which is trying to create an IAM user.

➜ tf apply -auto-approve
module.elasticsearch.aws_security_group.default[0]: Refreshing state... [id=sg-0e56e3767a5b60fe7]
module.elasticsearch.aws_security_group_rule.ingress_cidr_blocks[0]: Refreshing state... [id=sgrule-3053224398]
module.elasticsearch.aws_security_group_rule.egress[0]: Refreshing state... [id=sgrule-3045719721]
module.elasticsearch.aws_iam_role.elasticsearch_user[0]: Creating...
module.elasticsearch.aws_elasticsearch_domain.default[0]: Creating...
module.elasticsearch.aws_elasticsearch_domain.default[0]: Still creating... [10s elapsed]

Error: Error creating IAM Role es-msf-gplsmzapp-1-user: AccessDenied: User: arn:aws:iam::XXXX:test is not authorized to perform: iam:CreateRole on resource: arn:aws:iam::330153026934:role/es-msf-gplsmzapp-1-user with an explicit deny
        status code: 403, request id: 87e0551b-3953-4e56-b364-a02b26065841

  on .terraform/modules/elasticsearch/main.tf line 68, in resource "aws_iam_role" "elasticsearch_user":
  68: resource "aws_iam_role" "elasticsearch_user" {



Error: Error creating ElasticSearch domain: ValidationException: You must specify exactly one subnet.

  on .terraform/modules/elasticsearch/main.tf line 100, in resource "aws_elasticsearch_domain" "default":
 100: resource "aws_elasticsearch_domain" "default" {

Just curious, can I skip the user or role creation process? I thought I was able to provision ES/Kibana without creating this user in another env, but I was wrong: I found that it creates a new role in the other env too, so role creation is the default. Can we skip role creation? Can someone guide me?

Matt Gowie avatar
Matt Gowie

Hey Amit, check out the variable here: https://github.com/cloudposse/terraform-aws-elasticsearch#input_create_iam_service_linked_role

That enables / disables creation of the role.

The elasticsearch_user resource that is created can be found here: https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/main.tf#L68

There is logic which you could use to disable that resource if you want.

cloudposse/terraform-aws-elasticsearch attachment image

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Dan avatar

Hi lads, I have an issue with the cloudposse/elasticsearch/aws module. Although I set create_iam_service_linked_role = "false" and there is nothing in the plan related to AWSServiceRoleForAmazonElasticsearchService, apply is throwing

Error: Error creating service-linked role with name es.amazonaws.com: InvalidInput: Service role name `AWSServiceRoleForAmazonElasticsearchService` has been taken in this account, please try a different suffix.
	status code: 400, request id: 9c27ff1d-5ec9-496c-8290-cf65180ffb69

  on iam.tf line 49, in resource "aws_iam_service_linked_role" "es":
  49: resource "aws_iam_service_linked_role" "es" {

any idea what can be?

Matt Gowie avatar
Matt Gowie

That value should be bool — Are you trying to pass it as a string?

Matt Gowie avatar
Matt Gowie

If it’s still trying to create that role then I’d look at the logic behind that variable and try to trace it back. If there is a bug, either open an issue or put up a PR and then post in #pr-reviews.

1
Dan avatar

thanks, I already did as bool, I will dig deeper

Lee Skillen avatar
Lee Skillen

Seems like there’s an issue open for the issue: https://github.com/terraform-providers/terraform-provider-aws/issues/5218

Lee Skillen avatar
Lee Skillen

Actually, I misread, but does give some insight into the parameter.

Lee Skillen avatar
Lee Skillen

Definitely could see how it’d fail as a string even if specified as "false" (non-empty string is truthy, I assume). Looks like using = false should work. Good luck!
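A minimal sketch of the fix Lee describes, passing the variable as a real bool rather than a string (module version and the other inputs are placeholders):

```hcl
module "elasticsearch" {
  source  = "cloudposse/elasticsearch/aws"
  version = "x.x.x" # pin to a specific version

  # ... other required inputs ...

  # a bare bool, not the string "false"
  create_iam_service_linked_role = false
}
```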

Gerald avatar
Gerald

Hi folks, I’m using the Cloudposse ECS task module. Do you have other modules that can export the docker labels below from the container to Datadog logs as tags?

Joe Presley avatar
Joe Presley

Is it possible to chain Terraform depends_on for a list? Basically I’m trying to do something like

resource "google_secret_manager_secret_version" "main" {
  count       = length(var.secret_data)
  secret      = google_secret_manager_secret.main.id
  secret_data = var.secret_data[count.index].secret_data
  enabled     = var.secret_data[count.index].enabled
  depends_on = [google_secret_manager_secret_version.main[count.index - 1]]
}

to create secret manager versions in a specific order. When I try it, I get an error on the  depends_on  line  A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here.

Matt Gowie avatar
Matt Gowie

Does

resource "google_secret_manager_secret_version" "main" {
  count       = length(var.secret_data)
  secret      = google_secret_manager_secret.main.id
  secret_data = var.secret_data[count.index].secret_data
  enabled     = var.secret_data[count.index].enabled
  depends_on = [google_secret_manager_secret_version.main]
}

Not do what you want?

loren avatar
loren

no you can’t use depends_on on the same resource block to order the creation… you can order them, but only with separate resource blocks. another “kinda” option is to use -parallelism=1

Joe Presley avatar
Joe Presley

I get a cycle error when I use depends_on on the same resource block. I’ll try to use parallelism=1 to see if that maintains order of the list.

Joe Presley avatar
Joe Presley

I just tested with parellism=1 . It doesn’t preserve the order of the list. So secret version 7 is created first instead of secret version 1.

loren avatar
loren

i can imagine a module to try to make it easier, but it would still involve multiple resources, each using depends_on for the prior one. say 10 resources, accepting a list of up to 10 secrets. obvs the number is arbitrary. and if you have more than that number of secrets, invoke the module more than once using module-level depends_on for ordering

Joe Presley avatar
Joe Presley

I think you’re right. I’ll need to have the logic for the depends_on on a module level. Then the user invokes the secret-version module multiple times with a chain of depends_on.

Joe Presley avatar
Joe Presley

The only other way I see doing it is to create a module for a secret_version and let the user chain the depends_on in the module calls.
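A rough sketch of the separate-resource-blocks approach loren describes (resource names here are hypothetical; depends_on only accepts static references to whole resource blocks, which is why the indexed form fails):

```hcl
resource "google_secret_manager_secret_version" "first" {
  secret      = google_secret_manager_secret.main.id
  secret_data = var.secret_data[0].secret_data
}

resource "google_secret_manager_secret_version" "second" {
  secret      = google_secret_manager_secret.main.id
  secret_data = var.secret_data[1].secret_data

  # static reference to the prior resource block — allowed, unlike [count.index - 1]
  depends_on = [google_secret_manager_secret_version.first]
}
```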

Heath Snow avatar
Heath Snow

Question regarding the https://github.com/cloudposse/terraform-aws-mq-broker module. It looks to me like the ingress SGs are messed up: I’m seeing from_port and to_port set to 0 for ingress, but I don’t see protocol set to -1 to allow all TCP traffic. This means all ingress rules get port 0, which just won’t work for connecting to a broker. Does that sound right?

cloudposse/terraform-aws-mq-broker attachment image

Terraform module for provisioning an AmazonMQ broker - cloudposse/terraform-aws-mq-broker

Heath Snow avatar
Heath Snow

Or maybe I don’t understand how port 0 is supposed to work?

MattyB avatar
MattyB

Unless I’m mistaken, from and to port “0” should mean all ports. The protocol is “tcp”, which will allow only TCP traffic: https://github.com/cloudposse/terraform-aws-mq-broker/blob/master/sg.tf

MattyB avatar
MattyB

Part of the code that I’m seeing

resource "aws_security_group_rule" ... {
  description = "Allow inbound traffic from existing Security Groups"
  from_port   = 0
  to_port     = 0
  protocol    = "tcp"
  type        = "ingress"
}
MattyB avatar
MattyB

whelp, I guess I’m mistaken… for wide-open ports we’re using from 0 to 65535. I have to be going crazy

Heath Snow avatar
Heath Snow

Doesn’t protocol need to be “all” or “-1” for this to work?

Heath Snow avatar
Heath Snow

I’m testing a RabbitMQ broker now with nc -z -v <hostname> 5671 and it doesn’t work with “tcp” and port 0 .

Heath Snow avatar
Heath Snow

but it works with the port set to 5671 explicitly.

MattyB avatar
MattyB

looks like you’re right

Heath Snow avatar
Heath Snow

Okay, I thought I was losing it. I’ll submit a PR shortly.

MattyB avatar
MattyB
cloudposse/terraform-aws-eks-cluster attachment image

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

MattyB avatar
MattyB

https://github.com/cloudposse/terraform-aws-rds-cluster/blob/a51d466251f9584236606007aefe1578638ce684/main.tf#L25

might be more suited to this particular project (specifying a single port)

cloudposse/terraform-aws-rds-cluster attachment image

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Heath Snow avatar
Heath Snow

Yeah, in my issue I alluded to refining the ports further:
> Even this is a bit wide open IMO when it should be restricting traffic to only the ports needed, but at least this will fix it.

Incorrect protocol for ingress security groups · Issue #29 · cloudposse/terraform-aws-mq-broker attachment image

Describe the Bug When using the allowed_security_groups or allowed_cidr_blocks inputs they are setting port 0 traffic as allowed. This doesn&#39;t work since any broker queue (ActiveMQ or RabbitMQ)…

1
Heath Snow avatar
Heath Snow

There’s only 2 supported brokers I think, ActiveMQ and RabbitMQ. Maybe my PR should include the specific ports you think?

Heath Snow avatar
Heath Snow

Or I could fix the immediate issue and follow up w/a 2nd PR to restrict ‘em.

MattyB avatar
MattyB

I personally think it should be restricted to a default port like they do with the RDS cluster.
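A hedged sketch of what a port-restricted ingress rule could look like (resource and SG names are hypothetical; 5671 is the AMQPS port for RabbitMQ, and the right port set per engine would need confirming against AWS docs):

```hcl
resource "aws_security_group_rule" "ingress_broker" {
  description       = "Allow inbound broker traffic on the engine's port only"
  type              = "ingress"
  from_port         = 5671 # AMQPS for RabbitMQ; ActiveMQ uses other ports, e.g. 61617 for OpenWire TLS
  to_port           = 5671
  protocol          = "tcp"
  security_group_id = aws_security_group.broker.id # hypothetical SG
  cidr_blocks       = var.allowed_cidr_blocks
}
```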

Heath Snow avatar
Heath Snow

In a previous PR (https://github.com/cloudposse/terraform-aws-mq-broker/pull/28) I was conversing with Gowiem and he said he was using this module, so now I’m wondering how he’s using it in this state

Dynamic logs block by heathsnow · Pull Request #28 · cloudposse/terraform-aws-mq-broker attachment image

what RabbitMQ support via dynamic logs block RabbitMQ general logs cannot be enabled. hashicorp/terraform-provider-aws#18067 why Support engine_type = &quot;RabbitMQ&quot; references has…

2021-04-21

Kim avatar

can anyone here help me use this project: https://github.com/cloudposse/terraform-aws-config ?

cloudposse/terraform-aws-config attachment image

This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config

Matt Gowie avatar
Matt Gowie

What help do you need @?

Matt Gowie avatar
Matt Gowie

Ask some specific questions and I’m sure folks can help out. Definitely be sure to check the examples/complete directory if you need direction on how to use it generally.

Kim avatar

In fact, I need to know how to start using the project step by step

Matt Gowie avatar
Matt Gowie

@ I’d check out some tutorials from HashiCorp Learn — The modules track might help https://learn.hashicorp.com/collections/terraform/modules

Going step by step through how to execute Terraform and use a module / talk through the AWS process is a bit much to go into in a forum like this unfortunately, so doing some research on your own and then circling back with specific questions like “Hey why am I getting this error” will allow me or others to help you.

Reuse Configuration with Modules | Terraform - HashiCorp Learn attachment image

Learn how to provision, secure, connect, and run any infrastructure for any application.

O K avatar

Hi All! I deployed the following terraform config in one account and it works fine. Currently I’m trying to deploy the same code in another account and facing the error below: elasticsearch gets stuck in the Loading state. I checked that STS is enabled in my region, so that is not the cause: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html#es-vpc-sts

module "elasticsearch-app" {
  source = "../../../external_modules/cloudposse/terraform-aws-elasticsearch"
  stage  = var.environment
  name   = "elasticsearch-ap"
  //  TODO: setup DNS zone for elasticsearch-app
  //  dns_zone_id             = "Z14EN2YD427LRQ"
  security_groups         = [module.stage_vpc.default_security_group_id, module.eks.worker_security_group_id]
  vpc_id                  = module.stage_vpc.vpc_id
  subnet_ids              = [module.stage_vpc.public_subnets[0]]
  availability_zone_count = 1
  zone_awareness_enabled  = "false"
  elasticsearch_version   = "7.9"
  instance_type           = "t2.small.elasticsearch"
  instance_count          = 1
  ebs_volume_size         = 10
  // TODO: create strict policies for elastic assumed roles
  iam_role_arns           = ["*"]
  iam_actions             = ["es:ESHttpGet"] #, "es:ESHttpPut", "es:ESHttpPost", "es:ESHttpHead", "es:ESHttpDelete"]
  encrypt_at_rest_enabled = "false"
  kibana_subdomain_name   = "kibana-es-apps"

  # Disable option: Require HTTPS for all traffic to the domain
  # Required as global-search service doesn't work with https
  domain_endpoint_options_enforce_https = false


  advanced_security_options_internal_user_database_enabled = true
  advanced_security_options_master_user_name               = "elasticuser"
  advanced_security_options_master_user_password           = aws_ssm_parameter.elasticsearch_apps_password.value

  // Required as workaround: https://github.com/cloudposse/terraform-aws-elasticsearch/issues/81
  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }

}

Error:

module.elasticsearch-app.aws_elasticsearch_domain.default[0]: Still creating... [59m51s elapsed]
module.elasticsearch-app.aws_elasticsearch_domain.default[0]: Still creating... [1h0m1s elapsed]

Error: Error waiting for ElasticSearch domain to be created: "arn:aws:es:us-east-1:11111111111111:domain/stage-elasticsearch-ap": Timeout while waiting for the domain to be created

  on ../../../external_modules/cloudposse/terraform-aws-elasticsearch/main.tf line 100, in resource "aws_elasticsearch_domain" "default":
 100: resource "aws_elasticsearch_domain" "default" {
Amazon Elasticsearch Service Troubleshooting - Amazon Elasticsearch Service

Learn how to identify and solve common Amazon Elasticsearch Service errors.

Matt Gowie avatar
Matt Gowie

@ the AWS ElasticSearch service is known to be slow / crappy. I’ve had plenty of issues with it taking hour+ deploy times and terraform timing out. I would guess that your issue is due to AWS ElasticSearch and not due to the module.

Matt Gowie avatar
Matt Gowie

I’d suggest trying again tomorrow

O K avatar

Thank you, man

O K avatar

changed instance from t2.small -> t2.medium and it deployed in 15 min! probably more memory is needed or AWS knows the stuff

Matt Gowie avatar
Matt Gowie

AWS is weak: They allow you to use t2.small instances, but they basically don’t work. Like I’ve had issues like you just ran into and just general day to day memory failures due to trying to use too small of instances. It’s like AWS wants to show off that you can keep your ES clusters cheap, but in reality…. that shit don’t work.

Matt Gowie avatar
Matt Gowie

ES is extremely memory hungry so I’d rather them just come out and say: Hey we don’t allow you to use cheap instances because ES is just way too memory intensive. You have to use these expensive boxes over here.

O K avatar
Introducing OpenSearch | Amazon Web Services attachment image

Today, we are introducing the OpenSearch project, a community-driven, open source fork of Elasticsearch and Kibana. We are making a long-term investment in OpenSearch to ensure users continue to have a secure, high-quality, fully open source search and analytics suite with a rich roadmap of new and innovative functionality. This project includes OpenSearch (derived from […]

Matt Gowie avatar
Matt Gowie

Not sure if I have a fully formed opinion. But I hope that causes them to put more effort into ES as a managed service as I’ve been pretty unhappy with it so far. I’ve lost clusters and data because clusters can just get into a failed state and the AWS documentation just says “Hey contact support to fix”. That’s BS in my mind. So if they start addressing issues like that then I’ll be happier.

Matt Gowie avatar
Matt Gowie

Then again, if I never had to use ElasticSearch ever again, I’d be happy.

1
O K avatar

This is the 0.12 version: https://github.com/cloudposse/terraform-aws-elasticsearch. I wonder what I should check to solve this error

cloudposse/terraform-aws-elasticsearch attachment image

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Tom Vaughan avatar
Tom Vaughan

Is there a way to set up terraform-aws-tfstate-backend so the state file is saved to a folder in an existing S3 bucket? The way I have been using this module there is a bucket for each state file and it is getting pretty cluttered. Was hoping there was a way to do this for better organization.

roth.andy avatar
roth.andy

You can specify a folder path in the key param when specifying the s3 state backend

roth.andy avatar
roth.andy
 backend "s3" {
   region         = "us-east-1"
   bucket         = "< the name of the S3 state bucket >"
   key            = "some/folder/structure/terraform.tfstate"
   dynamodb_table = "< the name of the DynamoDB locking table >"
   profile        = ""
   role_arn       = ""
   encrypt        = true
 }
Tom Vaughan avatar
Tom Vaughan

Thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yup, this is the best way

MattyB avatar
MattyB

Regarding https://github.com/cloudposse/terraform-aws-alb and https://github.com/cloudposse/terraform-aws-nlb - is there a particular reason the module “access_logs” in nlb can’t look like alb? I’m more than happy to submit the PR. I didn’t know if I was missing something.

cloudposse/terraform-aws-alb attachment image

Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb

cloudposse/terraform-aws-nlb attachment image

Terraform module to provision a standard NLB for TCP/UDP/TLS traffic https://cloudposse.com/accelerate - cloudposse/terraform-aws-nlb

1
jose.amengual avatar
jose.amengual

send a PR, but yes, they should be similar

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ya, they should be similar.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The NLB module was a contribution from some other organization

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Submit the PR and we’ll review in #pr-reviews

2
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Did you hear about AWS’s new policy validation API and wish you could use it with your Terraform code? Now there’s a way: https://indeni.com/blog/integrating-awss-new-policy-validation-with-terraform-in-ci-cd/

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is there a non-saas version?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Not today. The API calls are made directly from our own AWS account. What’s your thinking?

bds avatar

I was going to ask the same thing. Looks pretty cool

1
tolstikov avatar
tolstikov

any ideas how to detect configuration drift, e.g. resources created manually without terraform? Any tools/vendors for this kind of task?

Charles Kim avatar
Charles Kim
Alex - there’s driftctl (https://driftctl.com/), but you need to understand what you determine as drift. In TF, you can use the lifecycle meta-argument (https://www.terraform.io/docs/language/meta-arguments/lifecycle.html) to intentionally ignore changes. Also, resources are often created out of events (e.g. a lambda function creates an s3 bucket). What’s your definition of config drift?
Catch Infrastructure Drift attachment image

driftctl is a free and open-source CLI that warns of infrastructure drift and fills in the missing piece in your DevSecOps toolbox.

tolstikov avatar
tolstikov

in this case I mean that I need to detect any resources in the AWS account which were not created by the terraform code @

tolstikov avatar
tolstikov

driftctl seems good, thanks! @

Charles Kim avatar
Charles Kim

Try out driftctl. It has some issues but the team is rather responsive. disclosure: i work at Cloudrail and we’re focusing on solving security issues resulting from config drift. Here’s a sample: https://indeni.com/blog/identifying-iam-configuration-drift/

Identifying IAM Configuration Drift | Indeni

So, your team, or even possibly your entire organization, has decided to standardize on using infrastructure-as-code to define IAM entities within cloud environments. For example, […]

1
Steve Wade avatar
Steve Wade

@ did you use driftctl with Atlantis?

Mazin Ahmed avatar
Mazin Ahmed

I’m working on a new project that will be released soon! Would love to hear your feedback, let me know your Github ID if you would like a preview before release https://twitter.com/mazen160/status/1383475198544936964

My side-project for the weekend, tfquery: a tool for SQL-like queries in your Terraform State files. My goal is to be able to say: $> select * from resources where type = “aws_s3_bucket” and “is_encrypted” = false

Will try to open-source it since I didn’t find a good solution

imiltchman avatar
imiltchman

This sounds really cool. Will you be able to query the entire S3 backend, or just one state file at a time?

Mazin Ahmed avatar
Mazin Ahmed

@imiltchman You can sync your S3 backend locally, and then run a query on all tfstate files at the same time. It’s been really helpful for me

Mazin Ahmed avatar
Mazin Ahmed

Let me know if I can add you! Hopefully I can hear your thoughts if you get a chance

imiltchman avatar
imiltchman

I don’t have any immediate use cases for this, but I’ll keep it in mind. Wishing you success with the launch.

David Fernandez avatar
David Fernandez

Hi, I’m working with https://github.com/cloudposse/terraform-aws-documentdb-cluster and I’d like to disable TLS on DocumentDB, but I can’t find how. Is it possible with this module?

cloudposse/terraform-aws-documentdb-cluster attachment image

Terraform module to provision a DocumentDB cluster on AWS - cloudposse/terraform-aws-documentdb-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if something that the aws_docdb_cluster resource supports is missing from the module, you can open an issue or a PR

David Fernandez avatar
David Fernandez

ok, thanks so much!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
parameter {
  name  = "tls"
  value = "enabled"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

which in the module is this var

https://github.com/cloudposse/terraform-aws-documentdb-cluster/blob/master/variables.tf#L84
David Fernandez avatar
David Fernandez

ok, I’ll try with the variable “cluster_parameters”

David Fernandez avatar
David Fernandez

Sorry one question, if I use for example :

variable "parameters" {
  type = list(object({
    apply_method = string
    name         = string
    value        = string
  }))
  default = [
    {
      apply_method = "true"
      name         = "tls"
      value        = "disabled"
    }
  ]
  description = "List of parameters for documentdb"
}
David Fernandez avatar
David Fernandez

How could I use it in my main.tf? Do I get the values with for_each?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "documentdb_cluster" {
  source  = "......"
  version = "x.x.x"

  cluster_size = var.cluster_size

  cluster_parameters = [
    {
      apply_method = ""
      name         = "tls"
      value        = "disabled"
    }
  ]
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or

module "documentdb_cluster" {
  source  = "......"
  version = "x.x.x"

  cluster_size = var.cluster_size

  cluster_parameters = var.parameters
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module does for_each on the list of objects

David Fernandez avatar
David Fernandez

ok,thanks!

2021-04-20

Gareth avatar
Gareth

Morning everyone, I have a resource that creates aws_cloudfront_origin_request_policy, which I then reference in a locals section: cf_custom_request_policy_map = { for k, v in aws_cloudfront_origin_request_policy.this : k => v.id if length(aws_cloudfront_origin_request_policy.this) > 0 } and then merge with all_policy_maps = merge(local.cf_managed_policy_map, local.cf_custom_request_policy_map). The resource is togglable so won’t always be there. Everything looks to work, but I do a lot of sanity checking / viewing of outputs in the console when I’m trying to debug my code, and when trying to view local.all_policy_maps I get Error: Result depends on values that cannot be determined until after "terraform apply". Which makes sense, but my question now is… *Is there a better way I should be referencing the output of the resource?* If this was a module I’d normally use an output, but it’s not part of a module; the resource and local are all within the same tf script and are part of the same single apply.

Welcome all comments and thank you all in advance.
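A sketch of the locals Gareth describes (map names taken from his message; note the `if` filter is redundant, since a disabled resource simply yields an empty map that merge handles fine):

```hcl
locals {
  # empty map when the toggled resource is disabled (count/for_each of 0)
  cf_custom_request_policy_map = {
    for k, v in aws_cloudfront_origin_request_policy.this : k => v.id
  }

  all_policy_maps = merge(
    local.cf_managed_policy_map,
    local.cf_custom_request_policy_map,
  )
}
```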

Amit Karpe avatar
Amit Karpe

Hi, I am using this module (terraform-aws-elasticsearch) and looking to enable Fine-Grained Access Control in Amazon Elasticsearch Service. Based on my understanding, can I say:

advanced_security_options_internal_user_database_enabled = true

Will the above configuration enable it?

Marcin Brański avatar
Marcin Brański

I did that using v0.30.0 of that module. Add these options beside the other required parameters specified in the docs:
advanced_security_options_enabled = true
advanced_security_options_internal_user_database_enabled = true
advanced_security_options_master_user_name = "master"
advanced_security_options_master_user_password = "pass"

Amit Karpe avatar
Amit Karpe

thank you!

Amit Karpe avatar
Amit Karpe

If not, I want to know how to enable “fine-grained access control” for ES using the above module

Aaditya Nandeshwar avatar
Aaditya Nandeshwar

Hello Folks, how do I use multiple managed rules in the AWS Config module below?

module "example" {
  source = "cloudposse/config/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"

  create_sns_topic = true
  create_iam_role  = true

  managed_rules = {
    account-part-of-organizations = {
      description  = "Checks whether AWS account is part of AWS Organizations. The rule is NON_COMPLIANT if an AWS account is not part of AWS Organizations or AWS Organizations master account ID does not match rule parameter MasterAccountId.",
      identifier   = "ACCOUNT_PART_OF_ORGANIZATIONS",
      trigger_type = "PERIODIC"
      enabled      = true
    }
  }
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-config attachment image

This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config

Aaditya Nandeshwar avatar
Aaditya Nandeshwar

I’m trying the approach below but getting an error

module "config" {
  source = "cloudposse/config/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version     = "x.x.x"
  s3_bucket_arn                    = module.s3_config_bucket.bucket_arn
  s3_bucket_id                     = module.s3_config_bucket.bucket_id
  global_resource_collector_region = "ap-south-1"
  create_sns_topic                 = false
  create_iam_role                  = true
  
  managed_rules = {
    access-keys-rotated = {
      description  = "Checks if the active access keys are rotated within the number of days specified in maxAccessKeyAge. The rule is NON_COMPLIANT if the access keys have not been rotated for more than maxAccessKeyAge number of days.",
      identifier   = "ACCESS_KEYS_ROTATED",
      trigger_type = "PERIODIC"
      enabled      = true
      input_parameters = [
        {
        maxAccessKeyAge = 90
        }
      ]
     },
    acm-certificate-expiration-check = {
      description  = "Checks if AWS Certificate Manager Certificates in your account are marked for expiration within the specified number of days. Certificates provided by ACM are automatically renewed. ACM does not automatically renew certificates that you import",
      identifier   = "ACM_CERTIFICATE_EXPIRATION_CHECK",
      trigger_type = "Configuration changes"
      enabled      = true
      input_parameters = [
        {
        daysToExpiration = 15
        }
      ]
    }
  }
} 
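One guess worth checking against the module’s variables.tf: input_parameters may expect a map (or JSON-encoded string) rather than a list of objects. A hypothetical variant of the first rule under that assumption:

```hcl
  managed_rules = {
    access-keys-rotated = {
      description      = "Checks if the active access keys are rotated within the number of days specified in maxAccessKeyAge."
      identifier       = "ACCESS_KEYS_ROTATED"
      trigger_type     = "PERIODIC"
      enabled          = true
      input_parameters = { maxAccessKeyAge = "90" } # map, not a list of objects (assumption)
    }
  }
```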
hkaya avatar
hkaya

Maybe this has already been discussed, but I could not find anything useful so I figured it might be worth asking anyway. Is there a known best way or practice to deal with a larger number of helm_releases, ideally in a dynamic fashion? My use case looks like this:
• one pipeline builds a release from repository 1 and pushes helm charts to an artifactory folder
  ◦ the number of helm charts can vary from 1 to >50 (and can grow over time)
• another pipeline gets triggered from the first and starts with a terraform run, where some helm_release resources get deployed; the idea was to look up the list of services from the chart repo (can be done with the jfrog cli in the pipeline) and use this list for some kind of iteration over either
  ◦ a module where the service parameters are fed in along with the service name from the list
  ◦ another method based on terraform, unknown to me so far
Or am I going the wrong path trying to solve this with terraform when it should be done with native helm?

Thank you for your suggestions.
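A for_each over helm_release is the usual Terraform-native approach here; a minimal sketch, assuming the pipeline hands the chart list in via a variable (the variable name and repository URL below are placeholders, not from the thread):

```hcl
# Sketch only: names and the repository URL are assumed.
variable "services" {
  type        = list(string)
  description = "Chart/service names, e.g. produced by the jfrog cli in the pipeline"
  default     = []
}

resource "helm_release" "service" {
  for_each   = toset(var.services)
  name       = each.key
  chart      = each.key
  repository = "https://artifactory.example.com/artifactory/helm-local" # placeholder
  namespace  = "default"
}
```

Passing the list with -var 'services=[...]' (or a generated .tfvars file) keeps chart discovery in the pipeline and deployment in Terraform.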

managedkaos avatar
managedkaos

sharing for the (terraform) culture!

alias moduleinit='touch {main,variables,outputs}.tf && wget https://raw.githubusercontent.com/github/gitignore/master/Terraform.gitignore -O .gitignore'
Matt Gowie avatar
Matt Gowie

Here is my similar bash hackery around this — https://github.com/Gowiem/DotFiles/blob/master/terraform/functions.zsh#L70

Though I honestly don’t use that much anymore.

Gowiem/DotFiles

Gowiem DotFiles Repo. Contribute to Gowiem/DotFiles development by creating an account on GitHub.

managedkaos avatar
managedkaos

nice.

Steve Wade avatar
Steve Wade

incoming n00b question: I am trying to work out what zone awareness means in AWS Elasticsearch. Do you have to use it in a multi-node setup?

Matt Gowie avatar
Matt Gowie

Probably a good question for #aws or read through AWS docs on the subject.

2021-04-19

Steve Wade avatar
Steve Wade

how likely (in time) would it be that if I created a PR for https://github.com/cloudposse/terraform-aws-documentdb-cluster that it would be merged and tagged?

cloudposse/terraform-aws-documentdb-cluster

Terraform module to provision a DocumentDB cluster on AWS - cloudposse/terraform-aws-documentdb-cluster

Marcin Brański avatar
Marcin Brański

Like with any PR, it can be a few hours up to infinity. Depends on how complex the PR is, whether best practices are followed, whether there are tests, and whether the added functionality makes sense.

What do you want to implement? If you need guidance let me know

Matt Gowie avatar
Matt Gowie

@ we’re pretty good about providing guidance so if you put something up then it’ll likely get eyes on it and you’ll get direction if something needs to change. If nobody responds quickly then feel free to ping in #pr-reviews or ping me directly.

Steve Wade avatar
Steve Wade

i realised that what i was wanting to do doesn’t make sense for the module

Steve Wade avatar
Steve Wade

so we just forked what we needed

Matt Gowie avatar
Matt Gowie

barak avatar
barak

Checkov 2.0 is released! A ton of work went into this from the Bridgecrew team (and from you all) and we’re super excited about this milestone for the project. TL;DR the update includes:

• A completely rearchitected graph-based Terraform scanning framework. This allows for multi-resource queries with improved variable resolution and drastically increases performance.

• Checkov can now scan Dockerfiles for misconfigurations.

• We’ve added nearly 250 new out-of-the-box policies, including existing attribute-based ones and new graph-based ones. To learn more, check out:

• The Bridgecrew blog post: https://bridgecrew.io/blog/checkov-2-0-release

Checkov 2.0: Deeper, broader, and faster IaC scanning | Bridgecrew Blog

Introducing our biggest update to Checkov 2.0 yet including an all-new graph-based framework, 250 new policies, and Dockerfile support.

Brandon Metcalf avatar
Brandon Metcalf

hello. is there an open issue to address the deprecated use of null_data_source: https://github.com/cloudposse/terraform-aws-ec2-instance/blob/4f28ecce852107011f66bf74bb6b32691605b368/main.tf#L153 ? i didn’t find anything and can submit a PR. thanks.

cloudposse/terraform-aws-ec2-instance

Terraform module for provisioning a general purpose EC2 host - cloudposse/terraform-aws-ec2-instance

Matt Gowie avatar
Matt Gowie

Doesn’t look like it @ — Feel free to submit a PR and post it here or #pr-reviews and I’ll give it a review.

Gareth avatar
Gareth

Hopefully a simple question… Is it possible to do multiple comparisons like this? cookie_behavior = local.myvalue == "none" || "whitelist" || "all" ? local.myvalue : null This currently errors, so I assume not. Continuing with the assumptions, I assume the only real options are a regular expression or cookie_behavior = local.myvalue == "none" || local.myvalue == "whitelist" || local.myvalue == "all" ? local.myvalue : null

loren avatar
loren

yes but they are separate comparison expressions when you set it up like that:

cookie_behavior = (local.myvalue == "none" || local.myvalue == "whitelist" || local.myvalue == "all") ? local.myvalue : null

or you can use a list with contains():

cookie_behavior = contains(["none", "whitelist", "all"], local.myvalue) ? local.myvalue : null
Gareth avatar
Gareth

Thank you Loren,

vicken avatar
vicken

Has anybody run into this issue before changing number of nodes with the msk module? https://github.com/cloudposse/terraform-aws-msk-apache-kafka-cluster/issues/17

Invalid index error when changing `number_of_broker_nodes` variable · Issue #17 · cloudposse/terraform-aws-msk-apache-kafka-cluster

Found a bug? Maybe our Slack Community can help. Describe the Bug Invalid index error when changing number_of_broker_nodes variable from 2 to 4. (The # of AZ&#39;s is 2 instead of 3 like the exampl…

Mohammed Yahya avatar
Mohammed Yahya
New Terraform Tutorial: Deploy Infrastructure with the Terraform Cloud Operator for Kubernetes

Learn how to use the Terraform Cloud Operator for Kubernetes to manage the infrastructure lifecycle through a Kubernetes custom resource.

Steve Wade avatar
Steve Wade

Can anyone recommend an upstream Elasticsearch service module? It needs to handle single and multi-node setups with instance and EBS storage options.

Matt Gowie avatar
Matt Gowie
cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Steve Wade avatar
Steve Wade

I have created my own and used it for a long time, but it doesn’t fit my current client’s use case, as it needs to be more flexible.

Gareth avatar
Gareth

Good evening, has anybody got a suggestions as to the problem here: terraform 0.13.5 is exiting with this error: Error: rpc error: code = Unavailable desc = transport is closing

When trying to apply a aws_cloudfront_origin_request_policy I’ve made.

Gareth avatar
Gareth

if anybody has the same issue here. my problem was

resource "aws_cloudfront_origin_request_policy" "example" {
  name    = "example-policy"
  comment = "example comment"
  cookies_config {
    cookie_behavior = "none"
     cookies {
       items = []
     }
  }

leaving the cookies {} section in place when none is set caused the error. Same with either of headers_config & query_strings

Now to find a way to use dynamic to exclude those sections completely if they are set to none.
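One way to sketch that with a dynamic block (treat this as an untested outline; var.cookie_behavior and var.cookie_items are assumed variable names, and the other required config blocks are omitted):

```hcl
resource "aws_cloudfront_origin_request_policy" "example" {
  name = "example-policy"

  cookies_config {
    cookie_behavior = var.cookie_behavior
    # Emit the cookies block only when the behavior is not "none"
    dynamic "cookies" {
      for_each = var.cookie_behavior == "none" ? [] : [1]
      content {
        items = var.cookie_items
      }
    }
  }
  # headers_config and query_strings_config omitted for brevity
}
```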

Gareth avatar
Gareth

***removed, looks like the example did work but didn’t paste correctly. Above issue must be with my inputs. Sorry to have wasted peoples time.

2021-04-18

Brij S avatar
Brij S

Hi everyone, just wanted to see if anyone had a clever way of doing the following; I’d like to turn the following into a module (which is the easy part )

resource "vault_auth_backend" "aws" {
  type = "aws"
}

resource "vault_aws_auth_backend_role" "example" {
  backend                         = vault_auth_backend.aws.path
  bound_account_ids               = ["123456789012"]
  bound_iam_role_arns             = ["arn:aws:iam::123456789012:role/MyRole"]
}

If multiple account id’s are required then I can pass in a list to bound_account_ids and use count to iterate through it, however, if I wanted the IAM role name to be different for some of the account ids how could I achieve this? for_each ?

managedkaos avatar
managedkaos

@Brij S when you say “different for some of the account ids” things get kind of odd. yeah, you could use a for_each on a list of account ids, but if you want to vary the IAM role, you’ll need to change the data to a map. that way each item in the for_each loop will have an id and an IAM role associated with it.

Another option would be creating a module for each IAM role, that way you can associate the IDs with the module that has the IAM role they need.

there are a few ways you can approach this. just need to figure out which one is best for your use.
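A for_each over a map is the usual shape for this; a hedged sketch where the map keys are account IDs and the values are per-account role names (all variable, resource, and role names here are assumed):

```hcl
variable "bound_accounts" {
  type = map(string)
  # account id => IAM role name; values below are illustrative only
  default = {
    "123456789012" = "MyRole"
    "210987654321" = "OtherRole"
  }
}

resource "vault_aws_auth_backend_role" "this" {
  for_each            = var.bound_accounts
  backend             = vault_auth_backend.aws.path
  role                = "aws-${each.key}"
  bound_account_ids   = [each.key]
  bound_iam_role_arns = ["arn:aws:iam::${each.key}:role/${each.value}"]
}
```

Accounts that share a role name just repeat the same value in the map.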

Matt Gowie avatar
Matt Gowie

Hey all — this Terraform issue could use some . It’s been around for almost 2 years and causes a lot of confusion with modules in the registry (which the Cloud Posse module library of course gets hit by): https://github.com/hashicorp/terraform/issues/21417

loren avatar
loren

Maybe ping mitchelh on twitter? Seems like it should be an easy one, and it seems like he’s able to get an eyeball on useability things like that more than comments on stale issues :/

Matt Gowie avatar
Matt Gowie

Not a bad idea. I’ll do so.

Matt Gowie avatar
Matt Gowie

Always feel like an ass doing that type of thing but went ahead and did it anyway.

loren avatar
loren

2 years seems like a reasonable period to wait before escalating to a more drastic measure

Matt Gowie avatar
Matt Gowie

Yeah agreed. It’s also the type of thing that likely burns people constantly, but the vast majority of module consumers aren’t going to actually look up this issue and give it a .

Alex Jurkiewicz avatar
Alex Jurkiewicz

I believe this is a limitation of HCL2 not of Terraform, per se

Alex Jurkiewicz avatar
Alex Jurkiewicz

(you might get a response which asks you to re-raise in a different repo)


2021-04-17

Mohammed Yahya avatar
Mohammed Yahya
a quick and dirty idea: Do you think using TOML instead of YAML/JSON for passing tfvars to a Terraform stack would make sense?
Alex Jurkiewicz avatar
Alex Jurkiewicz

no, YAML and JSON have their limitations, but they are supported using the standard library for Terraform. The benefit of TOML is much smaller than the cost of added complexity to your process.
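For reference, the standard-library support Alex mentions looks like this (the file name is assumed):

```hcl
locals {
  # yamldecode/jsondecode are built-in Terraform functions; no extra tooling needed
  config = yamldecode(file("${path.module}/config.yaml"))
}
```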

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, IMO we don’t need another config format.

mcseoliver avatar
mcseoliver

Hi everyone

Just a baby learning terraform here. Will watch your channel and ask questions as they come


2021-04-16

Mazin Ahmed avatar
Mazin Ahmed

I’m trying to write a parser for tfstate files. Version 4 sounds doable, but Version 3 is quite hard to normalize. Is there is a way that I can automatically migrate version 3 to version 4 without doing a full upgrade on the codebase?

Matt Gowie avatar
Matt Gowie

That’s a pretty low level question — I’m not sure if anyone will really know off the top of their heads. I’d ask in the HashiCorp Terraform discourse board: https://discuss.hashicorp.com/

HashiCorp Discuss

HashiCorp

Mazin Ahmed avatar
Mazin Ahmed

Will try to check on the discourse board too

Mazin Ahmed avatar
Mazin Ahmed

I’m working on a fun open-source tool for Terraform DevOps that should be useful

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there are more and more projects operating on the terraform state files. i’d look at how they are doing it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
camptocamp/terraboard

A web dashboard to inspect Terraform States - camptocamp/terraboard

Mohammed Naser avatar
Mohammed Naser

Any terratest users here? I’m wondering if anyone has hacked/played with integrating it with KinD to get a Kubernetes cluster on the fly inside Terratest

2021-04-15

David Napier avatar
David Napier

Why does terraform-aws-iam-user require a PGP key? o.O

David Napier avatar
David Napier

Hmm.. yeah, that makes the module pretty much useless for me. Darn.

David Napier avatar
David Napier

Don’t get me wrong, awesome craftsmanship though.

Zach avatar

if you’re trying to make a service user that has programmatic access only, they have a different module for it

David Napier avatar
David Napier

Just making a list of users that can log into the dashboard. I just used the aws_iam_user resource with a for_each loop inside.
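The pattern David describes, sketched (user names and the variable name are placeholders):

```hcl
variable "console_users" {
  type    = set(string)
  default = ["alice", "bob"] # placeholders
}

resource "aws_iam_user" "this" {
  for_each = var.console_users
  name     = each.key
}
```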

sheldonh avatar
sheldonh

moved into #terragrunt. Avoided terragrunt for a long time.

I’m now in a place where I don’t have access to github, using Azure Repos. I need to deploy multiple clones of an environment and managing state is annoying. I’m doing a lot of work to realize I’m basically writing my own Go implementation of terragrunt sorta.

Considering Atlantis runs with az repos, I need to simplify my layout. I’m working with Go developers, limited on GitHub stuff, and Terraform Cloud and others aren’t likely options at this moment (I’d have to roll my own with Azure Pipelines otherwise)…

I tried the yaml config and dove in deep, but since this is basically only Terraform, the abstraction and debugging for my use case wasn’t ideal, though it was pretty cool!

Is there any major reason I shouldn’t just go ahead and use terragrunt for this type of workflow?

sheldonh avatar
sheldonh

… moving this into #terragrunt didn’t realize dedicated channel.

Mario de Sá Vera avatar
Mario de Sá Vera
Hello Folks, just wondering if any of you have already gone through this ... I want to force pre-generated secrets into RDS using locals:

locals {
  your_secret = jsondecode(
    data.aws_secretsmanager_secret_version.creds.secret_string
  )
}
Mario de Sá Vera avatar
Mario de Sá Vera
and then ...

# Set the secrets from AWS Secrets Manager
username = "${local.your_secret.username}"
password = "${local.your_secret.password}"

Mario de Sá Vera avatar
Mario de Sá Vera

but Terraform insists the values are not set … tried several combinations of quotes, $ and ${} … but starting to feel like I am doing the wrong thing here … any directions please?

Mario de Sá Vera avatar
Mario de Sá Vera

and also checked : https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1#bebe — got a feeling the module is not able to access the locals !???

A comprehensive guide to managing secrets in your Terraform code

One of the most common questions we get about using Terraform to manage infrastructure as code is how to handle secrets such as passwords…

Heath Snow avatar
Heath Snow

This suggests you may need to add more to your data call, excerpt:

jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)["key1"]
Mario de Sá Vera avatar
Mario de Sá Vera

hummmmm… interesting… will try that out just now HS ! appreciated.

Mario de Sá Vera avatar
Mario de Sá Vera

Error: provider.aws: aws_db_instance: : “password”: required field is not set

Mario de Sá Vera avatar
Mario de Sá Vera

it still does not get a value it seems …

Mario de Sá Vera avatar
Mario de Sá Vera

but thanks anyway … this is tripping me out !

Heath Snow avatar
Heath Snow

perhaps the data object isn’t getting the right secret? i.e.

data "aws_secretsmanager_secret_version" "example" {
  secret_id = data.aws_secretsmanager_secret.example.id
}
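Pulling the pieces of this thread together, an end-to-end sketch (the secret name and JSON keys are assumed):

```hcl
data "aws_secretsmanager_secret" "db" {
  name = "prod/db-creds" # assumed secret name
}

data "aws_secretsmanager_secret_version" "db" {
  secret_id = data.aws_secretsmanager_secret.db.id
}

locals {
  # secret_string is expected to be JSON like {"username": "...", "password": "..."}
  creds = jsondecode(data.aws_secretsmanager_secret_version.db.secret_string)
}

resource "aws_db_instance" "this" {
  # engine, instance_class, storage, etc. omitted for brevity
  username = local.creds.username
  password = local.creds.password
}
```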
Heath Snow avatar
Heath Snow

anyways, good luck!

Mario de Sá Vera avatar
Mario de Sá Vera

yeah … I am going on that direction now … take care my man !!!

Mario de Sá Vera avatar
Mario de Sá Vera

I was able to sort it out ! the Admin folks are blocking the password field to be set … I went ahead and HARDCODED a string and it is still NOT reading … ALL SORTED !!! MOSTLY APPRECIATED FOR YOUR SUPPORT !!!


2021-04-14

Mohammed Yahya avatar
Mohammed Yahya
Get started with a 1Password Secrets Automation workflow

Learn how to set up and use Secrets Automation to secure, orchestrate, and manage your company’s infrastructure secrets.

Mohammed Yahya avatar
Mohammed Yahya
1Password/terraform-provider-onepassword

Use the 1Password Connect Terraform Provider to reference, create, or update items in your 1Password Vaults. - 1Password/terraform-provider-onepassword

Release notes from terraform avatar
Release notes from terraform
04:03:28 PM

v0.15.0 0.15.0 (April 14, 2021) UPGRADE NOTES AND BREAKING CHANGES: The following is a summary of each of the changes in this release that might require special consideration when upgrading. Refer to the Terraform v0.15 upgrade guide for more details and recommended upgrade steps.

“Proxy configuration blocks” (provider blocks with only alias set) in shared modules are now replaced with a more explicit…

pjaudiomv avatar
pjaudiomv

@eric you called it, they released right before office hours

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Every week

Marcin Brański avatar
Marcin Brański

It doesn’t seem like a significant version. Will test it tomorrow.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it will get auto published today in our packages distribution

jack fenton avatar
jack fenton

hey guys! i forgot if you can do this… or if so, how

i need to get a module s3_replica_bucket_arn = module.secondary.module.stuff

this module is initialised (it’s in ../some_other_folder )

but i get

A managed resource "secondary" "module" has not been declared in
module.secondary.
A managed resource "secondary" "module" has not been declared in
module.primary.

I only want the one module from the parent

anyone know if i am barking up the wrong tree?

managedkaos avatar
managedkaos

Try running terraform state list to get a list of all the modules in your project. If there are a lot of resources you might want to try grepping for module.secondary:

terraform state list | grep module.secondary

That might help you narrow in on the path you need to use to get the value you are looking for.

also note that if you are trying to use a value from a module, the module must publish that value as an output. Check the source for the module to confirm all the things being published. One easy way to do this (especially if you don’t have access to the source) is to print the entire module as output in your project’s outputs.tf:

output "secondary" {
  value = module.secondary
}

Then run terraform refresh to see all the outputs from the module. if the value you are trying to get to is not in there, you can’t get it unless you update the module to publish the value.

jack fenton avatar
jack fenton

thanks, there’s an output, i didn’t know terraform state list (well i did once run it)

jack fenton avatar
jack fenton

i’ll give that a go, cheers

managedkaos avatar
managedkaos

np!

Matthew Tovbin avatar
Matthew Tovbin
Hi folks, who is in charge of reviewing the PRs on cloudposse/terraform-aws-rds-cloudwatch-sns-alarms (https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms)?
cloudposse/terraform-aws-rds-cloudwatch-sns-alarms

Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic - cloudposse/terraform-aws-rds-cloudwatch-sns-alarms

Matt Gowie avatar
Matt Gowie

Hey @ bring this up in #pr-reviews and somebody will get to it. That module gets left behind a little bit if I remember correctly, so us maintainers just need a bit of a nudge and #pr-reviews is the best place for that.

Matthew Tovbin avatar
Matthew Tovbin

Thanks!

Matthew Tovbin avatar
Matthew Tovbin

It would be so great if someone could have a look at the several open PRs and give some of them a go

2021-04-13

Tom Dugan avatar
Tom Dugan

Is there a pattern to resolve the “value depends on resource attributes that cannot be determined until apply” error when the resource a variable refers to is created in the same state as the calling Terraform? Example in thread.

Marcin Brański avatar
Marcin Brański

Yes and no. You can deal with it, but maybe not the way you would like to. Just dealt with one example today: I used for_each on a module and tried to compute its output in locals, and Terraform errored out on me. I fixed it by iterating over module.xxx.output in the resource instead of iterating over the computed local.

Tom Dugan avatar
Tom Dugan

Hmm, I’m not sure I understand or whether that method would apply to my circumstance. My immediate example involves creating a private Route53 zone and sending that zone into a module which will create a DNS entry, using count, if the zone id exists: https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/83bd076d932b3bac8203fe9b3a70cac43d8d36db/main.tf#L169

Terraform doesn’t know if it will exist until apply. Usually I deal with this with a feature flag, i.e. route_53_enabled = true. In this case it’s not my module, it’s Cloud Posse’s, so I’m wondering if there is a better way to do conditional resource creation.

loren avatar
loren

ultimately, the for_each key must depend only on user inputs and not on any computed values derived from resources in the same state

loren avatar
loren

@ you can use -target with plan/apply to work around the problem. looking at that module, the way it depends on var.zone_id in the enabled variable, and how enabled is used in count, there is no other workaround when you create the zone in the same tfstate

loren avatar
loren

you can certainly move them into different tfstates, and manage them separately, and that will work also

loren avatar
loren

(caveat: i haven’t used this module, so there may be some detail i am unaware of that would support your use case. i’ll defer to any of the cloudposse folks if they chime in)

Tom Dugan avatar
Tom Dugan

ah yeah the -target option i was hoping to avoid.

Pretty much this problem is creeping up when we are testing a module which calls that module. To test our module we create the Private zone during the test. In the real call the zone is created in a different state.

I think I will just extract the route53 logic to bypass this issue. Thanks for the insight!

loren avatar
loren

i would suggest, instead of depending on var.zone_id in the enabled variable, the module should accept a var.create_dns_record boolean. i’m not sure how to do that in a backwards-compatible manner though, so not sure it would be accepted
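The boolean-flag pattern loren describes, sketched (variable and resource names are assumed, not the module’s actual interface):

```hcl
variable "create_dns_record" {
  type    = bool
  default = false
}

resource "aws_route53_record" "this" {
  # count now depends only on a user input, never on a computed zone_id
  count   = var.create_dns_record ? 1 : 0
  zone_id = var.zone_id # may be unknown until apply, which is fine here
  name    = var.dns_name
  type    = "CNAME"
  ttl     = 300
  records = [var.endpoint]
}
```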

Tom Dugan avatar
Tom Dugan

That’s in line with my typical design pattern, so I would agree with that approach. The backward compatibility problem is a good note; I’m not sure there is a clean way to solve that.

Steve Wade avatar
Steve Wade

does anyone have a recommended way of running tflint on a monorepo of modules?

pjaudiomv avatar
pjaudiomv

I’m sure there’s a better way but I do this: for D in */; do cd "${D}" && tflint && cd ..; done

pjaudiomv avatar
pjaudiomv

That will only do one level deep though

Matt Gowie avatar
Matt Gowie
antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

Steve Wade avatar
Steve Wade

are there any examples of tflint.hcl as i would like to enable a few rules from https://github.com/terraform-linters/tflint/tree/master/docs/rules
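A minimal .tflint.hcl enabling a couple of rules (rule names taken from the tflint docs linked above; verify against your tflint version):

```hcl
rule "terraform_naming_convention" {
  enabled = true
}

rule "terraform_documented_variables" {
  enabled = true
}
```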

paultath81 avatar
paultath81

i’m running into the following error when running terragrunt with azure provider. Has anyone come across this? Seems a possible bug i may have encountered? I’m running tf version 0.14.9 and tg version 0.28.20

azurerm_role_definition.default: Creating...

Error: rpc error: code = Unavailable desc = transport is closing....

2021/04/12 19:38:12 [TRACE] dag/walk: upstream of "provider[\"registry.terraform.io/hashicorp/azurerm\"] (close)" errored, so skipping
2021/04/12 19:38:12 [TRACE] dag/walk: upstream of "root" errored, so skipping
2021-04-12T19:38:12.459-0700 [DEBUG] plugin: plugin exited





!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

SECURITY WARNING: the "crash.log" file that was created may contain 
sensitive information that must be redacted before it is safe to share 
on the issue tracker.

[1]: <https://github.com/hashicorp/terraform/issues>

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERRO[0079] Hit multiple errors:
Hit multiple errors:
exit status 1 
paultath81 avatar
paultath81

Figured it out

Brij S avatar
Brij S

Hi all, I’m trying to dynamically obtain the ARNS of aws resource share invitations. I found that the data source for RAM doesn’t really support this. I’m attempting to mimic this example instead and I’ve been able to retrieve the ARNS using the following awscli command below:

aws ram get-resource-share-invitations \
    --query 'resourceShareInvitations[*]|[?contains(resourceShareName,`prefix`)==`true`].resourceShareInvitationArn' \
    --region us-east-1 

However, I’m not sure how I can get it into the correct format that data.external requires. Ideally I’d want the output to be:

{ resourceShareName: resourceShareInvitationArn }
pjaudiomv avatar
pjaudiomv

Pipe to jq and create a map
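data.external requires the program to print a flat JSON object of strings, so the CLI output can be reshaped with jq; a sketch (the jq filter is untested against real API output):

```hcl
data "external" "ram_invites" {
  program = ["bash", "-c", <<-EOT
    aws ram get-resource-share-invitations --region us-east-1 |
      jq '[.resourceShareInvitations[]
            | {(.resourceShareName): .resourceShareInvitationArn}] | add // {}'
  EOT
  ]
}

# then: data.external.ram_invites.result["my-share-name"]
```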

mikesew avatar
mikesew

sidenote: If there’s a terraform github issue you were able to find about this data-source incompatibility, I’ll happily give it an upvote.

loren avatar
loren

i believe it is more that it’s a whole different api, rather than an incompatibility, exactly… the data source aws_ram_resource_share is based on get-resource-share, but the invitation is returned by get-resource-share-invitations

loren avatar
loren

however, the share accepter resource takes the actual resource_share_arn, which IS returned by aws_ram_resource_share, and then the share accepter resource looks up the invite arn for you using that. the share accepter does not accept the invite arn

Brij S avatar
Brij S

yeah that makes sense

loren avatar
loren

i believe you can get the share arn from the invite though, so your approach will work, but you’ll need to adjust the query

Brij S avatar
Brij S

yeah thats what i’m trying to figure out but jmespath gets ugly, fast

loren avatar
loren

OR you can use a multi-provider approach… use the aws_ram_resource_share data source with a provider that has permissions to read the ram share from the owner account

Brij S avatar
Brij S

any ideas?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Wrote a short blog post about drift and Terraform, specifically in the case of AWS IAM: https://indeni.com/blog/identifying-iam-configuration-drift/

Would love to hear more examples from people here about drift issues you care about. I’m hearing more and more about the need to identify drift, and would like to focus is on specific use cases (vs all drift). Thoughts anyone?

Identifying IAM Configuration Drift | Indeni

So, your team, or even possibly your entire organization, has decided to standardize on using infrastructure-as-code to define IAM entities within cloud environments. For example, […]

loren avatar
loren

nice! i like that SCP… now, make sure the trust policy for the IaC role is locked down so only your CI system can assume it… and/or have an explicit deny on all other policies so they can’t AssumeRole the IaC role

loren avatar
loren

that’s also the first good use case i’ve seen of paths on iam entities… interesting…

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Yeah, Rajeev’s quite creative

mrwacky avatar
mrwacky

We used to use paths for IAM roles, but then some AWS (don’t remember which) service barfed when it wasn’t /

loren avatar
loren

sounds about right @mrwacky

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s DMS from memory (or at least, it’s also DMS )

mrwacky avatar
mrwacky

I’ve taken a cursory glance, but can’t find where regex_replace_chars is used in https://github.com/cloudposse/terraform-null-label (or any callers). Am I missing something? Have y’all ever used this?

mrwacky avatar
mrwacky

To clarify: I can’t find nor imagine an instance where I’d want a different regex than the default.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Ah! That’s a different question. I can’t think of an example right now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The reason we have this is to support the use case where a user does not want us to normalize/strip out characters. We (Cloud Posse) don’t have any such use case since we’re strict about how we name things.


2021-04-12

Gareth avatar
Gareth

Good morning, I’m struggling from Monday morning fog. Can somebody please suggest a quick way of converting this

 myconfig = {
    "/ErrorPages"   = "mys3bucket"
    "/client-assets" = "mys3bucket"
}

into

mys3bucket = ["/ErrorPages", "/client-assets"]

I’ve tried merge and using the (…), but I think I’m over-complicating this, as I assume it should simply be a for loop; for the life of me I can’t get the syntax correct.

type of example I feel it should be but isn't working or syntactically correct 
locals {
newlist = tolist([
    for k, v local.myconfig : v.value {
       tolist(v)
    }
  ])
}
loren avatar
loren

try just keys(local.myconfig)?

Gareth avatar
Gareth

Hello Loren, thank you, that’s led me to almost get what I needed. Looks like my initial structure is actually nested. So

}
 myconfig = {
   "/ErrorPages"   = "mys3bucket"
   "/client-assets" = "mys3bucket"
  }
}

Using

flatten([for k,v in local.myconfig :distinct(values(v))])

Gets me to

[
  "mys3bucket",
]

but every time I try and pull this together to something like

{for k,v in local.mys3_configs : flatten(distinct(values(v))) => tolist(keys(v))}

I crash and burn

loren avatar
loren

all i know is that based on what you’ve exposed here and what you said you wanted as a result, keys() is the answer. can’t offer more without seeing the whole data structure

Gareth avatar
Gareth

totally understand, Loren. Sorry, there isn’t really any more to the structure, but let me reframe the data above to be less confusing.

loren avatar
loren
locals {
  mys3bucket = keys(local.myconfig)
}
Gareth avatar
Gareth

okay, I think the confusion is the naming in my example. So let me frame it. Sorry about this

> local.mys3_configs
{
  "api" = {
    "/ErrorPages" = "assets.bucket.dev"
    "/client-assets" = "assets.bucket.dev"
  }
}

keys(local.mys3_configs)
[
  "api",
]

What I need to do is get a distinct list of values aka assets.bucket.dev and use that value to make a new list which will contain all the keys local.mys3_configs.api

This gets me close to what I want.
{for k,v in local.mys3_configs : "randmonword" => { "another_randmon_word" = tolist(keys(v))}}
{
  "randmonword" = {
    "another_randmon_word" = [
      "/ErrorPages",
      "/client-assets",
    ]
  }
}

I’m falling down when I try and make this type of structure flatten(distinct(values(v))) represents the dynamic but distinct list of values covered by values(v) aka assets.bucket.dev tolist(keys(v)) represents the dynamic list of keys I want to add in to a single list. aka [ "/ErrorPages", "/client-assets"]

{for k,v in local.mys3_configs : flatten(distinct(values(v))) => tolist(keys(v))}
{
  "assets.bucket.dev" = [
    "/ErrorPages",
    "/client-assets",
  ]
}

Still as clear as mud

Gareth avatar
Gareth

playing around a little more has got me to:

{for k,v in local.mys3_configs : element(flatten(distinct(values(v))), 0) => tolist(keys(v))}
{
  "assets.bucket.dev" = [
    "/ErrorPages",
    "/client-assets",
  ]
}

But while this gives me what I want, is it correct? or have I just fluked it and a different approach would be safer?
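The element(..., 0) trick works when each nested map points at a single bucket, but it silently drops any additional buckets. A grouping for expression (the `...` modifier) inverts the map without that assumption. A sketch, using the same local.mys3_configs structure from above:

```hcl
locals {
  mys3_configs = {
    api = {
      "/ErrorPages"    = "assets.bucket.dev"
      "/client-assets" = "assets.bucket.dev"
    }
  }

  # flatten to (bucket, path) pairs, then group paths by bucket;
  # the "..." grouping collects all paths that share a bucket key
  bucket_paths = {
    for pair in flatten([
      for name, cfg in local.mys3_configs : [
        for path, bucket in cfg : { bucket = bucket, path = path }
      ]
    ]) : pair.bucket => pair.path...
  }
  # => { "assets.bucket.dev" = ["/ErrorPages", "/client-assets"] }
}
```

Unlike element(..., 0), this still produces a correct map if two paths ever point at different buckets.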

Matt Gowie avatar
Matt Gowie

Hey we’re looking for a maintainer of our popular beanstalk module — if you use Beanstalk and this module and would be interested in being a contributor then reach out and let us know!

https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment#searching-for-maintainer

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

sheldonh avatar
sheldonh

I’m used to the Terraform remote backend. I’m using dynamo + s3 and hit lots of lock issues (I’m the only one running it); it seems to get stuck easily at times. Ideally I’d like my backend to configure its own state backend on initialization, like Terraform Cloud makes easy — so either TF Cloud, Env0, or Spacelift depending on what I evaluate, just for backend state simplification, not for runners at this time.

Am I using this stuff wrong, and it’s normally easy to initialize and go, or would I be better served by a remote backend that creates itself on initialization to simplify that part?

Brij S avatar
Brij S

does anyone have a way to obtain AWS RAM share ARNs using terraform and not the awscli?

Brij S avatar
Brij S

ooof, I tried to search for this and wasn’t able to find it, that should work

1
Brij S avatar
Brij S

hmm, another AWS account has sent me a RAM invite and I can see it pending, however, the following isn’t working -

data "aws_ram_resource_share" "example" {
  name           = "aws-test-VPN"
  resource_owner = "OTHER-ACCOUNTS"
}

I tried using SELF as well, but I get the following error:

Error: No matching resource found: %!s(<nil>)

  on main.tf line 13, in data "aws_ram_resource_share" "example":
  13: data "aws_ram_resource_share" "example" {
pjaudiomv avatar
pjaudiomv

it may only work after you accept the request

pjaudiomv avatar
pjaudiomv

but not sure

pjaudiomv avatar
pjaudiomv

does the cli work

Brij S avatar
Brij S

yeah it seems like its only for after accepting

pjaudiomv avatar
pjaudiomv
aws ram get-resource-shares --name "aws-test-VPN" --resource-owner OTHER-ACCOUNTS
Brij S avatar
Brij S

let me try

Brij S avatar
Brij S

nope

{
    "resourceShares": []
}
pjaudiomv avatar
pjaudiomv

gotcha

loren avatar
loren

There are a ton of bugs in the terraform ram share accepter, you’re almost better off dealing with it manually

1

2021-04-11

Jurgen avatar
Jurgen

hey, random question:

https://www.terraform.io/docs/language/functions/fileset.html

we are using the above function and then for_each over a bunch of files to create some resources. The problem is, it’s sequential and a bit slow. Any idea on how to make it async?

fileset - Functions - Configuration Language - Terraform by HashiCorp

The fileset function enumerates a set of regular file names given a pattern.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. create a sub-module from all the resources

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Use for_each on the submodule and give it the set of files
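A minimal sketch of those two steps (the module path and variable name here are made up for illustration; for_each on a module block requires Terraform 0.13+):

```hcl
# hypothetical wrapper module at ./modules/per-file that bundles
# all the resources previously created per file in the root module
module "per_file" {
  source   = "./modules/per-file"
  for_each = fileset(path.module, "files/*.json")

  filename = each.value
}
```

Whether this actually speeds things up depends on the provider and the resource dependency graph, as the thread below suggests.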
Jurgen avatar
Jurgen

right, so you are saying a module is async, interesting.

Jurgen avatar
Jurgen

@Andriy Knysh (Cloud Posse) This looks to have decreased the run time by 30 to 40 percent… Thanks for the pro tip. Still testing but the first result looks good.

1
Jurgen avatar
Jurgen

ok, it’s really hard to tell and I think I am reading it wrong

Jurgen avatar
Jurgen

but maybe a 1 minute reduction, 10 mins to 9

Jurgen avatar
Jurgen

oh, well. Was worth a try

2021-04-10

marc slayton avatar
marc slayton

Hey all – I ran into a couple module bugs I’d really like to submit a PR for. To debug, I’m looking for a way to print out the objects being passed from one module to another during a ‘terraform plan’. Not quite sure how to manage this from within atmos/Geodesic. The terraform console seems a bit awkward in this context as well. Pointers on how to delve into debugging would be much appreciated!

Alex Jurkiewicz avatar
Alex Jurkiewicz

what’s awkward about the terraform console?

Alex Jurkiewicz avatar
Alex Jurkiewicz

the other thing is you can read your statefile

marc slayton avatar
marc slayton

I did manage to get this going, but not from within atmos. In the end, I rendered the objects I needed using the ‘terraform output’ command. Looks like my multi-account build is working now. I’ll submit a PR with the changes to terraform-aws-components, and also some notes that may help others.

1
marc slayton avatar
marc slayton

Thanks for the advice, Alex! Cheers –

Mohammed Yahya avatar
Mohammed Yahya
tfsec - Visual Studio Marketplace

Extension for Visual Studio Code - tfsec integration for VSCode

terraform1
1

2021-04-09

marc slayton avatar
marc slayton

Hey all – I’m looking into initializing remote tfstates in conjunction with atmos. To initialize a remote tfstate, I need to execute a command equivalent to: “terraform apply -auto-approve” – only from within the atmos wrapper. It’s not entirely clear how to construct this command. I’ve tried a few intuitive combinations using the docs, but they do not seem to work as expected. Does anyone have a quick example of how to run atmos with ‘terraform apply -auto-approve’ and then ‘init -force-copy’ as one-time commands to initialize a remote tfstate?

Matt Gowie avatar
Matt Gowie

Hey Marc — does atmos terraform deploy not do what you’re looking for?

marc slayton avatar
marc slayton

Yes, this did the trick! Sorry for the newbie question. Must have missed it in the docs. Cheers –

Matt Gowie avatar
Matt Gowie

Well to your credit, initializing remote state via tfstate-backend isn’t in the docs yet, and that will be coming very shortly (the PR should be up in the next couple of days).

Matt Gowie avatar
Matt Gowie

But glad that did the trick. Let us know if you have any other questions.

marc slayton avatar
marc slayton

Thanks, Matt! – So far, so good. :0)

Bart Coddens avatar
Bart Coddens

Hey all, I would like to use arithmetic based on the value of a variable

Bart Coddens avatar
Bart Coddens

now I have this:

Bart Coddens avatar
Bart Coddens
resource "aws_ebs_volume" "backup" {
  count             = var.tier != "PROD" ? 0 : 1
  availability_zone = var.aws_az
  size              = "${var.homesize} * 1.5"
  type              = "standard"
}
Bart Coddens avatar
Bart Coddens

but this does not seem to work

Bart Coddens avatar
Bart Coddens

size = var.homesize * 2

1
Bart Coddens avatar
Bart Coddens

this does it
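For reference, the original snippet failed because "${var.homesize} * 1.5" is a string containing a literal " * 1.5", not an arithmetic expression. size expects a number, so do the math outside the quotes, and wrap it in ceil() if the result can be fractional, since EBS sizes are whole GiB. A sketch:

```hcl
resource "aws_ebs_volume" "backup" {
  count             = var.tier != "PROD" ? 0 : 1
  availability_zone = var.aws_az
  size              = ceil(var.homesize * 1.5) # a number expression, not an interpolated string
  type              = "standard"
}
```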

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Any users of Infracost here? Can you share feedback on the tool?

marcinw avatar
marcinw

We’ve been looking at integrating it with Spacelift. It looks pretty decent and the CLI does not seem to leak anything to the API server. The obvious limitation is that usage-based cost estimation is only as good as the input you provide, but flat fees are generally well recognized, at least for AWS, and broken into components.

marcinw avatar
marcinw

I’d suggest using it on static artifacts (state and plan) rather than have the wrapper run Terraform for you, because it feels messy and likely duplicates your work.

marcinw avatar
marcinw

That said, I’ve mostly looked at it from the integration perspective - inputs, outputs, security and API availability. The CLI feels a bit awkward and inconsistent, especially if you’re outputting machine-readable JSON, but it’s generally not a blocker - you should be able to do what you want after some trial and error.

marcinw avatar
marcinw

The team behind it is super nice and very responsive, too.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Thank you Marcin, that is very helpful. Would love to also hear of users using it on a regular basis. We are a vendor, like you, (not competing) and are thinking of integrating Infracost.

marcinw avatar
marcinw

Sure thing. Always happy to talk shop and compare notes ;)

Mohammed Yahya avatar
Mohammed Yahya

@Erik Osterman (Cloud Posse) gonna be very helpful https://www.terraform.io/docs/language/functions/defaults.html

defaults - Functions - Configuration Language - Terraform by HashiCorp

The defaults function can fill in default values in place of null values.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Quite possibly

defaults - Functions - Configuration Language - Terraform by HashiCorp

The defaults function can fill in default values in place of null values.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One thing we have encountered is that strict typing can make two otherwise compatible modules totally incompatible due to types. We encountered this with our null label module and have subsequently changed our context object to any.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Still this default function is welcome

Mohammed Yahya avatar
Mohammed Yahya

I’ve been thinking of reducing the number of variables in a module using this, via one variable object with optional attributes; gonna test this and see the pros and cons

loren avatar
loren

I found defaults() hard to use and reason about, when I tried the experiment in 0.14… now, the optional() marker for complex variable objects, that worked perfectly and was very easy

1
2
loren avatar
loren

Maybe they’ve fixed defaults though, it was a few months ago and very much an experiment
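For anyone curious, the 0.14/0.15 experiment loren mentions looks roughly like this (it had to be explicitly opted into, and both optional() and defaults() were still experimental at the time, so the syntax may change):

```hcl
terraform {
  # required in 0.14/0.15 to use optional() object attributes
  experiments = [module_variable_optional_attrs]
}

variable "settings" {
  type = object({
    name = string
    size = optional(number) # attributes not set by the caller come through as null
  })
}

locals {
  # defaults() fills in null attributes with the given fallback values
  settings = defaults(var.settings, { size = 20 })
}
```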

2021-04-08

marc slayton avatar
marc slayton

Hey all – I’m putting together my first atmos build using terraform. I’ve just added a ‘vpc’ module, one of two I found on the cloudposse site. The vpc builds with the new config, but it’s giving me WARNING errors like the following:

Alex Jurkiewicz avatar
Alex Jurkiewicz

This looks like warning related to Atmos

Alex Jurkiewicz avatar
Alex Jurkiewicz

Are you talking about this Atmos? https://github.com/simplygenius/atmos

It doesn’t look like the authors are in this Slack, so you are possibly asking in the wrong place for help?

simplygenius/atmos

Breathe easier with terraform. Cloud system architectures made easy - simplygenius/atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow, had never seen that

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/atmos

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is the one

1
marc slayton avatar
marc slayton
The root module does not declare a variable named "vpc_flow_logs_enabled" but
a value was found in file "uw2-dev-vpc.terraform.tfvars.json". To use this
value, add a "variable" block to the configuration.
marc slayton avatar
marc slayton

I’m curious whether this is a known issue, or perhaps I’m using the wrong vpc module? I’ve declared all the above variables in the stacks/globals.yaml config. The warning seems to come from terraform itself, and might be related to newer versions of terraform 0.14. Is this a known issue?

Alex Jurkiewicz avatar
Alex Jurkiewicz

The warning is saying that you are providing a variable which your Terraform configuration isn’t using. For example, terraform plan -var foo=bar in a Terraform configuration with no variable "foo" { ... } block.

This isn’t related to the VPC module, but to your root (top-level) Terraform configuration

2
marc slayton avatar
marc slayton

Yep, I was misinterpreting the warning. Thanks for setting me straight!

Zach avatar

For what it’s worth, Hashicorp said they are removing that warning and they’ll allow you to have whatever you want declared in tfvars now.

Zach avatar

Should for sure be in v0.15, I can’t recall if they added it to the recent 0.14 patches

Marcin Brański avatar
Marcin Brański

0.14.9 still shows that. I just upgraded to 0.14.10 today, so not sure about that one, but fmt has definitely changed

Zach avatar

oh what changed in fmt, I missed that note

François Davier avatar
François Davier

Hi all

François Davier avatar
François Davier

trying to use the Cloud Posse AWS backup module; it works well under Terraform Enterprise, but when I re-run plan/apply, I’ve got this issue:

François Davier avatar
François Davier
Error: Provider produced inconsistent final plan

When expanding the plan for module.backup.aws_backup_plan.default[0] to
include new values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for .rule:
planned set element
cty.ObjectVal(map[string]cty.Value{"completion_window":cty.NumberIntVal(240),
"copy_action":cty.SetVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"destination_vault_arn":cty.UnknownVal(cty.String),
"lifecycle":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"cold_storage_after":cty.UnknownVal(cty.Number),
"delete_after":cty.UnknownVal(cty.Number)})})})}),
"lifecycle":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"cold_storage_after":cty.NullVal(cty.Number),
"delete_after":cty.NumberIntVal(2)})}),
"recovery_point_tags":cty.MapVal(map[string]cty.Value{"Name":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda_kpi-tenantfda"),
"Namespace":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda")}),
"rule_name":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda_kpi-tenantfda"),
"schedule":cty.StringVal("cron(0 3 * * ? *)"),
"start_window":cty.NumberIntVal(60),
"target_vault_name":cty.StringVal("oa-uso-fda-plt1-env1-tenantfda_kpi-tenantfda")})
does not correlate with any element in actual.

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
François Davier avatar
François Davier

this is how i use module:

François Davier avatar
François Davier

# Cloudposse backup module
module "backup-idp-env" {
  source = "tfe.xxx.xxx.com/techsol-devops/backup/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version                  = "0.6.1"
  namespace                = var.workspace_name
  name                     = var.rds_identifier-idp
  delimiter                = "_"
  backup_resources         = [module.rds_dbserver-odp.db_instance_arn]
  schedule                 = "cron(0 3 * * ? *)"
  start_window             = 60
  completion_window        = 240
  delete_after             = 2
  destination_vault_arn    = data.aws_backup_vault.dr_idp.arn
  copy_action_delete_after = 7
}

François Davier avatar
François Davier

the backup vault is created externally by a local-exec with some AWS CLI commands, so the vault isn’t impacted when we want to destroy the infra, because it’s not in the state

François Davier avatar
François Davier

has anyone had this issue already, please? thank you

Steve Wade avatar
Steve Wade

is there an easy way to move terraform state to a different S3 key?

Steve Wade avatar
Steve Wade

i want to move from "us-east-1/rules-engine-prd/env-01/terraform.tfstate" to "us-east-1/rules-engine-prd/XXX/terraform.tfstate"

Zach avatar

In the original config, do an init. then change the config and hit init again

Zach avatar

terraform will prompt you that it detected the backend config has changed and ask if it should copy to the new location

Zach avatar

it does not remove the old location!

Steve Wade avatar
Steve Wade

i can then safely remove the old one manually?

Zach avatar

yup
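Concretely, the flow Zach describes is: init against the old config, edit only the key, then run terraform init again and accept the prompt to migrate state. A sketch against Steve’s paths (bucket and table names here are illustrative; DynamoDB only handles locking, the state object itself lives at the S3 key):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-state-bucket"                                  # unchanged
    key            = "us-east-1/rules-engine-prd/XXX/terraform.tfstate" # was .../env-01/...
    region         = "us-east-1"
    dynamodb_table = "my-lock-table"                                    # unchanged
  }
}
```

After `terraform init` copies the state, the old object remains in S3 until you delete it yourself.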

paultath81 avatar
paultath81

is it possible to use AWS S3 as backend state storage for Azure, although I know there’s one for Azure (blob storage)?

mikesew avatar
mikesew

Hm, I don’t see why not; the backend is supposed to be separate from the rest of the terraform config.

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
    ## you'd have to supply your AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY either here or during terraform init
  }
}

then you’d provision your azure resources in your standard main.tf… I’m literally just pulling this from the provider docs: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}
}

# Create a resource group
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}
mikesew avatar
mikesew

so your ‘backend’ could be anything.. aws, terraform cloud, terraform enterprise, consul

paultath81 avatar
paultath81

ok - i was thinking due to using azurerm provider you’d be stuck to using azure blob for backend store and not aws s3 https://www.terraform.io/docs/language/settings/backends/azurerm.html

Backend Type: azurerm - Terraform by HashiCorp

Terraform can store state remotely in Azure Blob Storage.

paultath81 avatar
paultath81

pls…

Joe Presley avatar
Joe Presley

I’m reviewing some code and am curious about a choice made in it. Would there be a reason to use node_pools = zipmap(local.node_pool_names, tolist(toset(var.node_pools))) instead of node_pools = zipmap(local.node_pool_names, var.node_pools) ? var.node_pools type is list(map(string)). I’m basically curious why someone would convert the list of maps to a set and then convert it back to a list.

Pierre-Yves avatar
Pierre-Yves

converting a list to a set removes duplicates; for_each takes a set or a map as input, so I don’t see a reason to convert the set back to a list.

Joe Presley avatar
Joe Presley

Converting back to a list might be necessary for zipmap.

Joe Presley avatar
Joe Presley

The local node_pools would be used in a for_each block on creating node pools for a GKE cluster.

Joe Presley avatar
Joe Presley

From my experiments it looks like the way the code is written avoids destructive modifications if the order of the var.node_pools list changes. I don’t understand why that happens though. Any thoughts on why it works?

Pierre-Yves avatar
Pierre-Yves

zipmap builds a map out of your list; resources are then indexed by map key rather than by list position.

Joe Presley avatar
Joe Presley

Thanks for the explanation. I didn’t understand why it worked.
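To make the behavior concrete: with count, resource addresses are positional, so reordering var.node_pools shifts every index and forces destroy/create; zipmap + for_each keys them by name instead. A sketch (names here are illustrative, and this assumes local.node_pool_names stays aligned with var.node_pools):

```hcl
locals {
  node_pool_names = ["general", "spot"]
  # pair each name with its pool config by position, once
  node_pools = zipmap(local.node_pool_names, var.node_pools)
}

# for_each = local.node_pools yields addresses like
#   google_container_node_pool.this["general"]
# instead of google_container_node_pool.this[0], so a pool keeps
# its identity across plan runs; changing the config mapped to a
# name updates that pool in place rather than recreating by index.
```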

1
sheldonh avatar
sheldonh

Any free tier provider like spacelift or env0 that now covers pr integration with azure devops? Spacelift didn’t seem to have that yet so just checking if any recent updates. Right now using dynamic backend config similar to atmos approach with a go based app I’ve been fiddling with.

tim.davis.instinct avatar
tim.davis.instinct

DevOps Advocate with env0 here. We are currently working on the Azure DevOps integration, including CD and PR Plans. You can use ADO now, but it is just a simple repo hook. We’re just not quite there with the CD / PR plan hooks just yet. I’ve reached out to our CTO to try and get a code-commit date for you.

tim.davis.instinct avatar
tim.davis.instinct

If you want to DM me your contact info, or just let me know DM’s here are fine, I can keep you in the loop as we make progress.

sheldonh avatar
sheldonh

I built some PR comment additions onto the plan, so I’m partway there; just figuring better lifecycle handling in the tool itself would be good. No rush, just exploring options as I haven’t caught up on recent updates. Thanks for keeping me posted.

marcinw avatar
marcinw

To clarify, do you mean Azure DevOps as a VCS provider?

tim.davis.instinct avatar
tim.davis.instinct

I believe they were. We can support it today just as a basic VCS provider. It’s just the extra webhook stuff like PR Plan comments and CD that we don’t have yet. Do y’all have that yet for ADO @marcinw?

marcinw avatar
marcinw

Nope.

tim.davis.instinct avatar
tim.davis.instinct

Sounds like we both have a solid feature request on the board

sheldonh avatar
sheldonh

Separate discussion… I get that terraform doesn’t fit into the typical CI/CD workflow very well, at least out of the box.

To be fair though if these tools such as terraform cloud, spacelift, env0 are in essence running the same CLI tool that you can run in your own CI CD job that preserves the plan artifact, what do you feel is the actual substantial difference for the core part of terraform plans?

Don’t get me wrong, I love working with stuff like terraform cloud, but I guess I’m still struggling to see the value in that if you write a pipeline that handles plan artifacts

tim.davis.instinct avatar
tim.davis.instinct

At env0, it’s not about the deployment job and state to us. It’s about the entire lifecycle of the environment from deploy to destroy. We get compared to a CI/CD tool a lot, but we don’t do CI. We can do CD… But it’s really the day 2+ operational stuff that we do that makes the value come out. Setting TTL’s, environment scheduling, RBAC, Policy enforcement, cost monitoring. All of us TACoS really focus on the whole IaC lifecycle, not just running a pipeline and shoving the state somewhere.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here are some issues to deal with:

  1. planfile is sensitive (secrets likely stored within)
  2. order of applies matter, should always apply oldest plan first
  3. one-cancels-all: once the planfile for a given root module is applied, any planfiles prepared before that need to be discarded - so if you’re treating planfiles as artifacts, it’s complicated
  4. generally want to approve before apply and many systems don’t do this well (e.g. github actions) - and I’m not talking about atlantis style chatops, but a bonafide approval mechanism
  5. policy enforcement - where you want a policy that when one project changes, you want another project to trigger
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
06:07:55 AM
1

2021-04-07

Milosb avatar
Milosb

Hi guys, I am trying to create AWS routes dynamically for each route table and each peering connection that I specify. I’ve done it eventually, but I have a feeling there could/should be a smoother way to do it. Generally I had quite a headache with map/list manipulation. Is there any better approach to achieve something like this?

locals {
  route_table_ids = ["rtb-1111111111111","rtb-2222222222222", "rtb-333333333333333"]
  cidr_peerings = [
    {
      "cidr_block" = "10.180.0.0/21"
      "vpc_peering_id" = "pcx-1111111111111111111"
    },
    {
      "cidr_block" = "10.184.0.0/21"
      "vpc_peering_id" = "pcx-2222222222222222"
    },
  ]

  routes = {
      for i in setproduct(local.route_table_ids, local.cidr_peerings):
      "${i[0]}_${i[1].vpc_peering_id}" => merge(i[1], {route_table_id: i[0]})
    }

}
resource "aws_route" "this" {
  for_each = local.routes

  route_table_id            = each.value.route_table_id
  destination_cidr_block    = each.value.cidr_block
  vpc_peering_connection_id = each.value.vpc_peering_id
}
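The setproduct approach above is fine; a nested for expression with merge(...) is a common equivalent that some find easier to read. This sketch produces the same keys and values as local.routes above:

```hcl
locals {
  # one inner map per route table, merged into a single map for for_each
  routes_alt = merge([
    for rt in local.route_table_ids : {
      for peering in local.cidr_peerings :
      "${rt}_${peering.vpc_peering_id}" => {
        route_table_id = rt
        cidr_block     = peering.cidr_block
        vpc_peering_id = peering.vpc_peering_id
      }
    }
  ]...)
}
```

The `...` spreads the list of maps into merge()’s arguments; both versions yield keys like "rtb-1111111111111_pcx-1111111111111111111".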
Bart Coddens avatar
Bart Coddens

Question for a conditional create. I query the instance type like this:

Bart Coddens avatar
Bart Coddens
data "aws_instance" "instancetoapplyto" {
  filter {
    name   = "tag:Name"
    values = [var.instancename]
  }
}
Bart Coddens avatar
Bart Coddens

this gives back: data.aws_instance.instancetoapplyto.instance_type

Bart Coddens avatar
Bart Coddens

now I would like to use this in a conditional create context, if the value equals t3.* then set count to 1

1
Steve Wade avatar
Steve Wade

what is the difference between using

resource "aws_autoscaling_attachment" "asg" {
  count                  = length(var.load_balancers)
  autoscaling_group_name = aws_autoscaling_group.asg.name
  elb                    = element(var.load_balancers, count.index)
}

and just load_balancers = [] directly in the ASG config?

loren avatar
loren
  1. order of operations - i.e. do you have all the information needed when creating the ASG to also attach the LB at that time?
  2. separation of responsibility - i.e. are there different teams/configs responsible for or maintaining the ASG vs the LB?
Steve Wade avatar
Steve Wade

yes i can

Steve Wade avatar
Steve Wade

i have this weird issue at present

Steve Wade avatar
Steve Wade

whereby on first tf execution the attachment works fine

Steve Wade avatar
Steve Wade

then if i re-execute tf again (with no changes) it wants to remove it

Steve Wade avatar
Steve Wade
02:58:31 PM
Steve Wade avatar
Steve Wade

i can’t work out why :point_up: is happening and i have a suspicion it’s the aws_autoscaling_attachment

loren avatar
loren

if so, change it to null

loren avatar
loren

it’s like security group inline rules vs rule attachments. only one should be used for a given SG

loren avatar
loren

i may be misunderstanding, because i don’t see an aws_autoscaling_group resource in the gist

loren avatar
loren

i see the attachment, not the ASG definition… but going back to the original question, use only one option for attaching the LB to ASG… either the attachment resource OR the ASG resource

loren avatar
loren

if you pass an empty list to an attribute, that typically implies exclusive control of the attribute… so load_balancers = [] means remove all LBs. to get the default behavior of ignoring the attribute, use load_balancers = null
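If the ASG resource has to mention load balancers at all, the usual companion to the attachment resource is an ignore_changes lifecycle block, so the two don’t fight over the same attribute on every plan. A sketch (most ASG arguments elided):

```hcl
resource "aws_autoscaling_group" "asg" {
  # ... name, min_size, max_size, launch template, etc. ...

  lifecycle {
    # attachments are managed by aws_autoscaling_attachment, so tell
    # the ASG resource to stop reconciling (i.e. removing) them here
    ignore_changes = [load_balancers, target_group_arns]
  }
}
```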

Steve Wade avatar
Steve Wade

interesting

Steve Wade avatar
Steve Wade

although the issue seems to be when i am specifying the id

Steve Wade avatar
Steve Wade

when scaling up the ingress node count in the ASG its removing the load balancer from the ASG configuration which makes no sense to me as nothing else has changed.

Bart Coddens avatar
Bart Coddens

The solution is:

Bart Coddens avatar
Bart Coddens

count = format("%.1s", data.aws_instance.instancetoapplyto.instance_type) == "t" ? 1 : 0
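An equivalent, arguably more explicit spelling uses can(regex(...)), so the intent ("any t-family type") is visible at a glance:

```hcl
# same effect as comparing the first character to "t":
# create only for t2/t3/t3a-style instance types
count = can(regex("^t", data.aws_instance.instancetoapplyto.instance_type)) ? 1 : 0
```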

marc slayton avatar
marc slayton

Hey all – I have a general question about using cloudposse components and modules. I’ve been through the tutorials on atmos and Geodesic and both make good sense. I feel like I’m still missing something, however – specifically, a step-by-step for building a master account module, or just a ‘my first stack’ tutorial. Wondering if such a thing might exist? Or am I missing something very obvious? Cheers –

marc slayton avatar
marc slayton

Capturing some first impressions as I ramp in hope of pulling together a doc for future rampers.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

unfortunately, no such document exists yet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’re working on some tutorials - i think the next one will be on creating a vpc and EKS cluster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TBH, we probably won’t tackle multi-account tutorial for a while since it’s a master class in terraform and aws

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

cc: @Matt Gowie

marc slayton avatar
marc slayton

Maybe I should start with something smaller, like a doc on creating a small internal stack rather than a full blown master account.

marc slayton avatar
marc slayton

Is there a baseline or example template I should start from when building my first stack?

marc slayton avatar
marc slayton

I noticed the terraform-yaml-stack-config repo, which looks promising. I’ll start there.

Matt Gowie avatar
Matt Gowie

Hey Marc — This is definitely on our roadmap but the full-blown “Here is how you provision many accounts + all the infra for those accounts” with stacks is still a bit of a ways out. That’s what stacks are built for, but it’s a TON of information to convey honestly. We’re trying to do it piece by piece and we’ll eventually work up to that topic, but it’s advanced and similar to what Erik mentioned: it’s a masterclass unto itself.

That said — We are putting together more AWS + Stack example tutorials soon and they should be launching within the coming weeks.

There is no example stack template that is a perfect example, but you can look here for a simpler example: https://github.com/cloudposse/terraform-yaml-stack-config#examples

cloudposse/terraform-yaml-stack-config

Terraform module that loads an opinionated "stack" configuration from local or remote YAML sources. It supports deep-merged variables, settings, ENV variables, backend config, and remote …

marc slayton avatar
marc slayton

Thanks Matt! That’s perfect for me. Cheers –

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Marcin Brański might have something to share as well, having just gone through it

Marcin Brański avatar
Marcin Brański

I started from the ground up with atmos for my client recently (less than 2 weeks ago). I don’t think I’m experienced enough to share tips, but if I have time (which currently I don’t) and the client agrees, then I can share snippets or the whole configuration that we did.

I, personally, used tutorial, example from atmos, read variant and it kinda clicked.

marc slayton avatar
marc slayton

Thanks Marcin – understand about your time. Happy to curate anything you can share. I enjoy that kind of work.

Fernando avatar
Fernando

@Marcin Brański it would definitely help, I’m at the same point as @marc but it hasn’t clicked for me yet. Maybe it’s because I’ve been using TF, YAML and some python wrappers in a completely different way, but although I followed the tutorials I can’t figure out how to, for example, create just a pair of accounts (i.e. master + prod) and a VPC/EKS cluster on the prod one

marc slayton avatar
marc slayton

Got a VPC running with a master account last night using atmos and geodesic, so I should be good to go. Started a supplemental “tutorial.md” which I’ll submit as a PR to the terraform-yaml-stack-config. Thanks all, for being so welcoming! Cheers –

1
Marcin Brański avatar
Marcin Brański

Yeah! Good for you! Have you already thought of CICD?

marc slayton avatar
marc slayton

I’ve been considering a few options on that side. 1.) Terraform cloud. 2.) JenkinsX.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-spacelift-cloud-infrastructure-automation

Contribute to cloudposse/terraform-spacelift-cloud-infrastructure-automation development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

@ I believe a number of us here (and the wider community is coming to the same conclusion) would suggest against Terraform Cloud. Their pricing model past their “Team” tier is highway robbery and there are better solutions out there. Spacelift, as Erik pointed out, is one of them.

sheldonh avatar
sheldonh

I’m jumping in to say I’m a little confused with just trying to use the yaml stack config. I don’t need the full atmos stuff, just want to use the yaml configuration for setting the backend and components, and I’m not having much luck.

Is there any other example of using the terraform-yaml-stack-config repo, and of how the command with variant or the cli is actually processed to set the backend?

I have a go build script doing some of this but I’d love to avoid rework on this if I’m just misunderstanding how to use the yaml stack config option.

sheldonh avatar
sheldonh

Basically what I have right now is each folder loading the designated stack, and the resulting output for each “root” module component looks like this

module "vpc" {
  source                                          = "cloudposse/vpc/aws"
  version                                         = "0.18.1"
  enable_default_security_group_with_custom_rules = module.yaml_config.map_configs.components.terraform.vpc.vars.enable_default_security_group_with_custom_rules
  cidr_block                                      = module.yaml_config.map_configs.components.terraform.vpc.vars.cidr_block
  instance_tenancy                                = module.yaml_config.map_configs.components.terraform.vpc.vars.instance_tenancy
}

This doesn’t seem to match the more simple output I’ve observed on the other projects.
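One way to shorten those deep references, sketched here under the assumption that the yaml_config module output has the shape shown above, is to hoist the component’s vars into a local:

```hcl
locals {
  # one deep lookup, reused below
  vpc_vars = module.yaml_config.map_configs.components.terraform.vpc.vars
}

module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "0.18.1"

  enable_default_security_group_with_custom_rules = local.vpc_vars.enable_default_security_group_with_custom_rules
  cidr_block                                      = local.vpc_vars.cidr_block
  instance_tenancy                                = local.vpc_vars.instance_tenancy
}
```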

marc slayton avatar
marc slayton

Can I ask a question here about Sandals?

Matt Gowie avatar
Matt Gowie

@ Nothing stopping you from asking a question! I would suggest starting a new thread if it’s about a new topic.

marc slayton avatar
marc slayton

LOL – thanks for the reply. I was actually just figuring out how to use the Foqal plugin. It looked pretty interesting. Seems like it might be useful in capturing general trends about the types of questions being asked, so you can prioritize certain types of documentation.

Matt Gowie avatar
Matt Gowie

Ah I don’t know much about that plugin — I haven’t found the responses useful, but if you’re finding it useful then more power to ya!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@vlad knows more about foqal

Petro Gorobchenko avatar
Petro Gorobchenko

Hello, I have a question on source = "cloudposse/rds/aws" => version = "0.35.1" . I’ve been getting the error DBName must begin with a letter and contain only alphanumeric characters , although my database_name contains only alphanumeric characters and hyphens and is less than 64 characters long. I haven’t seen any support on this yet, or wasn’t able to find it. Any help/info is much appreciated.

jose.amengual avatar
jose.amengual

that error is from Terraform, not the module

jose.amengual avatar
jose.amengual
aws_db_instance DBName must begin with a letter and contain only alphanumeric characters. · Issue #1137 · hashicorp/terraform-provider-aws

Hi, Terraform Version Terraform v0.9.11 Affected Resource(s) aws_db_instance Terraform Configuration Files data &quot;aws_availability_zones&quot; &quot;available&quot; {} resource &quot;aws_vpc&qu…

Petro Gorobchenko avatar
Petro Gorobchenko

hey @jose.amengual Thanks for responding. I was checking the module and it seems that under aws_db_instance the default specifies the identifier as module.this.id .. Sorry, I could be heading down the wrong path…

jose.amengual avatar
jose.amengual

module.this.id is the label module we use to name resources, so you might want to check what you passed for name, attributes, stage, environment, etc.

Petro Gorobchenko avatar
Petro Gorobchenko

Let’s say I have verified the data mapping between the module and my values. Would you be able to point me to any other potential mishaps? Or is there a gap between what the module expects and my values?

jose.amengual avatar
jose.amengual

I will have to look at your plan

jose.amengual avatar
jose.amengual

but we use this module extensively and we have no issues

Petro Gorobchenko avatar
Petro Gorobchenko

That’s my thought also, since it’s not a widely reported issue. Must be something within my configuration.

Petro Gorobchenko avatar
Petro Gorobchenko

seems like hyphens may have been the cause. I removed them from the name, and it’s now throwing a different issue, but it seems to be getting past the DBName check.

1
1
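For reference, the RDS constraint can be caught before the provider rejects it with a variable validation block (Terraform 0.13+); a sketch:

```hcl
variable "database_name" {
  type        = string
  description = "Initial database name passed to the RDS module"

  # mirrors the RDS constraint: start with a letter, alphanumerics only
  validation {
    condition     = can(regex("^[A-Za-z][A-Za-z0-9]*$", var.database_name))
    error_message = "DBName must begin with a letter and contain only alphanumeric characters."
  }
}
```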
Release notes from terraform avatar
Release notes from terraform
06:03:30 PM

v0.15.0-rc2 0.15.0 (Unreleased) BUG FIXES: core: Fix crash when rendering JSON plans containing iterable unknown values (#28253)

Mohammed Yahya avatar
Mohammed Yahya

v0.14.10 is out also, strange sync with wed meeting

Release notes from terraform avatar
Release notes from terraform
06:33:31 PM

v0.14.10 0.14.10 (April 07, 2021) BUG FIXES: cli: Only rewrite provider locks file if its contents has changed. (#28230)

Mohammed Yahya avatar
Mohammed Yahya
Terraform

Welcome to DeepSource Documentation

2021-04-06

Anton Sh. avatar
Anton Sh.

Hello :wave: I can’t configure log_configuration in terraform-aws-ecs-container-definition . I need to configure the AWS CloudWatch log driver. Could someone direct me to an example please?

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

Steve Wade avatar
Steve Wade

is there a way to override https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack/blob/master/main.tf#L13 so that my lambdas aren’t called default every time?

Marcin Brański avatar
Marcin Brański

Yep, pass context or part of context that will name your lambda
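A hedged sketch of what Marcin means. The null-label inputs below (namespace/stage/name) feed module.this.id, which is what names the lambda; the values and the slack_* input names are assumptions based on the upstream notify-slack module, so check the repo’s variables.tf:

```hcl
module "notify_slack" {
  source = "cloudposse/sns-lambda-notify-slack/aws"
  # version pin omitted; check the registry for the current release

  # null-label inputs: leaving all of these unset is what produces
  # the literal fallback name "default"
  namespace = "acme"       # hypothetical values
  stage     = "prod"
  name      = "rds-alerts"

  # input names assumed from the upstream notify-slack module
  slack_webhook_url = var.slack_webhook_url
  slack_channel     = "#alerts"
  slack_username    = "terraform"
}
```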

Mike Martin avatar
Mike Martin

Not directly a Terraform question, but does anyone know how to get in touch with Hashicorp sales? Specifically Terraform Cloud. We’ve reached out to support who routed us to the sales email, but haven’t heard back yet. Is anyone here from hashicorp or know any of the sales folks there? Thanks in advance!

Matan Shavit avatar
Matan Shavit

Strange, you’d think part of the business model is selling to potential customers

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can DM me

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just give me your email

Mike Martin avatar
Mike Martin

Sending over now. Thanks!

barak avatar
barak

does someone know a good tf module for reverse proxy?

Steve Wade avatar
Steve Wade

i have a weird issue where manually firing messages into SNS fires my lambda to slack perfectly

however, rds event subscriptions do not seem to be adding messages to SNS

I have created a gist https://gist.github.com/swade1987/c80cef29079255f052099ca232c0d96c

MattyB avatar
MattyB

Apologies for possibly not understanding. You’re saying that when you manually fire off an event into SNS, it successfully fires a lambda that sends a message to your slack DM/channel/whatever?

Steve Wade avatar
Steve Wade

Yes but when I make any changes to RDS nothing happens. The gist above is configured to send events to SNS but it doesn’t seem to be doing it

MattyB avatar
MattyB

I’ll try to get around to this sometime tonight if nobody else can pitch in and you’re still having issues. On baby duty right now

Steve Wade avatar
Steve Wade

i have manually rebooted the RDS instance loads of times but nothing fires

Steve Wade avatar
Steve Wade

does anyone have any ideas as I am running out myself

Steve Wade avatar
Steve Wade

the issue is 100% the KMS policy

Steve Wade avatar
Steve Wade

as soon as I remove encryption everything starts working
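For anyone hitting the same wall: when the SNS topic is encrypted with a customer-managed KMS key, the key policy has to let the RDS events service use the key, otherwise event publishes fail silently. A sketch of the extra key-policy statement (service principal and actions per AWS’s documentation for encrypted topics; verify against your account):

```hcl
# assumed setup: the SNS topic targeted by the RDS event subscription
# is encrypted with this customer-managed key
data "aws_iam_policy_document" "kms_key_policy" {
  # ... your existing admin/usage statements stay here ...

  statement {
    sid    = "AllowRDSEventsToUseKey" # hypothetical sid
    effect = "Allow"

    principals {
      type        = "Service"
      identifiers = ["events.rds.amazonaws.com"]
    }

    actions = [
      "kms:Decrypt",
      "kms:GenerateDataKey*",
    ]

    resources = ["*"] # in a key policy, "*" means "this key"
  }
}
```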

Ryan Fisher avatar
Ryan Fisher

Hi all, quick question: I want to check out a git repo in TF (I can do this with null_resource), then read a YAML file from that repo into a TF var. Anyone know if null_resource is the only way to accomplish this?

Also, what is the future of null_resource, since it’s flagged as deprecated? It seems to me that there are still use cases that locals don’t solve (like this one: the repo doesn’t exist when locals are parsed).

1
loren avatar
loren

You can just use a module block with a source reference pointing to the repo (including the git ref). On init, terraform will pull down the repo. You can reference the files from the .terraform directory

Ryan Fisher avatar
Ryan Fisher

Ah, good call, thanks!

loren avatar
loren

You don’t get a tfvar that way, exactly, but you can use the file and yamldecode functions to pull the value(s) into a local
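A minimal sketch of that pattern; the repo URL, ref, and file name are hypothetical, and the repo may need at least one .tf file to be accepted as a module:

```hcl
# terraform init clones the repo into .terraform/modules/config_repo
module "config_repo" {
  source = "git::https://github.com/example-org/config.git?ref=v1.2.0"
}

locals {
  # read the YAML out of the cloned checkout; path.root is the
  # directory terraform was run from
  settings = yamldecode(
    file("${path.root}/.terraform/modules/config_repo/settings.yaml")
  )
}
```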

Ryan Fisher avatar
Ryan Fisher

Hmm, can’t use variables in a source declaration though. Might have to re-think this. I’m calling a module and passing it variables for the repo to check out (have multiple repos depending on the config).

loren avatar
loren

If the git repo or the ref are variables, then no, this doesn’t work, but probably nothing else will either. You’re looking at some kind of wrapper at that point

Ryan Fisher avatar
Ryan Fisher

Yeah, my initial solution was calling a script with the local-exec provisioner that parsed the vars to checkout the repo.

managedkaos avatar
managedkaos

I think Loren might be saying you can wrap your TF code: maybe have a script that creates the tf file with the module source, and run that script before running terraform plan/apply/etc

Ryan Fisher avatar
Ryan Fisher

Yeah, I understand. Trying to see if I can avoid that.

managedkaos avatar
managedkaos

roger that!

managedkaos avatar
managedkaos

Sidebar question: how many repos are you working with? I know it might not be scalable but what about pulling down all the repos and then using a variable to reference the one for the current configuration?

loren avatar
loren

Right, exactly, cdktf or terragrunt or other external tooling and generating the tf might be preferable to trying to hack the workflow from within a terraform config

Ryan Fisher avatar
Ryan Fisher

Yeah that might work, pulling all the repos. The way I have it set up now is each repo has a Google Cloud pipeline in it: one module/pipeline/repo that calls another module that handles parsing which repo(s) to check out. I think for now I can just grab all of them every time, it’s only 6 repos atm.

loren avatar
loren

Or take the opportunity to rethink the larger workflow, really take advantage of the declarative nature of terraform

Ryan Fisher avatar
Ryan Fisher

Otherwise wrapper script or makefile it is

Ryan Fisher avatar
Ryan Fisher

Thanks for the help

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you sure you even need to clone the repo? can you just use the raw url?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

Ryan Fisher avatar
Ryan Fisher

ah… never thought of that. No I don’t need the repo, I just need the file.
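If you only need the one file, the hashicorp/http data source is another clone-free option; a sketch with a hypothetical URL (note the attribute is body on older releases of the http provider and response_body from 3.x on):

```hcl
data "http" "settings" {
  # hypothetical raw-file URL
  url = "https://raw.githubusercontent.com/example-org/config/main/settings.yaml"
}

locals {
  # yamldecode turns the fetched YAML into a Terraform object
  settings = yamldecode(data.http.settings.response_body)
}
```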

Ryan Fisher avatar
Ryan Fisher

Thanks @Erik Osterman (Cloud Posse)! Saved me a bunch of time probably dealing with a wrapper or writing a custom module.

2021-04-05

Michael Koroteev avatar
Michael Koroteev

has anyone tried using the EKS node group module to deploy bottlerocket-based workers ?

Mohammed Yahya avatar
Mohammed Yahya
cloudandthings/terraform-pretty-plan

Contribute to cloudandthings/terraform-pretty-plan development by creating an account on GitHub.

Stepan Kuksenko avatar
Stepan Kuksenko

guys, does somebody know why https://github.com/cloudposse/terraform-aws-kops-efs is no longer supported? is there another solution to provision EFS for kops, maybe a kops addon or something?

cloudposse/terraform-aws-kops-efs

Terraform module to provision an IAM role for the EFS provider running in a Kops cluster, and attach an IAM policy to the role with desired permissions - cloudposse/terraform-aws-kops-efs

Stepan Kuksenko avatar
Stepan Kuksenko

@Igor Rodionov @joshmyers @Maxim Mironenko (Cloud Posse) maybe you can help find an answer, sorry guys for the direct mention

cloudposse/terraform-aws-kops-efs

Terraform module to provision an IAM role for the EFS provider running in a Kops cluster, and attach an IAM policy to the role with desired permissions - cloudposse/terraform-aws-kops-efs

Igor Rodionov avatar
Igor Rodionov

I will send you a few links today

Stepan Kuksenko avatar
Stepan Kuksenko

thank you! so I did it like this:

  1. created the needed IAM permissions using kops edit cluster
  2. created EFS with https://github.com/cloudposse/terraform-aws-efs thank you guys for this module, it is wonderful !
  3. deployed the EFS driver with helm
Ignas avatar
Ignas
07:31:22 PM

Hi there! I’m new to Terraform (and DevOps in general). I’m trying to automate infra on a project I’m working on and I’m a bit confused about the logical separation of modules/resources. At the moment I’m refactoring the initial version (I’m splitting state per env) and wondering what could be a better approach to doing VPC configuration (I’m on Hetzner, not AWS). Right now I store all IP ranges in separate variables, grouped by purpose (app, backoffice), then I create the subnets and refer to those in my prod/main.tf when building instances, but this feels awkward. I’m wondering if it makes more sense to have smaller configuration units and move each subnet into its specific service module (or something like that). I broke my code apart and am looking for a smarter approach to this. Maybe someone has an existing project with similar configuration that I could take a look at?

TED Vortex avatar
TED Vortex

use modules ?

Ignas avatar
Ignas

you mean module per subnet?

jose.amengual avatar
jose.amengual

when I use TF to do VPC/Subnets I create all that in a separate component/module and then I search/find the subnet to use in each app/service

jose.amengual avatar
jose.amengual

the search/find = could be remote state share, data lookup or using outputs

Ignas avatar
Ignas

just to confirm, so you’d have something like subnets/app_mysql/, subnets/app_cache/, etc. file structure?

jose.amengual avatar
jose.amengual

not really

jose.amengual avatar
jose.amengual

I have a vpc/network project

jose.amengual avatar
jose.amengual

and the apps use those resources in their repos

jose.amengual avatar
jose.amengual

I do not mix app and networking

jose.amengual avatar
jose.amengual

A vpc and subnet is a lower level than the app so I separate those

jose.amengual avatar
jose.amengual

it is very strange to “delete” a subnet when you delete an app unless you have a vpc per app

Ignas avatar
Ignas

ohh ok, thanks, that clears it up a bit

Ignas avatar
Ignas

so you’d use something like

module "vpc" {
  source = "github.com/someuser/vpc"
}

and use its outputs within the app project?

jose.amengual avatar
jose.amengual

so if you look at it from the point of view of a cold start on a new aws account, the first thing you get is a vpc, because without it you can’t do anything. but usually you create your own, because the default is not on your CIDR range. so you set up the foundation for your app to work, and after that you do not modify the connectivity. Much like in a company there is a network department

jose.amengual avatar
jose.amengual
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

jose.amengual avatar
jose.amengual

similar to the example

Ignas avatar
Ignas

That makes sense to me, so you define all of the networking in a separate project. but then lets say you have a subnet for databases and another for an app, Do you just hardcode the IPs in the app’s project so it knows where the db servers are? There won’t be that many and I can just reference them in the networking project, so maybe it makes sense to just copy paste from one project to another? I’m talking about the subnet “10.8.0.0/24” strings.

Ignas avatar
Ignas

Or would your networking project generate an “output file” that your app project could pick up and use variables instead?

Ignas avatar
Ignas

Sorry if I’m not being clear, this is all new and a little confusing:) wondering what works better in practice

jose.amengual avatar
jose.amengual

you can use outputs for sure
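One common wiring for sharing those outputs across projects is a terraform_remote_state data source; a sketch with hypothetical backend values (the same idea works with whatever backend the networking project uses):

```hcl
# read the networking project's outputs; bucket/key/region are hypothetical
data "terraform_remote_state" "network" {
  backend = "s3"

  config = {
    bucket = "example-tf-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  # assumes the networking project exports an output named private_subnet_cidrs
  db_subnet_cidrs = data.terraform_remote_state.network.outputs.private_subnet_cidrs
}
```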

jose.amengual avatar
jose.amengual

if you are deploying app and RDS in a separate subnet then you will need outputs for each

jose.amengual avatar
jose.amengual

usually you create a number of private/public subnets per VPC, and the app and rds use the same subnets, secured by security groups

jose.amengual avatar
jose.amengual

and sometimes you will have a DMZ where you will allow traffic based on ACLs, but all that could be in this networking project

jose.amengual avatar
jose.amengual

most of the time, you do it once and once connectivity is good you do not touch it again

1
Ignas avatar
Ignas

got it, thank you!

Ignas avatar
Ignas

Probably worth mentioning that I’ll be generating an Ansible inventory file from all of this.

slack1270 avatar
slack1270

Hi all. Anyone know of a tool to automatically generate test code for terraform classes? At the moment writing tests seems very mechanical, and I’d like to take the heavy lifting out of the equation.

Padarn avatar
Padarn

Hi guys - curious about how ignore_changes is implemented in providers? We’re trying to debug part of the azure provider that doesn’t seem to respect this. Anyone have pointers?

2021-04-04

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Anyone using TF Enterprise here? (the on-prem version) We’re working on TFE integration and would appreciate feedback on the user instructions we’re publishing.

Rahul Sarkar avatar
Rahul Sarkar

Hey guys, Happy Easter Monday. I need an assist with debugging the terraform-provider-azurerm locally. I’m trying to get started to see if I can help the community and increase my understanding of Terraform. I have posted the question on stackoverflow: https://stackoverflow.com/questions/66945925/attempting-to-debug-the-terraform-provider-azurerm-so-that-i-can-contribute-to-t Any help will be much appreciated!

Attempting to debug the terraform-provider-azurerm so that I can contribute to the community. But terraform plan crashes

Introduction Hi guys, I am trying to get started with contributing to the terraform-provider-azurerm. I have noticed a problem with the azurerm_firewall_network_rule_collection I have reported it …

Darren P avatar
Darren P

I’ve got an interesting idea that I would like to run past anyone with experience or advice. I work for a non-profit and am automating our project deployments at the moment. To ensure we’re as cost-optimal as possible, I’ve decided that non-production projects will share as many AWS resources as possible. These projects are essentially their own ECS Service and will share 1 ECS cluster and 1 RDS database. With this multi-tenant approach, I’m wondering what the best way is to manage creation of multiple databases/users. I’m using Terragrunt and wanted to see if I could have these db “migrations” executed per terragrunt.hcl/project. My first thought was to create a terraform module that contained a lambda function or perhaps even a docker container.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

You might want to look at Aurora Serverless for this. It may come out cheaper and less complicated, because you’d still have separation between projects but no dedicated instances.

2021-04-03

2021-04-02

github140 avatar
github140

I’m not aware of such. I’d rather ask what’s blocking you from migrating to tf0.12/HCL2?

1
Tim Birkett avatar
Tim Birkett

@ the upgrade isn’t too scary… There’s a terraform 0.12upgrade helper command that works pretty well. If your Terraform code is split into small modular stacks, you can use tfenv to make sure you’re using the correct Terraform version and avoid needing to upgrade everything at once.

hrishidkakkad avatar
hrishidkakkad

Guys, I’m not able to pass environment variables correctly in my container definition

hrishidkakkad avatar
hrishidkakkad

Can someone help? We can get on a call

MattyB avatar
MattyB

Did you figure this out?

MattyB avatar
MattyB

I’m not affiliated with Cloud Posse, just a member here. Around 1.5y of experience with Terraform and Cloud Posse modules, just shy of 10y of experience overall.

github140 avatar
github140

Do you have a code snippet you’re looking to improve @

Rhys Davies avatar
Rhys Davies

Thanks for the reply guys, good to know that I wasn’t just missing some ancient secret Terraform feature. Yeah, looks like I’m gonna gear up to do the upgrade to 0.12 and up. I guess I was a bit reticent because the delta is gonna be massive with all those syntax changes!

Rhys Davies avatar
Rhys Davies

Guess I can feel good about pumping those Github -/+’s numbers

2021-04-01

hrishidkakkad avatar
hrishidkakkad

terraform-aws-ecs-alb-service-task - looking at this, it does not create the ALB. Am I right? Any particular reason why? The ecs_load_balancers argument (https://github.com/cloudposse/terraform-aws-ecs-alb-service-task#input_ecs_load_balancers) takes the name of an existing ALB

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Michael Koroteev avatar
Michael Koroteev

Hi Guys, Is there an example of how to create a node group based on bottlerocket ami using this module - https://github.com/cloudposse/terraform-aws-eks-node-group ? Thanks!

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Hao Wang avatar
Hao Wang

you will need eks workers to use a custom AMI

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Michael Koroteev avatar
Michael Koroteev

yea that part is a given. I was asking whether there is support for that in the module, since Bottlerocket is different in many ways from the amazon-eks-node AMI

Hao Wang avatar
Hao Wang

along with the bottlerocket ami or the ami you build with packer based on bottlerocket ami

Piotr Perzyna avatar
Piotr Perzyna

Hey all, this PR has been waiting 6 months for review and it is a very elegant way to secure an s3 bucket: https://github.com/cloudposse/terraform-aws-s3-bucket/pull/49 Could you work some magic and merge it?

Policy to allow only ssl uploads by jaymed · Pull Request #49 · cloudposse/terraform-aws-s3-bucket

what Adds enable flag to allow only ssl/https bucket uploads. Includes logic to merge other policies enabled by the user such as the string policy passed in via the policy variable and the other e…

jose.amengual avatar
jose.amengual

we are still waiting for a response from the contributor

Policy to allow only ssl uploads by jaymed · Pull Request #49 · cloudposse/terraform-aws-s3-bucket

what Adds enable flag to allow only ssl/https bucket uploads. Includes logic to merge other policies enabled by the user such as the string policy passed in via the policy variable and the other e…

jose.amengual avatar
jose.amengual

if you want this sooner you can create a PR with the same code

Piotr Perzyna avatar
Piotr Perzyna
Rebase #49: Policy to allow only ssl uploads by pperzyna · Pull Request #82 · cloudposse/terraform-aws-s3-bucket

This is a rebase of PR #49 what Adds enable flag to allow only ssl/https bucket uploads. Includes logic to merge other policies enabled by the user such as the string policy passed in via the poli…

jose.amengual avatar
jose.amengual

please look at the tests @

Piotr Perzyna avatar
Piotr Perzyna

@jose.amengual Could you try now?

jose.amengual avatar
jose.amengual

merged

Piotr Perzyna avatar
Piotr Perzyna

Thank you!

jose.amengual avatar
jose.amengual

np

Steve Wade avatar
Steve Wade

has anyone seen this before …

cloud-nuke defaults-aws
INFO[2021-04-01T13:40:37+01:00] Identifying enabled regions
ERRO[2021-04-01T13:40:37+01:00] session.AssumeRoleTokenProviderNotSetError AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
github.com/gruntwork-io/[email protected]/errors/errors.go:81 (0x16a1565)
runtime/panic.go:969 (0x1036699)
github.com/aws/[email protected]/aws/session/session.go:318 (0x1974a25)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:50 (0x19749ca)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:66 (0x1974b36)
github.com/gruntwork-io/cloud-nuke/aws/aws.go:86 (0x1974ce6)
github.com/gruntwork-io/cloud-nuke/commands/cli.go:281 (0x199506c)
github.com/gruntwork-io/[email protected]/errors/errors.go:93 (0x16a175e)
github.com/urfave/[email protected]/app.go:490 (0x1691402)
github.com/urfave/[email protected]/command.go:210 (0x169269b)
github.com/urfave/[email protected]/app.go:255 (0x168f5e8)
github.com/gruntwork-io/[email protected]/entrypoint/entrypoint.go:21 (0x1996478)
github.com/gruntwork-io/cloud-nuke/main.go:13 (0x19966a7)
runtime/proc.go:204 (0x10395e9)
runtime/asm_amd64.s:1374 (0x106b901)
  error="AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set."
Steve Wade avatar
Steve Wade

looking for some guidance here …

Do people normally wrap https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms and https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack together?

If so, do you configure all this as part of your RDS module or keep it separate?

Also, does anyone have an example in Slack of using https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack ?

Mahi C avatar
Mahi C

Hi All,

I’m having issues creating a Terraform module for RabbitMQ. The AWS provider supports RabbitMQ (released in November 2020) as of version 3.34.0, the latest, but in our organization we are using provider version 2.67.0. I encountered the error below.

Error: expected engine_type to be one of [ACTIVEMQ], got RabbitMQ

  on .terraform/modules/amazon-mq/amazon-mq/main.tf line 63, in resource "aws_mq_broker" "mq":
  63: resource "aws_mq_broker" "mq" {

Error: expected deployment_mode to be one of [SINGLE_INSTANCE ACTIVE_STANDBY_MULTI_AZ], got CLUSTER_MULTI_AZ

Tim Birkett avatar
Tim Birkett

Bite the bullet… upgrade your provider. I did it last week, it wasn’t too painful. 2.67.0 is way out of date and you’ll miss out on new features and resources.

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/version-3-upgrade is really helpful.

Check your plan outputs with a fine toothed comb… I successfully deregistered all instances on an ALB because I missed the warning here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group

1
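Upgrading usually starts with loosening the pin in the root module; a minimal constraint block (0.13+ required_providers syntax) might look like:

```hcl
terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.34.0" # RabbitMQ engine support, per the thread
    }
  }
}
```

Any module in the same configuration that pins an older provider (the s3 module mentioned below) has to be bumped too, since Terraform resolves one provider version for the whole configuration.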
Mahi C avatar
Mahi C

But I tried updating the version to 3.34.0 in the terraform root config.tf file and I’m facing issues in other modules related to the version change. The issue below is with an s3 module which is running on AWS provider version 2.57

Mahi C avatar
Mahi C

Could it be related to the older versions for the S3 bucket in the root module?

Matt Gowie avatar
Matt Gowie

For those of you with a couple of seconds to spare: this PR could always use another round of 👍 reactions: https://github.com/hashicorp/terraform-provider-aws/pull/15966

Add aws_amplify_app resource by ashishmohite · Pull Request #15966 · hashicorp/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

3
Rhys Davies avatar
Rhys Davies

Hey all, is there a way to do dynamic blocks in terraform 0.11?

Rhys Davies avatar
Rhys Davies

To give a concrete example: I’m stuck on an old version of Terraform and have never done the upgrade from HCL1 to HCL2 and 0.12, so I’m a bit hesitant to attempt it. Writing an ECS Service module with a dynamic load balancer block would do me wonders right now in cleaning up the code

Rhys Davies avatar
Rhys Davies

but I’m a bit lost on how to achieve a similar result to dynamic blocks in 0.12+ or if it’s even possible?
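For reference, dynamic blocks are HCL2-only (0.12+); in 0.11 the usual workarounds are splitting into separate resources or generating the config with external tooling. The 0.12+ form being discussed looks like this (a sketch; the variable shape and the cluster/task-definition references are hypothetical):

```hcl
variable "load_balancers" {
  # hypothetical shape for the per-service load balancer wiring
  type = list(object({
    target_group_arn = string
    container_name   = string
    container_port   = number
  }))
  default = []
}

resource "aws_ecs_service" "this" {
  name            = "example"                # hypothetical
  cluster         = var.cluster_arn
  task_definition = var.task_definition_arn

  # one load_balancer block per list element; zero elements, zero blocks
  dynamic "load_balancer" {
    for_each = var.load_balancers
    content {
      target_group_arn = load_balancer.value.target_group_arn
      container_name   = load_balancer.value.container_name
      container_port   = load_balancer.value.container_port
    }
  }
}
```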
