#terraform (2019-10)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2019-10-01

rbadillo avatar
rbadillo

Good morning, I’m having issues with block_device_mappings in a launch template

rbadillo avatar
rbadillo

Terraform 0.12

rbadillo avatar
rbadillo
  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_type = "gp2"
      volume_size = 64
    }
  }

  block_device_mappings {
    device_name = "/dev/sdb"

    ebs {
      volume_type = "io1"
      iops        = var.iops
      volume_size = var.volume_size
    }
  }
rbadillo avatar
rbadillo

That’s resulting in this

rbadillo avatar
rbadillo
rbadillo avatar
rbadillo

Any idea what I’m doing wrong?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what’s the issue?

rbadillo avatar
rbadillo

terraform is adding iops to the gp2 volume

rbadillo avatar
rbadillo

don’t know why

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so maybe that’s a default

rbadillo avatar
rbadillo

Actually everything is good

rbadillo avatar
rbadillo

false alarm

rbadillo avatar
rbadillo

thanks for your help

drexler avatar
drexler

need some gurus with this. I have a PowerShell script to pull an artifact from S3 and install the application during the EC2 bootstrap process. Standalone, the PowerShell script works. When applied via the User Data property, I get the following in the Ec2ConfigLog when I remote into the instance to see what happened:

2019-10-01T16:13:54.710Z: Ec2HandleUserData: Message: Start running user scripts
2019-10-01T16:13:54.726Z: Ec2HandleUserData: Message: Could not find <runAsLocalSystem> and </runAsLocalSystem>
2019-10-01T16:13:54.726Z: Ec2HandleUserData: Message: Could not find <powershellArguments> and </powershellArguments>
2019-10-01T16:13:54.726Z: Ec2HandleUserData: Message: Could not find <persist> and </persist>

I’ve ruled out the PowerShell version, since v4 works with it, and I even base64-decoded what Terraform uploads when creating the resource. The target instance is a Windows Server 2012 box. Ideas appreciated.
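
For context, a minimal sketch of how Windows user data is usually passed from Terraform so EC2Config will run it (resource names, the AMI variable, the bucket, and the script path are all hypothetical): EC2Config only executes user data wrapped in <powershell> or <script> tags, and <persist>true</persist> keeps it running on later boots.

resource "aws_instance" "app" {
  ami           = var.windows_ami_id   # assumed Windows Server 2012 AMI
  instance_type = "t3.medium"

  user_data = <<-EOF
    <powershell>
    # pull the artifact from S3 and install it (paths and names are placeholders)
    Read-S3Object -BucketName "my-artifacts" -Key "app.zip" -File "C:\app.zip"
    & C:\scripts\install.ps1
    </powershell>
    <persist>true</persist>
  EOF
}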

2019-10-02

drexler avatar
drexler

figured out my problem… the custom AMI I was using didn’t have user data script execution enabled. The little things we miss…

1
foqal avatar
foqal
01:35:41 PM

Helpful question stored to <@Foqal> by @oscar:

Why isn't my User Data script running?
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304

sohel2020 avatar
sohel2020

Does SweetOps have any terraform module to create a Kubernetes cluster using kops?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no, we use kops directly, but we use terraform to provision backing services needed by kops

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i have seen some modules out there do what you say though…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(not by us)

sohel2020 avatar
sohel2020

@Erik Osterman (Cloud Posse) Could you please give me a link?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
GitHub


1
Bogdan avatar

Can anyone recommend a 0.12-ready ECS service module?

Bogdan avatar

I tried airship but can’t even get past terraform init unfortunately

2019-10-03

oscar avatar

@Erik Osterman (Cloud Posse) Many moons ago we discussed that the Dockerfile is for global variables (#geodesic), that .envrc is good for slightly less global variables shared across applications, and that Terraform-only variables should live in an auto.tfvars file.

I’ve since followed that rule but I’ve noticed the following warning:


Using a variables file to set an undeclared variable is deprecated and will
become an error in a future release. If you wish to provide certain "global"
settings to all configurations in your organization, use TF_VAR_...
environment variables to set these instead.

It must be time to move away from that approach and actually start placing Terraform variables in our .envrc with USE tfenv right?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t think that’s what this means.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It means you have a variable in one of your .tfvars files that does not have a corresponding variable block:

variable "...." { 
}
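
In other words, a minimal sketch (the variable name is hypothetical): every value set in a *.auto.tfvars file needs a matching declaration in the project, otherwise you get the deprecation warning above.

# terraform.auto.tfvars — sets a value
region = "us-west-2"

# variables.tf — the declaration the warning says is missing
variable "region" {
  type = string
}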
Jakub Korzeniowski avatar
Jakub Korzeniowski

Hi wave . I’m dealing with some really old terraform files that sadly don’t specify the version of the modules that have been used to apply the infrastructure. Is there a way to tell that from the tf state file?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmmmm that could be tricky. I can’t recall if the tfstate file persists that. Have you had a look at it?

Daniel Minella avatar
Daniel Minella

Does anyone have a terraform script that creates a stack with prometheus and thanos?

drexler avatar
drexler

I built a VPC pre-TF v0.12 using terraform-aws-modules/vpc/aws. I forgot to pin the version. Does anyone know the last version compatible with TF v0.11.x? Getting the following error:

Error downloading modules: Error loading modules: module vpc: Error parsing .terraform/modules/21a99daec297cf2c47674e5f63337da8/terraform-aws-modules-terraform-aws-vpc-5358041/main.tf: At 2:23: Unknown token: 2:23 IDENT max
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Terraform AWS modules

Collection of Terraform AWS modules supported by the community - Terraform AWS modules

oscar avatar
terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

jose.amengual avatar
jose.amengual

any reason why this: https://github.com/cloudposse/terraform-aws-kms-key/blob/0.11/master/main.tf#L43 got deleted from the TF 0.12 version?

cloudposse/terraform-aws-kms-key

Terraform module to provision a KMS key with alias - cloudposse/terraform-aws-kms-key

jose.amengual avatar
jose.amengual

looks like there is already a pull request for this :

cloudposse/terraform-aws-kms-key

Terraform module to provision a KMS key with alias - cloudposse/terraform-aws-kms-key

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was added to the TF 0.11 branch after the module was converted to 0.12

jose.amengual avatar
jose.amengual

I see, so that PR should be OK to merge?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

after a few minor issues are fixed, yes (commented in the PR)

jose.amengual avatar
jose.amengual

awesome

davidvasandani avatar
davidvasandani

Is it possible to disable the log bucket for the terraform-aws-s3-website module? It seems like it was going to be configurable but I can’t find it. https://github.com/cloudposse/terraform-aws-s3-website/issues/21#issuecomment-420829113

Error deleting S3 Bucket {logs-bucket} : BucketNotEmpty · Issue #21 · cloudposse/terraform-aws-s3-website

module Parameters module "dev_front_end" { source = "git://github.com/cloudposse/terraform-aws-s3-website.git?ref=master" namespace = "namespace" stage = "…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m not sure off the bat; however, @Andriy Knysh (Cloud Posse) is working on terraform all next week (as well as this week)


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

he’s currently working on the beanstalk modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we can maybe get to this after that (or if you want to open a PR)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s easy to implement: a new var.logs_enabled and a dynamic block for logging

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but when it’s enabled, the logs bucket still can’t be destroyed automatically without adding force_destroy, as is done for the main bucket https://github.com/cloudposse/terraform-aws-s3-website/blob/master/main.tf#L45

cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s probably another var.logs_force_destroy
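
A minimal sketch of what that could look like, assuming hypothetical variable names (var.logs_enabled, var.logs_force_destroy) and placeholder bucket names:

variable "logs_enabled" {
  type    = bool
  default = true
}

variable "logs_force_destroy" {
  type    = bool
  default = false
}

resource "aws_s3_bucket" "logs" {
  count         = var.logs_enabled ? 1 : 0
  bucket        = "example-logs"     # placeholder
  force_destroy = var.logs_force_destroy
}

resource "aws_s3_bucket" "website" {
  bucket = "example-website"         # placeholder

  # only emit the logging block when logs are enabled
  dynamic "logging" {
    for_each = var.logs_enabled ? [1] : []
    content {
      target_bucket = aws_s3_bucket.logs[0].id
      target_prefix = "logs/"
    }
  }
}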

davidvasandani avatar
davidvasandani

Cool! Thanks @Andriy Knysh (Cloud Posse) and @Erik Osterman (Cloud Posse) I’ll work on a PR. Sorry for not just looking at the code and realizing that a PR would resolve it.

Hemanth avatar
Hemanth

what’s happening here -> ` count = "${var.add_sns_policy != "true" && var.sns_topic_arn != "" ? 0 : 1}" ` can someone explain? Referring to https://github.com/cloudposse/terraform-aws-efs-cloudwatch-sns-alarms/blob/master/main.tf

cloudposse/terraform-aws-efs-cloudwatch-sns-alarms

Terraform module that configures CloudWatch SNS alerts for EFS - cloudposse/terraform-aws-efs-cloudwatch-sns-alarms

maarten avatar
maarten

take a look here, https://bit.ly/1mz5q0g (the Wikipedia article on the ternary operator)

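
For anyone skimming the archive, the quoted expression is just Terraform’s conditional (ternary) operator, condition ? value_if_true : value_if_false. A hedged reading of that particular line:

# 0.12-style equivalent of the quoted 0.11 expression:
count = var.add_sns_policy != "true" && var.sns_topic_arn != "" ? 0 : 1

# Reading: when add_sns_policy is NOT "true" AND an external sns_topic_arn was
# supplied, count is 0 (the resource it sits on is skipped); otherwise count is 1.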

2019-10-04

Fred Light avatar
Fred Light

Hi guys. This morning I just copied a working geodesic into a new one and I got an error when trying to upload the tfstate to the tfstate-backend S3 bucket (403). Digging into this I found a bucket policy that was not present in the other buckets. I don’t really understand where it can come from, since all the versions and image tags are pinned. Has somebody already faced such a case?

oscar avatar

403 is auth. You’re likely trying to upload to the bucket of the old account/geodesic shell

oscar avatar

Go into your Terraform project’s directory and run env | grep -i bucket and see what the S3 bucket is to confirm or disprove that theory.

Fred Light avatar
Fred Light
old one:
TF_BUCKET_REGION=eu-west-3
TF_BUCKET=go-algo-dev-terraform-state
Fred Light avatar
Fred Light
new one:
TF_BUCKET_REGION=eu-west-3
TF_BUCKET=go-algo-commons-terraform-state
oscar avatar

Hm. Not what I thought

Fred Light avatar
Fred Light

in fact I don’t understand why the old one didn’t have the Bucket Policy, which is clearly defined in aws-tfstate-backend (still, it doesn’t explain why it was blocking me, but…)

Fred Light avatar
Fred Light

the usage was :

  • start geodesic
  • assume-role
  • go in tfstate-backend folder
  • comment s3 state backend
  • init-terraform
  • terraform plan/apply
  • uncomment s3 backend
  • re init-terraform and answer yes about uploading existing state -> 403
Fred Light avatar
Fred Light

and the error is :

Initializing modules...
- module.tfstate_backend
- module.tfstate_backend.base_label
- module.tfstate_backend.s3_bucket_label
- module.tfstate_backend.dynamodb_table_label

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Releasing state lock. This may take a few moments...
Error copying state from the previous "local" backend to the newly configured "s3" backend:
    failed to upload state: AccessDenied: Access Denied
	status code: 403, request id: XXXXXXXXXXXXX, host id: xxxxxxxxxxxxxxxx

The state in the previous backend remains intact and unmodified. Please resolve
the error above and try again.
Fred Light avatar
Fred Light

deleting bucket policy allowed upload with the same command

oscar avatar

The state in the local tfstate file is your old bucket

oscar avatar

I suspect it is using the same .terraform directory as the old geodesic / account?

oscar avatar

I’ve had that a few times when I forget to reinit properly.

Fred Light avatar
Fred Light

Hmm, it should not, since this is ignored in .dockerignore, no?

Fred Light avatar
Fred Light

I’ll have to clone it again, so I will check on it

Fred Light avatar
Fred Light

and also the terraform-state folder is coming from FROM cloudposse/terraform-root-modules:0.106.1 as terraform-root-modules

Fred Light avatar
Fred Light

so there should be no way of importing a pre-existing .terraform folder, unless I am missing something

oscar avatar

Yeh wasn’t sure if you were running from /localhost or /conf

Fred Light avatar
Fred Light

yes from /conf

Fred Light avatar
Fred Light

ok, I duplicated a second env from the original one and I got the same symptoms (403 when uploading the tf state)

Fred Light avatar
Fred Light

the policy applied to the bucket is this one :

Fred Light avatar
Fred Light
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::go-algo-prod-terraform-state/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::go-algo-prod-terraform-state/*",
            "Condition": {
                "Null": {
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}
Fred Light avatar
Fred Light

tried removing the 1st statement => 403, removed only the 2nd one => 403, so I guess both are causing the issue

Fred Light avatar
Fred Light

running in eu-west-3, if that makes a difference for ServerSideEncryption

Bogdan avatar

anyone know what the syntax of the container_depends_on input in cloudposse/terraform-aws-ecs-container-definition should look like? CC: @Erik Osterman (Cloud Posse) @sarkis @Andriy Knysh (Cloud Posse)

Bogdan avatar

in https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDependency.html it’s specified as containing two values: condition and containerName

ContainerDependency - Amazon Elastic Container Service

The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.

Bogdan avatar

so should I pass a list of maps that contains those two keys?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try to provide a list of maps with those keys

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if any issues

Bogdan avatar

will do thanks!

Bogdan avatar

I’m using something like this with the terraform-aws-ecs-container-definition module, with my tf version being 0.12.9:

  log_options = {
    awslogs-create-group  = true
    awslogs-region        = "eu-central-1"
    awslogs-groups        = "group-name"
    awslogs-stream-prefix = "ecs/service"
  }

but I get
Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal bool into Go struct field LogConfiguration.Options of type string

Bogdan avatar

I’ve tried to use (double-)quotes for the “true” and still didn’t get rid of it…

Bogdan avatar

@sarkis @Andriy Knysh (Cloud Posse) ^^^

cabrinha avatar
cabrinha
cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

cabrinha avatar
cabrinha

and I’m wondering: why is there an alb_ingress definition, but it seems that there is no ALB being created?

sarkis avatar

The listener needs to be created in that module and then passed along here: https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L94

~https://github.com/cloudposse/terraform-aws-ecs-alb-service-task module creates the ALB itself~ it’s been a while since I used these - see @Andriy Knysh (Cloud Posse)’s response below for a better explanation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ALB is created separately

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ecs-atlantis

Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis

cabrinha avatar
cabrinha

So, when I try to spin down this stack, I get:

module.alb_ingress.aws_lb_target_group.default: aws_lb_target_group.default: value of 'count' cannot be computed
cabrinha avatar
cabrinha

And, if I spin the stack up, then destroy it, I hit this error, and even running a new plan then gives the same error.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this project was applied and destroyed many times w/o those errors https://github.com/cloudposse/terraform-root-modules/tree/master/aws/ecs

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s for atlantis, but you can take the parts related only to ECS and test

cabrinha avatar
cabrinha

I think the issue is that in the alb_ingress module, I’m passing in the default_target_group_arn from the ALB module into target_group_arn of the alb_ingress module…

cabrinha avatar
cabrinha
cabrinha avatar
cabrinha

So, I’m a little confused as to how to connect the alb_ingress module with the alb module, or if I even need to.

cabrinha avatar
cabrinha

The ALB module has no examples and the alb_ingress module has an example of only itself being called. I think it’d be nice to have an example that shows putting these two together.

cabrinha avatar
cabrinha

Somehow I keep getting this error:

aws_ecs_service.ignore_changes_task_definition: InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-west-2::targetgroup/qa-nginx-default/b4be3dbba5e084ab does not have an associated load balancer.
cabrinha avatar
cabrinha

Looks like the service and the ALB are being created at the same time, but really the service needs to be created after the alb.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ALB needs to be created first

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so either use -target, or call apply two times
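
A sketch of the two-pass approach (the module name module.alb is hypothetical):

terraform apply -target=module.alb   # first pass: only the ALB and its dependencies
terraform apply                      # second pass: everything else, once the ALB exists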

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(we’ll be adding examples this week when we convert all the modules to TF 0.12)

cabrinha avatar
cabrinha

I suppose in TF 0.12 they’ll have depends_on for modules soon.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not about depends_on, TF waits for the ALB to be created, BUT it does not wait for it to be in READY state

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and it takes time for ALB to become ready

cabrinha avatar
cabrinha

right

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so depends_on will not help here

Bogdan avatar
Bogdan
05:38:28 PM

It turns out that starting with a capital “T” solves the issue; it should be “True”, not true

I’m using something like this with the terraform-aws-ecs-container-definition module, with my tf version being 0.12.9:

  log_options = {
    awslogs-create-group  = true
    awslogs-region        = "eu-central-1"
    awslogs-groups        = "group-name"
    awslogs-stream-prefix = "ecs/service"
  }

but I get
Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal bool into Go struct field LogConfiguration.Options of type string
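
For reference, a hedged sketch of the shape that avoids the unmarshal error: LogConfiguration.Options in the container definition JSON is a map of strings, so every value has to be passed as a string (shown here following Bogdan’s capital-“True” fix above; note the awslogs driver’s option key is awslogs-group, singular):

  log_options = {
    awslogs-create-group  = "True"          # passed as a string, not a bool
    awslogs-region        = "eu-central-1"
    awslogs-group         = "group-name"
    awslogs-stream-prefix = "ecs/service"
  }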

1
Brij S avatar

I’ve got a variable, "${aws_ssm_parameter.npm_token.arn}", which ends up being arn:aws:ssm:us-west-2:xxxxxxxxx:parameter/CodeBuild/npm-token-test. Is it possible to remove the -test based on whether a variable is true or not?

Brij S avatar

tried this, with no luck

${var.is_test == "true" ? "${substr(aws_ssm_parameter.npm_token.arn, 0, length(aws_ssm_parameter.npm_token.arn) - 5)}" : "${aws_ssm_parameter.npm_token.arn}"}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why no luck?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you don’t need to do interpolation inside interpolation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

"${var.is_test == "true" ? substr(aws_ssm_parameter.npm_token.arn, 0, length(aws_ssm_parameter.npm_token.arn) - 5) : aws_ssm_parameter.npm_token.arn}"

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and what’s the provided value for var.is_test?

Brij S avatar

true

Brij S avatar

that worked, forgot I didn’t need to do the interpolation inside

Brij S avatar

thanks @Andriy Knysh (Cloud Posse)!

2019-10-05

Milos Backonja avatar
Milos Backonja

Hi, I am using a custom script to export Swagger from a REST API and to create boilerplate for an API Gateway import. I created a deployment resource which will trigger a redeploy once the Swagger export changes. It looks something like this:

# Create Deployment
resource "aws_api_gateway_deployment" "restapi" {
  rest_api_id = aws_api_gateway_rest_api.restapi.id

  variables = {
    trigger = filemd5("./result.json")
  }

  lifecycle {
    create_before_destroy = true
  }
}

For now it works as expected. I am just trying to find a solution that does not destroy previous deployments on new deploys. Any suggestions more than welcome.

2019-10-07

Hemanth avatar
Hemanth

how does terraform handle ${path.module}? What value does it take, for example in file("${path.module}/hello.txt")?

Hemanth avatar
Hemanth

file("${path.module}/hello.txt") is same as file("./hello.txt") ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if used from the same folder (project), then yes, the same

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

${path.module} is useful when the module is used from another external module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

${path.module} still points to the module’s root, not the example’s root (even if used from the example)
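
A small hedged illustration, assuming a hypothetical layout where the module lives in ./modules/cloud:

# When this module is called from ./examples/complete or any other root configuration:
#   path.module -> the directory of this module (./modules/cloud)
#   path.root   -> the directory of the root configuration being applied
locals {
  hello = file("${path.module}/hello.txt") # resolves relative to the module, not the caller
}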

Cloud Posse avatar
Cloud Posse
04:05:29 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 16, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

2019-10-08

oscar avatar

Anyone able to find the Terraform Cloud CIDR?

wave1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform Cloud desperately needs something like this https://chartio.com/docs/data-sources/tunnel-connection/ (what chartio does)

Tunnel connectionattachment image

Chartio offers two ways of connecting to your database. A direct connection is easier; you can also use our SSH Tunnel Connection.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@sarkis

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

what’s nice about this is you don’t even expose SSH. using autossh you establish persistent tunnels from inside your environments that connect out to chartio

oscar avatar

Tragically enough, it isn’t that that’s the issue

oscar avatar

our VCS is IP whitelisted

oscar avatar

so TF Cloud can’t access our code

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha

oscar avatar

But yes that would be a useful solution if/when I hit the roadblock of terraform private infrastructure

oscar avatar

Though in theory it is only running against AWS Accounts

oscar avatar

so shouldn’t be an issue

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

honestly, if you’re that locked down, onprem terraform cloud is what you’re going to probably need

1
oscar avatar

Aye

oscar avatar

Got a call with them tomorrow

oscar avatar

I’ll feed back in PM about what they say

oscar avatar

Time to get ripped off by SaaS

oscar avatar

the trouble is though, we don’t want on prem

oscar avatar

we want managed services

oscar avatar

less maintenance etc. We don’t want to host it!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but you self-host VCS it sounds like

sarkis avatar

@Erik Osterman (Cloud Posse) 100% agree on the connection bit - I’m proposing something like VPC Endpoints to the team after using TFC a bit we ran into the same limitation

oscar avatar

I can’t seem to find it with google / terraform docs

joshmyers avatar
joshmyers

Do they provide one?

dalekurt avatar
dalekurt

Q: How can I target a specific .tf file with terraform? I have multiple tf files in my project and I would like to destroy and apply the resources specified within one specific tf file. I’m aware I can -target the resources within the tf file but I want an easier way if that exists.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

no easy way that I can think of with pure terraform.

Szymon avatar

Maybe you can put those files into separate folders?

Szymon avatar

Or read about terragrunt

dalekurt avatar
dalekurt

Thanks @Szymon I had been reading up on Terragrunt recently

oscar avatar
oscar
04:15:26 PM

Doesn’t appear to be that way

Do they provide one?

MattyB avatar

Hey everyone, I’ve been playing around with Terraform for a couple of weeks. After making a simple POC my modules ended up similar to CloudPosse modules, on a much smaller scale. I found your repos just this past weekend and am pretty impressed with them compared to other community modules. One thing I haven’t figured out yet is how to build the infrastructure in one go. It seems to me like you can build the VPC, ALB, etc. all at once and then have to separately build the RDS due to dependency issues. I’d assume there are similar issues with much more complex architectures? I thought it was how I configured my own POC, but using the CloudPosse modules there’s a similar issue. Example:

Error: Invalid count argument

  on .terraform/modules/rds_cluster_aurora_mysql.dns_replicas/main.tf line 2, in resource "aws_route53_record" "default":
   2: count = var.enabled ? 1 : 0

The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to first apply only the resources that the count depends on.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@MattyB the count error happened very often in TF 0.11, it’s much better in 0.12. We have some info about the count error here https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

The error itself has nothing to do with how you organize your modules and projects - it’s separate tasks/issues. Take a look at these threads https://sweetops.slack.com/archives/CB6GHNLG0/p1569434890216300 and https://sweetops.slack.com/archives/CB6GHNLG0/p1569434945216700?thread_ts=1569434890.216300&cid=CB6GHNLG0

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Sharanya avatar
Sharanya

Looking for a Terraform module for an RDS instance configuration: SQL Server Standard 2017, 2017 v14

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have https://github.com/cloudposse/terraform-aws-rds, tested with MySQL and Postgres, was not tested with SQL Server (but could work, just specify the correct params - https://github.com/cloudposse/terraform-aws-rds/blob/master/variables.tf#L111)

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Now that’s how you build a module

cool-doge1
jose.amengual avatar
jose.amengual

would it be possible to pass this construct:

lifecycle_rules        = {
    id      = "test-expiration"
    enabled = true

    abort_incomplete_multipart_upload_days = 7

    expiration {
      days = 30
    }

    noncurrent_version_expiration {
      days = 30
    }
  }

to a variable with something like map(string)?

Chris Fowles avatar
Chris Fowles

you could do it with structural types if you want to validate the input, e.g. map(object({id = string, enabled = bool, etc})), or you could do it as map(any) if you don’t want to validate the input

Chris Fowles avatar
Chris Fowles

as a general practice though, I tend to try to make variables somewhat more self-documenting and have things like expiration_enable, abort_incomplete_multipart_upload_days, etc. as separate values, as it makes it easier for others to use the module

Chris Fowles avatar
Chris Fowles
Type Constraints - Configuration Language - Terraform by HashiCorp

Terraform module authors and provider developers can use detailed type constraints to validate the inputs of their modules and resources.
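
A hedged sketch of the two options Chris describes, with illustrative attribute names only:

variable "lifecycle_rules" {
  # validate the shape of each rule; attribute names here are illustrative
  type = map(object({
    id                                     = string
    enabled                                = bool
    abort_incomplete_multipart_upload_days = number
    expiration_days                        = number
  }))
  default = {}
}

# or, skip validation entirely:
variable "lifecycle_rules_loose" {
  type    = map(any)
  default = {}
}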

jose.amengual avatar
jose.amengual

but how do you map :

expiration {
      days = 30
    }

to a map(any), for example?

jose.amengual avatar
jose.amengual

since it’s key = value

jose.amengual avatar
jose.amengual

it would be so much easier if you could attach a lifecycle policy or include a file for variable blocks

Chris Fowles avatar
Chris Fowles

yeh ok good point - blocks are a bit of a pain in the butterfly

Chris Fowles avatar
Chris Fowles

technically they’re a list of maps

jose.amengual avatar
jose.amengual
Error: Invalid value for module argument

  on s3_buckets.tf line 29, in module "s3_bucket_scans":
  29:   lifecycle_rule        = {
  30:     id      = "test-scans-expiration",
  31:     enabled = true,
  33:     abort_incomplete_multipart_upload_days = 7,
  35:     expiration = {
  36:       days = 30
  37:     }
  39:     noncurrent_version_expiration = {
  40:       days = 30
  41:     }
  42:   }

The given value is not suitable for child module variable "lifecycle_rule"
defined at ../terraform-aws-s3-bucket/variables.tf:102,1-26: all map elements
must have the same type.

jose.amengual avatar
jose.amengual

it did not like that

jose.amengual avatar
jose.amengual

I tried this too :

  lifecycle_rule { 
    id = var.lifecycle_rule
  }
jose.amengual avatar
jose.amengual

and

lifecycle_rule        = <<EOT
  {
    #id      = "test-scans-expiration"
    enabled = true

    abort_incomplete_multipart_upload_days = 7

    expiration {
      days = 30
    }

    noncurrent_version_expiration {
      days = 30
    }
  }
  EOT
jose.amengual avatar
jose.amengual

it almost worked

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jose.amengual here’s how we provide lifecycle_rule https://github.com/cloudposse/terraform-aws-s3-website/blob/master/main.tf#L74

cloudposse/terraform-aws-s3-website

Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but if you want to have the entire block as a variable, why not declare it as an object (or list of objects) with different item types, including other objects

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Chris Fowles avatar
Chris Fowles
  lifecycle_rule {
    id      = module.default_label.id
    enabled = var.lifecycle_rule_enabled
    prefix  = var.prefix
    tags    = module.default_label.tags

    noncurrent_version_transition {
      days          = var.noncurrent_version_transition_days
      storage_class = "GLACIER"
    }

    noncurrent_version_expiration {
      days = var.noncurrent_version_expiration_days
    }
  }

in my mind (and experience) this is a much better way to do it rather than trying to pass a big object as a variable. You end up with a much easier-to-understand interface to your module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes agree. The only case when you’d want to provide it as a big object var is when the entire block is optional

loren avatar

I prefer using a big object and leaving the usage up to the user

loren avatar

With dynamic blocks it seems to work very well in tf 0.12
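
A hedged sketch of that dynamic-block pattern, with a hypothetical variable shape and a placeholder bucket name:

variable "lifecycle_rules" {
  # hypothetical shape; expiration kept as a simple number for the sketch
  type = list(object({
    id              = string
    enabled         = bool
    expiration_days = number
  }))
  default = []
}

resource "aws_s3_bucket" "this" {
  bucket = "example-bucket" # placeholder

  dynamic "lifecycle_rule" {
    for_each = var.lifecycle_rules
    content {
      id      = lifecycle_rule.value.id
      enabled = lifecycle_rule.value.enabled

      expiration {
        days = lifecycle_rule.value.expiration_days
      }
    }
  }
}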

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, it all depends (on use case and your preferences)

jose.amengual avatar
jose.amengual

and

jose.amengual avatar
jose.amengual

since the other guy is taking too long

jose.amengual avatar
jose.amengual

and

jose.amengual avatar
jose.amengual

We are all on the same team

jose.amengual avatar
jose.amengual

if you have time to look at the alb and kms pull requests I would appreciate it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

commented on kms

jose.amengual avatar
jose.amengual

done on that; anything I can help with on terraform-aws-alb/pull/29?

jose.amengual avatar
jose.amengual

ok, I like the s3-website module example, I will do something like that

Mads Hvelplund avatar
Mads Hvelplund

does anyone know how to merge a list of maps into a single map with tf 0.12? i.e., I want:

[
  {
    "foo": {
      "bar": 1
      "bob": 2
    }
  },
  {
    "baz": {
      "lur": 3
    }
  }
]

to become:

{
  "foo": {
    "bar": 1
    "bob": 2
  }
  "baz": {
    "lur": 3
  }
}
Mads Hvelplund avatar
Mads Hvelplund

the former is the output of a for loop that groups items with the ellipsis operator

2019-10-09

maarten avatar
maarten
output "merged_tags" {  
   value = merge(local.list_of_maps...)
}
Mads Hvelplund avatar
Mads Hvelplund

Oh, decomposition is supported like js
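
Putting the two messages together, a minimal sketch (the local names are hypothetical):

locals {
  list_of_maps = [
    {
      foo = {
        bar = 1
        bob = 2
      }
    },
    {
      baz = {
        lur = 3
      }
    }
  ]

  # "..." expands the list into separate arguments to merge()
  merged = merge(local.list_of_maps...)
  # => { foo = { bar = 1, bob = 2 }, baz = { lur = 3 } }
}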

Gocho avatar

Hi @all Not sure if it is the right place to ask, but I’ll take my chance!

Say I have 3 terraform repositories, A, B and… tadam: C! Both B & C rely on A’s state (mutualization of some components):

      /----- B
A ----
      \----- C

(Please note my graphical skills )

All repositories are in terraform 0.11, but I would like to update to 0.12. What would be the best way to achieve that (and avoid conflicts)? Can I simply update B, then A and C? Or should I follow a particular order?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If I recall correctly, you’ll want to run terraform apply with the latest version of 0.11 which will ensure state compatibility with 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then you ought to be able to upgrade the projects in any order after that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

perhaps someone in #terraform-0_12 has better suggestions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you haven’t reviewed this, make sure you do that first.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Pattern to handle optional dynamic blocks

What’s a good way to handle optional dynamic blocks, depending on existence of map keys? Example: Producing aws_route53_record resources, where they can have either a “records” list, or an “alias” block, but not both. I’m using Terraform 0.12.6. I’m supplying the entire Route53 zone YAML as a variable, through yamldecode(): zone_id: Z987654321 records: - name: route53test-plain.example.com type: A ttl: 60 records: - 127.0.0.1 - 127.0.0.2 - name: route53test-alias.example.com type…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

When did HashiCorp launch https://discuss.hashicorp.com/?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s long overdue!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, they have very nice posts in there

2019-10-10

sarkis avatar

i think it’s pretty recent, June

sarkis avatar

learn.hashicorp.com is getting really good too - but most of the material is too introductory for now

Hemanth avatar
Hemanth

Trying to reference an instance.id from a different directory - erroring out

./tf
./tf/dir1/ -> (trying to use aws_instance.name.id in ->./tf/modules/cloud/myfile.tf)
./tf/envs/sandbox/ -> (running plan here)
./tf/modules/cloud -> myfile.tf 

Tried adding "${aws_instance.name.id}" to outputs.tf in ./tf/dir1/ and using that value in ./tf/modules/cloud/myfile.tf. That throws a new error like:

-unknown variable accessed: var.profile in:${var.profile}

How do i go about this ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module /tf/modules/cloud should have variables.tf and outputs.tf

Hemanth avatar
Hemanth

@Andriy Knysh (Cloud Posse) yes those two files do exist in /tf/modules/cloud

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the main file that uses the module provides vars to the module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and uses its outputs in its own outputs

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Hemanth avatar
Hemanth

bit confusing, but going to look into it. Thanks for sharing those @Andriy Knysh (Cloud Posse) appreciate your help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

consider the variables and outputs as chains

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the variables go from top level modules to low level modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the outputs go from low level modules to top level modules
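
A hedged sketch of that chain, using hypothetical names close to the layout above and assuming the aws_instance is defined in the root configuration doing the apply:

# ./tf/modules/cloud/variables.tf — the module declares what it needs
variable "instance_id" {
  type = string
}

# ./tf/modules/cloud/outputs.tf — and exposes what callers may need
output "instance_id" {
  value = var.instance_id
}

# ./tf/envs/sandbox/main.tf — the root module wires the chain together
resource "aws_instance" "name" {
  ami           = var.ami_id # assumed
  instance_type = "t3.micro"
}

module "cloud" {
  source      = "../../modules/cloud"
  instance_id = aws_instance.name.id
}

output "cloud_instance_id" {
  value = module.cloud.instance_id
}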

johncblandii avatar
johncblandii

@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/pull/34

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @johncblandii we’ll review

party_parrot1
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Guys and Gals, is there a TF resource to control AWS SSO?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

I can’t seem to find it

loren avatar

i don’t think there is even a public API for AWS SSO, is there?

loren avatar

my take so far on AWS SSO is basically, just use any other federation provider

2
Phuc avatar

Hi guys, is there anyone familiar with IAM roles and instance profiles? I have a case like this: I would like to create an instance profile with a suitable policy to allow access to an ECR repo (including downloading images from ECR). Then I attach that instance profile to a launch configuration to spin up an instance. The reason I mentioned the policy for ECR is that I would like to set up the aws credential helper on the instance to use with Docker (cred-helper) when it launches, so that when the instance wants to pull an image from ECR, it won’t need AWS credentials on the host itself. I would like to put all of that in Terraform module format as well. Any help would be appreciated so much.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Phuc see this example on how to create instance profile https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/main.tf#L69

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Phuc avatar

Hi aknysh


Phuc avatar

the main point I focus on

Phuc avatar

is whether an instance launched by the launch configuration with that instance profile

Phuc avatar

contains AWS credentials to access ECR yet

Phuc avatar

it’s kind of complex for my case. I tried a lot, but the previously working way for me was that ~/.aws/credentials must exist on the host.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not sure what the issue is, but you create a role, assign the required permissions to the role (ECR etc.), add an assume_role document for the EC2 service to assume the role, and finally create an instance profile from the role

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you use the instance profile in launch configuration or launch template
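
A hedged sketch of that chain (role -> instance profile -> launch configuration); names are placeholders, and the managed policy shown is AWS’s standard read-only ECR policy:

data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecr_pull" {
  name               = "ecr-pull" # placeholder
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

resource "aws_iam_role_policy_attachment" "ecr_read" {
  role       = aws_iam_role.ecr_pull.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_iam_instance_profile" "ecr_pull" {
  name = "ecr-pull"
  role = aws_iam_role.ecr_pull.name
}

resource "aws_launch_configuration" "app" {
  name_prefix          = "app-"
  image_id             = var.ami_id # assumed
  instance_type        = "t3.medium"
  iam_instance_profile = aws_iam_instance_profile.ecr_pull.name
}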

Phuc avatar

Do you know how aws-cred-helper and docker login work together?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Phuc avatar

https://github.com/awslabs/amazon-ecr-credential-helper. I want to achieve the same result but with IAM only

awslabs/amazon-ecr-credential-helper

Automatically gets credentials for Amazon ECR on docker push/docker pull - awslabs/amazon-ecr-credential-helper

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://github.com/awslabs/amazon-ecr-credential-helper#prerequisites says it works with all standard credential locations, including IAM roles (which will be used in case of EC2 instance profile)

awslabs/amazon-ecr-credential-helper

Automatically gets credentials for Amazon ECR on docker push/docker pull - awslabs/amazon-ecr-credential-helper

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so once you have an instance profile on EC2 where the amazon-ecr-credential-helper is running, the instance profile (role) will be used automatically

Phuc avatar

yeah, I was thinking that too

Phuc avatar

trying to figure out if there is any missing policy for the IAM role

2019-10-11

mmarseglia avatar
mmarseglia

I’m trying to apply a generated certificate from ACM, using DNS validation, via the cloudposse module terraform-aws-acm-request-certificate. The certificate is applied to the load balancer in an Elastic Beanstalk application. The cert gets created and applied to the load balancer before it’s verified, the EB config fails, and then the app is in a failed state, preventing further configuration. To fix this I added an aws_acm_certificate_validation resource to the module and changed the arn output to aws_acm_certificate_validation.cert.certificate_arn. This seems to have fixed my problem.
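
For anyone hitting the same thing, a hedged sketch of that wiring (resource names, variables, and the single validation record are illustrative; the domain_validation_options indexing assumes the 2.x AWS provider, where it is a list):

resource "aws_acm_certificate" "default" {
  domain_name       = var.domain_name
  validation_method = "DNS"
}

resource "aws_route53_record" "validation" {
  zone_id = var.zone_id
  name    = aws_acm_certificate.default.domain_validation_options[0].resource_record_name
  type    = aws_acm_certificate.default.domain_validation_options[0].resource_record_type
  records = [aws_acm_certificate.default.domain_validation_options[0].resource_record_value]
  ttl     = 60
}

# waits until the certificate is actually issued
resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.default.arn
  validation_record_fqdns = [aws_route53_record.validation.fqdn]
}

# consumers (e.g. the Beanstalk load balancer) reference this output,
# which is only available after validation completes
output "arn" {
  value = aws_acm_certificate_validation.cert.certificate_arn
}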

oscar avatar

That looks right to me :) The other solution would have been to force a dependency between the validation resource and the ALB

Jeff Young avatar
Jeff Young

Using this

  source  = "cloudposse/vpc-peering-multi-account/aws"
  version = "0.5.0"

And getting this error.

Error: Missing resource instance key

  on .terraform/modules/vpc_peering_cross_account_devdata/cloudposse-terraform-aws-vpc-peering-multi-account-3cf3f60/accepter.tf line 96, in locals:
  96:   accepter_aws_route_table_ids           = "${distinct(sort(data.aws_route_tables.accepter.ids))}"

Because data.aws_route_tables.accepter has "count" set, its attributes must be
accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    data.aws_route_tables.accepter[count.index]
Jeff Young avatar
Jeff Young

any help is appreciated.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module has not been converted to TF 0.12 yet, but you are using 0.12, which has a stricter type system

Jeff Young avatar
Jeff Young

FWIW. I actually just commented this out and proceeded and seem to have my VPC peering working …

# Lookup accepter route tables
data "aws_route_tables" "accepter" {
  # count    = "${local.count}"
Jeff Young avatar
Jeff Young

So commenting out the count

Jeff Young avatar
Jeff Young

got me further down the road with the module and Terraform 0.12.

Jeff Young avatar
Jeff Young

Thanks.

2019-10-12

maarten avatar
maarten

Hi everyone, I’m helping a friend with Azure and it’s quite different from AWS. Much appreciated if someone could tell me what the scope of an azurerm_resource_group normally is or should be, best-practice-wise.

It’s unclear to me if I should fit all resources of one project in there, or group by type, like DNS-related, network, etc. Thanks.

github140 avatar
github140

We use dedicated resource groups for different use cases.

maarten avatar
maarten

Would you like to give me an example ?

maarten avatar
maarten

or multiple, an example per different use-case ..

github140 avatar
github140

a) use a HPC resource group for HPC workload incl. network, storage, vms, limits, admin permissions b) SAP resource group for SAP workload incl. network, storage, blob storage, vms, databases. c) NLP processing for the implementation of NLP services d) HUB resource group which includes the HUB VPN endpoint and shared services like domain controllers.

maarten avatar
maarten

gotcha, thanks

2019-10-13

guimin avatar

Hi here, I have a question about the Terraform registry: the NEXT button does not work on the page https://registry.terraform.io/browse/modules?offset=109&provider=aws. Why does the registry only show at most 109 modules?

guimin avatar

Even if I use curl https://registry.terraform.io/v1/modules?limit=100&offset=200&provider=aws, the results are the same as before.

loren avatar

perhaps that’s just the end of registered aws modules?

loren avatar

oh hmm… the registry home page says there are 1198 aws modules…

guimin avatar

Hi @loren, it seems like all providers have the same issue.

loren avatar

yeah, maybe look for or open an issue on the terraform github

loren avatar
Terraform Registry Pagination bug · Issue #22380 · hashicorp/terraform

I was implementing the pagination logic client side in a PowerShell module when I noticed there is a pagination error. The returned offset in the meta is not updated over 115. As you can see, the r…

guimin avatar

Thanks

2019-10-14

Cloud Posse avatar
Cloud Posse
04:02:20 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 23, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

Sharanya avatar
Sharanya

Hello All, quick question - did anyone come across setting up an option group for a MySQL DB in RDS in Terraform?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Matt Gowie avatar
Matt Gowie

Hey @Andriy Knysh (Cloud Posse) — saw your comment on https://github.com/terraform-providers/terraform-provider-aws/issues/3963 for the cp EBS module. I’m using it and I see it no longer trying to apply the Name tag, but I’m still running into issues with receiving that Service:AmazonCloudFormation, Message:No updates are to be performed. + Environment tag update failed. error when I do an apply. It looks like the Namespace tag isn’t applying each time — have you seen that before?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we deleted the Namespace tag from elasticbeanstalk-environment (latest release)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it was a problem as well, similar to the Name tag

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in fact, AWS checks if tags contain anything with the string Name in it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

probably using regex

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Regular Expressions: Now You Have Two Problems

a blog by Jeff Atwood on programming and human factors

1
Matt Gowie avatar
Matt Gowie

Ahaaa 4 days ago — I’m just the tiniest bit behind the curve. Awesome — Thank you! That module is great stuff. Allowing me to bootstrap this project quite easily!

Matt Gowie avatar
Matt Gowie

@Andriy Knysh (Cloud Posse) Similar to the EBS environment module’s issue: The application resource module has a similar problem, it just doesn’t cause an error. It causes a change to that resource to be made on each apply. Put up a PR to fix: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application/pull/17

Removes the "Namespace" tag due to EBS limitation by Gowiem · Pull Request #17 · cloudposse/terraform-aws-elastic-beanstalk-application

Including the "Namespace" tag to the beanstalk-application resource required a change for each apply as the tag is never applied. Removing that tag causes the beanstalk-application to avo…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Matt Gowie, merged. Wanted to do it, but since it does not cause an error and updating an EBS app takes a few secs, never got to it

Matt Gowie avatar
Matt Gowie

Glad to help out! Would like to do more where I can since these EBS modules were such a huge help.

2019-10-15

mmarseglia avatar
mmarseglia

has anyone else had that same dependency issue with https://github.com/cloudposse/terraform-aws-acm-request-certificate

cloudposse/terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate

maarten avatar
maarten

https://github.com/Flaconi/terraform-aws-alb-redirect - For everyone who deals with quite a lot of 302/301 redirects like apex -> www and wants to solve it outside of nginx or S3, here you go. The next release will have optional VPC creation.

Flaconi/terraform-aws-alb-redirect

HTTP 301 and 302 redirects made simple utilising an ALB and listener rules. - Flaconi/terraform-aws-alb-redirect

2
nukepuppy avatar
nukepuppy

is it still viable to contribute to the 0.11 branch?

nukepuppy avatar
nukepuppy

having an issue where I’d like to not have the module create its own security group; it would be good to have that optional if possible

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

branch 0.11/master is for TF 0.11

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

fork from it and open a PR

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we’ll merge it into 0.11/master branch

nukepuppy avatar
nukepuppy

@Andriy Knysh (Cloud Posse) cool sounds good

Matt Gowie avatar
Matt Gowie

Hey folks, what is the community approach to running SQL scripts against a new RDS cluster? remote-exec seems not great since it requires that the SG opens up to the calling machine.

Matt Gowie avatar
Matt Gowie

I see https://github.com/fvinas/tf_rds_bootstrap — But that seems old. Wondering if my google-fu is not finding the right tool for this job.

fvinas/tf_rds_bootstrap

A terraform module to provide a simple AWS RDS database bootstrap (typically running SQL code to create users, DB schema, …) - fvinas/tf_rds_bootstrap

maarten avatar
maarten

@Matt Gowie There are MySQL and Postgres providers which can create roles inside RDS, something for you?
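
A hedged sketch of that provider-based approach using the community MySQL provider (the endpoint, credentials, and names are placeholders):

provider "mysql" {
  endpoint = aws_db_instance.default.endpoint # assumed RDS resource name
  username = var.admin_username
  password = var.admin_password
}

resource "mysql_database" "app" {
  name = "app"
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = mysql_database.app.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}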

Matt Gowie avatar
Matt Gowie

Yeah, possibly. Mind sending me a link?

Matt Gowie avatar
Matt Gowie

Thanks Maarten — Will check those out.

jose.amengual avatar
jose.amengual

yet another PR https://github.com/cloudposse/terraform-aws-rds-cluster/pull/57 Aurora RDS backtrack support

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @jose.amengual, commented on the PR

jose.amengual avatar
jose.amengual

added the changes but I’m having an issue with make readme

jose.amengual avatar
jose.amengual
curl --retry 3 --retry-delay 5 --fail -sSL -o /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs https://github.com/segmentio/terraform-docs/releases/download/v0.4.5/terraform-docs-v0.4.5-darwin-amd64 && chmod +x /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs
make: gomplate: No such file or directory
make: *** [readme/build] Error 1
jose.amengual avatar
jose.amengual

I upgraded my mac to Catalina, it could be related to that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

did you run

make init
make readme/deps
make readme
jose.amengual avatar
jose.amengual

yes

jose.amengual avatar
jose.amengual

well, I installed it by hand: brew install gomplate

jose.amengual avatar
jose.amengual

and it worked

2019-10-16

julien M. avatar
julien M.

Hello, I have written this aws_autoscaling_policy:

resource "aws_autoscaling_policy" "web" {
  name                   = "banking-web"
  policy_type            = "TargetTrackingScaling"
  autoscaling_group_name = "${aws_autoscaling_group.web.name}"
  policy_type = "TargetTrackingScaling"

  target_tracking_configuration {
    customized_metric_specification {
      metric_dimension {
        name  = "LoadBalancer"
        value = "${aws_lb.banking.arn_suffix}"
      }

      metric_name = "TargetResponseTime"
      namespace   = "AWS/ApplicationELB"
      statistic   = "Average"
    }

    target_value = 0.400
  }
}

but I’m looking to modify this value with this type of ScalingPolicy:

julien M. avatar
julien M.
cytopia avatar
cytopia

Hi everybody,

I have the following list variable in Terraform 0.11.x in terraform.tfvars defined:

mymaps = [
  {
    name = "john"
    path = "/some/dir"
  },
  {
    name = "pete"
    path = "/some/other/dir"
  }
]

Now in my module’s main.tf I want to extend this variable and store it as a local.

The logic is something like this:

if mymaps[count.index]["name"] == "john"
    mymaps[count.index]["newkey"] = "somevalue"
endif

In other words, if any element’s name key inside the list has a value of john, add another key/val pair to the john dict.

The resulting local should look like this

mymaps = [
  {
    name   = "john"
    path   = "/some/dir"
    newkey = "somevalue"
  },
  {
    name = "pete"
    path = "/some/other/dir"
  }
]

Is this somehow possible with null_resource (as they have the count feature) and locals?

Robert avatar

@here could people thumbs up this on GitHub so we can get CloudWatch anomaly detection metrics in terraform?

Robert avatar
AWS CloudWatch Alarm - Anomaly detection · Issue #9293 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

Robert avatar
AWS CloudWatch Alarm - Anomaly detection by hakopako · Pull Request #9828 · terraform-providers/terraform-provider-aws

Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave &quot;…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@here public #office-hours starting now! join us to talk shop zoom https://zoom.us/j/508587304

1
Brij S avatar

I have a general terraform layout question. When I first started with terraform and got an ‘understanding’ for modules, I created a module called ‘bootstrap’. This module includes the creation of ACM certs, Route53 zones, IAM users/roles, S3 buckets, Firehose, and a CloudFront OAI. I now realize that modules should probably be smaller than this. What I want to know is: for people who bootstrap new AWS accounts using TF, do you have a bunch of smaller modules (acm, route53, firehose, etc.) and then, for one-off things like some IAM roles/users, just include those resources alongside the modules in a terraform file?

Brij S avatar

for example:

module {}

module {}

resource "aws_iam_user" "example" {}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Brij S take a look at these threads on similar topics, might be of some help to you to understand how we do that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Hi guys, do any of you have experience with maintenance of SaaS environments? What I mean is separate dev, test, prod environments for every customer. In my case, those environments are very similar, at least the core part, which includes vnet, web apps in Azure, VMs, storage… All those components are currently written as modules, but what I’m thinking about is creating one more module on top of them, called e.g. myplatform-core. The reason I want to do that is that instead of copying and pasting puzzles of modules between environments, I could simply create an env just by creating/importing my myplatform-core module and passing some vars like name, location, and some scaling properties. Any thoughts about it: is it a good or bad idea in your opinion?

I appreciate your input.

jose.amengual avatar
jose.amengual

is there any reason why this module is not upgraded to 0.12 ? https://github.com/cloudposse/terraform-aws-ecs-alb-service-task can I have a stab at it ?

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

sarkis avatar

This module was a very specific use case, I don’t think anyone would have an issue with you taking a stab at it, would need to 0.12 all the things eventually!

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

jose.amengual avatar
jose.amengual

we use it a lot, it is pretty opinionated. I would like to add a few more options to make it a bit more flexible

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have 100+ modules to upgrade to TF 0.12

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this one is next in line, it will be updated next week

jose.amengual avatar
jose.amengual

is in the line to be updated next week ? really ?

jose.amengual avatar
jose.amengual

well, if that is the case then it will save me some work

jose.amengual avatar
jose.amengual

but for real @Andriy Knysh (Cloud Posse) is scheduled for next week ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

jose.amengual avatar
jose.amengual

is this xmas?

jose.amengual avatar
jose.amengual

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

haha

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual @Andriy Knysh (Cloud Posse) has been tearing it up

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

he just converted all the beanstalk modules and dependencies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then converted jenkins

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

added terratest to all of it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

jenkins by the way is a real beast!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

our main hold-up is we’re only releasing terraform 0.12 modules that have automated terratests

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is the only way we can continue to keep the scale we have of modules

jose.amengual avatar
jose.amengual

I totally agree and yes I saw that jenkins module

jose.amengual avatar
jose.amengual

is huge

Conrad Kurth avatar
Conrad Kurth

hey everyone, I have maybe a simple question, but whenever I am running a terraform plan with https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment?ref=master it is always updating, even after it is applied. Can someone point me in the right direction?

- setting {
          - name      = "EnvironmentType" -> null
          - namespace = "aws:elasticbeanstalk:environment" -> null
          - value     = "LoadBalanced" -> null
        }
      + setting {
          + name      = "EnvironmentType"
          + namespace = "aws:elasticbeanstalk:environment"
          + value     = "LoadBalanced"
        }
      - setting {
          - name      = "HealthCheckPath" -> null
          - namespace = "aws:elasticbeanstalk:environment:process:default" -> null
          - value     = "/healthz" -> null
        }
      + setting {
          + name      = "HealthCheckPath"
          + namespace = "aws:elasticbeanstalk:environment:process:default"
          + value     = "/healthz"
        }
      - setting {
          - name      = "HealthStreamingEnabled" -> null
          - namespace = "aws:elasticbeanstalk:cloudwatch:logs:health" -> null
          - value     = "false" -> null
        }
      + setting {
          + name      = "HealthStreamingEnabled"
          + namespace = "aws:elasticbeanstalk:cloudwatch:logs:health"
          + value     = "false"
        }
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

unfortunately, there is no right direction here. There are many bugs in the aws provider related to how EB environment handles settings

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the new version tries to recreate all 100% of the settings regardless of how you arrange them

Conrad Kurth avatar
Conrad Kurth

nice

Conrad Kurth avatar
Conrad Kurth

so everyone has to live with this

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

0.11 version at least tried to recreate only those settings not related to the type of environment you built

Conrad Kurth avatar
Conrad Kurth

gotcha

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they have to fix those bugs in the provider

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are many others

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Error: Provider produced inconsistent final plan · Issue #10297 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Provider produced inconsistent final plan · Issue #7987 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that was going on for months and still same issues

Conrad Kurth avatar
Conrad Kurth

ahhh

Conrad Kurth avatar
Conrad Kurth

thank you for the detailed explanation!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we have to do this in our tests for Jenkins which runs on EB https://github.com/cloudposse/terraform-aws-jenkins/blob/master/test/src/examples_complete_test.go#L29

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(the issue with dynamic blocks in settings)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

at least solved by applying twice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but that does not solve the issue with TF trying to recreate 100% of settings regardless if they are static or defined in dynamic blocks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

at least I did not find a solution

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ping me if you find anything

Conrad Kurth avatar
Conrad Kurth

hmmm interesting

Conrad Kurth avatar
Conrad Kurth

thanks for all the links!

Conrad Kurth avatar
Conrad Kurth

and will do

kskewes avatar
kskewes

Hey everyone, thanks so much for your work. New to EKS (moving from IBM) but am curious about subnetting with the CP eks-cluster module (and dependencies - vpc, subnets, workers, etc) and understanding how it works. If we assign maximum size cidr_block of 10.0.0.0/16 to vpc then we will get:

  1. 3x ‘public’ (private) subnets 10.0.[0|32|64].0/19 - one per AZ? This contains any public ALB/NLB’s internal IP? What if we provision a private LoadBalancer K8s Service? I will test.
  2. 3x ‘private’ subnets 10.0.[96|128|160].0/19 - one per AZ? These will be used by the k8s worker node ASGs.
  3. The cluster seems to use 172.x.y.z/? (RFC1918 somewhere) for the K8s Pod IPs and the K8s Service IPs. This makes me think we are doing IPIP in the cluster.
  4. The remaining 10.0.192.0/18 is free for us to use, say with separate non-K8s ASGs or perhaps SaaS VPC endpoints (RDS/MQ/etc.) that we want in the same VPC?

Is this all correct? Whilst #1 above seems like a large range for a few services, it’s not like IP addresses are scarce. We haven’t added Calico yet.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have this working/tested example of EKS cluster with workers https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

as you can see, it uses the VPC module and subnets module to create VPC and subnets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are many ways of creating subnets in a VPC

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

https://github.com/cloudposse/terraform-aws-dynamic-subnets is one opinionated approach which creates one public and one private subnet per AZ that you provide (we use that almost everywhere with kops/k8s)

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the test shows what CIDRs we are getting for the public and private subnets https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/test/src/examples_complete_test.go#L42

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can divide your VPC into as many subnets as you want/need

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not related to the EKS module, which just accepts subnet IDs regardless of how you create them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have a few modules for subnets, all of them divide a VPC differently https://github.com/cloudposse?utf8=%E2%9C%93&q=subnets&type=&language=

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, definitely no one-size-fits all strategy for subnets

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s one of the more opinionated areas, especially in larger organizations

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Pick what makes sense for your needs but have a look at our different subnet module that implement different strategies to slice and dice subnets

kskewes avatar
kskewes

oh wow, awesome, I’ll dive through the different subnets and tests above. Thank so much for the response. A+++

1

2019-10-17

oscar avatar

How do you set the var-file for Terraform on the CLI? For instance:

export TF_VAR_name=oscar
export TF_VAR_FILE=environments/oscar/terraform.tfvars == -var-file=environments/oscar/terraform.tfvars

roth.andy avatar
roth.andy
Input Variables - Configuration Language - Terraform by HashiCorp

Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.

oscar avatar

Thanks but.. not quite

oscar avatar

I wasn’t quite clear enough

oscar avatar

I meant as an ENV variable

oscar avatar

like in the examples above

roth.andy avatar
roth.andy

you can’t. Terraform would see TF_VAR_FILE as a variable called FILE

oscar avatar

Yup. Just an example of what it might be called in case anyone has come across it.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@oscar maybe use a yaml config instead?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
locals {
  env = yamldecode(file("${terraform.workspace}.yaml"))
}
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or instead of terraform.workspace, use any TF_VAR_…
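A minimal sketch of that variant, assuming a stage variable supplied as TF_VAR_stage and per-environment YAML files (the file layout and names here are illustrative):

variable "stage" {} # export TF_VAR_stage=oscar

locals {
  env = yamldecode(file("environments/${var.stage}.yaml"))
}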

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@oscar since you’re using geodesic + tfenv, you can do TF_CLI_PLAN_VAR_FILE=foobar.tfvars

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

or if you want to do stock terraform, you can do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

TF_CLI_ARGS_plan=-var-file=foobar.tfvars

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Environment Variables - Terraform by HashiCorp

Terraform uses environment variables to configure various aspects of its behavior.

oscar avatar

Can’t seem to find documentation on doing this

Laurynas avatar
Laurynas

Hi, how can I read a list in Terraform from a file? https://github.com/cloudposse/terraform-aws-ecs-container-definition/blob/master/examples/string_env_vars/main.tf I’d like to pass ENV variables from a file, e.g. env.tpl:

[
  { name = "string_var",             value = "123" },
  { name = "another_string_var",     value = "true" },
  { name = "yet_another_string_var", value = "false" },
]

data "template_file" "policy" {
  template = "${file("ci.env.json")}"
}

environment = data.template_file.policy.rendered

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
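One option in Terraform 0.12, rather than rendering a template, is to keep the environment variables as JSON and decode them with jsondecode(); a minimal sketch (the file name is illustrative):

locals {
  # ci.env.json contains: [{"name": "string_var", "value": "123"}, ...]
  container_environment = jsondecode(file("${path.module}/ci.env.json"))
}

output "container_environment" {
  value = local.container_environment
}

The resulting list can then be passed to the module’s environment input.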

di.khan.r avatar
di.khan.r

Hi folks, if you have a service exposing an API documented using OpenAPI and are thinking of creating a terraform provider for it, you might find the following plugin useful:

Link to the repo: https://github.com/dikhan/terraform-provider-openapi

The OpenAPI Terraform Provider dynamically configures itself at runtime with the resources exposed by the service provider (defined in an OpenAPI document, formerly known as swagger file).

dikhan/terraform-provider-openapi

OpenAPI Terraform Provider that configures itself at runtime with the resources exposed by the service provider (defined in a swagger file) - dikhan/terraform-provider-openapi

2019-10-18

Matt Gowie avatar
Matt Gowie

Hey folks — I keep struggling with the idea of bootstrapping an RDS database with a user, schema, etc. after it is initially created. I can use local-exec to invoke psql with sql scripts, but that kind of stinks, and there seems to be heavy lifting involved to use something like the OS Ansible Provider (https://github.com/radekg/terraform-provisioner-ansible). What I’m trying to get at: Is there a better way? I’d like a way to easily bootstrap on RDS instance or cluster creation + run migrations appropriately.

radekg/terraform-provisioner-ansible

Marrying Ansible with Terraform 0.12.x. Contribute to radekg/terraform-provisioner-ansible development by creating an account on GitHub.

sarkis avatar

Have you looked into the postgresql provider? https://www.terraform.io/docs/providers/postgresql/index.html. This will work better than local-exec.

Provider: PostgreSQL - Terraform by HashiCorp

A provider for PostgreSQL Server.
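As a rough sketch of what that looks like for Postgres (connection details and names are illustrative, and the provider needs network access to the RDS endpoint):

variable "master_password" {}
variable "app_user_password" {}

provider "postgresql" {
  host     = "my-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com" # illustrative endpoint
  port     = 5432
  username = "postgres"
  password = var.master_password
  sslmode  = "require"
}

resource "postgresql_role" "app" {
  name     = "app_user"
  login    = true
  password = var.app_user_password
}

resource "postgresql_database" "app" {
  name  = "app"
  owner = postgresql_role.app.name
}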

Matt Gowie avatar
Matt Gowie

@sarkis It doesn’t seem to allow me to run SQL files against the remote DB or am I missing that functionality?

sarkis avatar

i wouldn’t do this with terraform, the provider allows you to setup user, database and schema

sarkis avatar

any sql past that like init or migration should not be in TF imo

Matt Gowie avatar
Matt Gowie

Yeah, I’m going the ansible route.

Matt Gowie avatar
Matt Gowie

Thanks for weighing in @sarkis — just confirming I shouldn’t go that route is great.

sarkis avatar

I think that would work better in that Ansible may help in making that stuff idempotent; there could be potentially bad things happening if you keep applying SQL every time TF applies

sarkis avatar

via local-exec - i’m not certain how you can easily achieve this .. i haven’t looked at the ansible provisioner, but i assume that is possible… basically you want to only run the sql once right?

Matt Gowie avatar
Matt Gowie

Yeah, I need a way to query a database_version column and then run migration files off of that. It’s easy to do that in Ansible. Just being new to TF, I didn’t know where to draw the line. Everyone says TF is not for the actual provisioning of resources, so this makes sense.

1
maarten avatar
maarten

@Matt Gowie how do you plan to deploy the actual software ? Is it docker orchestrated ?

2019-10-19

Alex Siegman avatar
Alex Siegman

I must be blind - can’t find anywhere in the terraform docs how to add an IP to a listener target group with target_type set to ip for a network load balancer
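For reference, IP targets are registered on the target group itself (not the listener) with aws_lb_target_group_attachment; a minimal sketch for an NLB, where the names, port, and address are illustrative:

variable "vpc_id" {}

resource "aws_lb_target_group" "this" {
  name        = "example-tcp"
  port        = 443
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = var.vpc_id
}

resource "aws_lb_target_group_attachment" "ip" {
  target_group_arn = aws_lb_target_group.this.arn
  target_id        = "10.0.12.34" # the IP address to register
  port             = 443
  # availability_zone = "all" # only needed when the IP is outside the target group's VPC
}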

2019-10-20

Andrea avatar

hi, I get “An argument named “tags” is not expected here” when trying to use tags in “aws_eks_cluster”

Andrea avatar

according to the docs it is supported, not sure how to debug/further investigate what’s going on… any tip?

Andrea avatar

I’m using the latest TF version, and the syntax is simply

tags = {
    Name = "k8s_test_masters"
  }
github140 avatar
github140

@Andrea Do you use aws provider version >=2.31 ?

Andrea avatar

Hi @github140 I was on 2.29 actually…

Andrea avatar

I removed it and run “terraform init”, and I got

provider.aws: version = "~> 2.30"
Andrea avatar

but no 2.31, and I still get the “tags” error…

Andrea avatar

hold on a sec, I’ve updated to 2.33 and the tags are now showing up in the TF plan command

Andrea avatar

thanks so much!!

Andrea avatar

can I just ask how you knew that at least version 2.31 was needed please?

Andrea avatar

this might help me the next time I fall into a similar issue..

github140 avatar
github140
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

Andrea avatar

fair enough thanks @github140!
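For anyone else hitting this, pinning a floor on the provider version makes the requirement explicit in code; a minimal sketch (the 2.31 floor comes from the thread above; the region is illustrative):

provider "aws" {
  version = ">= 2.31" # aws_eks_cluster tags need a newer provider, per the thread above
  region  = "us-east-1"
}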

Richy de la cuadra avatar
Richy de la cuadra

hey there, how can I iterate two levels? I want to get values from a list inside another list, e.g.

count = length(var.main_domains)
count_alias = length(var.main_domains[count.index].aliases)

like that

maarten avatar
maarten

https://github.com/hashicorp/hcl2/blob/master/hcl/hclsyntax/spec.md#splat-operators tuple[*].foo.bar[0] is approximately equivalent to [for v in tuple: v.foo.bar[0]]

hashicorp/hcl2

Former temporary home for experimental new version of HCL - hashicorp/hcl2
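For the two-level case specifically, the usual 0.12 pattern is a nested for expression flattened into one list that can then drive count or for_each; a minimal sketch, assuming each element of main_domains has name and aliases attributes:

variable "main_domains" {
  type = list(object({
    name    = string
    aliases = list(string)
  }))
}

locals {
  domain_aliases = flatten([
    for domain in var.main_domains : [
      for alias in domain.aliases : {
        domain = domain.name
        alias  = alias
      }
    ]
  ])
}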

2019-10-21

Cloud Posse avatar
Cloud Posse
04:02:48 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Oct 30, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

dalekurt avatar
dalekurt

I’m interested in hearing about some directory structure being used to manage terraform projects.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Let’s review this Wednesday on #office-hours

1
Phuc avatar

Hi guys, has anyone tried updating the ECS module to attach multiple target groups and add an additional port in the task definition in Terraform?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are working on upgrading the ECS modules to TF 0.12. If you want these new features added, please open an issue in the repo and describe the required changes in more detail

2019-10-22

Michael Warkentin avatar
Michael Warkentin

Anyone using the terraform-aws-sns-lambda-notify-slack module? It doesn’t seem to include any outputs which makes it tricky to integrate with the things we want to feed events from. Opened https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack/issues/8

Missing outputs · Issue #8 · cloudposse/terraform-aws-sns-lambda-notify-slack

Hi, this module looks helpful to get slack notifications hooked up, however it&#39;s missing some outputs to make it easy to wire into other resources. My current use case is configuring a cloudwat…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This module is a wrapper around another module

Missing outputs · Issue #8 · cloudposse/terraform-aws-sns-lambda-notify-slack

Hi, this module looks helpful to get slack notifications hooked up, however it&#39;s missing some outputs to make it easy to wire into other resources. My current use case is configuring a cloudwat…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you are blocked, you might want to use the upstream module directly

1
AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

did you guys see this?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks for sharing

Matt Gowie avatar
Matt Gowie

Anyone know a way to make S3 bucket transitions configurable via variables while also supporting not including those transitions at all?

For example, I have the following module resource:

resource "aws_s3_bucket" "media_bucket" {
  bucket        = module.label.id
  region        = var.region
  acl           = "private"
  force_destroy = true
  tags          = module.label.tags

  versioning {
    enabled = var.versioning_enabled
  }

  lifecycle_rule {
    id      = "${module.label.id}-lifecycle"
    enabled = var.enable_lifecycle
    tags    = module.label.tags

    transition {
      days          = var.transition_to_standard_ia_days
      storage_class = "STANDARD_IA"
    }

    transition {
      days          = var.transition_to_glacier_days
      storage_class = "GLACIER"
    }

    expiration {
      days = var.expiry_days
    }
  }
}

Is there a way that I can pass values to the vars expiry_days or transition_to_* and have them not apply? I want to be able to turn those transitions on for some environments, but keep them off for others.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

use dynamic blocks for transition and expiration with condition to check those variables (if the vars are present, enable the dynamic blocks), similar to https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/main.tf#L39

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
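Applied to the bucket above, the idea is roughly the following drop-in replacement for one of the static transition blocks inside lifecycle_rule (a sketch, assuming a value of 0 means the transition is disabled):

dynamic "transition" {
  for_each = var.transition_to_glacier_days > 0 ? [var.transition_to_glacier_days] : []
  content {
    days          = transition.value
    storage_class = "GLACIER"
  }
}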

Matt Gowie avatar
Matt Gowie

Good stuff @Andriy Knysh (Cloud Posse) — Will check that out. Thanks!

2019-10-23

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

hi guys, does anyone have a real working out of the box openvpn aws terraform module?

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

if not i’ll make one and share

javier.vivanco avatar
javier.vivanco
terraform-community-modules/tf_aws_openvpn

Terraform module which creates OpenVPN on AWS. Contribute to terraform-community-modules/tf_aws_openvpn development by creating an account on GitHub.

javier.vivanco avatar
javier.vivanco
lmammino/terraform-openvpn

A sample terraform setup for OpenVPN using Let’s Encrypt and Certbot to generate certificates - lmammino/terraform-openvpn

javier.vivanco avatar
javier.vivanco

Hi, I haven’t tried them. but look like what you’re looking for.

AgustínGonzalezNicolini avatar
AgustínGonzalezNicolini

Thanks!

Julio Tain Sueiras avatar
Julio Tain Sueiras

@Andriy Knysh (Cloud Posse) so as I mentioned to Erik in the office hours, Sourcegraph reached out to me and I am building an LSIF indexer for terraform

Julio Tain Sueiras avatar
Julio Tain Sueiras

basically it will allow web browser based code navigation for terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

sounds interesting, thanks for sharing

Mike Schueler avatar
Mike Schueler

hey all, thanks for open sourcing your terraform examples/modules. they’re a huge help.

I’m trying to use terraform-multi-az but just had a question: how do I define the CIDR I want to use? It’s a little unclear to me what this is doing:

locals {
  public_cidr_block  = cidrsubnet(module.vpc.vpc_cidr_block, 1, 0)
  private_cidr_block = cidrsubnet(module.vpc.vpc_cidr_block, 1, 1)
}
Andrew Jeffree avatar
Andrew Jeffree
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Andrew Jeffree avatar
Andrew Jeffree

so it takes the cidr from the vpc module and then splits it up using the https://www.terraform.io/docs/configuration/functions/cidrsubnet.html function for private/public. The resulting cidr blocks are then broken up further when the subnets are actually created. https://github.com/cloudposse/terraform-aws-multi-az-subnets/blob/master/private.tf#L17-L21

cidrsubnet - Functions - Configuration Language - Terraform by HashiCorp

The cidrsubnet function calculates a subnet address within a given IP network address prefix.

cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets
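For illustration, those two locals just split the VPC range in half; with a 10.0.0.0/16 VPC they evaluate to:

locals {
  vpc_cidr           = "10.0.0.0/16"
  public_cidr_block  = cidrsubnet(local.vpc_cidr, 1, 0) # => "10.0.0.0/17"
  private_cidr_block = cidrsubnet(local.vpc_cidr, 1, 1) # => "10.0.128.0/17"
}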

Mike Schueler avatar
Mike Schueler

Yes Andrew. Ok let me take a look

Mike Schueler avatar
Mike Schueler

is there a complete example somewhere showing how i’d use these to build a vpc from start to finish with everything – vpc, subnets, route tables, security groups, transit gateway attachment, transit gateway route tables, etc? it’s confusing to me how i’d use multiple modules together

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

By design, we’ve taken a different approach than many modules in the community.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We design our modules to be highly composable and solve one thing.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For example, with subnets, there are so many possible subnet strategies, that we didn’t overload the VPC module with that.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

instead, we created (3) modules that implement various strategies

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for example, our experience is there’s no “one size fits all” for organizations in how they want to subnet. it’s one of the more opinionated areas.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

today, we don’t have a transit gateway module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but we have many vpc peering modules

Mike Schueler avatar
Mike Schueler

it seems that using modules is the way to go, but I’m having trouble understanding how I can adapt my old stuff that’s written as individual resources and leverage all the work you guys have already done with modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there might not be a clear path. it might mean taking one piece at a time and redoing it using modules.

Andrew Jeffree avatar
Andrew Jeffree
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

Mike Schueler avatar
Mike Schueler

and that makes sense now regarding the cidr. I thought that multi-az-subnets was meant to bring up the full vpc, but it’s just for the subnets

Andrew Jeffree avatar
Andrew Jeffree

In terms of adapting your old stuff, which is individual resources, to modules: generally you’d want to recreate things using modules where possible, because while you can import/mv your existing terraform state to match modules etc., it’s a lot of work and rarely ends well.

Mike Schueler avatar
Mike Schueler

oh, yeah, i’m not talking about existing infra that’s running. not worried about state.

Mike Schueler avatar
Mike Schueler

thanks.. will look through the examples some more.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

almost all our modules (that have been converted to TF 0.12) have working examples with automated tests on a real AWS account, and many of them use vpc and subnets modules. Examples:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster

Mike Schueler avatar
Mike Schueler

thanks, yeah once you linked the first example i found a bunch of them

Mike Schueler avatar
Mike Schueler

thanks sir

Mike Schueler avatar
Mike Schueler

The EKS cluster is actually what I was trying to create. The only thing I need to add is a Route53 resolver / rules

2019-10-24

Igor avatar

I am having a hard time retrieving an AWS Cognito pool id. data.aws_cognito_user_pools.pool[*].id seems to return the pool name instead. Anyone ran into this?

Joan Hermida avatar
Joan Hermida

[*] ?

Joan Hermida avatar
Joan Hermida

Try .ids instead of .id

Igor avatar

Wasn’t working, but I found the solution. Need to wrap ids in tolist() and then it works. (This is TF 0.12)
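For anyone searching later, the ids attribute comes back as a set here, so it has to be converted before indexing; a minimal sketch (the pool name is illustrative):

data "aws_cognito_user_pools" "pool" {
  name = "my-user-pool"
}

output "user_pool_id" {
  value = tolist(data.aws_cognito_user_pools.pool.ids)[0]
}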

Joan Hermida avatar
Joan Hermida

Uhhh! Nice

2019-10-25

MattyB avatar

Playing around with https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn and getting a 502 when testing it out. My understanding before reading AWS documentation was that I should be able to access the assets through the given cloudfront domain so d1234.cloudfront.net/{s3DirPath} or the alias cdn.example.com/{s3DirPath}. The s3DirPath is static/app/img/favicon.jpg so d1234.cloudfront.net/static/app/img/favicon.png should work

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

MattyB avatar

After reading the AWS documentation https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html it seems like my s3 bucket needs to be less restrictive?

Getting Started with CloudFront - Amazon CloudFront

Get started with these basic steps to start delivering your content using CloudFront.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also

The s3DirPath is static/app/img/favicon.jpg so d1234.cloudfront.net/static/app/img/favicon.png should work

check the file extensions, you have jpg and png for the same file

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, in this module https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn, the S3 bucket is not in the website mode

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you need an S3 website with CloudFront CDN, do something like this https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf

cloudposse/terraform-root-modules

Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules

MattyB avatar

i was typing the paths, not copy pasting. it’s jpg

must be the website mode. thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

website mode and not website mode - you should be able to get your files anyway. Take a look at the example above

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in website mode, the bucket must have public read access

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in normal mode, it does not

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Is there a terraform provider for terraform cloud? I can’t find one

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Provider: Terraform Enterprise - Terraform by HashiCorp

The Terraform Enterprise provider is used to interact with the many resources supported by Terraform Enterprise. The provider needs to be configured with the proper credentials before it can be used.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But that is enterprise

jose.amengual avatar
jose.amengual

I could tell next week after our tf cloud demo

jose.amengual avatar
jose.amengual

I think the client is the same

jose.amengual avatar
jose.amengual

Endpoint is different

jose.amengual avatar
jose.amengual

TFE and cloud are the same except one can be self-hosted and the other one is SaaS

2019-10-26

oscar avatar

Erik I use tf cloud.

TFE is provider & remote is tfe backend

oscar avatar

Tfe == tfc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ok great! Do you know if one can use their tfe provider with tfc without upgrading?

oscar avatar

Not sure how you mean.

The tfe provider requires no changes to be used with tfc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I mean upgrading from a free tier

oscar avatar

Free tfc to the 2nd or 3rd tfc tier? No, I don’t believe so. Just a SaaS package upgrade

oscar avatar

Tfc to tfe though does require changes to the host name in the provider config

jose.amengual avatar
jose.amengual

Yes, I think it is the same client, different hostname (since you host it yourself), and you will have to migrate modules and such, but I guess HashiCorp will do that for you

2019-10-27

2019-10-28

Cloud Posse avatar
Cloud Posse
04:03:38 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Nov 06, 2019 11:30AM.
Register for Webinar
slack #office-hours (our channel)

loren avatar

I remember someone here writing a “serverless” module wholly in terraform, but now I can’t find it… Anyone have that link?

tamsky avatar
blinkist/terraform-aws-airship-ecs-service

Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

Igor avatar

api gateway + lambda?

loren avatar

pretty much, i can’t remember who here posted the module they were working on

loren avatar

i’ll try the sweetops archive

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Maxim Mironenko (Cloud Posse) is working to break down the archive by month

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it will make it easier to find things.

loren avatar

it’s all good, i like the one page per channel, lets find-in-page do its thing

jose.amengual avatar
jose.amengual

I love you @Andriy Knysh (Cloud Posse) https://github.com/cloudposse/terraform-aws-ecs-alb-service-task you made my day!!!!!!!!!!! 0.12 ready..

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

props to @Andriy Knysh (Cloud Posse) !

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

he’ll have the rest finished any day now (today or tomorrow)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @jose.amengual

jose.amengual avatar
jose.amengual

amazing, thanks so much

loren avatar

found it, thank you sweetops archive! https://github.com/rms1000watt/serverless-tf

rms1000watt/serverless-tf

Serverless but in Terraform . Contribute to rms1000watt/serverless-tf development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha! by @rms1000watt

rms1000watt avatar
rms1000watt

praveen avatar
praveen

Team, may I have a working terraform module for AWS Fargate with ALB?

jose.amengual avatar
jose.amengual
cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

praveen avatar
praveen

Thank you Pepe,

jose.amengual avatar
jose.amengual

np

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

2019-10-29

IvanM avatar

guys I have a bit of issue with terraform-aws-modules/eks/aws

I have a cluster up and running, but when I ssh into pod, the localhost:10255 is unreachable

[root@ip-10-2-201-138 metricbeat]# curl localhost:10255
curl: (7) Failed connect to localhost:10255; Connection refused

does anyone know how to enable this port at the kubernetes level?

IvanM avatar

if someone would be interested this is the issue https://github.com/awslabs/amazon-eks-ami/issues/128

Port 10255 is no longer open by default and it was not marked as release…

Kubelet no longer listening on read only port 10255 · Issue #128 · awslabs/amazon-eks-ami

What happened: I have an EKS 1.10 cluster with worker nodes running 1.10.3 and everything is great. I decided to create a new worker group today with the 1.10.11 ami. Everything is great except it …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ask in #terraform-aws-modules channel

aaratn avatar

How do I override the ECS task role when using terraform in CodeBuild? I am trying to set a profile in the backend config but it’s not picking up the profile

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please provide more info, what modules are you using, share some code

aaratn avatar

I am using my custom AWS code with partial backend init

aaratn avatar

With terragrunt

aaratn avatar
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

aaratn avatar

As a result, it is throwing 403 errors when loading the tfstate file

aaratn avatar

Already tried to set the parameters like

skip_get_ec2_platforms skip_credentials_validation

ByronHome avatar
ByronHome

Hi guys, I have a question. How can I configure the s3 bucket name with the config_source input of this module https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/tree/0.11/master ?

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

everything in the aws:elasticbeanstalk:application:environment namespace is an ENV var that will appear in the EB environment under Configuration->Software->Environment properties

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so everything you put into config_source will appear as CONFIG_SOURCE ENV var that your app will be able to read
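In raw resource terms, that input is assumed to end up as a setting block like the following on aws_elastic_beanstalk_environment (the bucket path is illustrative):

setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "CONFIG_SOURCE"
  value     = "s3://my-bucket/config.yml"
}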

ByronHome avatar
ByronHome

thank you very much :slightly_smiling_face:, but my question was how I could change the s3 bucket name when this module creates it. The definition of the config_source input is “S3 source for config”, so I thought that with this input I could specify an s3 configuration

2019-10-30

Nick V avatar

I have an ecs task def that uses a parameter store ARN. Any ideas how to get the task def to update when the param store value changes? Maybe a null resource?

Nick V avatar

If I can’t come up with anything better I’m thinking of just hashing the param value and making an env var with that

oscar avatar

I don’t know about ECS tasks, but why not do a data lookup on the param store value, base64encode it (if you like), and mark it as a trigger for the ECS task? I don’t know the workflow exactly, but the functions mentioned should help
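A rough sketch of that idea: read the parameter with a data source and fold a hash of its value into the container definition, so any change to the parameter produces a new task definition revision (parameter name, image, and sizes are illustrative):

data "aws_ssm_parameter" "db_url" {
  name = "/app/db_url"
}

resource "aws_ecs_task_definition" "app" {
  family = "app"

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "nginx:latest"
      essential = true
      memory    = 256
      environment = [
        {
          # changes whenever the parameter changes, forcing a new revision
          name  = "DB_URL_HASH"
          value = md5(data.aws_ssm_parameter.db_url.value)
        }
      ]
    }
  ])
}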

Steven avatar

You want the task to update on start or real time? For on start, just have the task def reference the parameter store directly. The task def never changes, but the task will always get the current value on start. If you are looking for real time, then the app is going to need to handle it not the task def. But you might be able to do a lifecycle to kill tasks if value changes

Nick V avatar

Terraform will update the parameter store value somewhere else so I’d basically just like to update the ecs task so it grabs the new value

Nick V avatar

It’s a pgbouncer module that takes the arn of DB URLs and whatever consumes the pgbouncer module handles dumping the right db urls into parameter store

Laurynas avatar
Laurynas

Hey folks, I have the following use case: I want to launch a Fargate service (phpMyAdmin) in a public subnet. I don’t want to use a load balancer for this service because I don’t care about availability; I just want to be able to add the Fargate public IP to R53. What’s the best way to do that?

Nick V avatar

I think you need something additional here like a lambda since the fargate task can relaunch and get a new IP

Nick V avatar

That or add a call in the fargate service to route53 (like a shell script that updates route53 before the normal launch)

tamsky avatar

• you’ll always be restricted to a fargate service setting of desired_count=1

• and what Nick V suggested, an additional “sidecar-style” fargate task that both figures out its public IP (I’m uncertain an AWS API call exists that returns this data; you may have to use a “what is my ip” type external service) and then calls and updates Route53 with the public IP (task will need IAM permissions to call R53).

Nick V avatar

imo you should probably not expose phpmyadmin directly to the web and use a vpn or jumphost to get access. With that, you can use service discovery which will populate dns with the container’s private ip

Nick V avatar

plus you would have to deal with self signed certs or expose phpmyadmin unencrypted to the internet (not recommended) on top of route53 updates

1
tamsky avatar

always possible their needs exclude TLS and phpmyadmin

vig avatar

Hey team, anyone actively using tfmask now?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We are, but not having problems

vig avatar

it is the same with this issue. no output whatsoever; https://github.com/cloudposse/tfmask/issues/7

Environment variable not working · Issue #7 · cloudposse/tfmask

I&#39;m attempting to add local_file which I use to create a file containing secrets. Attempt to update the tflask using environemtn variable is failing. typing the values below export TFMASK_VALUE…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But we’re not using it 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/tfmask

Terraform utility to mask select output from terraform plan and terraform apply - cloudposse/tfmask

vig avatar

the regex is the fallback used on the main.go function.

cloudposse/tfmask

Terraform utility to mask select output from terraform plan and terraform apply - cloudposse/tfmask

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

responded.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We still need sample output in order to test for it.

vig avatar

alright. ill post something after work later, then;

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

include output so we can test a regex against it

Joe Niland avatar
Joe Niland

Hey guys, This is more of a question about https://github.com/cloudposse/reference-architectures so I haven’t added an issue.

Thank you for the reference architecture, by the way!

I am just getting into implementing an AWS multi-account strategy using Terraform. Have set them up in various ways manually before.

In the readme it mentions an identity account which is for storing IAM users and delegating access, however there is no identity.tfvars file in configs/. I also noticed in https://github.com/cloudposse/root.cloudposse.co that users are stored in the root account.

So my questions are:

  1. What should go in the identity config?
  2. Is it better/more practical to forget the identity account and just use root, as per your demo repo? This particular project is for a small team.

Thanks!

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We didn’t finish implementing identity

cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

cloudposse/root.cloudposse.co

Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But @dalekurt is doing it right now

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We just use root for now to simplify the setup, but I agree that a separate identity account is ideal

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also, we have been using #geodesic for most of the refarch discussion

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You also might dig our archives: https://archive.sweetops.com/geodesic/

geodesic

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Joe Niland avatar
Joe Niland

Hi @Erik Osterman (Cloud Posse) thanks for the info and the quick reply. I will use #geodesic going forward. Will check the archives.

I may start off with using root in that case. I guess it wouldn’t be too hard to add it later and migrate/recreate the users.

I had an error with SSM ParameterStore as well, but I will try to get a more solid repro case before I raise it.

dalekurt avatar
dalekurt

@Joe Niland If you are interested I could share what I have done for setting up identity and my thoughts on it. I agree and I like the idea of keeping all the Unicorns and Zombies restricted to the Identity account and have them role switch to the other accounts they should have access to. I will be doing some more work on setting up policies based on the following scenario

Assume that you're a lead developer at a large company named Example Corporation, and you're an experienced IAM administrator. You're familiar with creating and managing IAM users, roles, and policies. You want to ensure that your development engineers and quality assurance team members can access the resources they need. You also need a strategy that scales as your company grows.

You choose to use AWS resource tags and IAM role principal tags to implement an ABAC strategy for services that support it, beginning with AWS Secrets Manager. To learn which services support authorization based on tags, see AWS Services That Work with IAM. To learn which tagging condition keys you can use in a policy with each service's actions and resources, see Actions, Resources, and Condition Keys for AWS Services.

Your Engineering and Quality Assurance team members are on either the Pegasus or Unicorn project. You choose the following 3-character project and team tag values:

    - access-project = peg for the Pegasus project
    - access-project = uni for the Unicorn project
    - access-team = eng for the Engineering team
    - access-team = qas for the Quality Assurance team

Additionally, you choose to require the cost-center cost allocation tag to enable custom AWS billing reports. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. 

Source: https://docs.aws.amazon.com/en_pv/IAM/latest/UserGuide/tutorial_attribute-based-access-control.html

Implement a strategy that uses principal and resource tags for permissions management.

Hasan avatar

geodesic would be pretty clutch if it would work out of the box with SAML auth. The aws-vault integration makes things very cumbersome to work with if your identity configuration isn’t vanilla. I’ve made attempts to resolve it, but can’t figure out why terraform is unable to read the session tokens.

Joe Niland avatar
Joe Niland

@dalekurt I am definitely interested, thanks. I will give as much feedback as I can.

I get what a unicorn is but is a zombie a user/role that’s become unneeded?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


geodesic would be pretty clutch if it would work out of the box with SAML auth.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Problem is Gsuite is probably the worst of the SAML providers to deal with =/

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Okta is great! (and works out of the box)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No two SAML providers are the same

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The problem is almost no SAML CLI supports deterministic ports and most of them rely on a localhost:$port callback. If $port is not deterministic, we can’t port map it inside of the geodesic container.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
segmentio/aws-okta

aws-vault like tool for Okta authentication. Contribute to segmentio/aws-okta development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is an awesome SAML cli done “the right way”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but of course specific to Okta.

jose.amengual avatar
jose.amengual

I wish they had a version with keycloak

2019-10-31
