#terraform

Archive: https://archive.sweetops.com/terraform/

2019-11-18

ismail

Hi all, I’m facing an issue in my Terraform code. I’m trying to transfer a file to an AWS instance using a null_resource, but I’m getting this error:

host        = module.ec2.public_ip
Inappropriate value for attribute "host": string required.

Here is the code being used:

resource "null_resource" "test" {
  provisioner "file" {
    source      = file(var.source_path)
    destination = file(var.destination_path)
    //    on_failure  = "continue"
    connection {
      type        = "ssh"
      user        = var.ssh_user
      private_key = var.ssh_key
      host        = module.ec2.public_ip
    }
  }
}
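
A minimal sketch of a likely fix, assuming module.ec2’s public_ip output is a list (which would explain the “string required” error); note also that the file provisioner’s source and destination take paths, not file() contents:

resource "null_resource" "test" {
  provisioner "file" {
    # source/destination are paths on disk, not file() contents
    source      = var.source_path
    destination = var.destination_path

    connection {
      type        = "ssh"
      user        = var.ssh_user
      private_key = var.ssh_key
      # join() collapses a single-element list output into the plain
      # string the connection block requires
      host        = join("", module.ec2.public_ip)
    }
  }
}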
Lewis

Hello All,

Does anyone know how to delete the webhooks afterwards?

We currently create a new webhook every time we do a deployment and keep maxing out our webhook limit. We would like a way to delete the previous webhooks beforehand, or just install the webhook for one-time use.

Kind regards, Lewis

Laurynas

Has anyone experienced weird Error: Cycle: errors after updating from 0.12.12 to 0.12.15?

Laurynas

It looks like I need to downgrade terraform. How can I do that without losing the state?

nutellinoit

do you have the remote state on s3 versioned?

Laurynas

Yes. It’s remote on S3 with versioning

Steven

Then I’d restore the prior version and remove its record in DynamoDB

2
Cloud Posse
05:01:00 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Nov 27, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

kskewes

Hey team, is anyone using the data source for security_groups a lot to enable lookup and extension of the allowed_security_group variable in various modules? We use a few state files to limit blast radius and have a chicken & egg situation. Contrived example:

  1. If we want to allow bastion to connect to RDS, we need to provision bastion and its SG before we can provision the RDS SG. Or:
  2. If we use the data source, then we can provision RDS (bastion SG data source empty?), then provision bastion, then reapply RDS in order to add the bastion SG (data source returns the SG id). Option 2 means we may have to do a couple of terraform applies to bring certain things up, but it still gives us a small blast radius.
  3. The third option is to centrally define all SGs and then specify them in other state files, but that gives us a massive blast radius.

We started using cidr_blocks, but with the way our subnets are defined they don’t map to workloads, i.e. subnets are shared by different workloads with different requirements.
aknysh

I think if you can determine the order of resource creation (e.g. bastion first, then RDS, then resource C, then resource D, etc.) and it does not change too often, then you can use #1

aknysh

if it changes a lot and often, then data sources could be used

aknysh

if those are in diff folders (with diff TF state), you have three choices anyway to send params between modules: 1) remote state; 2) data source lookup; 3) writing/reading to/from SSM (or something similar)
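
For example, a minimal sketch of options 1 and 3 (the bucket and parameter names are hypothetical):

# Option 1: read outputs from the bastion stack's remote state
data "terraform_remote_state" "bastion" {
  backend = "s3"
  config = {
    bucket = "acme-terraform-state" # hypothetical bucket
    key    = "bastion/terraform.tfstate"
    region = "us-east-1"
  }
}

# Option 3: read a value the bastion stack wrote to SSM
data "aws_ssm_parameter" "bastion_sg" {
  name = "/bastion/security_group_id" # hypothetical parameter
}

# e.g. allowed_security_groups = [data.aws_ssm_parameter.bastion_sg.value]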

kskewes

cheers! Been playing with #2 and it seems that data sources complain if the result is empty. So given the different TF states, I think we will have to extend the bastion rules from the RDS directory.

kskewes

But otherwise data source works well to pass params.

2019-11-17

John H Patton

I’ve worked through the ACM module and updated it to allow for a SAN that includes records not in a passed-in zone…

John H Patton

is it possible to provide some of this to give back to the community? not sure what the best way to do that is… create a PR?

kskewes

Hey team, thanks for all the modules. Am standing up this one: https://github.com/cloudposse/terraform-aws-elasticache-redis Couple thoughts:

  1. Be good to recommend a secure auth_token; we are internally using random_password. Have submitted a PR for this, happy to iterate or close if not suitable.
  2. Security Groups and cidr_blocks. There are a couple of PRs that look useful for us to control what/where can talk to Redis and vice versa, i.e. supply our own security group and/or cidr_block for ingress. Anything I can do to help them along? We will carry a fork for now.
cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

1
vitaly.markov

I’ve opened a PR that adds a variable to create an OIDC provider, which can then be used by other modules. I’m also going to add modules that create IAM roles for cluster-autoscaler, ExternalDNS, etc. https://github.com/cloudposse/terraform-aws-eks-cluster/pull/36

feat(oidc-provider): Add oidc_provider_enabled variable by vymarkov · Pull Request #36 · cloudposse/terraform-aws-eks-cluster

what Added oidc_provider_enabled variable in order to create an IAM OIDC identity provider for the cluster, then you can create IAM roles to associate with a service account in the cluster, instead…

vitaly.markov

@aknysh could you review this PR ?

aknysh

will do, thanks again @vitaly.markov

curious deviant

** Seeking Guidance/Opinions **

We have a situation wherein the team is trying to host multiple apps in a single VPC (still debating this architecture). I personally would like to have them in separate VPCs, rather than managing the entire infrastructure, as it grows, as a monolith via a single TF repo (and also to have network isolation between them). In case a single VPC is the route to go, would it be possible to manage the shared infrastructure (VPC) through one repo and manage the components within it via app repos? Is that even a good way of managing infrastructure? Can the reference in the app repos to the shared VPC be dynamic in that case, or would we need to hardcode VPC, subnet IDs, etc.? Anything else we should be wary of?

well, if you use shared VPCs you can manage that as a single VPC, so it can all be in one TF

shared VPC is a new Amazon product

you basically create a VPC in one account and you can share it between accounts

you share the subnets in your VPC with the other accounts

that is one way to go

curious deviant

thanks @PePe! In this case we would have just one account per env (1 for Dev and 1 for Prod)…so it would be 1 vpc in 1 account , but shared by several apps. Would it make sense then for us to have 1 repo to manage the VPC and other shared components and one repo per app for the components hosted in it?

one repo is fine for that I think

my guess is that you want to keep all the resources for all the environments the same

so you can create the TF so that it is flexible enough to accommodate the differences between envs, if any

you might want to have one nat gateway in Dev and many in prod etc

2019-11-16

vFondevilla

I have a question about terraform and multiple git identities. I have multiple git repositories (some of them on GitLab), but in GitLab you can’t have the same ssh key on 2 different users. I configured git to use another ssh key for the work repositories, but when Terraform is sourcing a module it uses the main identity (my personal one), which doesn’t have access to my employer’s private repos. Is there any way of managing this that I’m unaware of?

Mateusz Kamiński

I would arrange one user to have access to all those repos - isn’t it possible?

vFondevilla

Nope. It’s not possible to use personal users for accessing the company repos, by policy

Steven

What about a company user on the personal repos, or forking the repos?

vFondevilla

that’s one of the possibilities I have in mind, but i’d prefer to keep everything separated

loren

Terraform uses go-getter under the covers, so maybe it has some tricks that would help… https://github.com/hashicorp/go-getter/blob/master/README.md#git-git

hashicorp/go-getter

Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter

loren

Use a terraform override file to set your ssh key for each nested module source…
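
For example, a minimal sketch of such an override file (the module name, repo, and the gitlab-work SSH host alias in ~/.ssh/config are all assumptions):

# modules_override.tf: merged into the module block of the same name
module "vpc" {
  # only the source is overridden; all other arguments stay in main.tf
  source = "git::ssh://git@gitlab-work/acme/terraform-vpc.git?ref=v1.0.0"
}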

vFondevilla

oh! nice

vFondevilla

i’ll take a look thx!

loren
Override Files - Configuration Language - Terraform by HashiCorp

Override files allow additional settings to be merged into existing configuration objects.

MattyB

Using your terraform-aws-rds module as an example, it looks like the parameter groups, subnet groups, and security group resources are tightly coupled with the instance. Is there a reason I shouldn’t design an rds module for those resources to be passed in? If I have a large number of instances (DB, ALB, ECS, etc…) it seems like it’d be good practice to create fewer dependent resources. Thoughts?

Erik Osterman

@MattyB sounds correct

aknysh

for security groups, we usually create a separate one for the module, and then provide allowed_security_groups and allowed_cidr_blocks to allow connections to the SG https://github.com/cloudposse/terraform-aws-rds/blob/master/main.tf#L139

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

aknysh

the parameter group is implemented using a dynamic block, so you can add as many params as you need (or provide an empty list for no params) https://github.com/cloudposse/terraform-aws-rds/blob/master/main.tf#L77
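
The pattern looks roughly like this (a sketch; the variable names follow the module’s conventions):

resource "aws_db_parameter_group" "default" {
  name   = "example"
  family = var.db_parameter_group

  # renders one parameter block per element of var.db_parameter;
  # an empty list renders no blocks at all
  dynamic "parameter" {
    for_each = var.db_parameter
    content {
      name  = parameter.value.name
      value = parameter.value.value
    }
  }
}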

aknysh

for subnet_group you just provide the subnets in which to provision the RDS instances https://github.com/cloudposse/terraform-aws-rds/blob/master/main.tf#L124 (so not really much to configure here)

MattyB

Helpful as usual thanks!

2019-11-15

Laurynas

Hi, have you ever seen that when terraform apply fails, sometimes not all created resources are written into the terraform state? I had an issue with one terraform module, but after fixing it, terraform was trying to create resources that had already been created…

oscar

Sort of. I found that it tries to exit safely and this issue would normally happen when I try to exit with CTRL+C and don’t allow it to safely terminate

aknysh

also, if there is any issue with a provider (bad config, etc.), TF (which is written in Go) will just panic and exit immediately, leaving you with some resources created on AWS but not in the state

Milos Backonja

Hi all, how do you handle ssh keys with terraform/terragrunt? I have used https://github.com/cloudposse/terraform-aws-key-pair for bastion and for all other instances in private subnets. It’s OK until some point, but if you delete the terragrunt cache or you work in a team it can be an issue, since it will try to recreate the keys and, depending on configuration, the other resources dependent on those keys. I am looking for a solution which can be used for safe/secure ssh key sharing and distribution to all instances, and I am wondering what other people use. Thanks

cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

Erik Osterman
gravitational/teleport

Privileged access management for elastic infrastructure. - gravitational/teleport

Erik Osterman

This is what we use

Erik Osterman

The other stuff is just for very simple use cases

Ricky Spanish

what about an unknown token IDENT error?

Ricky Spanish

I’m using terraform v0.11.13

Erik Osterman

This is always a mismatch between HCL2 and terraform 0.11

1
aknysh

it’s when you use TF 0.11 on modules converted to 0.12

1
Ricky Spanish

you right, i need to modify some resources

vitaly.markov

How do I create a new terraform module that follows cloudposse’s practices? I didn’t find any templates in https://github.com/cloudposse @Erik Osterman could you help me? I’m going to create a module similar to https://github.com/cloudposse/terraform-aws-kops-external-dns

cloudposse/terraform-aws-kops-external-dns

Terraform module to provision an IAM role for external-dns running in a Kops cluster, and attach an IAM policy to the role with permissions to modify Route53 record sets - cloudposse/terraform-aws-…

aknysh

there are no templates, but you can just copy the module (including the examples if you need them) and update it in place, including README.yaml and all the TF code (that’s what we usually do)

Brij S

with TF12, is the * not allowed for multiple resources? for example, aws_api_gateway_domain_name.domain[*].domain_name gives an error saying a string is required. I’m trying to create multiple api-gateway custom domains(from a list) then have the appropriate records be created in route53 for each domain

aknysh

aws_api_gateway_domain_name.domain[*].domain_name is a list of strings

aknysh

you have to iterate it to get each item as string

Brij S

so, using [count.index] ?

Brij S

I think I’ve almost got it

resource "aws_api_gateway_domain_name" "domain" {
  count           = var.create_custom_domain ? length(var.domain_names) : 0
  certificate_arn = var.aws_acm_certificate_arn
  domain_name     = var.domain_names[count.index]
}

resource "aws_route53_record" "domain" {
  count   = var.create_custom_domain ? length(var.route53_record_types) : 0
  name    = aws_api_gateway_domain_name.domain[count.index].domain_name
  type    = var.route53_record_types[count.index]
  zone_id = "xxxx"

  alias {
    evaluate_target_health = false
    name                   = aws_api_gateway_domain_name.domain[count.index].cloudfront_domain_name
    zone_id                = aws_api_gateway_domain_name.domain[count.index].cloudfront_zone_id
  }
}
Brij S

this works, but the r53 domain only creates an AAAA record, not the A record for one of the gateway domains. What am I missing?

aknysh

what’s in var.route53_record_types?

Brij S
variable "route53_record_types" {
  description = "list of route 53 record types"
  type        = list(string)
  default     = ["A", "AAAA"]
}
Brij S

any ideas

aknysh

Does var.domain_names have two items?

Brij S
variable "domain_names" {
  description = "custom domain name(s) for api gateway"
  type        = list(string)
  default     = []
}
domain_names            = ["xxxxxx.com", "xxxxx.vision"]
aknysh

The way they are written, those two resources will create only an A record for the first domain, and only an AAAA for the second domain

Brij S

oh, that’s not what I want at all, I want an A and AAAA for each domain
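
One way to get both record types for every domain is to create one record per (domain, type) pair; a sketch building on the resources above:

resource "aws_route53_record" "domain" {
  count = var.create_custom_domain ? length(var.domain_names) * length(var.route53_record_types) : 0

  # index 0 -> (domain 0, type 0), index 1 -> (domain 0, type 1),
  # index 2 -> (domain 1, type 0), and so on
  name    = aws_api_gateway_domain_name.domain[floor(count.index / length(var.route53_record_types))].domain_name
  type    = var.route53_record_types[count.index % length(var.route53_record_types)]
  zone_id = "xxxx"

  alias {
    evaluate_target_health = false
    name                   = aws_api_gateway_domain_name.domain[floor(count.index / length(var.route53_record_types))].cloudfront_domain_name
    zone_id                = aws_api_gateway_domain_name.domain[floor(count.index / length(var.route53_record_types))].cloudfront_zone_id
  }
}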

Adrian

@Erik Osterman why didn’t you include label_order = var.label_order in

module "label" {
  source     = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.14.1"
  enabled    = var.enabled
  namespace  = var.namespace
  name       = var.name
  stage      = var.stage
  delimiter  = var.delimiter
  attributes = var.attributes
  tags       = var.tags
}

https://github.com/cloudposse/terraform-aws-elasticsearch/blob/master/main.tf

cloudposse/terraform-aws-elasticsearch

Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch

Adrian

I need to fork it and add this line; is it possible to make a PR for that kind of change?

Erik Osterman

we just haven’t gone through all our modules to add all the parameters

Erik Osterman

we’ll approve if you open the PR

Erik Osterman

ping @aknysh

Adrian

thnx

Adrian

btw, nice to have modules from CloudPosse

Bruce

Does anyone have a good example of a lambda that will notify a slack channel of a role change? For example, when an engineer switches to an admin role, it fires a notification to slack. Was going to pull one together but didn’t want to reinvent the wheel if it’s been done before… Seems like it has.

loren

the aws-to-slack thing is easy enough. sounds like you need to define the actual event though, to trigger the notification. that event would be either CWE or SNS…. probably CWE in this case, with a pattern filtering on cloudtrail events for the sts:AssumeRole* action?
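
A minimal sketch of that wiring (resource names are hypothetical; aws-to-slack, or any Lambda, would then subscribe to the topic):

resource "aws_sns_topic" "role_change" {
  name = "role-change-notifications" # hypothetical
}

# match CloudTrail-delivered STS AssumeRole calls
resource "aws_cloudwatch_event_rule" "assume_role" {
  name = "notify-on-assume-role" # hypothetical
  event_pattern = jsonencode({
    source        = ["aws.sts"]
    "detail-type" = ["AWS API Call via CloudTrail"]
    detail = {
      eventSource = ["sts.amazonaws.com"]
      eventName   = ["AssumeRole"]
    }
  })
}

resource "aws_cloudwatch_event_target" "to_sns" {
  rule = aws_cloudwatch_event_rule.assume_role.name
  arn  = aws_sns_topic.role_change.arn
}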

Bruce

That was what I was thinking my approach would be, @loren, thanks!

2019-11-14

james

any ECS users here? we’re considering a migration into ECS (from vanilla docker on EC2) but I couldn’t find much in the way of existing ECS modules in the module directory

james

does it play well with Terraform?

vFondevilla

yep

vFondevilla

I’m using it for managing our task definitions and services

vFondevilla

and is pretty simple and straightforward

james

Great to hear, thanks

MattyB

https://github.com/cloudposse/terraform-aws-ecs-web-app using it right now. super simple to setup

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

1
1
james

Nice!

aknysh

Terratest for the example (it provisions the example on real AWS account) https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/test/src/examples_complete_test.go

johncblandii

We use ECS for 90% of our apps (Fargate specifically) and it works great.

1
1
Laurynas

Hi, sometimes when terraform has to destroy and recreate an ecs service I get the following error: InvalidParameterException: Unable to Start a service that is still Draining. What causes it and how can I fix it?

Nick V

Is there a way to show the output of terraform plan from a plan file? In 0.11 you just did terraform show but in 0.12 I’m just getting resource names without changes

Erik Osterman

maybe terraform apply -auto-approve=false $PLANFILE < /dev/null

Brij S

I’m trying to create a custom domain with api gateway, but I am using count to create two types of r53 records. I’m using count for api_gateway as a conditional, which I would also like to use for the r53 records:

resource "aws_api_gateway_domain_name" "domain" {
  count           = var.create_custom_domain == "true" ? 1 : 0
  certificate_arn = var.aws_acm_certificate_arn
  domain_name     = var.domain_name
}

resource "aws_route53_record" "domain" {
  count = length(var.route53_record_types)
  name    = aws_api_gateway_domain_name.domain.domain_name
  type    = var.route53_record_types[count.index]
  zone_id = "xxxxxxxxxx"

  alias {
    evaluate_target_health = false
    name                   = aws_api_gateway_domain_name.domain.cloudfront_domain_name
    zone_id                = aws_api_gateway_domain_name.domain.cloudfront_zone_id
  }
}

can count have & ?

aknysh
count  = var.create_custom_domain ? length(var.route53_record_types) : 0                 
aknysh

and make var.create_custom_domain a bool so you don’t need to compare to a string var.create_custom_domain == "true"
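
i.e. something like:

variable "create_custom_domain" {
  description = "Whether to create the custom domain and its DNS records"
  type        = bool
  default     = false
}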

Brij S

now I get this:

Error: Missing resource instance key

  on ../modules/frontend/main.tf line 263, in resource "aws_route53_record" "domain":
 263:     name                   = aws_api_gateway_domain_name.domain.cloudfront_domain_name

Because aws_api_gateway_domain_name.domain has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    aws_api_gateway_domain_name.domain[count.index]


Error: Missing resource instance key

  on ../modules/frontend/main.tf line 264, in resource "aws_route53_record" "domain":
 264:     zone_id                = aws_api_gateway_domain_name.domain.cloudfront_zone_id

Because aws_api_gateway_domain_name.domain has "count" set, its attributes
must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
    aws_api_gateway_domain_name.domain[count.index]
Brij S

but apig is only 1 resource, why reference by count?

loren

it’s a tf0.12 change. when a resource is using count, references to its attributes must use the index. in tf0.11 you could reference it directly (sometimes), but in tf0.12 it is strict about using the index. e.g.

name                   = aws_api_gateway_domain_name.domain[0].cloudfront_domain_name
Brij S

sweet, that did it. tf12 is funky

aknysh

better to use it with splat+join like this

join("", aws_api_gateway_domain_name.domain.*.cloudfront_domain_name)
aknysh

somebody already had issues with using [0]

loren

it kind of depends on the module and where the index is being used, but yeah for belt and suspenders purposes can splat+join, or use the ternary to conditionally reference the index value (if you want a different/specifc value when the count == 0 for the referenced resource)

name = length(aws_api_gateway_domain_name.domain) > 0 ? aws_api_gateway_domain_name.domain[0].cloudfront_domain_name : null
1
loren

new gotcha introduced in tf 0.12.11… bummed. i really was liking the ability to use the 0-index everywhere to reference attributes of optional resources… linking straight to the workaround: https://github.com/hashicorp/terraform/issues/23222#issuecomment-547462883

Hey all, new here and still new to Terraform. I’m trying to use Terraform to configure an AWS CodePipeline. It will plan and apply just fine, but the pipeline fails in the real world every time at the source stage. It seems to need additional S3 permissions and I haven’t yet figured out how to provide them. The error is Insufficient permissions The provided role does not have permissions to perform this action. Underlying error: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID:... I’ve tried a blanket S3 allow-all permission policy on both the pipeline’s associated role and the codebuild’s associated role (desperation) to no avail. Anyone got any advice?

Bruce

Hey @ you could always try creating the pipeline with the AWS CLI and use --debug to see what permission error you’re getting. It’s not always clear, but sometimes one AWS API will call another… It’s helped me out in the past.

@Bruce To add on to your point (though it clearly seems to be a perm issue), search for Traceback in aws_cli_command --debug 2>&1 | less -S

1

Thanks @Bruce & @Andy - looking into it today, will update here.

1

So it turns out the KMS encryption key was causing the failure; once I disabled that, it ran just fine. Thanks again! Off to other errors!

1
Maciek Strömich
Cycle Error on 0.12.14 using blue green deployment · Issue #23374 · hashicorp/terraform

Terraform Version 0.12.14 Terraform Configuration Files resource "aws_autoscaling_group" "web" { name = "web-asg-${aws_launch_configuration.web.name}" availability_zon…

2019-11-13

Jeff Young

Good morning, has anyone run into this?

Initializing provider plugins...
- Checking for available provider plugins...

Error verifying checksum for provider "okta"

Terraform v0.12.13 Linux somevm0 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 0509 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

Erik Osterman

Hrmmmm no haven’t seen that one.

Erik Osterman

maybe blow away your .terraform cache directory and try again?

Jeff Young

Thanks… I blew both cache dirs away ~/.terraform.d and .terraform Same result.

Jeff Young

Adding debug to the terraform run, I see this:

2019/11/13 14:23:05 [TRACE] HTTP client GET request to https://releases.hashicorp.com/terraform-provider-okta/3.0.0/terraform-provider-okta_3.0.0_SHA256SUMS
2019/11/13 14:23:05 [ERROR] error fetching checksums from "https://releases.hashicorp.com/terraform-provider-okta/3.0.0/terraform-provider-okta_3.0.0_SHA256SUMS": 403 Forbidden
Erik Osterman

possibly an upstream problem?

Erik Osterman

maybe try pinning to an earlier release of the provider

Jeff Young
 version = "~> 1.0.0"
No provider "okta" plugins meet the constraint "~> 1.0.0".

The version constraint is derived from the "version" argument within the
provider "okta" block in configuration. Child modules may also apply
provider version constraints. To view the provider versions requested by each
module in the current configuration, run "terraform providers".

To proceed, the version constraints for this provider must be relaxed by
either adjusting or removing the "version" argument in the provider blocks
throughout the configuration.


Error: no suitable version is available
Jeff Young

perhaps I need to harass hashicorp

Jeff Young

https://releases.hashicorp.com/ This shows no sign of okta.

Jeff Young

https://github.com/terraform-providers/terraform-provider-okta/issues/4

So I cloned the repo and built it and installed locally and got it working.

Is this dead? · Issue #4 · terraform-providers/terraform-provider-okta

Should articulate/terraform-provider-okta be the canonical version? I would propose either removing this or working with the articulate folks to give them access to this repository. Thanks.

Adrian

a mismatch between ~/.terraform.d/plugins/[okta_provider_file] and the terraform state checksum of this file?

johncblandii

I have some mangled state locally and am moving it to TF Cloud so manual state management…yayyyy.

-/+ module.eks_cluster.aws_eks_cluster.default (new resource required)
      id:                                                     "mycluster" => <computed> (forces new resource)
      arn:                                                    "arn:aws:eks:us-west-2:xxxxxxx:cluster/mycluster" => <computed>

^ This is a concern. Normally I wouldn’t sweat a new resource, but if this destroys the EKS cluster… there goes my week restoring everything. So my question is: will this really delete the resource IF the id is the exact same as the live instance?

This is using https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=0.4.0 as the module source.

Erik Osterman

Yes, whatever is causing your new ID to be generated will force a new cluster

Erik Osterman

I don’t think there’s a way to avoid that

Erik Osterman

fwiw, @aknysh is working on tf cloud support in our EKS modules

Erik Osterman

we have some PRs for this open now that will get merged by EOD

Erik Osterman
Allow installing external packages. Allow assuming IAM roles by aknysh · Pull Request #33 · cloudposse/terraform-aws-eks-cluster

what: Update provisioner "local-exec": Optionally install external packages (AWS CLI and kubectl) if the workstation that runs terraform plan/apply does not have them installed. Optionally…

johncblandii

I pretty much knew the answer, but had to sanity check. the thing is…the id shouldn’t be new. it should be the exact same

johncblandii

well lookie there…kctl is a great addition. we do that separately right now

Hi there! We’re using this module for EKS https://github.com/cloudposse/terraform-aws-eks-cluster, and currently trying to setup the mapping for IAM roles to k8s service accounts. According to https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html, we need to grab the OIDC issuer URL from the EKS cluster and add it to the aws_iam_openid_connect_provider resource.

Has anyone come across this? Unfortunately can’t evaluate the eks_cluster_id output variable from the eks cluster module into a data lookup for the aws_eks_cluster data source. Perhaps we add an output to the eks module for the identity output from aws_eks_cluster? Happy to do a PR for it

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Enabling IAM Roles for Service Accounts on your Cluster - Amazon EKS

The IAM roles for service accounts feature is available on new Amazon EKS Kubernetes version 1.14 clusters, and clusters that were updated to versions 1.14 or 1.13 on or after September 3rd, 2019. Existing clusters can update to version 1.13 or 1.14 to take advantage of this feature. For more information, see

Chris Fowles

if you’re using .net on eks beware of this: https://github.com/aws/aws-sdk-net/issues/1413

SDK does not seem to support EKS IAM for service accounts · Issue #1413 · aws/aws-sdk-net

I&#39;m trying to get a .NET Core app to work with EKS new support for IAM for Service Accounts. I&#39;ve followed these instructions . This app is reading from an SQS queue and was working previou…

Chris Fowles

also make sure you add a thumbprint to the openid connect provider:

resource "aws_iam_openid_connect_provider" "this" {
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [
    "9e99a48a9960b14926bb7f3b02e22da2b0ab7280" # https://github.com/terraform-providers/terraform-provider-aws/issues/10104#issuecomment-545264374
  ]
  url             = "${aws_eks_cluster.this.identity.0.oidc.0.issuer}"
}
1
Chris Fowles

these are the things that i ran into over the last week dealing with IAM for service account

Thanks for the heads up! Luckily no .net, but could be similar risks with some of the other pods we’re running

Unfortunately we can’t do something similar to "${aws_eks_cluster.this.identity.0.oidc.0.issuer}" to get the oidc issuer URL as we’re using this module, which doesn’t have this as an output https://github.com/cloudposse/terraform-aws-eks-cluster

Chris Fowles

is the cluster id output the cluster name?

Chris Fowles

we don’t use the module unfortunately so i don’t know

Chris Fowles

right - you said it wasn’t up above

Chris Fowles

i’ve had some success using the arn datasource to parse names out of arns:

Chris Fowles

but ideally it would be an output from the eks module

Yea, one of the outputs of the module is the EKS cluster name. With this we could use the aws_eks_cluster datasource like you’ve done above… however it doesn’t seem like we can do an indirect variable reference like with eval in bash.

Ideally we would want to be able to have something like "${data.aws_eks_cluster.${module.eks_cluster_01.eks_cluster_id}.identity.0.oidc.0.issuer}"

Simplest way feels like a PR to the eks cluster module to output this variable

1

Thanks for the suggestions though @Chris Fowles! Good to have warnings that the IAM Roles to svc accounts isn’t the magic bullet we were expecting it to be

MattyB

Anyone have a good article/blog/post/etc. on directory structures for more complex architectures? I’ve gone through a few blogs and threads on reddit, but haven’t quite understood how to break mine down to be as simple as theirs. I have a lot of variables & modules compared to others; maybe part of it is on me. For my PoC (proof of concept, but PoS might be more like it) I’m declaring quite a few variables, env vars for example, and making quite a few module references. If I want to use the same architecture for all of my environments it’s all fine and dandy because I can just have another tfvars file, but that doesn’t seem like a great idea. Thanks for the feedback.

MattyB

I know there’s also #terragrunt and #geodesic but I’m not at that point yet. Just trying to build a solid foundation in terraform for right now.

kskewes

This is one way. https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/ We do something similar but split an env into multiple directories. Vpc, eks, services, etc.

Terraform, VPC, and why you want a tfstate file per env

How to blow up your entire infrastructure with this one great trick! Or, how you can isolate the blast radius of terraform explosions by using a separate state file per environment.

Joe Presley

Have you read Terraform: Up and Running 2nd Edition? It’s pretty much the book on how to organize your Terraform code.

MattyB

Not yet but it’s on my to do list. I’ll move it higher up on the priority list

aknysh

@ PRs are welcome

Add OIDC Issuer to module output by benclapp · Pull Request #34 · cloudposse/terraform-aws-eks-cluster

Add the OIDC Provider output to the module, to enable module consumers to link IAM roles to service accounts, as described here. Note, updating the readme seemed to fail on MacOS, so not updated yet.

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

not sure I understand this

Yea, one of the outputs of the module is the EKS cluster name. With this we _could_ use with the `aws_eks_cluster` datasource like you've done above… however doesn't seem like we can do an indirect variable reference like with `eval`in bash
aknysh

since the module outputs cluster name, you can use https://www.terraform.io/docs/providers/aws/d/eks_cluster.html to read all other attributes

AWS: aws_eks_cluster - Terraform by HashiCorp

Retrieve information about an EKS Cluster

Correct me if I’m wrong, but I think going down that path, we’d have to have something like this right? "${data.aws_eks_cluster.${module.eks_cluster_01.eks_cluster_id}.identity.0.oidc.0.issuer}"

E.g. evaluate the cluster name, and then use the cluster name as part of the data lookup against aws_eks_cluster?

aknysh

the example here https://www.terraform.io/docs/providers/aws/d/eks_cluster.html shows exactly what you need to do

aknysh
module "eks_cluster" {
  source                 = "../../"
  namespace              = var.namespace
  stage                  = var.stage
  name                   = var.name
  attributes             = var.attributes
  tags                   = var.tags
  region                 = var.region
  vpc_id                 = module.vpc.vpc_id
  subnet_ids             = module.subnets.public_subnet_ids
  kubernetes_version     = var.kubernetes_version
  kubeconfig_path        = var.kubeconfig_path
  local_exec_interpreter = var.local_exec_interpreter

  configmap_auth_template_file = var.configmap_auth_template_file
  configmap_auth_file          = var.configmap_auth_file

  workers_role_arns          = [module.eks_workers.workers_role_arn]
  workers_security_group_ids = [module.eks_workers.security_group_id]
}

data "aws_eks_cluster" "example" {
  name = module.eks_cluster.eks_cluster_id
}

locals {
	oidc_issuer = data.aws_eks_cluster.example.identity.0.oidc.0.issuer
}
1

Of course! Didn’t think to create the data source first and then get the variable from that; this is exactly what we need

Need more coffee…

Thanks heaps for the help @aknysh

2019-11-12

Laurynas

Any pointers on best practices for creating multiple AWS console IAM users using terraform?

Dinesh Patra

We as a team are evaluating Terraform Enterprise vs. Terraform open source… does anyone see value-add in the Enterprise version?

aknysh

are you talking about TF Cloud or TF Enterprise (on-prem installation)? https://www.terraform.io/docs/cloud/index.html#note-about-product-names

Home - Terraform Cloud - Terraform by HashiCorp

Terraform Cloud is an application that helps teams use Terraform to provision infrastructure.

Dinesh Patra

TF Cloud, to start with… that’s what HashiCorp is pushing lately

aknysh

we see these features in TF Cloud that are very useful:

aknysh
  1. Automatic state management - you don’t have to set up an S3 bucket and DynamoDB for the TF state backend
aknysh
  2. atlantis-like functionality included (run plan on pull request) - you don’t have to provision atlantis if you want GitOps
aknysh
  3. Security and access control management - you create teams and assign permissions to the teams (which can run plan, apply, destroy, etc.), and then add users to the teams

and you can use workspaces

and they are the recommended way to use Cloud

not that you can’t use it in OSS but they are not recommended for multi-team repos etc

We have an official demo on Friday.

Andrew Jeffree

The bigger issue I have with TF Cloud is it requires me to give them Access Keys with essentially admin access. Which means I need to trust them to keep those keys secure… Whereas with Atlantis I can run it in an account and the security of it is my responsibility.

Dinesh Patra

Thanks for the inputs @aknysh @PePe, appreciate that! I am using terragrunt for state-on-S3 management, and have tied terragrunt plan/apply into Jenkins… but only the security and access management is through IAM roles. I will look forward to the demo on Friday to see more of TF Cloud

then you buy TF Enterprise, which is self-hosted, if you do not trust them

aknysh

Yes, giving them the access keys is the biggest concern

aknysh

All the rest is ok

aknysh

Not sure if buying tf enterprise is worth it

aknysh

You have to pay for it plus for its hosting

exactly

aknysh

In that case, might be better to use open source terraform and atlantis

aknysh

For cost and security reasons

yes

or use GithubAction hosted-runner

if you use github

aknysh

I guess we just have to wait for the news that tf cloud was hacked and access keys compromised :)

aknysh

Internal players are the biggest concern

that is a good point

John H Patton

Hello Everyone. I’m having an issue with ACM and I’m hoping someone knows why this is happening or how to troubleshoot further. I’m creating an LB config with an HTTPS listener and with the following module configuration:

module "cert" {
  source                            = "git::https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=0.4.0"
  zone_name                         = local.zone
  domain_name                       = local.origin_dns
  subject_alternative_names         = local.origin_san
  ttl                               = "300"
  process_domain_validation_options = true
  wait_for_certificate_issued       = true
}
John H Patton

Issue 1: First run: everything is created when I look in the portal, but the TF output says:

Error: 6 errors occurred:
    * missing main.dev.mydomain.com DNS validation record: _654c27fb72c292a9967470a6a4c10216.main.dev.mydomain.com
    * missing origin.qa.mydomain.com DNS validation record: _88c3a0c95150482d39451905c3703883.origin.qa.mydomain.com
    * missing main.uat.mydomain.com DNS validation record: _f5a5dd6ef183a5104c5e65cc301a27d0.main.uat.mydomain.com
    * missing origin.uat.mydomain.com DNS validation record: _2527e37bcae7b44a3d28db696f258b84.origin.uat.mydomain.com
    * missing main.qa.mydomain.com DNS validation record: _c1f13019befd352bcee1787e92f43a0b.main.qa.mydomain.com
    * missing origin.dev.mydomain.com DNS validation record: _b2eda108e906975e5775231a8b92dcbf.origin.dev.mydomain.com
John H Patton

Issue 2: Second+ run: I am getting a Cycle error and all of the records:

Error: Cycle: module.cert.aws_route53_record.default (destroy deposed 0aea8e5c), module.cert.aws_route53_record.default (destroy deposed 0eee4568), module.cert.aws_route53_record.default (destroy deposed 8317334a), module.cert.aws_route53_record.default (destroy deposed cfd8b435), module.cert.aws_route53_record.default (destroy deposed 8741e753), module.cert.aws_route53_record.default (destroy deposed 44a3ff11), module.cert.aws_acm_certificate.default (destroy deposed 312e6bad), aws_lb_listener.front_end_https, module.cert.aws_route53_record.default (destroy deposed bad2ac93)
John H Patton

anyone have any ideas? thanks in advance!

Erik Osterman

Hrmmmm! so looks like you’re using the HCL2 version of the module

Erik Osterman

that one i know fixed a bunch of problems we had with tf-0.11

John H Patton

yeah, i’ve tried many adjustments to it, and all roads lead back to the above “errors”

Erik Osterman

I haven’t seen this one before…

Erik Osterman

have you tried (for debugging) to reduce the configuration to the smallest working one?

Erik Osterman

then go up from there?

John H Patton

resources are created on run 1, but it errors out, Cycle error with destroy deposed from the lifecycle on the cert

John H Patton

yeah, i’ve done a debug TF with just the module and variables involved.. no joy

John H Patton

i’m going to see if i can increase the create timeout on validation by adding this to the aws_acm_certificate_validation resource:

  timeouts {
    create = "60m"
  }
John H Patton

looks like that’s an option, not sure if it’ll work.. i already had to increase retries and decrease parallelism.. this thing makes a slew of route53 API calls

John H Patton

moment of truth…

John H Patton

getting all the Creation complete after 42s and whatnot… waiting for the error at the end.

John H Patton

no joy:

Error: 6 errors occurred:
	* missing origin.uat.mydomain.com DNS validation record: _2527e37bcae7b44a3d28db696f258b84.origin.uat.mydomain.com
	* missing main.qa.mydomain.com DNS validation record: _c1f13019befd352bcee1787e92f43a0b.main.qa.mydomain.com
	* missing origin.dev.mydomain.com DNS validation record: _b2eda108e906975e5775231a8b92dcbf.origin.dev.mydomain.com
	* missing main.uat.mydomain.com DNS validation record: _f5a5dd6ef183a5104c5e65cc301a27d0.main.uat.mydomain.com
	* missing origin.qa.mydomain.com DNS validation record: _88c3a0c95150482d39451905c3703883.origin.qa.mydomain.com
	* missing main.dev.mydomain.com DNS validation record: _654c27fb72c292a9967470a6a4c10216.main.dev.mydomain.com
John H Patton

all the records were successfully created, tho

aknysh
cloudposse/terraform-aws-acm-request-certificate

Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - cloudposse/terraform-aws-acm-request-certificate

John H Patton

yeah, i have looked at so much terraform all over the place, those examples included… i’ll revisit that, however… perhaps i’ve missed something

John H Patton

yeah, i have… the equivalent for me would be the following tfvars:

region = "us-east-2"
namespace = "na"
stage = "nonprod"
name = "nonprod-zone"
parent_zone_name = "nonprod.mydomain.com"
validation_method = "DNS"
ttl = "300"
process_domain_validation_options = true
wait_for_certificate_issued = true
John H Patton

mydomain is a placeholder, of course

John H Patton

and my SAN is different

John H Patton

let me try using wildcards… haven’t tried that yet

aknysh

oh, so the module works only for star subdomains in SAN

aknysh

domain.com and *.domain.com will have exactly the same DNS validation records generated by TF

aknysh

a non-star subdomain will have its own, different DNS validation record

aknysh

the module does not apply many DNS records

aknysh

because of many different issues we had with that before

aknysh

domain.com and *.domain.com will have exactly the same DNS validation records generated - this is how AWS works

John H Patton

i mean, i see the validation records being created correctly, and in the portal the cert is in the issued state

John H Patton

but, i can’t seem to use the cert

John H Patton

as for the many DNS records… yeah, i’ve run into that issue already… seems to be a rate limit on route53 that causes issues, i needed to reduce the parallelism to 2

John H Patton

i think i’m going to need to create the cert separate from the rest of this infra

aknysh

multi-level subdomains are not supported in a wild-card cert

aknysh

if you use the module, you’ll have to request many wildcard certificates, one for each subdomain DNS zone

John H Patton

well, i’m not sure how to make the hardcoded domains in the SAN work right… it’s driving me nuts.. been fighting this for over a week

John H Patton

I figured it out, finally! This is an array of an array:

  distinct_domain_names = distinct(concat([var.domain_name], [for s in var.subject_alternative_names : replace(s, "*.", "")]))

It outputs:

distinct_domain_names = [
  [
    "main.nonprod.mydomain.com",
    "origin.nonprod.mydomain.com",
  ],
]
John H Patton

This needed to be flattened, then I can work with it in other variables/data sources correctly

  distinct_domain_names = flatten(distinct(concat([var.domain_name], [for s in var.subject_alternative_names : replace(s, "*.", "")])))
John H Patton

that took entirely way too long to figure out

cabrinha

hello there

cabrinha

can anyone tell me how to use https://www.terraform.io/docs/providers/aws/d/autoscaling_groups.html to select an ASG based on its tags?

AWS: aws_autoscaling_groups - Terraform by HashiCorp

Provides a list of Autoscaling Groups within a specific region.

cabrinha

Specifically, I’m confused on the filters:

  filter {
    name   = "key"
    values = ["Team"]
  }

  filter {
    name   = "value"
    values = ["Pets"]
  }
cabrinha

so, what I’m thinking is:

  filter {
    name   = "key"
    values = ["tag:Name"]
  }

  filter {
    name   = "value"
    values = ["my-name-tag-value"]
  }

I don’t think so

filter {
    name   = "tag:Name"
    values = ["hello"]
  }
filter {
    value   = ["value"]
    name = "tag:my-name-tag-value"
  }

there

cabrinha

so I need two filter blocks?

no, that depends on your tags

if you can find what you need with one tag

then you just need one

cabrinha

So, how do I fetch or specify the value of the “Name” tag?

cabrinha
* data.aws_autoscaling_groups.default: data.aws_autoscaling_groups.default: Error fetching Autoscaling Groups: ValidationError: Filter type tag:Name is not correct. Allowed Filter types are: auto-scaling-group key value propagate-at-launch
John H Patton

Like so:

  filter {
    name   = "tag:Name"
    values = ["thevalue"]
  }
cabrinha

name - (Required) The name of the filter. The valid values are: auto-scaling-group, key, value, and propagate-at-launch.

cabrinha

You can’t use name = "tag:Name"

the tag Name has a capital N

by the way

cabrinha
Error: Error refreshing state: 1 error occurred:
        * data.aws_autoscaling_groups.default: 1 error occurred:
        * data.aws_autoscaling_groups.default: data.aws_autoscaling_groups.default: Error fetching Autoscaling Groups: ValidationError: Filter type tag:Name is not correct. Allowed Filter types are: auto-scaling-group key value propagate-at-launch
John H Patton

ah, i see.. this is a slightly different mechanism.. one moment…

does it have a tag name?

some resources allow for tag{} lookup filters

John H Patton
  filter {
    name   = "key"
    values = ["tag:Name"]
  }

  filter {
    name   = "value"
    values = ["thevalue"]
  }

not all

cabrinha

even when I use this as a filter, I get nothing returned:

  filter {
    name   = "key"
    values = ["tag:Name"]
  }

  filter {
    name   = "value"
    values = ["*"]
  }
John H Patton

ah-hah.. here’s the thing… ASGs have funky tags:

  tag {
    key                 = "Name"
    value               = "somename"
    propagate_at_launch = true
  }
John H Patton

that’s what you’re matching against

John H Patton

if the tag:Name doesn’t work for the key, try:

 filter {
    name   = "key"
    values = ["Name"]
  }

  filter {
    name   = "value"
    values = ["thevalue"]
  }
cabrinha
data "aws_autoscaling_groups" "default" {
  filter {
    name   = "key"
    values = ["Name"]
  }
  filter {
    name   = "value"
    values = ["*"]
  }
  filter {
    name   = "propagate-at-launch"
    values = ["true"]
  }
}
cabrinha

can anyone test this on their own ASGs?

cabrinha

I’m getting nothing back on all these variations

John H Patton

what version of TF are you using? what version of the aws provider?

cabrinha

TF 0.11

cabrinha

AWS provider ~> 1.0

cabrinha

that might have been it

cabrinha

I’ll try with latest AWS provider now

John H Patton

yeah, i’m using aws ~> 2.35

John H Patton

give it a shot

cabrinha

Alright, I’m on 2.35 now

cabrinha

but I’m still not getting anything returned from the data lookup

are you sure you are running against the right AWS account and region?

cabrinha
Improve filters for data source aws_autoscaling_group · Issue #3534 · terraform-providers/terraform-provider-aws

When using aws_autoscaling_group as a data source, I expect to be able to filter autoscaling groups similar to how I can filter other resource (i.e. aws_ami). Terraform Version Any Affected Resourc…

cabrinha

account and region are fine, my state is finding everything else

cabrinha

this is also my first time using this data lookup

cabrinha

I got it to work, using only one filter

John H Patton

nice!!!

John H Patton

congrats

cabrinha

thanks for taking a look

cabrinha

does that seem right?

2019-11-11

Maciek Strömich
liamg/tfsec

Static analysis powered security scanner for your terraform code - liamg/tfsec

1
Laurynas

Hi, is it possible to read locals from a file in terraform?

Erik Osterman

Sorta

Erik Osterman

You can load a json file into a local

Erik Osterman

Or yaml

Erik Osterman
jsondecode - Functions - Configuration Language - Terraform by HashiCorp

The jsondecode function decodes a JSON string into a representation of its value.

1
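
For example, a sketch (the file name is hypothetical; yamldecode works the same way on Terraform >= 0.12.2):

locals {
  settings = jsondecode(file("${path.module}/settings.json"))
  # values are then addressable like any local,
  # e.g. local.settings["instance_type"]
}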
Cloud Posse
05:00:59 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Nov 20, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

Blaise Pabon

One of my discomforts with Terraform has been the paucity of testing and verification… It looks like Hashicorp got the memo; this just in from Berlin: https://speakerdeck.com/joatmon08/test-driven-development-tdd-for-infrastructure

Test-Driven Development (TDD) for Infrastructure

Originally presented at 2019 O’Reilly Velocity (Berlin). In software development, test-driven development (TDD) is the process of writing tests and then developing functionality to pass the tests. Let’s explore methods of adapting and applying TDD to configuring and deploying infrastructure-as-code. Repository here: https://github.com/joatmon08/tdd-infrastructure

kskewes

Yeah, using conftest to validate outputs from terraform plan is slick.

2019-11-10

vitaly.markov

I’ve opened a PR with a hint to avoid another unexpected behaviour: mismatched kubelet versions on the control plane and node groups https://github.com/cloudposse/terraform-aws-eks-cluster/pull/32

fix(eks-cluster-example): Add eks_worker_ami_name_filter variable to the example by vymarkov · Pull Request #32 · cloudposse/terraform-aws-eks-cluster

Unfortunately, most_recent variable does not work as expected. Enforce usage of eks_worker_ami_name_filter variable to set the right kubernetes version for EKS workers, otherwise will be used the f…

I need to stand up AWS Directory Service. I would love to maintain the users and groups as configuration as code with gitops. Anybody done that before?

loren

there’s no aws api for it, i don’t think. you might need to roll your own thing on lambda or ec2 that leverages an ldap library

That was my other thought. To do it with LDAP or some Microsoft tool.

loren

We’ve had some luck with the openldap package and python-ldap

2019-11-09

2019-11-08

cabrinha

So, I’m trying to write an AWS Lambda module that can automatically add DataDog Lambda layers to the caller’s lambda…

cabrinha

I don’t think I can do terraform data lookup on the Lambda layer in DataDog’s account, so I’m trying to write a python script for my own external data lookup

cabrinha
data "external" "layers" {
  program = ["python3", "${path.module}/scripts/layers.py"]
  query = {
    runtime = "${var.runtime}"
  }
}
cabrinha

but what I’m having an issue with right now is how Python interprets the “query” parameter inside my program. The docs say it’s a JSON object… but depending on how I read it, it seems to come in as a set() or an io.TextIOWrapper

did Hashicorp change the font on the docs site?

Julio Tain Sueiras

If you mean the general revamp and adding search then yes

Michael Warkentin

Not seeing any search - am I blind? haha

Julio Tain Sueiras
cabrinha

yeah, the fonts changed too – and I don’t like them!

Michael Warkentin

Oh, I thought something looked different

Erik Osterman

so, out of familiarity, I liked the old “default” font, but it feels quite readable to me.

do you know anyone at hashicorp to complain to?

I hope the search is better

vitaly.markov

I can’t find a solution for formatting a map to a string; an example below:

input: { purpose = "ci_worker", lifecycle = "Ec2Spot" }
output: "purpose=ci_worker,lifecycle=Ec2Spot" 

Any ideas?

Erik Osterman

yes, i’ve done this before in 0.11 syntax

Erik Osterman
  1. use null_resource with count to create a list of key=value pairs
Erik Osterman
  2. use join with splat to aggregate them with ,
Erik Osterman
cloudposse/reference-architectures

Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

vitaly.markov

Awesome, thank you a lot

Erik Osterman

maybe there’s some more elegant way in HCL2
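
In HCL2 it collapses to a one-liner with a for expression (note Terraform iterates map keys in lexicographic order):

locals {
  input  = { purpose = "ci_worker", lifecycle = "Ec2Spot" }
  output = join(",", [for k, v in local.input : "${k}=${v}"])
  # => "lifecycle=Ec2Spot,purpose=ci_worker"
}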

cabrinha

I think I’m getting bit by a really strange error: https://github.com/hashicorp/terraform/issues/16856

Local lists don't support interpolation · Issue #16856 · hashicorp/terraform

Terraform said that a local list variable (which elements have interpolations) is not a list Terraform Version Terraform v0.11.1 + provider.aws v1.5.0 + provider.template v1.0.0 Terraform Configura…

loren

Yep. No workaround I know of in tf 0.11. Upgrade to tf 0.12

cabrinha

yeah, I’m trying to determine if the user has set “var.enable_datadog” to “true”; if they have, then:

layers = [ list(var.layers, local.dd_layer) ]  
  else: 
layers = [ var.layers ]
cabrinha
  layers                         = [
    "${var.enable_datadog ? "${local.dd_layer}:${data.external.layers.result["arn_suffix"]}" : var.layers }"
  ]
cabrinha

Error: module.lambda.module.lambda.aws_lambda_function.lambda: layers: should be a list

cabrinha

even when I remove the local, I get the same error:

  layers                         = [
    "${var.enable_datadog ? data.external.layers.result["arn_suffix"] : var.layers }"
  ]
cabrinha

so for some reason, data.external along with locals that are not strings seem to be the culprits here
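
On 0.12, as loren suggests, a conditional can return a list directly, which sidesteps the whole problem; a sketch using the same names:

resource "aws_lambda_function" "lambda" {
  # ... other arguments unchanged ...
  layers = var.enable_datadog ? concat(var.layers, [local.dd_layer]) : var.layers
}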

Hi guys, I’m using https://github.com/cloudposse/terraform-aws-s3-log-storage and I created a bucket that is receiving logs, but I can’t download or copy the objects, not even from the root account

cloudposse/terraform-aws-s3-log-storage

This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail - cloudposse/terraform-aws-s3-log-storage

I tried setting acl = ""

but I got

Error: Error putting S3 ACL: MissingSecurityHeader: Your request was missing a required header
	status code: 400, request id: 594BF7171170FCA0, host id: N+nVSWtyZ4S9+XMfo4d57W1LuuyPZWSI3y357Li0hUD46eBFRmEUt9c0z8iGssCE3iVsVbs11BU=

I guess that’s the only thing that can write these flow logs

the policy that Tf is applying :

policy                      = jsonencode(
            {
                Statement = [
                    {
                        Action    = "s3:PutObject"
                        Condition = {
                            StringEquals = {
                                "s3:x-amz-acl" = "bucket-owner-full-control"
                            }
                        }
                        Effect    = "Allow"
                        Principal = {
                            AWS = "arn<img src="/assets/images/custom_emojis/aws.png" class="em em-aws">iam:root"
                        }
                        Resource  = "arn<img src="/assets/images/custom_emojis/aws.png" class="em em-aws">s3:::globalaccelerator-logs/*"
                        Sid       = "AWSLogDeliveryWrite"
                    },
                ]
                Version   = "2012-10-17"
            }
        )
        region                      = "us-east-1"
        request_payer               = "BucketOwner"
        tags                        = {
            "Attributes" = "logs"
            "Name"       = "globalaccelerator-logs"
            "Namespace"  = "hds"
            "Stage"      = "stage"
        }

        lifecycle_rule {
            abort_incomplete_multipart_upload_days = 0
            enabled                                = false
            id                                     = "globalaccelerator-logs"
            tags                                   = {}

            expiration {
                days                         = 90
                expired_object_delete_marker = false
            }

            noncurrent_version_expiration {
                days = 90
            }

            noncurrent_version_transition {
                days          = 30
                storage_class = "GLACIER"
            }

            transition {
                days          = 30
                storage_class = "STANDARD_IA"
            }
            transition {
                days          = 60
                storage_class = "GLACIER"
            }
        }

        server_side_encryption_configuration {
            rule {
                apply_server_side_encryption_by_default {
                    sse_algorithm = "aws:kms"
                }
            }
        }

        versioning {
            enabled    = true
            mfa_delete = false
        }
    }

the resource part

~ resource "aws_s3_bucket" "default" {
      - acl                         = "log-delivery-write" -> null
        arn                         = "arn:aws:s3:::globalaccelerator-logs"
        bucket                      = "globalaccelerator-logs"
        bucket_domain_name          = "globalaccelerator-logs.s3.amazonaws.com"
        bucket_regional_domain_name = "globalaccelerator-logs.s3.amazonaws.com"
        force_destroy               = false
        hosted_zone_id              = "Z3AQBSTGFYJSTF"
        id                          = "globalaccelerator-logs"

I can’t even modify the ACL

anyone have any ideas ?

Erik Osterman

@Igor Rodionov

Igor Rodionov

Reading

it is driving me nuts…

ACL :

{
    "Owner": {
        "DisplayName": "pepe-aws",
        "ID": "234234234242334234234222423423422424242434424224243"
    },
    "Grants": [
        {
            "Grantee": {
                "DisplayName": "pepe-aws",
                "ID": "234234234242334234234222423423422424242434424224243",
                "Type": "CanonicalUser"
            },
            "Permission": "FULL_CONTROL"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "<http://acs.amazonaws.com/groups/s3/LogDelivery>"
            },
            "Permission": "WRITE"
        },
        {
            "Grantee": {
                "Type": "Group",
                "URI": "<http://acs.amazonaws.com/groups/s3/LogDelivery>"
            },
            "Permission": "READ_ACP"
        }
    ]
}

to create the bucket I used :

# Policy for S3 bucket for Global accelerator flow logs
data "aws_iam_policy_document" "default" {
  statement {
    sid = "AWSLogDeliveryWrite"

    principals {
      type        = "AWS"
      identifiers = [
        data.aws_elb_service_account.default.arn, 
        "arn<img src="/assets/images/custom_emojis/aws.png" class="em em-aws">iam:root"
        ]
    }

    effect = "Allow"

    actions = [
      "s3:PutObject",
      "s3:Get*"
    ]

    resources = [
      "arn<img src="/assets/images/custom_emojis/aws.png" class="em em-aws">s3:::${module.s3_log_label.id}/*",
    ]

    condition {
      test     = "StringEquals"
      variable = "s3:x-amz-acl"

      values = [
        "bucket-owner-full-control",
      ]
    }
  }
}

# S3 bucket for Global accelerator flow logs
module "s3_bucket" {
  source                 = "git::https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.6.0"
  namespace              = var.namespace
  stage                  = var.stage
  name                   = var.name
  delimiter              = var.delimiter
  attributes             = [module.s3_log_label.attributes]
  tags                   = var.tags
  region                 = var.region
  policy                 = data.aws_iam_policy_document.default.json
  versioning_enabled     = "true"
  lifecycle_rule_enabled = "false"
  sse_algorithm          = "aws:kms"
  #acl = ""
}

one very weird thing is that the KMS key used to encrypt the objects is unknown to me

maybe since I do not have access to the key I can’t decrypt the objects

We do not have any account with ID

399586xxxxx

I guess that ID comes from GlobalAccelerator or something like that

aknysh

@PePe here is an example of using the module in another module https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/main.tf#L80

cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

aknysh
cloudposse/terraform-aws-cloudfront-s3-cdn

Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn

aknysh

maybe ^ will help

aknysh
cloudposse/terraform-aws-s3-log-storage

This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail - cloudposse/terraform-aws-s3-log-storage

So after some research :

I never had a problem creating the bucket

but I did one mistake

I used :

sse_algorithm          = "aws:kms"

without passing a key

and that somehow chose a KMS key from who knows where, and my objects are all encrypted with a key that I don’t own

that is not the module doing that, that is the API or terraform doing it

my guess is that the objects are encrypted using the origin’s KMS key?

like the aws log delivery service ?

I have no clue

I want to test creating buckets this way

by hand and see what happens

the flow logs were successfully written to the bucket, so that is why I thought it was weird that it can encrypt but I can’t decrypt the objects

I created another bucket using default AES-256 server-side encryption with the managed AWS key; the flow logs were correctly written to the bucket and I was able to download them without any issues, all using the same module, just with the default encryption

so I tried earlier a new bucket same module using sse_algorithm = "aws:kms"

not passing a custom key and it used the default kms key

so I have no clue WTH

the other thing I could try is to use terraform 0.11 use the old version of the module and try again

maybe is a provider bug

I think it could be related to this :

After you enable default encryption for a bucket, the following encryption behavior applies:

    There is no change to the encryption of the objects that existed in the bucket before default encryption was enabled.

    When you upload objects after enabling default encryption:

        If your PUT request headers don't include encryption information, Amazon S3 uses the bucket's default encryption settings to encrypt the objects.

        If your PUT request headers include encryption information, Amazon S3 uses the encryption information from the PUT request to encrypt objects before storing them in Amazon S3.

    If you use the SSE-KMS option for your default encryption configuration, you are subject to the RPS (requests per second) limits of AWS KMS. For more information about AWS KMS limits and how to request a limit increase, see AWS KMS limits.
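
If the goal is to stay on SSE-KMS but control which key encrypts the objects, the usual fix is to pass the key explicitly instead of letting the PUT fall back to a default key you may not own. A sketch, assuming the pinned module version exposes a kms_master_key_arn variable (check its variables.tf):

module "s3_bucket" {
  source             = "git::https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.6.0"
  # ... other inputs as above ...
  sse_algorithm      = "aws:kms"
  kms_master_key_arn = aws_kms_key.logs.arn   # a key in your account, so you can decrypt
}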

fw123

any terragrunt experts here that I can PM a question to?

2019-11-07

Saichovsky

Hi everyone

I have a burning question on terraform <> AWS

resource "aws_kinesis_analytics_application" "app" {
  name = var.analytics_app_name
  tags = local.tags

  // TODO: need to make inputs & outputs dynamic -- cater for cases of multiple columns, e.t.c.
  //  inputs {
  //    name_prefix = ""
  //    "schema" {
  //      "record_columns" {
  //        name = ""
  //        sql_type = ""
  //      }
  //      "record_format" {}
  //    }
  //  }
  //  outputs {
  //    name = ""
  //    "schema" {}
  //  }
}

Here’s my quandary: the number of columns is not fixed

Saichovsky

The issue here is how to accommodate different inputs (having different column types or numbers, etc)

aknysh
Expressions - Configuration Language - Terraform by HashiCorp

The Terraform language allows the use of expressions to access data exported by resources and to transform and combine that data to produce other values.

aknysh
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

aknysh

@Saichovsky ^
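
The links above point at 0.12 dynamic blocks; a hedged sketch of what that looks like for a variable number of columns (var.record_columns is an assumed list of objects with name and sql_type; the stream source and record_format details are trimmed, so this is not apply-able as is):

resource "aws_kinesis_analytics_application" "app" {
  name = var.analytics_app_name
  tags = local.tags

  inputs {
    name_prefix = "input_"
    # kinesis_stream / kinesis_firehose source block trimmed

    schema {
      dynamic "record_columns" {
        for_each = var.record_columns
        content {
          name     = record_columns.value.name
          sql_type = record_columns.value.sql_type
        }
      }

      record_format {
        # mapping_parameters trimmed
      }
    }
  }
}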

hi guys, does the cloudposse ECS cluster module come with a draining Lambda function as well?

what module exactly ?

Mike Whiting

@Erik Osterman thanks

Julio Tain Sueiras
ellisdon-oss/terraform-provider-azuredevops

Terraform Provider for AzureDevOps. Contribute to ellisdon-oss/terraform-provider-azuredevops development by creating an account on GitHub.

Erik Osterman

@Jake Lundberg (HashiCorp) maybe something for you…

Erik Osterman

I believe you were saying a lot of your customers were asking about integration of terraform cloud with azure devops

Erik Osterman

@Julio Tain Sueiras has something for you

Julio Tain Sueiras

@Erik Osterman in this case it’s different: A) https://www.hashicorp.com/blog/announcing-azure-devops-support-for-terraform-cloud-and-enterprise/ and B) the one that Jake mentioned is probably TF Cloud hooking into Azure DevOps Repos

Announcing HashiCorp Terraform Cloud and Enterprise Support for Azure DevOps

Today we’re pleased to announce Azure DevOps Services support for HashiCorp Terraform Cloud and HashiCorp Terraform Enterprise. This support includes the ability to link your Terra…

Julio Tain Sueiras

whereas the provider’s purpose is to manage all components of Azure DevOps with terraform

1
cabrinha

when I have a parameter like this:

// Add environment variables.
  environment {
    variables {
      SLACK_URL = "${var.slack_url}"
    }
  }

How can I parameterize it?

cabrinha

the value of “variables” needs to be some kind of dynamic map

cabrinha

with any number of key/value pairs

cabrinha

When I try:

  // Add environment variables.
  environment {
    variables {
      "${var.env_vars}"
    }
  }

… where var.env_vars is a map … I get an error

cabrinha

key '"${var.env_vars}"' expected start of object ('{') or assignment ('=')

cabrinha
Map variables as parameter · Issue #10437 · hashicorp/terraform

Hi there, I’m trying to create a module which has a map variable as a parameter and use it as a parameter to create a resource. As the map can be dynamic, I don’t want to need to specify any of the map…
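
For what it’s worth, variables on the Lambda environment block is an argument, not a nested block, so it takes an = assignment; that’s why the nested-braces form above errors out. A sketch in 0.11 syntax (0.12 drops the interpolation quotes):

  environment {
    variables = "${var.env_vars}"   # 0.12: variables = var.env_vars
  }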

vitaly.markov

I’ve found another non-obvious behavior: when we create two or more node groups (https://github.com/cloudposse/terraform-aws-eks-workers), traffic between pods in different node groups is not allowed due to SG misconfiguration,

So podA in ng-1 cannot send requests to podB in ng-2

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

aknysh

Nice find @vitaly.markov, thanks for testing again. It’s not a misconfiguration; we just didn’t think about that case when we created the modules

aknysh

but maybe we can use the additional SGs to connect the two worker groups together https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/variables.tf#L410

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

aknysh

by adding the SG from group 1 to the additional SGs of group 2, and vice versa

aknysh

or we can even place all of the groups into just one SG https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/variables.tf#L398

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

kskewes

Mm. I didn’t think about the example. For blue/green node pools we created our own security group, put all pools in it, and then specified it in the master. This way there was no chicken-and-egg reliance on pool module outputs to construct SG members.

kskewes

Happy to share this if useful? Would be much nicer with for each module support.

aknysh

yea that’s a good solution - create a separate SG, put all worker pools into it, and then connect to the master cluster

aknysh

@kskewes please share, it would be worth adding to the examples

kskewes

Sure. Reminder set to post gist.

vitaly.markov

@kskewes I wasted time before I found the root cause; I used to create EKS using eksctl, so I didn’t expect this behavior

vitaly.markov

I think it would be better to add an example to the repo

kskewes

Me too mate. Spent day or so tracking down. When back at computer will post. We can slim it down for repo.

kskewes

Our eks.tf file with:

  1. vpc and subnets from remote state (we tend to use lots of state files to limit blast radius).
  2. single security group for all nodes so nodes and their pods can talk to each other and no chicken/egg with blue/green/etc. Note we left out a couple rules from module’s SG.
  3. blue/green worker pools.
  4. worker pool per AZ (no rebalancing and we need to manage AZ fault domain).
  5. static AMI selection (deterministic). Note, we are dangerously close to character limits (<64 or 63?) for things like role ARNs, etc.

Workflow is to:

  1. terraform apply (blue pool)
  2. …. update locals (ie: green pool ami and enable)
  3. terraform apply (deploys green pool alongside blue pool)
  4. … drain.
  5. …. update locals (ie: blue pool ami and disable)
  6. terraform apply (deletes blue pool, leaving green only)
kskewes

Please sing out if any questions or suggestions. Will PR custom configmap location soon (haven’t written but we want).

Erik Osterman

I think the evolution of this could be even simpler. Always keep blue and green “online”. Only change the properties of blue and green with terraform. Automate the rest entirely using a combination of taints and cluster autoscaler. The cluster autoscaler supports scaling down to 0.

Erik Osterman

So basically you can taint all the nodes in blue or green and that would trigger a graceful migration of pods to one or the other

Erik Osterman

Then Kubernetes will take care of scaling down the inactive pool to zero.

Erik Osterman

kubectl taint node -l flavor=blue dedicated=true:PreferNoSchedule

Erik Osterman

This would then scale down the blue pool

Erik Osterman

@oscar you might dig this

kskewes

That’s a great idea. My colleague has done an internal PR to stand up the cluster autoscaler and we’ll discuss this. New to EKS, so it’s a bit of a lift and shift.

kskewes

Thanks!

oscar

For ref, we do this:

module "eks_workers_1" {
  source = "git:<i class="em em-<https"></i>//github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.10.0>"

  use_custom_image_id = true
  image_id            = data.aws_ssm_parameter.eks_ami.value

  namespace                          = var.namespace
  stage                              = var.stage
  name                               = "1"
...
  allowed_security_groups            = [module.eks_workers_2.security_group_id, module.eks_workers_3.security_group_id]
...
vitaly.markov

@oscar I tried to use allowed_security_groups and it broke all SG setup, I’ll create an issue later

oscar

I’ve not verified it ‘works’, but it certainly doesn’t ‘break’ on terraform apply

oscar

It’s been like this for some time

kskewes

Thanks Oscar. Problem for us with security groups is if the color is disabled then no security group exists and terraform complains. This way we create a single group up front.

kskewes

Teammate has done a great job with the cluster autoscaler so we’ll end up doing Erik’s suggestion soon. Thanks for that!

Erik Osterman

Dude check out what was just announced! https://github.com/aws/aws-node-termination-handler

aws/aws-node-termination-handler

A Kubernetes DaemonSet to gracefully handle EC2 Spot Instance interruptions. - aws/aws-node-termination-handler

Erik Osterman

This might make things even easier

Erik Osterman

NOTE:
If a termination notice is received for an instance that’s running on the cluster, the termination handler begins a multi-step cordon and drain process for the node.

Erik Osterman

it’s not spot specific

kskewes

Yeah, we want to roll out PDBs everywhere so we don’t just take down our apps because we drained too fast

kskewes

thanks for sharing though, we’d been chatting about it in house when we saw it

There is a Microsoft effort for a terraform provider for azure devops. Would be good to consolidate efforts https://github.com/microsoft/terraform-provider-azuredevops

microsoft/terraform-provider-azuredevops

Terraform provider for Azure DevOps. Contribute to microsoft/terraform-provider-azuredevops development by creating an account on GitHub.

2019-11-06

Laurynas

Hi, how can I pass environment variables from a file using https://github.com/cloudposse/terraform-aws-ecs-container-definition

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

Mike Whiting

I’ve managed to deploy multiple task definitions and services to an AWS ECS cluster using terraform, but now am stumped at simple service discovery. I don’t want to use service discovery through Route53, as this seems overkill for now when the deployed services (docker containers) exist on the same host ECS EC2 instance. Does anyone have any recommendations beyond using docker inspect and grep to discover the subnet IP address?

Erik Osterman

Consul?

cabrinha

Istio? AppMesh? Consul is really good though, I would check that out.

cabrinha

has anyone gotten terraform 0.12 syntax checking and highlighting to work with VSCode?

Erik Osterman
terraform

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Erik Osterman

not sure if helpful

Erik Osterman

also, there’s someone in this channel that has been working on something for vscode/HCL2 I think

loren

@Julio Tain Sueiras

Erik Osterman

thanks @loren!

Hugo Lesta

Hello, do you know any vscode extension that work well with v0.12 syntax?

Erik Osterman

but i can’t remember who it was

MattyB

https://github.com/mauve/vscode-terraform/issues/157#issuecomment-547690010 I’ve had it working for a couple of weeks. It’s stable but not 100% functionality

MattyB

crud, i grabbed the wrong link. it’s the next reply

Julio Tain Sueiras

What’s up? @Erik Osterman

Erik Osterman

has anyone gotten terraform 0.12 syntax checking and highlighting to work with VSCode?

Julio Tain Sueiras

@MattyB will fix most of the issues, been busy with preparing for release of the terraform provider for azuredevops

Is this the same provider Microsoft is working on? https://github.com/microsoft/terraform-provider-azuredevops

microsoft/terraform-provider-azuredevops

Terraform provider for Azure DevOps. Contribute to microsoft/terraform-provider-azuredevops development by creating an account on GitHub.

MattyB

No worries. It does what I need. Thanks!

Julio Tain Sueiras

Anybody here use azuredevops , just curious

I used to. No complaints.

I use it as well and have been pretty happy with it

vitaly.markov

hi @aknysh @Igor Rodionov I’ve been trying to set up an EKS cluster using this module to set up node groups https://github.com/cloudposse/terraform-aws-eks-workers. How can I set a list of labels on the kube nodes? thank you in advance

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

aknysh

by list of labels you mean tagging?

vitaly.markov

tags are attached only to the EC2 instances

kskewes

Good timing, we tried doing this but nodes wouldn’t join cluster.

  # FIXME
bootstrap_extra_args = "--node-labels=example.com/worker-pool=01,example.com/worker-pool-colour=blue"

Don’t have SSH yet so commented out until can debug
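
That failure mode is consistent with bootstrap.sh only parsing its own flags: --node-labels isn’t one of them, so it has to ride in via --kubelet-extra-args (the same idea the PR further down this page implements with the KUBELET_EXTRA_ARGS env var). A hedged sketch, quoting included:

bootstrap_extra_args = "--kubelet-extra-args '--node-labels=example.com/worker-pool=01,example.com/worker-pool-colour=blue'"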

aknysh
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

aknysh
90 days of AWS EKS in Production - kubedex.com

Come and read 90 days of AWS EKS in Production on http://Kubedex.com. The number one site to Discover, Compare and Share Kubernetes Applications.

aknysh

@vitaly.markov @kskewes thanks for testing the modules

aknysh

it would be super nice if we could add your changes to kubelet params to the test https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

and here in the automatic Terratest we have Go code to wait for all worker nodes to join the cluster https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/test/src/examples_complete_test.go#L81

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

which was tested many times

aknysh

but we did not test it with node labels

aknysh

would be nice to add it to the test and test automatically

aknysh
TestExamplesComplete 2019-11-06T18:55:51Z command.go:53: Running command terraform with args [output -no-color eks_cluster_security_group_name]
TestExamplesComplete 2019-11-06T18:55:51Z command.go:121: eg-test-eks-cluster
Waiting for worker nodes to join the EKS cluster
Worker Node ip-172-16-100-7.us-east-2.compute.internal has joined the EKS cluster at 2019-11-06 18:56:05 +0000 UTC
Worker Node ip-172-16-147-3.us-east-2.compute.internal has joined the EKS cluster at 2019-11-06 18:56:06 +0000 UTC
All worker nodes have joined the EKS cluster
vitaly.markov

@aknysh I’ve found another bug (I think): by default terraform creates the control plane with the kubernetes version specified in tf vars, but the node group will be provisioned with the default version provided by the AMI, currently 1.11

vitaly.markov

I specify another variable eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*" to use the correct AMI

vitaly.markov
+ kubectl version --short
Client Version: v1.16.0
Server Version: v1.14.6-eks-5047ed
vitaly.markov
+ kubectl get nodes
NAME                                              STATUS   ROLES    AGE   VERSION
ip-172-16-109-48.eu-central-1.compute.internal    Ready    <none>   12m   v1.11.10-eks-17cd81
ip-172-16-116-136.eu-central-1.compute.internal   Ready    <none>   99m   v1.11.10-eks-17cd81
ip-172-16-143-206.eu-central-1.compute.internal   Ready    <none>   99m   v1.11.10-eks-17cd81
ip-172-16-157-119.eu-central-1.compute.internal   Ready    <none>   12m   v1.11.10-eks-17cd81
aknysh

I think EKS currently supports 1.14 only

aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

aknysh

oh you are talking about worker nodes?

aknysh

your changes would be good

aknysh

but still, I think EKS supports 1.14 only (last time I checked)

aknysh

is most_recent not working to select the latest version? https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/main.tf#L141

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

aknysh
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

vitaly.markov


is most_recent not working to select the latest version?

vitaly.markov

yeap, it doesn’t work correctly

kskewes

Correct, it does not. I haven’t tested the suggested change above; we pin the AMI.

kskewes

Just utilised custom security groups, so glad this is available in workers module.

vitaly.markov

@aknysh I’ve opened a PR https://github.com/cloudposse/terraform-aws-eks-workers/pull/27 that solves the issue with extra kubelet args

fix(bootstrap-extra-args): Use KUBELET_EXTRA_ARGS env variable to pass extra kubelet args by vymarkov · Pull Request #27 · cloudposse/terraform-aws-eks-workers

I’ve faced an issue passing node labels to bootstrap.sh; the bootstrap cannot complete when we pass extra kubelet args as an argument, and thus the node won’t join the kubernetes cluster….

aknysh

thanks will check

kskewes

How is everyone breaking up their Terraform stack for an ‘environment’? We have split out by provider (aws/gitlab/etc), but with a migration to AWS I am looking at breaking up our AWS resources within an environment. This means multiple state files, some copying of variables files, and a dependency order for standing up the whole environment, but it does limit the blast radius of changes. We put multiple state files into the same s3 bucket using the CP s3 backend module. (thank you!) Very rough tree output in thread.

kskewes
$ tree
. (production)
├── ap-southeast-2
│   ├── eks
│   │   ├── eks.tf
│   │   ├── manifests
│   │   │   └── example-stg-ap-southeast-2-eks-01-cluster-kubeconfig
│   │   ├── provider.tf -> ../provider.tf
│   │   ├── variables-env.tf -> ../../variables-env.tf
│   │   └── variables.tf -> ../variables.tf
│   ├── gpu.tf
│   ├── managed_services
│   │   ├── aurora.tf
│   │   ├── elasticache.tf
│   │   ├── mq.tf
│   │   ├── provider.tf -> ../provider.tf
│   │   ├── s3.tf
│   │   ├── variables-env.tf -> ../../variables-env.tf
│   │   └── variables.tf -> ../variables.tf
│   ├── provider.tf
│   ├── variables.tf
│   └── vpc
│       ├── provider.tf -> ../provider.tf
│       ├── subnets.tf
│       ├── variables-env.tf -> ../../variables-env.tf
│       ├── variables.tf -> ../variables.tf
│       └── vpc.tf
├── us-west-2
│   ├── eks
│   │   ├── eks.tf
<snip>
└── variables-env.tf

Note we are a linux + macOS shop and use symlinks in terraform and ansible already.

we do it this way

var.environment = var.stage

a region = a variable

modules are created by functional pieces/aws products or product groups

ECS task module, ECS cluster module, ALB module, Beanstalk module

and every module takes var.environment, var.region as an argument from a .tfvars file

so no matter what region, the module will just work

if you are doing multiregion it is IMPERATIVE that you use a var.name that contains the region (this is always thinking from the point of view of using the cloudposse module structure)

otherwise things like IAM roles will collide, because IAM is a global service

use Regional s3 state buckets if you are going multiregion
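
A sketch of what that regional state wiring ends up looking like (bucket, key, and table names hypothetical):

terraform {
  backend "s3" {
    bucket         = "acme-terraform-state-us-west-2"   # one state bucket per region
    key            = "production/us-west-2/vpc/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-state-lock-us-west-2"
    encrypt        = true
  }
}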

kskewes

Cheers, I forgot to say we have modules in a separate git repo. This structure is for within a region and is done to minimise blast radius when running terraform apply. The individual files like mq.tf instantiate the mq module plus whatever else it needs.

ok, cool, yes, similar to what we have

kskewes

We overload stage with region to achieve that. I guess name would work as well.

we like to have a flat module structure where there are no subfolders, just many TF files floating around

kskewes

Eg. stage = "stg-ap-southeast-2"

same idea

cool

kskewes

Flat structure is nice for referencing everything, like global variables, hah.

kskewes

Have you ever merged two state files? If we wanted to collapse again that could be an option. Splitting out same I guess.

never

I will never try to do that

I think at that point I will prepare to do a destroy and add the resources to the other file and do an apply

kskewes

Yeah. Pretty gnarly. Redeploy better if possible.

kskewes

Think I’ll continue splitting environment into multiple directories to limit blast. Symlink around some vars.

Laurynas

Does it make sense to put the whole stack into a module and then use that module in test/uat/prod environments?

it depends

I guess if you are comfortable with having dbs, albs, instances, containers and a bunch of other stuff in one single terraform, it’s fine

you kinda need to think in terms of how your team will interact with that piece of code too

Laurynas

I will still separate deployment from infra but I’ll have db, alb and RDS in one module

Hi guys, quick question. Is it required to create an ECS cluster first, and then you can deploy an ECS service using the Fargate launch type on that cluster?

what is required ?

ECS cluster I mean

yes

sorry for clumsy grammar

so let’s say

ecs cluster, service, task, task definition

we provision ecs cluster with terraform

in that order

is there any specific option for ecs cluster ?

as we plan to use only the Fargate option for the ECS service later on

when I look at the ECS cluster, they say it has instances in the cluster

that’s not at the cluster level

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

we don’t want to follow the pattern of creating instances and using them as the cluster

Fargate can only run with up to 16 GB of memory and 10 GB of disk

no EBS mounts

if you need more then you need to use the EC2 launch type and manage your instances

ya

but for easy scaling and handling traffic up and down, is Fargate enough?

depends on your needs, but yes, it is easier

1 important thing I need to ask

and also 3x more expensive than EC2 type

it’s like a slimmed-down version of k8s running pods

is there any way we can use terraform to provision an ECS service without a specific image? It’s mandatory to have an image in ECR first, right?

no, it’s not

you can deploy a service that does not work

then deploy a new task def and it will work

oh I see

or a service with a correct task def but empty ECR repo
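
A common way to wire that up so terraform owns the service but the deploy tool owns the running revision is to register a placeholder task definition and then ignore drift on it; a sketch with hypothetical names:

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.placeholder.arn   # first real deploy replaces this
  launch_type     = "FARGATE"
  desired_count   = 1
  # network_configuration, load_balancer, etc. trimmed

  lifecycle {
    # let ecs-deploy / CI register new task definition revisions without a terraform diff
    ignore_changes = [task_definition]
  }
}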

the reason why I ask those questions

is because

we’re planning to do CI/CD steps for infra.

the flow is: vpc -> everything including the ECS cluster -> ECS service

you think it can be achieved?

yes

almost forgot

have you ever had an issue with updating the ECS service itself?

my issue is that the ECS service name and the container name sometimes make terraform get stuck on reapply. Something related to lambda_live_look_up on the old task definition

no , I have not seen that

we do not use tf to deploy a new task def

we use ecs-deploy python script (google it)

ya

we follow that tool too

but you know what the big issue with ecs-deploy is?

it’s keyed on the container image URL, for example: 1233452.aws.dk..zzz/test:aaa

so what happens when we want to apply a new image: 1233452.aws.dk..zzz/exam1:aaa

it gets stuck forever.

2019-11-05

oscar
data "aws_ssm_parameter" "eks_ami" {
  name = "/aws/service/eks/optimized-ami/${var.kubernetes_version}/amazon-linux-2/recommended/image_id"
}
module "eks_workers_1" {
  source = "git:<i class="em em-<https"></i>//github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.10.0>"

  use_custom_image_id = true
  image_id            = data.aws_ssm_parameter.eks_ami.value
kskewes

oh good thinking, sorry I didn’t look at other data sources.

oscar

try this

1
cabrinha

howdy all… I’m trying to use cidrsubnets() to take a /16 and chop it up into /24’s for my subnets…

cabrinha

however, I think that I may need to hardcode the cidr blocks I want to use for my subnets

cabrinha

because:

A: I don't know if I'll need to add more subnets or AZs in the future
B: I don't think that cidrsubnets() will accurately chop up the cidr blocks into my subnets consistently
cabrinha

is there a better way to go about this other than hard coding the /24’s I want to assign to certain subnets?

cabrinha

I’m guessing there is some magic I’m missing with cidrsubnets(var.vpc_cidr_block, N, length(var.aws_azs)) or something like that
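
cidrsubnet() (the singular function, which predates 0.12.6’s cidrsubnets()) is deterministic per index, so nothing needs hardcoding as long as each subnet keeps a stable index; a sketch:

locals {
  azs = ["us-west-2a", "us-west-2b", "us-west-2c"]

  # /16 + 8 new bits = /24; index i always yields the same /24
  private_cidrs = [for i, az in local.azs : cidrsubnet(var.vpc_cidr_block, 8, i)]

  # offset the index to reserve a disjoint range for public subnets
  public_cidrs = [for i, az in local.azs : cidrsubnet(var.vpc_cidr_block, 8, i + 100)]
}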

Erik Osterman

take a look at our subnet modules

cabrinha
cloudposse/terraform-aws-multi-az-subnets

Terraform module for multi-AZ public and private subnets provisioning - cloudposse/terraform-aws-multi-az-subnets

cabrinha

This one looks interesting

Erik Osterman

Yes, we use that one all the time

Erik Osterman

we factor in a “max subnets” fudge factor

Erik Osterman

so it will slice and dice it based on the max

Erik Osterman

but only allocate the number you need now
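
A sketch of that fudge factor, assuming the module variable is max_subnets and the ref is whatever you pin (check the module’s variables.tf):

module "private_subnets" {
  source             = "git::https://github.com/cloudposse/terraform-aws-multi-az-subnets.git?ref=tags/x.y.z"
  namespace          = var.namespace
  stage              = var.stage
  name               = "private"
  availability_zones = var.aws_azs
  vpc_id             = aws_vpc.this.id
  cidr_block         = var.vpc_cidr_block
  type               = "private"
  max_subnets        = 6   # carve address space for 6 even though only 3 AZs exist today
}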

Callum Robertson

Hey team, is there a way to pass the assumed role credentials for the provider into a null resource local exec provisioner?

Callum Robertson

@Erik Osterman have you ever done anything like this?

Erik Osterman

my gut tells me it should just work

Erik Osterman

the environment variables are inherited

Erik Osterman

ah, but we assume before we call terraform

Erik Osterman

i’m guessing you’re assuming in terraform

Erik Osterman

then no-go

Callum Robertson

assuming in terraform

Callum Robertson

Uhhh!

Callum Robertson

painful

Erik Osterman

not sure how you’ll work around that one

Erik Osterman

fwiw, we assume role with vault

Erik Osterman

then call terraform

Callum Robertson

yeah, right now I’m assuming a role with a provider block that uses the session token I have in my local machines env

Callum Robertson

so I’m assuming it uses my local environment variables and not those assumed in terraform?

1
Erik Osterman

@Callum Robertson look what @aknysh just did

1
Erik Osterman
Allow installing external packages. Allow assuming IAM roles by aknysh · Pull Request #33 · cloudposse/terraform-aws-eks-cluster

what: Update provisioner "local-exec": Optionally install external packages (AWS CLI and kubectl) if the workstation that runs terraform plan/apply does not have them installed. Optionally…

1
Erik Osterman

for getting assumed roles to work in a local-exec

1
Erik Osterman
 aws_cli_assume_role_arn=${var.aws_cli_assume_role_arn}
      aws_cli_assume_role_session_name=${var.aws_cli_assume_role_session_name}
      if [[ -n "$aws_cli_assume_role_arn" && -n "$aws_cli_assume_role_session_name" ]] ; then
        echo 'Assuming role ${var.aws_cli_assume_role_arn} ...'
        mkdir -p ${local.external_packages_install_path}
        cd ${local.external_packages_install_path}
        curl -L https://github.com/stedolan/jq/releases/download/jq-${var.jq_version}/jq-linux64 -o jq
        chmod +x ./jq
        source <(aws --output json sts assume-role --role-arn "$aws_cli_assume_role_arn" --role-session-name "$aws_cli_assume_role_session_name" | jq -r '.Credentials | @sh "export AWS_SESSION_TOKEN=\(.SessionToken)\nexport AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) "')
        echo 'Assumed role ${var.aws_cli_assume_role_arn}'
      fi
1
aknysh

we’ve been testing the EKS modules on Terraform Cloud, and needed to add two things:

1
aknysh
  1. Install external packages e.g. AWS CLI and kubectl since those are not present on TF Cloud Ubuntu workers
1
aknysh
  2. If we have a multi-account structure with users in the identity account but want to deploy to e.g. the staging account, we provide the access key of an IAM user from the identity account plus the role ARN to assume in order to access the staging account. TF itself assumes that role, but the AWS CLI also assumes it (see above) to get the kubeconfig from the EKS cluster and then run kubectl to provision the auth configmap
1
Callum Robertson

epic work @aknysh!

1
Callum Robertson

I’ll test this out over the weekend and report

1
aknysh

the PR has been merged to master

1
aknysh
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

1
Callum Robertson
Pass credentials to local-exec OR extract credentials via properties · Issue #8242 · terraform-providers/terraform-provider-aws

Community Note: Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request. Please do not leave "+1" or "me to…

Callum Robertson

Found this, sounds like an awesome idea

Erik Osterman
    environment {
      AWS_ACCESS_KEY_ID = "${data.aws_caller_identity.current.access_key}"
      AWS_SECRET_ACCESS_KEY = "${data.aws_caller_identity.current.secret_key}"
    }
Erik Osterman

clever

Erik Osterman

didn’t know they were available like that

Callum Robertson

they’re not right now =[

Callum Robertson

Hoping Hashicorp review the pull request and get it added in

Callum Robertson

looks like the secrets get persisted in the statefile however, so will need some workaround on that

Erik Osterman

what are you trying to solve? … using local exec

Callum Robertson

aws s3 sync

Callum Robertson

for bucket objects

Erik Osterman

aha

Erik Osterman

yea, i’ve had that pain before

Callum Robertson

Looks like I’ll have to keep this separate from my tf configuration

Callum Robertson

and put it into a build pipeline of sorts

Erik Osterman

do you strictly need to use aws s3 sync?

Callum Robertson

maybe just a stage that jenkins runs

Erik Osterman

e.g. non deterministic list of files

Callum Robertson

yeah, it’s just a sprawling directory of web assets

Callum Robertson

the TF resource for bucket upload only handles a single object, from memory (this was a while ago)

Erik Osterman

sooooo

Erik Osterman

what about using local exec to build a list of files?

Erik Osterman

then use raw terraform to upload them

Erik Osterman

e.g. call find

Callum Robertson

could you give me an example?

loren

You could try using local-exec to build an aws config profile with the assume role config, then pass the --profile option to aws-cli?
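
That profile trick is plain AWS config, nothing terraform-specific; a sketch with hypothetical profile and role names:

# ~/.aws/config (written by a local-exec or by hand)
[profile deploy]
role_arn       = arn:aws:iam::123456789012:role/deploy
source_profile = default

# then, inside the provisioner:
aws s3 sync ./assets s3://my-bucket --profile deploy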

this is what I do to assume role :

aws-vault exec PROFILE -- terraform apply -var 'region=us-east-2' -var-file=staging.tfvars 

in my local machine

providers.conf

provider "aws" {
  region = var.region

  # Comment this if you are using aws-vault locally
  assume_role {
    role_arn = var.role_arn
  }
}

jenkins with an instance profile uses straight terraform

the role arn is passed as a var

pretty easy

Callum Robertson

yeah, I’m using AWS-VAULT as well but right now it assumes security account credentials with MFA to get the state file and lock in the state bucket

Callum Robertson

I’ve just made this a stage that my jenkins host runs with its instance profile, as you suggested @PePe

Callum Robertson

Thanks all!

2019-11-04

Cloud Posse
05:03:37 PM

Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Nov 13, 2019 11:30AM.
Register for Webinar: https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
#office-hours (our channel)

kskewes

hey everyone, couple questions about the EKS module. https://github.com/cloudposse/terraform-aws-eks-workers

  1. The default ‘use latest image’ filter and data source doesn’t seem to work for me. Using var.region value of "ap-southeast-2" results in 1.11 nodes with a 1.14 cluster (I set cluster version to 1.14). Any ideas? This works: aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id --region ap-southeast-2 --query "Parameter.Value" --output text
  2. I think we will generally set AMIs, and in doing so I see the launch template has updated but the new version is not the default and the nodes aren’t rolling automatically (updated in place, dummy cluster). Is this expected behaviour? If I terraform destroy -target=workerpool1 followed by an apply I get the newer node version.
  3. Will PR custom configmap template once get cluster going.
  4. Have egress from an alpine pod to 8.8.8.8:53 but don’t seem to have DNS in cluster. Using multi-az subnets and tagging private (nodes), public (nlb/alb), shared, owned..
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

2019-11-03

winter

If I renamed a folder, terraform is creating a new resource from that folder. How do I fix this in the state?

Mateusz Kamiński

How do you use this folder in terraform? Module?

Mateusz Kamiński

If module then path should not matter as long as module name stays same

winter

Actually this folder stores a terragrunt file

Mateusz Kamiński

Oh, so it is terragrunt; I haven’t used it, so I won’t be much help
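
If the terragrunt state key is derived from the folder path (the common path_relative_to_include() setup), renaming the folder points terragrunt at a new, empty state. One hedged fix is to copy the old state object to the new key before the next plan (bucket and paths hypothetical):

aws s3 cp \
  s3://my-tf-state/old-folder-name/terraform.tfstate \
  s3://my-tf-state/new-folder-name/terraform.tfstate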

Chris Fowles

in case anyone else gets a requirement to convert TitleCaseText to kebab-case-text in terraform, here you go:

lower(join("-", flatten(regexall("([A-Z][a-z]*)", "ThisIsTitleCaseStuff"))))
5
Chris Fowles

I don’t think I’ll ever need to do that again, but it felt a shame to waste

Bruce

Has anyone seen any good resources for using terratest (examples etc.) where it’s tied into a CI/CD pipeline?

Erik Osterman
GitHub

GitHub is where people build software. More than 40 million people use GitHub to discover, fork, and contribute to over 100 million projects.

Erik Osterman

They all use terratest with ci/cd (#codefresh)

Erik Osterman

basically, every terraform 0.12 (HCL2) module we have has terratest

Bruce

Thanks @Erik Osterman

2019-11-02

Erik Osterman

Interesting read on why terraform providers for Kubernetes and helm are less than ideal… see comment by pulumi developer. https://www.reddit.com/r/devops/comments/awy81c/managing_helm_releases_terraform_helmsman/?utm_source=share&utm_medium=ios_app&utm_name=iossmf

Managing Helm releases: Terraform, Helmsman, Helmfile, other?

Hey everyone, We are continuing to move more of our stack to Kube, specifically GKE, and have gone through a few evaloutions as to how we handle…

2
Erik Osterman

(It’s also a bit of an older post. Maybe some things have been addressed in later releases of the providers and 0.12.)

2019-11-01

Sharanya

Hey Guys, I have a quick question, if anyone has ever come across this: is there any way we can make our S3 bucket private and have CloudFront provide a URL that serves content from this private S3 bucket?

ruan.arcega
How to Use Bucket Policies and Apply Defense-in-Depth to Help Secure Your Amazon S3 Data | Amazon Web Services

Amazon S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives you flexibility in the way you manage data for cost optimization, access control, and compliance. However, because the service is flexible, a user could accidentally configure buckets in a manner that is not secure. For example, let’s […]
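
The usual pattern for this is a CloudFront origin access identity: the bucket stays private, and the OAI is the only principal allowed to read it. A sketch (aws_s3_bucket.site is a hypothetical bucket):

resource "aws_cloudfront_origin_access_identity" "this" {
  comment = "read-only access to the private site bucket"
}

data "aws_iam_policy_document" "cdn_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.site.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.this.iam_arn]
    }
  }
}

# and in the distribution's origin block:
#   s3_origin_config {
#     origin_access_identity = aws_cloudfront_origin_access_identity.this.cloudfront_access_identity_path
#   }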

jafow

acm certs DNS validation with assume role

jafow

here’s what I’m trying to do: create a cert in dev account with a subject alternative name (SAN) in “vanity” domains account

jafow

and validate both with DNS validation

jafow

here’s the problem I’m getting: Access Denied!

jafow

I do tf apply from my ops profile. It is a trusted entity on both my dev and domains accounts

You need two separate aws providers

Is that how you have it setup?

jafow
cloudposse/terraform-aws-vpc-peering-multi-account

Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account

jafow

yes! thanks @

jafow

here’s a gist with the least possible stuff; the acm parts are straight from tf docs

jafow

on looking at the vpc-peering module above, I notice there’s no profile attr, just the assume_role - I wonder if that’s my issue.

jafow

the Access Denied error indicates that the dev account’s assumed role does not have permissions to access the domains account’s route53 record.

jafow

but what I’d expect is that it should be assumed under the domains profile to validate the record, because that 2nd provider alias is used.

aws_route53_record.cert_validation_alt1 doesn’t have provider attribute set

I assume you want that to be aws.route53 as well

I can’t tell from the gist what your code is trying to do

Oh, I see, you’re using a SAN. I have never tried that before; does AWS allow multiple cert validations on the same cert?

I also don’t see you using the aws_acm_certificate_validation resource
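
For the cross-account SAN case, the shape that usually works is one validation record per domain_validation_options entry, each created under the provider that owns that zone, plus a single aws_acm_certificate_validation gated on all the FQDNs. A sketch (the role ARN, zone ID, SAN index, and the companion cert_validation record for the primary domain are hypothetical; in 2019-era providers domain_validation_options is a list):

provider "aws" {
  alias = "domains"
  assume_role {
    role_arn = var.domains_role_arn   # role in the vanity-domains account
  }
}

resource "aws_route53_record" "cert_validation_alt1" {
  provider = aws.domains              # this zone lives in the domains account
  zone_id  = var.vanity_zone_id
  name     = aws_acm_certificate.cert.domain_validation_options[1].resource_record_name
  type     = aws_acm_certificate.cert.domain_validation_options[1].resource_record_type
  records  = [aws_acm_certificate.cert.domain_validation_options[1].resource_record_value]
  ttl      = 60
}

resource "aws_acm_certificate_validation" "cert" {
  certificate_arn = aws_acm_certificate.cert.arn

  validation_record_fqdns = [
    aws_route53_record.cert_validation.fqdn,      # primary domain, in the dev account
    aws_route53_record.cert_validation_alt1.fqdn, # SAN, in the domains account
  ]
}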

jafow


aws_route53_record.cert_validation_alt1 doesn’t have provider…
yep @ that could be it!

jafow

you’re right I left that off of the gist; copy paste fail. updating now to better show the intent.
