#terraform (2020-04)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2020-04-08

sahil kamboj avatar
sahil kamboj

Hey guys, I just started working with Terraform. Can we store Terraform outputs in S3 so my services can fetch details from there (like ELB name, ID, etc.)? I don't want to use the AWS CLI.

Chris Fowles avatar
Chris Fowles

for other Terraform code you can use the terraform_remote_state data source to query a remote state
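
A minimal sketch of that pattern, assuming an S3 backend (bucket, key, and output names are illustrative):

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Other root modules can then read its outputs, e.g.:
# data.terraform_remote_state.network.outputs.elb_name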

Chris Fowles avatar
Chris Fowles

for things other than Terraform, you probably want to write values to something like Parameter Store

Chris Fowles avatar
Chris Fowles

if you’re really set on writing to s3 you could use this resource to write a file out to s3 as part of your terraform module https://www.terraform.io/docs/providers/aws/r/s3_bucket_object.html

AWS: aws_s3_bucket_object - Terraform by HashiCorp

Provides a S3 bucket object resource.
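
For example, a hedged sketch of publishing a couple of outputs as a JSON object in S3 (bucket, key, and resource names are illustrative):

resource "aws_s3_bucket_object" "elb_details" {
  bucket       = "my-service-config"
  key          = "terraform/elb.json"
  content_type = "application/json"

  # Services can fetch this object instead of calling the AWS CLI.
  content = jsonencode({
    elb_name = aws_elb.main.name
    elb_id   = aws_elb.main.id
  })
}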

sahil kamboj avatar
sahil kamboj

thnx Chris

setheryops avatar
setheryops

Does anyone know of a way to take an output and set a CircleCI env var as that output prepended with TF_VAR_? Other than a script, possibly… I'm really more wondering if there is a way in CircleCI, but thought I'd ask here since it's TF related too.

github140 avatar
github140

I don't know about CircleCI, however I'd use terraform-docs' JSON output and jq to prepare it.

Dave Barnum avatar
Dave Barnum

I have a question about terraform-null-label; It seems like the tags support maps. However, some Azurerm resources like azurerm_kubernetes_cluster support maps while others like azuread_service_principal support only lists. Is there any way to output (or input) lists of tags from null-label?

Gowiem avatar
Gowiem

I don’t know for certain, but I guess likely not since most CP modules are AWS focused and tags in AWS are exclusively (I think) key value pairs. This would probably be a good contribution to that module though.

David avatar
David

Right now, when we add new devs to our team, we add their name, email, github handle, etc. to a terraform input file, and then we have Atlantis provision their Datadog account, github invite to our org, AWS IAM User, etc.

I am looking into Okta so that our Ops side of things can create users across all our accounts, but have some concerns where it seems like support for AWS users and some other orgs would become harder to work with (SAML seems quite annoying compared to SSO, for example, and we like having IAM Users as we already have strategies for eliminating long-lasting credentials)

For those who have faced similar issues before, how did you decide which accounts to provision through Okta/terraform? If you have a well-oiled terraform/atlantis setup, do you feel that Okta is still worth pouring some money into?

Zach avatar

I definitely do not have a well oiled setup yet, but what I do like is that Okta lets me add MFA to things that otherwise don’t really support it. And have different MFA policies depending on what application they’re accessing.

Chris Fowles avatar
Chris Fowles

The other great advantage of SAML is a single point to de-provision access quickly. Auditing login events also becomes a lot easier.

I’d look at the decision by prioritized pragmatism - are things that Okta is going to do part of the current major priority of the business? Usually I see this kind of priority come around during a compliance event, like targeting ISO 27001 or PCI DSS compliance or an IPO.

2020-04-07

caretak3r avatar
caretak3r

question, I have a dir/repo setup like this:

/repo/
-- terraform.tfvars
-- .envrc

• .envrc has:

export TF_CLI_INIT_FROM_MODULE="git::https://github.com/***/terraform-root-modules.git//aws/tfstate-backend?ref=master"
export TF_CLI_PLAN_PARALLELISM=2
export TF_BUCKET="devops-dev-terraform-state"
export TF_BUCKET_REGION="us-east-1"
export TF_DYNAMODB_TABLE="devops-dev-terraform-state-lock"
source <(tfenv)

• terraform.tfvars has:

namespace="devops"
region="us-east-1"
stage="dev"
force_destroy="false"
attributes=["state"]
name="terraform-tfstate-backend"

But when I run terraform init it complains about a non-empty directory. I am trying to learn this before jumping to geodesic, but I don't know how to get the root module copied into my repo above. Am I doing something incorrectly?

error:

❯ terraform init
Copying configuration from "git::https://github.com/***/terraform-root-modules.git//aws/tfstate-backend?ref=master"...
Error: Can't populate non-empty directory
The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.
caretak3r avatar
caretak3r

^bump

androogle avatar
androogle

can you post the specific error

androogle avatar
androogle

looking at what you’ve supplied it doesn’t look like there’s anything for terraform to init

androogle avatar
androogle

do you have a main or any other kind of .tf with provider resources defined?

caretak3r avatar
caretak3r

@androogle i updated the original post

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The behavior of terraform changed in 0.12

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

0.11 used to allow initialization of a directory with dot files

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have a long thread discussing this here: https://sweetops.slack.com/archives/CB84E9V54/p1582930086018800

I’ve run into the

The target directory . is not empty, so it cannot be initialized with the
-from-module=... option.

issue trying to use Terraform 0.12 in a geodesic container. Anyone know of a workaround? In the meantime I'm going to create a temp dir, init into there, and move the files to the dir I want to use.

caretak3r avatar
caretak3r

@Erik Osterman (Cloud Posse) aww, damn. I was looking through the changelogs yesterday for anything about this specific behavior. Maybe just init into a new dir, and then copy the tfvars/envrc into the copied-module dir?

Abel Luck avatar
Abel Luck

How do you all work with a situation where you want a terraform module to spin up resources (instances in this case) in multiple regions?

Abel Luck avatar
Abel Luck

I’ve got my terraform root module (remote state in s3) and i want to create an app server in several regions, ideally in one terraform invocation.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This is a design decision that comes down to what you want to achieve with multi-region support

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

One consideration is to move your S3 state out of AWS so that it’s decoupled from AWS failures.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Otherwise, if you want to use something like the terraform-aws-s3-statebackend module, best-practice would be to have one state bucket per region to decouple state operations so they aren’t affected by regional failures.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Note that terraform cloud is not hosted on AWS, so using terraform cloud as the statebackend is an alternative.

randomy avatar
randomy

Ignoring whether that’s a good idea or not, if you want it in 1 terraform invocation and state file then you can:

• create a module with your instances

• call your module for each region, passing a region-specific aws provider into the module

randomy avatar
randomy

See the first example of https://www.terraform.io/docs/configuration/modules.html#passing-providers-explicitly where they call a module and pass a region-specific provider into the module as the default aws provider for that module

Modules - Configuration Language - Terraform by HashiCorp

Modules allow multiple resources to be grouped together and encapsulated.
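
A minimal sketch of that pattern (module source, regions, and names are illustrative):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "eu"
  region = "eu-west-1"
}

module "app_us" {
  source = "./modules/app-server"
  # Uses the default (us-east-1) aws provider.
}

module "app_eu" {
  source = "./modules/app-server"

  # Pass the region-specific provider in as the module's default "aws" provider.
  providers = {
    aws = aws.eu
  }
}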

Abel Luck avatar
Abel Luck

Thanks! I remember now I’ve used explicit providers before when using letsencrypt staging and production api servers at the same time

Abel Luck avatar
Abel Luck


Ignoring whether that’s a good idea or not,
I’d be happy to hear your thoughts on why it’s not a good idea.

randomy avatar
randomy

I wouldn’t say it’s not a good idea, just that there are trade offs. It’s mainly to do with “blast radius” of changes, and what happens if the main region fails. It is probably fine to do what you’ve proposed though.

sarkis avatar
sarkis
mvisonneau/tfcw

Terraform Cloud Wrapper. Contribute to mvisonneau/tfcw development by creating an account on GitHub.


2020-04-06

PePe avatar

Hi, any idea why this https://github.com/cloudposse/terraform-aws-s3-bucket/blob/master/main.tf has lifecycle rules only for versioned buckets?

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Cloud Posse avatar
Cloud Posse
04:00:06 PM

:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.

This is an opportunity to ask us questions on terraform and get to know others in the community on a more personal level. Next one is Apr 15, 2020 11:30AM.
Register for Webinar
slack #office-hours (our channel)

PePe avatar
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@maxim is back, so we can take a look at some things

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

let’s move PR requests to #pr-reviews

PePe avatar

ohh cool, sorry, I didn't know that channel existed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it’s pretty new

curious deviant avatar
curious deviant

Another Terraform best practices question… I have a multi-account setup wherein I create one environment type (dev, test, stage) per AWS account. What would be a preferred strategy for storing remote state in the S3 backend? I am currently using one bucket per env to store state, and the state bucket resides in the same account as the infrastructure being spun up by Terraform. Someone on the team recommended using one state bucket in the shared service account for all environments. They just want to be able to see all state files in one bucket. While it's technically feasible, I think this adds additional complexity (cross-account setup) without any real benefit. What do folks think?

androogle avatar
androogle

I don't know if it's considered best practice, but I use one bucket per account and use workspaces for environments. I switch between accounts with -backend-config.

androogle avatar
androogle

I’ve got a shell function to switch back and forth:

tfswitch() {
  export AWS_PROFILE=$1
  terraform init -reconfigure -backend-config=backends/$1.tf
}
androogle avatar
androogle

I guess I'd also be hesitant to have the prod S3 state bucket share a space with the dev S3 state bucket, and the IAM policies for both state writing and cross-account access get complex to manage

curious deviant avatar
curious deviant

I agree… those are my concerns too.

androogle avatar
androogle

additionally if you’re operating as a team you’ll still probably need separate dynamodb lock tables for each env

curious deviant avatar
curious deviant

ok sure.. I haven’t used them thus far but I can take a look to see how that would work out.

Mikael Fridh avatar
Mikael Fridh

I’m using one single s3 bucket, one single dynamo table for maybe 30 or so state files (30 “stacks”)

androogle avatar
androogle

how do you use 1 dynamo table for multiple states?

androogle avatar
androogle

is it just an item per state in the same table and you specify it explicitly in the backend config?

Mikael Fridh avatar
Mikael Fridh

my backend.config is the same in ALL stack folders:

terraform {
  backend "s3" {
    profile        = "aws-profile"
    region         = "aws-region"
    bucket         = "s3-bucket"
    dynamodb_table = "my-infra-terraform-locks"
  }
}
androogle avatar
androogle

then person B making a change to state B has to wait until person A making a change to state A is finished?

androogle avatar
androogle

if they’re all using the same lockfile?

Mikael Fridh avatar
Mikael Fridh

then I have a little Makefile by which I do everything … make plan for example.

That sets the key for each folder based on the current path I’m in:

so if I’m in repo/terraform/infra/battlefield/bl-prod-mesos:

it will do:

echo key = "infra/terraform/b9d-infra/infra/battlefield/bl-prod-mesos/terraform.tfstate" > .terraform/backend.conf
Mikael Fridh avatar
Mikael Fridh

I could also have hardcoded the above in all the individual folders… but so far I never did. The Makefile does everything the same way every time.
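
For reference, the hardcoded equivalent of what the Makefile generates is just the same partial backend block with the key filled in (values taken from the snippets above):

terraform {
  backend "s3" {
    profile        = "aws-profile"
    region         = "aws-region"
    bucket         = "s3-bucket"
    dynamodb_table = "my-infra-terraform-locks"

    # The Makefile writes this per folder; each state gets its own key,
    # and DynamoDB locks are scoped per key, so one table serves them all.
    key = "infra/terraform/b9d-infra/infra/battlefield/bl-prod-mesos/terraform.tfstate"
  }
}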

Abel Luck avatar
Abel Luck


They just want to be able to see all state files in one bucket.
I question that.. why do you need to see the state files? Smells fishy.

We have one account per environment, generally speaking, and one state bucket per account. Sometimes we end up with a second or third state bucket in an account for a different client project; that's usually just for small projects and only in the dev/test account.

Shawn Petersen avatar
Shawn Petersen

I have a syntax question. How do I use a colon ':' inside a conditional expression? I want to append a second variable (var.env) to the end of either option, like this: value = "${var.one != "" ? var.one:var.env : var.two:var.env}". What am I missing?

Mikael Fridh avatar
Mikael Fridh

regular string interpolation:

  • 0.12: value = var.one != "" ? "${var.one}:${var.env}" : "${var.two}:${var.env}"
Shawn Petersen avatar
Shawn Petersen

ah perfect. thanks a ton!

imiltchman avatar
imiltchman

0.11 looks off to me at first sight. I would create a local first with the : and then reference it for value here

imiltchman avatar
imiltchman

something like this:

locals {
  trueValue  = "${var.one}:${var.env}"
  falseValue = "${var.two}:${var.env}"
}
value = "${var.one != "" ? local.trueValue : local.falseValue}"
Mikael Fridh avatar
Mikael Fridh

I've simply forgotten 0.11 tricks altogether by now

Shawn Petersen avatar
Shawn Petersen

thanks guys, i used v0.12

2020-04-05

PePe avatar

is someone here working on this module ? https://github.com/bitflight-public/terraform-aws-app-mesh

bitflight-public/terraform-aws-app-mesh

Terraform module for creating the app mesh resources - bitflight-public/terraform-aws-app-mesh

Zachary Loeber avatar
Zachary Loeber

Deploying Kube Apps via the terraform provider, a quick blog I whipped up. More on the beginner side of things, but with some interesting tools and a pretty comprehensive example Terraform module for a full deployment of an AKS cluster with app deployment: https://zacharyloeber.com/blog/2020/04/02/kubernetes-app-deployments-with-terraform/

Kubernetes App Deployments with Terraform attachment image

Kubernetes App Deployments with Terraform - Zachary Loeber’s Personal Site

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

My first introduction to finalizers was with installing Rancher

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

OMG what a PIA to uninstall an app with dozens of finalizers

Zachary Loeber avatar
Zachary Loeber

I am so glad to get validation from someone like yourself on those stupid things

2020-04-04

Zach avatar

Anyone else using pycharm and the HCL plugin for terraform work? Is there a way to solve the “Can’t locate module locally” error when using remote sourced modules from github? The only workaround I’ve found is to declare a ‘wrapper’ module around the remote module that passes vars to it … which is pretty silly. So if I try to directly use a CloudPosse module or even one from the official terraform registry, it can’t do any code completion or validations.

Failing that, what are people using these days for ‘full featured’ terraform authoring?

Alex Siegman avatar
Alex Siegman

I know a lot of folks use IDEA, which is just a more generic PyCharm. I use VS Code, but as I move to TF 0.12 it doesn't handle the mix of 0.11 and 0.12 very well, and the experimental language server is not the best, though it's been improving. I think the IntelliJ plugins are more featureful and better at supporting 0.12. I could just have my stuff set up poorly; I haven't spent much time on it.

Zach avatar

Yes I used VS Code originally but TF12 broke it, and they never really got a good plugin after that

Alex Siegman avatar
Alex Siegman

Yeah. Besides lack of time and just having everything already built in TF 0.11 (path of least resistance), that's probably the biggest thing holding me back: it's not well supported in my dev environment.

Zach avatar

hmm, so IDEA is just the Java IDE from JetBrains; they make PyCharm too. I'd have to assume it's the same plugin, in fact.

2020-04-03

sumit parmar avatar
sumit parmar

hey guys, how do I update map values while using them?

tags = { Department = "cabs", OS = "ms", Application = "cabs", Purpose = "app" }. When I use tags = var.tags, I just need to update a few values, such as OS from "ms" to "linux" and Purpose from "app" to "db".

loren avatar
loren
merge - Functions - Configuration Language - Terraform by HashiCorp

The merge function takes an arbitrary number of maps and returns a single map after merging the keys from each argument.

loren avatar
loren
tags = merge(
  var.tags,
  {
    Purpose = "baz"
  }
)
ikar avatar

Dear all, is there a way to define locals for the file scope only, not for the whole module?

loren avatar
loren

negatory. terraform loads from all .tf files in the directory. only option is to create separate modules

ikar avatar

okay, thanks!

chrism avatar
chrism

https://github.com/cloudposse/terraform-aws-eks-cluster/commit/162d71e2bd503d328e20e023e09564a58ecee139 removed kubeconfig_path which I was using to ensure the kubecfg was available to apply the haproxy ingress after setup. Looking at the changes to the outputs etc I can’t see a way to still get my grubby mitts on the cfg file.

Use `kubernetes` provider to apply Auth ConfigMap (#56) · cloudposse/terraform-aws-eks-cluster@162d71e
  • Use kubernetes provider to apply Auth ConfigMap * Use kubernetes provider to apply Auth ConfigMap * Use kubernetes provider to apply Auth ConfigMap * Use kubernetes provider to a…
aknysh avatar
aknysh

kubeconfig path was used only for the module’s internal use, to get kubeconfig from the cluster and apply the auth config map

aknysh avatar
aknysh

you can use the same command aws eks update-kubeconfig as the prev version of the module was using

aknysh avatar
aknysh

to get kubeconfig from the cluster

aknysh avatar
aknysh

but outside of the module, since it does not need it now; and using CLI commands in the module is not a good idea anyway: it's not portable across platforms and does not work on TF Cloud (you have to install kubectl and the AWS CLI)

chrism avatar
chrism

Ta, it's all in geodesic so it's been fine. I just tripped up leaving it on "ref=master".

The 0.11 version had a kubectl file which applied the configmap auth; we were hooking on to that thread to bootstrap Rancher.

aknysh avatar
aknysh

yes, if you want to continue using the prev functionality, don’t pin to master

aknysh avatar
aknysh

in fact, never pin to master, always pin to a release

aknysh avatar
aknysh

(fewer surprises since any module will evolve and introduce changes)
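
For example, a sketch of pinning to a release tag instead of master (the module and version shown are illustrative):

module "eks_cluster" {
  source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.20.0"
  # ... module inputs ...
}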

chrism avatar
chrism

yep yep; totally down to me rushing to test migrating to the 0.12 version. I'll file it alongside my decision to upgrade EKS to 1.15 on a Friday and it murdering the cluster in the eyes of Rancher (but running fine in EKS).

Tony avatar

hey guys, I created an AMI from an EC2 instance I had configured and am now trying to deploy a new EC2 via Terraform using that AMI while also joining a managed AD domain in AWS. For some reason, when I use my AMI instead of, say, the default Win2019 Amazon AMI to build this EC2, it fails to join my domain upon creation. Any thoughts? Do I need to prepare the machine in any way prior to creating an AMI out of it so that Terraform can do the domain joining?

Mike Martin avatar
Mike Martin

Need some help - we’re about to begin rolling out our modules to production and need to decide whether to break out the modules like CloudPosse has it (a module per github repo) OR just make a monolith repo that contains all of our modules (making it easier to develop against since all dependencies are in one repo). Likely using TF Cloud. I’m in the boat of break them out - my teammates are against me. Need help! lol Thoughts?

sheldonh avatar
sheldonh

Break them out!

Mike Martin avatar
Mike Martin

Agreed - now how do I defend it…

sheldonh avatar
sheldonh

Tell them it’s how terraform module versioning and integration works.

sheldonh avatar
sheldonh

You increase reliability by separating them, as you tag and release versions; that is what powers Terraform module sourcing. You can't git-tag individual folders.

sheldonh avatar
sheldonh

I’ll probably write up a blog post on this soon. I ended up building an Azure DevOps pipeline that I just copy and pasted into different terraform module projects and it automatically versions them using Gitversion.

sheldonh avatar
sheldonh

I showed this logic and my team is starting to do the same thing now, not a monolith repo. The very nature of Terraform modules, and being able to version breaking changes and so on, pretty much requires a single repo per Terraform module.

I'm sure you can find some workaround, but it's definitely an anti-pattern in my opinion to try to do a monolith repo for Terraform modules. And let me clarify, I mean this for Terraform Cloud. I'm doing all of my pipelines in Terraform Cloud, and I don't even know how it could work reasonably with their Terraform module registry if you are trying to use a monolith repo. It's not really even a question of an uphill battle, in my opinion; it's a question of whether they're trying to force an anti-pattern based on how they prefer to do it, rather than how Terraform Cloud is structured to do it.

sheldonh avatar
sheldonh

Lastly I'll mention that you should create your Git repos from a simple YAML file and a Terraform plan. Then you won't have any problems setting up and managing your repos. I have 90. I think the closest team has like 10, because they're working on more traditional applications. Small, single-purpose repos just work better, in my opinion, with Terraform as well as CI/CD tools.

sheldonh avatar
sheldonh

Hope that helps

sheldonh avatar
sheldonh

Terraform docs back this up
you can publish a new module by specifying a properly formatted VCS repository (one module per repo, with an expected name and tag format; see below for details). The registry automatically detects the rest of the information it needs, including the module’s name and its available versions.

sheldonh avatar
sheldonh

Cheers

Mike Martin avatar
Mike Martin

Since we are using GitHub, they are arguing you could just pin it to a commit hash…

sheldonh avatar
sheldonh

Of course, i.e. tags

Mike Martin avatar
Mike Martin

I really appreciate the input! I’ll give you a summary in a bit of what happens…

sheldonh avatar
sheldonh

Ask them how they plan on ensuring:

  • that one version of a module in one folder can be versioned independently of another consumer that has pinned changes from a different point in time
  • that consumers don't download the entire repo for each module use, which it would do by default
sheldonh avatar
sheldonh

To me it seems they need to get comfortable with the Terraform recommendation and not think "more repos, more problems". It makes it easier.

sheldonh avatar
sheldonh

And solve the management of them by spending an hour or so writing a YAML file that creates all your repos and even adds GitHub hooks for Slack and more. It's actually not much extra effort at all, and you're actually improving your source control system management too.
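
A rough sketch of that idea, assuming the GitHub provider; the file layout and github_repository arguments are illustrative, not sheldonh's actual pipeline:

locals {
  # repos.yaml maps repo names to settings, e.g.:
  #   terraform-aws-vpc:
  #     description: "VPC module"
  #   terraform-aws-eks:
  #     description: "EKS cluster module"
  repos = yamldecode(file("${path.module}/repos.yaml"))
}

resource "github_repository" "module" {
  for_each    = local.repos
  name        = each.key
  description = each.value.description
  private     = true
}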

sheldonh avatar
sheldonh
How Do I Decompose Monolithic Terraform Configurations? attachment image

Throwing everything into one unwieldy configuration can be troublesome. The solution: modules.

sheldonh avatar
sheldonh

And if you choose to not use the Terraform registry, then you have to start managing each job's GitHub authorization instead of having that handled by the registry's built-in OAuth connection. To me it's really a lot more work to try to not use their recommendation IF you are using Terraform Cloud.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for some background, @Mike Martin are you practicing monorepos for other things you’re doing?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(i find the monorepo / polyrepo argument to be more of an engineering cultural holy war with everyone on both sides wounded and fighting on principle)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also, one repo per module is not necessarily required either. we do that as large distributors of open source. there’s also a hybrid mode.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This talks about doing something like creating a terraform-networking repo and then sticking modules for that in there. Since this is for your organization it can be highly opinionated and grouped accordingly.

sheldonh avatar
sheldonh

100% agree. The folks I'm working with don't do everything through code, so using one Git repo for everything miscellaneous has typically been the approach. They get nervous about new repos because they're not used to it. Making it easier to manage and create them consistently is, I think, a big big win that helps move that forward.

Patrick M. Slattery avatar
Patrick M. Slattery

Hi, I'm trying to use a feature flag/toggle in Terraform with for_each. Previously I have used count for the toggle, but that does not work with for_each. Does anyone know how I can do this?

resource "google_project_service" "compute_gcp_services" {
  for_each = {
    service_compute = "compute.googleapis.com" # Compute Engine API
    service_oslogin = "oslogin.googleapis.com" # Cloud OS Login API
  }
  project = google_project.project.project_id
  # count  = "${var.compute_gcp_services_enable == "true" ? 1 : 0}"
  service                    = each.value
  disable_dependent_services = true
  disable_on_destroy         = false
  depends_on = [
    google_project_service.minimal_gcp_services
  ]
}
Gowiem avatar
Gowiem

@Patrick M. Slattery You need to create a condition and provide an empty value and a non-empty value. Here’s an example:

    dynamic "action" {
      for_each = var.require_deploy_approval ? [1] : []

      content {
        name      = "ProductionDeployApproval"
        category  = "Approval"
        owner     = "AWS"
        provider  = "Manual"
        version   = "1"
        run_order = 1
      }
    }
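
Applied to the resource-level for_each from the original snippet, the same idea might look like this (a sketch reusing the variable from the commented-out count, not a confirmed fix):

resource "google_project_service" "compute_gcp_services" {
  # An empty map means zero instances of the resource, i.e. the toggle is off.
  for_each = var.compute_gcp_services_enable == "true" ? {
    service_compute = "compute.googleapis.com" # Compute Engine API
    service_oslogin = "oslogin.googleapis.com" # Cloud OS Login API
  } : {}

  project                    = google_project.project.project_id
  service                    = each.value
  disable_dependent_services = true
  disable_on_destroy         = false

  depends_on = [
    google_project_service.minimal_gcp_services
  ]
}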

2020-04-02

sweetops avatar
sweetops

Does anyone have an example of using log_configuration with the terraform-aws-ecs-container-definition module? I’m trying to update from 0.12.0 to latest (0.23.0) and it looks like the logging configuration has changed. But I can’t find an example of how to implement it now.

androogle avatar
androogle
    "logConfiguration" = {
      "logDriver" = "awslogs",
      "options" = {
        "awslogs-region"        = var.region,
        "awslogs-group"         = "/${terraform.workspace}/service/${var.service_name}",
        "awslogs-stream-prefix" = var.service_name
      }
    }
androogle avatar
androogle

oh sorry, I didn’t see the terraform-aws-ecs-container-definition module part

androogle avatar
androogle

nm that

Gowiem avatar
Gowiem

@sweetops I’m using v0.23.0 and the following works for me:

locals {

  log_config = {
    logDriver = "awslogs"
    options = {
      awslogs-create-group  = true
      awslogs-group         = "${module.label.id}-logs",
      awslogs-region        = var.region,
      awslogs-stream-prefix = module.label.id
    }
    secretOptions = null
  }

  # ... 
}


# ...

log_configuration            = local.log_config

You having trouble beyond that?

Brij S avatar
Brij S

Do I need to escape any characters in the following terraform?

  coredns_patch_cmd = "kubectl --kubeconfig=<(echo '${data.template_file.kubeconfig.rendered}') patch deployment coredns --namespace kube-system --type=json -p='[{"op": "remove", "path": "/spec/template/metadata/annotations", "value": "eks.amazonaws.com/compute-type"}]'"

I get the following error and I’m not sure why its asking for a new line

Error: Missing newline after argument

  on variables.tf line 101:
  (source code not available)

An argument definition must end with a newline.
Brij S avatar
Brij S

nevermind, got it

setheryops avatar
setheryops

Has anyone here ever had a lambda in one account that needed to be triggered by SNS in another account and used the aws_sns_topic_subscription resource?

setheryops avatar
setheryops

I keep getting an error on plan that the SNS account is not the owner of the lambda in the lambda account

curious deviant avatar
curious deviant

Maybe https://jimmythompson.co.uk/blog/sns-and-lambda/ is helpful describing the permissions policy that will need to be setup in a cross account scenario ?

Linking together Lambda and SNS across AWS accounts –Jimmy Thompson - Software Engineer

A guide on how to link together Lambda functions and SNS topics that belong in different AWS accounts.
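
A rough sketch of the cross-account wiring that article covers (the aws.lambda_account provider alias, variable, and resource names are assumptions; the SNS topic policy on the other account is omitted):

# In the Lambda (receiving) account: let the topic in the SNS account invoke the function.
resource "aws_lambda_permission" "from_sns" {
  provider      = aws.lambda_account
  statement_id  = "AllowExecutionFromCrossAccountSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.handler.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = var.sns_topic_arn
}

# Create the subscription with credentials from the Lambda account, not the SNS account.
resource "aws_sns_topic_subscription" "lambda" {
  provider  = aws.lambda_account
  topic_arn = var.sns_topic_arn
  protocol  = "lambda"
  endpoint  = aws_lambda_function.handler.arn
}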

setheryops avatar
setheryops

Thanks…ill check it out

Gabe avatar

Question regarding launch templates, block devices and instance types: Do you always use the same device_name for the root volume? Or change it based on instance type? For example, do you always use /dev/sda1 or /dev/xvda?

loren avatar
loren

changes based on instance type, and on OS

Zachary Loeber avatar
Zachary Loeber

What was the terraform provisioner used to collect script output into state again?

Zachary Loeber avatar
Zachary Loeber

thanks!

loren avatar
loren

github has fallen over, in case that’s not responding for you… https://www.githubstatus.com/

GitHub Status

Welcome to GitHub’s home for real-time and historical data on system performance.

Zachary Loeber avatar
Zachary Loeber

I just noticed the same, thanks for confirming I’m not the only one…

loren avatar
loren

that link is for an actual custom provider… there is also a module that abuses null resources terribly to get the outputs into state… but can’t find it right now cuz it’s on github

Zachary Loeber avatar
Zachary Loeber

Yeah, I know they are all dirty hacks and all. I’m not super proud to have to be using it

loren avatar
loren

or you can use the external provider, which is easy too, and pretty clean. only extra downside to me is that the stdout is not output which can make it hard to troubleshoot. i compensate by using the python logger to write to a file
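
A minimal sketch of the external data source pattern loren describes; the script path and its JSON key are illustrative:

data "external" "fernet_key" {
  # The program must print a JSON object of string values to stdout,
  # e.g. {"fernet_key": "..."}.
  program = ["python3", "${path.module}/scripts/generate_fernet_key.py"]
}

# The value then lands in state like any other data source attribute:
# data.external.fernet_key.result.fernet_key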

Zachary Loeber avatar
Zachary Loeber

I just need one dumb airflow fernet key

loren avatar
loren

Zachary Loeber avatar
Zachary Loeber

it's such a small Python script you can one-line the thing

Zachary Loeber avatar
Zachary Loeber

I've been avoiding doing so for a while now, but I'm seeing no good way to generate an Airflow Fernet key via TF, and it's such a short Python script and all…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

btw, for all you terraformers out there, I think you'll dig what @mumoshu has been cooking up with #variant v2. He has created a declarative way to describe any "cli" tool using HCL that can be compiled down to a single binary. It means anyone on your team who can write HCL can help write the cli tool for your company (e.g. ./acme eks up). Maybe you use #terragrunt today. Maybe you wish it did things that it doesn't do today, but you're not a Go programmer. With #variant, that's less the case because you can define arbitrary workflows like you would in a Makefile (only with a Makefile it doesn't compile down to a binary for easy distribution), and it's all in HCL so it's easy for everyone to grok.

https://github.com/mumoshu/variant2

mumoshu/variant2

Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2

sheldonh avatar
sheldonh

What! That's so cool. Going to check it out for sure. And I can figure out a way to shoehorn some PowerShell in there.

mumoshu avatar
mumoshu
01:06:54 AM

@mumoshu has joined the channel

2020-04-01

discourse avatar
discourse
09:53:58 AM
Versioning and Deploying Secrets [Terraform]

FWIW on my small team (< 5 engineers) we use mozilla’s sops with great success.

For ages we relied purely on the PGP feature, but recently switched to KMS and it works great. We also use it in ansible using a custom plugin.

I believe our technique meets all your requirements. You can only store json or yaml data, but we get around that by wrapping blobs (pems, etc) in json/yaml and then shovelin…

Zachary Loeber avatar
Zachary Loeber

Curious how others are feeding output from terraform deployments into their pipeline as code

maarten avatar
maarten

You mean chaining it? What I did was store certain results in SSM; everything else can read from that.
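
A minimal sketch of that approach (parameter name and referenced resource are illustrative):

# In the producing stack: publish the value to SSM Parameter Store.
resource "aws_ssm_parameter" "alb_dns_name" {
  name  = "/myapp/dev/alb_dns_name"
  type  = "String"
  value = aws_lb.app.dns_name
}

# In a downstream stack (or read by a service at runtime via the SSM API):
data "aws_ssm_parameter" "alb_dns_name" {
  name = "/myapp/dev/alb_dns_name"
}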

Zachary Loeber avatar
Zachary Loeber

I know of SSM but haven't used it; my assumption is that Consul would be the non-cloud-specific alternative to it, right?

maarten avatar
maarten

correct

Zachary Loeber avatar
Zachary Loeber

Thanks @maarten, I appreciate the quick feedback

Aaron R avatar
Aaron R

Hi all. Hope you are all well during the covid outbreak. I've been attempting to use this great Terraform module: https://github.com/cloudposse/terraform-aws-jenkins

However, my use case is slightly different to how this module has been set up. At the moment Jenkins is running on Elastic Beanstalk with public-facing load balancers. I want these load balancers to be private-facing, only accessible by VPNing into the specific VPC it runs in.

cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Aaron R avatar
Aaron R

i.e. place the load balancers on a private subnet per region and each instance on a private subnet per region

Aaron R avatar
Aaron R

wondering if anyone had done this before with this module and how they got it to work?

Aaron R avatar
Aaron R

I’m basing it off this

aknysh avatar
aknysh

the complete example gets automatically provisioned on AWS when we run tests

Aaron R avatar
Aaron R

so I’m basing it off this

Aaron R avatar
Aaron R

however I presume I need to create additional private subnets to place the elastic load balancers

Aaron R avatar
Aaron R

as I also want the elastic load balancers to be private

aknysh avatar
aknysh

loadbalancer_subnets = module.subnets.private_subnet_ids

aknysh avatar
aknysh

then place it in private

Aaron R avatar
Aaron R

can the load balancer and instance be placed in the same subnet though?

aknysh avatar
aknysh

you can place it in any subnet you want

aknysh avatar
aknysh

depends on your use-case

aknysh avatar
aknysh

if you need to place LBs in separate subnets (for any reasons), you can create more private subnets, using any subnet strategy you want

Aaron R avatar
Aaron R

So I've used this complete main example to stand up a working Jenkins instance on AWS (public-facing load balancers). I then modified it to place the load balancers in the same private subnet as the instances. However, in the Elastic Beanstalk console, attempting to set the application as internal fails (even though everything is private). I connected a Client VPN to this VPC and attempted to connect via the load balancer, but couldn't manage to do it. This led me here: https://forums.aws.amazon.com/thread.jspa?messageID=415184&#415184

Aaron R avatar
Aaron R

thanks for your help by the way

Aaron R avatar
Aaron R

I think it’s something to do with trying to set the elastic bean stalk application from public to internal

Aaron R avatar
Aaron R

as it’s set to public by default

Aaron R avatar
Aaron R

when then attempting to set it as an internal app (once terraformed up) it fails to do so

Aaron R avatar
Aaron R

therefore I cannot hit the load balancer when in a VPN attached to the VPC but I can directly hit the instance

Mikael Fridh avatar
Mikael Fridh

Anyone else with this pet peeve? When using a data source, for example, to get the current IP addresses of the elastic network interfaces attached to an NLB… any resource which then makes use of this data will always show a diff (known after apply) even though the data is actually the same every time… Any way around this, except just not using it as data but converting it to a static variable after the first creation is done?…

androogle avatar
androogle

out of curiosity, are you looking up an EIP you generated and assigned to the NLB, or is it the managed IPs that AWS assigns to the NLB?

androogle avatar
androogle

If the latter, I wonder if the provider can distinguish the difference for that resource and assumes that the IP can change (similar to how it does with ALBs) and just doesn't keep the IP static

androogle avatar
androogle

since the resource itself is known to have changing IPs out of band of the provisioning process

Mikael Fridh avatar
Mikael Fridh

I let AWS assign those IPs in this case.

The problem is the resource (aws_lb) doesn’t supply the data at all - so I have to go via the data source.
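
For context, a hedged sketch of the ENI lookup being described; the description filter relies on AWS naming NLB-managed ENIs "ELB <arn_suffix>", which is an assumption about current behavior:

# Find the network interfaces AWS created for the NLB.
data "aws_network_interfaces" "nlb" {
  filter {
    name   = "description"
    values = ["ELB ${aws_lb.nlb_ingress.arn_suffix}"]
  }
}

# Read each interface to get its private IPs (referenced in the output later in the thread).
data "aws_network_interface" "nlb_ifs" {
  count = length(data.aws_network_interfaces.nlb.ids)
  id    = sort(data.aws_network_interfaces.nlb.ids)[count.index]
}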

Mikael Fridh avatar
Mikael Fridh

But you are on to something.

Mikael Fridh avatar
Mikael Fridh

or hmm maybe not.

androogle avatar
androogle

yeah, if you're letting AWS assign them, there's no guarantee they'll remain that IP, I don't think

Mikael Fridh avatar
Mikael Fridh

The problem is the internal IPs…

Mikael Fridh avatar
Mikael Fridh

I’m not sure I can explicitly decide them?

Mikael Fridh avatar
Mikael Fridh

maybe I can create two elastic interfaces. let me check

Mikael Fridh avatar
Mikael Fridh

no, they can only be auto-assigned it seems.

Amazon creates those elastic network interfaces, and “owns” them, but they are created “in” my account

androogle avatar
androogle

yeah I think you’re right on NLB, no ability in API to specify anything beyond the subnet

Mikael Fridh avatar
Mikael Fridh

I can live with this… but anyone else is going to freak out when things show up as changes.

androogle avatar
androogle

I wonder if you can use lifecycle ignore_changes on that scope

Mikael Fridh avatar
Mikael Fridh

I wonder if maybe it’s mostly depends_on which makes this an issue…

Currently depends_on for data resources forces the read to always be deferred until apply time, meaning the results are always unknown during planning. depends_on for data resources is therefore useful only in some very specialized situations where that doesn't cause a problem, as discussed in the documentation.
Mikael Fridh avatar
Mikael Fridh

Yeah, it’s that.

And I use depends_on here for a good reason…

Because a data source cannot be "optional", I have to make sure the aws_lb resource gets created first, so the data source is read second. Otherwise, the first terraform run won't work fully. Dangit.

Mikael Fridh avatar
Mikael Fridh
Data Resource Lifecycle Adjustments · Issue #17034 · hashicorp/terraform

Background Info Back in #6598 we introduced the idea of data sources, allowing us to model reading data from external sources as a first-class concept. This has generally been a successful addition…

androogle avatar
androogle

ah boo

androogle avatar
androogle

well, I normally wouldn't suggest this, but in this kind of scenario you could replace your ENI/EIP data source with an external data source. Write a boto script or something to look up the IP and handle it gracefully.

androogle avatar
androogle

because it seems more of an issue with the provider and resource than with the API itself

Mikael Fridh avatar
Mikael Fridh

Amazon needs to add these IPs in their API responses in order for Terraform to be able to fix it properly I guess.

Mikael Fridh avatar
Mikael Fridh

Because going via the network interface list is a bit of a kludge.

Mikael Fridh avatar
Mikael Fridh

I’ll see if I can do some coalesce trick with an external program like you said.

Mikael Fridh avatar
Mikael Fridh

I have the same pain with ecs task definitions… because they can be updated from terraform - or from CodeDeploy …

hope this data source issue on plan can be fixed soon .

androogle avatar
androogle

yeah, for that it looks like airship.tf has a nice approach with drift management via lambdas

Mikael Fridh avatar
Mikael Fridh

And my current way of running terraform doesn't even support executing external programs right off the bat… since I use AWS profiles.

But I tested it, and it works.

Mikael Fridh avatar
Mikael Fridh
output nlb_ingress_lb {
  value = merge(
    aws_lb.nlb_ingress,
    {
      proxy_ips = coalescelist(
        flatten([data.aws_network_interface.nlb_ifs.*.private_ips]),
        jsondecode(data.external.get_nlb_ips.result.private_ips),
      )
    }
  )
}
Mikael Fridh avatar
Mikael Fridh

airlift yeah this one had some nice things. “cron” jobs etc…

Mikael Fridh avatar
Mikael Fridh

# lookup_type sets the type of lookup, either
# * lambda - works during bootstrap and after bootstrap
# * datasource - uses terraform datasources ( aws_ecs_service ) which won't work during bootstrap
variable "lookup_type" {
  default = "lambda"
}

clever indeed.

randomy avatar
randomy

CFN stacks with custom resources can help sometimes. I used one here to do ECS deployments that don't interfere with Terraform. It's just using ECS rolling updates, not CodeDeploy, though we might extend it to support CodeDeploy. https://github.com/claranet/terraform-aws-fargate-service (it's in the early stages of being taken from a production system and turned into a generic module)

claranet/terraform-aws-fargate-service

Terraform module for creating a Fargate service that can be updated with a Lambda function call - claranet/terraform-aws-fargate-service

randomy avatar
randomy

Also you can use 2 separate stacks and output data source values from one, use it in the other, if you want to “save” values. Not always ideal of course.

Mikael Fridh avatar
Mikael Fridh

Yep, considered the two stacks..

Mikael Fridh avatar
Mikael Fridh

I adapted my module to use the tricks from the airlift.tf modules now and have solved the task definition issues. Now I'm building a similar lambda to solve the NLB lookup.

randomy avatar
randomy

does airlift require setting the lookup variable to one value during bootstrap, and then another afterwards?

randomy avatar
randomy

i mean airship (guess you did too?)

randomy avatar
randomy
blinkist/terraform-aws-airship-ecs-service

Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

Joe Presley avatar
Joe Presley

Has anyone seen Terraform be useful for a situation where you want about 500 non-technical users to create their own prepackaged resources in the cloud? For example, everyone gets the same account setup with a predefined VM instance? My instinct is that Terraform is not the best tool for this, but I've seen people start with the idea that Terraform could run the backend.

maarten avatar
maarten

If it's non-technical users, does it make sense for them to do the deployment themselves? From an administrator's POV, Terraform would be perfect for creating 500 predefined instances, for example.

Joe Presley avatar
Joe Presley

We wouldn’t want them to do the deployment, but they would sign up for the package that would trigger a deployment. So think of a case where a member of the public signs up for a service that deploys a server for their use. The end user would be a data scientist. So it may be 200. It may be 500. It may be a 1000.

Joe Presley avatar
Joe Presley

I’m asking for a general pattern, but the couple of use cases I’ve seen is with an organization that wants to allow teams of data scientists from the public to use a common dataset with jupyterlab instances stood up, etc.

maarten avatar
maarten

Ah ok, in that case Terraform could for sure work out. In the AWS case, I think CloudFormation would maybe be more straightforward, as no extra tooling would be necessary.

Joe Presley avatar
Joe Presley

What's the pattern you would use for Terraform? Would it be something like a webhook trigger on Jenkins to run a Terraform module that accepts different inputs? Is that a better option than, say, using a language SDK to create a GUI?

Soren Martius avatar
Soren Martius

Scalr is moving in that direction

Joe Presley avatar
Joe Presley

I was on a demo call with Scalr a while back for a different use case. I understand they use Terraform on the backend, but I don’t see how Terraform can be used in a normal GitOps way for them. Do you know what pattern they use?

maarten avatar
maarten

@Joe Presley I'd leave Terraform out of it, as it's not an API, and with many automated deployments error handling in Terraform will be an issue. Maybe you can look into using the cloud provider's IaC and its native deployment mechanisms, like CloudFormation.

Joe Presley avatar
Joe Presley

That makes sense. Thanks for the feedback.
