#terraform (2018-11)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2018-11-01

onzyone avatar
onzyone

I have a few more questions … how do I ref the output of a module in another module …

onzyone avatar
onzyone

i.e. create a kms key and then pass the arn to my postgres module … then the postgres module passes the endpoint to my route53 module
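
For reference, wiring one module's output into another looks roughly like this (a minimal sketch; the module sources and output names here are illustrative, not from an actual repo):

```hcl
# modules/kms/main.tf — a module must declare an output before callers
# can reference it
resource "aws_kms_key" "default" {
  description = "app key"
}

output "key_arn" {
  value = "${aws_kms_key.default.arn}"
}

# root module — pass outputs along the chain
module "kms" {
  source = "./modules/kms"
}

module "postgres" {
  source      = "./modules/postgres"
  kms_key_arn = "${module.kms.key_arn}"      # kms arn into postgres
}

module "dns" {
  source   = "./modules/route53"
  endpoint = "${module.postgres.endpoint}"   # postgres endpoint into route53
}
```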

maarten avatar
maarten
HashiCorp Terraform: Modules as Building Blocks for Infrastructure

Operators adopt tools like HashiCorp Terraform to provide a simple workflow for managing infrastructure. Users write configurations and run a few commands to test and apply changes…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

onzyone avatar
onzyone

nice! got it!

onzyone avatar
onzyone

Thank you @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@johncblandii your PR looks good, merged to master, thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

1
johncblandii avatar
johncblandii

thx

johncblandii avatar
johncblandii

releasing a version or going to stack a few?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment

onzyone avatar
onzyone

do you guys have an option on terragrunt?

joshmyers avatar
joshmyers

AFAIK cloudposse/geodesic doesn’t support terragrunt ootb, however you could easily open a PR to change the terraform path to that of terragrunt. The question is why would you want to? Having used terragrunt before, and being happy with using it in moderation, I’m unsure what it would give you that geodesic doesn’t. Add to that the fact that terraform 0.12 is landing soon (notwithstanding the beta which is already out), which will no doubt have some backwards-compatibility issues with terragrunt. Terragrunt was initially created back when TF was lacking in some of its orchestration. TF has moved on since then, with a lot of this stuff now built in. Yes, it still has some nice features that aren’t in mainstream TF, but TF is moving faster than that value-add IMO

joshmyers avatar
joshmyers

Unpicking your terragrunt setup when the time comes will likely be a headache

onzyone avatar
onzyone

we have just started our TF journey … so I am playing around with a number of different approaches

joshmyers avatar
joshmyers

The gruntwork TF testing tooling doesn’t support Terragrunt (yet at least), and they wrote it.

1
onzyone avatar
onzyone

so what is the best way to do orchestration of multiple modules in today’s landscape?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@joshmyers nice points, thanks

joshmyers avatar
joshmyers

If starting a new project I’d certainly look at using geodesic over terragrunt

joshmyers avatar
joshmyers

Not taking anything away from the gruntwork folks, it has helped me loads in the past keeping code dry

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terragrunt is a nice tool, and we use it sometimes

joshmyers avatar
joshmyers

But geodesic does similar, in a cleaner way IMO

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are a few patterns of doing it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Although there are many possible ways of doing that, we use the containers + ENV vars pattern. As you mentioned, template rendering is another pattern (as implemented in terragrunt).

We store the ENV vars in either AWS SSM (secrets) or in Dockerfiles (not secrets).

Here are more details:

  1. We have a collection of reusable TF modules https://github.com/cloudposse/terraform-root-modules. The modules have no identity, everything is configurable via ENV vars. (In other words, they don’t care where they will be deployed and how).

  2. We deploy each stage (root, prod, staging, dev, testing) in a separate AWS account for security and better management

  3. For each AWS account/stage (root, prod, staging, dev, testing), we have a GitHub repo which is a container (for which we use geodesic https://github.com/cloudposse/geodesic):

https://github.com/cloudposse/root.cloudposse.co https://github.com/cloudposse/prod.cloudposse.co https://github.com/cloudposse/staging.cloudposse.co https://github.com/cloudposse/dev.cloudposse.co https://github.com/cloudposse/testing.cloudposse.co

Non-secret ENV vars are defined in the Dockerfiles, e.g. https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L17 In other words, the account containers have identity defined via the ENV vars.

  4. https://github.com/cloudposse/terraform-root-modules is added to the containers https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L36

  5. Inside the containers, users assume IAM roles to access the corresponding AWS account and then provision TF modules.

  6. Inside the containers we use chamber (https://github.com/segmentio/chamber) to read secrets from SSM (per AWS account)

So when we run a container (e.g. prod), we already have all ENV vars setup, and we read all the secrets from the account SSM store. An account/stage can be in any region (also specified via ENV var, e.g. https://github.com/cloudposse/prod.cloudposse.co/blob/master/Dockerfile#L14)

Take a look at our docs for more details: https://docs.cloudposse.com/reference-architectures/ https://docs.cloudposse.com/reference-architectures/cold-start/ https://docs.cloudposse.com/reference-architectures/notes-on-multiple-aws-accounts/ https://docs.cloudposse.com/geodesic/
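
As a sketch of how the ENV vars reach Terraform (variable names here are illustrative): Terraform automatically maps TF_VAR_* environment variables to input variables, so an identity-free root module just declares variables and the container supplies the values.

```hcl
# The root module declares inputs but hard-codes no identity
variable "aws_region" {}
variable "stage" {}

provider "aws" {
  region = "${var.aws_region}"
}

# The account Dockerfile exports non-secrets, e.g.:
#   ENV TF_VAR_aws_region="us-west-2"
#   ENV TF_VAR_stage="prod"
# and secrets come from SSM at runtime via chamber, e.g.:
#   chamber exec <service> -- terraform plan
```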

joshmyers avatar
joshmyers

And only using terraform itself is a huge plus. Trust me, you don’t want to have to refactor the state machinery for your infra.

joshmyers avatar
joshmyers

What @Andriy Knysh (Cloud Posse) said

onzyone avatar
onzyone

@joshmyers @Andriy Knysh (Cloud Posse) these are really good points

onzyone avatar
onzyone

I like #6 we will need to look into this as well

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, as we mentioned, terragrunt is very nice, but it’s another wrapper on top of TF, and wrappers, although they solve many problems, always have their own issues

1
onzyone avatar
onzyone

ya one can say the same thing about TF and CFN …

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can

joshmyers avatar
joshmyers

I’m not sure how I feel about modules of modules, as it is rabbit holes all the way down, and finding where you are and passing in more vars than you want is a pain

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but HashiCorp is big, well funded, and is in business for a long time

joshmyers avatar
joshmyers

But that is more of a terraform shortfall than the tooling around it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

one day they will be bought

onzyone avatar
onzyone

I am not going against HashiCorp or TF, from what I have seen I really like it.

onzyone avatar
onzyone

and it is the tool that we are going to be using moving forward …

onzyone avatar
onzyone

I am just trying to understand the best way to setup my modules

joshmyers avatar
joshmyers

See how far something like geodesic gets you and go from there. I’ve been using TF from the early days and have orchestrated it many ways; as of now, geodesic and what the cloudposse folk are doing is the best I’ve seen, especially around orchestration and reusable modules

joshmyers avatar
joshmyers

You probably only need to write your higher level modules if you look at what you can pick off their shelf

onzyone avatar
onzyone

roles-profiles-modules … brings me back a few years to my puppet days

joshmyers avatar
joshmyers

Terragrunt gives you a slice of the pie, geodesic gives you the whole thing as far as orchestration, managing secrets, account separation, auditability

joshmyers avatar
joshmyers

And if you have to unpick it, it’s just using normal stuff, no terragrunt magic

joshmyers avatar
joshmyers

Kinda.

onzyone avatar
onzyone

ya i am reading up on geodesic now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, we use geodesic everywhere, for many clients, and we support it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s not about our implementation per se, it’s about tooling and workflows, and not only for TF deployments - it supports tens of other tools including k8s, helm, helmfile etc.

2
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and security (assume IAM roles to access AWS)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and account/stage/env separation (as @joshmyers mentioned)

joshmyers avatar
joshmyers

@Andriy Knysh (Cloud Posse) are y’all using it with kops? Looks like it. How does TF generation fit in, if at all?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yea using it with kops

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And EKS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Just to be clear, not using both kops and eks at the same time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

and there’s no relation between kops and eks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Give me 5 minutes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Will show

onzyone avatar
onzyone

oh it has eks support!!!

onzyone avatar
onzyone

it is getting better now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

joshmyers avatar
joshmyers

Yup, seen those. Look good. Was specifically wondering about kops generating your terraform resources, if that was a thing you had looked into.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

geodesic itself does not include EKS support out of the box; you can use the geodesic container to provision those modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

^ high level overview

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if anyone interested, we can provide more details

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

more details:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Provision the Kops cluster

We create a kops cluster from a manifest.

The manifest template is located in /templates/kops/default.yaml and is compiled by running build-kops-manifest in the Dockerfile.

Provisioning a kops cluster takes three steps:

  1. Provision the kops backend (config S3 bucket, cluster DNS zone, and SSH keypair to access the k8s masters and nodes) in Terraform
  2. Update the Dockerfile and rebuild/restart the geodesic shell to generate a kops manifest file
  3. Execute the kops manifest file to create the kops cluster

Change directory to kops folder

cd /conf/kops

Run Terraform to provision the kops backend (S3 bucket, DNS zone, and SSH keypair)

init-terraform
terraform plan
terraform apply

From the Terraform outputs, copy the zone_name and bucket_name into the ENV vars KOPS_CLUSTER_NAME and KOPS_STATE_STORE in the Dockerfile.

The Dockerfile kops config should look like this:

# kops config
ENV KOPS_CLUSTER_NAME="us-west-2.staging.example.net"
ENV KOPS_DNS_ZONE=${KOPS_CLUSTER_NAME}
ENV KOPS_STATE_STORE="s3://example-staging-kops-state"
ENV KOPS_STATE_STORE_REGION="us-west-2"
ENV KOPS_AVAILABILITY_ZONES="us-west-2a,us-west-2b,us-west-2c"
ENV KOPS_BASTION_PUBLIC_NAME="bastion"
ENV BASTION_MACHINE_TYPE="t2.medium"
ENV MASTER_MACHINE_TYPE="t2.medium"
ENV NODE_MACHINE_TYPE="t2.medium"
ENV NODE_MAX_SIZE="2"
ENV NODE_MIN_SIZE="2"

Change directory to the kops folder, init Terraform, and list the files

cd /conf/kops
init-terraform

You will see the kops manifest file manifest.yaml generated.

Run kops create -f manifest.yaml to create the cluster (this will just create the cluster state and store it in the S3 bucket, but not the AWS resources for the cluster).

Run kops create secret sshpublickey admin -i /secrets/tf/ssh/example-staging-kops-us-west-2.pub --name $KOPS_CLUSTER_NAME to add the SSH public key to the cluster.

Run kops update cluster --yes to provision the AWS resources for the cluster.

All done. The kops cluster is now up and running.

NOTE: If you want to change kops cluster settings (e.g. number of nodes, instance types, etc.):

  1. Modify the kops settings in the Dockerfile
  2. Rebuild Docker image (make docker/build)
  3. Run the geodesic shell (staging.example.net), assume role (assume-role) and change directory to the kops folder
  4. Run kops replace -f manifest.yaml to replace the cluster resources (update state)
  5. Run kops update cluster --yes to modify the AWS resources for the cluster

NOTE: To force a rolling update (replace the EC2 instances), run kops rolling-update cluster --yes --force

NOTE: To use kops and kubectl commands (e.g. kubectl get nodes, kubectl get pods), you need to export the kubecfg configuration settings from the cluster.

https://github.com/kubernetes/kops/blob/master/docs/kubectl.md

Run kops export kubecfg $KOPS_CLUSTER_NAME to export kubecfg settings.

You need to do this every time you work with the cluster after it has been created (e.g. before running kubectl or kops commands, or validating the cluster with kops validate cluster).

cloudposse/geodesic

Geodesic is the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. https://slack.cloudposse.com/ - cloudposse/geodesic

kubernetes/kops

Kubernetes Operations (kops) - Production Grade K8s Installation, Upgrades, and Management - kubernetes/kops

btai avatar

hi is anyone using terraform to deploy EKS/AKS clusters?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

morning @btai! yep, we are…. have you seen the cloudposse modules?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

~btw, let’s move to #terraform~

btai avatar

isnt this #terraform

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hahaha

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

See above

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

We posted links to the modules

btai avatar

so i dont have a problem creating the modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And working example which we provisioned

btai avatar

I was wondering how you are managing upgrades (kubernetes version, node sizes, etc) with zero downtime. the only solution i know of is redeploying a new cluster and doing a cutover

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You should not have any problem :)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Ask questions if you have any

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

That is not easy yet with EKS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

With kops, no problem

btai avatar

yeah, so im not only on AWS but also in azure as well

btai avatar

and kops doesn’t have an azure solution (yet)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, terraform is not well suited for managing the lifecycle

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we still use kops for managing k8s on aws

btai avatar

i see

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there’s a tool by weaveworks attempting to be the kops of eks, but it’s not there yet

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
eksctl - a CLI for Amazon EKS

a CLI for Amazon EKS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

it doesn’t handle drain/cordon rolling updates like kops

btai avatar

so kops upgrades the cluster with zero downtime

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yup

btai avatar

so i have a tentative solution that I can think of to automate the upgrade for me using just terraform

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
kops export kubecfg
kops replace -f manifest.yaml
kops update cluster
kops update cluster --yes
kops rolling-update cluster
kops rolling-update cluster --yes
btai avatar

mind if i run it by you guys?

btai avatar

just to get some feedback

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, for sure - would like to hear

btai avatar

cool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the approach I’m thinking of would be to use blue/green node pools

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

how does Azure AKS handle that?

btai avatar

this is just a sample solution

# current running version
terraform workspace new kube-1.9.0
terraform apply -var 'kube-version=1.9.0'

# upgraded version
terraform workspace new kube-1.10.0
terraform apply -var 'kube-version=1.10.0'

# cutover process + testing

# destroy old cluster
terraform workspace select kube-1.9.0
terraform destroy
# clean up old state file (kube-1.9.0)
btai avatar

the more robust solution would be to maintain a changelog of the cluster workspaces, so workspace kube-v1 could be the initial cluster, kube-v2 could upgrade the node size (i.e. c4.large -> c4.xlarge), and kube-v3 upgrades the kube version from 1.9.0 to 1.10.0

btai avatar

let me know if that makes sense

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(on the phone - will be back in ~20)

btai avatar

ok thanks!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrm…. that’s a pretty cool notion

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

need to let that sink in for a moment.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically, you’re suggesting using “workspaces” as colors for the worker nodes

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

on the surface that seems like a pretty elegant solution. there still should be a drain+cordon step, but in terms of provisioning the new worker pool this should work

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  1. provision terraform-aws-eks-cluster in default workspace
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  2. provision terraform-aws-eks-workers in blue workspace
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

… new release of kubernetes …

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  3. provision terraform-aws-eks-workers in green workspace
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
  4. cordon and drain nodes in blue workspace, then destroy the node pool. Something like this could maybe be done: https://github.com/dmathieu/dice
dmathieu/dice

Roll all instances within a kubernetes cluster, using a zero-downtime strategy. - dmathieu/dice

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

not that you’d use dice, but the strategy is interesting.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

basically you can label nodes, and dice will then terminate them. so something similar can be done to label, cordon, drain nodes

btai avatar

ill definitely take a look into this

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@btai

btai avatar

yep, i’ve used workspaces to deploy multi-environment and multi-region and it’s worked really well using only one set of terraform files (keeping things dry)

btai avatar

the workspace pattern i usually use is {environment}_{region}

btai avatar

and you can use map variables to switch on the configs

btai avatar

for my aks cluster, it will have to be a special pattern something like {environment}_{region}_{kube-version}
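
A sketch of switching config on the workspace name with a map variable (keys and values here are illustrative):

```hcl
variable "node_size_map" {
  type = "map"

  default = {
    "prod_us-west-2_1.9.0"  = "c4.large"
    "prod_us-west-2_1.10.0" = "c4.xlarge"
  }
}

locals {
  # terraform.workspace interpolates the current workspace name;
  # the third argument is the fallback when the key is missing
  node_size = "${lookup(var.node_size_map, terraform.workspace, "c4.large")}"
}
```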

maarten avatar
maarten

How is that different from having two autoscaling groups inside one ‘workspace’, with separate min/max settings controlled by a conditional?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@maarten it’s a similar strategy, but this is more fluid

btai avatar

its similar

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you don’t need to maintain 2 of everything

btai avatar

yep

btai avatar

i went with that route initially

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you might need to have 2 GPU node pools, 2 high mem node pools, 2 high network node pools, etc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

with workspaces, you can just have 1 of each, then have as many “colors” as necessary

maarten avatar
maarten

Ok i see

btai avatar

the problem i ran into with the 2 autoscaling groups solution is when I tried to put it in a fully automated CI job, I realized i needed to maintain the state of which blue or green ASG was currently taking traffic

btai avatar

thanks @Erik Osterman (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Blue/Green Worker Node Deployment - Kubernetes, EKS and Terraform - Lumo

At Lumo, we needed a way to easily update the AMI or other attributes on our EKS nodes with zero downtime. We do blue/green EKS Kubernetes node updates.

btai avatar

this is close to how i’d solve this in AWS, but I’m not allowed enough configurability in Azure

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS | Amazon Web Services

Contributed by Madhuri Peri, Sr. EC2 Spot Specialist SA, and Shawn OConnor, AWS Enterprise Solutions Architect Many organizations today are using containers to package source code and dependencies into lightweight, immutable artifacts that can be deployed reliably to any environment. Kubernetes (K8s) is an open-source framework for automated scheduling and management of containerized workloads. In […]

2018-11-02

pericdaniel avatar
pericdaniel

If you have multiple ingress rules when creating an SG… for terraform, would I just create multiple ingress blocks?

pericdaniel avatar
pericdaniel

example:

pericdaniel avatar
pericdaniel

but it doesnt have a good example of multiple ingress rules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pericdaniel yes, you add multiple ingress rules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you need to (re)create the egress rule since TF deletes the default one https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L52

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
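
For illustration, recreating that egress rule as a standalone resource looks roughly like this (sketch; assumes a security group named aws_security_group.default):

```hcl
# Terraform removes the default allow-all egress rule from any SG it
# manages, so re-add it explicitly if you still want it
resource "aws_security_group_rule" "egress" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"              # all protocols
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.default.id}"
}
```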

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but don’t mix the two ways in one module; the standalone resource way is preferred

pericdaniel avatar
pericdaniel

yea i liked the inline cause it has it all laid out by the specific group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but standalone is better

pericdaniel avatar
pericdaniel

sell me

pericdaniel avatar
pericdaniel

sell me on the standalone

pericdaniel avatar
pericdaniel

lol

pericdaniel avatar
pericdaniel

maybe im misunderstanding something

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. in many cases it’s useful to expose the created SG outside the module so you or someone else could add rules if needed (not possible if defined inline)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. You can use counts to enable/disable rules based on conditions (not possible inline) https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L64
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  3. Since it’s a separate resource, it can be controlled separately from TF
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but I agree, the inline case looks better
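
The two styles side by side, as a minimal sketch (names and the enable_https variable are illustrative):

```hcl
# Inline: rules live inside the SG; nothing outside the module can add to them
resource "aws_security_group" "inline" {
  name = "example-inline"

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Standalone: the rule is its own resource, so it can be made conditional
# (via count) and more rules can be attached from outside the module
resource "aws_security_group" "standalone" {
  name = "example-standalone"
}

variable "enable_https" {
  default = "true"
}

resource "aws_security_group_rule" "https" {
  count             = "${var.enable_https == "true" ? 1 : 0}"
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.standalone.id}"
}
```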

pericdaniel avatar
pericdaniel

ah that makes sense… like if I add a port or rule… it’ll have to delete the whole sg and then recreate it rather than just adding that one rule?

maarten avatar
maarten

AWS implemented security groups in a way that makes it very hard for Terraform to know whether a security group rule corresponds to an inline rule inside a security group resource or to a separate security_group_rule resource. Everything works without problems as long as people don’t mix the two up in Terraform.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@pericdaniel exactly

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@maarten

pericdaniel avatar
pericdaniel

yea it would be nice if terraform had some “I’m aware” capability

pericdaniel avatar
pericdaniel

is the security_group_id = "${join("", aws_security_group.default.*.id)}"

pericdaniel avatar
pericdaniel

joinining it to the the sg

maarten avatar
maarten

it’s not about that, the problem is that most resources you create have unique identifiers, security group rules don’t.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s a splat + join pattern

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since the SG uses count

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the SG is disabled (count=0), then security_group_id = "${aws_security_group.default.*.id}" would produce errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

b/c TF could not find it (it’s disabled)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we join it with an empty string

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if the SG is disabled, the whole expression just produces an empty string

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t use count, you don’t need it

pericdaniel avatar
pericdaniel

hmm

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Interpolation Syntax - Terraform by HashiCorp

Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

since the SG uses count, it is a list, not a single resource

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

even if it has just one item (or it does not, if count=0)
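
Putting the whole pattern together (sketch, with an illustrative enabled flag):

```hcl
variable "enabled" {
  default = "true"
}

# With count, the resource is a list, even when it holds one item
resource "aws_security_group" "default" {
  count = "${var.enabled == "true" ? 1 : 0}"
  name  = "example"
}

# join() collapses the splat list to the single ID, or to an empty
# string when count = 0, instead of failing on a missing element
resource "aws_security_group_rule" "postgres" {
  count             = "${var.enabled == "true" ? 1 : 0}"
  type              = "ingress"
  from_port         = 5432
  to_port           = 5432
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/8"]
  security_group_id = "${join("", aws_security_group.default.*.id)}"
}
```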

Gabe avatar

does anyone here have a strong argument for using terraform remote state vs a data lookup to get information on resources in different accounts?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Good question. We do both. No specific “best practices” related to it right now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i suppose it comes down to RBAC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

to use remote state bucket, you need to grant full read access to that state file which may/may not expose secrets stored therein

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

while if you use the data providers you can use IAM to gate access without exposing anything sensitive in the statefiles

mrwacky avatar
mrwacky

yawp, we’re working to avoid secrets stored in state. In the end, we realized there’s little value generating IAM access keys with TF if it just puts secrets in the state, and they are hard^Wnon-trivial to rotate

joshmyers avatar
joshmyers

Depending on the situation, I’d opt for data sources rather than remote state. We had a use case with different remote states for lots of things, including a DR environment. If the main environment goes down due to AWS issues, for example, you can’t terraform your DR environment if it relies on pulling state from S3 in a failed region

joshmyers avatar
joshmyers

Have found a data lookup to be more reliable. Then again, if you are working in a massive account which is constantly hitting API rate limits even after AWS has increased them, you may want the single remote-state request vs potentially lots of data lookups
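
The two options look roughly like this (sketch; the bucket name, state key, and tag are illustrative):

```hcl
# Option 1: read another root module's outputs from its remote state.
# Requires read access to the whole state file (which may hold secrets).
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-west-2"
  }
}

# Option 2: look the resource up directly; IAM can scope this narrowly.
data "aws_vpc" "main" {
  tags {
    Name = "main"
  }
}

# vpc_id = "${data.terraform_remote_state.network.vpc_id}"   # needs an output
# vpc_id = "${data.aws_vpc.main.id}"
```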

maarten avatar
maarten

Which S3 are you using @joshmyers

maarten avatar
maarten

I like remote state, but learned not to chain remote states too much. Especially with Terraform updates this can be a huge pain. The pro of remote state is that the “master” state only needs to have the information filled in once, and the “slave” states are updated.

Using data sources one would still need to replicate data a bit, as the data sources need to be addressed. For never-changing information like vpc_id that is probably fine, but it works just as well with remote state. A matter of taste then.

joshmyers avatar
joshmyers

Pulling in 5 different remote states during a run, mostly from different accounts, on top of the actual state file for the main resources, was a pain point.

maarten avatar
maarten

I was joking a bit, but availability of S3 as an argument for not using remote state is of course a bit far-fetched, no?

joshmyers avatar
joshmyers

Not when you are building for the government and the entire point was for a DR scenario due to an entire regional outage.

joshmyers avatar
joshmyers

Most situations, yes :)

maarten avatar
maarten

ok wow, wouldn’t think something like that would happen. I remember one S3 outage 2 years ago, in us-east-1 I think, but it was solved quickly and most of it stayed available.

joshmyers avatar
joshmyers

Agreed, extremely unlikely, but it has happened before and it was a scenario for which the client wanted a working DR environment.

joshmyers avatar
joshmyers

AFAICR IAM policies on the KMS keys across accounts were also a pain when using the default KMS key. You couldn’t change the policy on the default key, and we had strict policy limits like source IPs and MFA etc

maarten avatar
maarten

oh yes, i think I’ve struggled with that once

maarten avatar
maarten

also AWS did not have any good guide on that online

joshmyers avatar
joshmyers

Yeah, lost a day or so to that one. CloudTrail showed the source IP being the correct one, so the policy should have been OK. In fact the request coming from another internal service was failing, despite CloudTrail showing otherwise

maarten avatar
maarten

ah nice..

maarten avatar
maarten

Still it’s quite cool that the US has something like GovCloud. Here in Europe, governments are not so quick to move to cloud services that are US-HQ’ed.

maarten avatar
maarten

So a lot of those projects end up on super expensive private clouds.

joshmyers avatar
joshmyers

This is UK government. Large chunks of UK gov is on public cloud like Amazon these days

joshmyers avatar
joshmyers

But yeah, the shitty vCloud private clouds are shitty expensive

btai avatar

so you would resort to only using local state?

joshmyers avatar
joshmyers

Some teams used git and git-crypt

btai avatar

git hosted on a private server?

joshmyers avatar
joshmyers

Yup, gitlab/GitHub enterprise

btai avatar

is it still hosted in the cloud?

btai avatar

artifactory or consul for remote state works too

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, interesting - didn’t know about the artifactory backend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

have you used it? @btai

btai avatar

for terraform backend, no

btai avatar

but i wanted to throw that out there because remote state gives you state locking

2018-11-03

nnamani.kenechukwu avatar
nnamani.kenechukwu

I am sure we have seen the integration of Terraform with Google Cloud Shell. I love this feature. You can read my blog about it. https://medium.com/@nnamani.kenechukwu/terraform-in-gcloud-shell-learning-infrastructure-automation-made-simple-7ef4a4300ec4

Terraform in GCloud Shell: Learning Infrastructure Automation Made Simpleattachment image

In the world of DevOps, automation is inevitable. We can automate anything and everything from the point of creating the application to…

2018-11-05

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@nnamani.kenechukwu thanks for sharing, interesting article

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

woohoo! @Andriy Knysh (Cloud Posse) added serverless support to https://github.com/cloudposse/terraform-aws-rds-cluster

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

1
catdevman avatar
catdevman

Just wondering have you heard anything regarding Mysql5.7 support for serverless

cloudposse/terraform-aws-rds-cluster

Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Not supported yet

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

And also not publicly accessible, only in a VPC

johncblandii avatar
johncblandii
DEVOPS-305: Add support for ALB SSLPolicy by johncblandii · Pull Request #60 · cloudposse/terraform-aws-elastic-beanstalk-environment

Problem The SSL policy is set to None by default. This means all versions of TLS, etc are supported and cannot be controlled. Solution Added a new loadbalancer_ssl_policy var Testing Tested with…

johncblandii avatar
johncblandii

no clue why the commits bloated, but the files changed are legit

2018-11-06

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@johncblandii thanks, merged to master

johncblandii avatar
johncblandii

no prob

btai avatar

what are your thoughts on coupling your EKS cluster with a VPC, so when you do an upgrade to the cluster (spinning up a new cluster + cutover) you are also spinning up a new VPC with it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

blue/green clusters+vpc?

btai avatar

yeah

btai avatar

that vs blue/green cluster within same VPC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

my biggest concern with that would just be ensuring security groups are kept current and allow both VPCs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

at a certain scale, would maybe be better to consider an active/active setup where you can take one cluster out of commission for upgrades

btai avatar

maintain two identical clusters at all time?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yes, but they don’t need to be scaled up equally

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but i find that if something’s not in use, you don’t know until it’s too late what doesn’t work.

2
joshmyers avatar
joshmyers

Some larger orgs I’ve worked at give you a VPC to operate in, never mind all the VPC setup like peering etc. I’d rather have that separated out

joshmyers avatar
joshmyers

IMO coupling to VPCs sounds like a recipe for a headache

1
joshmyers avatar
joshmyers

Is there a specific reason you’d want to do that?

2018-11-07

btai avatar

@joshmyers not a really good reason, the org im at currently follows that pattern (done with kops) which has worked for them

btai avatar

i personally gravitate towards how you feel which is why i brought up the question

btai avatar

we are actually going multi-cloud so I’m terraforming my azure clusters and its dependencies (vnet, subnets, etc) and I was debating whether to keep the same pattern vs doing the cutover within the vnet

btai avatar

vnet = azure’s vpc if youre unaware

joshmyers avatar
joshmyers

How do you manage terraform codebases for the different providers?

joshmyers avatar
joshmyers

Different runs, code, state files etc?

btai avatar

yeah

btai avatar

terraform repo with an aws sub-directory and an azure sub-directory

btai avatar

its different providers so the terraform code is completely different

btai avatar

but regardless terraform will allow us to manage multi-cloud infrastructure much easier

2018-11-08

inactive avatar
inactive

hey guys, i’m having an issue when creating an EMR cluster using terraform… I have tried multiple things, none of them seem to stick

inactive avatar
inactive

here is the error message: * aws_emr_cluster.a: Error waiting for EMR Cluster state to be "WAITING" or "RUNNING": TERMINATED_WITH_ERRORS: VALIDATION_ERROR: The VPC/subnet configuration was invalid: No route to any external sources detected in Route Table for Subnet: subnet-05df0470906f7ce15 for VPC: vpc-046d66b0ac8496fcd

inactive avatar
inactive

the interesting thing is, that if i run ‘terraform apply’ again, then it works

joshmyers avatar
joshmyers

Race condition?

inactive avatar
inactive

exactly

inactive avatar
inactive

the networking dependencies have not yet been fully created

inactive avatar
inactive

then it attempts to launch emr cluster prematurely

inactive avatar
inactive

i have tried adding “depends on” statements

inactive avatar
inactive

but no joy

inactive avatar
inactive

tried moving the networking resources to the main.tf file, but that doesn’t seem to make any difference…

joshmyers avatar
joshmyers
Using depends_on with emr_cluster does not actually wait for resource · Issue #12590 · hashicorp/terraform

Terraform Version 0.8.8 Affected Resource(s) emr_cluster Terraform Configuration Files resource &quot;aws_emr_cluster&quot; &quot;emr-cluster&quot; { name = &quot;${var.stack_name}-emr-cluster"…

joshmyers avatar
joshmyers

Maybe?

inactive avatar
inactive

wow — exactly my issue

inactive avatar
inactive

but… i don’t see a solution

joshmyers avatar
joshmyers

Without more info, hard to help

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Use terraform apply -target module.vpc -target module.subnets to first provision the network

joshmyers avatar
joshmyers

If these are disposable things you will be destroying and recreating often, it’s probably gonna bite again.

inactive avatar
inactive

ok, thanks aknysh… that may work — i’ll give it a try

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Or move the network files into a separate folder and provision it first. Then in EMR module look up the vpc and subnets, or use a remote state

inactive avatar
inactive

so, in general practice, is it normal to provision some items first? and then run terraform apply again for the rest?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

If you have those issues, yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

You can do it in three different ways: use target, lookup data source, or remote state
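
Sketched, the three options look like this (module, resource, and variable names are illustrative):

```sh
# 1. Targeted apply: provision the network first, then everything else
terraform apply -target=module.vpc -target=module.subnets
terraform apply

# 2. Data source lookup, in the EMR configuration:
#      data "aws_subnet_ids" "private" {
#        vpc_id = "${var.vpc_id}"
#      }

# 3. Remote state: read vpc_id/subnet_ids from the network stack's
#    state via a terraform_remote_state data source
```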

joshmyers avatar
joshmyers

It happens but is not ideal. If depends on doesn’t fix it there is likely a race condition AWS end in creation of the resource from when you call create and it is ready. Generally resources like this need to support polling the API until a thing is ready

inactive avatar
inactive

right — makes sense

inactive avatar
inactive

i suppose there is not a way to add a “sleep” or increase timeout setting?

inactive avatar
inactive

because i find that it if would just wait a few more seconds, then it would work

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Yea, but not a good practice :). What if next time you need to wait 7 seconds, but you have a timeout of only 3

joshmyers avatar
joshmyers
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

joshmyers avatar
joshmyers

p.s. I have no idea if that is the case for your route table resources etc as I haven’t looked into it

joshmyers avatar
joshmyers

If this is a one off thing, @Andriy Knysh (Cloud Posse) suggestions are the way to go

joshmyers avatar
joshmyers

If you destroy and re run this 10 times a day, that will get tiresome.

inactive avatar
inactive

yes, agreed — i just ran the -target vpc and it worked

inactive avatar
inactive

this is an OK workaround for now, i guess — but would prefer that i could run it just once

inactive avatar
inactive

i will look into the other 2 options…

inactive avatar
inactive

i can also share my emr.tf if you care to take a peek..

joshmyers avatar
joshmyers
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

inactive avatar
inactive

no — the route table is simply updated

joshmyers avatar
joshmyers

ah sorry, I probably misread

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

agree, all those workarounds are not ideal (better to just run terraform apply once). We had a few situations like this with complex module dependencies. So instead of spending days trying to find a solution which might not even exist, just use -target

inactive avatar
inactive

i use vpc module to create the initial artifacts, enable nat gateway, and then on my emr.tf I create a new subnet and add it to an existing route table

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

why not create a VPC and subnets first, and then in the EMR module just use them?

inactive avatar
inactive

well i thought i did that by moving all of the networking resources to main.tf

inactive avatar
inactive

but, yes i think creating the VPC first with a targeted approach makes sense

inactive avatar
inactive

then run terraform apply a 2nd time…

inactive avatar
inactive

works for me

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea i think that when you create a subnet and add it to the route table, it takes time. And then EMR wants to use the route, but it does not exist yet. If you create all subnets first and then use one of them for EMR, it should work
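
One way to express that ordering inside a single configuration (a sketch; resource names are illustrative, and per the issue linked earlier, depends_on has historically been flaky for aws_emr_cluster, so the -target approach may still be needed):

```hcl
resource "aws_route_table_association" "emr" {
  subnet_id      = "${aws_subnet.emr.id}"
  route_table_id = "${aws_route_table.main.id}"
}

resource "aws_emr_cluster" "this" {
  # ... other required arguments ...

  ec2_attributes {
    subnet_id = "${aws_subnet.emr.id}"
  }

  # make the cluster wait until the subnet is routed
  depends_on = ["aws_route_table_association.emr"]
}
```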

inactive avatar
inactive

agreed — thank you both for your input

inactive avatar
inactive

separate topic:

inactive avatar
inactive

i’m still trying to provision an internal load balancer from one of your modules, but not sure if that is supported

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

what module?

inactive avatar
inactive
Provide a way to specify visibility for Elastic Load Balancers to be internal · Issue #58 · cloudposse/terraform-aws-elastic-beanstalk-environment

Hello, I recently started using your module and was able to create a new Elastic Beanstalk environment successfully. However, I need my load balancers to be internal facing only. I could not find a…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah yea, not supported yet, we have an issue opened for that

inactive avatar
inactive

ok — that’s fine

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you know how to do it, can open a PR?

inactive avatar
inactive

uhm… not sure, i guess if i were to manually create the load balancer

inactive avatar
inactive

but I’m relying on elastic beanstalk because of all of the automation that is included

inactive avatar
inactive

but i’ll keep digging, maybe I can figure it out

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

need to add a few settings, one of them is

setting {
  namespace = "aws:ec2:vpc"
  name      = "ELBScheme"
  value     = "internal"
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

value = "internal" should be a variable

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we have already 4 settings in the aws:ec2:vpc namespace

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

need to add just this one

setting {
  namespace = "aws:ec2:vpc"
  name      = "ELBScheme"
  value     = "${var.elb_scheme}"
}
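
The matching variable declaration might look like this (a sketch; "public" mirrors the Elastic Beanstalk default):

```hcl
variable "elb_scheme" {
  default     = "public"
  description = "Set to `internal` to provision an internal load balancer that is only reachable from inside the VPC"
}
```
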
inactive avatar
inactive

ok, great — looks quite straightforward

inactive avatar
inactive

thanks again for your help

2018-11-09

lvh avatar

OK, so, Terraform.

Context: I have some infra (really mostly scheduled ECS tasks) that needs to repeat for multiple IAM roles (it’s auditing infra; each IAM role represents a role in an account with auditing privileges, and each scheduled task is a kind of audit). It seems Terraform won’t let me do that because there’s no count in modules. Since cloudposse uses modules extensively, I’m guessing I’m not the first person to run into this problem. How do you solve it?

lvh avatar

Right now I’m solving it by having another program generate terraform for me, which is the most go thing I’ve done this week.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@lvh sounds like generating TF files in this case is a good idea. What module are you using? Over what settings do you want to iterate? Maybe we could improve the module, or you could just use plain TF resources with counts

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Well, our modules might not address this particular use-case, but that doesn’t preclude using modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you might need some more specialized module that supports what you want to do

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’d need to see some more concrete code to visualize it (i’m more of a visual person)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’d avoid the code generation for as long as possible

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

is there one cloudposse module in particular you’re referring to?

btai avatar

terraform workspaces

btai avatar

if you want to use the same terraform module with different variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@btai that could be used

lvh avatar

Nah, it’s not a single module that’s causing problems, it’s an entire file with multiple resources + cloudposse modules in it

lvh avatar

It has ECS tasks, scheduling, iam roles, etc etc

lvh avatar

I also have an input set of IAM roles; I need to replicate the entire structure for multiple IAM roles – the obvious way to do that IMO would be to wrap the entire thing in a module and then have a count?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so at the very least, sounds like you want to wrap as much of that into a module for “business logic” and then invoke that N times

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but of course, count still won’t help

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

on the “code generation” front, if you make each invocation a separate file e.g. “customer1.tf”, “customer2.tf”, then it should be easy to add N invocations and just doing a text-replace

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

if you want to do a screenshare, lmk

lvh avatar

But I guess I can just write out the module calls explicitly

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrm… the approach @btai suggests might be interesting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyone know if this may be easier in terraform 0.12? (the universal hail mary)

maarten avatar
maarten

I like the @btai option. I guess @lvh has different customers to audit, with different roles respectively. A workspace per customer sounds like a nice idea and scales easily. With a big variable map with the IAM roles populated per customer, it’s easy for the calling code to do a simple lookup.

btai avatar

@lvh i would use terraform workspaces with variable maps

btai avatar
module "some_random_module" {
  source = "../../modules/some-random-module"

  name = "${var.name_map[terraform.workspace]}"
}

variable "name_map" {
  type = "map"

  default = {
    "customer_a" = "customer_a_name"
    "customer_b" = "customer_b_name"
  }
}

# one workspace (and hence one state) per customer
terraform workspace new customer_a
terraform workspace new customer_b
lvh avatar

I do have multiple customers but each customer has separate IAM roles, because each customer has multiple accounts

lvh avatar

i already have an AWS acct per customer though? so I guess I can do a … workspace per customer account

lvh avatar

which seems annoying but ok it’ll work

lvh avatar

thanks!

lvh avatar

@Erik Osterman (Cloud Posse) and no, 0.12 won’t fix this because they’ve given up on making it work for resources and modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh really! have you seen them say that “count” has no ETA of ever making it into tf modules?

lvh avatar

sorry, I meant for 0.12; they claim to have laid a lot of the groundwork for it, but it seems very very far away, and they already said 0.12 won’t have it

lvh avatar

just the comprehensions (maps, lists)

lvh avatar

it might still happen eventually, but not any time soon

joshmyers avatar
joshmyers

You should totally have different state files per customer if not already

joshmyers avatar
joshmyers

Until 0.12, maps are good for what you suggest but only 1 level deep, so no rich data there without nasty hacks

joshmyers avatar
joshmyers

So I’d probably do a workspace per customer per env?

btai avatar

^ ++

btai avatar

for the time being ive been using the workspace to switch on multiple

btai avatar

{environment}_{region}

btai avatar

and then split the workspace on _
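
Deriving the parts back out of a {environment}_{region} workspace name might look like this (a sketch in 0.11 syntax):

```hcl
locals {
  environment = "${element(split("_", terraform.workspace), 0)}"
  region      = "${element(split("_", terraform.workspace), 1)}"
}

provider "aws" {
  region = "${local.region}"
}
```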

2018-11-12

maarten avatar
maarten

Anyone having issues with security groups with v0.11.10 ?

pecigonzalo avatar
pecigonzalo

What sort of issues?

maarten avatar
maarten

I think it’s more related to anton’s module, will deep dive later

jonboulle avatar
jonboulle

is this a 0 -> -1 change?

maarten avatar
maarten

yes

jonboulle avatar
jonboulle

I wondered about that - an AWS API change? I had a similar one last week with something else

loren avatar
terraform-providers/terraform-provider-aws

Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.

1
loren avatar
resource/aws_security_group_rule: Properly handle updating description when protocol is -1/ALL by bflad · Pull Request #6407 · terraform-providers/terraform-provider-aws

Fixes #1920 Previously: — FAIL: TestAccAWSSecurityGroupRule_Description_AllPorts (21.74s) testing.go Step 2 error: Error applying: 1 error occurred: * aws_security_group_rule.te…

Amos avatar

I asked a question over in #announcements concerning one of your terraform project, but am thinking this is probably the more correct place to do it. Should I move that question over here?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I’m attempting to peer a VPC in another region but get this error message

* module.cluster_to_vpn.module.vpc_peering.data.aws_vpc.requestor: data.aws_vpc.requestor: InvalidVpcID.NotFound: The vpc ID 'vpc-xxxxxx' does not exist
	status code: 400
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

nah, we can just link it here

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we’ve not tried to set up inter-region tunnels with that module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

our use-case was to peer a backing services VPC (e.g. one with RDS, ElastiCache, etc) with a kops vpc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect the reason it’s not working is that the module will need to have multiple providers so that it can reference two different regions

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I feel like someone submitted PR for another module that does this, but I can’t find which repo that was in (it was not related to vpc peering)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(also, we implemented the module before inter-region VPC peering was possible)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(jumping on a call)

Amos avatar

I see! thanks for the info. I will attempt to fork it and try it out.

Amos avatar

This is similar to my use case as well, but I have additional VPCs, like a general infra VPC and a VPN VPC I need to peer as well (one of which is in a different region).

Suresh avatar
Suresh

I am trying to use ELB health for the ASG instead of EC2 health check!

2018-11-13

maarten avatar
maarten

Any VI users in the house? Give me your thumbs up https://github.com/scrooloose/nerdcommenter/pull/350

New delimiter for .tf extension by maartenvanderhoef · Pull Request #350 · scrooloose/nerdcommenter

What Replacing the commenting delimiter from a ; to a # for the .tf-file extension. Why Autodesk Transcript Files are in popularity not anything near Terraform files and it is hard to find a single…

3
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

done

Nikola Velkovski avatar
Nikola Velkovski

Done, I’ve been using https://github.com/tpope/vim-commentary for a long time and haven’t had any issues so far.

tpope/vim-commentary

commentary.vim: comment stuff out. Contribute to tpope/vim-commentary development by creating an account on GitHub.

1
1
Gabe avatar

do you guys use any tools for autocompletion for terraform on vim?

joshmyers avatar
joshmyers
juliosueiras/vim-terraform-completion

A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool - juliosueiras/vim-terraform-completion

2
mpmsimo avatar
mpmsimo

An honorable mention, not specifically for autocompletion but some other terraform nice to haves: https://github.com/hashivim/vim-terraform

hashivim/vim-terraform

basic vim/terraform integration. Contribute to hashivim/vim-terraform development by creating an account on GitHub.

1
1
Gabe avatar

exactly what i was looking for

hashivim/vim-terraform

basic vim/terraform integration. Contribute to hashivim/vim-terraform development by creating an account on GitHub.

mpmsimo avatar
mpmsimo

enjoy! it’s been serving me well so far

2018-11-14

nutellinoit avatar
nutellinoit

Hi everyone, i’m using https://github.com/cloudposse/terraform-aws-vpc-peering to setup peering connection between two vpcs

cloudposse/terraform-aws-vpc-peering

Terraform module to create a peering connection between two VPCs - cloudposse/terraform-aws-vpc-peering

nutellinoit avatar
nutellinoit

the first time i run apply, everything is created correctly

nutellinoit avatar
nutellinoit

the second time, terraform deletes routes on the two route tables of the two vpcs

nutellinoit avatar
nutellinoit

the third, the routes are recreated , and so on

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@nutellinoit it’s been a few months since we tested our implementation here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/kops-aws-platform/vpc-peering.tf

cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-kops-vpc-peering

Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you confirm that you use similar setup (we did not see the issue you described)

nutellinoit avatar
nutellinoit

uhm

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

but maybe something changed in TF or other things

nutellinoit avatar
nutellinoit

investigating the issue i think it’s related to another vpc module that i’m using, the one from hashicorp aws best practice

nutellinoit avatar
nutellinoit

i’ll get in touch if this is the case

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea thanks. you can try our vpc module and see what happens

nutellinoit avatar
nutellinoit

found it, the conflict is caused by an inline route block on the aws_route_table resource (like the 0.0.0.0/0 for the igw)

nutellinoit avatar
nutellinoit

removing the inline route and using the aws_route resource, the problem is gone!

1
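
The fix, sketched: a route table with no inline route blocks plus standalone aws_route resources, so other modules can attach routes without Terraform seeing them as drift on the next plan (resource names are illustrative):

```hcl
# Inline route blocks make the aws_route_table resource own ALL routes
# in the table, so routes created elsewhere (e.g. by a peering module)
# get deleted on the next apply. Instead, keep the table itself empty:
resource "aws_route_table" "public" {
  vpc_id = "${aws_vpc.main.id}"
}

# ...and manage each route as its own resource:
resource "aws_route" "igw" {
  route_table_id         = "${aws_route_table.public.id}"
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = "${aws_internet_gateway.main.id}"
}
```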

2018-11-15

2018-11-17

Tee avatar

Is it possible to create a private dns route 53 address and resolve it as static website in web browser internally behind vpn with internal elb using terraform or cloudformation

2018-11-18

joshmyers avatar
joshmyers

@Tee what problem are you trying to solve? Terraform / CF are just tools which talk to the AWS API. Is this possible via the API?

Tee avatar

Yes. So i want a simple webpage that is only available behind vpn using internal dns. https://internal.example.com so when i open link in web browser i can see content only when connected to vpn. As it just internal route 53 private domain resolved behind VPN

maarten avatar
maarten

@Tee I’ve responded to you in #aws

Tee avatar

Thanks

2018-11-19

mmarseglia avatar
mmarseglia

looks like my terraform state wasn’t saved. resources were created but they don’t exist in the state file. how do folks deal with a situation like this?

mmarseglia avatar
mmarseglia

should i try an import?

mrwacky avatar
mrwacky

sure

joshmyers avatar
joshmyers

@mmarseglia There is also https://github.com/dtan4/terraforming for generating the state of those resources, which you can hack back into Terraform if you now have split state. Although this can mean hand mangling JSON, which you would rather avoid.

dtan4/terraforming

Export existing AWS resources to Terraform style (tf, tfstate) - dtan4/terraforming

mmarseglia avatar
mmarseglia

thanks.. i’ll check that out and maybe try a terraform import

mmarseglia avatar
mmarseglia

i’ve got the code written..it’s just a matter of getting terraform to recognize what it already did
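
For resources the code already describes but the state doesn’t know about, terraform import attaches the real object to its resource address (addresses and identifiers here are illustrative):

```sh
terraform import aws_iam_policy.audit arn:aws:iam::123456789012:policy/audit
terraform import aws_s3_bucket.logs example-log-bucket
terraform plan   # should now show no creates for the imported resources
```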

mmarseglia avatar
mmarseglia

i’m still wondering why it didn’t save.. if maybe i did an apply and then shut my system down before it could save

maarten avatar
maarten

@mmarseglia Are you using remote state ?

mmarseglia avatar
mmarseglia

i am

mmarseglia avatar
mmarseglia

saving to an s3 bucket

maarten avatar
maarten

Have you checked for a file “errored.tfstate”

maarten avatar
maarten

in your root module folder

joshmyers avatar
joshmyers

It should obey SIGTERM/SIGINT in a sane way and try and get the state stored. Did you end up with half a state, with some resources?

mmarseglia avatar
mmarseglia

unfortunately i didn’t notice what was wrong, cleaned up by removing the .terraform directory, and tried applying again

mmarseglia avatar
mmarseglia

so any errored.tfstate file is gone

maarten avatar
maarten

errored.tfstate isn’t stored in .terraform afaik

mmarseglia avatar
mmarseglia

oh.. maybe it’s still around then

mmarseglia avatar
mmarseglia

i’ll look

mmarseglia avatar
mmarseglia

my clean step is a rm -rf terraform/.terraform terraform/*.tfstate output.json inspec.json bucket

maarten avatar
maarten

ah yeah, that’s quite rigorous

mmarseglia avatar
mmarseglia

@joshmyers some resources are in the state file, there’s an IAM policy in there that was created

mmarseglia avatar
mmarseglia

but others are not

maarten avatar
maarten

Sometimes simpler than importing is to just delete everything which isn’t in the state and apply again.

1
mmarseglia avatar
mmarseglia

i’ll have to reconsider that clean step

mmarseglia avatar
mmarseglia

sounds like it’s too drastic

maarten avatar
maarten

yep

joshmyers avatar
joshmyers

do you need this thing that failed to build?

joshmyers avatar
joshmyers

or can you destroy and re provision?

mmarseglia avatar
mmarseglia

i think i can go through and delete the resources it created and didn’t save state for

mmarseglia avatar
mmarseglia

that might be easier than an import

joshmyers avatar
joshmyers

Have worked on codebases where a single TF run contains 500+ resources, not so easy to delete, hah. Filter by Terraform tag on resources and delete

mmarseglia avatar
mmarseglia

this is why i need to tag things

joshmyers avatar
joshmyers

Also there is a tradeoff between large state files (the collateral damage that a bad TF run could do) and splitting into tiny but related state, where you find yourself then having to run multiple times to pick up new changes etc. How to draw those dependencies?

mmarseglia avatar
mmarseglia

that’s a tough problem.

joshmyers avatar
joshmyers

Terragrunt has a kinda neat way of dealing with it

joshmyers avatar
joshmyers
gruntwork-io/terragrunt

Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules. - gruntwork-io/terragrunt

mmarseglia avatar
mmarseglia

i like it

mmarseglia avatar
mmarseglia

didn’t think about separate .tfvars for different environments. but that’s a good idea

mmarseglia avatar
mmarseglia

right now i just run two environments, dev and prod, in separate AWS accounts
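
With the same code driving both accounts, the per-environment split usually comes down to a separate var file and a separate state per environment (a sketch; file names are illustrative):

```sh
# dev account
terraform init -backend-config=backends/dev.tfvars
terraform apply -var-file=envs/dev.tfvars

# prod account
terraform init -backend-config=backends/prod.tfvars
terraform apply -var-file=envs/prod.tfvars
```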

joshmyers avatar
joshmyers

Isn’t ideal though, as planning B doesn’t pick up changes that planning A has made, because A’s changes haven’t actually been applied yet. AFAICR

johncblandii avatar
johncblandii

that’ll be resolved once .12 comes out; modules get depends_on

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s great!

1
joshmyers avatar
joshmyers

Not things in different state though, no? Hence why it is important to try and split state on logical boundaries

loren avatar

maybe use something simple like Makefile to execute terragrunt/terraform in a required order?

joshmyers avatar
joshmyers

Aye, that fixes half of that problem. Other half is harder because of state.

joshmyers avatar
joshmyers
Terraform, VPC, and why you want a tfstate file per env

How to blow up your entire infrastructure with this one great trick! Or, how you can isolate the blast radius of terraform explosions by using a separate state file per environment.

i5okie avatar

hi all

i5okie avatar
cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

cloudposse/terraform-tls-ssh-key-pair

Terraform module for generating an SSH public/private key file. - cloudposse/terraform-tls-ssh-key-pair

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@i5okie the first one can generate a key and import it into AWS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the second one just generates a key-pair (does not touch AWS at all)

i5okie avatar

how does it import it into AWS?

i5okie avatar

nvm

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

2018-11-20

pecigonzalo avatar
pecigonzalo

Trying to trial ecs-codepipeline, but as far as I see there is no example of what generates this https://github.com/cloudposse/terraform-aws-ecs-codepipeline/blob/master/main.tf#L234

cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

pecigonzalo avatar
pecigonzalo

any pointers?

maarten avatar
maarten
cloudposse/terraform-aws-ecs-codepipeline

Terraform Module for CI/CD with AWS Code Pipeline and Code Build for ECS https://cloudposse.com/ - cloudposse/terraform-aws-ecs-codepipeline

cloudposse/terraform-aws-ecs-web-app

Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app

pecigonzalo avatar
pecigonzalo

Yeah, i know that part

pecigonzalo avatar
pecigonzalo

I was checking aws-ecs-web-app

pecigonzalo avatar
pecigonzalo

but the pipeline says codebuild will output a task

pecigonzalo avatar
pecigonzalo

but I don’t find any examples of the expected format, i.e. whether it expects the generated JSON or just a taskarn:version

pecigonzalo avatar
pecigonzalo

etc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@pecigonzalo I can provide examples in about 2 hours

pecigonzalo avatar
pecigonzalo

Yeah no rush! I was just curious/testing/playing around

pecigonzalo avatar
pecigonzalo

@Erik Osterman (Cloud Posse) Dont mean to bother, but if you could show that example, it would be great

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey! So hands are a bit tied by phone

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This has the build spec example

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We had another example in the chat history but unfortunately it’s over 10K messages ago

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(We are working on exporting the Slack archive so we can link to these conversations)

pecigonzalo avatar
pecigonzalo

Yeah no problem, maybe something like Obie could work

pecigonzalo avatar
pecigonzalo

Thanks

2018-11-21

antonbabenko avatar
antonbabenko

I’ve just released it to the public! Now nice AWS diagrams created using cloudcraft.co can be exported to Terraform infrastructure code. Please try it and share with your circles. More details in my blog post - https://medium.com/@anton.babenko/modules-tf-convert-visual-aws-diagram-into-terraform-configurations-e61fb0574b10

modules.tf — Convert visual AWS diagram into Terraform configuration

I am excited to announce public-release of modules.tf — project which allows conversion of visual AWS diagrams created using Cloudcraft.co

10
1
Nikola Velkovski avatar
Nikola Velkovski

Wow @antonbabenko this is truly amazing!

antonbabenko avatar
antonbabenko

Thanks!

loren avatar

anyone using terraform to apply tags on vpcs/subnets? it looks like terraform does this destructively, so if any tag is created outside terraform, the tag is removed on next apply… any workaround for that behavior?

pecigonzalo avatar
pecigonzalo

This is intended: you have a tag list, and if it changes, terraform reconciles it. You can probably use ignore_changes to apply the tags initially and then forget about them.
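
A minimal sketch of that ignore_changes approach (the resource and tag names here are illustrative):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags {
    Name = "main"
  }

  lifecycle {
    # Terraform sets the tags on create, then stops reverting tags
    # added or changed outside of Terraform.
    ignore_changes = ["tags"]
  }
}
```

The tradeoff: once ignored, Terraform also stops enforcing the mandatory tags it created.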

loren avatar

That’s unfortunate. We’d like to be able to set mandatory tags (and manage those over time), and otherwise let users set their own

pecigonzalo avatar
pecigonzalo

I dont believe this is possible, maybe lets wait for someone else

loren avatar

it’s kind of a pain… we have one user that wants to use kops, which wants to set tags on vpcs/subnets, but if we even set the Name tag, then terraform wipes out their tags

pecigonzalo avatar
pecigonzalo

Why do they need to put manual tags?

loren avatar

different teams managing different parts of the environment… we generally setup the boundaries around the account access and network, but within that boundary we don’t care what they do

loren avatar

i didn’t mean to imply they were manually setting tags… they can do it however they want, it’s just not in this terraform config/state…

nutellinoit avatar
nutellinoit
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

merged!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

0.5.0

nutellinoit avatar
nutellinoit

maarten avatar
maarten

quick q. How can I count the total number of split values of a string, inside a map, inside a list?

[ { 
 env: "key1:val,key2:val"
 somethingelse: "hello"
},
{
 env: "key1:val,key2:val,key3:val"
 somethingelse: "hi there"
}]

count = 5
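
This becomes expressible with 0.12’s for expressions; a sketch, assuming the list above lives in a var.tasks variable:

```hcl
variable "tasks" {
  default = [
    {
      env           = "key1:val,key2:val"
      somethingelse = "hello"
    },
    {
      env           = "key1:val,key2:val,key3:val"
      somethingelse = "hi there"
    },
  ]
}

locals {
  # flatten every comma-separated env entry into one list, then count: 2 + 3 = 5
  env_count = length(flatten([for t in var.tasks : split(",", t.env)]))
}
```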

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

perhaps share with us what you want to achieve ? it might open up other possibilities

maarten avatar
maarten

It’s work in progress @ https://github.com/blinkist/terraform-aws-airship-ecs-service/blob/f8df876804f41fd4673e8ad3a47068074ba39d00/modules/ecs_scheduled_tasks/main.tf#L146

I’m creating json which will be sent to a lambda. What I wanted to achieve is to have AWS style NAME VALUE pairs as JSON interpolated inside the task_defs datasource for the container_environment.

I wanted to use a null_resource and loop through the var.ecs_scheduled_tasks to create the name value pairs and refer to them from the task_defs datasource .

But as I control the lambda I can also just pass it through to the lambda and let the lambda figure it out.

blinkist/terraform-aws-airship-ecs-service

Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service

maarten avatar
maarten

dammit

joshmyers avatar
joshmyers

Terraform is not a proper language

joshmyers avatar
joshmyers

There is probably a hack for it…dare I ask if 0.12 fixes this?

maarten avatar
maarten

didn’t deep dive yet

maarten avatar
maarten

doesn’t look like nested loops work

maarten avatar
maarten

from: https://www.hashicorp.com/blog/hashicorp-terraform-0-12-preview-for-and-for-each

      [
        for num in var.subnet_numbers:
        cidrsubnet(data.aws_vpc.example.cidr_block, 8, num)
      ]
HashiCorp Terraform 0.12 Preview: For and For-Each

As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The pos…

maarten avatar
maarten

so maybe it does

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
shuaibiyy/terraform-provisioner

Manage multiple provisions of the same Terraform scripts. - shuaibiyy/terraform-provisioner

antonbabenko avatar
antonbabenko

I am not sure I understand this project. I mean, do we need something like this for real?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Looks interesting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

State file for terraform project state

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Wow! This is like Helmfile for Terraform: https://github.com/uber/astro/blob/master/README.md

uber/astro

Astro is a tool for managing multiple Terraform executions as a single command - uber/astro

1
loren avatar

slick, i like it!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

interesting alternative to terragrunt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i like that it’s more similar to other workflow automations i’ve seen in yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Feature Request: Support `from-module` parameter · Issue #8 · uber/astro

First of all, I love the direction of this utility. It’s a generalized approach to orchestrating complex, multi-phased applies for terraform. It’s a nice alternative to terragrunt that’s…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

couldn’t help it

loren avatar

i use terragrunt all over the place at the moment, but this is certainly intriguing as an alternative

2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Dig it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Has anyone attempted vendoring terraform modules?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
claranet/python-terrafile

Manages external Terraform modules. Contribute to claranet/python-terrafile development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Templating terraform has come up a few times. Was going through my stats and found this https://github.com/cbroglie/terrastache/blob/master/README.md

cbroglie/terrastache

Use terraform variables and mustache templates to generate terraform configs - cbroglie/terrastache

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@lvh

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
coretech/terrafile

A binary written in Go to systematically manage external modules from Github for use in Terraform - coretech/terrafile

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

This might be the tool we need to achieve some kind of vendoring with terraform

2018-11-22

Duong avatar

hello, does anyone know how to edit an existing resource that belongs to another module (i.e. was created by another module) in terraform?

I’m using github.com/cloudposse/terraform-aws-ec2-bastion-server and I want to edit its aws_security_group to add an egress block.

# Main bastion
module "bastion" {
  source              = "github.com/cloudposse/terraform-aws-ec2-bastion-server"

  name                = "${var.name}"
  ami                 = "${var.ami}"
  instance_type       = "${var.instance_type}"
  # This key needs to be created beforehand
  key_name            = "${var.deploy_key_name}-${terraform.workspace}"
  vpc_id              = "${var.vpc_id}"
  subnets             = "${var.subnets}"
  ssh_user            = "${var.ssh_user}"
  security_groups     = []
  namespace           = "${var.namespace}"
  stage               = "${terraform.workspace == "staging" ? "staging" : "production"}"
}

resource "aws_security_group" "default" {
  name        = "${var.name}"
  vpc_id      = "${var.vpc_id}"

  ingress {
    protocol  = "tcp"
    from_port = 22
    to_port   = 22

    cidr_blocks = ["xxx/32", "yyy/32"]
  }

  egress {
    protocol  = "tcp"
    from_port = 22
    to_port   = 22

    cidr_blocks = ["xxx/32", "yyy/32"]
  }

  depends_on = ["module.overblock_public_bastion"]
}

thanks.

nutellinoit avatar
nutellinoit
08:59:32 AM
nutellinoit avatar
nutellinoit

i think you need only an additional egress rule on the bastion security group
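
A sketch of that extra rule, attached to the module’s security group from outside (the output name security_group_id is an assumption; check the module’s outputs.tf for the exact name):

```hcl
resource "aws_security_group_rule" "bastion_egress" {
  type              = "egress"
  protocol          = "tcp"
  from_port         = 22
  to_port           = 22
  cidr_blocks       = ["xxx/32", "yyy/32"]
  security_group_id = "${module.bastion.security_group_id}"
}
```

This avoids declaring a second, competing aws_security_group for the same instance.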

Duong avatar

thanks @nutellinoit

1

2018-11-23

antonbabenko avatar
antonbabenko

@loren and other #terragrunt users, I’d love to hear your feedback on terragrunt structure I generate when generating code from cloudcraft.co diagrams.

loren avatar

got an example repo?

antonbabenko avatar
antonbabenko
01:23:17 PM

@loren This is an example which was generated from the complete diagram mentioned in my blogpost (includes VPC, SG, and real dependencies)

loren avatar

i don’t see any issues with the terragrunt structure, very easy to follow

loren avatar

the dynamic values script idea is definitely brilliant/hacky

loren avatar

would maybe like to see it in python instead, to try to get some cross-platform support (easy to install python on windows as a prereq)

loren avatar

might be nice to pin the version for the source modules, also

loren avatar

and have an option in the generation to use https source urls

antonbabenko avatar
antonbabenko
  1. The dynamic values script is worth a dedicated blog post. I am considering externalizing it to a separate repository and adding bats tests, or maybe rewriting it in the devops default language (not python).
  2. Module sources as https with pinned versions: yes, good idea. Isn’t git preferred over https? I don’t remember
loren avatar

GitHub docs seem to indicate a preference for https, but I’m thinking mostly just that different teams have different standards, and the source url doesn’t support interpolation, so whatever you choose imposes constraints on the authentication mechanism for all users…

loren avatar

I don’t know that there is a “default” devops language… Golang could work instead of python, or node.js… Go can be annoying with its GOPATH requirements… makes it less portable when running from src. Of course, a compiled binary is perfectly portable, but not great for including in a git repo…

loren avatar

Also, looks like some movement on a feature in terragrunt that might support dynamic interpolation? https://github.com/gruntwork-io/terragrunt/issues/603

new interpolation function to get values from var files · Issue #603 · gruntwork-io/terragrunt

I would like to propose the following enhancement to terragrunt. My team and I have been using terragrunt for about a year now and we like the value it adds on top of terraform :) The issue I have …

antonbabenko avatar
antonbabenko

Dynamic values you mentioned in this github issue are not exactly what I am doing with the shell script. Thanks for pointing to this issue!

antonbabenko avatar
antonbabenko

I think Go scripts which compile and release would be great for portability. Though I still don’t know how to run different scripts in terragrunt hooks based on the OS type. Lmk if you know.

loren avatar

My language of choice is python, which makes it easy. No need to run different scripts on different platforms, just handle the platform logic in the script. Ought to be able to do the same with any non-shell language
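
The idea of branching on the platform inside one script, rather than shipping per-OS hook scripts, can be sketched like this (the script names and commands are invented for illustration):

```python
import platform


def values_script_cmd():
    """Return a platform-appropriate command line for a helper script.

    Illustrative only: the script names are made up. The point is that
    one Python entrypoint can branch on the OS instead of maintaining a
    separate hook script per platform.
    """
    if platform.system() == "Windows":
        return ["powershell", "-File", "get-values.ps1"]
    return ["sh", "get-values.sh"]


cmd = values_script_cmd()
```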

1
Jan avatar

could you link to the post?

antonbabenko avatar
antonbabenko

Good idea, maybe a bit later, I really need people to try it and I can see errors in logs (much fewer than I expected really)

Jan avatar

I’d be happy to try it

antonbabenko avatar
antonbabenko

It is free for all types of cloudcraft users

Jan avatar
Downloading provider from terraform init is terribly slow · Issue #15798 · hashicorp/terraform

I’m working in South Korea, and our connection to fastly CDN is terribly slow (<100KB/s). Before 0.10.0, every provider plugin was bundled in the terraform binary so it was fine, but it became p…

Jan avatar

train + ipv6 = fail

maarten avatar
maarten

or just BAHN = fehl

Jan avatar

hahaha

maarten avatar
maarten

It’s funny how it’s possible to have more or less working WIFI on airplanes, but crappy signal the moment a train goes faster than 100mph.

Jan avatar

its great in the tunnels

Jan avatar

im gonna be doing this trip regularly

Jan avatar

might start building a packet loss and signal noise heat map

maarten avatar
maarten

which trip is it ?

Jan avatar

Berlin to Freiberg and back

maarten avatar
maarten

long one. cute city btw

Jan avatar

Yea its a good 5-6 hour trip

Jan avatar

beats the crap out of flying to Zurich and driving back in

Jan avatar

at least on the train I can work

Jan avatar

provided I don’t need to download everything

maarten avatar
maarten

running tf locally ?

Jan avatar

I hadn’t planned it well

antonbabenko avatar
antonbabenko

You can use terraform aws provider for plans locally, yes

Jan avatar

cached to ~/.terraform?

Jan avatar

Just hit Berlin, internet should get better

Jan avatar

in /localhost/

Jan avatar

nice

antonbabenko avatar
antonbabenko

Check my blogpost for arguments in provider - https://link.medium.com/zfM60Pfp5R

Make Terraform faster — skip unnecessary checks (AWS and S3)

Working with Terraform on a daily basis I feel it can be a good idea to share some small tips I have. Let me start with this one.
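
The flags from that post look roughly like this in a provider block (worth double-checking each flag against the current AWS provider docs):

```hcl
provider "aws" {
  region = "eu-central-1"

  # Skip optional validation/lookup API calls. Only safe when you already
  # know the credentials, region, and account are correct.
  skip_credentials_validation = true
  skip_get_ec2_platforms      = true
  skip_metadata_api_check     = true
  skip_region_validation      = true
  skip_requesting_account_id  = true
}
```

Note that skip_requesting_account_id leaves the provider without the account ID, so anything that interpolates it (ARNs, aws_caller_identity) may come up empty.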

maarten avatar
maarten

so with skip_requesting_account_id to true, how does that affect say, aws_vpc ?

antonbabenko avatar
antonbabenko

you can run “terraform plan -refresh=false” and terraform won’t perform API calls externally

2018-11-24

Nikola Velkovski avatar
Nikola Velkovski

@antonbabenko I’ll check them out with our state and see the improvements because we are sending the apply times to CW for measurement.

Nikola Velkovski avatar
Nikola Velkovski

Thanks!

antonbabenko avatar
antonbabenko

Last time I checked it was reduced number of calls, but not speed improvement in first place. Let us know here, or leave a comment to the post and I will update it.

Nikola Velkovski avatar
Nikola Velkovski

will do

2018-11-25

Andrew Jeffree avatar
Andrew Jeffree

never thought I’d be doing this. But I’m using terraform to create an obstacle course of a broken AWS environment for an interview process. But it’s kind of fun.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s cool

Andrew Jeffree avatar
Andrew Jeffree

Yeah. We’re finally expanding the team and I wanted a way to be able to reproduce a broken environment.

Andrew Jeffree avatar
Andrew Jeffree

for testing purposes

Andrew Jeffree avatar
Andrew Jeffree

having to reference my modules quite a bit without actually using them, as they’re all built with as much validation as I can put in them to prevent exactly what I’m building from happening

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’ve tried to write similar things for other languages. it can be tricky to come up with fair challenges.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one thing i’ve noticed is that technical domain expertise related to triaging “bugs” can be highly tied to an organization’s style of development

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we had a developer on our team who was a very senior terraform guy

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yet when it came to developing on our modules, the kinds of errors he ran into were new to him

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

not sure how to express this. what i mean is that we often see the same kinds of errors for our stuff, which might not be the same kinds of errors another company sees. the “count cannot be computed” error is common for us. yet it seems (from my experience interviewing a lot of candidates) that there are sr terraform developers who are not familiar with this error, despite having written a lot of modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so… i am curious, what kinds of things do you want to test for?

Andrew Jeffree avatar
Andrew Jeffree

so we’re not actually testing terraform

Andrew Jeffree avatar
Andrew Jeffree

we’re testing AWS knowledge/troubleshooting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha! i see what you mean

Andrew Jeffree avatar
Andrew Jeffree

I considered testing terraform, but I ran into the same thoughts you’ve mentioned above

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

gotcha, so using terraform to “stage” an environment in a “broken” state

Andrew Jeffree avatar
Andrew Jeffree

yep

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

are you creating a disposable org at the same time?

Andrew Jeffree avatar
Andrew Jeffree

not at this point in time. It’s just a single account thing.

Andrew Jeffree avatar
Andrew Jeffree

The role is a juniorish one

Andrew Jeffree avatar
Andrew Jeffree

It’s our first time hiring for this role externally so we’ll probably adjust it as we go.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, plant a stake and start somewhere

Andrew Jeffree avatar
Andrew Jeffree

But for now I’ve been told to avoid doing anything particularly nasty compared to some of the other obstacle courses I’ve made.

Andrew Jeffree avatar
Andrew Jeffree

I made a linux based one where it kicks off a shutdown timer for 60 seconds when you login the first time. We don’t use it often, as we have tons of them, it’s mostly used for the people who are smug and try and throw their weight around in the interview process.

2018-11-26

joshmyers avatar
joshmyers

Doh, just about to open a PR for the Terraform org CloudTrail stuff and appear to have been pipped by https://github.com/terraform-providers/terraform-provider-aws/pull/6580

resource/aws_cloudtrail: Add is_organization_trail argument and refactor import testing by bflad · Pull Request #6580 · terraform-providers/terraform-provider-aws

Closes #6579 Changes proposed in this pull request: resource/aws_cloudtrail: Add is_organization_trail argument tests/resource/aws_cloudtrail: Add import TestStep to all acceptance tests and add i…

joshmyers avatar
joshmyers

Pretty much same implementation, ah well

joshmyers avatar
joshmyers

Pipped by 5 hours

joshmyers avatar
joshmyers

Half a day I’m never getting back

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Argh! That’s annoying….

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s pretty amazing (generally) how fast this stuff gets contributed by the community.

Ben Hecht avatar
Ben Hecht

Hi, is this the proper support channel for terraform-aws-kops-vpc-peering?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hey Ben, yes it is

Ben Hecht avatar
Ben Hecht

Hi - We are looking to peer our Kops VPC to a VPC in another AWS account so pods can access services in the outside account. In looking for a way to codify the “routes” we had planned on adding manually, I came across your project. A few questions: why does a namespace need to be declared? Multiple pods in multiple namespaces might be sending traffic to the peer. Where do we define the network ranges for which traffic to send over the peer? And what is stage?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the namespace in the module is not a k8s namespace. namespace+stage+name is our naming convention (how we uniquely name all AWS resources)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for example, if your company is CloudPosse, then you can select the namespace as cp (or any other name you want)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

stage could be prod, staging, dev, testing

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

name is your app/solution name

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so, for production, you would select cp-prod-myapp to name all the AWS resources created by the modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we use the label modules for that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-terraform-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

Ben Hecht avatar
Ben Hecht

Got it. What is the purpose of bastion_name, masters_name, nodes_name? Is it for identifying existing SGs/Roles?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here is an example on how we use it to peer the kops VPC with backing services VPC https://github.com/cloudposse/terraform-root-modules/tree/master/aws/kops-aws-platform

cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not test it cross-account

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-kops-vpc-peering

Terraform module to create a peering connection between a backing services VPC and a VPC created by Kops - cloudposse/terraform-aws-kops-vpc-peering

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(default to how kops names the masters and the nodes)

Ben Hecht avatar
Ben Hecht

Does the module support peering with VPCs in different accounts? InvalidVpcID.NotFound

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not test it, not sure what needs to be done for that (will have to take a look)

Ben Hecht avatar
Ben Hecht

Thanks for the help!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

np

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module does not support cross-account b/c it was created before that capability was added by AWS

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need to use new resources for cross-account https://www.terraform.io/docs/providers/aws/r/vpc_peering.html

AWS: aws_vpc_peering_connection - Terraform by HashiCorp

Provides a resource to manage a VPC peering connection.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i recall with one of our clients we set up cross-account VPCs. the downside was there was no easy way to “auto-approve” (if i recall correctly)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so there was a manual step necessary to establish the tunnels

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i’m going off vague memory here

loren avatar

you can (now) autoapprove a vpc peering link across accounts

4
loren avatar

you need to define two providers, one for each end of the link

loren avatar
provider "aws" {}

provider "aws" {
  alias = "peer"
}

data "aws_vpc" "peer" {
  count = "${var.create_peering_connection ? 1 : 0}"

  provider = "aws.peer"

  id = "${var.peer_vpc_id}"
}

resource "aws_vpc_peering_connection" "this" {
  count = "${var.create_peering_connection ? 1 : 0}"

  peer_owner_id = "${var.peer_owner_id}"
  peer_vpc_id   = "${var.peer_vpc_id}"
  vpc_id        = "${var.vpc_id}"
  tags          = "${merge(var.tags, map("Name", "${var.name}-${var.vpc_cidr}<->${var.peer_alias}-${data.aws_vpc.peer.cidr_block}"))}"
}

resource "aws_vpc_peering_connection_accepter" "this" {
  count = "${var.create_peering_connection ? 1 : 0}"

  provider = "aws.peer"

  vpc_peering_connection_id = "${aws_vpc_peering_connection.this.id}"
  auto_accept               = true

  tags {
    Name = "${var.peer_alias}-${data.aws_vpc.peer.cidr_block}<->${var.name}-${var.vpc_cidr}"
  }
}
loren avatar

you can also create routes on both ends, using the multiple providers… (though now i’m seeing i biffed the route table association and just grabbed one instead of iterating over the list)

resource "aws_route" "public" {
  count = "${var.create_peering_connection ? 1 : 0}"

  route_table_id            = "${var.public_route_tables[0]}"
  destination_cidr_block    = "${data.aws_vpc.peer.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.this.id}"
}

resource "aws_route" "private" {
  count = "${var.create_peering_connection ? 1 : 0}"

  route_table_id            = "${var.private_route_tables[0]}"
  destination_cidr_block    = "${data.aws_vpc.peer.cidr_block}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection.this.id}"
}

resource "aws_route" "peer" {
  count = "${var.create_peering_connection ? length(var.peer_route_tables) : 0}"

  provider = "aws.peer"

  route_table_id            = "${var.peer_route_tables[count.index]}"
  destination_cidr_block    = "${var.vpc_cidr}"
  vpc_peering_connection_id = "${aws_vpc_peering_connection_accepter.this.id}"

  depends_on = ["aws_vpc_peering_connection_accepter.this"]
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @loren

loren avatar

you pass providers in when instantiating the module, like so:

module "pcx" {
  source = "../pcx"

  providers = {
    aws      = "aws"
    aws.peer = "aws.peer"
  }

  // vars
}

2018-11-27

loren avatar

oh, and now we have a transit gateway option, looks to make vpc peering mostly obsolete? https://aws.amazon.com/blogs/aws/new-use-an-aws-transit-gateway-to-simplify-your-network-architecture/

New – Use an AWS Transit Gateway to Simplify Your Network Architecture | Amazon Web Services

It is safe to say that Amazon Virtual Private Cloud (VPC) is one of the most useful and central features of AWS. Our customers configure their VPCs in a wide variety of ways, and take advantage of numerous connectivity options and gateways including AWS Direct Connect (via Direct Connect Gateways), NAT Gateways, Internet Gateways, Egress-Only Internet Gateways, VPC Peering, AWS Managed VPN Connections, […]

joshmyers avatar
joshmyers

Nice!

Jan avatar

Im curious to see a cost comparison

i5okie avatar

Hi

i5okie avatar

so, the terraform-aws-elastic-beanstalk-environment module isn’t working for me right now. with load-balancer type set to “application” i get CREATE_FAILED

i5okie avatar

Creating load balancer failed Reason: Property HealthyThreshold cannot be empty

i5okie avatar

im testing it with adding the setting
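
For reference, the kind of setting being tested probably looks like this. For ALBs the health-check options live under the default process namespace; the exact option name should be verified against the Elastic Beanstalk general options docs:

```hcl
setting {
  namespace = "aws:elasticbeanstalk:environment:process:default"
  name      = "HealthyThresholdCount"
  value     = "3"
}
```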

joshmyers avatar
joshmyers

@i5okie Do you have a gist of output or anymore info?

i5okie avatar
i5okie
11:51:07 PM

lost terraform’s output.

i5okie avatar

gah now its stuck in “invalid state” lol grrreat

joshmyers avatar
joshmyers

Generally stayed away from EBS as it is a pain to debug. Black box.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

for some of the beanstalk @johncblandii is probably the most senior!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

he’s taken it above and beyond

johncblandii avatar
johncblandii

hehe. More is coming too.

johncblandii avatar
johncblandii

Ping me directly, @i5okie. I’ll see if I can help.

2018-11-28

i5okie avatar

yeah i can’t figure it out. I thought adding that one setting would help, but it didn’t. ended up switching back to classic elb

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configuring an Application Load Balancer - AWS Elastic Beanstalk

Use an Application Load Balancer to route application layer request traffic to different ports on environment instances based on the HTTP path.

Andy avatar

Also, checkout https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html specifically the elbv2 ones might help. I remember running into issues as well and stuck with Classic.

General Options for All Environments - AWS Elastic Beanstalk

Configure globally available options for your Elastic Beanstalk environment.

i5okie avatar

i think i just figured out why my user data file isn’t getting pushed to bastion, with the aws_ec2_bastion_server module..

i5okie avatar

there is a variable “user_data_file”. but it is not actually used anywhere else in the module lol.

1
rohit avatar

you know how we can add propagate_at_launch = "true" to some of the resources but not all of them . i see that we can use cloudposse/terraform-null-label

additional_tag_map = {
    propagate_at_launch = "true"
  }

but wouldn’t it add this to all the resources ?

rohit avatar

and i am not sure about the side effects of adding this to all the resources

rohit avatar

so i am trying to figure out how to add this tag only to the resources that need it

rohit avatar

any help ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hrm…. “wouldn’t it add this to all labels”

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

can you clarify/elaborate

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, i think i gotcha

rohit avatar

typo

rohit avatar

updated it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so there’s a tags_as_list_of_maps output which uses those additional tags

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this was something @jamie added to make it easier to add tags that only apply to certain resources (e.g. autoscale groups)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

rohit avatar

so the tags defined in additional_tag_map will only be applied when i call "${module.my_app.tags_as_list_of_maps}" and will not be applied when i call "${module.my_app.tags}" , correct ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

almost, but i think you got it reversed

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the tags defined in additional_tag_map will only be applied when i call "${module.my_app.tags_as_list_of_maps}" and will not be applied when i call "${module.my_app.tags}"
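A minimal sketch of the distinction (module source pin and names are illustrative, not from the thread):

```hcl
module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master" # pin a real tag in practice
  namespace = "eg"
  stage     = "prod"
  name      = "app"

  # Only surfaces in tags_as_list_of_maps, not in the plain tags map
  additional_tag_map = {
    propagate_at_launch = "true"
  }
}

# Plain map of tags — does NOT include additional_tag_map entries
output "tags" {
  value = "${module.label.tags}"
}

# List of {key, value, propagate_at_launch} maps, for aws_autoscaling_group tags
output "tags_as_list_of_maps" {
  value = "${module.label.tags_as_list_of_maps}"
}
```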

rohit avatar

lol yeah

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

rohit avatar

makes sense

rohit avatar

this is great

rohit avatar

i also see that there is another module called terraform-terraform-label

rohit avatar

what’s the difference between these two modules ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

hi @rohit

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

terraform-terraform-label is a simpler version of terraform-null-label (before a lot of features were added to it)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for simple use-cases you can use terraform-terraform-label

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) i don’t think it has the feature he wants: tags_as_list_of_maps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

those tags as list of maps are only for some AWS resources

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

most AWS resources do not require it

1
rohit avatar

@Andriy Knysh (Cloud Posse) makes sense. Thanks

rohit avatar

is it possible to use string interpolation in both these modules ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, terraform-null-label supports a few additional features: 1) context; 2) var.environment

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@rohit what do you mean by string interpolation? string interpolation can be used in any TF resource

rohit avatar

i mean when using terraform-null-label, can i use something like stage = "${terraform.workspace}" ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

rohit avatar

nice

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(it’s not related to terraform-null-label, you can do it with any module or resource)

rohit avatar

i am fairly new to terraform and i noticed that there are some restrictions on where you could use string interpolation, count variable etc. so just wanted to confirm

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

aha

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ah ok, yes you can use it as inputs to any module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but be aware of “count cannot be computed”!
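A sketch of that pitfall in Terraform 0.11 (resource names and AMI are placeholders): `count` can reference variables known at plan time, but not attributes only known after apply.

```hcl
# Works: count depends on a variable known at plan time
variable "instance_count" {
  default = 2
}

resource "aws_instance" "ok" {
  count         = "${var.instance_count}"
  ami           = "ami-12345678" # placeholder
  instance_type = "t2.micro"
}

# Fails with "value of 'count' cannot be computed" in 0.11, because the
# subnet ids are only known after apply:
#
# resource "aws_instance" "broken" {
#   count = "${length(aws_subnet.example.*.id)}"
#   ...
# }
```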

rohit avatar

good to know

rohit avatar

so if i do something like this

tag_specifications {
    resource_type = "instance"
    tags          = "${module.app.tags}"
  }

in aws_launch_template resource and tags = ["${module.app.tags_as_list_of_maps}"] in aws_autoscaling_group resource, will it apply the tags twice ?
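Spelled out, the two attachments look roughly like this (resource names, AMI, and the subnet variable are illustrative); the launch template carries plain instance tags, while the ASG carries the list-of-maps form with `propagate_at_launch`:

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-12345678" # placeholder
  instance_type = "t2.micro"

  # Plain tags map, applied to instances at launch
  tag_specifications {
    resource_type = "instance"
    tags          = "${module.app.tags}"
  }
}

resource "aws_autoscaling_group" "app" {
  max_size            = 3
  min_size            = 1
  vpc_zone_identifier = ["${var.subnet_ids}"] # assumed variable

  launch_template = {
    id      = "${aws_launch_template.app.id}"
    version = "$$Latest"
  }

  # List of {key, value, propagate_at_launch} maps
  tags = ["${module.app.tags_as_list_of_maps}"]
}
```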

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group


1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

because of the “count cannot be computed” issues with terraform-null-label when it’s used in some complex module hierarchies (not all)

rohit avatar

it makes sense how you did it using terraform-terraform-label but if i use terraform-null-label, does my question above makes sense ?

rohit avatar

well, i don’t think my question is related to using one vs other

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, your question is related to how the tags will be applied to the EC2 instances if they are specified in both launch template and auto-scaling group

rohit avatar

yup

rohit avatar

so in your case what happens ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we did not see duplicate tags

rohit avatar

ok

rohit avatar

how did you organize your modules ? was it based on apps or category ?

rohit avatar

i am planning to do it based on category

rohit avatar

modules/compute/app1,modules/storage,module/networking

rohit avatar

and i noticed that some people do it by apps

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, we have a pretty novel way of doing things.

rohit avatar

just want to know how you did it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

first as you see, we have a lot of modules. these follow the naming convention set forth by the terraform registry

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then there are the invocations of these modules; basically, how we use these modules in concert with each other

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we call this our terraform-root-modules because they are the “root” module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

then we pull in the root modules where we want to use them.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/testing.cloudposse.co

Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

notice we use docker for everything

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this eliminates almost all local (or native) dependencies since we can bundle it in the docker image

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

e.g. terraform, chamber, aws-vault, etc

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

there are certainly a lot of ways to do things

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

here are our reference architectures: http://cpco.io/reference-architectures

rohit avatar

we are not using docker and i am not sure which one to pick from two approaches i mentioned

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also google #terragrunt and check out their reference architectures

rohit avatar

heard a lot about terragrunt

rohit avatar

but terraform is new to me and i didn’t want to learn a new tool on top of terraform

rohit avatar

is there a reason why you didn’t use terragrunt ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

there are two main patterns to assemble everything together and provide variables:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  1. Template rendering (as done by terragrunt)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  2. Using Docker containers + multi-stage images + ENV vars specified in Dockerfile (and in a secret store such as AWS SSM for secrets) - we do it with geodesic (https://github.com/cloudposse/geodesic)
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so we mostly use #2 as you can see in http://cpco.io/reference-architectures

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@rohit to answer your question how did you organize your modules ? was it based on apps or category ?, it’s never that straightforward, we probably organize the modules by categories first https://github.com/cloudposse/terraform-root-modules/tree/master/aws

cloudposse/terraform-root-modules

Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules

rohit avatar

@Andriy Knysh (Cloud Posse) i am planning to organize by category too

rohit avatar

Is it possible to download userdata template from S3 ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-user-data-s3-backend

Terraform Module to Offload User Data to S3. Contribute to cloudposse/terraform-aws-user-data-s3-backend development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

this is a nice pattern for offloading the user data to S3

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

unfortunately, this module is quite old and the documentation is ~poor~ missing

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the basic idea is to inject user data which curls a resource on S3
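A hedged sketch of that pattern in 0.11-era syntax (bucket variable and file names are assumptions): the real script lives in S3 and the instance user data is just a small stub that fetches and runs it, sidestepping the user data size limit.

```hcl
# Store the real (possibly large) script in S3
resource "aws_s3_bucket_object" "bootstrap" {
  bucket  = "${var.bucket_name}" # assumed variable
  key     = "bootstrap.sh"
  content = "${file("bootstrap.sh")}"
}

# The instance only gets a tiny stub that pulls and runs the real script.
# $${...} escapes template interpolation so bash sees plain variables.
data "template_file" "user_data" {
  template = <<EOF
#!/bin/bash
aws s3 cp s3://$${bucket}/$${key} /tmp/bootstrap.sh
chmod +x /tmp/bootstrap.sh
/tmp/bootstrap.sh
EOF

  vars {
    bucket = "${var.bucket_name}"
    key    = "${aws_s3_bucket_object.bootstrap.key}"
  }
}
```

The instance profile needs `s3:GetObject` on the bucket for the stub to work.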

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-user-data-s3-backend

Terraform Module to Offload User Data to S3. Contribute to cloudposse/terraform-aws-user-data-s3-backend development by creating an account on GitHub.

rohit avatar

the example here looks simpler

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that won’t overcome size limits

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
user_data     = "${data.aws_s3_bucket_object.bootstrap_script.body}"
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i forget the exact limit, but it’s low - like 16K

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

their example is solving a different use-case

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

perhaps some 3rd party generates a bootstrap script or you have a library of bootstrapping scripts somewhere

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but the problem remains that if the script is large, it will get rejected by the AWS apis

rohit avatar

that’s true. there is always that size limit that we have to overcome

rohit avatar

Is there a reason why your modules are not posted on terraform registry ?

rohit avatar

or is that not part of your plan ?

rohit avatar

i think you do have great modules and pushing them to terraform registry would be great, just a thought

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh yes! that’s our biggest traffic source

rohit avatar

that’s good

rohit avatar

i didn’t know that you had modules up there

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we were so excited about it. the day it was announced we immediately started a huge initiative to rename all of our modules to follow convention and publish them on the registry.

rohit avatar

awesome

rohit avatar

i couldn’t find terraform-aws-user-data-s3-backend on registry

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, crap

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe we missed that one. i thought they were all there.

rohit avatar

do you feel it needs any updates or just have to push it to registry ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so the registry just pulls in realtime from github

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we haven’t used it lately, so it might need some TLC

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we accept nearly all PRs after codereview)

rohit avatar

i might be able to help if you can let me know what needs to be updated

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

oh, i think it’s in full working order - that’s the way we last left it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but it was many terraform versions ago, so things might have changed

rohit avatar

i don’t see any pins on terraform version

rohit avatar

BTW, is this module used to download userdata from S3 or the opposite ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

both

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i believe…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

i think it will upload the user data to s3 and configure the user data script to pull it from s3

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so it’s an end-to-end solution

rohit avatar

hmmm. what i am looking for is a way to download the template already stored on s3

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

how did the template get to S3?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

rohit avatar

it was manually uploaded

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

in that case, you might just want to fork this module and create a new one

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

though my professional opinion is to avoid the “manual” part of that upload

rohit avatar

i agree with you on the manual part

rohit avatar

i guess i am debating whether to store the userdata template in the same terraform repo or just pull that template from S3

rohit avatar

so the use case here is: we use chef for configuration management and it needs a few variables called roles, recipes, chef_environment which are different for different apps. All the other stuff is common across apps, so what we currently do is include a lightweight userdata script as part of each launch template, which in turn downloads the userdata stored on s3 and passes the variables to that script
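One way that per-app setup could look (a sketch; the template file, variables, and AMI are assumptions, not the poster's actual code): render the lightweight stub with the Chef attributes, then hand it to the launch template.

```hcl
# Render the per-app stub with Chef attributes as template variables
data "template_file" "chef_userdata" {
  template = "${file("${path.module}/userdata.sh.tpl")}" # assumed template file

  vars {
    chef_environment = "${var.chef_environment}"
    roles            = "${join(",", var.roles)}"
    recipes          = "${join(",", var.recipes)}"
  }
}

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = "ami-12345678" # placeholder
  instance_type = "t2.micro"

  # Launch templates expect base64-encoded user data
  user_data = "${base64encode(data.template_file.chef_userdata.rendered)}"
}
```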

rohit avatar

i hope my usecase makes sense to you

joshmyers avatar
joshmyers

@rohit have done exactly this. In cloud-init we set the Chef dna JSON based on Terraform vars

joshmyers avatar
joshmyers

Why not use the thing to automate putting the userdata to S3 in the first place?

loren avatar

definitely agree on automating the push to s3, and would recommend keeping the provisioning templates/scripts in source control along with the logic/automation for launching the instance

joshmyers avatar
joshmyers

Note that you can also do multipart cloud-init for multiple scripts. Probably has been mentioned and obvs but one of the advantages of splitting out and keeping cloud-init minimal and having it call something else that does the heavy lifting, is that if you change the heavy lifting script, Terraform won’t want to immediately throw away all your infra because cloud-init changes force a new resource (unless using lifecycle stuff)
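The multipart approach can be sketched with the template provider's `template_cloudinit_config` data source (bucket name and script are placeholders); the stable stub and the changeable heavy-lifting script are kept separate.

```hcl
# Keep the cloud-init payload minimal and stable; the heavy-lifting script
# lives in S3, so changing it doesn't change user data (and so doesn't
# force Terraform to replace the instance).
data "template_cloudinit_config" "config" {
  gzip          = true
  base64_encode = true

  # Part 1: tiny, stable bootstrap shellscript
  part {
    content_type = "text/x-shellscript"
    content      = "#!/bin/bash\naws s3 cp s3://my-bucket/heavy.sh /tmp/heavy.sh && bash /tmp/heavy.sh"
  }

  # Part 2: cloud-config, e.g. package installation
  part {
    content_type = "text/cloud-config"
    content      = "packages:\n  - curl"
  }
}
```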

loren avatar

Very good point! A change to user data requires a new instance!

joshmyers avatar
joshmyers

Always fun to roll your infra for a whitespace change

joshmyers avatar
joshmyers

I assume @rohit has hit this problem every time wanting to add/change a node attribute

tamsky avatar


what i am looking for is a way to download the template already stored on s3

a reasonably nice and simple example is here; though the concept of error handling is missing:

rohit avatar

@joshmyers do you mind sharing what you did in your project?

joshmyers avatar
joshmyers

Don’t have that codebase anymore but it was pretty much what https://github.com/cloudposse/terraform-aws-user-data-s3-backend/blob/master/main.tf does

cloudposse/terraform-aws-user-data-s3-backend

Terraform Module to Offload User Data to S3. Contribute to cloudposse/terraform-aws-user-data-s3-backend development by creating an account on GitHub.

joshmyers avatar
joshmyers

Terraform puts an object in S3. Terraform knows your environment etc. and likely other Chef node attribute type things. Your node’s actual cloud-init is an upstart job to pull said file from S3 and run it

rohit avatar

How do you overcome the issue that i mentioned here https://github.com/terraform-providers/terraform-provider-aws/issues/6019 ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

rohit avatar

@Andriy Knysh (Cloud Posse) what i currently have is

launch_template = {
    id      = "${aws_launch_template.app.id}"
    version = "$$Latest"
  }
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we had that before as well

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

rohit avatar

and that would use whatever version is generated from launch template resource ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

(don’t remember all the little details)

rohit avatar

there is a PR open for this issue

rohit avatar
Add ability to set default version of aws_launch_template · Issue #4655 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

rohit avatar

so i am not sure if this is the same issue people are facing

jamie avatar

If anyone is getting ‘the value of count cannot be computed’ errors when using the null label module can you show me an example of where it’s used? I have recently been troubleshooting this and I have found some new cases where that error shows up when it normally wouldn’t.

jamie avatar

Short answer being: if upstream dependant resources have an error, that error isn’t displayed and instead downstream computed values like count are broken.

2018-11-29

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jamie here is an example of that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

jamie avatar

And when used, it gives the error?

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

jamie avatar

sometimes or all the time?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all the time

jamie avatar

Oh…. sorry i saw you respond

jamie avatar

in under someone else

jamie avatar

thanks, ill grab that and test it too.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks

jamie avatar

Do you have an example code i can cut and paste or shall i just write a new one to use it?

jamie avatar

To run the module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

don’t have example with null label (did not save it), but should be in the github history I think

jamie avatar

If i change that terraform-terraform-label to null-label it errors?

jamie avatar

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and you need to use tags_as_list_of_maps from the label instead of https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/main.tf#L61

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s the main part of the issue

jamie avatar

@Andriy Knysh (Cloud Posse) I have something for you to test

jamie avatar

Do you have an example handy to run the module?

jamie avatar

with the error?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no sorry, it was a few months ago when I tested it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ahh you mean to run the updated module?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s our EKS modules

jamie avatar

So in that module, i think the error stems to one particular thing

jamie avatar

(maybe)

jamie avatar

if i comment out

  # iam_instance_profile {
  #   name = "${var.iam_instance_profile_name}"
  # }
jamie avatar

and run it

jamie avatar

i get no errors… well.. apart from the example code having the min and max values around the wrong way..

variable "max_size" {
  default     = 2
  description = "The maximum size of the autoscale group"
}

variable "min_size" {
  default     = 3
  description = "The minimum size of the autoscale group"
}
jamie avatar

Which when i swap them.. i don’t get any errors based on ‘count’

jamie avatar

I’m heading to bed. But can you test this again when you have time to see if the error actually still happens for you?

jamie avatar

I switched it to use the tagging as you said, and I’m not getting any errors (after commenting out the iam_instance_profile block)

jamie avatar

Night

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@jamie thanks

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I’m busy with other things, but I’ll take a look

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Also I’ll prepare a working (haha) solution for you using null-label that produces the count errors

Stephen avatar
Stephen

Question: For terraform-aws-codebuild what is the buildspec variable? i have a buildspec.yml file in my repo. How do i configure my codebuild to use that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

1 sec @Stephen

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and it was throwing a lot of count can't be computed errors

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s why we just added this https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/master/main.tf#L61 to calculate tags_as_list_of_maps separately

cloudposse/terraform-aws-ec2-autoscale-group

Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group

Stephen avatar
Stephen

@Andriy Knysh (Cloud Posse) i think i got the answer from an issue re: that variable. It picks it up in the root dir during build if not set.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Build Specification Reference for AWS CodeBuild - AWS CodeBuild

Provides reference information about build specification (build spec) files in AWS CodeBuild.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/jenkins

Contribute to cloudposse/jenkins development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-jenkins

Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-codebuild

Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild

Stephen avatar
Stephen

@Andriy Knysh (Cloud Posse) It worked! It just pushed my git repo project up to dockerhub

Stephen avatar
Stephen

Now to hook up the pipeline, and then webhooks i think. Then it will be EtoE

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

nice @Stephen

Stephen avatar
Stephen

Your cicd example is GitHub -> ECR (Docker image) -> Elastic Beanstalk (running Docker stack) but i will need to replace ECR w/ Dockerhub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

let us know if you need any help

Stephen avatar
Stephen

It looks like maybe i should have just used terraform-aws-cicd instead. It looks like it has the Build and Pipeline together?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can

Stephen avatar
Stephen

i am trying to get the aws-cicd example to match with my existing pipeline configuration. What are the available options for Stage.actions.output.Artifacts?

Stephen avatar
Stephen

as well as inputArtifacts

Stephen avatar
Stephen

my live pipeline:

"outputArtifacts": [
    {
        "name": "BuildArtifact"
    }
],
"inputArtifacts": [
    {
        "name": "SourceArtifact"
    }
]
Stephen avatar
Stephen

my new build stage in aws-cicd:

input_artifacts  = ["code"]
output_artifacts = ["package"]
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephen it’s just names you select for the stages’ input and output artifacts, you can assign any names as long as you use the same name from the stage output to the next stage input
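In a pipeline definition that looks roughly like this (a sketch with required arguments elided; stage and artifact names are illustrative), the only constraint is that one stage's output name matches the next stage's input name:

```hcl
resource "aws_codepipeline" "pipeline" {
  # name, role_arn, artifact_store, etc. omitted

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["code"] # produced here...

      # configuration omitted
    }
  }

  stage {
    name = "Build"

    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["code"]    # ...consumed here by the same name
      output_artifacts = ["package"] # handed to the Deploy stage

      # configuration omitted
    }
  }
}
```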

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
InputArtifact - AWS CodePipeline

Represents information about an artifact to be worked on, such as a test or build artifact.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Stephen avatar
Stephen

Ok, another question. i am pulling from a public GH repo but it seems like i am required to use a GITHUB_TOKEN for pipeline, even though the codebuild module worked fine without it.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, currently the module requires GitHub token (we used it for private registries) https://github.com/cloudposse/terraform-aws-cicd/blob/master/variables.tf#L33

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add a token (even if you use only public repos)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can we make the token optional?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i don’t remember if making the token optional will work here https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L197

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

needs to be tested

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephen if you test it and it works, can open a PR, we’ll review promptly, thanks

Stephen avatar
Stephen

Leaving it blank fails

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

can you test with any random string?

Stephen avatar
Stephen

tried that, it then tried to auth

Stephen avatar
Stephen

w/o a string it complained about the config not being correct.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yea, that’s why it’s there, for private repos

Stephen avatar
Stephen

ya, it would be better if it were optional. Guess that’s an upstream issue?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for just public, the module needs to be changed to support that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s an AWS API ‘issue’: if you don’t provide a token, you can access only public repos. If you provide a token, it will try to auth even if you don’t want to use it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can create a token with limited permissions and use it for now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also can you open an issue in the repo and explain that? thanks

Stephen avatar
Stephen

How would i go about adding ENV vars to the pipeline->Build stage?

Stephen avatar
Stephen

In my case, in my buildspec.yml I do a docker login with my dockerhub u/p

Stephen avatar
Stephen

so i can push the docker image up

Stephen avatar
Stephen
pre_build:
    commands:
      - echo Logging in to Docker Hub...
      # Type the command to log in to your Docker Hub account here.
      - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
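Those `$DOCKER_USERNAME` / `$DOCKER_PASSWORD` variables can be supplied on the CodeBuild project's environment (a sketch; the variable names and image are assumptions, and the password would be better served from SSM Parameter Store than as plaintext):

```hcl
resource "aws_codebuild_project" "build" {
  # name, service_role, source, artifacts omitted

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/docker:17.09.0"
    type         = "LINUX_CONTAINER"

    environment_variable {
      name  = "DOCKER_USERNAME"
      value = "${var.docker_username}"
    }

    environment_variable {
      name  = "DOCKER_PASSWORD"
      value = "${var.docker_password}" # prefer an SSM parameter reference
    }
  }
}
```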
Stephen avatar
Stephen

@Andriy Knysh (Cloud Posse) re: the token issue: https://github.com/terraform-providers/terraform-provider-aws/issues/6646

aws_codepipeline should not require oauth token for public repos · Issue #6646 · terraform-providers/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or &quot;me to…

i5okie avatar
i5okie
05:43:14 PM

you guys ever seen this before?

Stephen avatar
Stephen

@i5okie Ya, i think you added the wrong stack name

i5okie avatar

I was originally using this solution_stack_name = "64bit Amazon Linux 2018.03 v2.8.6 running Ruby 2.3 (Passenger Standalone)"

Stephen avatar
Stephen

64bit Amazon Linux 2018.03 v2.8.6 running Ruby 2.3 (Passenger Standalone) is valid.

i5okie avatar

and copied the config over for another account. and now its complaining wrong stack. so i changed the name to what is in the screenshot

i5okie avatar

ooh wait.. i think i know whats wrong

Stephen avatar
Stephen

2.8.4 not found

i5okie avatar

i’ve changed to 2.8.4 but didn’t change the date

i5okie avatar

hehe

Stephen avatar
Stephen

i5okie avatar

four eyes is better than two
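For what it’s worth, one way to avoid a hardcoded, dated stack name drifting between accounts like this is to resolve it with a data source instead (a sketch; the exact `name_regex` is an assumption):

```hcl
# Look up the latest matching Beanstalk solution stack instead of
# hardcoding the dated name (the name_regex here is an assumption).
data "aws_elastic_beanstalk_solution_stack" "ruby_passenger" {
  most_recent = true
  name_regex  = "^64bit Amazon Linux (.*) running Ruby 2.3 \\(Passenger Standalone\\)$"
}

# Then reference it:
# solution_stack_name = "${data.aws_elastic_beanstalk_solution_stack.ruby_passenger.name}"
```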

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephen re: env vars in codebuild

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-codebuild

Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild

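A minimal sketch of passing build-time env vars through the codebuild module (the module source ref and variable values are hypothetical), which the buildspec above can then read for `docker login`:

```hcl
# Hypothetical invocation; the module exposes environment variables
# that become available to commands in buildspec.yml.
module "build" {
  source    = "git::https://github.com/cloudposse/terraform-aws-codebuild.git?ref=master"
  namespace = "eg"
  stage     = "dev"
  name      = "app"

  environment_variables = [
    {
      name  = "DOCKER_USERNAME"
      value = "my-dockerhub-user"
    },
    {
      name  = "DOCKER_PASSWORD"
      value = "prefer-ssm-or-secrets-manager-here" # avoid plaintext secrets
    },
  ]
}
```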
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

unfortunately, not propagated to https://github.com/cloudposse/terraform-aws-cicd

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Stephen avatar
Stephen

ya, i just added it to my local version

Stephen avatar
Stephen

@Andriy Knysh (Cloud Posse) Getting this now during my EB deploy stage:

Deployment failed. The provided role does not have sufficient permissions: Failed to deploy application. Service:AWSLogs, Message:User: arn:aws:sts::xxxxx:assumed-role/xxxx-development-app/xxxxx is not authorized to perform: logs:PutRetentionPolicy on resource: arn:aws:logs:us-east-1:xxxx:log-group:/aws/elasticbeanstalk/xxxxx-dev-node-demo/var/log/nginx/error.log:log-stream:

Any ideas?

joshmyers avatar
joshmyers

It needs access to cloudwatchlogs in order to create log streams and put retention policies on them etc
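A minimal sketch of the kind of policy addition being described (the role reference and resource scoping are assumptions; tighten them to your own log groups):

```hcl
# Grant the EB instance role the CloudWatch Logs permissions the
# deploy needs, including logs:PutRetentionPolicy from the error above.
data "aws_iam_policy_document" "eb_logs" {
  statement {
    effect = "Allow"

    actions = [
      "logs:CreateLogGroup",
      "logs:CreateLogStream",
      "logs:PutLogEvents",
      "logs:PutRetentionPolicy",
      "logs:DescribeLogGroups",
      "logs:DescribeLogStreams",
    ]

    resources = ["arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*"]
  }
}

resource "aws_iam_role_policy" "eb_cloudwatch_logs" {
  name   = "elastic-beanstalk-cloudwatch-logs"
  role   = "${aws_iam_role.eb_instance.id}" # hypothetical role reference
  policy = "${data.aws_iam_policy_document.eb_logs.json}"
}
```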

joshmyers avatar
joshmyers

Did that resolve the issue for you?

joshmyers avatar
joshmyers

I think it would be nicer to split out those perms into a more scoped form, rather than applying on every resource.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so yea, we did not use logs for our EB apps

Stephen avatar
Stephen

@joshmyers Yes, i was able to Source, Build, and Deploy once i added that new action

Stephen avatar
Stephen

Also had to add this as well:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Stephen will review and merge

Stephen avatar
Stephen

starting on your webhook module now…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephen reviewed https://github.com/cloudposse/terraform-aws-cicd/pull/36, thanks, just a few comments

Stephen avatar
Stephen

@Andriy Knysh (Cloud Posse) all set

2
Stephen avatar
Stephen

How do i replicate the CodePipeline UI where it connects to github and i tell it to use “webhooks” for the detection option?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Stephen terraform-aws-cicd does not use webhooks, it uses this https://github.com/cloudposse/terraform-aws-cicd/blob/master/main.tf#L203

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you know how to implement it, we’ll accept one more PR (thanks again for the two PRs)
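A sketch of what such a webhook-based PR could look like (all names and variables here are hypothetical, not part of the module): an `aws_codepipeline_webhook` paired with a `github_repository_webhook`, sharing an HMAC secret so pushes trigger the pipeline’s Source action directly instead of polling:

```hcl
# CodePipeline side: accept GitHub push events, verified via HMAC.
resource "aws_codepipeline_webhook" "this" {
  name            = "github-webhook"
  authentication  = "GITHUB_HMAC"
  target_action   = "Source"
  target_pipeline = "${aws_codepipeline.source_build_deploy.name}"

  authentication_configuration {
    secret_token = "${var.webhook_secret}"
  }

  filter {
    json_path    = "$.ref"
    match_equals = "refs/heads/{Branch}"
  }
}

# GitHub side: register the webhook on the repo (requires the github provider).
resource "github_repository_webhook" "this" {
  repository = "${var.repo_name}"
  name       = "web"
  events     = ["push"]

  configuration {
    url          = "${aws_codepipeline_webhook.this.url}"
    content_type = "json"
    secret       = "${var.webhook_secret}"
    insecure_ssl = false
  }
}
```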

i5okie avatar
i5okie
09:03:32 PM

nope still

i5okie avatar
i5okie
09:04:01 PM

straight from > eb platform list -v

i5okie avatar

oh crap. it’s not letting me create a new environment with an older stack. argh well this screws me over

i5okie avatar

lol. again.. stupid elastic beanstalk haha

joshmyers avatar
joshmyers

Step. Away. From. ElasticBeanstalk :D

5
2
2
rohit avatar

Is it possible to create an rds postgres instance with multiple databases?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yes/no

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You would use the Postgres terraform provider to provision the additional databases inside of the RDS instance

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The AWS API for RDS does not allow additional databases (as in schemas) to be provisioned

rohit avatar

is there any example that you use ?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

No we don’t have an example

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Provider: PostgreSQL - Terraform by HashiCorp

A provider for PostgreSQL Server.
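Along the lines Erik describes, a minimal sketch (all names hypothetical; Terraform needs network access to the RDS endpoint for this to work): point the postgresql provider at the RDS instance, then declare the extra databases as resources:

```hcl
# Connect the postgresql provider to the RDS instance's master user.
provider "postgresql" {
  host     = "${aws_db_instance.postgres.address}"
  port     = 5432
  username = "${var.master_username}"
  password = "${var.master_password}"
}

# Additional databases inside the single RDS instance.
resource "postgresql_database" "app" {
  name  = "app"
  owner = "${var.master_username}"
}

resource "postgresql_database" "reporting" {
  name  = "reporting"
  owner = "${var.master_username}"
}
```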

rohit avatar

thanks Erik

2018-11-30

Max Moon avatar
Max Moon

I have used the postgres TF provider, it works okay

Max Moon avatar
Max Moon

I wound up scrapping it, because I still needed to do enough work after applying changes with it that it wasn’t worth using it in the first place
