#terraform (2019-12)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-12-01
TIL that Terraform supports list expansion notation (the three dots, `...`)
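A minimal sketch of what this looks like in TF 0.12 (the list contents are made up):

```hcl
locals {
  ports = [80, 443, 8080]

  # min() expects separate number arguments, so "..." expands the
  # list into individual arguments
  lowest_port = min(local.ports...)
}
```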
2019-12-02
hey,
I currently have to split a Terraform project into multiple ones, since one of the providers needs data from a resource.
Does anyone have an idea on how to share the state / resources? For example, I have an Azure database and would like to use the Terraform Postgres provider. I would use a data source, but the name I generate for the database has a random string attached to it.
Typically tags can be used to more verbosely describe a resource (assuming that exists in Azure similar to AWS). So you can load it based on e.g. app=myapp, stage=production, resulting in the resource.
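On the AWS side, a tag-based lookup is a data source filtered by tags; a hedged sketch with assumed tag values:

```hcl
# Look up an existing VPC by its tags instead of hard-coding an ID;
# the tag keys/values here are assumptions.
data "aws_vpc" "selected" {
  tags = {
    app   = "myapp"
    stage = "production"
  }
}

output "vpc_id" {
  value = data.aws_vpc.selected.id
}
```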
you can use the remote state https://www.terraform.io/docs/providers/terraform/d/remote_state.html
Accesses state meta data from a remote backend.
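A minimal sketch of the remote-state approach, assuming an S3 backend; the bucket, key, and the `db_hostname` output name are placeholders:

```hcl
# Read another project's outputs through its remote state (TF 0.12 syntax)
data "terraform_remote_state" "database" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"          # placeholder
    key    = "database/terraform.tfstate" # placeholder
    region = "us-east-1"
  }
}

# Feed the shared value into the provider that needs it
provider "postgresql" {
  host = data.terraform_remote_state.database.outputs.db_hostname
}
```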
yes, remote state. Or you can save all params to SSM when you create a resource and then read them from SSM in other projects or in applications. E.g. https://github.com/cloudposse/terraform-root-modules/blob/master/aws/grafana-backing-services/aurora-mysql.tf#L176
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
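The SSM pattern sketched out; the parameter path and resource references are assumptions:

```hcl
# Producing project: write the generated name to SSM
resource "aws_ssm_parameter" "db_hostname" {
  name  = "/myapp/production/db_hostname" # path is an assumption
  type  = "String"
  value = aws_db_instance.default.address # assumed resource reference
}

# Consuming project (or application): read it back
data "aws_ssm_parameter" "db_hostname" {
  name = "/myapp/production/db_hostname"
}
```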
Thanks a lot ! I went with remote state!
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Dec 11, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Hi guys, do we really have to tag the VPC to get an EKS cluster created? Will subnet-level tagging do as well?
tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
}
asking this because we don’t have permissions to add tags at the VPC level, but we can at the subnet level. Couldn’t find much info, hence asking this.
My understanding from the docs is that you need to tag the VPC if multiple EKS clusters will be deployed in the VPC. The same goes for subnets. Subnets have an additional tag requirement telling AWS that they can be used for pods, services, etc.
So you might be ok if only one cluster.
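A hedged sketch of the subnet-level tags AWS documents for EKS; the VPC reference, CIDR, and cluster name are assumptions:

```hcl
resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id # assumed VPC reference
  cidr_block = "10.0.1.0/24"   # placeholder

  tags = {
    # Tells EKS/Kubernetes the subnet belongs to this cluster
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    # Marks the subnet as usable for internal load balancers
    "kubernetes.io/role/internal-elb" = "1"
  }
}
```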
@kskewes Thanks, it’s failing for us even for one cluster, we see events in CloudTrail saying “VPC tagging permissions denied”
When you create your cluster, specify all of the subnets that will host resources for your cluster (such as worker nodes and load balancers).
i think when i was playing around with stuff it was working without the vpc tags but i have no idea what piece depends on it and i’d be very hesitant to move forward without it
2019-12-03
I agree completely with @Chris Fowles. While spinning up my EKS cluster with TF, I forgot to tag my VPC appropriately. It seemed to work fine for the tests I carried out (deploying an API with an external ELB). However, I fixed it once I caught it, since it is indeed recommended by AWS
Hi all! Quick question about TF: is it possible to check in AWS whether a private IP address in a subnet is taken or not?
I’m writing a TF script to deploy Exasol into AWS. It ties to specific IPs, where the first is for the management node and the following are for DB nodes. I’m able to get the CIDR of the VPC subnet and, using cidrhost,
assign an address to a network interface. The only thing missing is a precaution to check whether this address isn’t already taken in the subnet.
It’s not an option - the addresses should be sequential - just for the peace of mind of the data engineers
consider - it’s a special requirement
but - looks like I’ve solved it myself. You can’t create a network interface if there already exists a network interface with the address specified
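The sequential-IP approach described above can be sketched with `cidrhost()`; the CIDR, offsets, and subnet reference are assumptions:

```hcl
locals {
  subnet_cidr = "10.0.1.0/24" # placeholder

  # cidrhost(prefix, n) returns the n-th address in the CIDR block
  management_ip = cidrhost(local.subnet_cidr, 10) # 10.0.1.10
  db_node_ips   = [for i in range(3) : cidrhost(local.subnet_cidr, 11 + i)]
}

# Creating the ENI fails if the address is already in use,
# which acts as the uniqueness check
resource "aws_network_interface" "management" {
  subnet_id   = aws_subnet.main.id # assumed subnet reference
  private_ips = [local.management_ip]
}
```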
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
Terraform support for the new AWS EKS Fargate feature.
Starting today, you can start using Amazon EKS to run Kubernetes pods on AWS Fargate. EKS and Fargate make it straightforward to run Kubernetes-based applications on AWS by removing the need to provision and manage infrastructure for pods. With Fargate, customers don’t need to be experts in Kubernetes operations to run a cost-optimized and highly-available cluster. Fargate eliminates the need for […]
Ah looks like a provider dev put one in just after me: https://github.com/terraform-providers/terraform-provider-aws/issues/11110
Hello all. Forgive me, but I’m new to Terraform as well as Cloud Posse. I have an issue that may or may not be a bug. I tried to use terraform-aws-cloudfront-s3-cdn, but am getting the error: Error downloading modules: Error loading modules: module cdn: Error parsing .terraform\modules\8b3cb57814301845ecb9970841a29803\main.tf: At 3 Unknown token: 3:16 IDENT var.namespace. Looking at the code I see: module "origin_label" { source = "git::https://github.com/cloudposse/terraform-terraform-label.git?ref=tags/0.4.0" namespace = var.namespace stage = var.stage name = var.name delimiter = var.delimiter attributes = compact(concat(var.attributes, ["origin"])) tags = var.tags } Shouldn’t that be namespace = "${var.namespace}"?
@Wayne Johnson you’re using TF 0.12 version of the module, but with terraform binary 0.11
Either pin a module to a release for 0.11, or use terraform binary 0.12
Thanks. I’ll look into upgrading.
Since terraform-docs doesn’t support TF 0.12 yet, how are you guys maintaining your module docs?
Alpine-based multistage-build version of terraform-docs and terraform-docs-replace in multiple versions to be used for CI and other reproducible automations - cytopia/docker-terraform-docs
Ya… same here. This is based on what we started with build-harness
The sad truth is that with the amount of time that has been invested in this hack, we could have had a bona fide solution
I can imagine. I’ve been reading some threads on this.
i’m surprised no one has forked terraform-docs to make it work
sad that hashicorp wouldn’t accept a PR https://github.com/hashicorp/terraform-config-inspect/pull/17
The markdown output is convenient but opinionated. It would be nice to take a custom template to control the final output. This PR adds the template cli flag to use a custom render template to con…
2019-12-04
2019-12-05
Hey! Does anyone have a good example of creating dynamic blocks or content for the aws_security_group_rule resource? I need to create an ingress resource that has quite a few rules.
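One wrinkle: `aws_security_group_rule` has no nested blocks, so `dynamic` blocks apply to the inline `ingress` blocks of `aws_security_group` instead. A hedged TF 0.12 sketch, where the rules variable and names are assumptions:

```hcl
variable "vpc_id" {
  type = string
}

variable "ingress_rules" {
  type = list(object({
    port        = number
    cidr_blocks = list(string)
  }))
}

resource "aws_security_group" "this" {
  name   = "example" # placeholder
  vpc_id = var.vpc_id

  # Generates one inline ingress block per entry in var.ingress_rules
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
```

For many standalone `aws_security_group_rule` resources, `count` over the same list is the usual alternative.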
have you thought of using the largest community module for security groups?
Terraform module which creates EC2-VPC security groups on AWS - terraform-aws-modules/terraform-aws-security-group
I have in the past but in this instance I am importing an existing security group. I wasn’t sure if this fits the use case.
what about creating a new security group and applying next to the old one
and then drop the old one
Hello there, do you know of any AWS application load balancer Terraform module?
Terraform module to provision a standard ALB for HTTP/HTTPS traffic - cloudposse/terraform-aws-alb
usage example: https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/examples/complete/main.tf#L42
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
Terratest for the example: https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/test/src/examples_complete_test.go
@Hugo Lesta ^
Thanks @Andriy Knysh (Cloud Posse)
In your Terraform modules, why do you choose to reference resources that have a count for enable/disable using join() instead of [0]?
join("", aws_ecr_repository.default.*.name)
aws_ecr_repository.default[0].name
resource "aws_ecr_repository" "default" {
count = var.enabled ? 1 : 0
name = var.use_fullname ? module.label.id : module.label.name
image_scanning_configuration {
scan_on_push = var.scan_images_on_push
}
tags = module.label.tags
}
resource "aws_ecr_lifecycle_policy" "default" {
count = var.enabled ? 1 : 0
repository = join("", aws_ecr_repository.default.*.name)
More context
join always works regardless of whether the resource list is empty or not
and it worked in TF 0.11 (.0. or [0] did not)
if enabled=false, this will not work aws_ecr_repository.default[0].name
b/c the list is empty
that makes sense. thanks for explaining!
in TF 0.12, the ternary operator could be used: name = var.enabled ? aws_ecr_repository.default[0].name : ""
which did not work in TF 0.11 b/c it always evaluated both sides
and got errors on getting the name for an item from the empty list
join works in all cases in 0.11 and 0.12
good to know! thanks again
does anyone know how to transfer state from TFE to s3?
@Brij S you would need to add the configuration in terraform for that https://www.terraform.io/docs/backends/types/s3.html
Terraform can store state remotely in S3 and lock that state with DynamoDB.
just make sure the bucket is not public and it has versioning enabled
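The backend block looks like this; the bucket, key, and DynamoDB table names are placeholders. After adding it, `terraform init` prompts to migrate the existing state:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tfstate-bucket"         # placeholder
    key            = "project/terraform.tfstate" # placeholder
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"           # optional state locking
  }
}
```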
2019-12-06
Hello, I am new to Terraform. terraform plan doesn’t catch all issues, and often they are discovered during terraform apply. How do you validate the apply before pushing to production? Is that the purpose of workspaces?
Hi Pierre, that is usually because of the API
e.g. the most common case is policies: plan doesn’t have a way to validate a policy in AWS
yes, I know, but then should I deploy like this: create new workspace -> deploy to test -> validate -> delete all resources in the test workspace -> deploy to prod workspace?
hmm that’s a good question
I think you’ll find this tool useful
Test Kitchen is an integration tool for developing and testing infrastructure code and software on isolated target platforms. - test-kitchen/test-kitchen
ha nice I found a kitchen-terraform module , I’ll check this
@Nikola Velkovski if I change the backend to s3, won’t it try to destroy the current resources and rebuild them?
nope
it will seamlessly migrate to the new backend
so let me get this straight: I simply switch the backend to s3, run tf init/plan/apply etc., and it will not try to destroy the already created resources and will say no changes necessary?
yup
you’ll see on STDOUT, terraform will ask you if you would like to migrate to the new backend and so on.
beer is on me if that’s not the case
Does anyone know if there is a way to group resource variables in the .tfvars file by the uniquely named resource? An example would be where I have 2 AWS RDS resources that share the same variable names, but I want the variable inputs to be unique for each resource. I know that this can be done with a module: terraform.tfvars file module_name = { engine = "postgres" engine_version = "11.5" } However, I would like to do this without having to create a module.
same way, just use objects in the .tfvars and in the variable definition…
engine1 = {
engine = "postgres"
engine_version = "11.5"
}
engine2 = {
engine = "mysql"
engine_version = "5.7"
}
then lookup the values from that object in your tf config…
resource "thing" "engine1" {
engine = var.engine1["engine"]
engine_version = var.engine1["engine_version"]
}
resource "thing" "engine2" {
engine = var.engine2["engine"]
engine_version = var.engine2["engine_version"]
}
Thanks, using a map variable for all the resource value assignments with lookup is the right path. In my case I will have to create a variable for each type per resource, i.e. map(string), map(number), and map(bool), and group the inputs by type per resource.
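In TF 0.12 an object-typed variable can mix value types, which avoids needing separate map(string)/map(number)/map(bool) variables per resource; the attribute names here are assumptions:

```hcl
variable "engine1" {
  type = object({
    engine            = string
    engine_version    = string
    allocated_storage = number # numbers and bools can live alongside strings
    multi_az          = bool
  })
}
```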
Has anyone used modules to import existing resources?
As in a module that is just data resources and outputs? I use this for commonly imported groups of things, like VPC parameters, KOPS parameters, etc. Easy to just put a single module block in terraform rather than copy paste all the stuff I need. It’s a bit more DRY that way.
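A sketch of such a data-only module (e.g. `modules/vpc-params/main.tf`); the variable and data source names are assumptions:

```hcl
variable "vpc_name" {
  type = string
}

# Look up the existing VPC by name tag
data "aws_vpc" "this" {
  tags = {
    Name = var.vpc_name
  }
}

# Collect its subnet IDs
data "aws_subnet_ids" "private" {
  vpc_id = data.aws_vpc.this.id
}

# Expose only outputs; the module creates nothing
output "vpc_id" {
  value = data.aws_vpc.this.id
}

output "subnet_ids" {
  value = data.aws_subnet_ids.private.ids
}
```

A single module block then replaces the copy-pasted data sources in each project.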
Thanks Alex
2019-12-07
2019-12-08
Can the RDS module be used for creating a cross region replica?
take a look at https://github.com/cloudposse/terraform-aws-rds-replica
Terraform module that provisions an RDS replica. Contribute to cloudposse/terraform-aws-rds-replica development by creating an account on GitHub.
it’s for creating cross-region replicas. PR https://github.com/cloudposse/terraform-aws-rds-replica/pull/4/files added the possibility to create replicas in the same region as the master
This is just to finish up #3 since the original poster doesn’t have time to finish it up. Fixes #2 as well.
thanks @Andriy Knysh (Cloud Posse)!
2019-12-09
is there a naming convention recommended when naming resources (like in some programming languages)? Should my resource be named az_vnet_internal or azVnetInternal? What about the object name? How would you rewrite the resource below?
resource "azurerm_virtual_network" "internal" {
name = "az_vnet_internal"
...
}
Usually, the value that you specify for the name argument needs to follow the guidelines of the relevant API. Terraform providers aren’t necessarily aware of such conventions, so the best place to look is the respective API documentation.
For example, here’s what Azure has to say about storage accounts: <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-storage-account-name-errors>
Describes the errors you may encounter when specifying a storage account name.
ok I see, I was expecting something more like the Python naming convention: “Use the function naming rules: lowercase with words separated by underscores as necessary to improve readability.” https://www.python.org/dev/peps/pep-0008/#method-names-and-instance-variables
The official home of the Python Programming Language
yeah, I figured as much. Unfortunately, there isn’t consistency across providers, and usually not even between resources under a single terraform-provider-*, so there’s no Terraform convention here.
Do you mean the resource name as viewed by the cloud provider, or the name of the resource in Terraform, e.g. resource <name>?
the resource name as viewed by the cloud provider
for example, I have found out the following:
azurerm_storage_account name: can only consist of lowercase letters and numbers, and must be between 3 and 24 characters long
azurerm_storage_container name: only lowercase alphanumeric characters and hyphens allowed
Right, exactly, that’s the example I shared in the docs link above to Azure
So I think I will name all objects in camelCase… or not, since some can only consist of lowercase letters
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Dec 18, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Thanks to @Andriy Knysh (Cloud Posse) we now have support for AWS-managed EKS node pools: https://github.com/cloudposse/terraform-aws-eks-node-group
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Just used this to deploy a node group, killed off the old “workers” module, and it was relatively smooth. I had to deal w/ the taints of the old workers, etc, but it is all running pretty smoothly.
The only issue I have now is getting the kubectl/aws calls to work on Terraform Cloud. I’ll work on that tomorrow, though.
Good stuff!
I believe @Andriy Knysh (Cloud Posse) has this working on TF cloud
and it downloads those utils
yeah, i’m going to dig in tomorrow. i was at a rancher mini-conf while deploying today
#livingdangerously
lol
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) since you’re not returning the kubeconfig anymore, what do you think is the best practice for me running kubectl/helm outside of the modules?
Store it in SSM I think
it isn’t returned anymore, right? If not, we can’t access it to store it
let me look. maybe i missed it
yeah, it isn’t an output on the cluster anymore
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Think @Andriy Knysh (Cloud Posse) had a solution for this
Will show how in an hour
we constructed kubeconfig before (in the 0.11 version), but it was not a good idea for a few reasons:
- it shows up in state
- kubeconfig is always available from the cluster, you just need to read it
- we constructed it for aws-iam-authenticator, but the newest EKS version uses aws eks get-token, so the constructed file would never be the latest setup
so, if you use CLI, do it like this https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/auth.tf#L139
or, if you need to construct kubeconfig, do it like this https://github.com/cloudposse/terraform-aws-eks-cluster/blob/0.11/master/kubeconfig.tpl
all those vars for the template are in the outputs
(we did the same for kops in geodesic: read kubeconfig from the cluster rather than constructing it)
ah, fair enough
do you place it in the default kubeconfig location, or just write the file and use --kubeconfig to point at it?
or the KUBECONFIG env var
either way
in a container, you can place it in the default kubeconfig location since it’s only on that container
this is terraform cloud so it’ll be gone after that run anyway
if you want to use helm, then use KUBECONFIG
on TF Cloud, I used the --kubeconfig param
what i found odd was it said kubectl was not found when i tried to use it, even when the “install kubectl” param was passed in
i tried to use just my own install (apt-get install kubectl)
it’s complicated on TF Cloud
for a few reasons:
- You can’t execute under sudo, so the only place you can install external binaries is under the terraform user folder
- But if you install binaries there, you have to put the location into the PATH variable in order to use the binaries
you can sudo; our make target:
init-kubectl:
@sudo apt-get install -y apt-transport-https && \
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update && \
sudo apt-get install -y kubectl
- But that does not work either, since once the shell that downloaded the binaries exits, the env vars are gone and PATH is no longer what you expect
on TF Cloud? No
on Terraform Enterprise (on-prem), yes
I added the var external_packages_install_path: "/home/terraform/.terraform/bin" to the TF Cloud workspace
and then to call a binary, just do /home/terraform/.terraform/bin/helmfile for example
or use binary = "${var.external_packages_install_path}/helmfile" in other TF code
we’re not on-prem. this is TFC
maybe they just ignore our sudo
or we were grandfathered in
maybe just ignore
but if you try to use locations other than the terraform user folder, you get an error
gotcha
so let me try out of that folder and see
fwiw
yeah, @Erik Osterman (Cloud Posse), but we have working code in TFC using sudo
in the Makefile
and in a local-exec:
resource "null_resource" "make_install" {
provisioner "local-exec" {
command = "sudo apt-get update && sudo apt-get install make"
}
provisioner "local-exec" {
command = "make init-ci"
}
triggers = {
run = uuid()
}
}
I can’t speak to why it works against their docs…but it does
from the run:
null_resource.make_install (local-exec): Executing: ["/bin/sh" "-c" "sudo apt-get update && sudo apt-get install make"]
In what path does it get installed?
i assume the default dir for apt-get resources. i will check, though
just noticed this in the makefile:
init-aws-auth: ## Initialize aws-iam-authenticator
@curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator && \
chmod +x ./aws-iam-authenticator && \
sudo mv ./aws-iam-authenticator /usr/local/bin
running now to check
null_resource.make_install (local-exec): /usr/bin/kubectl
this does not work, though:
external_packages_install_path = "/usr/local/bin"
i’m assuming it is a sudo issue
install_aws_cli=true
if [[ "$install_aws_cli" = true ]] ; then
echo 'Installing AWS CLI...'
mkdir -p /usr/local/bin
cd /usr/local/bin
curl -LO https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
unzip ./awscli-bundle.zip
./awscli-bundle/install -i /usr/local/bin
export PATH=$PATH:/usr/local/bin:/usr/local/bin/bin
echo 'Installed AWS CLI'
which aws
aws --version
fi
not sure what’s working and what’s not. I spent just a few days on it, but there were a lot of issues using paths other than the terraform user folder (permissions etc.)
Error: Error running command ' set -e
install_aws_cli=true
if [[ "$install_aws_cli" = true ]] ; then
echo 'Installing AWS CLI...'
mkdir -p .terraform/modules/eks_cluster/.terraform/bin
cd .terraform/modules/eks_cluster/.terraform/bin
curl -LO https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
unzip ./awscli-bundle.zip
./awscli-bundle/install -i .terraform/modules/eks_cluster/.terraform/bin
export PATH=$PATH:.terraform/modules/eks_cluster/.terraform/bin:.terraform/modules/eks_cluster/.terraform/bin/bin
echo 'Installed AWS CLI'
which aws
aws --version
fi
install_kubectl=true
if [[ "$install_kubectl" = true ]] ; then
echo 'Installing kubectl...'
mkdir -p .terraform/modules/eks_cluster/.terraform/bin
cd .terraform/modules/eks_cluster/.terraform/bin
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
export PATH=$PATH:.terraform/modules/eks_cluster/.terraform/bin
echo 'Installed kubectl'
which kubectl
fi
aws_cli_assume_role_arn=
aws_cli_assume_role_session_name=
if [[ -n "$aws_cli_assume_role_arn" && -n "$aws_cli_assume_role_session_name" ]] ; then
echo 'Assuming role ...'
mkdir -p .terraform/modules/eks_cluster/.terraform/bin
cd .terraform/modules/eks_cluster/.terraform/bin
curl -L https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -o jq
chmod +x ./jq
source <(aws --output json sts assume-role --role-arn "$aws_cli_assume_role_arn" --role-session-name "$aws_cli_assume_role_session_name" | jq -r '.Credentials | @sh "export AWS_SESSION_TOKEN=\(.SessionToken)\nexport AWS_ACCESS_KEY_ID=\(.AccessKeyId)\nexport AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey) "')
echo 'Assumed role '
fi
echo 'Applying Auth ConfigMap with kubectl...'
aws eks update-kubeconfig --name=xolv-prod-tools-cluster --region=us-west-2 --kubeconfig=~/.kube/config
kubectl version --kubeconfig ~/.kube/config
kubectl apply -f .terraform/modules/eks_cluster/configmap-auth.yaml --kubeconfig ~/.kube/config
echo 'Applied Auth ConfigMap with kubectl'
': exit status 1. Output: Installing AWS CLI...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 12.3M 100 12.3M 0 0 72.0M 0 --:--:-- --:--:-- --:--:-- 72.0M
Archive: ./awscli-bundle.zip
inflating: awscli-bundle/install
inflating: awscli-bundle/packages/colorama-0.3.9.tar.gz
inflating: awscli-bundle/packages/ordereddict-1.1.tar.gz
inflating: awscli-bundle/packages/argparse-1.2.1.tar.gz
inflating: awscli-bundle/packages/pyasn1-0.4.8.tar.gz
inflating: awscli-bundle/packages/PyYAML-3.13.tar.gz
inflating: awscli-bundle/packages/docutils-0.15.2.tar.gz
inflating: awscli-bundle/packages/PyYAML-5.1.2.tar.gz
inflating: awscli-bundle/packages/urllib3-1.25.7.tar.gz
inflating: awscli-bundle/packages/python-dateutil-2.8.0.tar.gz
inflating: awscli-bundle/packages/awscli-1.16.302.tar.gz
inflating: awscli-bundle/packages/python-dateutil-2.6.1.tar.gz
inflating: awscli-bundle/packages/colorama-0.4.1.tar.gz
inflating: awscli-bundle/packages/jmespath-0.9.4.tar.gz
inflating: awscli-bundle/packages/urllib3-1.22.tar.gz
inflating: awscli-bundle/packages/futures-3.3.0.tar.gz
inflating: awscli-bundle/packages/s3transfer-0.2.1.tar.gz
inflating: awscli-bundle/packages/rsa-3.4.2.tar.gz
inflating: awscli-bundle/packages/botocore-1.13.38.tar.gz
inflating: awscli-bundle/packages/six-1.13.0.tar.gz
inflating: awscli-bundle/packages/simplejson-3.3.0.tar.gz
inflating: awscli-bundle/packages/virtualenv-15.1.0.tar.gz
inflating: awscli-bundle/packages/setup/setuptools_scm-1.15.7.tar.gz
Running cmd: /usr/bin/python virtualenv.py --no-download --python /usr/bin/python .terraform/modules/eks_cluster/.terraform/bin
Running cmd: .terraform/modules/eks_cluster/.terraform/bin/bin/pip install --no-cache-dir --no-index --find-links file:///terraform/.terraform/modules/eks_cluster/.terraform/bin/awscli-bundle/packages/setup setuptools_scm-1.15.7.tar.gz
Traceback (most recent call last):
File "./awscli-bundle/install", line 162, in <module>
main()
File "./awscli-bundle/install", line 151, in main
pip_install_packages(opts.install_dir)
File "./awscli-bundle/install", line 114, in pip_install_packages
pip_script, setup_requires_dir, package
File "./awscli-bundle/install", line 49, in run
p.returncode, cmd, stdout + stderr))
__main__.BadRCError: Bad rc (127) for cmd '.terraform/modules/eks_cluster/.terraform/bin/bin/pip install --no-cache-dir --no-index --find-links file:///terraform/.terraform/modules/eks_cluster/.terraform/bin/awscli-bundle/packages/setup setuptools_scm-1.15.7.tar.gz': /bin/sh: 1: .terraform/modules/eks_cluster/.terraform/bin/bin/pip: not found
aws cli won’t install w/ the default path (from the module)
module "eks_cluster" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=0.13.0"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = module.xoot.default_tags
apply_config_map_aws_auth = true
install_aws_cli = true
install_kubectl = true
kubeconfig_path = local.kubeconfig_filename
allowed_cidr_blocks = var.allowed_cidr_blocks_cluster
allowed_security_groups = var.allowed_security_groups_cluster
kubernetes_version = var.kubernetes_version
region = var.region
subnet_ids = module.subnets.public_subnet_ids
workers_role_arns = [module.eks_node_group_default.eks_node_group_role_arn]
workers_security_group_ids = []
vpc_id = module.vpc.vpc_id
...
}
looks like some bad paths in the pip install command
https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/auth.tf#L108 already has /bin in the path and it appends it again
you’re already cd’d into the local install path, then pass -i with the same folder you’re in: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/auth.tf#L107
yeah…at a bit of a loss. I can’t use the internal tools due to the above and I can’t use my own installed tools for {not sure why, but insert reason here}.
I confirmed the tools are installed and work:
null_resource.make_install (local-exec): /usr/bin/kubectl
null_resource.make_install (local-exec): /usr/bin/aws
null_resource.make_install (local-exec): aws-cli/1.14.44 Python/3.6.8 Linux/4.15.0-1044-aws botocore/1.8.48
My other make commands work with those same tools, but the module’s use of these tools does not.
i haven’t confirmed, due to these issues, but it seems a bit cart-before-horse on the --kubeconfig param pointing to kubeconfig_path, since the file won’t exist yet because the cluster isn’t up, so I can’t write it.
gotta jet, but i’ll dig in more tomorrow
The module works on TF cloud. I’ll send you the variables I used later today
this is the code that works on TF Cloud:
provider "aws" {
region = var.region
assume_role {
role_arn = var.aws_assume_role_arn
}
}
module "label" {
source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0"
namespace = var.namespace
name = var.name
stage = var.stage
delimiter = var.delimiter
attributes = compact(concat(var.attributes, list("cluster")))
tags = var.tags
}
locals {
# The usage of the specific kubernetes.io/cluster/* resource tags below are required
# for EKS and Kubernetes to discover and manage networking resources
# https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#base-vpc-networking
tags = merge(module.label.tags, map("kubernetes.io/cluster/${module.label.id}", "shared"))
eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
}
module "vpc" {
source = "git::https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.8.1"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
cidr_block = var.vpc_cidr_block
tags = local.tags
}
module "subnets" {
source = "git::https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.18.1"
availability_zones = var.availability_zones
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = var.nat_gateway_enabled
nat_instance_enabled = var.nat_instance_enabled
tags = local.tags
}
module "eks_workers" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-workers.git?ref=tags/0.11.0"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = var.tags
instance_type = var.instance_type
eks_worker_ami_name_filter = local.eks_worker_ami_name_filter
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
associate_public_ip_address = var.associate_public_ip_address
health_check_type = var.health_check_type
min_size = var.min_size
max_size = var.max_size
wait_for_capacity_timeout = var.wait_for_capacity_timeout
cluster_name = module.label.id
cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
cluster_security_group_id = module.eks_cluster.security_group_id
# Auto-scaling policies and CloudWatch metric alarms
autoscaling_policies_enabled = var.autoscaling_policies_enabled
cpu_utilization_high_threshold_percent = var.cpu_utilization_high_threshold_percent
cpu_utilization_low_threshold_percent = var.cpu_utilization_low_threshold_percent
}
module "eks_cluster" {
source = "git::https://github.com/cloudposse/terraform-aws-eks-cluster.git?ref=tags/0.13.0"
namespace = var.namespace
stage = var.stage
name = var.name
attributes = var.attributes
tags = var.tags
region = var.region
vpc_id = module.vpc.vpc_id
subnet_ids = module.subnets.public_subnet_ids
kubernetes_version = var.kubernetes_version
kubeconfig_path = var.kubeconfig_path
local_exec_interpreter = var.local_exec_interpreter
configmap_auth_template_file = var.configmap_auth_template_file
configmap_auth_file = var.configmap_auth_file
oidc_provider_enabled = var.oidc_provider_enabled
install_aws_cli = var.install_aws_cli
install_kubectl = var.install_kubectl
kubectl_version = var.kubectl_version
jq_version = var.jq_version
external_packages_install_path = var.external_packages_install_path
aws_eks_update_kubeconfig_additional_arguments = var.aws_eks_update_kubeconfig_additional_arguments
aws_cli_assume_role_arn = var.aws_cli_assume_role_arn != "" ? var.aws_cli_assume_role_arn : var.aws_assume_role_arn
aws_cli_assume_role_session_name = var.aws_cli_assume_role_session_name != "" ? var.aws_cli_assume_role_session_name : module.label.id
workers_role_arns = [module.eks_workers.workers_role_arn]
workers_security_group_ids = [module.eks_workers.security_group_id]
}
these are the variables and ENV variables for the EKS workspace (we have a TF generator that parses the config YAML and provisions TF Cloud workspaces):
environments:
- name: "testing"
env_vars:
AWS_ACCOUNT_ID: "xxxxxxxxx"
vars:
stage: "testing"
region: "us-east-2"
aws_assume_role_arn: "arn:aws:iam::xxxxxxxx:role/OrganizationAccountAccessRole"
workspaces:
- name: "eks"
enabled: true
repo_name: "cloudposse/terraform-cloud-reference-architectures"
repo_branch: "master"
repo_working_directory: "blueprints/eks"
env_vars:
CONFIRM_DESTROY: 1
hcl_vars:
availability_zones: ["us-east-2a", "us-east-2b"]
vars:
name: "eks"
enabled: true
vpc_cidr_block: "172.16.0.0/16"
instance_type: "t2.small"
kubernetes_version: "1.14"
associate_public_ip_address: true
max_size: 3
min_size: 2
autoscaling_policies_enabled: true
nat_gateway_enabled: false
nat_instance_enabled: false
configmap_auth_file: "/home/terraform/.terraform/configmap-auth.yaml"
kubeconfig_path: "/home/terraform/.kube/config"
external_packages_install_path: "/home/terraform/.terraform/bin"
@johncblandii ^
https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/auth.tf#L108 already has /bin in the path and it appends it again
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
the first bin is what we just selected (see variables above, external_packages_install_path), it could be anything (your choice). The second bin is what the AWS CLI creates automatically and puts its scripts in there. So it just happens to be bin/bin
I’m reading on my phone, but that looks like what I’m using except the path. I’ll try that path tomorrow
Any thoughts as to why your commands don’t work with my installed packages?
- the installed packages are not in PATH
- or no permissions if they are downloaded into a different location (not inside the /home/terraform/ user folder)
They must be because I do no PATH tweaking.
I’ll find out for sure
ok…running it real quick then will be back later to do more
@Andriy Knysh (Cloud Posse) what are your thoughts on the cart before horse nature of needing a kubeconfig in the module before one is written because we need the results from the module?
Why before? We provision the cluster and it has kubeconfig. Then we read it from the cluster to execute other commands
Did I miss where you were writing the file somewhere inside of the module? I didn’t see that so it looks like it’s reading from a file that doesn’t exist yet.
So it looks like this works, but I haven't verified whether I can use the tools for my other commands (got an error for aws eks not being available).
apply_config_map_aws_auth = true
configmap_auth_file = "/home/terraform/.terraform/configmap-auth.yaml"
external_packages_install_path = "/home/terraform/.terraform/bin"
install_aws_cli = true
install_kubectl = true
kubeconfig_path = local.kubeconfig_filename
aws eks update-kubeconfig reads kubeconfig from the cluster and saves it to the file system at --kubeconfig=${var.kubeconfig_path}
but I haven't verified whether I can use the tools for my other commands (got an error for aws eks not being available).
once the shell that executes commands in https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/auth.tf exits, you lose all env vars exported there (so PATH will be reverted to default)
in other shells, you have to call the binaries by prefixing them with /home/terraform/.terraform/bin, e.g. /home/terraform/.terraform/bin/bin/aws
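The env-var scoping behavior described above can be reproduced in plain shell — a minimal sketch (the install path here is illustrative, not the module's actual path):

```shell
#!/bin/sh
# PATH exported inside a subshell is lost when that subshell exits,
# which is why a later provisioner cannot find the installed binaries
# unless it calls them by their full path.
(
  export PATH="/tmp/demo-install/bin:$PATH"   # illustrative install dir
  echo "inside subshell: ${PATH%%:*}"
)
echo "after subshell:  ${PATH%%:*}"
```

The first line prints the temporary install dir as the head of PATH; after the subshell exits, PATH reverts to its previous value.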
aws eks update-kubeconfig reads kubeconfig from the cluster and saves it to the file system at --kubeconfig=${var.kubeconfig_path}
AHHHHHHHHHH…ok. That was what I was completely missing, man. I knew a write had to happen somewhere.
anyone have any prior art for a terraform provider that downloads binary dependencies?
I don’t know of anything that downloads the cli from a remote source and installs it (which is what I assume is the intended behavior you’re describing), but https://github.com/terra-farm/terraform-provider-virtualbox leverages https://github.com/terra-farm/go-virtualbox which just checks if VBoxManage is installed locally on your machine.
VirtualBox provider for Terraform. Contribute to terra-farm/terraform-provider-virtualbox development by creating an account on GitHub.
VirtualBox wrappers in Go. Contribute to terra-farm/go-virtualbox development by creating an account on GitHub.
go-getter is similar, in that it requires git to be present and in the path
thanks @Arthur Burkart and @loren!
@Andriy Knysh (Cloud Posse)
I think @Andriy Knysh (Cloud Posse) also mentioned using go-getter
e.g. imagine a terraform provider that wrapped a cli tool
that provider would depend on the cli tool in order to function
2019-12-10
Terraform module to provision an EKS Fargate Profile - cloudposse/terraform-aws-eks-fargate-profile
@Andriy Knysh (Cloud Posse) has just finished implementing support for EKS Fargate node pools!
Terratest provisions a Node Group with two worker nodes AND a Fargate Profile for the k8s default namespace. Then it creates a stub k8s deployment (using the k8s go-client) in the default namespace in order for Fargate to add a Fargate node to the cluster to provision the deployment. Then it waits for all three nodes to join the cluster, then deletes the k8s deployment, and finally destroys all AWS resources with terraform destroy
Creating Kubernetes deployment 'demo-deployment' in the 'default' namespace...
Created Kubernetes deployment 'demo-deployment'
Waiting for worker nodes to join the EKS cluster...
Node ip-172-16-97-1.us-east-2.compute.internal has joined the EKS cluster at 2019-12-10 03:49:29 +0000 UTC
Node ip-172-16-137-51.us-east-2.compute.internal has joined the EKS cluster at 2019-12-10 03:49:37 +0000 UTC
Node fargate-ip-172-16-53-87.us-east-2.compute.internal has joined the EKS cluster at 2019-12-10 03:51:06 +0000 UTC
All nodes have joined the EKS cluster
Listing deployments in namespace 'default':
* Deployment 'demo-deployment' has 1 replica(s)
Deleting deployment 'demo-deployment' ...
Deleted deployment 'demo-deployment'
(heh, slack has broken markdown)
yea
This is slick testing!
We’ve been using terratest also, nice work here!
These slides are really good, https://www.infoq.com/presentations/automated-testing-terraform-docker-packer
Yevgeniy Brikman talks about how to write automated tests for infrastructure code, including the code written for use with tools such as Terraform, Docker, Packer, and Kubernetes. Topics covered include: unit tests, integration tests, end-to-end tests, dependency injection, test parallelism, retries and error handling, static analysis, property testing and CI / CD for infrastructure code.
coming for this module to test next.
Do you guys see any problems with adding
lifecycle {
create_before_destroy = true
}
to : https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L90
any objections if I send a PR?
PRs always welcome
what’s the purpose of having that?
well, I gave it more thought and I do not think it is needed
the idea behind it was to always create new nodes first before deleting
in case one was terminated by hand but created with TF
for example
or, in a cluster with size 4, the writer being destroyed by TF when the size is set to 2
2019-12-11
Hey guys, please advise on the structure of this value so that I could provide an additional S3 bucket policy: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/a05cffd71b6b5ea8d9a881e20c8b41d038dc9167/variables.tf#L66 Whatever I provide, it complains about this or that character somewhere. Can’t find an example of what it actually expects.
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
Examples of typical use cases for Amazon S3 bucket policies.
Hey @maarten. Thanks. Applied in this form and all works like a charm. I was thinking that it might expect TF style like described here: https://www.terraform.io/docs/providers/aws/d/iam_policy_document.html And not official one for AWS what you sent.
Nevertheless it’s fixed already. Thanks a lot!
Generates an IAM policy document in JSON format
2019-12-12
Hi, I have a module that isn’t playing well in 0.12 specifically because it previously referenced private_ip[0] to name an aws_route53_record. I previously used replace() to change the . to -
Hey @NVMeÐÐi I answered your question in Gitter where you asked it earlier.
Oh wow, I didn’t see that alert pop up
THANK YOU AGAIN!
I wouldn’t say I’m too advanced with Terraform, still learning every day.
0.11 to 0.12 upgrades are kicking my butt.
Yeah, it’s just an oddity. You don’t usually expect the types to change underneath your feet.
If you haven’t read the upgrade guide, I strongly recommend it.
• https://www.terraform.io/upgrade-guides/0-12.html#referring-to-list-variables
• https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md#0120-may-22-2019
Upgrading to Terraform v0.12
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
It seems to cover all the gotchas
My question is this… Is there any way to change a tuple to a string in 0.12? (Interpolation doesn’t seem to do it anymore.)
Example below, I get the tuple error for the name, but records is fine.
resource "aws_route53_record" "this" {
name = "ip-${replace(module.this_mybox.private_ip[0], ".", "-")}"
type = "A"
zone_id = "${var.dns_zone_id}"
records = "${module.this_mybox.private_ip[0]}"
ttl = "600"
}
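For reference, a 0.12-style rewrite of the snippet above might look like this — a sketch only, assuming the module output private_ip is a list of strings; in 0.12 the interpolation wrappers are unnecessary and records must be a list:

```hcl
resource "aws_route53_record" "this" {
  name    = "ip-${replace(module.this_mybox.private_ip[0], ".", "-")}"
  type    = "A"
  zone_id = var.dns_zone_id
  records = [module.this_mybox.private_ip[0]] # records expects a list of strings
  ttl     = 600
}
```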
Oof, got it resolved. Artburkart got me.
2019-12-13
It finally happened… https://github.com/terraform-aws-modules/terraform-aws-eks/issues/635
A general question for users and contributors of this module My feeling is that complexity getting too high and quality is suffering somewhat. We are squeezing a lot of features in a single modul…
It was only a matter of time…
I think the approach we took with our EKS modules will continue to scale better.
Hey folks, any guidance on this issue I’m running into with the elasticache-memcached module? https://github.com/cloudposse/terraform-aws-elasticache-memcached/issues/8
Following the documentation in the Terraform registry results in an error. Attempted Usage module "elasticache-memcached" { source = "cloudposse/elasticache-memcached/aws" versi…
unfortunately that module was not converted to TF 0.12 yet. We have automated tests for all modules that are converted to 0.12. The tests provision the module on real AWS account. We have not tested that module for a while, will have to look into that (and convert to 0.12)
Any Terratest users? Tried to find a better channel (maybe #terragrunt would have been better here) - but I’m wondering, how can I manage which Terraform version I would be using for doing the runs…? Tried to read the module’s source code also but did not find anything obvious… only TerraformBin variable but I’d guess that does not help crazy much…
I haven’t used Terratest, but given these lines of code (TerraformBinary) and the documentation in their basic terraform example, https://github.com/gruntwork-io/terratest/tree/master/examples/terraform-basic-example (“Install Terraform and make sure it’s on your PATH.”), it appears Terratest runs the terraform binary that is in your PATH.
Terratest is a Go library that makes it easier to write automated tests for your infrastructure code. - gruntwork-io/terratest
This is the function that runs the Terraform command https://github.com/gruntwork-io/terratest/blob/9c546f282a74359d7d17697a504268b190bb5e35/modules/terraform/cmd.go#L48-L63
To manage the terraform versions, I would use a tool such as tfenv.
@Joe Presley ok, thanks. I’m currently plugging our Terratest-orchestrated Terraform tests into GitHub Actions, and I was surprised when I was able to do a Terratest run with just installing the module dependencies and running “go test” - no need to install TF in PATH… Hence I started doubting what’s happening
@Joe Presley I was also using tfenv before locally, but I found tfswitch more… ux-friendly maybe
Do the GitHub Actions use HashiCorp’s Terraform GitHub Action? https://www.terraform.io/docs/github-actions/getting-started.html I’ll have to check out tfswitch.
Terraform by HashiCorp
@Joe Presley I could use those, yes… And I’d need to find a balance here, as I’d need to pull dependent TF modules from TF Enterprise’s Module Registry - meaning I’d still need to configure ~/.terraformrc somehow…
I might build a fork of the official Terraform Actions which just includes an additional command, “test” (which executes a “make test”). Might be the easiest in the end…
Sent my question to the Terratest team: https://github.com/gruntwork-io/terratest/issues/419 - let’s see…
Hi, I tried to find information on this - checked also the source code of Terratest - but I did not understand how to actually manage which version of Terraform is used to run the tests. We can use…
2019-12-16
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Dec 25, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
chatting in general re node groups vs separate ASGs per zone for stateful apps/services in EKS kube; the convo being more appropriate here…
since using TF to manage. I was refactoring to support PVCs and dealing with some very active operators which are running tons of node shells that I didn’t know used nodeSelector when templatizing the helm chart and applying the manifests.
so also curious how folks are managing stateful apps/services like operators, kafka, etc. that set nodeSelector; it seems like I need to find a dynamic solution to keep the nodeSelectors updated as scaling events occur, etc.
2019-12-17
btw, I don’t know why @maarten didn’t share this yet, but this looks really rad: https://github.com/Flaconi/terraform-aws-bastion-ssm-iam
AWS Bastion server which can reside in the private subnet utilizing Systems Manager Sessions - Flaconi/terraform-aws-bastion-ssm-iam
Thanks @Erik Osterman (Cloud Posse), maybe we can have an #I-created-this-take-a-look-announce or something
Yea, that’s a good suggestion!
I just created #community-projects
@antonbabenko I saw your post wishing for an “AWS Bastion” service. Have you seen this interesting bastion developed by @maarten?
Hi guys! I saw that one (and 10+ other similar projects). The post was about “AWS should take the undifferentiated load from users and make a proper service for us to just use without asking us to run and manage it”. In one of my current project I am going to use Teleport, because I can’t use “ssm session manager” and “ec2-instance-connect” by AWS.
Teleport is rad, but definitely a lot of work.
Does anyone know a good Teleport module which I can use as a base? (Skyscrapers have one)
We deploy teleport on kops/kubernetes
but have a related module for backing services
Gravitational Teleport backing services (S3, DynamoDB) - cloudposse/terraform-aws-teleport-storage
I need teleport in the first place to set up an RKE cluster on EC2
Not sure I follow…
Do you mean you want to deploy the core teleport architecture on dedicated EC2 VMs?
(if so, it’s not a bad idea)
Brb :)
Bastion server with zero outside ports open
Hi, so I’m using this option: https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L163 to create cluster CNAMEs to the cluster endpoint, but my apps are not able to resolve the CNAME records, so I’m guessing maybe this is a problem with the way that Aurora cluster endpoints work on Route53? If the default cluster endpoints are CNAMEs, then my CNAMEs will point to a CNAME, which then points to an A record that is the RDS instance endpoint, so maybe Java has an issue with following multiple CNAMEs or something?
I never had this problem before, but I never pointed CNAMEs at the cluster endpoint; I was wondering whether to maybe switch to custom endpoints
What happens when you dig it?
Does it resolve correctly?
we use CNAMEs with Aurora clusters all the time with no issues
works with any SQL client. Also works with Metabase, which is Java-based
in your app, try to use the cluster endpoint first to check if it’s working
it could be any other issue (e.g. SSL etc.)
in our app the endpoint works but the CNAME does not
from the instance that runs the app I can resolve it no problem
; ANSWER SECTION:
team-staging-data-reader-us-east-2.staging-ds.sonatype.com. 300 IN CNAME team-staging-us-east-2-data.cluster-ro-xxxxxxxx.us-east-2.rds.amazonaws.com.
team-staging-us-east-2-data.cluster-ro-xxxxxxxx.us-east-2.rds.amazonaws.com. 1 IN CNAME team-staging-us-east-2-data-2.xxxxxxxx.us-east-2.rds.amazonaws.com.
team-staging-us-east-2-data-2.xxxxxxxx.us-east-2.rds.amazonaws.com. 5 IN A 10.10.10.5
but is a CNAME to a CNAME to an A record
so I wonder if dropwizard does not like that somehow
Ya, maybe it’s trying to use SSL? That won’t work with the CNAME
we are not using SSL yet
@channel, need help with the below module for ALB. This module is a combined one which creates the ALB, target group, listener, and listener rule, and attaches the certificate. However, as you may see, we are passing a list of elements to this module, which is causing the module to break.
It breaks if I remove any element from the list: all the elements after the removed one get deleted and recreated.
Please suggest if anyone has faced such issues and how they solved them. This is happening to any resource that is in a list. Could be because terraform refresh doesn’t try to re-map the index of elements if anything is missing, deleted, or added. Appreciate your help in advance
@jha.bikal see if this issue discusses what you’re seeing, and the linked issue also that describes how to use for_each in terraform 0.12 to address it… https://github.com/hashicorp/terraform/issues/14275#issuecomment-361408631
We have a lot of AWS Route53 zones which are setup in exactly the same way. As such, we are using count and a list variable to manage these. The code basically looks like this: variable "zone_…
Thanks @loren. Yes, they have touched the exact pain point I’m facing now.
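The index-shifting behavior the linked issue describes comes from count keying resources by position; with 0.12’s for_each, resources are keyed by a stable name instead, so removing one element no longer recreates the rest. A minimal sketch (variable and resource names hypothetical, not from the module above):

```hcl
variable "vpc_id" {
  type = string
}

variable "target_groups" {
  type    = set(string)
  default = ["api", "web", "admin"]
}

resource "aws_lb_target_group" "this" {
  for_each = var.target_groups

  # Instances are addressed by key, e.g. aws_lb_target_group.this["api"],
  # so removing "web" touches only that one resource.
  name     = each.key
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}
```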
Any idea if we can use a different terraform version for different workspaces (on the same server)?
2019-12-18
hey guys, I’ve started recently with terraform-aws-rds (https://github.com/cloudposse/terraform-aws-rds.git?ref=tags/0.9.3) and terraform-aws-vpc (https://github.com/cloudposse/terraform-aws-vpc.git?ref=tags/0.7.0), yet it seems they are completely incompatible.
As soon as I run terraform plan, it gives me errors like the ones below, which require me to go and tweak the generated .terraform module files in order for them to work together.
Error: Unsupported block type
on .terraform/modules/rds_instance/main.tf line 71, in resource "random_string" "default":
71: keepers {
Blocks of type "keepers" are not expected here. Did you mean to define
argument "keepers"? If so, use the equals sign to assign it a value.
Error: Incorrect attribute value type
on .terraform/modules/rds_instance/main.tf line 104, in resource "aws_db_subnet_group" "default":
104: subnet_ids = ["${var.subnet_ids}"]
Inappropriate value for attribute "subnet_ids": element 0: string required.
Error: Incorrect attribute value type
on .terraform/modules/rds_instance/main.tf line 134, in resource "aws_security_group_rule" "ingress_cidr_blocks":
134: cidr_blocks = ["${var.allowed_cidr_blocks}"]
Inappropriate value for attribute "cidr_blocks": element 0: string required.
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc
Any idea if that is the norm? Are there some best practices to avoid having such incompatibilities?
I don’t think it’s OK for me to tweak the generated .terraform modules just to make those work
looks like you are mixing up TF 0.11 and 0.12. cidr_blocks = ["${var.allowed_cidr_blocks}"] is 0.11 syntax
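In 0.12 the quotes-and-brackets wrapping is no longer needed when the variable is already a list — a minimal before/after sketch:

```hcl
# TF 0.11 style: interpolation string wrapped in a list
cidr_blocks = ["${var.allowed_cidr_blocks}"]

# TF 0.12 style: pass the list variable directly
cidr_blocks = var.allowed_cidr_blocks
```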
take a look at the working example here https://github.com/cloudposse/terraform-aws-rds/blob/master/examples/complete/main.tf
Terratest that deploys the example on AWS https://github.com/cloudposse/terraform-aws-rds/blob/master/test/src/examples_complete_test.go
for TF 0.12, use the latest versions of the modules (that’s what the tests usually do)
for TF 0.11, find the prior releases for 0.11 and use all modules for 0.11
https://github.com/cloudposse/terraform-aws-rds?ref=tags/0.9.3 is 0.11 version
I’m using Terraform v0.12.17 locally now. Is there a way to check for each module release if it’s on Terraform 0.12 or 0.11?
there is no automatic way to check it
look into examples/complete of each module
they should use the latest TF 0.12 version (or almost latest)
also, look into release, e.g. https://github.com/cloudposse/terraform-aws-rds/releases/tag/0.10.0
it will say “Convert to TF 0.12”. All tags after that are 0.12
sometimes we make new releases for 0.11, but the tag will be smaller than the first 0.12 release, e.g. https://github.com/cloudposse/terraform-aws-rds/releases/tag/0.9.3 is a bug fix release for 0.11 version
alright, thanks a lot, will check now
I’m curious what CloudPosse uses as a rule of thumb for when to have a module call another module vs defining the terraform resource. For instance the aws-ecs-alb-service-task module doesn’t reference the aws-iam-role module for creating its roles. https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf https://github.com/cloudposse/terraform-aws-iam-role
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
A Terraform module that creates IAM role with provided JSON IAM polices documents. - cloudposse/terraform-aws-iam-role
I think the iam role module was created much, much later.
In our case, we use the IAM role module to provision roles needed by pods in kubernetes
and use it more in a standalone context
So if the ecs-alb-service-task was written from scratch today, it would likely reference the iam-role module?
So in < 0.12, using modules like this could very often lead to “count of … could not be computed” https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/
post 0.12, we encounter that much less often, so perhaps.
the main reason we have the iam role module is to enforce consistent names
however, if we’re just provisioning some roles in another module, we probably wouldn’t use this module because we’re already using the null label module for that everywhere else
i don’t think this is the “rule of thumb” you were looking for though
another example: https://github.com/cloudposse/terraform-aws-ssm-parameter-store
Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store
We wanted to use this module all over the place, but ended up not being able to because of the count-of problems. Thus, we just ended up using the raw resources everywhere.
Thanks Erik. I appreciate it. So I guess you could say the rule of thumb is to reference other modules unless it creates more problems than benefits.
That’s a good paraphrasing.
Is there a way to release the lock on the terraform state file in DynamoDB through the terraform CLI?
have you tried terraform force-unlock ?
Just tried it, thanks - it worked
Just need the lock id
But if you give it the wrong one, it will tell you the existing one
Hello all, in the last 24 hours all of our terraform-null-label modules started failing with the following error; anyone have any ideas?
Error: Failed to download module
Could not download module "s3_bizrewards_dev_label" (s3_bizrewards.tf:31)
source code from
"<https://api.github.com/repos/cloudposse/terraform-null-label/tarball/0.16.0//*?archive=tar.gz>":
Error opening a gzip reader for
/var/folders/1d/gpvdrwrd0y1_d0jv64w76j645nyvdq/T/getter001152442/archive: EOF.
@Matt Law how do you load the module? In terraform (source=…), or something else? That link you posted is 404, so if it was working before, then something changed in GitHub
Hi, I’ve been calling it like this:
module "sqs_bizrewards_dev_label" {
source = "cloudposse/label/null"
version = "0.16.0"
but changed to this, which works, but all of our labels are like the above.
source = "git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.16.0>"
hmm, never seen that error before. Looks like something is going on between the terraform registry and GitHub API
`source = "cloudposse/label/null"`
maybe there was something broken with the terraform registry?
(we still use the git-style URLs everywhere)
…anyways, we have no control or influence over how the registry URLs work. they basically proxy/redirect to github.
We experienced a similar issue
$ terraform --version
Terraform v0.12.18
Terraform Version Terraform v0.12.18 Terraform Configuration Files module "publish-works" { source = "cybojenix/publish/testing" version = "0.1.0" } module "label…
some more info on this: HashiCorp said they are investigating a “random issue” pertaining to this. Also, when I used 0.12.18 it worked OK; versions 0.12.15 through 0.12.17 seem to have the problem. Thanks for all the replies.
thanks for the follow up!
Hi everyone,
Could you please check the following issue & let me know what’s wrong here.
Error: Missing resource instance key

on terraform-aws-codebuild/main.tf line 144, in data "aws_iam_policy_document" "permissions_cache_bucket": 144: "${aws_s3_bucket.cache_bucket.arn}",

Because aws_s3_bucket.cache_bucket has "count" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use: aws_s3_bucket.cache_bucket[count.index]

Error: Missing resource instance key

on terraform-aws-codebuild/main.tf line 145, in data "aws_iam_policy_document" "permissions_cache_bucket": 145: "${aws_s3_bucket.cache_bucket.arn}/*",

Because aws_s3_bucket.cache_bucket has "count" set, its attributes must be accessed on specific instances.
For example, to correlate with indices of a referring resource, use: aws_s3_bucket.cache_bucket[count.index]
Hi @foqal Thanks for the update! Let me take a look into this. Basically I would like to achieve the following; let me know if you can help me out with this: My stack is AWS CodePipeline (GitHub integration) + CodeBuild (lambda deploy) packed in Terraform. A commit goes to the AWS Lambda + API Gateway dev environment, and a commit in the prod branch goes to the AWS Lambda + API Gateway prod environment
2019-12-19
looks like you are using the old TF 0.11 version of the module, which has that issue (newer versions of terraform complain about it, but did not before)
take a look at the new 0.12 version (where the issue is fixed) https://github.com/cloudposse/terraform-aws-codebuild
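The usual 0.12-safe pattern for referencing a count-ed resource from a policy document is a splat plus join — a sketch along these lines (statement actions are illustrative; resource names taken from the error above), valid as long as the bucket is actually created:

```hcl
data "aws_iam_policy_document" "permissions_cache_bucket" {
  statement {
    actions = ["s3:*"] # illustrative; the real module is more restrictive

    resources = [
      # splat + join handles the count-ed resource without indexing errors
      join("", aws_s3_bucket.cache_bucket.*.arn),
      "${join("", aws_s3_bucket.cache_bucket.*.arn)}/*",
    ]
  }
}
```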
Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild
Terraform Module for CI/CD with AWS Code Pipeline and Code Build - cloudposse/terraform-aws-cicd
Hey there!
The iam_access_key doc (https://www.terraform.io/docs/providers/aws/r/iam_access_key.html#secret) says for the secret attribute:
secret - The secret access key. Note that this will be written to the state file. If you use this, please protect your backend state file judiciously. Alternatively, you may supply a pgp_key instead, which will prevent the secret from being stored in plaintext, at the cost of preventing the use of the secret key in automation.
Does the “If you use this” part mean using the resource as a whole, or just the attribute? Is there a way to prevent storing the secret in the state file? terraform apply knows it’s a sensitive value:
```
$ terraform apply
An execution plan has been generated and is shown below. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.elasticprop.aws_iam_access_key.elasticprop_access_key will be created
  + resource "aws_iam_access_key" "elasticprop_access_key" {
      + encrypted_secret  = (known after apply)
      + id                = (known after apply)
      + key_fingerprint   = (known after apply)
      + secret            = (sensitive value)
      + ses_smtp_password = (sensitive value)
      …
```
…using terraform
0.12.18
Provides an IAM access key. This is a set of credentials that allow API requests to be made as an IAM user.
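To my understanding, the warning applies to the secret attribute, and supplying pgp_key is the documented way to keep the plaintext secret out of state — a sketch (user name and keybase handle hypothetical):

```hcl
resource "aws_iam_access_key" "example" {
  user    = "some-iam-user"         # hypothetical IAM user name
  pgp_key = "keybase:some_username" # or a base64-encoded PGP public key

  # With pgp_key set, only encrypted_secret is stored in state;
  # decrypt it locally with your PGP private key.
}

output "encrypted_secret" {
  value = aws_iam_access_key.example.encrypted_secret
}
```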
Hello, can resources be generated from an interpolated variable? or maybe I should use templates? for example, will this work:
resource "azurerm_managed_disk" "Disk${var.ServerName}" {}
probably not
but you don’t need to do that
just name it for example default
and use count
or for_each
then select attributes of all resources from the list, e.g. name = azurerm_managed_disk.default[count.index]
(not real, just an example)
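Something like this sketch, with a static resource label and for_each doing the multiplication (all names and values here are placeholders):

```hcl
# Illustrative only: one statically named resource block, one instance
# per server name via for_each.
variable "server_names" {
  type    = list(string)
  default = ["web1", "web2"]
}

resource "azurerm_managed_disk" "default" {
  for_each             = toset(var.server_names)
  name                 = "disk-${each.key}"
  location             = "westeurope"
  resource_group_name  = "example-rg"
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = 32
}

# Reference a single instance elsewhere:
#   azurerm_managed_disk.default["web1"].id
```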
ok I see, I will fix that ( taking over a project and some bad naming usage were implemented in object )
2019-12-20
This PR is meant as a proposal for adding Terraform 0.12 support. It replaces the current AST parsing with the hashicorp/terraform-config-inspect module. (I picked up golang this weekend to make t…
@cytopia
Rejoice!!!
Pr merged
finally!!!!
I was just wondering what is the best option for supplying secrets to Terraform. For example, when writing secrets to parameter store.
Options I can see are:
- Create an input variable and supply via cli, prompt or ENV var
- Create a gitignored tfvars file and store secrets in there for use in input vars
- Same as (2) but use git-crypt on the tfvars file and keep the encrypted version in git
- Manually add to Vault (or some other secure place that Terraform can access - SecretHub?) and retrieve value I guess the best option will partly depend on how many people are running the Terraform modules.
Interested in what people have found is most practical. Related GitHub issue: https://github.com/hashicorp/terraform/issues/516
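For reference, option 1 can look roughly like this (paths and names are invented for the example), keeping in mind the value still ends up in the state file:

```hcl
# Sketch of option 1: secret comes in as an input variable (via
# TF_VAR_db_password, -var, or a prompt) and is written to SSM.
variable "db_password" {
  type        = string
  description = "Supplied at runtime, never committed to git"
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/myapp/production/db_password"
  type  = "SecureString"
  value = var.db_password
}
```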
#309 was the first change in Terraform that I could find that moved to store sensitive values in state files, in this case the password value for Amazon RDS. This was a bit of a surprise for me, as…
Secrets generated programmatically as part of terraform, we just write directly into SSM. This will still have the same problems that those must be also stored in the statefile. The only way to keep the statefile remotely secure is via gitops and eliminate human plan/apply phase.
The other kinds of secrets which originate from humans are populated by humans, usually using command line tools. It has all the same problems one might expect: human forgets to populate equivalent secret to another environment. This tends to be more of a coldstart problem when bringing up a new service.
The alternative: git-crypt of secrets. For those unfamiliar, you can basically encrypt a config file using the public key. then at checkout, the build platform has the secret to decrypt.
While the automation is somewhat nice, I don’t like it. The secrets are exchanged on github and the PR review process does nothing. Someone can be changing a secret they shouldn’t be changing or fat-fingering the config file (human error). Hard to catch that.
We haven’t come up with the perfect way yet.
https://github.com/ContainerSolutions/externalsecret-operator/ looks pretty neat
An operator to fetch secrets from cloud services and inject them in Kubernetes - ContainerSolutions/externalsecret-operator
using 1Password as the gui to manage your secrets
Thanks @Erik Osterman (Cloud Posse) - ran out of time to reply in 2019!
Anyway, I appreciate your insights. It kind of confirms my experience.
Great point about secrets being available in TF state.
Regarding the issue with git-crypt, I think this is an issue with external secret storage too. Vault will allow you to validate input if you use Sentinel, but without you could easily have the situation where someone enters a malformed secret. As you said, this type of thing is hard to catch.
For the current client I’m working with I’ve decided to use TF var user input because we’re not automating Terraform plan/apply at all. The inputs are being written to SSM Parameter Store. Parameter Store paths are locked down using IAM roles.
The client is using Dashlane for general password management so we’ll use this to store arbitrary strings too.
Not a great solution but at least it’s simple.
I used Dashlane for ~1 year (migrated from 1P). But lost total confidence in Dashlane when passwords would fail to replicate across devices. Also, during that time 1Password improved by leaps and bounds, plus introduced 1P for Teams. Moved back years ago and very happy.
That’s good to know. I read elsewhere that Dashlane was flakey.
I’ve tried a few including LastPass. I need to try 1password, esp since they integrate with tools like the one you posted above!
I’m not a fan of LastPass either. Doesn’t support TOTP
Yeah it has its own authenticator app for MFA
And it doesn’t support shared MFA, right?
You mean like a team can each have their own MFA device on the same account?
If so, then yes. I have had two phones attached.
No, like AWS Master Account with MFA - stored in LastPass.
Oh right. Good question. I’m about to set up Teams for another client so I’ll let you know.
2019-12-21
2019-12-22
Folks, any comments here please? I created a security group in account X, but later updated the config with a different provider (account Y); that created a new security group in account Y, and the old resource was left behind in account X. Is that expected?
Yes absolutely what you should see. Your state is tied to the account. So if you apply with account X and then account Y you have two states and thus two of the resource
Thanks … what’s the way “terraform show” works with multiple providers ?
2019-12-23
Has anyone been able to create a module for creating a lambda to rotate RDS credentials for Postgres in AWS Secrets manager? There is a CloudFormation template for it but I’m finding it a real pain to replicate it.
@Bruce Have you thought of using aws_cloudformation_stack in terraform. It’s maybe not the nicest solution, but it will save you time and you can still deploy it with terraform.
I was trying to avoid that but that may be the solution
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Jan 01, 2020 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-12-24
Is "AWS Organizations" the best way to deal with creating and maintaining users across environments (Dev/Stage/Prod)? Wondering what the differences are between creating the user base and role access to different environments in Terraform versus using "AWS Organizations". Will there be any specific advantages to using Organizations as such?
Many thanks in advance.
It doesn’t matter for cross account role assumption specifically. The best way, regardless of using Organizations or not is to simply not have any users except for in one account. Or, if using an auth provider, not having any AWS users at all.
Thanks; and you mean “auth provider” means for ex: SAML ?
except for in one account: So do you mean like master account will have all the AWS users in a company and they assume roles when they need to access other accounts ?
Yes, SAML. And yes, users in an “ops” or “admin” account and then assume roles across… Keeping it clean.
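In Terraform terms, the cross-account pattern can be sketched like this (account ID and role name are placeholders): users authenticate only in the "ops" account, and each environment's provider assumes a role there.

```hcl
# Hypothetical provider config: credentials belong to the ops/admin
# account; access to prod is via assume-role only.
provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    role_arn     = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
    session_name = "terraform"
  }
}
```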
Hi i have just used the eks nodegroup module
Great effort
Btw i spent 2 hours to find out an issue
It seems there is a bug in the AWS API if you connect from an IPv6 address
I switched to IPv4 and it is fixed
2019-12-26
Hi I have just started using terraform. I was reading elasticsearch aws terraform module : https://github.com/cloudposse/terraform-aws-elasticsearch
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
I did not find any mention of whether it includes Logstash in the stack or not.
I am trying to build an ELK cluster on AWS
2019-12-29
Hi Guys, is there a way to for-loop in templatefile through multiple lists/tuples? Something like this, for example:
%{ for name lastname age in names lastnames ages ~}
echo ${name} ${lastname} ${age}
%{ endfor ~}
Are they the same length?
yes
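If they're the same length, one way this can work is iterating with an index and subscripting the sibling lists (the file name and values here are made up):

```hcl
# Sketch: the template for-directive yields an index alongside each value,
# so the other lists can be subscripted with it.
locals {
  rendered = templatefile("${path.module}/greet.sh.tpl", {
    names     = ["ada", "grace"]
    lastnames = ["lovelace", "hopper"]
    ages      = [36, 85]
  })
}

# greet.sh.tpl:
# %{ for i, name in names ~}
# echo ${name} ${lastnames[i]} ${ages[i]}
# %{ endfor ~}
```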
2019-12-30
any recommendations for terraform version manager ?
Terraform version manager. Contribute to tfutils/tfenv development by creating an account on GitHub.
thanks
Terraform & Terragrunt Version Manager. Contribute to aaratn/terraenv development by creating an account on GitHub.
Hello Guys
Stuck on an aws_launch_template user data issue for a long time.
Trying to deploy this template:
data "template_file" "user-data" {
  template = <<EOF
echo export DB_CONNECTION="${var.rds_endpoint}" >> /etc/profile
EOF
}

resource "aws_launch_template" "django-launch-template" {
  image_id      = "${data.aws_ami.django-ami.id}"
  instance_type = "${var.instance_type}"
  name          = "${var.template_name}"
  key_name      = "${var.key_pair}"
  user_data     = "${base64decode(data.template_file.user-data.rendered)}"
}
And this is the error I get:
Error: Error in function call
on main.tf line 41, in resource "aws_launch_template" "django-launch-template":
41: user_data = "${base64decode(data.template_file.user-data.rendered)}"
|----------------
| data.template_file.user-data.rendered is " echo export DB_CONNECTION=\"Test.String.com\" >> /etc/profile\n"
Call to function "base64decode" failed: failed to decode base64 data ' echo
export DB_CONNECTION="Test.String.com" >> /etc/profile
'.
try without the base64decode()
Ahh.. My bad Used the wrong function Sorry
Works with base64encode()
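For anyone hitting this later, the fix from the thread applied to the resource above: user_data for aws_launch_template must be base64-*encoded*, not decoded.

```hcl
resource "aws_launch_template" "django-launch-template" {
  image_id      = data.aws_ami.django-ami.id
  instance_type = var.instance_type
  name          = var.template_name
  key_name      = var.key_pair

  # encode the rendered template; decode was the bug
  user_data = base64encode(data.template_file.user-data.rendered)
}
```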
how would we translate
"${var.var1}${var.var2}"
with the exact same notation
yeah that worked. Thanks
in terraform 12 ?
The format function produces a string by formatting a number of other values according to a specification string.
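For example, both forms work in 0.12 (variable names are placeholders):

```hcl
locals {
  combined_a = "${var.var1}${var.var2}"           # plain interpolation still valid in 0.12
  combined_b = format("%s%s", var.var1, var.var2) # equivalent via format()
}
```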
2019-12-31
when upgrading from terraform 0.11.x to 0.12, what is the best practice to deal with state stored in S3?
It should update without problems
If you don’t have versioning turned on (which you should), I would make sure to have a backup
i am not sure if versioning is turned on or not, how do you enable versioning ?
Haven’t had any problems with the state updating automatically. Update to 0.11.14 first, then 0.12.x
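If the state bucket is itself managed by Terraform, versioning can be enabled like this (0.12-era syntax; the bucket name is a placeholder), or as a one-off via the CLI:

```hcl
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state"

  versioning {
    enabled = true
  }
}

# One-off alternative via the AWS CLI:
#   aws s3api put-bucket-versioning --bucket my-terraform-state \
#     --versioning-configuration Status=Enabled
```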