#terraform (2019-09)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-09-02
Hi, I have been using the resource aws_ami_from_instance to create AMIs. The problem with this approach is that I cannot delete the instance after creating the AMI. The instance is useless after this. So basically my workflow is as follows:
- Create an Instance
- Run some script inside the instance
- Create an AMI from the instance
- Terminate the instance
I have been recommended to use Packer for this, but the problem with Packer is that it doesn’t have good integration with Terraform (also I’m passing a lot of variables to the scripts in step 2). Any suggestions please?
@jaykm FYI
@sahil I haven’t used packer but I am not sure why you would want to control packer with terraform.
@Nikola Velkovski So the script I’m talking about in step 2 takes a lot of variables computed while running terraform. In case of aws_ami_from_instance, I can easily pass those variables inside the bash script, but same is not true for packer + terraform
hmmm you might want to drop terraform for that and maybe stick to aws cli
since blocking/maintaining a terraform state for baking amis doesn’t sound quite right.
what kind of data are you computing with terraform? I am guessing ids/arns of resources?
@Nikola Velkovski Yes, ids and arns.
that is easily doable with aws cli
you are most probably using it for baking the ami, the dependency of terraform just makes it more complex.
I guess I’ll have to use aws cli instead. Thanks for your help.
You are welcome, usually I do not use terraform for things that are constantly changing, deploys etc.
It gets cumbersome pretty quickly.
How come you aren’t using Packer to bake AMIs?
—deleted because i’ve realized im stupid and can’t read —
@sahil you can use terraform to setup codebuild with packer to setup your ami building pipeline.
After that you can use the aws_ami
datasource to get the last built ami.
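A minimal sketch of that lookup, assuming the pipeline names or tags the AMIs it builds (the filter value here is hypothetical):
data "aws_ami" "latest_baked" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["my-app-*"]   # hypothetical AMI name prefix set by the pipeline
  }
}

# then reference data.aws_ami.latest_baked.id wherever the AMI id is needed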
@sahil what about building a base AMI and passing in the Terraform computed variables as part of the user_data
script?
@davidvasandani Actually that might work. Thanks!
@sahil No problem! Building a base AMI in Packer that can be used in both a staging and prod environment with different vars loaded at boot via Terraform make testing much easier! Keep us updated with your progress, if you run into any issues, or with a success!
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Sep 11, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
2019-09-03
I’m not sure if this is a TF12 problem or not, but I made another module just recently and this seemed to work, however - providers and their aliases are not found by the module anymore:
Error: Provider configuration not present
To work with
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment its
original provider configuration at module.cicd-web.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment,
after which you can remove the provider configuration again.
I found this link: https://github.com/hashicorp/terraform/issues/21472 that states that providers need to be explicitly passed down to the module, which I tried but it still doesn’t work
Hi, I'm having problems upgrading to 0.12.0. We're running in eu-west-1 but one of my modules requires a cloudfront certificate that is only available in us-east-1. The main terraform file …
2019-09-04
Cross posting this from r/terraform: https://old.reddit.com/r/Terraform/comments/czjnvq/analysis_paralysis_bootstrapping_a_new_terraform/ Anyone here have good examples for bootstrapping a clean parameterized Terraform deployment on AWS?
I’m working on a personal project and hitting a bit of a wall. I’ve been using Terraform for a while but other than a few tiny environments, I’ve…
@Matt is this your thread? If so #geodesic is a great tool that many of us will talk to you about
It avoids Workspaces & wrappers like Terragrunt
Rationale for “great tool” https://github.com/osulli/geodesic-getting-started/blob/master/docs/why-geodesic.md
yes, that’s my thread @oscar
I will take a look at Geodesic
@Matt tonight join https://zoom.us/meeting/register/dd2072a53834b30a7c24e00bf0acd2b8
It is Sweetops weekly hour session
Great chance to get a demo and ask Qs.
#office-hours starting in 15 minutes
#office-hours starting now! ask questions, get answers. free for everyone. https://zoom.us/j/508587304
not sure I can make it @oscar, not this week
but this is one of my major gripes about Terraform which I generally like a lot
otherwise
2019-09-05
Sorry, for your modules, are they compatible with Terraform version 0.12+?
not all of them are converted to 0.12 yet (we are working on it)
those that were converted have hcl2
label https://github.com/cloudposse?utf8=%E2%9C%93&q=hcl2&type=&language=
you can, thanks. We are also adding Codefresh instead of Travis, and adding tests which are deployed to AWS using Codefresh pipelines (this complicates the task for you)
https://github.com/terraform-providers/terraform-provider-aws/issues/9995
Add your thumbs up if that would be useful to you.
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
being quite busy the past few weeks (going back to fixing terraform-lsp on the weekend) was working on a nice research and production project
the project being openstack on nomad
Now I’m just curious what CloudKitty is
Billing and Chargeback service of OpenStack
2019-09-06
Hey guys, I am looking for a way to have my AWS autoscaling group perform a shutdown script before scaling down. The only way I can find to do this is using lifecycle hooks > Cloudwatch Events > lambda > SSM . But this seems quite a chain to string together. Any suggestions?
the life cycle hook is probably the way to go, but you could try https://opensource.com/life/16/11/running-commands-shutdown-linux as well
Linux and Unix systems have long made it pretty easy to run a command on boot. But as it turns out, running a command on shutdown is a little more complicated.
i’m not sure what your use case is and how important it is to get the shutdown script to run appropriately OR if your OS is even linux.
unless the OS goes bad, the scale in in an ASG will try to let the instance shutdown gracefully - this would let units in /usr/lib/systemd/system-shutdown/ run. i’m not sure what the timeouts would be before a forceful termination by the ASG.
K99runmycommandatshutdown
from the link above works really well in both ASG’s and SpotFleet instances.
Thanks!
I will give it a crack and see if this is fit for purpose as it has a lot less moving parts. Thanks for the assistance @Jonathan Le @davidvasandani.
imho that’s the only way
it is normally used for ecs/ec2 connection draining on scale in (down)
but in your case it might be even more complex since the script has to report success
Thanks @Nikola Velkovski
you are welcome
was hoping someone with regex expertise can help me out here, Im trying the following:
${replace(var.project, "/\\s$/", "")}
where var.project is a string that will end in the letter s. I’m trying to strip the s at the end but I’m not having any luck. When I run this the s remains. Any ideas?
You need to use $1
as described here - https://nedinthecloud.com/2018/08/27/terraform-fotd-replace/
@Brij S Your regex is replacing \s, not s
Try ${replace(var.project, "/s$/", "")}
The other approach, to @antonbabenko point, is to match the whole string: ${replace(var.project, "/^(.*)s$/", "$1")}
or just use substr("str", 0, length("str") - 1)
without messing with regex (https://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/)
2019-09-09
Hi, How can I reference resources created with for_each
?
Below the example what I try to accomplish:
locals {
  users = ["user1", "user2"]
}

resource "aws_iam_user" "this" {
  for_each = toset(local.users)
  name     = "${each.value}"
}

resource "aws_iam_access_key" "this" {
  for_each = toset(local.users)
  user     = # Reference above created users
}
Hi @Michał Czeraszkiewicz
user = aws_iam_user.this[each.key].name
@maarten thx
Hello
is terragrunt considered a “best practice” tool to be using?
I don’t know if I consider it a “best practice” tool… most of the must have features of terragrunt (i.e. state locking during apply) have made it to terraform, my answer would be different if this was asked a year or so ago
just asking because i previously worked in a place where we had one terraform repo per environment (dev, prod, staging)
and we had to do a lot of repeatable work in each env
It could be used for a very specific style of writing TF to keep things DRY, though you can do this now with workspaces, as well… I’m a fan of this workflow: https://github.com/cochransj/tf_dynamic_environment_regions
This repository is an example of how to use terraform workspaces to implement the same resource declarations across multiple aws accounts across multiple regions. It also shows how to have a data d…
Note: a lot of that assumes you are using TF 0.12+
thanks
another question
imagine that i have an RDS instance in prod
env, but i do not have it in dev
env
i would be able to accomplish this with terragrunt or workspaces?
sure
conditional statement on count which i believe is possible in 0.12, pseudo code: IF $terraform.workspace == “prod” THEN count = 1 ELSE count = 0 on the RDS resource
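In 0.12 syntax that would look roughly like this (sketch; the workspace is assumed to be literally named "prod"):
resource "aws_db_instance" "this" {
  count = terraform.workspace == "prod" ? 1 : 0

  # ... engine, instance_class, etc.
}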
When we are talking about multiple accounts, teams, environments… for me, terragrunt has been totally necessary to keep my terraform code organized and without so much boilerplate code
thx
one last question
i used to work with 1 big tfstate per environment
is it a better approach to be using multiple tfstates per resource group?
so we can manage VPCs individually, EC2s individually, etc….
we like to split tfstate on multiple dimensions… such as team and stage and app and stateful/stateless…
though, “individually” is a bit relative… for many actions, you can use -target
to restrict the scope of an action… splitting tfstate helps reduce the blast radius of accidents better IMO though
nice
also see the recent thread/posts by @Erik Osterman (Cloud Posse) in #geodesic for another approach… https://sweetops.slack.com/archives/CB84E9V54/p1567187759027700
but this also is freggin scary. i think it’s optimizing for the wrong use-case where you start from scratch. i think it’s better to optimize for day to day operations and stability.
each team has autonomy to apply
without interfering in other team infrastructure
basically, i just started in a new company, nothing in IAC yet
and i’m researching good strategies/architectures to start our environments
and i’m thinking about splitting into multiple tfstates, because we have a project to make a disaster recovery plan. We should be able to disaster recover only some portions of our infrastructure, into another region
we use terragrunt because it is visual (hierarchy/tfstate by directory structure) and easy to comprehend. tf workspaces are less visible in that sense and IMO harder to “know” where you are working. geodesic is solving similar problems in another way entirely
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Sep 18, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
does anyone know how to get the function name of a created lambda?
resource "aws_lambda_function" "s3_metadata", can this resource be accessed via aws_lambda_function.s3_metadata.id?
the docs don’t make it apparent..
aws_lambda_function.s3_metadata.function_name
since tomorrow is HashiConf
is that a prediction or a question
I am giving my prediction that maybe they will officially announce packer 2.0 with HCL2?
how can i pass db_subnet_group_name
to aws_rds_cluster
resource using data object ?
currently i am trying to use
db_subnet_group_name = "${element(data.my_state.networking.database_subnets,1)}"
2019-09-10
does anyone know how to read subnet name from state file ?
@rohit doesn’t look like there is a data source for this yet: https://github.com/terraform-providers/terraform-provider-aws/pull/9525
Adds a data source for aws_db_subnet_group. Used aws_db_instance as a model for this work. Currently only allows looking up exactly one database subnet group using name as the argument, although th…
@sarkis thanks. I will try a different alternative then
Join us live as HashiCorp Founders Armon Dadgar and Mitchell Hashimoto deliver the opening keynote at HashiConf in Seattle, WA.
terraform plan is getting a cost estimation feature on TF Cloud, interesting…
I didn’t find any references to ECS Service Discovery in CP modules. Is it because everyone is running an alternative solution?
For someone getting started with containers, and not having more than 3-4 services at the most, should I even bother with orchestration and/or sophisticated methods of service discovery?
Or will ALB/ECS combo get the job done?
for that we usually deploy https://istio.io/docs/concepts/what-is-istio/ in the k8s cluster
Introduces Istio, the problems it solves, its high-level architecture and design goals.
don’t have anything in TF
That’s what AWS AppMesh does, right? I wonder if that’s an overkill for my use case though.
yes AppMesh should do similar things. we did not use it yet
for 3 static services it might be overkill, but at the same time you get the experience and will be able to use it with tens of services
Has anyone played with the new Terraform SaaS offering?
Looks like TF cloud has hit GA
2019-09-11
Not yet but took a read. Keen to hear someone’s experience & comparison to local Geodesic workflow / CI tools using Geodesic workflow / Atlantis
definitely +1 on this. This is the workflow that we’ve just committed to, so keen on hearing peoples experiences!
I have a hard time getting the cloudposse modules to work with the recent terraform version (v0.12.8). I feel like I’m missing something, any ideas?
make sure all modules you are using are converted to TF 0.12
For example, this one is now cloudposse/terraform-aws-alb
Don’t know about all modules in terraform-aws-modules/.......
(they are CloudPosse’s)
Yeah, so it is indeed an issue with the module implementation itself?
not implementation
the modules that are still in TF 0.11 syntax will not work in TF 0.12 (with a few small exceptions)
Try:
on .terraform/modules/alb_magento2/main.tf line 33, in resource "aws_security_group_rule" "http_ingress":
33: cidr_blocks = [var.http_ingress_cidr_blocks]
Change: removal of "${
and }"
If that doesn’t work, try:
cidr_blocks = var.http_ingress_cidr_blocks
.. since it is already a list
#office-hours starting now! ask questions, get answers. free for everyone. https://zoom.us/j/508587304
2019-09-12
How are folks doing multi region as far as Terraform goes…?
provider per region, pass the provider explicitly to each module/resource
these guys have the best reference i’ve seen for it, https://github.com/nozaq/terraform-aws-secure-baseline/blob/master/providers.tf
Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations. - nozaq/terraform-aws-secure-baseline
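The basic shape of that pattern, as a rough sketch (region aliases and the module path are placeholders):
provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

provider "aws" {
  alias  = "euw1"
  region = "eu-west-1"
}

module "baseline_use1" {
  source = "./modules/baseline"
  providers = {
    aws = aws.use1
  }
}

module "baseline_euw1" {
  source = "./modules/baseline"
  providers = {
    aws = aws.euw1
  }
}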
Workspaces ?
I’m more interested in things like what you do with the state file
is it the age-old question of one giant state, or many smaller states? i think either way it would be controlled by the backend config…
you can have a backend config with a credential that keeps it all in one region if it is one state, should work fine, even if the resources are in multiple regions
or a backend config per state where you apply some rationale/logic to where you want that state stored…
I don’t think this is so simple. You can’t have state for multi regions all in a bucket in one of the regions
the region goes down, which maybe the reason you have gone multi region in the first place, now you can’t get to your TF state
why not?
that’s a different issue, not a technical limitation of tf
I wasn’t talking specifically about restrictions by TF, I’m wondering how people are doing it in a sane way
re a conversation I’ve just had with @Nikola Velkovski
would cross-region bucket replication be sufficient?
set that up on your backend, then repoint your backend config in tf if you need to use another region
Yeah that could get you out of a bit of a hole, but I don’t want to have to repoint backends etc
what is your backend? can you do consul or something in a cross-region way?
S3
This is what I found regarding remote state and workspaces
Terraform can store the state and run operations remotely, making it easier to version and work with in a team.
sorry here it is for s3
Terraform can store state remotely in S3 and lock that state with DynamoDB.
hmm no mention of changing the bucket with workspaces
IIRC you can’t use interpolation in the backend block
with s3, to avoid manually re-jiggering your backend, you would need to be managing the s3 endpoint rather explicitly, doing some kind of health check on the real endpoints and re-pointing things as necessary
and you may still hit problems when running tf, since you’d have to also be quite careful about targeting resources to avoid running against the downed region
I haven’t yet seen a setup that actually addresses these problems. setting up multiple providers in the same state feels like half a solution, and one that will likely bite you when you need to reach for it
It isn’t easy
yeah, if this is that big a concern, you may be best off confining a state to a single region as much as possible, and setting up your app accordingly (deploy independently to multiple regions)
still may need some coordination layer perhaps that your app states depend on, but now your cross-region blast radius is confined to just that resource
which goes into a state bucket for each said region
replicate between each other maybe
using one bucket, different paths of state per regions you can do that manually means having below tree:
providers/aws
├── eu-east-1
│ ├── dev
│ ├── pre
│ ├── pro
│ └── qa
└── eu-west-1
    ├── dev
    ├── pre
    ├── pro
    └── qa
or you can use terraform workspaces
terraform workspaces can’t be interpolated into backend config AFAICR
I think one bucket isn’t ideal…
for me one bucket seems ideal, and you can only play with paths inside it.
and if eu-west-1 goes down? you can’t provision in eu-west-1 OR other region ?
s3 is a global service
buckets are regional, have def seen S3 in a region go down before (not often but has happened and one of the drivers for going multi region for me)
Ah my bad, S3 bucket names are unique globally, I was confused. Totally agree with you on that, spinning up a bucket for each region is ideal
@joshmyers would Aurora Serverless Postgres as a TF backend solve this problem?
I believe that if a region went down the DNS would just failover to the new promoted master in a new region.
or leveraging Minio distributed across multiple regions (or even cloud providers!) https://dickingwithdocker.com/2019/02/terraform-s3-remote-state-with-minio-and-docker/
Thanks @davidvasandani, will have a look!
Let us know what you end up going with? I know at some point I’ll need to address a more robust TF backend.
continuing the thread in order to go multi region/environment we can do something like this
locals {
  environment = element(split("_", terraform.workspace), 1)
  region      = element(split("_", terraform.workspace), 0)
}

output "region" {
  value = local.region
}

output "environment" {
  value = local.environment
}
and then the workspace should be set like
eu-west-1_staging
it’s a bit hacky but does the trick
backends don’t allow interpolation, so you are gonna need some kind of wrapper to get different buckets per region without inputting vars etc
yes it also doesn’t tackle the state problem
but it sounds like you don’t want to put your state in an s3 bucket
maybe other backends might work for you ?
No I think S3 is fine, but it needs to be regional specific and therefore named buckets, so need some way to easily toggle the backend bucket too
The Aurora Postgres idea is interesting, but a few things. Requires much more setup, automating that is possible, but a pain. Requires credentials. Doesn’t solve one of the problems we spoke about. State would be all good in the case of a regional failure as DNS should flip over to the other region and should be all good, but if you have multi region provider in a single run anyway, you are gonna have a hard time if one of those regions is down
Half of your apply or so is gonna fail, potentially leaving you in an interesting state
Really good point.
Have you thought of ways to simulate this?
Nope, and my guess is that when AWS breaks in such a way, all bets are off anyway, but moving onto a client where this is a major concern and wanted to know others feelings
@Erik Osterman (Cloud Posse) any thoughts?
I believe the plan is to usually decouple the infrastructure and application so that the application self heals until the provider resolves the outage (ie don’t try to terraform while S3 is offline )
but looking to hear Erik’s thoughts on this.
terragrunt is a wrapper that lets you use some interpolation in the backend config, it resolves it and constructs the init command for you
heh, I know, that is about all I want out of it at this point! lol
+1 for #terragrunt
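For reference, a rough sketch of what that interpolation looks like in a terragrunt.hcl (bucket naming and the helper functions shown are illustrative; check the Terragrunt docs for your version):
# terragrunt.hcl (per-region / per-env directory)
remote_state {
  backend = "s3"
  config = {
    bucket         = "mycompany-tfstate-${get_aws_account_id()}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}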
@Mike Whiting can you share how you are invoking the module?
@Mike Whiting has joined the channel
@Nikola Velkovski yep that’s the one
oh that woiuld be me
what a coincidence
are you using terraform 0.12 by any chance ?
yeah
unfortunately it has not been ported yet
but I can dedicate some time and do it
that would be awesome
cool
just to clarify…
this will enable ec2 instances to log to cloudwatch events
what do you mean by cloudwatch events
Cloudwatch events are cron like jobs
ah ok
it will add additional metrics to cloudwatch
I just want to see logs from docker
ah that’s not it.
in order to see the logs
from docker you’ll need to have:
- the dockerized app to write to stdout
- iam role for the ec2 machines to write to cloudwatch logs
- a log group in cloudwatch logs
I think that should do it
sounds good…
however let me explain how I got here
oh and this
You can configure the containers in your tasks to send log information to CloudWatch Logs. This allows you to view the logs from the containers in your Fargate tasks. This topic helps you get started using the awslogs log driver in your task definitions.
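A rough Terraform sketch of the pieces listed above (names are placeholders; the ECS instance role is assumed to exist elsewhere):
resource "aws_cloudwatch_log_group" "app" {
  name              = "/ecs/my-app"   # hypothetical name
  retention_in_days = 30
}

data "aws_iam_policy_document" "cloudwatch_logs" {
  statement {
    actions = [
      "logs:CreateLogStream",
      "logs:PutLogEvents",
    ]
    # tighten this to the log group ARN in real use
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "ecs_instance_logs" {
  name   = "cloudwatch-logs"
  role   = aws_iam_role.ecs_instance.id   # assumed to exist elsewhere
  policy = data.aws_iam_policy_document.cloudwatch_logs.json
}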
I’m creating an aws_ecs_task_definition and I suspect the service is failing to start because the docker image resides on a gitlab image registry, and I imagine it’s not possible to use a docker image from somewhere where the authentication isn’t through AWS
but I was hoping to see evidence of that through some kind of logging
if it’s ECS/EC2 then you can ssh into the machine and check the agent logs
otherwise you’ll need to set up logging
from experience the most usual problem is that the instances do not have internet
you can try with a simple docker image
e.g. nginx
and see if it works
if I use a vanilla docker image e.e. jenkins:lts which is available publicly then everything works
*e.g
so it’s not an internet issue
you should see how to authenticate through docker with gitlab
makes sense.. I suppose actually I just need to perform the authentication through the user_data field of aws_launch_configuration
pretty much
thanks.. that’s given me some direction
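A rough sketch of that user_data approach, based on the private-registry-auth doc linked above (the registry hostname, auth value, and variables are placeholders):
resource "aws_launch_configuration" "ecs" {
  name_prefix          = "ecs-"
  image_id             = var.ecs_ami_id              # hypothetical variable
  instance_type        = "t3.medium"
  iam_instance_profile = var.ecs_instance_profile    # hypothetical variable

  user_data = <<-EOF
    #!/bin/bash
    echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
    echo "ECS_ENGINE_AUTH_TYPE=dockercfg" >> /etc/ecs/ecs.config
    echo 'ECS_ENGINE_AUTH_DATA={"registry.gitlab.com":{"auth":"PLACEHOLDER_BASE64_USER_TOKEN","email":"you@example.com"}}' >> /etc/ecs/ecs.config
  EOF

  lifecycle {
    create_before_destroy = true
  }
}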
@Erik Osterman (Cloud Posse) what’s the workflow in this case, should Mike create an issue ?
Ya if he needs it now, the best bet is to fork and run terraform upgrade
cool thanks
hey guys i am trying to create an ec2 (after taint’ing the existing ec2), attaching the ebs volume (using aws_volume_attachment), and using a user-data script in my tf to mount the volume (which is /home), and also trying to import some data from /home to the newly created instance. problem is many times the /home is not mounted, and /var/log/cloud-init-output.log shows No such file or directory for the files i am trying to import, any thoughts on this?
^ hope that question is not confusing
problem is many times the /home is not mounted
is it never mounted, or just sometimes fails?
well most of the time it is not mounted (9/10 times), i manually ssh into the instance and do a sudo mount -a and it mounts. i tried adding sudo mount -a to the user-data script itself – doesn’t help
might be some race conditions
e.g. something (EBS) is not ready yet, but the code tries to mount it
try to add a delay for testing
or maybe there are some settings to wait
i tried adding sleep 60
in the user-data script which didn’t work, OR should i add something to the terraform itself for the wait
thingee ?
no, tf does not wait for random things, just for resources to be created (and not for all in all cases)
oh got it, will try some combinations of wait
in my user-data script itself
I have a bash script which has a line cd /run/media/Username/121C-E137/ this script is triggered as soon as the pen-drive is recognized by the CPU but this line should be executed only after the mo…
I am using mount -o bind /some/directory/here /foo/bar I want to check /foo/bar though with a bash script, and see if its been mounted? If not, then call the above mount command, else do somethin…
still trying ways mentioned from ^ stackoverflows, came up with
if sudo blkid | grep /dev/xvdb > /dev/null; then
  sudo mount -a
else
  sleep 10
fi
any elegant approaches to make that in a loop ?
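One way to loop it (device name and retry count are just examples):
for i in $(seq 1 30); do
  if sudo blkid | grep -q /dev/xvdb; then
    sudo mount -a
    break
  fi
  echo "waiting for /dev/xvdb to appear ($i)"
  sleep 10
done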
update: right now adding a direct sleep 10
(without any loop) seems to have solved the problem
hey all, all of a sudden i’m getting this error when running terraform apply
Error: error validating provider credentials: error calling sts:GetCallerIdentity: NoCredentialProviders: no valid providers in chain. Deprecated.
I have no idea why this is happening. The only thing I did was add some credentials to my .aws/credentials
file.
My providers look like this
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}
does anyone know what might be causing this?
@Brij S is it possible you mucked up your .aws/credentials toml format so it’s not being parsed correctly by the TF provider?
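For comparison, a well-formed ~/.aws/credentials with two separate profiles would look roughly like this (keys are placeholders):
[storesnonprod]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[storesprod]
aws_access_key_id     = AKIAYYYYYYYYYYYYYYYY
aws_secret_access_key = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy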
@sarkis, it seems when I add some old profiles back to the credentials file it works. But when I remove them i get the error
do your profiles depend on each other? i think there was something like source_profile i can’t remember the exact param
no
two separate profiles as you see in the snippet above
can you share your ~/.aws/credentials file in DM and redact sensitive data
sure
Anyone using a tool like drifter
or terraform-monitor-lambda
to detect state drift?
Any success or best practices for identifying and correcting Terraform changes over time?
Check for drift between Terraform definitions and deployed state. - digirati-labs/drifter
Monitors a Terraform repository and reports on configuration drift: changes that are in the repo, but not in the deployed infra, or vice versa. Hooks up to dashboards and alerts via CloudWatch or I…
While these can be useful in a small environment, they are supporting a problematic process and are not going to scale well
If using micro services, there will be a state file per microservice. Let’s say you have a small environment with only 10 services and you have 3 environments. That’s 30 state files plus a few more for environment infrastructure and a few for account infrastructure. This can grow fast
Then there is the bad process that it is supporting. People making production changes from their local systems with possibly no testing or any audit tracking. A much better process would be to commit the change to a git repo and have that trigger the terraform run. This gets rid of the drift issue due to uncommitted changes. It also allows you to add testing and ensure it is run, as well as having an audit trail
master branch reflects what is deployed to prod. With all the history from the PRs. There should be no drift since what’s currently in master is deployed to prod with terraform/helm/helmfile
what we usually do to make and deploy a change to apps in k8s and serverless: create a new branch, make changes, open a PR, automatically (CI/CD) deploy the PR to unlimited staging so people could test it, approve the PR, merge the PR to master, cut a release which gets automatically deployed to staging or prod depending on release tag
for infrastructure (using terraform): create a new branch, open a PR, make changes, run terraform plan automatically, review the plan, approve the PR, run terraform apply, and if everything is OK, merge the PR to master
we use atlantis
for that
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
2019-09-13
@Brij S you can verify if the profiles are properly set with aws cli
e.g. aws s3 ls --profile storesnonprod
because terraform uses that
or by
AWS_PROFILE=profilename aws s3 ls
I need something like “random_string” resource, but with a custom command. So, execute a command only if the resource isn’t in the state yet (or was tainted), and use commands output as a value to put in the state. Any idea what kind of magic to use to achieve such result?
something like here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/grafana-backing-services/aurora-mysql.tf#L118
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
if you provide the param in var, use it
if not, use random string to generate it
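The pattern in that link is roughly this (variable and resource names here are illustrative):
variable "database_password" {
  type    = string
  default = ""
}

resource "random_string" "database_password" {
  count   = var.database_password == "" ? 1 : 0
  length  = 16
  special = false
}

locals {
  database_password = var.database_password != "" ? var.database_password : join("", random_string.database_password.*.result)
}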
Thank you. Unfortunately it’s not the thing I look for. I need something like:
resource "somemagicresource" "pass" {
  command = "openssl rand -base64 12"
}
The local-exec
provisioner invokes a local executable after a resource is created. This invokes a process on the machine running Terraform, not on the resource. See the remote-exec
provisioner to run commands on the resource.
Unfortunately provisioners don’t store any kind of result in the state.
I’ll try to go with https://github.com/matti/terraform-shell-resource , but thanks for all the links provided
Contribute to matti/terraform-shell-resource development by creating an account on GitHub.
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
Have you guys seen this before?
Error: Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
status code: 409, request id: aaaa, host id: aaaa//bbbb+cxxx=
It doesn’t exist on ANY of our accounts
its a very, very, specific and niche bucket name
the chances someone else owns it are extremely slim
I’m sure I saw this about 4-5 months ago, but it actually was created on one of our accounts
we’ve seen that happen when you create a resource (e.g. bucket) not using the TF remote state. Then the state was lost on the local machine. Then TF tries to create it again, but it already exists in AWS
check that you use remote state and are not losing it
Yeh that adds up with what potentially happened
That local state file is long gone
How can I recover the S3 bucket? :S
you need to find it in AWS
and either destroy it manually in the console, or import it
It isn’t there
I’ve searched the account (it has no buckets) - new account
make sure you have permissions to see it (maybe it was created under diff permissions)
Yeh I’ve checked sadly with Admin permissions in console
It genuinely isn’t there, even got one of the IT guys to look
I’ve opened a ticket with AWS but slow
S3 is global, so you need to check all your accounts, even those you don’t know about
Wait so, it could be on a different aws account to that on which I ran terraform?!
it could be. I don’t remember if AWS shows the error Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it in this case
AWS console just shows name already in use when attempting to replicate it in the console
aghhh
so yes, you created it before and lost the state (if you are saying that the chance is very slim that some other people used the same name)
But surely even if I lost the state file
the s3 bucket would be on the aws account
btw the context is the backend module
Weird it has happened again on another account
it is in my local state file
a resource
but it isn’t on the console
what is going on
you have to follow exact steps when provisioning the TF state backend because you don’t have the remote backend yet to store the state in
Yeh no I know
it was an accident
I’m familiar with it
you have to provision it first with local state, then update TF code to use remote S3 backend, then import the state
Probably the 18th account I’ve used your module on.. just something weird happened this time
Ya, I do this:
run:
# Apply Backend module using local State file
direnv allow
bash toggle-s3.sh local
terraform init && terraform apply
# Switch to S3 State storage
bash toggle-s3.sh s3
terraform init && terraform apply
and my toggle-s3.sh script basically comments out the backend
ok
It’s worked plenty time
not sure what happened this time though
i guess the bucket with that name exists for some reason (you created it, other people created it on a diff account, or other people from diff orgs created it). Try to use a diff name
No I think something weird is happening
on a second account I’m getting this
Error: Error in function call
on .terraform/modules/terraform_state_backend/main.tf line 193, in data "template_file" "terraform_backend_config":
193: coalescelist(
194:
195:
196:
|----------------
| aws_dynamodb_table.with_server_side_encryption is empty tuple
| aws_dynamodb_table.without_server_side_encryption is empty tuple
Call to function "coalescelist" failed: no non-null arguments.
I’ve followed the same pattern and commands as many accounts previously
all version locked etc
No clue why it isn’t having it today
annnnnd its working again
what the ?
what theee
that magic bucket thats there but not there?
can see it on aws cli
but not console
whaat
same permissions (iam role)
@oscar I think you mixed up TF versions
if you use 0.11, use 0.11 state backend
same for 0.12
Its aws
look at htis
✗ . (none) state_storage ⨠ aws s3 rm s3://xxx-terraform-state
-> Run 'init-terraform' to use this project
⧉ xxx
✗ . (none) state_storage ⨠ aws s3 ls
2019-09-13 14:20:17 xxx-terraform-state
even after removing it is stil there hahaaha jeez
or, another possibility, the aws provider was not pinned, got updated, and the new one has issues
we had a few 0.11 modules busted after the aws provider was updated
I think you nailed it akynsh
andriy*
managed to recover the state file by cli
Error: Failed to load state: Terraform 0.12.6 does not support state version 4, please update.
that was released just this morning
I wonder if I had the .terraform/modules directory already there
so the conclusion is, always pin everything, TF modules, TF version, providers, etc.
Damn I have it pinned to major
aws = “~> 2.24”
so I found out why my module was using different credentials. It was because i had a main.tf in my module with the following content:
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "ambassadorsnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "ambassadorsprod"
  alias   = "prod"
}
However, when I remove this main.tf file from the module, and run tf plan with configuration that references this module, I get the following error:
To work with
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment its
original provider configuration at module.cicd-web.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.cicd-web.aws_iam_policy_attachment.cloudformation_policy_attachment,
after which you can remove the provider configuration again.
I have a main.tf that is set up so I’m not sure why I’m getting this error
ominously similar to my situ
I am looking at using the SweetOps s3_bucket module but I am not sure how to enable server access logging using the module. Does the module support enabling server access logging?
depending on the s3 module you want to use
this one supports it https://github.com/cloudposse/terraform-aws-s3-website/blob/master/main.tf#L48
Terraform Module for Creating S3 backed Websites and Route53 DNS - cloudposse/terraform-aws-s3-website
I’m having some issues with Terraform and the terraform-aws-rds-cluster module. I’m creating a Global cluster (I forked the cloudposse module and added one line) but this is not just related to global aurora clusters; the problem is that the cluster finishes creating but terraform for some reason keeps polling for status until it times out after 1 hour. This is what I see:
module.datamart_secondary_cluster.aws_rds_cluster.default[0]: Creation complete after 9m10s [id=example-stage-pepe1secondary]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Creating...
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Creating...
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Still creating... [10s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Still creating... [10s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[1]: Still creating... [20s elapsed]
module.datamart_secondary_cluster.aws_rds_cluster_instance.default[0]: Still creating... [20s elapsed]
that will continue for 1 hour…..
and the console will show it as available
do you see that every time you provision, or did you just see it one time?
have you seen this before ?
no
if it’s only once, I’d say your session had expired
it is pretty consistent
the workaround was to create the secondary cluster with 0 instances
then change it to two instances
pretty much every time
I mean I have not been able to successfully complete the creation of the cluster
has anyone used multiple providers for a module?
Ive done this multiple times with success, but now I’m facing an issue where all resources are created in one account(provider) and not the other and i’m not sure why
@Brij S you’ll need to post some code or errors for us to help you diagnose.
I had an issue like that yesterday, the name of the resource needs to be different and you need to pass the provider alias to every resource
in my /terraform/modules/cicd folder Ive got a main.tf file with the following:
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}
in my /terraform/cicd/stores folder Ive got a main.tf with the following:
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}
and ive got a /terraform/cicd/stores/web.tf file where Ive got:
module "cicd-web" {
  source = "../../modules/cicd-web"

  providers = {
    aws.nonprod = "aws.nonprod"
    aws.prod    = "aws.prod"
  }
  ........
in all of my resources ive got either a provider = "aws.nonprod"
or provider = "aws.prod"
but they all get created in aws.nonprod
@davidvasandani ^
However, I realized that if I put profiles in /terraform/modules/cicd/main.tf then it works! However, that defeats my purpose of the module since id want to use different profiles for different accounts
there is no difference between these providers
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}
they are the same
thats a good point.. didnt notice that
they need to have some diff, e.g. region
but the region is the same as well
if i remove that main.tf from the module I get an error saying it needs it
they have to be diff otherwise why do you need them
provider "aws" {
  region                  = "us-west-2"
  shared_credentials_file = "/Users/tf_user/.aws/creds"
  profile                 = "customprofile"
}
diff region, or diff profile, or diff shared_credentials_file
right, I can add profile but if that lives in the module I cant reuse it
for another set of accounts
which tells the provider to use diff credentials from diff profile to access diff account
you need to add that
if I leave profile in the main.tf in the module, then I cant reuse the module
because another account will have a different profile
whatever you are saying you can’t reuse, does not make any diff for terraform
so in my module, /terraform/modules/somemodule, i have a main.tf which includes a profile which is used for account A
you create a set of providers (they should differ by region or profile)
differ by profile, yes
then for each module, you send a set of required providers
and in each resource use the provider aliases
there is no other way of doing it
wait, in the module, the main.tf if I put a profile in
how does the module become reusable if the profile is hardcoded for a certain account
the module is reusable because you send it a list of providers (which can contain only one)
and the module uses that provider
w/o knowing the provider details
Modules allow multiple resources to be grouped together and encapsulated.
yes I understand that
so that means, I remove main.tf from my module?
(which causes errors)
not sure I understand that
ok let me explain
you fix the error in main.tf
not remove it
in /terraform/modules/somemodule/main.tf
I have:
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  alias   = "prod"
}
In /terraform/folder/main.tf
I have:
provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesnonprod"
  alias   = "nonprod"
}

provider "aws" {
  version = "~> 2.25.0"
  region  = var.aws_region
  profile = "storesprod"
  alias   = "prod"
}
In /terraform/folder/web.tf
I have:
module "cicd-web" {
  source = "../../modules/somemodule"

  providers = {
    aws.nonprod = "aws.nonprod"
    aws.prod    = "aws.prod"
  }
that is how im using the providers
can you have multiple providers in a module ?
you can. kinda need to when you want to implement a cross-account workflow, for things like vpc peering, resource shares, etc…
I think you can but should you do it ?
if I remove /terraform/somemodule/main.tf
I get this error:
Error: Provider configuration not present
To work with
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment its
original provider configuration at module.somemodule.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment,
after which you can remove the provider configuration again.
in my case I instantiate the module twice, once with one provider and once with the other
look at this example
they don’t create one resource within two providers
in my module I have multiple resources that have either provider = "aws.nonprod"
or provider = "aws.prod"
mmm, maybe moving to a module that can take any provider and then pass one provider to the module
ok, we are mixing up at least 4-5 diff concepts here
sorry
1. @Brij S if you created resources using a provider, you can’t just remove it. Delete the resources, then remove the providers from main.tf, then re-apply again
2. @Brij S your providers must be different (that’s after you do #1). Otherwise TF uses just the first one since they are the same (that’s why everything got created in just one account)
3. @jose.amengual you create a module, but don’t hardcode any provider in it. You can send the provider(s) to it IF necessary (see the sketch after this list)
4. But in (almost) all cases, it’s not necessary. The only use-case where you need to send provider(s) to a module is when your module is designed in such a way that it creates resources in diff regions or in diff accounts (bad idea)
5. Creating such a module that creates resources in diff regions is OK (in this case you can send it a list of providers that differ by region)
6. Creating such a module that creates resources in diff accounts is a bad idea
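To illustrate #3, a rough sketch of a module with no provider blocks of its own, instantiated once per account (profile names are placeholders):
# root configuration
provider "aws" {
  region  = var.aws_region
  profile = "storesnonprod"   # placeholder profile
  alias   = "nonprod"
}

provider "aws" {
  region  = var.aws_region
  profile = "storesprod"      # placeholder profile
  alias   = "prod"
}

# the module itself contains no provider blocks at all
module "cicd_web_nonprod" {
  source = "../../modules/cicd-web"
  providers = {
    aws = aws.nonprod
  }
}

module "cicd_web_prod" {
  source = "../../modules/cicd-web"
  providers = {
    aws = aws.prod
  }
}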
@Andriy Knysh (Cloud Posse) could I show you the problem Im having? I dont have any resources created but i’m still getting the error
sounds like you have resources created
To work with
module.somemodule.aws_iam_policy_attachment.codepipeline_policy_attachment its
original provider configuration at module.somemodule.provider.aws.nonprod is
required, but it has been removed. This occurs when a provider configuration
is removed while objects created by that provider still exist in the state.
Re-add the provider configuration to destroy
i just ran terraform destroy, no resources found
could we zoom possibly?
regarding #6 above: instead of thinking of creating modules that use providers for diff accounts, it’s better to create yourself an environment which will allow you to log in to diff accounts (by using diff profiles in ~/.aws, and even better by assuming roles)
2019-09-15
Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.
Hey guys! I’m looking for some advice on how to approach an issue. I’m trying to figure out a way to use Terraform to provision a Windows Server 2016 instance that will run this cloud prep tool once it’s provisioned. I want to do something with Packer down the line but right now I’m just trying to make an easy way to spin up cloud gaming rigs on AWS for myself.
Prep tool: https://github.com/jamesstringerparsec/Parsec-Cloud-Preparation-Tool
Learn how to run commands on your Windows instances at launch.
Is what you’re after. There are plenty of examples out there on how to pass user-data to an ec2 instance in terraform.
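A rough sketch of that in Terraform (the AMI variable, instance type, and PowerShell body are placeholders; EC2 runs anything wrapped in <powershell> tags at first boot):
resource "aws_instance" "gaming_rig" {
  ami           = var.windows_2016_ami_id   # hypothetical variable
  instance_type = "g3s.xlarge"

  user_data = <<-EOF
    <powershell>
    # illustrative only: fetch and unpack the prep tool, then run it manually or via a scheduled task
    Invoke-WebRequest -Uri "https://github.com/jamesstringerparsec/Parsec-Cloud-Preparation-Tool/archive/master.zip" -OutFile "C:\parsec.zip"
    Expand-Archive -Path "C:\parsec.zip" -DestinationPath "C:\parsec"
    </powershell>
  EOF
}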
@James D. Bohrman this link didn’t work for me.
2019-09-16
Hi @James D. Bohrman this might help “Deploying a Windows 2016 server AMI on AWS with Packer and Terraform. Part 1” by Bruce Dominguez https://link.medium.com/8hIu8JaK1Z
Automating a deployment of a Windows 2016 Server on AWS should be easy right, after all deploying an ubuntu server with Packer and…
Does anyone have a good suggestion on creating a snapshot from an RDS database (that’s encrypted) and restoring it to a Dev/testing Env and doing some data scrubbing?
Suggestions yes, any of them any good? Not so sure
Have seen this done in several ways, none of which were particularly nice
@Bruce https://github.com/hellofresh/klepto looked interesting in this space last time I checked
Klepto is a tool for copying and anonymising data. Contribute to hellofresh/klepto development by creating an account on GitHub.
(probably not a discussion for this particular channel)
Thanks @joshmyers I will check it out.
is anyone able to advise on aws_ecs_task_definition. If I specify multiple containers in the task definition file then neither of the containers come up.
but if I have just one it works
@Mike Whiting you are really going to need to post your instantiation of the Terraform resource or whatever. What you expected. What the actual error message is etc
did you mean to @ me?
these are the resources:
resource "aws_ecs_task_definition" "jenkins_simple_service" {
  // volume {
  //   name      = "docker-socket"
  //   host_path = "/var/run/docker.sock"
  // }

  volume {
    name      = "jenkins-data"
    host_path = "/home/ec2-user/data"
  }

  family                = "jenkins-simple-service"
  container_definitions = file("task-definitions/jenkins-gig.json")
}
resource "aws_ecs_service" "jenkins_simple_service" {
  name            = "jenkins-gig"
  cluster         = data.terraform_remote_state.ecs.outputs.staging_id
  task_definition = aws_ecs_task_definition.jenkins_simple_service.arn
  desired_count   = 1
  iam_role        = data.terraform_remote_state.ecs.outputs.service_role_id

  load_balancer {
    elb_name       = data.terraform_remote_state.ecs.outputs.simple_service_elb_id
    container_name = "jenkins-gig"
    container_port = 8080
  }
}
[
  {
    "name": "jenkins-gig",
    "image": "my-image",
    "cpu": 0,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8000
      }
    ],
    "environment": [
      {
        "name": "VIRTUAL_HOST",
        "value": "<host>"
      },
      {
        "name": "VIRTUAL_PORT",
        "value": "8080"
      }
    ],
    "mountPoints": [
      {
        "sourceVolume": "jenkins-data",
        "containerPath": "/var/jenkins_home",
        "readOnly": false
      }
    ]
  },
  {
    "name": "nginx-proxy",
    "image": "jwilder/nginx-proxy",
    "cpu": 0,
    "memory": 512,
    "essential": true,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ]
  }
]
if I remove the nginx-proxy container from the definition then ecs-agent successfully pulls and launches the jenkins container but with it included nothing happens
nb: ‘my-image’ is from a private registry and nginx-proxy is public
Do you have any error events being logged?
Are there creds for the private repo?
I’m just observing the ecs-agent logs currently (within the instance)
I followed this guide for the private registry stuff https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth-container-instances.html
as I say, the container from the private image launches fine when I don’t specify the proxy container in the definition file.. i.e. one container object
You hadn’t specified which one you can bring up on its own, or that one is in a private registry, at that point
ECS agent logs should give you an idea
I can bring up the jenkins container (private image) on it’s own
when the nginx-proxy definition is present ecs-agent just sits idle
does that make sense?
yes
what do you think I should try?
starting to wonder if terraform is really for me if I can’t get help
(from anywhere)
Terraform is just making API calls for you
yep
The tags for this module are so confusing: https://github.com/cloudposse/terraform-aws-rds/releases
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
I’ve been using 0.11 by mistake as I took the ‘latest’ tag
but that’s actually just a hotfix
the latest 0.12 tag is 0.10
true I could have read the list and lesson learned, but had me stumped for a while as to why it wasn’t working!
@oscar don’t pin to master/latest, always pin to a release. In the module, TF 0.12 started with tag 0.10.0
, but when we needed to add some features to TF 0.11 code, we created the tag 0.9.1
which is the latest tag, but not for TF 0.12 code
Yes that’s what I mean
a 0.11 tag is at the top of the tags list
bamboozled me, logically I would have thought only 0.12 tags would be at the top of the ‘releases’ tab
that’s how GitHub works
so I had it pinned to a 0.11 until I realised what was going on
i don’t even see a 0.11 tag in there. there is a 0.11 branch…
exactly
0.9.1 is a TF 0.11 tag
oh you mean the 0.9.1 tag only supports tf 0.11
not that there is a 0.11 tag
Aye
confusing
bamboozles
we did not find a better way to support both code bases and tag them
Haha its fine, I was just pointing out it is a bamboozle
so we started the TF 0.12 code with some tag and continue increasing it for 0.12
It makes sense
what you are doing makes sense to me, releasing patch fixes on the 0.9 minor stream
for 0.11, usually increase the last tag for 0.11 branch
The lesson learned was ‘don’t just grab the top most tag’
would be cool if tf/go-getter supported more logic in the ref
than an exact committish… a semver comparator (like ~>0.9
) would be awesome
tf/go-getter
what does this do?
terraform uses go-getter under the covers to retrieve modules specified by source
I see, yeh that would be smart
Package for downloading things from a string URL using a variety of protocols. - hashicorp/go-getter
checks the versions.tf file and checks for compatibility
@Andriy Knysh (Cloud Posse) I think I was doing the PR as you commented! https://github.com/cloudposse/terraform-aws-rds/pull/38
thanks @oscar, looks good (you even updated the README)
running automatic tests now, if ok will merge
whereabouts are your tests?
I couldn’t see them
I noted Codefresh wasn’t in the PR either
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
testing this example https://github.com/cloudposse/terraform-aws-rds/tree/master/examples/complete
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
Oh I see. When I navigated the test/ directory it looked like an example
but I realise now that examples_complete_test.go
is related ot the examples/ dir
and that examples/ isn’t just documentation. Nice
Yah that’s some nice gitops
I was expecting a trigger
but that’s cooler
it is a trigger, but we have to trigger it (for security when dealing with PRs from forks)
Oh that makes sense actually
otherwise you could DDOS it
Yeh
merged and released 0.11.0
(now you have that tag ) thanks
woop, thanks
Debate/Conversation:
“We should enable deletion_protection for production RDS”
https://www.terraform.io/docs/providers/aws/r/db_instance.html#deletion_protection
For: anyone in console / terraform cannot accidentally delete (assuming IAM permissions are not super granular & TF is being operated manually)
Against: presumably this would mean the resource cannot be updated? I’m not too familiar with RDS so unsure on how many settings actually cause a re-create
better to enable it, but usually when you want to delete an RDS instance aws takes a snapshot of it as a backup.
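For reference, it is just an argument on the instance resource (sketch; the snapshot arguments shown are optional):
resource "aws_db_instance" "production" {
  # ... engine, instance_class, etc.

  deletion_protection       = true
  skip_final_snapshot       = false
  final_snapshot_identifier = "production-final"   # hypothetical name
}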
guys do you know when we will have count enabled for module?
Not seen an ETA yet, just that it is reserved alongside for_each
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Sep 25, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
I’ve had a brainwave that perhaps I need to add another dedicated aws_ecs_service resource for the nginx-proxy - see my example code above. is this a possibility?
Is there an MSK/Kafka module anywhere?
Has anyone solved a solution for dynamically determining which subnets are free in a given VPC to then use for deploying some infrastructure into? Or know of some examples?
what do you mean by are free
?
available ip address space
that’s not easy
yup; plus we have multiple cidr blocks (secondaries) being added to the VPC, so in some cases the secondary blocks are barely usable because subnets created off of them don’t provide much ip address space (e.g. /28)
so yeah - in those cases basically need a way to filter away “unusable” subnets
the closest thing i’ve found is running a local cmd and finding a way to stuff it into a data template to somehow use downstream - kind of like the solution here: https://medium.com/faun/invoking-the-aws-cli-with-terraform-4ae5fd9de277
but all very ugly
you can use https://www.terraform.io/docs/providers/aws/d/subnet_ids.html to get all subnets for a VPC
Provides a list of subnet Ids for a VPC
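A rough sketch of filtering on free address space, assuming the aws_subnet data source exposes available_ip_address_count (worth verifying against the provider docs); the threshold is arbitrary:
data "aws_subnet_ids" "all" {
  vpc_id = var.vpc_id
}

data "aws_subnet" "each" {
  count = length(data.aws_subnet_ids.all.ids)
  id    = tolist(data.aws_subnet_ids.all.ids)[count.index]
}

locals {
  # keep only subnets with a reasonable amount of free address space
  usable_subnet_ids = [
    for s in data.aws_subnet.each : s.id
    if s.available_ip_address_count > 16
  ]
}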
does TF support inline code for lambda functions like cloudformation?
2019-09-17
Inside terraform (.tf) i can assign dynamic stuff using variables like key_name = "${var.box_key_name}" for different environments; how can i do the same inside the user-data scripts attached to tf? i am trying to have unique values for sudo hostnamectl set-hostname jb-*environmenthere* in the user-data script
something like this? https://www.terraform.io/docs/providers/template/d/file.html
Renders a template from a file.
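For example, something along these lines, assuming a var.environment input and a user-data.sh.tpl template file next to the config (names are illustrative, 0.12 syntax):
data "template_file" "user_data" {
  # user-data.sh.tpl contains:  sudo hostnamectl set-hostname "jb-${environment}"
  template = file("${path.module}/user-data.sh.tpl")

  vars = {
    environment = var.environment
  }
}

resource "aws_instance" "jumpbox" {
  ami           = var.ami_id # placeholder
  instance_type = "t3.micro"
  user_data     = data.template_file.user_data.rendered
}
On 0.12 the built-in templatefile() function does the same thing without the data source.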
hi gents, has any one of you succesfully created s3 bucket module with dynamic cors configuration?
not sure what you mean by ‘dynamic configuration’, but take a look here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/docs/main.tf#L79
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
by dynamic configuration, I thought about utilizing terraform’s ‘dynamic’ feature
the same approach you linked is what I use right now, but it forces some kind of CORS configuration to be applied to the bucket even when you do not need CORS at all
with dynamic configuration I thought I will be able to create s3 buckets with or without cors configuration
that’s easy to implement
I ended up with something like this:
dynamic "cors_rule" {
for_each = var.cors_rules
content {
allowed_headers = [lookup(cors_rule.value, "allowed_headers", "")]
allowed_methods = [lookup(cors_rule.value, "allowed_methods")]
allowed_origins = [lookup(cors_rule.value, "allowed_origins")]
expose_headers = [lookup(cors_rule.value, "expose_headers", "")]
max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
}
}
when variable cors_rules
is a list of maps like this:
cors_rules = [{
  allowed_origins = "*"
  allowed_methods = "GET"
}]
however, this approach is still not perfect, because values not mentioned in the cors_rules
variable will be applied anyway with default values
am I missing something ?
i don’t think it’s possible to do it, unless you want to use many permutations of dynamic blocks with for_each
with different conditions
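One mitigation worth noting: if cors_rules defaults to an empty list, the dynamic block renders nothing at all, so a bucket with no CORS configuration is still possible. A sketch reusing the variable from the snippet above (bucket name is a placeholder; per-rule defaults still apply whenever a rule is present):
variable "cors_rules" {
  type    = list(map(string))
  default = [] # empty list => no cors_rule blocks are rendered at all
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name # placeholder

  dynamic "cors_rule" {
    for_each = var.cors_rules
    content {
      allowed_methods = [cors_rule.value["allowed_methods"]]
      allowed_origins = [cors_rule.value["allowed_origins"]]
      max_age_seconds = lookup(cors_rule.value, "max_age_seconds", null)
    }
  }
}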
I see
thanks for answering
that’s how we deploy https://docs.cloudposse.com/
Here’s a little tool I’ve been working on that the gamers here might like. I used a lot of Cloud Posse modules also
Terraform module for deploying a Parsec Cloud Gaming server. - jdbohrman/parsec-up
for discoverability, have you considered renaming it to terraform-aws-parsec-instance
? this is the format hashicorp suggests for the registry
I haven’t but I will probably do that!
@Andriy Knysh (Cloud Posse) finally, so apparently A) Terraform state loading is private to itself in the UI and Command code, so I will need to talk to either paul or the terraform team about it, B) and good news, I finally found out that loading in terraform is implicit cascading
terraform will declare main.tf to be empty
and skip reading it
which is useful, since I need it to do resource & data type gathering for error checking
2019-09-18
Anyone seen the issue where you curl from an EKS worker node to the cluster and get SSL issues?
Using CP worker / cluster / asg modules.
curl: (60) SSL certificate problem: unable to get local issuer certificate
… this is curling the API endpoint as per EKS
@Addison Higham I’m using your branches from here https://sweetops.slack.com/archives/CB6GHNLG0/p1566415698381800
Error: Invalid count argument
on .terraform/modules/eks_workers.autoscale_group/ec2-autoscale-group/main.tf line 120, in data "null_data_source" "tags_as_list_of_maps":
120: count = var.enabled ? length(keys(var.tags)) : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20
I forgot to update the version for CI to 0.12, will try and push that out
but getting the following error. Could you provide any guidance on what you think that might be?
yeah, that was an oopsie, a fix got merged… but maybe it didn’t make it onto the branch I was trying to upstream
lemme find it
Thanks. If possible could you push it to your fork’s master? :slightly_smiling_face: I did try your inst-*
branch but that didn’t seem to quite fix it
oh that is a different issue @oscar, what are you passing to tags? as the error message says, it can’t have anything dynamic being passed in
tags is actually empty
I’m passing var.tags which is an empty {} in my terraform project that calls your eks_worker module
so am I correct in using your worker & cluster branches @master branch?
because I’m aware you also have the ASG one updated, but do the master branches of worker and cluster point to that?
oh yeah, so that is why we use the inst-version
, which does this: https://github.com/instructure/terraform-aws-eks-cluster/pulls?utf8=%E2%9C%93&q=is%3Apr
Terraform module for provisioning an EKS cluster. Contribute to instructure/terraform-aws-eks-cluster development by creating an account on GitHub.
to be safe, whenever I change refs, I also just delete .terraform
directory and re-init
it is sorta weird, we didn’t want to open a PR to our updated module, but they do need to merge them in order for these to work
Ya I understand the need for the branch. I’ll give another go later on.
So worker inst Cluster master
And that should fix my previous issue with count?
I think so? at least that is what we have and don’t have any issues
public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304
@Addison Higham - darn still got the same issue
module "eks_cluster" {
source = "git::<https://github.com/instructure/terraform-aws-eks-cluster.git?ref=master>"
...
}
module "eks_workers" {
source = "git::<https://github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version>"
...
same error?
Yeh
Error: Invalid count argument
on .terraform/modules/eks_workers.autoscale_group/main.tf line 120, in data "null_data_source" "tags_as_list_of_maps":
120: count = var.enabled ? length(keys(var.tags)) : 0
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
module "eks_workers" {
source = "git::<https://github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version>"
namespace = var.namespace
stage = var.stage
name = var.name
tags = var.tags
...
}
var.tags is empty (defaulting to {}
)
is your cluster_name
dynamic? see https://github.com/instructure/terraform-aws-eks-workers/blob/master/main.tf#L2, the workers module computes some tags, so your cluster_name needs to be known at plan time
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - instructure/terraform-aws-eks-workers
Omg
that must be it
that is why in the example you see them use the label
module to compute the name of the cluster in multiple distinct places
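i.e. compute the name from inputs that are known at plan time instead of from the cluster resource’s outputs. A sketch (the null-label ref and the "-cluster" suffix are illustrative; check the upstream example for the exact convention):
module "label" {
  source    = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace = var.namespace
  stage     = var.stage
  name      = var.name
}

module "eks_workers" {
  source = "git::https://github.com/instructure/terraform-aws-eks-workers.git?ref=inst-version"

  # known at plan time, unlike module.eks_cluster.eks_cluster_id
  cluster_name = "${module.label.id}-cluster"

  # ... other worker inputs ...
}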
Interestingly though… https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/eks.tf#L77
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
# mine
cluster_name = "${module.eks_cluster.eks_cluster_id}"
Will hardcode to a string for now
Super thanks. Cluster and workers up now
But back to workers not connecting to cluster.
@oscar did you apply this https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf ?
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
Ah - no thank you. I saw this before but didn’t honestly understand it! Should this be run at cluster creation or can be applied afterwards?
so at the time we did it, in some cases there were some race conditions, that’s why we did not enable it by default
after the cluster applied, we set the var to true
and applied that
Many thanks
but now you can test it with the var enabled from start
we did that almost a year ago so a lot prob has changed
and we will convert the EKS modules to 0.12 and add auto-tests this/next week (finally )
Would love to get a hold of those updated modules
Andriy you are my hero
My workers are now connected
TF weirdly got an unauthorized response when applying the command:
kubectl apply -f config-map-aws-auth-xxx-development-eks-cluster.yaml --kubeconfig kubeconfig-xxx-development-eks-cluster.yaml
but my kubectl already had the context activated
so I just ran the apply configmap without the --kubeconfig
@Andriy Knysh (Cloud Posse) XD XD XD XD XD, so I found the biggest issue hitting vs code users of the terraform lsp plugin: I forgot to omit the hover provider from the first release I was trying out (so it was very error prone). Since I only use vim, there was no hover getting activated for me
so now it’s a lot more stable for any GUI based editor that is going to use terraform-lsp
nice @Julio Tain Sueiras
@justingrote btw, didn’t realize you were on sweetops. We discussed your comment in #office-hours today https://github.com/hashicorp/terraform/issues/15966#issuecomment-520102463 (@sarkis had originally directed my attention to it)
Feature Request Terraform to conditionally load a .tfvars or .tf file, based on the current workspace. Use Case When working with infrastructure that has multiple environments (e.g. "staging&q…
@justingrote has joined the channel
i am facing issues with pre-commit
when using in my terraform project
repos:
  - repo: git://github.com/antonbabenko/pre-commit-terraform
    rev: v1.15.0
    hooks:
      - id: terraform_fmt
      - id: terraform_docs_replace
i receive the following error
pkg_resources.DistributionNotFound: The 'pre-commit-terraform' distribution was not found and is required by the application
any ideas on what could be the problem ?
@antonbabenko
@rohit Not sure if it will fix anything, but you can try changing the git://
to https://
. Here’s mine for reference:
- repo: https://github.com/antonbabenko/pre-commit-terraform
  rev: v1.19.0
  hooks:
    - id: terraform_fmt
    - id: terraform_docs
i think the problem is with terraform_docs_replace
and maybe it has terraform version 0.11.13
i want to replace the README file automatically as part of commit
do you know if the same can be achieved using terraform_docs
?
it’s possible. I contributed terraform_docs_replace
several months ago, it probably hasn’t been touched since then.
i think terraform_docs_replace
is only supported in terraform v12
terraform_docs
just makes changes to an existing README between the comment needles
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
stuff gets changed here
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
terraform_docs_replace
was made quite a while ago, before 12 came out
when i update variables and their description in variables.tf
, my README.md file does not get updated using terraform_docs
Terraform by HashiCorp
good read
if ive got a module such as:
module "vpc_staging" {
source = "./vpc_staging"
}
can I access a variable/output created in that module in another module like so?
module "security-group" {
source = "terraform-aws-modules/security-group/aws"
version = "1.25.0"
name = "sg"
description = "Security group for n/w with needed ports open within VPC"
vpc_id = "${module.vpc_staging.vpc_id}"
}
Would I use the variable name, or output id? What do I reference basically?
The second module can use the outputs of the first module. So, the vpc_staging module would need an output
called vpc_id
for that example you gave.
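i.e. inside the vpc_staging module something like this (the resource name is a placeholder), which the caller then reads as module.vpc_staging.vpc_id:
# vpc_staging/outputs.tf
output "vpc_id" {
  description = "ID of the staging VPC"
  value       = aws_vpc.this.id
}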
right! I thought so, just wanted to confirm
thanks
Hey guys, I do have a problem with the examples on the eks_cluster, more specifically on the subnets module. It has an unsupported argument there:
An argument named “region” is not expected here.
module subnets on main.tf: this line -> region = "${var.region}" Terraform complains about it not being an expected argument
the example is not actually correct since the EKS modules are TF 0.11, but the subnet module are pinned to master which is already 0.12
we are working on converting EKS modules to 0.12
for now, pin the subnet module to a TF 0,11 release
module "subnets" {
source = "git::<https://github.com/cloudposse/terraform-aws-dynamic-subnets.git?ref=tags/0.12.0>"
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
same with https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L37
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
pin to 0.4.1
which is TF 0.11
Thank you mate
2019-09-19
How come only the creator of the EKS cluster can connect using the CP moduels?
By default, only the creator of the cluster has access to it using IAM. The aws-auth
ConfigMap in the kube-system namespace controls it. You can add an IAM role mapped to a K8s group that will give anyone who is able to assume that role the ability to log in. Looks like CloudPosse’s implementation of the terraform-aws-eks-workers
module doesn’t make this configurable yet.
Looks like the template for the ConfigMap is here: https://github.com/cloudposse/terraform-aws-eks-workers/blob/master/config_map_aws_auth.tpl
The EKS cluster example shows it being applied here: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf
Here’s an example of what it would look like with an IAM role bound to a K8s group that would give anyone that is able to assume the role my-eks-cluster-admin
the ability to log into the cluster with cluster-admin privileges:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::REDACTED:role/REDACTED
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::REDACTED:role/my-eks-cluster-admin
      username: my-eks-cluster-admin
      groups:
        - system:masters
  mapUsers: |
  mapAccounts: |
Then, you would change the command being run in your kubeconfig to use the role by using the -r
flag in the aws-iam-authenticator token
command.
Be advised that this will defeat some auditability because Kubernetes will see everyone as the user my-eks-cluster-admin
. You can do a very similar thing with the mapUsers
section in order to map each user you want to give access to with a username in Kubernetes.
The syntax for mapUsers
is
mapUsers: |
  - userarn: <theUser’sArn>
    username: <TheUsernameYouWantK8sToSee>
    groups:
      - <TheK8sGroupsYouWantTheUserToBeIn>
Thank you, we found the answer to this earlier on! Really appreciate your detail!
We’re planning to fork it when 0.12 of the module goes live to support this customizability
thanks guys, we will add additional roles and users mapping (working on 0.12 of the modules now)
Ah that’s cool. My new firm is really keen to use CP’s own version of 0.12 (not the fork/PR branch). We have our own customizability reqs so once 0.12 is done and pushed we can start extending
https://github.com/hashicorp/terraform/issues/22649 anyone experiencing this out of nowhere? (All devs using the state file are on 0.12.6)
Terraform Version v0.12.7 Debug Output Error: Error loading state: state snapshot was created by Terraform v0.12.7, which is newer than current v0.12.6; upgrade to Terraform v0.12.7 or greater to w…
they have been busy adding new features
usually that happened when using 0.12 then trying to read the state with 0.11
but now looks like any version bump causes that
But everyone (2 people - we’re next to each other) using that project is using the same geodesic shell and has the same version 0.12.6… yet the statefile in S3 says 0.12.7 O.O
neither of us have 0.12.7 which is super weird
geodesic
has 0.12.6 as well?
yep!
or rather
we are both in geodesic
and terraform version
is 0.12.6 on both our PCs
No one else feasibly ran this
inside geodesic
, terraform version
is 0.12.6 as well?
Yes
on our locals: 0.12.0
on our geodesics: 0.12.6
Whilst we’d like to know why, we’re happy to use 0.12.9 etc
.. but we’re using cloudposses terraform_0.12
@Andriy Knysh (Cloud Posse) I see that 0.12.7 is in your packages https://github.com/cloudposse/packages/blob/master/vendor/terraform-0.12/VERSION
however apk add --update --no-cache terraform_0.12 does not work as expected
Cloud Posse installer and distribution of native apps, binaries and alpine packages - cloudposse/packages
Ok updated my geodesic FROM to 0.122.4 and that cleared the cache
now on 0.12.7
i thought you needed apk update && apk add --update terraform_0.12@cloudposse
is the @cloudposse
not required?
doh that must be it
merci beaucoup
granted, using the newest geodesic is also nice~ features and bugfixes, oh my
I was only coming from 0.119 - not that far behind!
I also usually customize that in my own dockerfile that wraps geodesic:
RUN apk add terraform_0.12@cloudposse terraform@cloudposse==0.12.7-r0
Is what’s in ours, but we only have one or two 0.12 projects, everything is mostly on 0.11 still
stupid question I’m using
locals {
  availability_zones = slice(data.aws_availability_zones.available.names, 0, 2)
}
but sometimes my resources end up in the same AZ
better to just hardcode them ?
@jose.amengual what do you mean by sometimes
? When in diff regions?
the code above is ok and should work
same region
no need to hardcode anything
I’m using the terraform-aws-rds-cluster
module
which I’m going to send a PR to support global clusters
you have to make sure you create the subnets in diff AZs https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/examples/complete/main.tf#L40
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
that example worked many times
I know is weird because if I recreate the cluster then it will work
I wonder now….maybe I just have a problem in one region
we use TF to create the accounts, so in every region we have subnets for every AZ
I was wondering if for some reason we made a mistake or something
but I’m using a data lookup to find them base on tags
yea check the data lookup if it returns the correct result
exactly what I’m doing
I’m getting 3 subnet ids in us-east-1 and 4 in us-west-2
so the data lookups are good
hmm… maybe we need to specify AZs now https://www.terraform.io/docs/providers/aws/r/rds_cluster.html#availability_zones
Manages a RDS Aurora Cluster
if cluster_size = 2 and I pass 4 subnets, then it should be ok
Has anyone else run into the issue where you can’t pass variables via the command line when using the remote backend since last week when they released terraform cloud?
Error: Run variables are currently not supported
The "remote" backend does not support setting run variables at this time.
Currently the only to way to pass variables to the remote backend is by
creating a '*.auto.tfvars' variables file. This file will automatically be
loaded by the "remote" backend when the workspace is configured to use
Terraform v0.10.0 or later.
Additionally you can also set variables on the workspace in the web UI:
<https://app.terraform.io/app/Boulevard/sched-dev-feature-branch-environments/variables>
Global cluster support PR @Andriy Knysh (Cloud Posse) https://github.com/cloudposse/terraform-aws-rds-cluster/pull/56
thanks @jose.amengual
commented
fixed
did you run
make init
make readme/deps
make readme
looks like README was not updated
and docs/terraform.md
was deleted
weird
mmm
❰jamengual❙~/github/terraform-aws-rds-cluster(git:globalclusters)❱✔≻ make readme 5.2s Thu 19 Sep 18:51:17 2019
curl --retry 3 --retry-delay 5 --fail -sSL -o /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs <https://github.com/segmentio/terraform-docs/releases/download/v0.4.5/terraform-docs-v0.4.5-darwin-amd64> && chmod +x /Users/jamengual/github/terraform-aws-rds-cluster/build-harness/vendor/terraform-docs
2019/09/19 18:51:24 At 3:16: Unknown token: 3:16 IDENT var.namespace
make: *** [docs/terraform.md] Error 1
hmmm
looks like something is broken (will have to look)
Looks like an old build harness
The ident error tells me that it’s using an old version of terraform
Terraform-docs does not support it natively, so we have a wrapper around terraform docs
ohhhh
one sec
I have two binaries
Also might get fixed if you blow away build harness and rerun make init. Just a hunch.
(On my phone so cant provide more detail)
done
Unknown token: 3:16 IDENT
happened to me when TF versions mismatched
thanks guys
tested on AWS and merged
I am currently working on a new fix for the terraform-docs.awk
wrapper here: https://github.com/antonbabenko/pre-commit-terraform/issues/65
If there are any other issues coming up, let me know
How reproduce Working code: staged README.md <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK --> <!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK --> staged vars.tf variable "ingress_ci…
2019-09-20
Azure
Hi everyone, I’m about to move my big terraform configuration into separate modules, but I have a question about best practice regarding resource-groups.
If I create a resource-group
resource in every one of my modules it will be fine, because it will only be created once, but if for some reason I remove the entire module or try to redeploy it, wouldn’t Terraform want to delete my resource group (and all the other resources/modules)? Should I rather use a data
resource to reference the resource group created in another module, or what are your ideas?
Thanks
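For what it’s worth, a common pattern is to create the resource group once (in its own module/state) and have every other module look it up read-only, so removing a module never tries to delete the group. A sketch with placeholder names:
# in the module that owns the resource group
resource "azurerm_resource_group" "core" {
  name     = "rg-myplatform-prod"
  location = "westeurope"
}

# in every other module: a read-only reference, so destroying that module
# never touches the resource group itself
data "azurerm_resource_group" "core" {
  name = "rg-myplatform-prod"
}

resource "azurerm_storage_account" "example" {
  name                     = "myplatformprodsa"
  resource_group_name      = data.azurerm_resource_group.core.name
  location                 = data.azurerm_resource_group.core.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}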
hey guys… not sure what’s going on but it looks like the 0.9.0 - terraform-aws-cloudfront-s3-cdn module is creating ARN IDs like
“arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXXXX”
for S3 policies to allow Cloudfront access
ah that’s a new issue
I just encountered it today
oh thank god.
AWS changed how the API behaves
in the background
if you need a quick fix
I literally thought i was going crazyh
hahah it happened to me as well
i do ,. please
sec
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
the glorious fix is
aaaaah
can i just downgrade my provider version?
let’s take a peek
principals {
  type        = "AWS"
  identifiers = [replace("${aws_cloudfront_origin_access_identity.this.iam_arn}", " ", "_")]
}
Thank you!
for now you should be able to patch it until @Andriy Knysh (Cloud Posse) or @Erik Osterman (Cloud Posse) wake up and officially fix it
haha Erik is a long time friend of mine, i can hold something over him i think to get it fixed
although, I was the one who was usually embarrassing myself…
I think using the replacements only works for current state files, if you’re doing new policies you have to use type CanonicalUser and identifier s3_canonical_user_id
aaaah
nope that’s not going to work
It just applied for me.
even though CanonicalUser and identifier s3_canonical_user_id will pass tf apply
try it again
aws is changing it in the background
you’ll get a change on every apply
really
ugh
at least that’s what happened to me
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
damn
It won’t take the replace suggestion, keeps telling me bad
Error: Error putting S3 policy: MalformedPolicy: Invalid principal in policy
gonna try something
it was too early, i was using dashes lol….
underscores work
thanks for the help Nikola!
gonna lurk here now….
you are welcome
2019-09-22
guys, any update on this https://sweetops.slack.com/archives/CB6GHNLG0/p1566415698381800 ? Really looking forward to use these modules with TF 0.12
for the cloudposse modules, I got all these working with 0.12: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/14 https://github.com/cloudposse/terraform-aws-eks-workers/pull/21 https://github.com/cloudposse/terraform-aws-eks-cluster/pull/20
I forgot to update the version for CI to 0.12, will try and push that out
yes, we are working on that now, will be done in the next 2-3 days
thanks for the update @Andriy Knysh (Cloud Posse)
2019-09-23
If I create NATs in one module, is there a way to get a list of NAT GW and pass it to a new sg with TF?
Your module can output the list of NAT GWs and you can do whatever you desire with that list
is that only if I am creating that sg within the same module?
Nope.
So there are levels. Think of them as boxes. Terraform resources have attributes (variables you set, say ami_name
for an EC2 instance) and outputs (say instance_name
). You can take that output and play around with it in the same module. Or you can get that output and push it out of your module, so your module now outputs that value too.
Output values are the return values of a Terraform module.
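Concretely, in the module that creates the NAT gateways you would declare outputs like these (resource and module names are placeholders), and the calling config can then pass module.network.nat_gateway_ids or the public IPs into whatever security group rules it builds:
# modules/network/outputs.tf
output "nat_gateway_ids" {
  value = aws_nat_gateway.this[*].id
}

output "nat_gateway_public_ips" {
  value = aws_nat_gateway.this[*].public_ip
}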
Thank you! Is there another way to do it using just data? like data aws_nat_gateway and then scrape for a list with tags
there are examples in terraform-root-modules of reading the output of other modules using their remote state.. https://github.com/cloudposse/terraform-root-modules/blob/9301b150c89a5543bdd2785ecdacf000ee6c5561/aws/iam/audit.tf#L15
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
@pericdaniel I believe this post will answer your questions https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa#7077
Thank you!
@Andriy Knysh (Cloud Posse) https://github.com/cloudposse/terraform-aws-rds/pull/41
why To use this module and not cause a re-creation, you would have to hardcode the password somewhere in your config / terraform code. This is not a secure method. Naturally if you use a secrets sy…
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Oct 02, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Hi guys,
Any of you has experience with maintenance of SaaS environments? What I mean is some dev, test, prod environments separate for every Customer?
In my case, those environments are very similar, at least the core part, which includes, vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is to create one more module on top of it, called e.g. myplatform-core
. The reason why I want to do that is instead of copying and pasting puzzles of modules between environments, I could simply create env just by creating/importing my myplatform-core
module and passing some vars like name, location, some scaling properties.
Any thoughts about it, is it good or bad idea in your opinion?
I appreciate your input.
the idea is good. That’s how we create terraform environments (prod, staging, dev, etc.). We have a catalog of terraform modules (just code without settings/configs). Then for each env, we have a separate GitHub repo where we import the modules we need (using semantic versioning so we know exactly which version we are using in which env) and provide all the required config/settings for that environment, e.g. AWS region, stage (prod, staging, etc.), and security keys (from ENV vars or AWS SSM)
As I understand, you’re actually not creating a Terraform Module of your core/base infra, but instead you have catalogs/repos per environment with versioned “module puzzles”?
for example, we have a catalog of TF modules - reusable code which we can use in any env (prod, staging, dev, testing) https://github.com/cloudposse/terraform-root-modules/tree/master/aws
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
the code does not have any identity, it could be deployed anywhere after providing the required config/settings
then for example, in testing env, we create projects for the modules we need (e.g. eks
), https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
and load the module code from the catalog https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks/.envrc (uisng semantic versioning)
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
but all the config/settings are provided from a few places:
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
- Dockerfile (in which we have settings common for all modules in the project) https://github.com/cloudposse/testing.cloudposse.co/blob/master/Dockerfile
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
- Secrets are from ENV vars (which get populated from diff sources, e.g. AWS SSM, Secrets Manager, Vault, etc.) when the CI/CD deployment pipeline runs, or on dev machine by executing some commands)
I see, thank you very much. I started with a different approach: I keep all my environments in one Terraform repository with projects, and I include modules from external git repositories (each module in a separate git repository)
that’s what we do too
https://github.com/cloudposse/terraform-root-modules is a (super)-catalog of top level modules which are aggregations of low-level modules
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
each projects in there connects low-level modules together into a reusable top-level module https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/eks.tf#L31
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
ah, right
those aggregations are opionated since you can have many diff ways to connect low-level modules to create a top-level module you need
Interesting approach. I was reading quite a lot recently, best practices with Terraform, TF Up & Running, etc., and in most cases people don’t recommend using nested modules, but it looks really reasonable in your case.
they are not nested (in that sense)
those are module instantiation and connecting them together into a bigger module
that’s why we have modules in TF - to reuse them
in other modules
Actually that was my understanding of the word “nested”, sorry. English is not my first language
By nested modules, they mean modules of modules. Cloudposse stuff does use modules of modules e.g. module A may use module B, and module B may use module C
It works fine, but can be interesting to debug several layers down
If you want composable modules, there isn’t much of a way around that
Any samples/examples for implementing Cloudwatch events>create new rule>Ec2 Instance State-Change Notification > Target > SNS > email
, currently going through official docs
@Hemanth you cannot create an email subscription to an SNS topic with terraform, because they require a confirmation
Hey All, has anyone had issues creating azure resources with an s3 backend?
@Andriy Knysh (Cloud Posse) have you ever used an s3 backend with other providers for resources? I’m getting an issue where my declared resources aren’t being picked up in the state file
did not use azure, but you can give more details about the issue, maybe somebody here will have some ideas
Otherwise, you just want to create the following resources: aws_cloudwatch_metric_alarm, aws_sns_topic, and aws_sns_topic_subscription
maybe you could find some ideas from https://github.com/cloudposse?utf8=%E2%9C%93&q=alarm&type=&language=
@Hemanth ^
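For the original state-change-to-email use case, a rough sketch would be an event rule with an SNS target (resource names are placeholders; in practice the topic also needs a policy allowing events.amazonaws.com to publish, and the email subscription itself still has to be confirmed out of band as noted above):
resource "aws_sns_topic" "ec2_state_change" {
  name = "ec2-state-change"
}

resource "aws_cloudwatch_event_rule" "ec2_state_change" {
  name = "ec2-instance-state-change"

  event_pattern = jsonencode({
    source        = ["aws.ec2"]
    "detail-type" = ["EC2 Instance State-change Notification"]
  })
}

resource "aws_cloudwatch_event_target" "to_sns" {
  rule = aws_cloudwatch_event_rule.ec2_state_change.name
  arn  = aws_sns_topic.ec2_state_change.arn
}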
@Andriy Knysh (Cloud Posse) the https://github.com/cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms is empty. but thanks those samples are helpful
Terraform module that configures CloudWatch SNS alerts for EC2 instances - cloudposse/terraform-aws-ec2-cloudwatch-sns-alarms
that one was not implemented
2019-09-24
re: https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/pull/51 @Andriy Knysh (Cloud Posse) if you make this change every terraform plan will produce a change
you just need to replace ” “, “_” on the old value
You guys using terraform cloud at all yet?
What are the overall benefits?
Visualization into runs via the web UI. You can see whats been applied recently and how that run went.
You can lock down certain users, you can also plan/apply automatically based on changes to git.
Interesting. I’ll have to check it out. Used to getting the auto features baked into my CI workflow, so if tf-cloud can potentially simplify that, it could be a win.
Does the visualization piece look at anything outside the tf-state?
Using #atlantis for now, as it is more flexible
Though terraform cloud does look appealing
Can you use terraform_remote_state data source as an input attribute for subnet in the cloudposse aws ec2 module?
I am using the terraform approved aws vpc module to create my VPC and have correctly set up all my outputs, one specifically being a public subnet ID. I am trying to reference said subnet ID via a terraform_remote_state data source as the subnet attribute, but am not sure of the proper syntax
@leonawood take a look here https://sweetops.slack.com/archives/CB6GHNLG0/p1569238378151700
If I create NATs in one module, is there a way to get a list of NAT GW and pass it to a new sg with TF?
thank you!
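Roughly, assuming the VPC project’s state lives in S3 and exposes a public_subnet_id output (bucket, key, and the module’s input name are placeholders; check the module’s variables for the exact input). This is 0.12 syntax; on 0.11 you would drop the .outputs. part:
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config = {
    bucket = "my-tfstate-bucket"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

module "ec2_instance" {
  source = "git::https://github.com/cloudposse/terraform-aws-ec2-instance.git?ref=master"

  subnet = data.terraform_remote_state.vpc.outputs.public_subnet_id
  # ... the rest of the instance inputs ...
}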
I have a terraform module which we use to set up new AWS accounts with certain resources. This module is generic enough to use on the ‘dev’ aws account, ‘qa’ account and ‘prod’ account, say. However, I need to only create some resources based on the environment. How can I achieve this with a module? I saw this online: https://github.com/hashicorp/terraform/issues/2831
We have a couple of extra terraform resources that need creating under certain conditions. For example we use environmental overrides to create a "dev" and a "qa" environment fr…
is this still the best way?
was about to try that out but read that if the count is set to 0, it would destroy the resource ?
for all resources in the module, you could use count = var.environment == "prod" ? 1 : 0
or count = var.environment == "qa" ? 1 : 0
etc.
or any combination of the conditions
so adding count = var.environment == "prod" ? 1 : 0
would ensure the resource is only created in prod?
it will ensure that if var.environment == "prod"
then the resource will be created. If you run it in prod
, it will be in prod.
at the same time, you could make a mistake and set var.environment == "prod"
and run it in dev
, then it will be created as well in dev
@Brij S you need some kind of container (or separate repo) where you set all configs for let’s say prod
(e.g. region and AWS prod account ID) and where you set var.environment == "prod"
when you run it, it will be used only in the prod account and since var.environment == "prod"
, the resource will be created
so a better strategy would be not to create a super-giant module with many conditions to create resources or not
divide the big module into small modules
then use some tools to combine only the required modules into each account repo
the tool could be terragrunt
or what we do using geodesic
and remote module loading https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/
One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…
anyone here split up state files? we use tf workspaces and it works quite nicely. I am interested if there’s a way to combine all the outputs into one file though, for reference?
so I can just send to our sys admin and it contain all the relevant details
@Andriy Knysh (Cloud Posse) i will look into terragrunt, as for now Id like to use the above suggestion with TF11, but having some issue with syntax:
${var.aws_env} == "prod" ? "1" : "0"
doesn’t work - what am i missing?
"${var.aws_env == "prod" ? 1 : 0}"
what about the closing }
need it too
cool let me try that
PR for more alb ingress config options: https://github.com/cloudposse/terraform-aws-alb-ingress/pull/22
Feature Added the following with sensible defaults to not break the current consumers: health check variables to enable/disable and control the port + protocol slow_start stickiness // CC @aknysh…
I did not check the provider versions so unsure if it’ll break consumers or not
added a simple example too
thanks @johncblandii
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
no prob
is there a way to create an IAM user, generate access keys and plug them into paramstore with terraform?
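This is doable with plain resources; a sketch with placeholder names (note the generated secret also ends up in the Terraform state in plain text, so treat the state backend as sensitive):
resource "aws_iam_user" "ci" {
  name = "ci-user"
}

resource "aws_iam_access_key" "ci" {
  user = aws_iam_user.ci.name
}

resource "aws_ssm_parameter" "access_key_id" {
  name  = "/ci/aws_access_key_id"
  type  = "String"
  value = aws_iam_access_key.ci.id
}

resource "aws_ssm_parameter" "secret_access_key" {
  name  = "/ci/aws_secret_access_key"
  type  = "SecureString"
  value = aws_iam_access_key.ci.secret
}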
2019-09-25
Components for secure UI hosting in S3
• S3 — for storing the static site
• CloudFront — for serving the static site over SSL
• AWS Certificate Manager — for generating the SSL certificates
• Route53 — for routing the domain name to the correct location
Did anyone come across any modules for this in terraform?
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
thankq @Andriy Knysh (Cloud Posse)
S3-backed serverless PyPI. Contribute to amancevice/terraform-aws-serverless-pypi development by creating an account on GitHub.
Hi all,
I’ve been using terraform (0.10 & 0.11) for close to three years now and as terraform 0.12 gets more support/becomes more of the industry standard, my team is looking to adopt it in a way where we can rearchitect our terraform structure, and reduce the general number of pain points across the team.
Currently we are a multi-region AWS shop that has single terraform repos for every service we deploy, with modules at the root of the repo, and directories representing each of our environments (qa-us-east-1, qa-eu-west-1). We run terraform from within those environment specific directories and push remote state to S3 to maintain completely separate state.
We’re thinking about how we can merge all of this into a single repo where:
- There are modules that can be reused across all of our different services (they’d either live at the root of the base terraform repo or in a separate terraform modules repo that we can reference from within our base repo)
- We duplicate as little code as possible (probably obvious but still worth mentioning)
- We continue to keep all state separate on a per environment basis
- Follow terraform best practices to make sure that upgrade paths continue to be easy/straightforward
We also want to keep in mind that we are shifting to a multi account AWS organization where our terraform will be deploying into different AWS accounts as well.
The team so far has demoed both Terragrunt and Terraform Workspaces. We are also considering not using workspaces or Terragrunt but still migrating to the single repo structure. There have been mixed opinions about all options considered. I’d love to get feedback from the community if anyone has opinions based on current or previous experiences with either.
Please note that we are currently not using Terraform Enterprise but that has been an option that could be considered as well
Regarding the multiple AWS accounts, we have a similar setup where, depending on the env directory you’re in, we hop into the correct AWS account. Would that work for you, or are you planning on deploying the same environment within multiple accounts?
it would be different environments within multiple accounts. The rough plan is to have each of our teams have a production & development/test account. So one thought was that the specific account would be another extracted layer of directories, either a level above or below the env directory
@kj22594 take a look here, similar conversation https://sweetops.slack.com/archives/CB6GHNLG0/p1569261528160800
Hi guys,
Any of you has experience with maintenance of SaaS environments? What I mean is some dev, test, prod environments separate for every Customer?
In my case, those environments are very similar, at least the core part, which includes, vnet, web apps in Azure, VM, storage… All those components are currently written as modules, but what I’m thinking about is to create one more module on top of it, called e.g. myplatform-core
. The reason why I want to do that is instead of copying and pasting puzzles of modules between environments, I could simply create env just by creating/importing my myplatform-core
module and passing some vars like name, location, some scaling properties.
Any thoughts about it, is it good or bad idea in your opinion?
I appreciate your input.
thanks. I’ll take a look
in short, we use the following:
- Terraform modules to provision resources on AWS https://github.com/cloudposse?utf8=%E2%9C%93&q=terraform&type=&language=
- A catalog of top-level modules where we assemble the low-level modules together and connect them. They are completely identity-less and could be deployed in any AWS account in any region https://github.com/cloudposse/terraform-root-modules/tree/master/aws
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
- A container (geodesic, https://github.com/cloudposse/geodesic) with all the tools required to provision cloud infrastructure
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
- Then, for a specific AWS account and specific region, we create a repo and Docker container, e.g. https://github.com/cloudposse/testing.cloudposse.co
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
it provides:
1) all the tools to provision infrastructure
2) Settings and configs for a specific environment (account, region, stage/env, etc.) NOTE that secrets are read from ENV vars or SSM using chamber
3) The required TF code for each module that needs to be provisioned in that account/region gets loaded dynamically https://github.com/cloudposse/testing.cloudposse.co/blob/master/conf/eks/.envrc
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
4) to log in to AWS, an IAM role gets assumed in the container (we use aws-vault
)
so once inside that particular container (testing.cloudposse.co), you have all the tools, all required TF code, and all the settings/configs (that specify where and how the modules get provisioned)
so the code (logic) is separated from data (configs) and the tools (geodesic
), but get combined in a container for a particular environment
Wow, thanks. That makes a ton of sense and seems to be a very sound way of approaching this problem. I do really like the idea of having root level modules repo where you can interconnect different modules for use cases that happen numerous times but also having the modules split out so that they can be reused separately too
yes
also, while terragrunt helps you to organize your code and settings, this approach gives you much more -code/settings/tools in one container related to a particular environment
(terragrunt still can be used to organize the code if needed) https://www.reddit.com/r/Terraform/comments/afznb2/terraform_without_wrappers_is_awesome/
One of the biggest pains with terraform is that it’s not totally 12-factor compliant (III. Config). That is,…
the nice part about all of that is that the same container could be used from three different places: developer machine, CI/CD pipelines (those that understand containers like Codefresh or GitHub Actions), and even from GitHub itself using atlantis
(which is running inside geodesic
container) - that’s how we do deployment and testing of our modules on real AWS infrastructure
That is really cool. Atlantis is something that I’ve had conversations with a friend about but we’ve never actually implemented it or even tested it
I really appreciate this, this is all great knowledge and insight
public #office-hours starting now! join us to talk shop https://zoom.us/j/508587304
2019-09-26
Hey does anyone have a terraform party slackmoji?
I will trade you one terraform-unicorn-dab slackmoji.
lol
I’d love a terraform-parrot
hahaha
I was hoping for something like my kubernetes party:
I stole that form kubernetes.slack.com
Nice!
I just made it!
Probably not my best work, but not bad for a first gif
¯_(ツ)_/¯
Thanks
Niiice!
Where do I get that unicorn XD
I really need it in my workspace
1) download the icons above
2) go here https://$team.slack.com/customize/emoji
where $team
is your slack team
XD
2019-09-27
@here I am trying to upgrade from v.11.14 to v.12 and after going through the upgrade steps and fixing some code changes … now I am seeing following issue
Error: Missing resource instance key
on .terraform/modules/public_subnets.public_label/outputs.tf line 29, in output "tags":
29: "Stage", "${null_resource.default.triggers.stage}"
Because null_resource.default has "count" set, its attributes must be accessed
on specific instances.
For example, to correlate with indices of a referring resource, use:
null_resource.default[count.index]
did anyone faced similar issue and was able to fix it
try "${join("", null_resource.default..*.triggers.stage}"
Hi
I would apreciate some help with the terraform-aws-elasticsearch module
when trying to use it from the complete example
i get in a plan the following
that’s one example but i get that for all the variables
you have to provide values for all variables
it seems as if it were not reading the set variables
yeah, but in the variables.tf file?
or to use the .tfvar
files, use :
terraform plan -var-file="fixtures.us-west-1.tfvars"
terraform apply -var-file="fixtures.us-west-1.tfvars"
hoooo i see
but don’t use our values
change them
so the module, as in module "elasticsearch" { blah blah
should be empty of values if i use a tfvars file, right ?
you can provide values for the vars from many diff places https://www.terraform.io/docs/configuration/variables.html
Input variables are parameters for Terraform modules. This page covers configuration syntax for variables.
how do you provide credentials to private terraform github repository module ?
like this in your providers.tf
then just set in variables.tf files the values
thanks
and how do i provide the path to github module if it is not at the root level
for example, source = "git@github.com:hashicorp/example.git"
but my main.tf is under modules
directory
@AgustínGonzalezNicolini how would i access it ?
git@github.com:hashicorp/example.git//myfolder?ref=tags/x.y.z
thanks
Thanks @Andriy Knysh (Cloud Posse)!!!
2019-09-29
Hey guys, I am looking for the best way to roll back a change to an ASG to the previous known working ami as part of CICD pipeline with Terraform. Thinking of using a script to tag the previous AMI and using that to identify last known config. Has anyone else solved this problem?
I’ve been asked to provision 3 EKS clusters: Dev, Staging, and Prod. What is the way that you guys do this? Currently, I’m thinking of
- Having 3 branches in my git repo called “dev”, “staging”, and “prod”
- Having 3 .tfvars files called dev.tfvars, staging.tfvars, prod.tfvars
- If I commit to dev, my CICD runs terraform apply using a workspace called dev, using dev.tfvars
Hi @roth.andy, personally I am a fan of workspaces. We used to have this setup but without the fixed branches; CI/CD automatically deployed a branch to staging and for prod it was an interactive apply (if tests passed)
2019-09-30
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom.
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Oct 09, 2019 11:30AM.
Register for Webinar
#office-hours (our channel)
Guys, i’ve been using https://github.com/cloudposse/terraform-aws-vpc-peering to peer two vpcs and it works awesome. On my current project I need to peer N VPCs all with each other. As the number of VPCs grows it becomes pretty hard to manage everything even with terraform. Is there any way to dynamically create a peering mesh? CIDRs are carefully chosen so there will be no overlapping and I can fetch all vpcs with a single data source. This shot from AWS describes my setup perfectly
I’d suggest using transit gateway
you can now use share vpcs : https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html
VPC sharing allows multiple AWS accounts to create their application resources, such as Amazon EC2 instances, Amazon Relational Database Service (RDS) databases, Amazon Redshift clusters, and AWS Lambda functions, into shared, centrally-managed Amazon Virtual Private Clouds (VPCs). In this model, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.
@Milos Backonja I would look into Transit Gateway. This allows you to have a hub and spoke type of network and manage the routing tables centrally.
second this
Awesome, thanks a lot, This simplifies my setup enormously. I will need to check/estimate costs.
you can do a bunch of other cool things with transit gateways, like centralize nat gateways, or hook in a central direct connect
Hi Guys, does anyone here using Terraform Enterprise ?
Hub n’ Spoke with VPC Transit Gateway
does anyone know how to add private subnets to the default vpc using terraform?
Don’t use the default VPC, it is bad practice…
is there a module that creates a vpc with private subnet?
yes, just go to clousposse github and search for vpc and subnets
we use their modules and they work great
thanks @jose.amengual
@Brij S take a look at this example https://github.com/cloudposse/terraform-aws-emr-cluster/blob/master/examples/complete/main.tf
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
specifically lines 5-24, right?
yes
does the aws_alb_listener resource support multiple certificate_arns?
For those interested in the EKS modules, we’ve converted them to TF 0.12:
https://github.com/cloudposse/terraform-aws-ec2-autoscale-group https://github.com/cloudposse/terraform-aws-eks-workers https://github.com/cloudposse/terraform-aws-eks-cluster/releases/tag/0.5.0