#terraform (2019-02)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-02-01
Heyho! I’m trying to build a sweet infra and began using terraform + the cloudposse module “terraform-aws-elastic-beanstalk-environment”. I’m trying to create a “SingleInstance” environment with a public IP, but I can’t convince the module to put the EC2 instance into the public net instead of the private one. How do I do that? I’m using the “complete” example, switched “environment_type” to SingleInstance, and set updating_min_in_service to 0 as the docs tell me.
@Lucas 5 min, I’ll take a look
@Lucas the module has vars for private and public subnets
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
the private subnets are where the EC2 instances are placed
but those are just the names (and best practice)
if you provide public subnet IDs in the variable "private_subnets", the instances will be placed into the public subnets
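A minimal sketch of that, assuming the rest of the inputs come from the “complete” example and that module.subnets is your own subnet module (hypothetical name):

module "environment" {
  source = "git::https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment.git?ref=master"

  # ... other inputs from the "complete" example ...
  environment_type = "SingleInstance"

  # despite the names, any subnet IDs are accepted here; passing public
  # subnet IDs places the instances in the public network
  private_subnets = ["${module.subnets.public_subnet_ids}"]
  public_subnets  = ["${module.subnets.public_subnet_ids}"]
}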
ah! nice, I’ll give it a try right away. Indeed the names suggest something different. But maybe it’s because it’s my first day with terraform
You’re off to a flying start if you already found our modules on your first day.
(and found our slack!)
I looked into terraform because managing AWS by hand seemed not right. Then I saw the huge, complicated, and potent config of terraform and thought “oh well, that will take some months until I get it running”. A video on youtube introduced the concept of modules to me and I found the cloudposse repo. Bingo
Thanks @Andriy Knysh (Cloud Posse) it worked.
nice
we had a request already to rename the inputs to elb_subnets and instance_subnets (which both could be private or public, your choice) - so maybe it’s a good idea
2019-02-03
This is more of an AWS related question, but since it was triggered through the terraform-aws-ec2-bastion-server module for my bastion, I’m asking here. The security group created by the module doesn’t allow the builtin user_data.sh to complete all of its tasks, since apt is blocked by the outgoing security group rules, so by design the bastion host needs to be added to a more permissive security group.
This leads to the following questions when trying to come up with an outgoing traffic security policy: what’s the best compromise between allowing package managers access and maintaining security, especially for the bastion host? Is the best way enabling outgoing traffic on the secgroup on demand? Am I trying too hard, and should just open ports 1025-65535 to 0.0.0.0/0 and stop thinking about it?
2019-02-04
If you really want to lock it down, could have egress proxies that are a gate to the outside world, a central choke point
Sure, I could route the traffic through the NGW, the thing is, is this worth doing? How are people tackling this in real world scenarios?
Depends what your real world requirements are?
Lots of places I know of don’t lock down egress at all
Others allow only to specific endpoints
e.g. pypi, rubygems etc
How locked down do you need this environment?
It can be a bit painful to manage those outbound rulesets
Are you running a sandwich making app, or a bank?
No highly sensitive data involved, but private data that should be safeguarded nevertheless. Thanks for your comments, they’re helpful!
Do you have specific security concerns / accreditation requirements?
In this case none at all, just trying to gauge what’s generally acceptable/best practice
What kind of outgoing rules would you need to manage? Do your apps/infra need to reach out to all the internets at run time?
Can you lock it down to DNS outbound and maybe a few upstream endpoints?
Well, front/backend will be using a NGW, my question was just for the bastion host which is exposed, and needs access to the o/s repos, and our gitlab repos for bootstrapping itself.
Do these places provide static IPs to add to egress whitelist SG rules? Doesn’t sound like too much to manage if so and I’d probably want to lock down egress on the bastion
Not really, Canonical/AWS don’t provide static IPs for their distributions’ repos, just URLs. Thanks for your answers. I think I’m settling around NGW egress, unless that presents any issues that I can’t think of atm.
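For reference, a minimal sketch of per-port egress rules on the bastion (assuming an aws_security_group.bastion defined elsewhere), which is roughly what “allow only package-manager traffic” looks like when upstream IPs can’t be whitelisted:

resource "aws_security_group_rule" "bastion_egress_http" {
  type              = "egress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.bastion.id}"
}

# apt/git will also need DNS resolution (udp/53) unless a resolver
# is already reachable under the existing rules
resource "aws_security_group_rule" "bastion_egress_https" {
  type              = "egress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = "${aws_security_group.bastion.id}"
}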
Helpful question stored to <@Foqal> by @joshmyers:
This is more of an AWS related question but since it was triggered through the terraform-aws-ec2-bastion-server module for my bastion I’m asking here. The security group created by the module doesn’t...
Hi everyone, using: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group how can I define additional block device mappings on EC2?
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
I know it should be a list, but in the aws_launch_template it’s an object
it should be a list of maps. Did you try that?
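A sketch of the list-of-maps shape (later in this log the asker confirms a list of maps worked; the exact keys are assumed here to mirror the aws_launch_template block_device_mappings block, so check the module’s variables.tf):

module "autoscale_group" {
  source = "git::https://github.com/cloudposse/terraform-aws-ec2-autoscale-group.git?ref=master"

  # ... other inputs ...

  # one map per device
  block_device_mappings = [
    {
      device_name = "/dev/xvdb"

      ebs = {
        volume_size           = 100
        volume_type           = "gp2"
        delete_on_termination = true
      }
    },
  ]
}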
uhm
(re: my question last week about ecs cli deploys) I found this one to be very useful and really easy to use: https://github.com/fabfuel/ecs-deploy.
Powerful CLI tool to simplify Amazon ECS deployments, rollbacks & scaling - fabfuel/ecs-deploy
@johncblandii Pure bash implementation - https://github.com/silinternational/ecs-deploy
Yup. I saw that one and it wasn’t being kind. I was under the gun so switched to the pip version
our containers have py installed already
Terraform 0.12 released yet?
Terraform 0.12 is the new Perl 6
trolling hard, what happened to Perl ?
great!
Welcome @Ram Glad you stopped by.
thanks eric.
2019-02-05
#terraform, I have a very generic question with regards to deployment of premium services on the Azure platform. When I try to deploy Azure premium services like App Service Environment and Redis Cache, the run goes for 1 hr, then times out and errors with the following message: “Error creating deployment: Future#WaitForCompletion: context has been cancelled: StatusCode=200 – Original Error: context deadline exceeded”
Is there a fix to let Terraform run a deployment for more than 1 hr?
@praveen not many people here are familiar with #azure, sorry if you don’t get the answer you are looking for
@Andriy Knysh (Cloud Posse), not a problem. In general, what is the timeout value for Terraform to run a config?
@praveen i’m not aware of global timeout settings, but some resources have it https://www.terraform.io/docs/configuration/resources.html#timeouts
The most important thing you’ll configure with Terraform are resources. Resources are a component of your infrastructure. It might be some low level component such as a physical server, virtual machine, or container. Or it can be a higher level component such as an email provider, DNS record, or database provider.
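Those per-resource timeouts look like this (a sketch using the aws_db_instance example from the linked docs; whether a given azurerm resource honors a timeouts block depends on the resource, so check its docs page):

resource "aws_db_instance" "example" {
  # ... instance configuration ...

  # operation-specific limits; only resources that implement
  # timeouts accept this block
  timeouts {
    create = "60m"
    delete = "2h"
  }
}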
@Andriy Knysh (Cloud Posse) thank you for the feedback. I am also checking with Microsoft on this. Let me see if I get a definite answer from Microsoft Azure
Thank you once again for your quick response @Andriy Knysh (Cloud Posse)
Hi All and thanks again for the SweetOps resources from CloudPosse! :slightly_smiling_face:
I’m trying to use https://github.com/cloudposse/terraform-aws-vpc-peering/blob/master/main.tf to set up some peering between VPCs in different accounts where the remote vpc is in the other account and the requestor is ‘my’ account,
I see the aws_vpc_peering_connection resource (https://www.terraform.io/docs/providers/aws/r/vpc_peering.html) supports peer_owner_id:
> (Optional) The AWS account ID of the owner of the peer VPC. Defaults to the account ID the AWS provider is currently connected to.
However the cloudposse module does not appear to support peer_owner_id
Am I correct? Is there a work around? If not shall I add one with a PR?
Thanks again!
Terraform module to create a peering connection between two VPCs in the same AWS account. - cloudposse/terraform-aws-vpc-peering
Provides a resource to manage a VPC peering connection.
@Toby for cross-account peering, please use https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
a-ha!!
Thanks
have you guys done vpc peering with terraform? does it work well?
We do it all the time, works well
See the two modules above
oh wow i didnt even notice
2019-02-06
https://github.com/cloudposse/terraform-aws-s3-log-storage/blob/master/main.tf#L29 I believe that line actually breaks the lifecycle rule
This module creates an S3 bucket suitable for receiving logs from other AWS services such as S3, CloudFront, and CloudTrail - cloudposse/terraform-aws-s3-log-storage
as it filters to those tags, and objects most likely won’t have that
It should be a separate var.
or removed
@pecigonzalo Please open an issue explaining what you are running, what you expect to see, what you actually see, a proposed fix if there is one, and example of running proposed fix
I’ll just open a PR, but I want to first confirm what the intention of that line was
I’m actually not experiencing an error, I just saw the config
@pecigonzalo why do you think it will break the rules? Tags are allowed https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#tags-1
Provides a S3 bucket resource.
Because it sets them to the ones from the label
or you mean id = "log" should be the same as "rule" = "log"?
which you use to label the bucket, but not necessarily the objects in the bucket
the apply works, but the lifecycle “does not work”
the tags passed on that line are used for filtering which objects to apply the lifecycle to
so in general, it should != the tags of the bucket, but at least it should not permanently do that
given:
name = "this"
stage = "that"
this will ONLY apply lifecycle to objects that have those as tags as well
which I believe is not the intention there
ok, so you’re saying those tags are for filtering? (I did not know that)
can you provide some links where it’s explained?
Creates a new lifecycle configuration or replaces an existing lifecycle configuration for the bucket using the PUT Bucket lifecycle REST operation.
(thanks for finding it out btw)
Yeah, they are for filtering
how would we use it then? (e.g. a use-case)
You only want to lifecycle objects with X tag
EG: I have objects with tag Expire = true
and other objects
only apply to Expire = true
but for this, your thing sending the objects has to set the tags on them
you could also reverse that logic
TBH, I have never used it, as we normally just apply to the bucket or not
If you create a resource with that, then inspect in the console, you will see the tags under: Add filter to limit scope to prefix/tags
They work like the prefix setting
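A sketch of a tag-filtered lifecycle rule matching the Expire = true example above (bucket name is a stand-in):

resource "aws_s3_bucket" "logs" {
  bucket = "example-log-storage"

  lifecycle_rule {
    id      = "expire-tagged-objects"
    enabled = true

    # only objects carrying this exact tag are matched by the rule;
    # untagged objects are left alone
    tags = {
      Expire = "true"
    }

    expiration {
      days = 90
    }
  }
}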
ok, that sounds correct and matches the link above from @joshmyers
can you add a separate var and open a PR? (please rebuild the README: make init && make readme/deps && make readme)
Yeah sure, will do
https://github.com/cloudposse/terraform-aws-s3-log-storage/pull/14 I intentionally did not rename the output, to keep it compatible
The tags generated by the label module were propagated to the S3 Lifecycle filters, this is in general not desired, as it means the lifecycle only applies to objects with those tags. Console help m…
PR passes, give me a minute to test it
I’m 90% sure it’s a map
but just in case
thanks
can you also remove (Optional) from the descriptions? The README build will do it automatically
Yeah, I did not know if it was part of your standard
Cleaning
(not a big deal, just a few rules to follow for consistency, thanks)
Yeah, indeed, but since it has (optional) in the rest, I guess it’s a legacy thing
yea, thanks
did it work with the tag maps?
I’m checking, 1 sec, finishing another task
At a glance, it applied but did not create the filter
I’m checking
duh, used the wrong var name
@Andriy Knysh (Cloud Posse) works like a charm
You guys put over 100 modules out there to either use or get inspiration from, this is a minor contribution
Anyone here using https://github.com/blinkist/terraform-aws-airship-ecs-service ? I’m new to ECS, and I like how this module handles bootstrapping an initial task definition, but in other ways it feels pretty limited in what I can do. For instance, I can’t figure out how to use it to define a task that launches two containers together.
Terraform module which creates an ECS Service, IAM roles, Scaling, ALB listener rules.. Fargate & AWSVPC compatible - blinkist/terraform-aws-airship-ecs-service
@maarten is your goto man.
does this error happen intermittently? value of 'count' cannot be computed
@btai it could be that variable being assigned to count is invalid. Care to share a snippet of the offending code?
these are the error messages I got
* module.vpc.aws_route_table.private: aws_route_table.private: value of 'count' cannot be computed
* module.vpc.aws_eip.public: aws_eip.public: value of 'count' cannot be computed
* module.vpc.aws_subnet.private: aws_subnet.private: value of 'count' cannot be computed
* module.vpc.aws_subnet.public: aws_subnet.public: value of 'count' cannot be computed
fwiw this vpc module has been used before with no issues
but here is a snippet of the offending code
resource "aws_subnet" "public" {
count = "${local.region-to-az-count-map[var.region]}"
vpc_id = "${aws_vpc.default.id}"
cidr_block = "${cidrsubnet(local.public_subnet_block, ceil(log(local.region-to-az-count-map[var.region] * 2, 2)), count.index)}"
availability_zone = "${var.region}${local.num-to-az-letter-map[count.index]}"
map_public_ip_on_launch = true
tags = "${merge(
var.public_subnet_tags,
var.tags,
map(
"Name", "public-${var.name}-${var.region}${local.num-to-az-letter-map[count.index]}",
"Environment", "${var.environment}",
"AvailabilityZone", "${var.region}${local.num-to-az-letter-map[count.index]}"
)
)}"
}
locals {
region-to-az-count-map = {
"us-west-1" = 3
"us-west-2" = 3
"us-east-1" = 6
"us-east-2" = 3
}
num-to-az-letter-map = {
"0" = "a"
"1" = "b"
"2" = "c"
"3" = "d"
"4" = "e"
"5" = "f"
}
}
off to team meeting…will take a closer look in the hour.
yeah the frustrating thing is I’ve used this module in the past without issues
TF does not like maps in counts, even if the map is static (as in your case)
we have a writeup here https://docs.cloudposse.com/troubleshooting/terraform-value-of-count-cannot-be-computed/
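In practice the workarounds are either applying the dependency first with -target, or feeding count something TF 0.11 can resolve at plan time. A sketch of the latter for the snippet above (az_count is a stand-in variable, not from the original code):

# instead of count = "${local.region-to-az-count-map[var.region]}"
variable "az_count" {
  default = 3
}

resource "aws_subnet" "public" {
  count = "${var.az_count}"

  # ... rest unchanged ...
}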
@Andriy Knysh (Cloud Posse) weirdly I have used this same exact code before for over a year and I never ran into this error until today
i have noticed this before
stuff we had working stops….
yeah
sad
that’s probably because you introduced other dependencies and TF now calculates things differently or in diff order
Helpful question stored to <@Foqal> by @Andriy Knysh (Cloud Posse):
Hi All and thanks again for the SweetOps resources from CloudPosse! :slightly_smiling_face:...
(@Foqal lags by a few days)
@vlad
it also happens when you change the inputs… so something might apply at first, then the inputs change, then you get cannot compute count errors…
will ping u offline @Andriy Knysh (Cloud Posse).. Thanks @Erik Osterman (Cloud Posse)
I’ve seen this count error many times in passing but it hadn’t happened to me. (I’ve avoided dynamic counts for the most part, really just for this case where different regions require a different # of subnets etc)
I thought I avoided it mostly because my map is static
but frustrating to have to use the workaround
was about to use the terraform-aws-ecs-web-app module, but found an inception of terraform-terraform-label or null-label usage throughout. and we use a slightly different naming convention so I keep a local copy of modules with modified label modules. but this one is on another level lol. sigh this is too meta
ours is <brand/product>-<name>-<stage>-<attributes>
hmm
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
lolz
you can use the variable "label_order" to reorder the attributes as you like
for <brand/product>, use environment
but for modules that use terraform-terraform-label I guess it’s different
yea, that one is simple
<brand/product>-<name>-<stage>-<attributes>
should be covered by namespace-name-stage-attributes
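A sketch, assuming a null-label version that supports label_order (values are stand-ins):

module "label" {
  source      = "git::https://github.com/cloudposse/terraform-null-label.git?ref=master"
  namespace   = "acme"   # stands in for <brand/product>
  name        = "api"
  stage       = "prod"
  attributes  = ["blue"]
  label_order = ["namespace", "name", "stage", "attributes"]
}

# module.label.id would then render as "acme-api-prod-blue"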
for terraform-terraform-label I just ended up modifying the one line: id = "${local.enabled == true ? lower(join(var.delimiter, compact(concat(list(var.namespace, var.name, var.stage), var.attributes)))) : ""}"
your ecs modules include your other modules, which each reference the label modules from your git. so do I download each of those modules and point them all to the local label module, or just frankenstein my own terraform files by using your terraform modules for inspiration?
and you know, those are just names, you can assign anything to them, although will look ugly, should work
that is certainly true.
I could just say to the devs: deal with it.
and use your naming convention
(I proposed before to just name them p1, p2, p3, p4, p5)
even better
I’m just wondering, what are your examples of actual use for the “namespace” value?
like real-life examples
or, that’s the MAIN attribute, to namespace all resources for a company
so we use cp and cpco a lot
nice and short. nice
for all clients we namespace their resources by using the company name or abbreviation
yeah that makes sense.
in this case, we can have the same stage, name and attributes, but still have unique names (especially for global resources like S3 buckets)
cool. here’s a diff question. what’s the difference between terraform-aws-ecs-web-app and terraform-aws-ecs-alb-service-task?
the former includes the latter
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - cloudposse/terraform-aws-ecs-web-app
oh geez
ic
super meta
terraform-aws-ecs-web-app has everything to deploy a web app (it’s opinionated)
so many cool features. i like it.
so, our stack includes the main monolith and 3 supporting services, so each would be its own service, but only two are on the public alb. would this module be a good fit for the supporting services? or should I just use the service-task for those? I’m hoping I can, because having the build badges would be neat lol. (the most important part of course)
Also, here’s an example of using terraform-aws-ecs-web-app
to deliver a service like atlantis
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
for example, if you had a company project called “widget-store”, you would create a terraform module called terraform-aws-ecs-widget-store
anyone kick the tires yet?
We’re on TFE paid so don’t know the difference w/ the free, but I like it so far.
2019-02-07
@johncblandii What are the TFE differences? Atlas?
Atlas wasn’t ready to handle our multi-branch/multi-account approach without customizations or using a non-standard version like CP’s. I liked it, though.
So w/ TFE we have a workspace per stage and it works per account through environment vars based on our gitflow branches without needing multiple environments (atlas runs) to handle it.
Not super in favor of this TFE thing
I think it’s just forcing people to their hosted product, like the “remote runner” backend they added
which is only for their private service
at least the registry is public
I’ve literally never even looked at it
remote runner is optional. Use it or don’t. The same w/ remote state.
@joshmyers TFE is basically a TF customized CI. You can run TF locally and validate it against the remote state like anywhere else (s3, etc), but the config is miles easier than creating multiple buckets, paths, etc
@Andriy Knysh (Cloud Posse) resolved block_device_mappings
list of maps worked?
yes!
Nice!
2019-02-08
2019-02-09
21 votes and 6 comments so far on Reddit
have to wonder how many people complaining are paying for or contributing to any hashicorp products?
2019-02-10
Always the way
OS - I haven’t paid for this thing you have ploughed time and effort into, that I chose to use, but I demand you support my use case!
Welcome to Open Source
however, I’d like to mention that of all the hundreds of issues we’ve received from our community, I can’t recall one that came across this way…
(but I see it in a lot of other projects)
That’s a solid response by mitchellh.
You even have the opposite now: companies like AWS etc. put out OS tooling for their platform (unique to their paid platform) and then just expect people to add all the features. e.g. ecs-cli or similar
2019-02-11
hi everyone, are any of you using terratest?
We expect that users will stick to 0.11 through at least the end of 2019.
(Also for clarification those expectations were Hashimoto’s words)
Hahah no worries
@Erik Osterman (Cloud Posse) and the rest of Cloud Posse, what do you guys think about the points /u/xulsitatirev raised in the link you shared, as people who have so much of their work based on Terraform?
Tons of issues have been closed or ignored, because of: This problem will be solved in Terraform 0.12.
We’ve definitely encountered this a fair bit. There are a lot of pain points in < 0.11.x, but they are a “known evil” - like the “value of 'count' cannot be computed” error. Due to the interoperability between our modules, we need to be strategic about how we orchestrate this move, so we are not left straddling both versions. Also, we are soliciting input for how to manage versions of our modules across 0.11 and 0.12
We’re not yet investing in porting our modules to 0.12. That will definitely happen, but we are not sure yet how easy that will be.
Interesting. You’ve got a pretty complex problem to tackle, with dependency tentacles everywhere. Do you know to what extent the current modules will need to be rewritten?
does gracefully shutting down terraform ever work for you guys?
Depends on what it’s doing when you try to stop it
2019-02-12
https://docs.geopoiesis.io/manual/ - looks pretty interesting but incomplete. Did anyone look into it? Is it very similar to Atlantis, or what?
Turbocharging your infrastructure-as-code
I think I like UI very much, but I have not tried it myself
Never seen
I’m starting to play with Terraform Enterprise, and one thing I’m curious to hear is what others are doing around configuring workspaces and variables. Using the TFE Terraform provider to create and manage workspaces works great and makes sense; however, it feels odd using the TFE provider to declare variable resources which then push vars into TFE.
@johncblandii
have you guys gotten this with your vpc peering?
* module.vpc_peer_database.data.aws_route_table.requestor: data.aws_route_table.requestor: value of 'count' cannot be computed
no, but it depends on many factors
we create peering after all other resources get created
oh
darn
Yes this is the biggest PIA
Basically in terraform they don’t support nesting of modules
I’m not nesting modules though
ok, in this case the data provider is the problem - yea
sounds like there’s a “create” happening somewhere there’s a “data” lookup in the same lifecycle event
yeah the subnets haven’t been created yet
in this case
anyone using automated testing for your modules? I was looking at inspec, but not sure if there are some better tools out there. maybe terratest?
you guys aren’t using the helm provider for anything right
No, helm provider is insufficient for what we do
Terraform template files do not support conditionals
#helmfile ftw!
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
DocumentDB support merged
Anyone given terraform 0.12 beta for a spin yet?
there are a few brave souls in #terraform-0_12
why is it common in terraform to see arguments defined as lists, but then only support a single element in the list?
Provides a Load Balancer Listener Rule resource.
but there are several others I’ve encountered too
I think it’s not so much terraform as it is the upstream AWS API
so the API is “aspirational” and defined so that it may “one day” support it
and terraform is just piggy backing on that interface for consistency.
it’s annoying though
I’ve run into that with alb target groups. I want to define a list of paths that map to a service. the property supports a list, but the api rejects it if you specify more than one path.
indeed
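A sketch of the ALB case just described (listener and target group names are stand-ins), where the schema takes a list but only a single path survives the API:

resource "aws_lb_listener_rule" "api" {
  listener_arn = "${aws_lb_listener.front_end.arn}"
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = "${aws_lb_target_group.api.arn}"
  }

  # values is a list in the schema, but the API (at the time) rejects
  # more than one entry, so each path needs its own rule
  condition {
    field  = "path-pattern"
    values = ["/api/*"]
  }
}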
2019-02-13
Hello, I am trying to deploy an Elastic Beanstalk application on AWS with terraform using your module, but I don’t fully understand it. In order to deploy the app do I need the two modules https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment and https://github.com/cloudposse/terraform-aws-elastic-beanstalk-application ?
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
Terraform Module to define an ElasticBeanstalk Application - cloudposse/terraform-aws-elastic-beanstalk-application
We have an advanced example of the beanstalk implementation here:
I can’t find it
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
that shows the interplay between the modules
thank you, let me see
so.. what if I have the application in a .zip file, can i import it?
it’s definitely possible, just not sure if our modules support that use-case
not sure - it’s been a loooooooooooooooong time since we used beanstalk
(we’re using k8s, eks, ecs)
there are quite a few folks in this channel though using these beanstalk modules though
(most are based in US, and asleep right now)
Hello there guys and gals! I was going through the impressive CP library as I was hunting for naming inspiration. I’m trying to put together a service/module catalog of my own, and I was wondering what the differences are between terraform-null-label and terraform-terraform-label. They seem to implement the same thing, while one of them doesn’t use the null provider? Is that right? The main.tf of the null version is way more complicated (for the lack of a better word…) Are you guys recommending one over the other? THX
You’re right, one uses null while the other uses locals. I have similar modules I use. Using locals is simpler, faster, and doesn’t change state, but has limitations. Lately CloudPosse has been extending the null version more. So, if you need those features, your decision is made. With my modules the functionality is almost the same, with 1 important difference: the locals version can only create data for a single label, while the null version can create many. So, I use the locals version whenever I need a single label and the null version when I need a list of labels
@Ralph In short, terraform-null-label was written first and ended up getting quite a lot added to fit general community use cases, but because of that ended up with a hairier more complex implementation.
@Steven @joshmyers Thanks a ton guys, that helped! In the spirit of KISS, I guess I’ll use terraform-terraform-label as a starting point and evolve it from there…
Sooooo…fargate task role uploading to S3 without supplying the access key via env vars. Anyone get through that?
I’ve tried VPC access, but this is a front-end upload (via Angular) so not technically from within the VPC. I tried opening up the access via Allow policies…no go.
our atlantis does it
use this as an example: https://github.com/cloudposse/terraform-aws-ecs-atlantis
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
so the only prob, I think, is this is uploaded via angular so it isn’t technically coming from within the service itself
we get back a, seemingly, valid presigned url with the access key id but it fails w/ a 400
ohhhh
hrmmmmmm so you need to generate signed URLs that allow uploads
I haven’t looked into this for some time, so I’m not current on the best way to do this with a static site
@Andriy Knysh (Cloud Posse) or @joshmyers might have some ideas
Yeah, we have the generated URLs. They just don’t auth well.
so we usually do this kind of thing by creating separate IAM roles for the k8s nodes to assume (via kiam)
can Fargate assume roles?
it receives a role (task and execution roles) but I’m not sure about assuming a different one
Helpful question stored to <@Foqal> by @joshmyers:
Hello there guys and gals! I was going through the impressive CP library as I was hunting for naming inspiration. I'm trying to put together a service/module catalog of my own and I was wondering what...
yes, @Andriy Knysh (Cloud Posse):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_IAM_role.html
nice
Hmm, I’m having trouble adding an IAM server certificate to a CloudFront distribution. It says that the cert must exist in us-east-1, but the aws_iam_server_certificate resource has no region attribute
use a provider for us-east-1
https://github.com/cloudposse/terraform-root-modules/blob/master/aws/acm-cloudfront/main.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
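The pattern in that example boils down to an aliased provider pinned to us-east-1 (domain name here is a stand-in; the linked module uses ACM rather than IAM server certificates):

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

# CloudFront only reads certificates from us-east-1,
# so the cert is created through the aliased provider
resource "aws_acm_certificate" "default" {
  provider          = "aws.us_east_1"
  domain_name       = "example.com"
  validation_method = "DNS"
}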
Not long ago someone linked a site for terraform best practices – and I apparently forgot to bookmark it. It was like it’s own site, had small/med/large templates, etc. Anyone familiar?
how annoying. thanks @Andriy Knysh (Cloud Posse)
Awesome, thank you @Andriy Knysh (Cloud Posse)
2019-02-14
Please open issues if you find something very important missing or simply wrong
2019-02-15
Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.
Anyone doing automated Terraform unit/integration testing? (Kitchen, etc)
a little basic stuff, but just using terraform itself to apply/destroy, wrapped in a CodeBuild job
recently started digging a bit using Terratest
same. we have a test/ folder and just write implementations there to test
exactly
I can read it but fumble writing
terratest doesn’t seem too complicated, https://github.com/confusdcodr/spel-vagrant/blob/master/vagrant_test.go
Create a vagrant box for the spel image. Contribute to confusdcodr/spel-vagrant development by creating an account on GitHub.
@johncblandii you might like our approach then
ruby… shudder
Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness
agreed, @loren. it isn’t rocket surgery, but I still hesitate to introduce something like that
Use bats to define simple tests that anyone with modest sh experience can write.
even hashicorp uses bats
Helm chart to install Consul and other associated components. - hashicorp/consul-helm
interesting @Erik Osterman (Cloud Posse)
I know it’s controversial and there’s no “right answer”, but here’s our write up: https://docs.cloudposse.com/design-decisions/0002-infrastructure-integration-testing/
only controversial at the water cooler
lol
typo
just change it to itemtotem and everyone wins
that’s pretty dope, though. my concern w/ Kitchen was teaching the squad Ruby. It isn’t for everyone
they all know shell to an extent
what I like about the shell approach is it mimicks the human operator experience
right
while some elaborate framework does not. I am not writing ruby code to run terraform. I am not writing go code to run terraform.
i am writing shell scripts all the time to run terraform.
also, sometimes we do more than terraform
terraform + kops + helm + chamber + etc
i think the bats approach supports better story around testing the integration of all tools
and bats can still call some pupose built tool each test kitchen or terratest
so it encompasses all the other testing tools as well.
not just one tool
cool. will revisit it when I can breathe.
bats
is simple
good points…
here’s how we tested it on Codefresh pipeline https://github.com/cloudposse/terraform-null-label/blob/add-tests/codefresh.yml
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
(interactive tutorial)
pretty simple there, @Andriy Knysh (Cloud Posse). what I like is I could copy that to any CI w/ docker support
If parsing/munging data structures (like tfstate), I’d take ruby/python over Bash
100%
i guess that it doesn’t interest me though
i trust terraform 100% that if i define a resource it will be created and in the state file
what i want to test is does it do what I want it to do
i think testing against the statefile itself is barely useful.
if I spin up a CDN, I want to test that I can retrieve objects from them and that they return the right headers
if I spin up an RDS instance, I want to make sure it’s accessible from within the VPC, that the user account provisioning works, and that it’s not publicly accessible
(keep in mind, terraform itself has extensive testing; i don’t want to do that twice)
Agreed. The test kitchen stuff fits nicely with awspec, which does the above
The awspec matchers are pretty handy
Ok, that part is nice
hi everyone, do you have any example of how to implement a rolling upgrade using a terraform aws autoscaling group?
I am changing the ami and the launch config and I would like to terminate the old instances one by one
I added wait_for_elb_capacity and tied the name to the lc name
but for a stateful service like consul that is not enough
2019-02-17
@deftunix we use an external python script for this
A set of Terraform modules for configuring production infrastructure with AWS - segmentio/stack
and you want your ASG to have termination_policies = ["OldestLaunchTemplate", "Default"]
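A sketch of the pure-terraform pattern the question hints at (wait_for_elb_capacity plus a name tied to the LC name); vars and sizes are stand-ins. Note this swaps the entire ASG rather than cycling instances one by one, which is why stateful services like consul usually need the external-script approach:

resource "aws_launch_configuration" "default" {
  name_prefix   = "consul-"
  image_id      = "${var.ami_id}"
  instance_type = "t3.medium"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "default" {
  # interpolating the LC name forces a new ASG whenever the AMI changes;
  # wait_for_elb_capacity blocks until the replacement passes ELB checks
  # (this assumes a load balancer / target group is attached)
  name                  = "consul-${aws_launch_configuration.default.name}"
  launch_configuration  = "${aws_launch_configuration.default.name}"
  min_size              = 3
  max_size              = 6
  wait_for_elb_capacity = 3
  termination_policies  = ["OldestLaunchConfiguration", "Default"]

  lifecycle {
    create_before_destroy = true
  }
}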
@joshmyers @Andriy Knysh (Cloud Posse) I highlighted some comments here: https://github.com/cloudposse/terraform-aws-organization-access-group/pull/12, just pinging here so it does not get lost
what aws_iam_group_membership [1] is a greedy resource that can cause inconsistent behaviour. The resource will conflict with itself if used more than once with the same group. To non-exclusively m…
Hey @pecigonzalo this was biting us in client deployments and needed a fix asap, users kept being removed from groups
Yeah, we have a similar situation at my company
but I don’t think the solution is the correct one IMHO. I would have changed the user definition to add the groups instead
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
the group definition, as when using SecurityGroups without attachments, is meant to be independent and the source of truth when used with aws_iam_group_membership in a group definition
Using aws_iam_user_group_membership could have other issues, as described in my comment there, and also leave leftovers (any member not in the state)
and I think this will bite you on a from-scratch provisioning; it only works in the current config if:
A) user_names is NOT coming from the output of something else
B) this output was already defined in the past (so terraform can calculate against an old state, ending in some really odd situations)
you could pass user_names from vars, but that means you need some external logic that makes those users pre-exist
e.g. in the code from CP, you have modules to create users; they will have to be run separately
It could be this is accepted, I was just unsure, as we have similar code and wanted to highlight the potential problems in a bootstrap situation (e.g. a new AWS account in the org)
Yeah, we have users in different TF state that gets run first and in our use cases e.g. https://github.com/cloudposse/terraform-root-modules/blob/master/aws/iam/security.tf#L14 is an empty list and group membership is defined in https://github.com/cloudposse/root.cloudposse.co/tree/master/conf/users
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
Example Terraform Reference Architecture for Geodesic Module Parent (“Root” or “Identity”) Organization in AWS. - cloudposse/root.cloudposse.co
I’m trying to avoid the file/module per user while waiting for TF 0.12, but it seems like that is taking forever, so we might have to end up splitting
Have you confirmed that you get count cannot be computed if passing in users from another module directly into this in the same run?
Not for this particular run, but count can’t compute from an output unless the output is preexisting (from a previous run)
It’s due to terraform 0.11 internals (I think it’s fixed in 0.12, not sure), as it computes the count in the plan stage, and at that stage the output is empty
Yes if the thing in question is a computed value
12 is supposed to fix most of this I think, but yet to test
Exactly, and that is there: length(var.user_names)
so your count can’t be computed if user_names = "${module.this.names}", sort of thing
Freaking TF 0.12 is building up so much hope/expectation that I’m afraid of what is actually going to be there, or when
There was a post on this recently
Yeah, saw it on reddit but still ¯\_(ツ)_/¯
53 votes and 29 comments so far on Reddit
As names is not a computed value, I’m not sure this would trigger a count cannot be computed - https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_iam_user.go#L44-L48
Terraform AWS provider. Contribute to terraform-providers/terraform-provider-aws development by creating an account on GitHub.
It would be good to confirm this and open an issue if so
We can hit the count cannot be computed in quite a few modules depending on usage, which is a massive pain.
Yeah true, it really depends on how you use them
what Document the error module.ec2_instance.module.label.null_resource.tags_as_list_of_maps: null_resource.tags_as_list_of_maps: value of 'count' cannot be computed Document Terraform issue…
I know names is not, but if you pass user_names = "${module.this.names}" as in my example, to generate an implicit dependency, then it is the product of a certain output, and that output doesn’t exist in the plan phase, so it cannot know the count
I’ll do a small test if you want, but I’m fairly certain it will trigger in that use case
as you are passing a list of users from tfvars, it should be fine, as they are known when calculating the count
Correct
I was not sure how CP was using it, and was just commenting on the unintended side-effect that now user_names can no longer be the output of some other module, potentially breaking your use-case or anyone else’s, so I’d rather comment and make sure this is “known”
For your use case, I wonder if you could use null_data_source to create a dependency between the two modules, while passing in the same list of users to both modules.
YMMV - totally untested
I know the above can be used to create deps between resources, not tried with modules @pecigonzalo, too late on a Sunday to test
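A sketch of that suggestion, explicitly untested as noted above; the depends_id input and module names are hypothetical:

# both modules receive the same static list from tfvars,
# so count stays computable at plan time
data "null_data_source" "users_ready" {
  inputs = {
    # referencing the users module's output delays this data source
    # until those users actually exist
    signal = "${join(",", module.users.user_names)}"
  }
}

module "group_membership" {
  source     = "..."
  user_names = "${var.user_names}"

  # hypothetical pass-through input whose only job is to carry the dependency
  depends_id = "${data.null_data_source.users_ready.outputs["signal"]}"
}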
Yeah, no worries, we have a different module, but I was looking at the change and saw that situation. I’m not sure I follow your last message.
2019-02-18
@Erik Osterman (Cloud Posse) interesting monologue in why-helm.mp4. I don’t have much to add because I still haven’t absorbed and embraced the complexity of k8s as an operator. Once I understand it 100%, or someone takes care of it for me, I will be happy to hand fewer tasks to custom scripts/ansible/terraform. We are still too early in the evolution or maturity.
thanks @antonbabenko
Why I think it would be better to declare the CIDR rather than default to use 10.0.0.0. I was using the example in the README but wanted to use the 10.8.0.0/16 range and got errors until I discove…
any thoughts on changing defaults?
+1
Follow-up from last week’s issue of S3 uploads….CORS.
Update the bucket and ….uploads.
2019-02-19
fyi this ticket has closed for those following it: https://github.com/terraform-providers/terraform-provider-aws/pull/4904
Allow Terraform to authenticate with an EKS cluster via the Kubernetes provider: resource "aws_eks_cluster" "foo" { name = "foo" } data "aws_eks_cluster_auth&q…
there is an aws_eks_cluster_auth data source now: https://www.terraform.io/docs/providers/aws/d/eks_cluster_auth.html
Get an authentication token to communicate with an EKS Cluster
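The usual wiring looks something like this (cluster name is a stand-in):

data "aws_eks_cluster" "default" {
  name = "example-cluster"
}

data "aws_eks_cluster_auth" "default" {
  name = "example-cluster"
}

provider "kubernetes" {
  host                   = "${data.aws_eks_cluster.default.endpoint}"
  cluster_ca_certificate = "${base64decode(data.aws_eks_cluster.default.certificate_authority.0.data)}"
  token                  = "${data.aws_eks_cluster_auth.default.token}"
}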
Small security note on this (as I noted in the PR comments); for anyone using remote state files, the aws_eks_cluster_auth data source will commit a token signed by the most recent person to update the TF state file, and that token can be reused by anyone with access to the state file to impersonate the user (e.g. by running terraform state pull and reading the data source output). If you trust your developers with Terraform access this is fine, but do be aware of the potential security/auditing limitations.
Get an authentication token to communicate with an EKS Cluster
thanks @mbarrien for the heads up. at least for us, we don’t actually have IAM users that have access to our state files; we can only assume roles that either have or don’t have access to specific state buckets
@Andriy Knysh (Cloud Posse)
That’s nice, will look into it
This module is for setting up a custom domain for an existing api-gateway - fedemzcor/terraform-agtw-domain
2019-02-20
hello guys
is it safe to get the modules directly from the public repo for my corporate infrastructure?
I would say yes, but based on the number of forks on some of the modules (https://github.com/terraform-aws-modules/terraform-aws-vpc has 430 forks), people like to fork them and use their forks instead.
Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc
perfect, it is the best practice, i guess!
I don’t know if it is, I never use my forked repos even if I am not managing those repos myself.
in npm for instance, public utilities are very abstract so there’s no problem, but in terraform the code has power; it could be changed in the public repo, and then when you run an update, if you don’t check the plan, some resources are compromised
the difference though with NPM is when you run the code, you don’t even see a “plan”
so you’re putting even more faith in the npm modules
with terraform, you’ll at least see the h4x0r’s plan to pwn you before you get pwn’d
keep in mind, that any NPM (or other app code) that runs in the context of a machine with an IAM instance profile can perform whatever action that instsance profile role grants
and that’s without a plan
you’re right, there must be a confirmation prompt in every CI/CD pipeline.
true, but you don’t have to run untrusted code unverified (hi curl | bash) anyway.
Including an externally-controlled terraform module as part of your infrastructure feels like a huge security concern. If a malicious actor somehow got write access to this repository they could ad…
I wrote up some thoughts on this before
there are also some approaches to “vendoring” in terraform
see terrafile
the problem with terraform and vendoring is with nested modules
the sources need to be rewritten
@antonbabenko how’s the azure+atlantis coming along?
No progress there since we talked last time, too many other things
Do you guys have a tool or similar you use to bootstrap a new git repo for a customer and/or module? e.g adds .gitignore and basic makefile etc
@Andrew Jeffree take a look here https://github.com/cloudposse/reference-architectures
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
Thanks.
we use it to bootstrap all repos for all stages (prod, staging, dev, etc.)
in particular, everything here https://github.com/cloudposse/reference-architectures/tree/master/templates are templates that get auto-generated
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
hi dudes, noticed you added mountpoints to the aws-ecs-container-definition module. but I didn’t see anything for volumes
am i missing something here?
@Ryan Ryke if we missed anything (https://github.com/cloudposse/terraform-aws-ecs-container-definition/commit/a1bf0d8338918e379c00097e29698265512b52d3), your contributions are welcome (it was a PR from an outside contributor)
saw that commit thanks. i just noticed though that you need to have a volume created in order to use the mount point
(the mountpoints were contributed by a community member; we’re not using them anywhere right now)
so the mountpoint is great if you already have a volume created
i didnt know if there was some cool shit that you guys figured out
haha
I’m looking at adding “volumes” now
but for whatever reason ecs isn’t recognizing the option
gonna read the docs some more
just fyi if anyone else sees this later, im looking at this doc
With bind mounts, a file or directory on the host machine is mounted into a container. Bind mount host volumes are supported when using either the EC2 or Fargate launch types. Fargate tasks only support nonpersistent storage volumes, so the host and
@Ryan Ryke If it helps, I use both mount points and volumes in my module. https://github.com/devops-workflow/terraform-aws-ecs-service
Contribute to devops-workflow/terraform-aws-ecs-service development by creating an account on GitHub.
thanks dude
2019-02-21
Any advice on how to get terraform to update ecs service’s task definition when using https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
terraform state show module.regatta_portal.module.alb_service_task_portalbackend.aws_ecs_task_definition.default
id = hiab-qa2-regattaportal-bend
arn = arn:aws:ecs:eu-west-1:594350645011:task-definition/hiab-qa2-regattaportal-bend:3
revision = 3
terraform state show module.regatta_portal.module.alb_service_task_portalbackend.aws_ecs_service.default
task_definition = hiab-qa2-regattaportal-bend:2
so task definition has revision=3 in state but ecs_service uses revision 2 for some reason
lifecycle {
  ignore_changes = ["task_definition"]
}
so this explains the behavior, and I also found the issue about why it was added. So the question after this is: can I somehow get terraform to ignore the ignore_changes, or do I have to update the ecs_service via some other means?
@Samuli terraform is not a good tool for code deployments. The ECS module deploys a default backend, then ignores changes to the task definition, and you should be deploying to ECS via an out-of-band method
OK, thanks for clarification
Otherwise there will be a state to manage per ecs service/task which is pretty ugly
Hi, I’m trying to create EFS backups using the cloudposse/terraform-aws-efs-backup module (https://github.com/cloudposse/terraform-aws-efs-backup), and I was wondering how to specify the efs_mount_target_id.
Because the type is specified as a string, but the output of our EFS module (and that of cloudposse/terraform-aws-efs) is a list (https://github.com/cloudposse/terraform-aws-efs/blob/f4c8c735a9d4d042928229b56e754eea400fb5c3/outputs.tf#L26).
And since there is no way to specify an availability zone, how do I specify the “correct” efs_mount_target_id? I seem to be missing something.
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs
Fixed it by using the first element of the list, don’t know if that’s the correct way to do it but it seems to work. I do however run into the following error:
* module.efs_backup.output.sns_topic_arn: Resource 'aws_cloudformation_stack.sns' does not have attribute 'outputs.TopicArn' for variable 'aws_cloudformation_stack.sns.outputs.TopicArn'
@sirhopcount we tested terraform-aws-efs-backup many months ago, maybe something changed already that throws the error
here is a complete solution that uses it https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L96
Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack - cloudposse/terraform-aws-jenkins
btw, AWS now has a Backup service which does EFS backup as well https://aws.amazon.com/backup/
AWS Backup is a fully managed backup service that makes it easy to centralize and automate the back up of data across AWS services in the cloud as well as on-premises using the AWS Storage Gateway.
we should start using it (we don’t have a module for it)
@Andriy Knysh (Cloud Posse) Already found that example; my configuration doesn’t differ that much (my configuration is in the issue I created: https://github.com/cloudposse/terraform-aws-efs-backup/issues/36). Thanks for the tip, didn’t know about AWS Backup.
Hi, I'm trying to create EFS backups using this module but I keep running into the following error: * module.efs_backup.output.sns_topic_arn: Resource 'aws_cloudformation_stack.sns' doe…
@sirhopcount if you find/fix any issues, PRs are welcome
Unfortunately I’m not that familiar with AWS CloudFormation. I think it has to do with the output of the template (https://github.com/cloudposse/terraform-aws-efs-backup/blob/master/templates/sns.yml), as that’s where the TopicArn output is set. I checked the terraform docs, and it seems aws_cloudformation_stack does output a map based on the template, but I have no clue as to why TopicArn isn’t in that map.
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
looks like it was changed in https://github.com/cloudposse/terraform-aws-efs-backup/commit/f2e6705eba56926f75e2e81356cf16d3c9e86e06
Terraform module designed to easily backup EFS filesystems to S3 using DataPipeline - cloudposse/terraform-aws-efs-backup
(don’t use the master branch anyway, pin to a release)
I pinned it to 0.8.0 but might try an older version and see if that works. But AWS Backup also seems very promising.
looks like 0.8.0 changed the topic ARN logic, try 0.6.0
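Pinning looks like this (inputs elided):

module "efs_backup" {
  source = "git::https://github.com/cloudposse/terraform-aws-efs-backup.git?ref=tags/0.6.0"

  # ... same inputs as before ...
}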
How do you guys arrange components of your system in terraform? We started building out logical components that had their own remote state files, with those remote state files being pulled in as data for components that relied on them, but it feels a little bad because it relies on the state file being present instead of terraform building a dependency graph for you. Not sure if we should go back to components and all dependents sharing the same state file.
https://github.com/cloudposse/terraform-root-modules/tree/master/aws is all different state files, doing the same thing as you mentioned above
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
and other mechanisms other than pulling in the remote state directly e.g. lookup via tag
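Sketches of both mechanisms (bucket, key, output, and tag values are stand-ins):

# option 1: read outputs from the state the VPC team manages
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "example-terraform-state"
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# consumers reference whatever outputs that state exposes,
# e.g. "${data.terraform_remote_state.vpc.vpc_id}"

# option 2: look the VPC up by tag instead of coupling to the state file
data "aws_vpc" "dev" {
  tags = {
    Name = "dev1-vpc"
  }
}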
there is no silver bullet in terms of dependencies between your top level modules (different TF state)
There are some good reasons to split states across logical boundaries of resources
also because of Terraform “fun”
I was originally splitting it up since I wanted our team to manage the underlying vpc while devs could create whatever infrastructure they wanted to and just reference my state file
have you guys run into any issues using this approach?
It does force you to know which piece of infra needs to be built first
and in the case of DR, might cause some confusion
you can always add a Makefile or script that executes things in the necessary order
Ah, yeah thats a good idea. Thanks loren!
I’ve seen it done that way
Make sure all resources folks are creating contain a set of consistent tags or something
so you know why your AWS bill is so high
where do your vanity domains point? to what dev1 created, or dev2?
or is this just in dev? across an org?
yeah its dev1, dev2, etc. Each team gets its own test environment complete with vpc, domain, etc
We build the modules, and have requirements for tags enforced by module variables
2019-02-22
Hiya, terraforming a vm, and using a remote-exec to install ansible on the VM and then running your cloudposse/terraform-null-ansible - keep getting ansible-playbooks command not found - (how do you resolve this??) we want to use vanilla images
AFAIK the module uses local-exec to run ansible, so you should have it installed locally, not on the remote vm.
Thank you
do you have something already?
Is there a place where a man can rant about terraform acceptance tests?
on a more serious note, do we have some hashicorpers here? I’ve got a weird error when doing acceptance tests.
no but it would be great to get some in the team
@Jake Lundberg (HashiCorp) do you know anyone at Hashicorp on the dev side who might be interested in joining our community? As a sizable hashicorp user base of pretty hardcore terraform users (323+ and counting), we would love it if we had some hashicorpers around
Our developers are pretty adamant about reducing social surface area. We have Google Groups for all of our products and the developers monitor those. If you have serious product issues, I’d suggest posting information there or opening issues in Github for the product in question.
While I love slack, it’s not a very good platform for long term management of issues.
What is the error @Nikola Velkovski?
While I love slack, it’s not a very good platform for long term management of issues.
Agree with that, but it’s a great way to build communities. Gitter just ain’t that. Having a means of short-form communication is essential for building the p2p relationships. I didn’t mean to imply for technical support.
@joshmyers I am creating a new feature for shield protection in the terraform-aws-provider, and I am getting weird errors when doing the acceptance test for it, when trying to create a shield protection for global accelerator
My best bet is on a bug in the go sdk
The other acceptance tests are running fine though, it’s only this one
@Jake Lundberg (HashiCorp) thanks for the info will do some more testing and will open an issue accordingly.
Weirdly enough I don’t get the error when I am just running the terraform apply with the same terraform template without the test framework.
@joshmyers I was able to fix the error I was getting and now the pr is live https://github.com/terraform-providers/terraform-provider-aws/pull/7721
Fixes #1769 Changes proposed in this pull request: create a aws_shield_protection resource. Add documentation for aws_shield_protection resource. Output from acceptance testing: Note: The accepta…
The error is not fixed per se, but I chose to test the import with an EIP rather than Global Accelerator
Otherwise I got a wrong endpoint host for global accelerator, even though the provider specifies it explicitly
-- FAIL: TestAccAWSShieldProtection (6.21s)
testing.go:538: Step 0 error: Error applying: 1 error occurred:
* aws_globalaccelerator_accelerator.acctest: 1 error occurred:
* aws_globalaccelerator_accelerator.acctest: Error creating Global Accelerator accelerator: RequestError: send request failed
caused by: Post https://globalaccelerator.us-east-1.amazonaws.com/: dial tcp: lookup globalaccelerator.us-east-1.amazonaws.com: no such host
So the testing framework doesn’t handle well cases where you have 2 or more resources that have specific endpoints (usually in 1 region)
In this case shield (us-east-1) and global accelerator (us-west-2)
Nice @Nikola Velkovski!
hi everyone, could someone direct me on how to import data (a bucket) from another state?
should be something like this:
1) terraform init with your current state bucket
2) terraform state pull > terraform.tfstate to back up the state
3) terraform init with your new bucket; it should ask you if you want to import the state file
Right now, I assume that if I apply my terraform, it will fail ’cause the bucket already exists. I want to keep it, and the old state could be removed… how do I need to proceed?
are you using our tfstate-backend module?
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
So you have two different terraform configs (states) that create the same bucket?
Want to move it from one state file to another? Is this bucket being used and cannot be destroyed/recreated?
Ever since we implemented support for configuring Cloudflare via Terraform, we’ve been steadily expanding the set of features and services you can manage via this popular open-source tool.
2019-02-24
has anyone used terraform with the AWS App Mesh yet? https://www.terraform.io/docs/providers/aws/r/appmesh_virtual_node.html#
Provides an AWS App Mesh virtual node resource.
2019-02-25
Automatically convert your existing AutoScaling groups to significantly cheaper spot instances with minimal(often zero) configuration changes - AutoSpotting/terraform-aws-autospotting
2019-02-26
Anyone got a good and up to date best practice for how to structure terraform files?
Thank you :-)
Guys, I am trying to use this module https://github.com/cloudposse/terraform-aws-ssm-parameter-store
Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber. - cloudposse/terraform-aws-ssm-parameter-store
And I wanted to pass a map variable via a file. I tried using a template, but it converts the map into a string, causing an error.
any advice?
@Mohit - One option that I can see is to use tfvars directly. -var-file=foo.tfvar and put your map variable there
@aaratn has a good suggestion
@Mohit can you use tfvars
files?
there’s also the file(...)
interpolation
Thank you. @Erik Osterman (Cloud Posse)
export TF_VAR_amap='{ foo = "bar", baz = "qux" }'
This can be used as well if you want to leverage environment variables
also, parameter store only supports strings; it has no concept of terraform data structures
Yes @Erik Osterman (Cloud Posse).
tfvars seems okay for me. It seems template_file only supports strings.
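For anyone landing here later, a minimal sketch of the tfvars approach (0.11 syntax; the variable and file names are made up):
# variables.tf
variable "amap" {
  type = "map"
}

# foo.tfvars -- a map can be written natively here, no templating needed:
#   amap = {
#     foo = "bar"
#     baz = "qux"
#   }

# then run: terraform apply -var-file=foo.tfvars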
2019-02-27
Anyone who uses Dependabot with tf modules in subfolders?
Ah ok, thanks.
@loren So if I tag a new release within the modules.git repo, will it automatically detect that only folderX changed and push a new version for that?
yep
Nope, using it, but all mine are top level that it monitors
Hi folks. Attempting to setup VPC peering across multi accounts: https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account
I would say my TF file is standard:
data "aws_vpcs" "requester_vpc" {
tags = {
Name = "${terraform.workspace}-vpc"
}
}
module "vpc_peering" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=master>"
namespace = "he"
stage = "${terraform.workspace}"
name = "vpn"
requester_vpc_id = "${data.aws_vpcs.requester_vpc.vpc_id}"
requester_aws_assume_role_arn = "arn:aws:iam::xxx:role/vpc-admin"
requester_region = "${var.region}"
accepter_vpc_id = "${var.vpn-vpc}"
accepter_aws_assume_role_arn = "arn:aws:iam::xxx:role/vpc-admin"
accepter_region = "${var.region}"
}
However my error doesn’t mean a whole lot to me… I can’t find the reference in the readme or code for the route_table:
Error: Error refreshing state: 2 error(s) occurred:
* module.vpc_peering.data.aws_route_table.requester: data.aws_route_table.requester: value of 'count' cannot be computed
* module.vpc_peering.data.aws_route_table.accepter: 2 error(s) occurred:
* module.vpc_peering.data.aws_route_table.accepter[1]: data.aws_route_table.accepter.1: Your query returned no results. Please change your search criteria and try again.
* module.vpc_peering.data.aws_route_table.accepter[0]: data.aws_route_table.accepter.0: Your query returned no results. Please change your search criteria and try again.
Does this mean anything to anyone? Please can anybody suggest what is causing this etc. My VPC is being created with this successfully:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "1.57.0"
name = "${terraform.workspace}-vpc"
cidr = "${local.cidr}.0.0.0/16"
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
private_subnets = ["${local.cidr}.0.1.0/24", "${local.cidr}.0.2.0/24", "${local.cidr}.0.3.0/24"]
public_subnets = ["${local.cidr}.0.101.0/24", "${local.cidr}.0.102.0/24", "${local.cidr}.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
tags = {
Terraform = "true"
Environment = "${terraform.workspace}"
}
}
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
Scanned through the chat histories and nothing quite covers my issue, in either the terraform or kubernetes chats :'
Would it help if I made the VPC with https://github.com/cloudposse/terraform-aws-vpc instead of the other one?
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - cloudposse/terraform-aws-vpc
@oscarsullivan_old i had the same problem
you have subnets not assigned to a default route table
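i.e. the module's route table lookups only succeed when every subnet has an explicit association; a minimal sketch of fixing that on the accepter side (both variables here are hypothetical):
resource "aws_route_table" "accepter" {
  vpc_id = "${var.accepter_vpc_id}" # hypothetical variable
}

resource "aws_route_table_association" "accepter" {
  count          = "${length(var.accepter_subnet_ids)}" # hypothetical variable
  subnet_id      = "${element(var.accepter_subnet_ids, count.index)}"
  route_table_id = "${aws_route_table.accepter.id}"
}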
hey @oscarsullivan_old, @nutellinoit thanks for pointing that out
@oscarsullivan_old just an example, if you use this VPC module https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L36 with this subnets module https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L46, the route tables will be assigned correctly
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
although you can use any VPC module, for sure
and any subnets module that suits your needs, for example https://github.com/cloudposse?utf8=%E2%9C%93&q=subnets&type=&language=
i think his problem is the acceptor vpc on the other account
perhaps created manually
Correct! Default VPC on another account.
perhaps created manually
@oscarsullivan_old here is another example where we create a backing service VPC https://github.com/cloudposse/terraform-root-modules/blob/master/aws/backing-services/vpc.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
Accounts:
1 -> Contains non-IaC prod + VPN + dev
2 -> Sandbox for IaC
Goal: VPC peer account 1’s only VPC to all of account 2’s so that I may benefit from being ‘internal’ when on VPN and can use r53 private zones for instance in account 2
then do peering with the VPC created by kops
https://github.com/cloudposse/terraform-root-modules/blob/master/aws/kops-aws-platform/vpc-peering.tf
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
(although they are in the same account so uses a diff peering module)
Actually having a really hard time setting up multi account architecture
yea, that’s not easy first time
we spent a lot of time thinking about diff approaches
i’ll send you some links to get you started
Thanks. AWS documentation has NOT been useful so far for this subject
yea, it’s a lot of stuff with no easy to follow steps
so, we tried two different approaches setting up multi-account architectures
the first one is described here https://docs.cloudposse.com/reference-architectures/
examples of the accounts are here https://github.com/cloudposse?utf8=%E2%9C%93&q=cloudposse.co&type=&language=
all of those repos use the module catalog https://github.com/cloudposse/terraform-root-modules (it’s just an example, you will need to fork the repo and update it for your own needs)
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
Thanks
That was a lot of links and directions in one go.
For my goal
, as per above, do I need to re-do my architecture for accounts or use cloudposse’s module for VPC and subnets?
but then we have a new approach of setting it up https://github.com/cloudposse/reference-architectures
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
where we generate everything (repos, Dockerfiles, etc.) from templates
For my goal
, as per above, do I need to re-do my architecture for accounts or use cloudposse’s module for VPC and subnets?
so you can try using the modules and set up two VPCs with subnets in two diff accounts, and test it
without using the old (prob manually created) resources
once it’s working for you, you can adjust for the existing resources, or even import them if they were created manually
old (prob manually created) resources
Yep. Account 1 (aka the original, non-IaC account) is using the Default VPC
Thanks. Actually starting at a new company and bringing IaC into their firm.
My account plan is as follows:
Account 1: Sandbox
Account 2: Prod
Account 3: Other (Dev/Staging/ETC.)
Account 4: MGMT (Jenkins & VPN)
Account 5: Existing Prod (To be decommissioned)
4 and 5 are technically the same… but I prefer to provision it all in new accounts so 5 can just be left blank as the parent org account.
Looks similar to reference-architectures
so you can try using the modules and set up two VPCs with subnets in two diff accounts, and test it
Why this instead of doing a data reference to the existing one?
you can, as long as the subnets and route tables are setup correctly as @nutellinoit mentioned
Ah I see
(unless there are other issues)
And doing it via TF on both accounts permits that, from the get-go.
Gotcha.
yes, using the TF modules will create everything, which is good for testing
then you can try the existing one. But I see you already tried and got the count
error :slightly_smiling_face: which could be related to the route table not setup correctly (or to some other issues since the count
error pops up everywhere)
Fab thanks, I’ll give that a go over the next day.
I would like to just move the existing VPN that is in one monolithic manual + default VPC to another VPC in the same account, then peer the two VPCs (one IaC, one default with prod on it). But peering the VPN IaC VPC to my other accounts sounds like a pain
That all depends on whether I'd get the same issues when peering in the same account
Damn. Think it’d be worthwhile for me to just create a whole new VPN in IaC and forget about trying to connect to the old one and migrating it haha
yea, that might be better, because if you do VPN -> VPC1 -> peering -> VPC2, you have to carefully allocate all CIDRs on all sides so they don't overlap
Yep currently using a mapping solution to ensure zones/envs/workspaces never overlap CIDRs. Happy to share code.
would be nice to see it, thanks
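(In the meantime, the simplest shape such a mapping takes is a map keyed by workspace; a minimal sketch with made-up values, not the actual code mentioned above:)
variable "cidr_map" {
  type = "map"

  default = {
    dev     = "10.0.0.0/16"
    staging = "10.1.0.0/16"
    prod    = "10.2.0.0/16"
    mgmt    = "10.3.0.0/16"
  }
}

locals {
  # each workspace (stage) gets its own non-overlapping CIDR
  cidr = "${lookup(var.cidr_map, terraform.workspace)}"
}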
2019-02-28
would be nice to see it
Will polish it up and pop it into a repo @ the weekend
Is there something like dependbot for terraform modules?
dependabot
it supports terraform modules
Based on discussion in dependabot/feedback#118.
Terraform module to provision a standard ALB for HTTP/HTTP traffic - cloudposse/terraform-aws-alb
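(For context, Dependabot picks up module sources pinned to a version it can bump; a hypothetical example, the ref value is arbitrary:)
module "alb" {
  source = "git::https://github.com/cloudposse/terraform-aws-alb.git?ref=tags/0.2.0"
}

# registry sources with a version constraint work too:
# module "vpc" {
#   source  = "terraform-aws-modules/vpc/aws"
#   version = "1.57.0"
# }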
oh, didn’t know. Thanks
@Andriy Knysh (Cloud Posse) I created my new accounts (manually, not with IaC )
It looks as per screenshot attached.
Realised when I switch roles into the sub accounts I can't then create a new key… so now I'm wondering how I tell terraform to use the sub accounts
Hi everybody. I have a question about a chicken-or-egg problem.
After registering with AWS, I have a root account. I want to prepare some resources for remote backend state (an IAM user, a KMS key for encrypting the S3 bucket, the S3 bucket itself, and a DynamoDB table for locking). I could use the root access_key, but IAM best practice doesn't recommend that. What should I do? Create a new IAM user manually, or is there another solution?
Yea, we ran into this problem with our reference architectures
there are just lesser “evils” but no silver bullets
In our case, step (1) is to provision the bootstrap user: https://github.com/cloudposse/terraform-root-modules/tree/master/aws/bootstrap
Example Terraform service catalog of “root module” invocations for provisioning reference architectures - cloudposse/terraform-root-modules
Then we use that to setup all the account scaffolding on a cold start
when we’re all done, we disable the module which causes the user to be deleted.
Thank Erik, let me to read this bootstrap
This was implemented as part of our ref arch
Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
our ref-arch automation is not totally polished though and is a [wip]
I feel there’s overlap Erik between terraform-root-modules (to create root user in cold start account) and reference-architecture’s make root
right?
root
is an overloaded term
it needs to be taken in context of what it’s doing
Create a temporary pair of Access Keys. These should be deleted afterwards.
Export your AWS “root” account credentials as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (this is temporary for bootstrapping).
Still need root access_key for bootstrapping
yes, so the problem is the master credentials cannot be used with assume roles
and the only way to access sub accounts is with assume roles
so when provisioning the AWS account architecture (e.g. 7 sub accounts), we use this module to first provision a user in the root (parent) AWS account, leveraging the master root credentials
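Concretely, each sub account is then reached by assuming a role with that user's credentials; a minimal sketch (the account ID and role name are placeholders):
provider "aws" {
  region = "us-east-1"

  # credentials for the IAM user in the root (parent) account come from
  # the environment or aws-vault; the role itself lives in the sub account
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/OrganizationAccountAccessRole"
  }
}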
I see. So my issue was I was IAM into ROOT, then IAM into SUB… meaning no go. But if I ROOT into ROOT, I can then IAM into SUB and generate USERS to generate KEYS
@keen might have some other thoughts
(we’ve been talking about this stuff in #geodesic)
he’s working on account automation right now
Hrm….. I see
hey, sorry for bumping this thread, I have some questions:
- After running bootstrap, you will have an IAM user (this user can assume another IAM role), right? -> Only use this IAM user for terraform?
- The IAM role has AdministratorAccess, so the above IAM user can assume it and get full access to AWS? I read some blogs that don't recommend using AdministratorAccess; they recommend PowerUserAccess instead.
- But how do you create a new IAM user then?
- I see you don't enable MFA for the above IAM user. Is that OK security-wise?
Actually I'm in a bind. I've decided to use Geodesic on my existing account and no reference architecture (stupidly, because we don't have a spare domain that we'd then like to use in the future at IaC go-live).
I created an IAM role with the SystemAdministrator policy on all my sub accounts. On my security account, only the people who need this access have an IAM user. That IAM user's key pair is added to aws-vault. The other sub accounts then source security and list the role_arn generated above. Anyone who needs portal access of varying access levels uses AWS SSO.
This is technical debt in my backlog now; however, it is debt that can be changed easily in the future… I just had to move on, as it was blocking me for like 5 days.
DevOps Engineer, passionate teacher, investigative philomath. - osulli
So the limitations are noted in the doc. The reason it is technical debt is that anyone with an IAM user on security now has CLI SystemAdministrator access on all other sub accounts, i.e. prod. I tried adding a condition to the IAM role on who can assume it, but that didn't work.
Recap:
- IAM user on security account only for people who need CLI access to environments
- All other access handled centrally on SSO as it’s easier and contained
- Offboarding or onboarding requires creating a user in SSO, plus one in the security account if they need CLI access, as SSO-generated CLI keypairs do not work with aws-vault
- this works on existing account and geodesic
- this can be changed easily in the future as it’s not so fundamental like an account name
You create an IAM user (which can assume the sysadmin access role); this user has an access_key + secret_key (stored in aws-vault) for terraforming. Don't enable MFA for this user, right?
You can have it off or on
@xluffy I think creating more specialized roles for your organization is better
the roles we provide are overly permissive
but what works for one org permissions-wise won't work for the next, so we've not addressed it
If I want to use tf for managing IAM, I need to grant AdminAccess to the tf user.
That makes me more confused
haha yea
so there’s always the coldstart problem
even that is tricky for us to generalize, because different orgs will have a different starting off point
our example ref-arch assumes a virgin account, whereby you only have the master creds - the bare minimum
from there, it uses the bootstrap
module to create the user for bootstrapping
(which can later be destroyed)
Hi Guys. I am using this module https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account and it works like a charm for VPCs within the same account. However, I am struggling a bit setting it up across multiple accounts, especially around specifying the owner_id. Could you please point me to an example which has the correct parameters? I couldn't find anything related to this in the chat archives or the documentation. Apologies if this question is very basic; I am a terraform newbie and have only been checking it out since yesterday
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
I don’t recognise owner_id
in https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
well, account_id. I just need to know how to specify the account_id for the accepter account
accepter_account_id = "${join("", data.aws_caller_identity.accepter.*.account_id)}"
basically how do I specify the account_id of the other AWS account?
So the way it works is using terraform AWS providers
define a new provider for the secondary account which shall be assumed
designate one account as the accepter and the other as the requester
name the providers like this:
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
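The snippet itself didn't survive the archive, but it would look roughly like this (role ARNs and region are placeholders; the data source is how the accepter account_id gets resolved rather than hardcoded):
provider "aws" {
  alias  = "requester"
  region = "us-west-2"

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/vpc-admin" # requester account
  }
}

provider "aws" {
  alias  = "accepter"
  region = "us-west-2"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/vpc-admin" # accepter account
  }
}

data "aws_caller_identity" "accepter" {
  provider = "aws.accepter"
}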
Does anyone have a way of sharing a DynamoDB table across multiple accounts (it is storing the lock state of TF).. Have browsed https://github.com/cloudposse/terraform-aws-dynamodb but no such feature
Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
do you want to use dynamodb for tfstate locking?
Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
Yes exclusively for tfstate locking , unless you are suggesting there is a better method (I am not locked to using it) @Erik Osterman (Cloud Posse)
we do that here: https://github.com/cloudposse/terraform-aws-tfstate-backend
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
our strategy though is to share nothing
thus deploy one statebackend per aws account
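For example, one invocation of the module per account; a minimal sketch (input values are placeholders, see the module's README for the full set):
module "terraform_state_backend" {
  source    = "git::https://github.com/cloudposse/terraform-aws-tfstate-backend.git?ref=master"
  namespace = "eg"
  stage     = "prod"
  name      = "terraform"
  region    = "us-east-1"
}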
But how would you then use Workspaces? In a backend config you cannot use interpolation. Do you have different backend files per workspace?
we don’t use workspaces to separate stages
For instance:
terraform {
  backend "s3" {
    encrypt        = true
    bucket         = "xx-xx-state-xx"
    region         = "eu-west-2"
    dynamodb_table = "terraform-state-lock-dynamo"
    key            = "aws/ops_test/terraform.tfstate"
  }
}
What do you use? Accounts?
I’m using Workspaces to dynamically switch between ~/.aws/credentials
profiles and therefore run against different accounts per stage
But my last blocker is now that dynamodb can't be shared. I have a single bucket in a mgmt
account
i.e.
provider "aws" {
region = "${var.region}"
profile = "${terraform.workspace}"
}
in our parlance, environments all exist in the same account
and one stage per account
Ah, I use stage and environment interchangeably
dev / stage / prod / sandbox == env == stage
Workspaces allow the use of multiple states with a single configuration directory.
there are different interpretations of this, but our interpretation is that workspaces should not be used for separating production from dev, etc
organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams. In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.
Right. But I DO use different AWS accounts per stages (dev / staging / prod /sandbox). I ONLY use TF Workspaces to switch my AWS profile and control which account I run against
Now I say this out loud, there is probably a MUCH better way of switching that lol
I can’t believe there’s not a guide for this sort of thing. So common yet so little information on how a standard workflow should look
Feels pointless for every person to have to research and come up with their own way, maybe finding a document slightly guiding them
well - welcome to sweetops!
that’s exactly our sole purpose
provide a set of best practices for how to do that
I’d love to once I figure them out
I love documentation
we recommend using aws-vault
(our documentation is definitely lagging)
I love teaching and I’m a good communicator so I love writing docs!
@oscarsullivan_old let’s connect offline
Right, perfect. Didn’t know what aws-vault was before reading that. Thought it was a dupe of hashicorp vault
Best candidate so far.. https://registry.terraform.io/modules/rafaelmarques7/remote-state/aws/1.4.0
this script should be executed once and once only.
if that execution fails, you should delete all the resources created prior to the failure, and retry.
but these…
I’d obviously just like to do this.. but I have no idea where to put such a policy when I’m using SSO https://www.terraform.io/docs/backends/types/s3.html#dynamodb-table-permissions
Terraform can store state remotely in S3 and lock that state with DynamoDB.
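Per that page, the lock table needs only three DynamoDB actions. Where to attach the policy under SSO is the open question, but the policy itself would look like this sketch (the account ID is a placeholder; the table name is taken from the earlier backend snippet):
data "aws_iam_policy_document" "terraform_lock" {
  statement {
    actions = [
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem",
    ]

    resources = [
      "arn:aws:dynamodb:eu-west-2:111111111111:table/terraform-state-lock-dynamo",
    ]
  }
}

resource "aws_iam_policy" "terraform_lock" {
  name   = "terraform-state-lock"
  policy = "${data.aws_iam_policy_document.terraform_lock.json}"
}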
We’d like to invite the community to try the first Terraform 0.12 Beta release. This release includes major Terraform language improvements and a tool for automatically upgrading m…
hooray