#terraform (2018-08)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2018-08-01
wow, thanks, those projects look great!
something related I would like to implement: https://github.com/mozilla-services/pytest-services
pytest-services - Unit testing framework for test driven security of AWS configurations, and more.
hi @pmuller
for security testing on AWS, take a look at this tool https://github.com/dowjones/hammer
hammer - Dow Jones Hammer : Protect the cloud with the power of the cloud(AWS)
The new tool, called Hammer, was developed partly in response to the growing need for automation amid talent shortages and the fast-paced nature of software development, said Dow Jones CISO Jaswinder Hayre.
so I'm looking at the terraform-aws-s3-log-storage module. looks awesome. Here's a loaded question.. I've got a custom Elastic Beanstalk platform, using your terraform-aws-elasticbeanstalk module..
Is there a way you’d recommend to make that bucket name exposed to eb instances? (not manually).
Hrmm… so basically, how to pass the bucket name from the log storage module to the beanstalk module?
I think I'll be using fluentd to push logs, and I've run into a bit of a head-scratcher: the config file for td-agent doesn't support environment vars so
yes
td-agent
~ fluentd?
yes
just a non-gem version
ok
ok, so taking a further step back
you’re running fluentd on the beanstalk instances to forward logs
yep
ok, gotcha. sec
Ok, so the td-agent config should get parameterized
We typically use gomplate for that
then using gomplate, you can consume env variables
then pass the bucket name as an env to the beanstalk
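For example, a minimal sketch (the template file name and the LOG_BUCKET env var are hypothetical):
# td-agent.conf.tmpl -- gomplate substitutes env vars when rendering
<match **>
  @type s3
  s3_bucket {{ .Env.LOG_BUCKET }}
</match>
then render it on the instance before starting td-agent:
LOG_BUCKET=my-log-bucket gomplate -f td-agent.conf.tmpl -o /etc/td-agent/td-agent.conf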
I haven’t completely settled on fluentd. but it leaves a bit of flexibility instead of marrying to data firehose..
hm
thanks
gomplate documentation
okay that just looks like magic
lol
2018-08-02
hey, cloudposse/terraform-aws-cloudwatch-logs in description says … for use with fluentd. Do you guys have an example of how you use it with fluentd?
We use it with Kubernetes
I can share how we do that
hey @Ziad Hilal! Glad you signed up
@Ziad Hilal has joined the channel
Not Terraform, but .. should be here I think.. I had a fight with a daemon which was running inside a wrapper. It is sidekiq, and it’s important that sidekiq receives the SIGTERM. The wrapper script was necessary as it does a few other things.
Having this inside the dispatch.sh
SIDEKIQ_COUNT=3 SIDEKIQ_MAXMEM_MB=2000 SIDEKIQ_PRELOAD=sidekiq_swarm exec sidekiqswarm -t 25 -C config/sidekiq.yml
is not enough as it would still be a child of the entrypoint.
So in the Dockerfile this is what did the trick.
CMD exec /$APP_DIR/bin/dispatch.sh
@maarten was that meant for this channel?
yeah,.. I just wanted to share something others might stumble upon at one moment.. Unsuited ?
Oh, I just maybe don't get the full context.
Aha, adding the exec part
exactly.. on 2 places
so basically, it's running /bin/sh -c 'exec /$APP_DIR/bin/dispatch.sh', which then replaces PID1 with dispatch.sh
I’ve also run into problems with signal handling and shell scripts with docker.
and then inside dispatch sidekiq replaces it
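A condensed sketch of the whole chain, using the two snippets above:
# Dockerfile: exec makes dispatch.sh replace the /bin/sh -c shell as PID 1
CMD exec /$APP_DIR/bin/dispatch.sh
# dispatch.sh: exec again, so sidekiqswarm replaces dispatch.sh and receives SIGTERM directly
SIDEKIQ_COUNT=3 SIDEKIQ_MAXMEM_MB=2000 SIDEKIQ_PRELOAD=sidekiq_swarm exec sidekiqswarm -t 25 -C config/sidekiq.yml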
we can also have a docker channel
haha
yea, probably a good idea.
I’ll remove my stuff here then
#docker created
that’s fine - we’ll start new ones there.
2018-08-03
hey so for the terraform-aws-elasticsearch module. it's saying that EBS storage must be selected for t2.small.elasticsearch.
is that not what’s used by default?
@Andriy Knysh (Cloud Posse)
i’m gonna do more research into ES for now
well apparently I'm failing at setting up ES from this tf module. I've got a route53 domain, that all worked, the vpc endpoint is in public subnets.. but I cannot access the ES or Kibana endpoints at all.
and security group allows all traffic from 0.0.0.0/0
you deployed it in the public subnets and opened up the security groups?
i believe so. I’ve just added the default security group just to try. will see if that helps
i have iam access policy configured. im thinking thats the issue
terraform-root-modules - Collection of Terraform root module invocations for provisioning reference architectures
here’s an example of how andriy deployed it
tnx
might be a dumb question. but what is the purpose of this:
in terraform-aws-elastic-beanstalk-environment
It's part of this horrible/nasty hack to make it easy to pass envs
terraform-aws-elastic-beanstalk-environment - Terraform module to provision an AWS Elastic Beanstalk Environment
perhaps there's a better way of doing it nowadays in HCL
We had a more elegant way using null_resource, but it would lead to frequent errors like cannot compute count of dynamic variable (or something like that)
So basically, the module always defines N fixed environment variables
If the user provides a value, it gets used; otherwise you see something like DEFAULT_ENV_20=UNSET in your beanstalk environment (which is just a placeholder)
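A minimal sketch of the pattern, assuming a hypothetical env_vars map input (the real module repeats the setting block for each of its N fixed slots):

variable "env_vars" {
  type    = "map"
  default = {}
}

# inside the aws_elastic_beanstalk_environment resource, slot 1 of N:
# concat() guards against the element()-on-empty-list error and falls back
# to the placeholder name/value when no user-supplied var fills the slot
setting {
  namespace = "aws:elasticbeanstalk:application:environment"
  name      = "${element(concat(keys(var.env_vars), list("DEFAULT_ENV_1")), 0)}"
  value     = "${element(concat(values(var.env_vars), list("UNSET")), 0)}"
}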
odd
I was looking at an existing platform. I wonder why they’re not showing how to use their container config in examples.
with a custom platform, they just say here, specify :environment settings, and bam you got a platform. but then for example, looking at Ruby / passenger platform they have :container config as well, with nicely defined json for settings and what not.
What Changed the way the custom ENV vars are calculated in awsapplication:environment setting Why Using null_resource to generate key/value pairs for ENV vars like this: resou…
Here is more context
What Fix element() for empty list workaround Fix key-value association Why For empty env_vars there was error element() may not be used with an empty list in: hashicorp/terraform#9858 keys() ret…
@pmuller I still think we should consider this: https://github.com/cloudposse/terraform-aws-rds-cluster/issues/26
It would be a nice security enhancement
hehe, love this approach!
has anyone tried https://github.com/juliosueiras/vim-terraform-completion ?
vim-terraform-completion - A (Neo)Vim Autocompletion and linter for Terraform, a HashiCorp tool
2018-08-06
What Add an option to use a sub-provider for roles, so they can be created in a different account than the users who can assume it. Why We have a use-case where we need admin and readonly roles in …
any opinions on taking this approach?
@sarkis @Andriy Knysh (Cloud Posse)
2018-08-07
please see my comments on that: https://github.com/cloudposse/terraform-aws-iam-assumed-roles/pull/7#issuecomment-411117761
What Add an option to use a sub-provider for roles, so they can be created in a different account than the users who can assume it. Why We have a use-case where we need admin and readonly roles in …
@jamie shouldn’t we be using a label here? https://github.com/cloudposse/terraform-aws-efs-cloudwatch-sns-alarms/blob/master/alarms.tf#L13
terraform-aws-efs-cloudwatch-sns-alarms - Terraform module that configures CloudWatch SNS alerts for EFS
Yes
Ok, I’ll log an issue
All of the modules need to have a revision to handle context
how are you doing man?
Exhausted!
from vacation?
Yeah
Way behind from vacation so playing hard catchup
My ticket is up tomorrow! Tulum, MX
@jamie some of your changes to terraform-null-label do not respect the enabled flag. So when I test it with var.enabled=false, Terraform still wants to recreate some resources
when you change the module, you might want to take a look at that too
I will address that. Sorry that should have been picked up at review. But it was pushed through quickly.
Thank you for testing.
2018-08-08
I’m trying to figure out how to use https://github.com/cloudposse/terraform-aws-rds-cloudwatch-sns-alarms with https://github.com/cloudposse/terraform-aws-sns-lambda-notify-slack am I missing something obvious?
terraform-aws-rds-cloudwatch-sns-alarms - Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic
terraform-aws-sns-lambda-notify-slack - Terraform module to provision a lambda function that subscribes to SNS and notifies to Slack.
@jamie can maybe provide an example for this. I am afk this week. @sarkis also might have a similar example ready that we are using for ECS.
Thank you
https://github.com/cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms appears to take the SNS as a parameter
terraform-aws-ecs-cloudwatch-sns-alarms - Terraform module to create CloudWatch Alarms on ECS Service level metrics.
That would work with the slack tf then.
Great! Hope that unblocks you.
Put a TODO for myself to get some better example in the repos for how they work together..
ah, I think I see! You can provide a topic name to the slack integration to use, and set create_sns_topic to false. That’s not super clean, as you end up with a lambda per sns topic, and if I need one for each module of alarms I use (i.e. one per rds, one per asg) that’s going to rack up a count very quickly.
It would be nice if they composed.
Terraform as a language is pretty limited. To reduce the complexity of an individual module, we like to keep them logically separated. This reduces the scope and makes them easier to test. We also like to think this makes them even more composable. Users can mix and match as they see fit without being overly opinionated.
For example, look at our ecs web app module. This is a great example of how many modules are composed to implement a powerful opinionated module.
In this case I think the problem is that the slack lambda expects a one-to-one mapping, and reality is a many-to-one. It should take a list of sns topics to subscribe to.
If you want to extend it, we'll promptly review the PR :-)
Thinking about it. Part of the problem is that Cloudposse is delegating to another terraform module altogether. So that would mean taking over maintenance of that module.
That other module is by @antonbabenko
Check with him - he maintains a lot of great modules and would probably accept the PR or provide insights
@jamie started a Cloudposse slack notification module, but don’t think he has had the time to finish it.
Where is my sense of completion
I’ll get to sorting that tomorrow for you!
Haha you certainly set off on an ambitious path. Think you wrote (or started) like 20 modules in a month.
I suspect policies are part of all this, but I’m not sure. Policies in AWS don’t ever compose nicely, you end up centralizing your SNS because you need an RDS policy and you can’t just append that to an existing one. I feel like Terraform should let you “append” to resources defined elsewhere, but that’s a very hard set of constraints to match I accept.
yeah, that's it. The RDS module creates its own SNS topic because it doesn't just do CloudWatch; it also configures RDS event notifications to the topic. This means the policy isn't just CloudWatch, but CloudWatch + RDS. (Although the description is wrong - typo-level PR incoming)
:wave: Hello! :slightly_smiling_face: I'm attempting to use terraform-aws-cloudtrail for the first time and just hitting an issue with the event_selector. It defaults to {} but this causes aws_cloudtrail to create an event selector anyway. We then get this on every apply:
~ module.cloudtrail.module.cloudtrail.aws_cloudtrail.default
event_selector.#: "0" => "1"
event_selector.0.include_management_events: "" => "true"
event_selector.0.read_write_type: "" => "All"
Removing the event_selector variable/parameter from terraform-aws-cloudtrail 'fixes' the issue.
Looks like null / undefined parameters will be coming in HCL2, but until then I'm not sure what the answer is.
Hrmmm I am not sure off the top of my head. @Andriy Knysh (Cloud Posse) might have some suggestions. He originally implemented it.
@antonbabenko has joined the channel
hi @paul, give me some time, I'll look into terraform-aws-cloudtrail event_selector. You can open a PR with your fixes so it'd be easier to review and test
@Andriy Knysh (Cloud Posse) :wave: At the moment I don't have a fix, other than commenting out the event_selector. We're discussing it internally at the moment; if we think of a neat way around it then I'll PR it.
@Andriy Knysh (Cloud Posse) @paul out of curiosity, does the string "" work as blank? Or maybe doing event_selector = []?
I’ll give it a go, let’s find out.
Actually, the variable type is set to a ~map~ list.
hehe I hit something similar this morning.
of maps.
Giving it a go.
Yep. It’s the same for launch_configuration root_block_device, for no particular reason, perhaps for nicer syntax as:
event_selector {
…
}
Rather than
event_selector = {
…
}
tbh, the pain of this goes away with terraform 0.12 if you can wait, now that conditionals can return lists you can do:
event_selector = "${var.event_selector == {} ? [] : [var.event_selector]}"
https://github.com/hashicorp/terraform/issues/12453#issuecomment-327266951 you could try this though if you’re a masochist.
(Basically join the k/vs of the event_selector with a separator, and then split it outside the conditional)
So if I do this..
variable "event_selector" {
type = "list"
description = "Specifies an event selector for enabling data event logging. See: <https://www.terraform.io/docs/providers/aws/r/cloudtrail.html> for details on this map variable"
default = []
}
It then works correctly.
It would then require people to supply a list of maps though, so it wouldn’t be backwards compatible.
I think.
You could also do that, which is what I decided to do internally. However, I think that changes usage slightly. It might be as simple as changing event_selector = {} to event_selector { }
Where would I make that change?
that change would be when using the module, so it's still a breaking change.
Ah.
Fiddling with different combinations.
Anyone have experience deploying the AWS AD service with terraform? (includes creating vpc, subnets, jumpboxes etc)
bah. No joy.
A map() function would make this so much easier (in the fp sense), as you could use compact & map.
@paul how about slice(list(var.event_selector), 0, length(var.event_selector) > 0 ? 1 : 0)
that took way too much fiddling to discover. Nice trick once you know it though
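Roughly how that wires into the resource; the surrounding arguments here are illustrative, not the module's actual code:

variable "event_selector" {
  type    = "map"
  default = {}
}

resource "aws_cloudtrail" "default" {
  name           = "default"               # illustrative
  s3_bucket_name = "${var.s3_bucket_name}" # illustrative

  # slice() returns [] for an empty map and [var.event_selector] otherwise,
  # so no event selector is sent when none was requested
  event_selector = "${slice(list(var.event_selector), 0, length(var.event_selector) > 0 ? 1 : 0)}"
}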
Oh, interesting. I’ll give that a try later this evening. (BST timezone)
Thanks.
Fellow brit?
@paul @dominic so I tested this example https://github.com/cloudposse/terraform-aws-cloudtrail/blob/master/examples/complete/main.tf
terraform-aws-cloudtrail - Terraform module to provision an AWS CloudTrail and an encrypted S3 bucket with versioning to store CloudTrail logs
when event_selector = [] (empty list), Terraform never tries to recreate resources
if we put a map inside a list, it always tries to recreate, regardless of whether the map is empty or populated
event_selector = [{}]
event_selector = [{
read_write_type = "All"
include_management_events = true
}]
in both cases
~ module.cloudtrail.aws_cloudtrail.default
event_selector.#: "0" => "1"
event_selector.0.include_management_events: "" => "true"
event_selector.0.read_write_type: "" => "All"
looks like a feature/bug
also the docs don't mention at all that event_selector should be a list; it looks like a map
Provides a CloudTrail resource.
so we can change it to
variable "event_selector" {
type = "list"
description = "Specifies an event selector for enabling data event logging. See: <https://www.terraform.io/docs/providers/aws/r/cloudtrail.html> for details on this map variable"
default = []
}
which will silence it in the case when we don’t need any event selectors
but will not help in other cases
@jamie any ideas on that?
@Andriy Knysh (Cloud Posse) I think we can have the best of both worlds using the slice() trick above. The other benefit being that we don't break backwards compatibility.
don’t worry about backwards compatibility, we use tags in all modules like here https://github.com/cloudposse/terraform-root-modules/blob/master/aws/cloudtrail/main.tf#L34
terraform-root-modules - Collection of Terraform root module invocations for provisioning reference architectures
so if we need to update to the new version, we update the tag and update the top-level module
I think backwards compatibility is always important, regardless of pinned versions. I imagine this is something that differs across programming communities. I also think the current interface is the most idiomatic.
@jamie Regarding SNS, RDS etc. After much hacking around terraform, I just managed to create a "terraform-sns-claims". Essentially modules like the RDS alarms module export a claims output with a value like ["cloudwatch", "rds"]. Internally, the sns module stores a statement relating to that claim. It then generates a policy from the set of claims. I need to find out if anything else actually pushes to SNS (e.g. autoscaling events perhaps), but this is a fairly tidy system for allowing the sns policy to be dictated by the alarms you want.
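A rough sketch of the claims idea (the names and the service-principal mapping are hypothetical, not the actual repo's code):

variable "claims" {
  type        = "list"
  description = "Services that need to publish to the topic, e.g. [\"cloudwatch\", \"rds\"]"
  default     = []
}

resource "aws_sns_topic" "default" {
  name = "alarms"
}

# Build the topic policy from whatever set of claims the alarm modules export
data "aws_iam_policy_document" "claims" {
  statement {
    sid       = "AllowClaimedServicesToPublish"
    actions   = ["sns:Publish"]
    resources = ["${aws_sns_topic.default.arn}"]

    principals {
      type        = "Service"
      identifiers = ["${formatlist("%s.amazonaws.com", var.claims)}"]
    }
  }
}

resource "aws_sns_topic_policy" "default" {
  arn    = "${aws_sns_topic.default.arn}"
  policy = "${data.aws_iam_policy_document.claims.json}"
}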
If you're hacking on the slack sns stuff tomorrow, I can probably publish my SNS work, and you can bring it into cloudposse if you wish (whatever license I need to use, I will). I'd love to not have to fork slack too. I'll probably look into contributing RDS event support at the same time.
Thanks @dominic ! Definitely feel free to ping me directly as well
@jamie, @Andriy Knysh (Cloud Posse), @Igor Rodionov and @maarten can all create repos under the Cloud Posse org.
Generally, we use the APACHE2 license. Just copy it from any one of our other repos.
Will do. Of course it's dependent on you guys liking the approach; my thinking it has merit means little
I’ll do it tomorrow then. Please push any suggestions you want and I’ll merge them as needed! Please and thank you!
Apologies, I didn't get a chance to try out the suggested solutions this evening. It's in my diary for the morning.
I think I'm leaning towards the changing-to-a-list solution. It's a far cleaner solution than slicing, and the AWS provider is expecting a list anyway.
@jamie references for the gh repos:
https://github.com/SevereOverfl0w/terraform-aws-rds-cloudwatch-sns-alarms https://github.com/SevereOverfl0w/terraform-aws-sns-claims
Usage example for both found in the rds repo.
terraform-aws-rds-cloudwatch-sns-alarms - Terraform module that configures important RDS alerts using CloudWatch and sends them to an SNS topic
terraform-aws-sns-claims - Create an SNS topic based on a list of claims
https://github.com/cloudposse/terraform-aws-cloudtrail-cloudwatch-alarms/blob/master/main.tf#L8 might be a better approach overall, uncertain of how multiple policies apply to an sns.
terraform-aws-cloudtrail-cloudwatch-alarms - Terraform module for creating alarms for tracking important changes and occurances from cloudtrail.
2018-08-09
@jamie I ended up forking the slack lambda, https://github.com/SevereOverfl0w/terraform-aws-notify-slack it’s not really in a pull-requestable state, but if you’re hacking on this it might serve as inspiration.
terraform-aws-notify-slack - Terraform module which creates SNS topic and Lambda function which sends notifications to Slack
Thank you
I’m on them now
I'm now onto monitoring our autoscaling groups. I see you're also responsible for ec2
FYI team, this environment variable isn't in the documentation last time I checked, but can get you around some annoying errors when destroying resources.
Thanks, I've added an issue to document this.
what This environment variable isn't in the documentation, but can get you around some annoying errors when destroying resources. export TF_WARN_OUTPUT_ERRORS=1 why It converts the errors that …
export TF_WARN_OUTPUT_ERRORS=1
It converts the errors that would normally halt the destruction of resources from a module into warnings, and therefore allows you to complete the destruction of a TF template
For terraformers who like dirty hacks and who are encountering issues with count.index inside a conditionally created resource with count set to length(var.of_a_list).. here's something that worked for me
create a data "template_file" with a count of the length of the list, with no conditions.
data "template_file" "custom_listen_host" {
count = "${length(var.custom_listen_hosts)}"
...
And refer to the template_file from the resource with the condition..
resource "aws_lb_listener_rule"
"host_based_routing_custom_listen_host" {
....
count = "${local.create && length(var.custom_listen_hosts) > 0 ? length(var.custom_listen_hosts) : 0 }"
values = ["${data.template_file.custom_listen_host.*.rendered[count.index]}"]
}
thanks @maarten for the example
Until 0.12 is there, everything is allowed I thought @Andriy Knysh (Cloud Posse)
not every hack I guess, but if it looks good, why not
this count = "${local.create && length(var.custom_listen_hosts) > 0 ? length(var.custom_listen_hosts) : 0 }"
could be simplified to count = "${local.create ? length(var.custom_listen_hosts) : 0 }"
hey, that’s true
@paul
I think I’m leaning towards the changing to a list solution. It’s far cleaner solution than slicing and the AWS provider is expecting a list anyway
want to test and open a PR?
Yeah, happy to give that a go tomorrow afternoon. Off the clock now
perfect
i tested it a little bit https://sweetops.slack.com/archives/CB6GHNLG0/p1533750657000315
@paul @dominic so I tested this example https://github.com/cloudposse/terraform-aws-cloudtrail/blob/master/examples/complete/main.tf
@Igor Rodionov has joined the channel
2018-08-10
I spent a bit of time on the issue with event_selector this afternoon but haven't got it working for all use cases (no event_selectors, a single event selector, multiple event selectors). I'm going to return to it towards the end of next week.
thanks @paul
(agree, not easy to make it working in all cases)
No problem. I found lots of github issues around Terraform asking for it to support exactly this kind of thing but no concrete solutions have materialised.
I’m going to see if I can get a colleague or two of mine to have a look next week to see if we can think of a way around it.
hi again
I've just run into a head-scratcher issue with terraform.. the beanstalk module I think.
I've got 5 environments. it created 4. but this last one has got this issue. and I can't figure out for the life of me where it's picking up the "elb-logs-makeshift" bucket name from.
in the main.tf of the module, this is the only reference:
that's really odd! haven't seen elb-logs-makeshift before. I grepped through all of our modules and don't see it and it doesn't appear in any of our code. https://github.com/search?q=org%3Acloudposse+makeshift&type=Code
GitHub is where people build software. More than 28 million people use GitHub to discover, fork, and contribute to over 85 million projects.
ikr
and terraform plan, grepped for elb, also doesn’t produce it.
what about your ENV?
export | grep makeshift
makeshift is a known name. but “elb-logs-makeshift” in that combination is not
uhm
export locally yes, but that’d affect other environments tho..
in this case, looks like namespace=elb, stage=logs, name=makeshift (if you're using our label module)
not explicitly no. just using your beanstalk environment module
ok
we had this environment up before too. this is so bizarre. we used an older module, and I'm now rewriting with the new one. but then even in your module, the way you define it, it'd end in elb-logs.. not elb-logs+string
#killmenow
yea, makes no sense. must be some silly error somewhere.
this has to be some bug with aws or something. because the rest of the environments just use a normal elb-logs-<random_digits> s3 bucket.
Don’t think i like EB anymore. So high maintenance
And very slow to iterate
i’d ask amazon but don’t have support subscription yet
and funny thing is, aws cli says that s3 bucket doesn’t exist.
but when trying to create it, it says:
$ aws s3 mb s3://elb-logs-makeshift
make_bucket failed: s3://elb-logs-makeshift An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
and:
$ aws s3 ls elb-logs-makeshift
An error occurred (NoSuchBucket) when calling the ListObjects operation: The specified bucket does not exist
oh, maybe someone else on AWS owns that bucket.
that would explain why you cannot list it
@i5okie we suggest using label or a similar naming pattern for consistency and to eliminate naming contention
yes, all buckets are global
except when you do aws s3 ls, it shows only your account
yeah that makes sense.
you said elb-logs-<randomstring>; could makeshift be generated as one of the random strings?
but why would it try to create/use a bucket that nothing asked it to use. and the other environments created with the exact same module and config essentially, just use the standard elb-logs-3434343434 thing that aws picks by default.
no, because other environments go into elb-logs-<randomdigits>
the same one
without that being explicitly specified anywhere in the config files.
oh i see. its actually elb-logs-<accountnumber>
just use this https://github.com/cloudposse/terraform-null-label
terraform-null-label - Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
terraform-terraform-label - Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])
it makes no sense that from scratch I’ve just created other environments with this module and it went fine, and used the normal per-account elb-logs bucket. and this one odd-ball decides not to. I’m using this: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment
terraform-aws-elastic-beanstalk-environment - Terraform module to provision an AWS Elastic Beanstalk Environment
for all of our environments.
so why would I use label separately?
not sure how it relates to the <accountnumber>
can you share your code?
that's just what EB does naturally: it creates a per-account elb-logs s3 bucket and uses it for all environments. terraform or no terraform, it's just what it does
which code
I have to use a local copy of the module because I needed to modify the healthcheck url variable a bit.
what's the diff between those 5 envs? stage?
no. app names
5 applications, 3 stages each
but im only re-doing the staging env for now
which is in its own vpc, etc etc
so “name” variable would be different between the 5. plus app versions, and env vars.
terraform plan, or apply output does not mention “elb-logs-makeshift”. in fact here’s a line
the EB logs bucket get created here https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/blob/master/main.tf#L965
terraform-aws-elastic-beanstalk-environment - Terraform module to provision an AWS Elastic Beanstalk Environment
but @i5okie still, how does all of this relate to what you said about elb-logs-<randomdigits>? what is randomdigits here and where is it from?
its actually account number.
ELB doesn’t use these buckets that are created.
it ends up using the account-default elb-logs-<accountnumber> bucket instead.
don’t know why
it’s not in the image above
im thinking this is a bug at AWS side.
because I could let you read all of my terraform files, and you wouldn’t find anything that would put the strings together to say “elb-logs-makeshift”
oh i have an idea. i’ll delete the .terraform folder and try again
yea try that
can you show all bucket names from the 5 env that EB created?
there is in fact a 5th one
but where is randomdigits in there?
but they are all empty. and all elb logs go to the elb-logs-<accountnumber> bucket instead.
it should not add any random things
your module doesn’t
its just what elb does
i think you have naming collision somewhere
we used an older module of yours for this. all names were the same. in fact the only difference I noticed between the versions of modules is the order of names/namespace/stage is different.
not that it matters
ok, deleting the .terraform folder and re-starting didn't help
and it's lying
it did create the load balancer
but it quits on actions and doesn’t add instances
I'll have to wait an hour before the "(terminated)" envs disappear from console, and create the env manually. then try to import it into the terraform.
@i5okie just to confirm. The bucket that terraform-aws-elastic-beanstalk-environment creates is for load balancer logs https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbv2
Configure globally available options for your Elastic Beanstalk environment.
they have the names as you showed in the images above
at the same time, EB creates a bucket per region where it stores all EB stuff (e.g. app versions)
and that bucket name looks like this elasticbeanstalk-us-east-1-<accountId>
i have both
that’s the only thing with Account ID
there
i have the buckets that you specify in the terraform module for elb logs, the elb-logs-<accountnumber> bucket that elb spits elb logs into when i enable those. and the elastic-beanstalk-<region>-<accountnumber> buckets
i don’t ask it to change which buckets to put logs into. i’m just going with what it does.
it probably does it when i say to forward logs to cloudwatch
in web console
its aws
just created environment from scratch by manually on web console. same error.
hmm
what’s the error?
the reference shows up in cloudformation template
how it gets to that template, i have no friggin idea
im going to delete the stack and try creating it manually again
yea sounds like you need to delete everything first
which is odd. i deleted the environment, waited for it to disappear from the console, then manually created it. and it ran into the same error. I guess it kept the cloudformation stack template and kept trying to re-create the stack from template? so odd
ok, time for aws support. deleted environment. deleted cf template. applied terraform. same error
you can try to create a new EB application and deploy the env into it
(it’s actually a good idea to deploy each env into its own application)
hmm
a new application for each stage?
yes
hmm
it’s better for many reasons
Since each stage should be in a separate AWS account to ensure isolation.
Which then necessitates using a separate app.
yeah separate account makes sense
for now each stage could/should be in a separate app
not using EB is probably also a good idea
not really. It's good at what it does. Never seen issues like you are having
no not like this. But the hours I’ve spent “trying” to make things work with EB.. i probably could have learned how to use ECS properly.
maybe
but EB is actually much simpler in many cases
true very much so
guess what
new application, manual environment setup in web console.
same issue
lol. alright i’ll see what amazon says.
2018-08-13
wow. all the time i was pulling my hair out.
just to find the “elb-logs-makeshift” in .ebextensions of the app code itself.
well, glad you found the issue @i5okie
how were the other 4 envs deployed? without .ebextensions?
different applications
2018-08-16
Funny thing I just found out.
It is possible to chain aws_iam_policy_documents by taking the .json output of one aws_iam_policy_document as input to another via the source_json parameter. This way it's possible to conditionally add statements to a single policy. Context is that not all resources support multiple policies; ecr_repo is one of them.
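A minimal sketch of the chaining (the statements are arbitrary examples):

data "aws_iam_policy_document" "base" {
  statement {
    sid       = "AllowPull"
    actions   = ["ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"]
    resources = ["*"]
  }
}

# source_json pulls in the statements from the first document; statements
# added here are appended (a statement with the same sid overrides)
data "aws_iam_policy_document" "extended" {
  source_json = "${data.aws_iam_policy_document.base.json}"

  statement {
    sid       = "AllowPush"
    actions   = ["ecr:PutImage"]
    resources = ["*"]
  }
}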
yeah, it’s a great feature, but can still use some improvements, https://github.com/terraform-providers/terraform-provider-aws/issues/5047
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or "me to…
that’s a nice trick!
Anyone figured out how to add multiple principals using aws_iam_policy_document ?
principals {
  identifiers = ["${var.allowed_read_principals}"]
  type        = "AWS"
}
gets rendered to this json: "Principal": { "AWS": "arniam:root", "arniam:root", "arniam:root", "arniam:root" }
nevermind, it’s something else, replacing var.allowed_read_principals with a self-made list does work
2018-08-17
@maarten is var.allowed_read_principals a list?
I should have deleted the question
no it was not in the end
all fixed
2018-08-20
Greetings. I've been testing out your beanstalk-environment module and I've been having the problem that every time I run terraform it plans to change everything, even immediately after the initial environment creation etc.
I haven’t updated anything or changed any values and from the output I’m seeing Terraform wants to change values from their current value to the exact same value.
In some cases it’s removing a setting and then re-adding it later on
setting.3926968379.name: "" => "SSHSourceRestriction"
setting.3926968379.namespace: "" => "aws:autoscaling:launchconfiguration"
setting.3926968379.resource: "" => ""
setting.3926968379.value: "" => "tcp, 22, 22, 0.0.0.0/0"
setting.502734328.name: "SSHSourceRestriction" => ""
setting.502734328.namespace: "aws:autoscaling:launchconfiguration" => ""
setting.502734328.resource: "" => ""
setting.502734328.value: "tcp,22,22,0.0.0.0/0" => ""
in other cases it’s the below:
setting.3402994671.name: "Statistic" => "Statistic"
setting.3402994671.namespace: "aws:autoscaling:trigger" => "aws:autoscaling:trigger"
setting.3402994671.resource: "" => ""
setting.3402994671.value: "Average" => "Average"
Hrm… don’t believe that should be the case.
@i5okie are you seeing this behavior?
also, what version of terraform and aws provider are you using?
@Andrew Jeffree please show the output from terraform init and terraform plan, and also the code showing how you instantiate the module
for reference, this is how we deployed it before https://github.com/cloudposse/terraform-aws-jenkins/blob/master/main.tf#L14
terraform-aws-jenkins - Terraform module to build Docker image with Jenkins, save it to an ECR repo, and deploy to Elastic Beanstalk running Docker stack
(although it probably was tested with older TF version)
Sure gimme a min
So I think at least with the first snippet I pasted it’s an issue between how terraform provides the string to beanstalk and how beanstalk returns it.
@Erik Osterman (Cloud Posse) I’m seeing terraform re-do almost every setting. Very similar behaviour to re-setting the Tags even if they were not changed.
it doesn’t replace any environment, or instances. just the setting.xxxxxxxxx.xxxx stuff.
using terraform 0.11.8
Hrmmm my guess is it’s a regression in terraform
We didn’t see this behavior in earlier versions but sounds like it’s a problem now.
It’s probably because maps in golang aren’t stable
Stable in the sense they are not ordered the same between executions
I don’t know how to fix this but will gladly accept any PRs
Haha
Yeah I figure it’s either terraform or the beanstalk api.
I’ve had to hassle AWS to fix a few bugs in the beanstalk api recently
so I wouldn’t be surprised if it has further issues.
If I figure it out I’ll certainly submit a PR
Thanks for at least confirming I’m not going insane and missing something super obvious
Ahhh yea…. you’re not going insane
not yet at least
2018-08-21
So for the ssm parameters piece. If I have the params stored in AWS SSM and just want to pull them, do I just use the aws_ssm_parameter data source to get the keys?
@pericdaniel after you store params in SSM, you can read their values from another TF module: https://github.com/cloudposse/terraform-aws-ssm-parameter-store#simple-read-parameter-example
terraform-aws-ssm-parameter-store - Terraform module to populate AWS Systems Manager (SSM) Parameter Store with values from Terraform. Works great with Chamber.
you need to know the names of the params when you write them and then when you read them from other modules
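A minimal sketch of the read side (the parameter name here is hypothetical):

# look up a parameter that was written earlier (by another module, or chamber)
data "aws_ssm_parameter" "db_password" {
  name = "/myapp/production/db_password"
}

# then reference it as "${data.aws_ssm_parameter.db_password.value}"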
2018-08-22
what Add new target to upgrade all module sources why Keep modules up to date demo Processing ./test/test.tf… [SKIPPED]: ../ Processing ./test/cloud-provider.tf… Processing ./examples/wordp…
@tamsky
@sarkis want to turn this into a go app?
Yes! Awesome work on this @tamsky /bow, I’d be able to get some time to get an initial commit at least this Sunday.
@sarkis I get zero credit for this PR – it was all Erik
Well then Erik /bow
Added make target to upgrade all terraform module sources to latest version
2018-08-23
Based on discussion in dependabot/feedback#118.
@michal.matyjek @Daren
that’s amazing – it’s merged already
yea, he’s fast!
i’ve heard many-a-times a developer say “i have some free time, let me implement that today”
and 2 mo later it’s not done.
even more impressive is the non-trivial nature and volume of code. agree on all points
yea… i thought so too
2018-08-24
Some nice modules here: https://github.com/devops-workflow
Full Automated, DevOps type, Workflow Project
2018-08-26
@alex.somesan has joined the channel
2018-08-27
Broad topic, but how are you guys handling multi account AWS deployments?
- Creds per account (interpolated somehow in CI)
- Cred with assume roles?
- some other?
Im just curious to see how others are doing this
we use assume role
@pecigonzalo we use separate AWS accounts per stage (prod, staging, dev) and also a separate account (we call it root, although a better name might be identity) where we provision all IAM users and roles. We then use roles to login to the member accounts
take a look at our reference architectures https://docs.cloudposse.com/reference-architectures/
what @Andriy Knysh (Cloud Posse) said
the terraform provider looks something like this:
provider "aws" {
profile = "<profile with credential allowed to assume role>"
assume_role {
role_arn = "<role arn in target account>"
}
}
@Andriy Knysh (Cloud Posse) we use something similar for our users and assume roles for entering the accounts
but for CI I was unsure, we are going the way of per AWS Account->CI User
I think I saw this in the AWS reinvent talk
so we are sure that we limit the blast radius
but at the end of the day we have to interpolate the correct CI User for each stage of the deployment
which is meh
oh, we are working on that now too
and CI has all users, so the blast could be really big
so we are going to assume roles with MFA by using this tool https://piotrkazmierczak.com/2016/mfa-tokens-in-your-terminal/
A personal blog.
I saw your prod.name.com / this.name.com structure for the AWS accounts and think it's great for the accounts, but I don't think it translates to app deployment
So each app corresponds to a stage
But you’re right app specific cicd does not belong in this repo
For that we use a build pipeline defined in each app repo
We use Codefresh for cicd
Yeah, I was not asking so much about the tool, but more about how you pass different creds/etc for AWS app deployment; as you can see further on, I commented how we plan to do it
yes
we just added it to geodesic
https://github.com/cloudposse/geodesic/pull/248
what Install oath-toolkit why Easy build one-time password authentication systems (including for AWS with MFA) Required for Terraform CI/CD install Step 65/74 : RUN echo "http://dl-cdn.alp…
Interesting
geodesic is our container which we use to login to the accounts and provision resources
Yep
so we create a CI/CD user per stage (prod, staging, dev, etc) and then use oath-toolkit to get the MFA token and then assume role to login to the account
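something along these lines (a sketch; the env var names are made up):
# generate a TOTP code from the CI/CD user's MFA secret, then assume the role
token=$(oathtool --totp --base32 "$MFA_SECRET")
aws sts assume-role \
  --role-arn "$ROLE_ARN" \
  --serial-number "$MFA_SERIAL" \
  --token-code "$token" \
  --role-session-name ci-cd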
but doesn't generating the token on your computer defeat the purpose of MFA? as you have the MFA generator and AWS creds in the same place
In any case, I believe that might be a different conversation
yea, it’s a long conversation
My original question was more about how you deploy apps to the different environments with multiple AWS accounts
Codefresh
We define a pipeline file in each codebase
Then use staged Codefresh accounts. So a production account executes production pipelines
A staging account executes staging pipelines
The staging account also executes the production pipeline against a preproduction account in the staging environment, so we get to test that too
Different Codefresh accounts all together or different stages in the pipeline?
So with codefresh enterprise, we can create as many accounts as necessary, just like AWS accounts. So the idea is to use a different codefresh account for each aws account that needs CI/CD
We can reuse the pipeline, or create new ones, but they are all stored in git
we register the pipeline we want to use in the account that has the integrations
so the production account will have integrations to production kubernetes cluster, production ECR, etc
while the staging account will only have integrations to the staging kubernetes cluster, and a pre-production ECR registry where we test the production pipeline, but in a staging context
the key is the pipelines are reusable across accounts
we can test/use them in any account
I did not know that was a feature of CF enterprise, interesting!
We do the same for pipelines, but I think having a complete separate account, where you can register separate integrations could be really interesting.
In any case, thanks for sharing!
For sure… let me know if you’d like an introduction to someone over there.
We work a lot with them and I can vouch for their support being top notch.
(they also use slack)
ah, it’s easy let me show
In your case its Kube, so you have to pass the kube endpoint and creds for each env
depending on the branch or stage in the pipeline
so we have a collection of TF modules which we use for all stages/environments. The modules have no identity (you can say they are just templates) https://github.com/cloudposse/terraform-root-modules
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
then in the geodesic shell for a member account, we pull the resources we need
1 moment, that part I follow
As I follow your cloudposse project, thanks a lot btw for some of the ideas/concepts
Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co
Example Terraform/Kubernetes Reference Infrastructure for Cloud Posse Production Organization in AWS - cloudposse/prod.cloudposse.co
no problem
so the identity comes into play in the accounts repos, which we use from geodesic shells per account
and we store all credentials in AWS SSM and use chamber to read them
Yeah we use a similar workflow for that part
with chamber
But those are a repo per env, so those are a bit different I guess.
After you have kops and your PaaS in a way, let's say now you have app1
how do you deploy it?
E.g. some people have a var set (all fake vars, just used as an example): KOPS_ENDPOINT KOPS_USER KOPS_PASS. Then on each stage of the pipeline, or branch depending on how you deploy, they interpolate the right values there.
even if they are on chamber, you have to store either 3 AWS accounts for chamber, or 1 AWS account for chamber, with multiple secrets
we use CodeFresh pipelines to deploy apps to k8s clusters
and we use geodesic, which has chamber inside
in the pipelines (per account/stage) we read the ENV vars from SSM and CodeFresh applies them
so CodeFresh has permissions to access each SSM for each env?
Codefresh uses containers per step, so it's easy to use geodesic there
Ah right
and you have the creds on each container
Nice
Thanks for your answer, It gave me some ideas
you're welcome. I believe @Erik Osterman (Cloud Posse) has a lot more to add to this
We are going a different way right now; we will have a CI AWS Account, with a CI chamber, so we can get the envs for each environment at each stage of the deploy
But was looking for alternatives/improvements
BTW Im trying to fix https://github.com/segmentio/chamber/pull/70 so it finally merges and we get per env/custom paths
This PR makes it possible to have the service have a variable depth for the service (<path>/<service>). The default "." separator is also supported e.g chamber write some/path…
interesting
since chamber can accept multiple services when reading (e.g. chamber read service1 service2 key) we use multiple services to override the default values (if needed)
e.g. chamber read kops app1 db_password
Yeah, we use something like chamber exec ci-development ci-app1
But I want to have a cleaner SSM, as the interface sucks a bit
so having /devel/ci or similar will be ideal
or something like /external/thisprovider for any shared keys we want to actually share
for CI of apps we are currently moving to something like:
(CI has CI AWS Account creds)
chamber exec aws-development --
(CI AWS Account is overwritten by Dev AWS Account)
chamber exec ci-app1 -- example command
the aws-development secret lives in the CI Account
the ci-app1 secret lives in the Dev Account
That’s nice
We use the same secret names for all accounts since SSM is per account, and this way all our code remains the same
yeah we would only interpolate the first chamber for the same reason, after that all SSM secrets are the same across environments.
we thought about moving ci-app1 to the CI Account and changing it to something like /dev/ci-app1 or so, so we can use the fact that chamber can read multiple secrets at once, but ultimately we prefer to have less interpolation of environment names etc
2018-08-28
So my customer wants to pay someone to fix that bug around Terraform and beanstalk settings, that we discussed here last week.
I’m not sure that even if someone can figure out where the bug is and patch it they’ll be able to get it merged in a timely manner.
comments/thoughts/suggestions appreciated.
@Andrew Jeffree are you asking about this one https://sweetops.slack.com/archives/CB6GHNLG0/p1534814857000100
Greetings. I’ve been testing out your beanstalk-environment module and I’ve been having the problem that every-time I run terraform it plans to change everything, even immediately after the initial environment creation etc.
i’ll take a look at it
If you created a vpc in one tf file… How do you use another tf file to find that vpc and those subnets to deploy resources too
@pericdaniel Terraform works per-folder, meaning everything in the folder will be used. So if you create a VPC in one file, you can use it in all other files in the same folder
Here is an example
Collection of Terraform root module invocations for provisioning reference architectures - cloudposse/terraform-root-modules
Im using separate folders
Terraform also supports modules in local directories, identified by a relative path starting with either ./ or ../. Such local modules are useful to organize code in more complex repositories, and are described in more detail in Creating Modules. https://www.terraform.io/docs/modules/usage.html#source
^ one way of doing it
another way is to look up the VPC using data sources - you create it in one folder and do terraform apply, then in another module in another folder you look it up
example:
Terraform module to lookup resources within a Kops cluster for easier integration with Terraform - cloudposse/terraform-aws-kops-metadata
Anyone ever used aws_iam_account_alias? What is the practical use of it?
it’s a friendly name for the account instead of account ID (which is not easy to remember)
so the URL where you login would be like this:
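(a sketch; the alias is made up):

resource "aws_iam_account_alias" "default" {
  account_alias = "my-company"
}

# login URL becomes https://my-company.signin.aws.amazon.com/console
# instead of https://<account-id>.signin.aws.amazon.com/console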
hm that’s quite nice actually
but also an information give-away in some cases
yep
Here is a Terraform trick we used recently. Looks simple, but we really did not know how to do it before. It might help somebody.
In some cases, you have a TF module and want to provide some settings in a list or map.
And you have a conditional variable (let's say var.condition1) which changes the settings for the module.
locals {
settings1 = [
{
name = "1a"
value = "1a"
},
{
name = "1b"
value = "1b"
}
]
settings2 = [
{
name = "2a"
value = "2b"
},
{
name = "1b"
value = "1b"
}
]
}
module "example" {
settings = "${var.condition1 ? local.settings1 : local.settings2}"
}
won't work because Terraform does not support list and map in conditional expressions (maybe V2 will do it better, but we don't really know).
So here is the slice pattern (for lack of a better name)
locals {
settings = [
[
{
name = "1a"
value = "1a"
},
{
name = "1b"
value = "1b"
}
],
[
{
name = "2a"
value = "2b"
},
{
name = "1b"
value = "1b"
}
]
]
from_index = "${var.condition1 ? 0 : 1}"
to_index = "${var.condition1 ? 1 : 2}"
settings_final = "${slice(local.settings, local.from_index, local.to_index)}"
}
module "example" {
settings = "${local.settings_final}"
}
I feel like this is one of the many warts that will be addressed in v0.12 https://www.hashicorp.com/blog/terraform-0-12-rich-value-types
As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The pos…
Implemented here as an example: https://github.com/cloudposse/terraform-aws-dynamodb/blob/master/main.tf#L11
Terraform module that implements AWS DynamoDB with support for AutoScaling - cloudposse/terraform-aws-dynamodb
another way of doing this is to put the conditional eval in the index of the map… map[condition ? true : false]
where true and false are the key in your lookup dictionary
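a minimal sketch of that lookup (hypothetical values; the map values have to be strings for this to work in 0.11):

locals {
  instance_type_map = {
    "true"  = "t2.large"
    "false" = "t2.micro"
  }

  # the conditional only has to return a string (the key), which 0.11 supports
  instance_type = "${local.instance_type_map[var.is_prod ? "true" : "false"]}"
}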
Yep, for maps it will work as well
The slice pattern works better if you need to remove some settings based on a condition (and you can't send empty or default values to the module)
love it!
https://www.reddit.com/r/Terraform/comments/99zri8/gke_regional_cluster_deployment_defining_zones/ you can publish your slice pattern here
0 votes and 1 comment so far on Reddit
@Andriy Knysh (Cloud Posse) I can't get this to work ^
what’s the error?
so, since you are using data sources to lookup the VPC and subnets, two possible issues here:
- Did you already provision those VPC and subnets
^yes
the tags match
- Did you specify the correct filters to look them up?
the filter im not sure about
this makes sense ot me
filter { name = "tag:Name" values = "${var.AD-Private-Subnet2}" }
filter for
tag name
with the value of this variable
oh i dont want the tag name tho i want the subnet
hmm
you can use tags, but you need to create them with the same tags
yes thats what i did
the tags match whats in aws
and match the other tf file thats creating them
the other tf file is in sep folder
I’ll take a look in 30 mins (in a meeting now)
no rush
thank you so much!
sorry im still learning!
@pericdaniel to lookup a VPC, you can use id and filters https://www.terraform.io/docs/providers/aws/d/vpc.html
Provides details about a specific VPC
but for subnets, you use tags https://www.terraform.io/docs/providers/aws/d/subnet_ids.html
Provides a list of subnet Ids for a VPC
and vpc_id is required
yea I was trying to avoid the id due to the fact it changes each time a vpc is created
i was looking for a way to have it pass through the current environment
you can use the id from the VPC you look up: data.aws_vpc.my_vpc.id
Provides details about a specific VPC
All of the argument attributes except filter blocks are also exported as result attributes
without inputting the variable
for subnets:
vpc_id = "${data.aws_vpc.my_vpc.id}"
it will first lookup the VPC and then use its ID (and tags if you want) to lookup the subnets
(you prob don’t need the tags, you can get all subnets from the VPC by its ID)
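putting it together, a minimal sketch (the Name tag value is hypothetical):

# look up the VPC created in the other folder by its Name tag
data "aws_vpc" "my_vpc" {
  filter {
    name   = "tag:Name"
    values = ["ad-vpc"]
  }
}

# then get all subnet IDs in that VPC by its ID (no tags needed)
data "aws_subnet_ids" "all" {
  vpc_id = "${data.aws_vpc.my_vpc.id}"
}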
will i need this variable "AD-Private-Subnet1" {}
(nitpick, consider using all lower case and underscores - it's the most common convention for terraform resource names)
is there a way to do it without inputting the variable?
I'm missing something here
@pericdaniel you can explain what you want to achieve and paste your complete code here (or DM me). (sorry, don’t want it to be http://xyproblem.info :))
Asking about your attempted solution rather than your actual problem
I love how you always have the term to describe something
we got it working with @pericdaniel
@Andrew Jeffree so I tested terraform-aws-elastic-beanstalk-environment and yes, it does re-create all settings on each plan/apply
setting.1039973377.name: "InstancePort" => "InstancePort"
setting.1039973377.namespace: "aws:elb:listener:22" => "aws:elb:listener:22"
setting.1039973377.resource: "" => ""
setting.1039973377.value: "22" => "22"
setting.1119692372.name: "" => "ListenerEnabled"
setting.1119692372.namespace: "" => "aws:elbv2:listener:443"
setting.1119692372.resource: "" => ""
setting.1119692372.value: "" => "false"
setting.1136119684.name: "RootVolumeSize" => "RootVolumeSize"
setting.1136119684.namespace: "aws:autoscaling:launchconfiguration" => "aws:autoscaling:launchconfiguration"
setting.1136119684.resource: "" => ""
setting.1136119684.value: "8" => "8"
setting.1201312680.name: "ListenerEnabled" => "ListenerEnabled"
setting.1201312680.namespace: "aws:elb:listener:443" => "aws:elb:listener:443"
setting.1201312680.resource: "" => ""
setting.1201312680.value: "false" => "false"
This feature/bug was present for years and still is not fixed:
https://github.com/hashicorp/terraform/issues/6729 https://github.com/terraform-providers/terraform-provider-aws/pull/901 https://github.com/hashicorp/terraform/issues/6729 https://github.com/hashicorp/terraform/issues/6257 https://github.com/terraform-providers/terraform-provider-aws/issues/280 https://github.com/hashicorp/terraform/issues/11056 https://github.com/terraform-providers/terraform-provider-aws/issues/461
nobody is sure whose bug it is, Terraform or AWS
(i tested some ideas from the links above, nothing worked)
the only possible solution is to add this:
lifecycle {
ignore_changes = ["setting"]
}
but it’s a hack since it will not update the env if you update any of the settings
@Andriy Knysh (Cloud Posse) can you open a new issue here: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues
Terraform module to provision an AWS Elastic Beanstalk Environment - cloudposse/terraform-aws-elastic-beanstalk-environment
with all your research above?
that way we can track it since this comes up quite frequently
hate for you to have to explain it everytime
@Andriy Knysh (Cloud Posse) yep, am aware it’s an open issue in multiple places. For now we’re ignoring changes to settings in the lifecycle but as you mentioned it’s a hack. The customer wants to pay someone to fix it, but I don’t like their odds.
sounds like it would be a difficult thing to give an estimate on, so it would probably need to be T&E
is anyone using private submodules with codebuild?
i use codebuild a fair bit… do you mean git submodules? or terraform modules that themselves have modules?
git submodules
Regarding terraform-aws-elastic-beanstalk-environment recreating the settings all the time, here is what I think is happening:
- Terraform sends all settings to AWS, but some of them are not relevant to the environment you are deploying
- Elastic Beanstalk accepts all settings, applies the relevant ones, and throws away the rest
- Next time Terraform asks about the settings, Elastic Beanstalk returns a subset of the values and probably in different order
- Terraform can't decide/calculate if the settings are the same - they sure look different (and would require an advanced algorithm to determine if they are the same)
- Terraform assigns a new ID to the entire array of settings and tries to recreate all of them
- Elastic Beanstalk accepts the settings, applies the relevant ones, and throws away the rest - the cycle repeats
What’s a possible solution?
Introduce var.settings (list of maps) to be able to provide all the required settings from outside of the module.
It might work, but in practice would be very difficult to know all the needed settings and tedious to implement.
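A sketch of what that interface could look like (an idea only, not implemented in the module):

variable "settings" {
  type        = "list"
  description = "List of all setting maps (namespace/name/value) to apply to the environment"
  default     = []
}

resource "aws_elastic_beanstalk_environment" "default" {
  # ... name, application, solution_stack_name, etc. ...
  setting = ["${var.settings}"]
}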
@Andrew Jeffree ^
(opened an issue to track any progress on this https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues/43)
terraform-aws-elastic-beanstalk-environment recreates all settings on each terraform plan/apply setting.1039973377.name: "InstancePort" => "InstancePort" setting.1039973377.n…
interesting
2018-08-29
2018-08-30
This fixes the case where a module label is instantiated with a non-default delimiter, and then another label is generated based off the context but uses the default delimiter as the local var take…
@jamie any insights?
@Andriy Knysh (Cloud Posse)
checking
we can override delimiter for any label. If it's not provided and a context is provided, it will be taken from the context. If nothing is provided, the default will be used
i’m reviewing the PR
Its an easy fix
I see that in the PR it has been broken out into less condensed parts so that each step can be explained.
@mrwacky yea looks like it will be resolved in 0.12, thanks
As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The pos…
no need for the slice pattern
So much goodness in 0.12, they're even addressing JSON warts:
https://www.hashicorp.com/blog/terraform-0-12-reliable-json-syntax#comments-in-json
As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The pos…
So does that mean TF will be one step closer to CloudFormation?
ducks for cover
hopefully they won’t have too many issues in V2 and we don’t spend too much time on resolving them
0.12 will be amazing.
Literally doing zero work on new tf stuff until it drops
lololol
roll d20
lolol
got’em
@Gabe
@Gabe has joined the channel