#terraform (2019-05)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2019-05-01
does anyone know if the terraform-aws-vpc-peering module can be used to peer inter-region vpcs? I don’t see any way to specify a region for the requestor or acceptor
I’m trying to create a mesh of vpc peering between about 6 vpcs each in a different region, all part of the same account
right now, we don’t support that in our vpc peering module b/c it requires passing the provider and there’s no way to make it optional
we have a cross account peering module, but no cross region
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
ok, thanks. I have seen the cross-account module, was wondering if i could still use it but with a single account
I think it would work actually
to circle back on this it does work
in my case i have a user that is auth’d via SAML and gives me an existing role with all the privileges needed to peer the vpcs so I just used the arn for that role for both the accepter and requestor role arn variables
and I had to export the env variable AWS_PROFILE to match the username I use to login. my creds/tokens are in my ~/.aws/credentials file so the terraform aws provider found them ok
The problem with using terraform provider settings for everything is it requires updating modules and passing those settings down
We have been using the bare minimum of settings and instead relying on the standard AWS environment variable interface
We use the aws-okta cli and aws-vault cli tools that manage our env
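e.g. with that approach any wrapper works without touching provider config (illustrative; the profile name is made up):
# aws-vault / aws-okta inject temporary AWS_* env vars for the subprocess
aws-vault exec my-profile -- terraform plan
aws-okta exec my-profile -- terraform apply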
TIL
When the environment variable TF_IN_AUTOMATION is set to any non-empty value, Terraform makes some minor adjustments to its output to de-emphasize specific commands to run. The specific changes made will vary over time, but generally-speaking Terraform will consider this variable to indicate that there is some wrapping application that will help the user with the next step.
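a typical CI invocation would then look something like this (illustrative sketch):
export TF_IN_AUTOMATION=1   # any non-empty value works
terraform init -input=false
terraform plan -input=false -out=tfplan
terraform apply -input=false tfplan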
Am I the only one who can’t get terraform init -force-copy to behave as indicated? It still prompts “do you want to copy local state to remote backend”.
yes you bastards, YES
This fix is almost a year old ;(
Not really a suggestion, but … time to move to remote state 100% of the time? (we use terragrunt to help us with this, although necessity for it will be diminished a bit in 0.12+).
Ha. I’m doing a bunch of terraform state mv to rename things, and it’s waaay faster to pull it locally, do the moves, then re-enable remote state
Otherwise, we are all remote
^ haha yeah i do the same thing often. oh, and when mv’ing things between states, it’s kinda required to pull it locally / push it back afterwards
but i always get scared that someone else will modify in the meantime… i think i needa write a quick ‘lock this state in dynamodb’ aws cli alias
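a rough sketch of what such an alias could do, assuming the standard S3-backend lock table whose hash key is LockID (table name and state path here are made up):
# fails if a lock item for this state already exists
aws dynamodb put-item \
  --table-name terraform-locks \
  --item '{"LockID": {"S": "my-bucket/path/to/terraform.tfstate"}}' \
  --condition-expression 'attribute_not_exists(LockID)'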
So I got the golang LSP skeleton working, and the LSP will have both static schema and provider-plugin-based completion; I should be able to show a demo this week
=) if everything goes well, I might expand it to include nomad and other HCL-based tools
2019-05-02
while using the aws-vpc-peering-multi-account module I get an error on the first run like so: Error modifying VPC Peering Connection Options: OperationNotPermitted: Peering pcx-xxxxx is not active. Peering options can be added only to active peering
which basically seems to be an object creation/modification race condition. I can re-run the apply and it gets past that error
my question is, is there a common way to avoid having to do this or is it just something I have to accept?
hrmmm probably an underlying terraformism
You need to apply twice
There is no way in TF to wait for the connection to become active
ok, thanks!
There are two ways to fix it: separate the connection options into a separate module, or apply with -target
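e.g. (the resource address here is hypothetical):
# first pass: create just the peering connection so it becomes active
terraform apply -target=module.vpc_peering.aws_vpc_peering_connection.default
# second pass: everything else, including the connection options
terraform apply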
after I run a second time I’m getting a different error:
* aws_vpc_peering_connection_options.requester: Error modifying VPC Peering Connection Options: OperationNotPermitted: Modifying VPC peering connection options AllowEgressFromLocalClassicLinkToRemoteVpc, AllowEgressFromLocalVpcToRemoteClassicLink is not supported for cross-region VPC peering connections
same error for the accepter
the plan clearly shows it is attempting to set that property:
~ module.vpc_peering_cross_account.aws_vpc_peering_connection_options.accepter
accepter.1102046665.allow_classic_link_to_remote_vpc: "" => "false"
but I can’t figure out where the module is getting the variable for accepter.xxxxxxx.allow_classic_link_to_remote_vpc
or more specifically, how I can keep my terraform setup from trying to send/set that variable
hmm, seems the error messages were misleading. it turns out the one parameter that was being explicitly set in the module, allow_remote_vpc_dns_resolution, is actually not allowed for cross-region peering. once I set this to false the errors disappeared
Nice find. We did it for cross account but in the same region, so the flag was set to true and it worked
2019-05-03
hey, has anybody tried to work around the lack of depends_on for modules? something like https://github.com/hashicorp/terraform/issues/1178#issuecomment-449158607?
Possible workarounds For module to module dependencies, this workaround by @phinze may help. Original problem This issue was promoted by this question on Google Groups. Terraform version: Terraform…
Pass a referenced attribute from the dependent module to an attribute on the depending resource… Setting a tag or description are how I’ve done it
The null resource trick sounds like it ought to work also, but haven’t had to go to that length yet
Do you have any example code?
i only remember having to do it one time, was a while ago, lemme look…
looks like it might have been refactored away at some point, can’t find it now
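for reference, a minimal sketch of the null_resource trick (module names and outputs are made up): thread an output of the first module through a null_resource trigger, and have the second module consume the trigger, which creates an implicit dependency:
module "network" {
  source = "./network"
}

# interpolating module.network's output makes this resource wait for it
resource "null_resource" "network_ready" {
  triggers = {
    vpc_id = "${module.network.vpc_id}"
  }
}

module "app" {
  source = "./app"

  # reading the trigger makes module.app depend on null_resource.network_ready
  vpc_id = "${null_resource.network_ready.triggers.vpc_id}"
}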
2019-05-04
@Andriy Knysh (Cloud Posse) https://asciinema.org/a/NBjkPvXsqnTqARnWHiEOl7o6s what do you think?
very nice @Julio Tain Sueiras
Wow can’t wait to see more
quick PR to note it before I forget the details over the weekend https://github.com/cloudposse/terraform-aws-eks-workers/pull/14
See awslabs/amazon-eks-ami#183. According to AWS support, adding the default bridge support is needed in order for docker-in-docker or docker-on-docker to build images inside of a pod. (moving fast, but l…
https://asciinema.org/a/Ey0Tt3zlveGWoSv71tBCrK7kX next stage is full recursive completion
(so you can then complete any level of module variable)
2019-05-05
hi, i get “ERROR: Job failed: exit code 1” when I run terraform init in gitlab-ci, does somebody know why?
could be anything; there should be additional logs available with a more specific error message
got resource completion, I will be releasing the first version(github) within next week
the first version should have these:
- Variable Completion (complete completion, infinite nesting type, including mix of list and map)
- Resource Completion (including nesting blocks)
- Data Source Completion (including nesting blocks)
- Functions Completion (including signatures)
- Provider config completion
- Backend completion
- Module Completion (including infinite nesting input)
- Error checking
Note: the resource and data source completion will talk to the terraform provider binary using grpc, so A) it will provide completion data for the version you specify, B) it will not require waiting for updates
any feedback is welcome
after the first version, I will need to figure out how to provide completion scoped to for loops and dynamic blocks
oh forgot to mention Provisioner completion
I will need to see if it can talk to grpc based provisioner
(hence providing completion for ansible provisioner)
2019-05-06
more progress https://asciinema.org/a/UDt4V3udIQ67TqTUjzrDl3sGS
@Andriy Knysh (Cloud Posse) for you, since you use intellij
super
will be waiting for that plugin
(this is using intellij-lsp)
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
Terraform LSP Client for Atom
question, what do you guys consider a must-have for an editor plugin for terraform
(features-wise)
- highlight errors (wrong vars, missing vars, wrong resources, etc.)
- Show ref count, when clicking on it shows all references/usages
- Autocompletion
k, np, those two are in my list of next thing to do after releasing tomorrow
third one np, I am also working on providing dynamic block completion
so completion with the context within dynamic block
- Go to implementation
I most likely will have 1,2,4 done around friday or Saturday this week
since the funniest thing about HCL2 ¯\_(ツ)_/¯
is that every (and I mean every) syntax tree object has a range function, and associated range vars (like type range, declare range, open range, close range, etc.)
is there a clean way to get the output of a local CLI call from a null_resource? My googling isn’t pulling up any solid results
as opposed to using a data external?
I did switch to that then realized…there’s a k8s query so just use it
i already have the k8s config so it is easier than expected
now if only this iam auth works on TFE. That’s TBD.
pushing the envelope
tell me about it.
2019-05-07
Hey all, quick question. I’m using some awesome Cloud Posse modules in terraform but unfortunately they don’t support all of the attributes I need for the resulting resources. What’s the best way of updating the resources created from the modules after the fact?
@Adam Barnwell you can open a PR to add the missing attributes, then we merge it to master, then you apply again
@Andriy Knysh (Cloud Posse) in the short term, could I then update the resource using the same name after the module?
Yes, you just update to the new version, add the new attributes, and apply again
@Julio Tain Sueiras how can i install the VS extension you are creating?
I already created a vscode extension for it
The languageclient extension for it, already did the same for atom
XD
Vim, IntelliJ, and a few others (Emacs, etc.) are supported natively
Atom and VSCode need the proxy client extension for LSP
Vim and the others don’t
So the only thing you need is the general lsp client and the terraform-lsp
I will put up the per-editor instructions for setup
when using the aws_cloudfront_distribution resource, if i want to enable the Forward all, cache based on all value for Query String Forwarding and Caching, what all parameters do i have to enable?
forwarded_values {
  query_string = true
}
is this good enough ?
because in the cloudfront distribution console, i see another option called Forward all, cache based on whitelist
so i am not sure what option will be enabled
there is an extra option under that block called query_string_cache_keys which is a list of strings
from the tf docs
When specified, along with a value of true for query_string, all query strings are forwarded, however only the query string keys listed in this argument are cached. When omitted with a value of true for query_string, all query string keys are cached.
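so, per those docs, a “Forward all, cache based on all” cache behavior would look roughly like this (a sketch; the surrounding cache behavior block is elided):
forwarded_values {
  # forward all query strings; with query_string_cache_keys omitted,
  # all query string keys are also cached
  query_string = true

  cookies {
    forward = "all"
  }
}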
makes sense
thanks
getting there
2019-05-08
doing the release in 1-2 hours, need to work out the issue with goreleaser
Language Server Protocol for Terraform. Contribute to juliosueiras/terraform-lsp development by creating an account on GitHub.
FYI: of course there are a lot of improvements and features that still need to be implemented
2019-05-09
I want to add a feature to https://github.com/cloudposse/terraform-aws-rds to pass additional security groups in to the module which will be added to the RDS instance. I have it working in a fork; before I push a PR upstream I was wondering what the best name for the new module parameter is. For now I chose server_security_group_ids, but I’m open to suggestions or pointers to naming conventions from other modules.
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
hey @David Nolan thanks for doing it
we called it allowed_security_groups before
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
you can also add https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/variables.tf#L63
this module already has security_group_ids
which implements that functionality, but that is different from what I’m looking to add.
but I think the module has it already with a diff name https://github.com/cloudposse/terraform-aws-rds/blob/master/variables.tf#L33
right now it creates an SG which applies ingress rules to the RDS instance. I want to join the RDS instance to an existing SG which is used as a target on egress rules in other security groups.
i.e. I have RDS clients which have a SG (“client SG”) with a rule allowing port 3306 to an existing “server SG” security group, and I want that “server SG” group added to the RDS instance
it allows ingress from the existing SGs to the instance
that’s a list of allowed clients, where I want to also add the RDS to a list of allowed servers on those clients
Here’s my code: https://github.com/vitroth/terraform-aws-rds/commit/314dfb0ac5c9e850476799481e72a2c6466c8241
This setting allows for existing SGs to be assigned to the RDS instance, which allows for setups where other services have Security Groups which have rules pointing to existing target groups, and w…
The result is the RDS instance ends up with multiple SGs applied, which is a powerful use case
i think it’s what we have now, no?
No, what exists now is “create a new SG which allows ingress from this list of security groups”
ah sorry, you want to add the instance to those SGs
Right, I also want to add the instance to a list of SGs
that allows other SGs to target this RDS instance (and potentially dozens of other instances) with a single rule targetting that server group
so we did not do it before b/c it’s easier to create a SG for the instance(s) and then connect that SG to any other SGs you have
placing the instance in a list of SGs will have the same result
but more complicated to manage
In this case I’ve got a use case for a single SG, “Managed RDS instances” and then a vault server which has a rule which allows it to talk to only those known RDS instances. The way to add new RDS instances to that known list is by joining the instance to that SG
The result (which works in my testing) is the RDS instance ends up in two SGs. One exists to define the ingress rules specifically for this instance, the other exists to provide the grouping of servers for targeting by the separate service.
I would expect the behavior of “I already have an SG, add it to the RDS instance” to be pretty common, as it allows for centralized management of SG rules. (In my case the existing SG is managed by our legacy CloudFormation, so I have to import that data and apply it to the RDS)
i understand your use-case, but won’t you achieve the same result by connecting the “Managed RDS instances” SG to the ingress rule of the created SG for the instance?
That doesn’t change the Egress rules on my other service. (I tried that first before having the AHA moment and realizing what was missing)
In order to make this RDS instance a matching target in the egress rules in my mysql client instance’s SG rules, the RDS instance has to be added to that SG.
If the existing parameter was named client_security_group_ids, then naming this one server_security_group_ids might be more obvious in intent. Other names I contemplated are extra_server_security_group_ids or join_security_group_ids.
As with many things in AWS, there’s more than one way to do the SG setup… adding support here makes the module more flexible for working with existing setups.
I’d be happy to open the PR and move discussion there if that makes sense.
yes please
FWIW, I’m doing a bunch of work on building new infra w/ terraform, and I blame @sarkis for getting me pointed at the cloudposse module suite as a strong starting point. But I’m deploying in an environment that is heavily CloudFormation based now. So I expect I’ll continue to find edge cases where I want to add more flexibility like this. So I’m trying to understand your preferred flow on proposed feature additions.
yes thanks for using the modules
and thanks @sarkis
I miss working w/ @sarkis
btw, in cases like this, we did the following: provide an existing SG to the module in a var. If it’s empty, then create a new SG and all the rules. If it’s not empty, use it and don’t create SG and the rules
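that pattern, as a rough 0.11-style sketch (variable and resource names are made up, and the SG rules are elided):
variable "existing_security_group_id" {
  default = ""
}

# create the SG (and its rules) only when no existing SG was supplied
resource "aws_security_group" "default" {
  count  = "${var.existing_security_group_id == "" ? 1 : 0}"
  vpc_id = "${var.vpc_id}"
}

locals {
  # use the supplied SG if present, otherwise the one we created
  security_group_id = "${var.existing_security_group_id != "" ? var.existing_security_group_id : join("", aws_security_group.default.*.id)}"
}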
That’s not the behavior this module has today though. If you provide SGs in the existing var it creates a new SG that treats each of the provided SGs as an allowed client source.
yes, we did it for some other modules, not in this one
but what you are proposing will work too
I think it’s nice and flexible with both options. You can define known clients specific to this database by passing in the client security group list security_group_ids, and now you can add common rules (for management, monitoring, etc) that are defined elsewhere by adding the instance to an existing SG as well.
Release candidate 1 of Terraform 0.12.0 is now available for testing. Unless testing identifies a significant blocker, we expect to publish the final 0.12.0 release a few weeks fro…
Somehow I thought 0.12.0 was already out, probably because almost all the docs pages already document the 0.12 syntax with links to the older syntax. Oh, and tfenv list-remote was already showing 0.12.0….
@Andriy Knysh (Cloud Posse) question, right now I am mostly focusing on non-intrusive features for the LSP(code completion, goto reference, error check, etc), do you think CodeAction would be useful ?
@Julio Tain Sueiras quick fixes would be great, without them even if you highlight the issues, developers lose time reading and understanding the message, and then inventing a way to fix it manually. Not sure how easy for you to implement all of that
Quite easily actually, the only part is to figure out what the common issue patterns are in developing terraform
will be adding function error checks (next is checking for attributes and the others)
an important question @Andriy Knysh (Cloud Posse) when you have some time
what you think
Wow super
2019-05-10
Is there a policy on version compatibility with the AWS provider? I’m adding support for a flag that has been supported in the provider since version 1.39.0 (last October).
FYI, I just opened a PR for this: https://github.com/cloudposse/terraform-aws-rds/pull/31
RDS now supports a deletion_protection flag, similar to the termination_protection flag on EC2 instances. If enabled, this flag will prevent the accidental deletion of a database. This requires the t…
@Andriy Knysh (Cloud Posse) Should I ping you on PRs like this? Or just the channel?
ping me in Slack or on GitHub. We are getting a lot of PRs and issues, so it’s difficult to keep track of everything. Thanks
No worries.. Thanks for merging.
added Basic Dynamic Block Completion
also data source & resource attribute completion
onward to locals, and for each
2019-05-11
and now welcome inferred type completion in for each https://asciinema.org/a/245663
2019-05-13
added a mechanism for dealing with google-beta
(it will read if your resource is using provider = google.beta)
2019-05-14
Any advice on transitioning ECS CloudFormation over to Terraform?
@Bruce I’ve seen some cft to tf converters, but honestly they never worked for me. I just ended up using, in this case, the ECS tf module in the registry or one of the community ones
I’d start with one of these https://github.com/cloudposse?utf8=%E2%9C%93&q=ecs&type=&language=
Terraform module to autoscale ECS Service based on CloudWatch metrics - cloudposse/terraform-aws-ecs-cloudwatch-autoscaling
Thanks @shaiss I’ll be looking at writing the current ECS deployments in Terraform, so this helps a lot! Thanks.
does anyone know if there’s a way to make bucket/iam policies dynamic based on TF vars? IE https://www.terraform.io/docs/providers/aws/r/s3_bucket_policy.html. Instead of the resource stating the bucket name, it would be handy to have tf replace that value at runtime with the computed/coded bucket name.
Attaches a policy to an S3 bucket resource.
any suggestions on making this more readable? It works, just fugly!
data "template_file" "init" {
template = "${replace(replace(file("${var.bucket_policy}"),"[log_prefix]","${var.log_prefix}"),"[bucket_name]","${var.bucket_name}")}"
}
looks like using vars might work
vars = {
  bucket_name = "${var.bucket_name}"
  log_prefix  = "${var.log_prefix}"
}
yep! i also recently learned that you can access terraform functions within the templated file, which is one way you can manage lists/maps of things in the templated bucket policy
here’s an example where we just updated a module to do that, https://github.com/plus3it/terraform-aws-wrangler/pull/15/files
note in particular, in the test example bucket policy:
"aws:SourceIp": ${jsonencode(compact(split(",", replace("${list_o_things}", "\n", ""))))}
and the value of ${list_o_things} comes from the var.bucket_policy_vars map:
bucket_policy_vars = {
  list_o_things = <<-EOF
    10.0.0.0/16,
    10.1.0.0/16,
    10.2.0.0/16,
  EOF
}
Couldn’t you set the var to a terraform list and iterate over it in the terraform template?
Not sure about iterating, but could maybe just jsonencode the list, in my particular use case. I think I ended up where I did because the template resource throws up if any of the map values are non-strings
@Andriy Knysh (Cloud Posse) I know this is a bit off-topic, but would an lsp for nomad be useful?
nomad jobspec
and I finished it (the nomad-lsp); will release it in one hour. right now the only thing missing is the full schema, which I will put in this week. mostly going to focus on terraform-lsp though
https://twitter.com/juliosueiras/status/1128502456932081664 , nomad is a lot easier to tackle compared to terraform
2019-05-15
In my providers.tf I’ve defined 4 aliased providers which assume different roles in different regions. However, there’s also an invisible non-aliased provider which, although not defined anywhere, keeps prompting me to enter a value for provider.aws.region on every terraform apply, and apply doesn’t work unless I set my access/secret keys in an env var or as the [default] profile in ~/.aws/credentials.
terraform providers lists it, but I can’t find its config…
the ‘main’ provider has to be configured somewhere, e.g. https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L1
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
it can either assume a role https://github.com/cloudposse/terraform-root-modules/blob/master/aws/eks/main.tf#L12
Example Terraform service catalog of “root module” blueprints for provisioning reference architectures - cloudposse/terraform-root-modules
or use the credentials from the default profile if not configured directly on the provider
@Andriy Knysh (Cloud Posse) so you think that it can be (defined) in a module that is external/on the registry and thus, while not visible in my cfgs, still picked up when running apply?
For now, I’ll have to use the default profile on aws
it can be defined in any module, but it’s better not to define it in low-level modules that are used in top-level modules
let the top level modules (or examples) define everything they need to run
and provide all vars, settings, and providers with regions and credentials/roles
ironically among the modules I’m using is cloudposse/terraform-aws-iam-user, but I searched for any mention of a provider and there’s nothing
yea, you need to define the provider in your top-level module. If not defined, TF will ask you for the region
but i already defined 4 (aliased) providers which I’m passing to each terraform-aws-iam-user instantiation:
...
providers = {
  aws = "aws.users.frankfurt"
}
that’s what I find surprising…
yea, but TF is asking you for the region for the main provider for your main module which instantiates the 4 terraform-aws-iam-user modules
in your main module, add something like this
provider "aws" {
region = "${var.region}"
}
This worked. Thanks @Andriy Knysh (Cloud Posse)!
Are you using some of our terraform-modules in your projects? Maybe you could leave us a testimonial! It means a lot to us to hear from people like you.
office hours starting now: https://zoom.us/j/684901853
is there a hard requirement for this default security group to be added? https://github.com/cloudposse/terraform-aws-alb/blob/master/main.tf#L11
Terraform module to provision a standard ALB for HTTP/HTTPS traffic - cloudposse/terraform-aws-alb
i’m tasked with locking down the SG to specific IP addresses. i see i can turn off http and https to not use the default
no hard requirements, but we add a separate SG to all modules and then allow other SGs as ingress
in other modules we made the created SG optional using a var and count; could be added to this module as well
that’s what i was thinking as well
I have a way around for now, though
:man-facepalming::skin-tone-4: http_ingress_cidr_blocks is sitting right there in the vars. that’s what i need
yep
2019-05-16
@Erik Osterman (Cloud Posse) I will need to jump in on the next office hours. I’ve been running into issues deploying the reference architecture on my AWS account.
i’m curious if anyone ever had an issue with aws_route53_zone changing the order of the nameservers on you… specifically for creating the SOA record (which is usually set to the first name server in the array). i looked in the terraform source and they are doing a SORT() on the nameserver list for some odd reason, sorting them in alphabetical order, which makes no sense.
so we cannot set the first parameter of the SOA record to the correct value because terraform is sorting them… but i’m unsure if this matters or not. the RFC for the SOA record states that the first arg is the primary master name server… but unsure how aws graphs their dns replication; it just seems unsafe to be choosing one seemingly at random since we don’t have the original order in the name_servers output of aws_route53_zone. any thoughts?
The SOA record master host value generally only matters if your zone is accepting dynamic dns updates (via the DDNS protocol), which is not relevant to AWS, and I would assume AWS knows what they’re doing and always sets the SOA to the actual critical value if it matters for any of their internal tooling. Per AWS’s own docs the NS and SOA records are chosen automatically by AWS and should not be modified.
(And now I have to go stuff the part of my brain that used to manage DNS infrastructure back into a box and pretend it doesn’t exist again…)
What is the SweetOps preferred way of standing up a cloud-agnostic kubernetes cluster via Terraform?
There’s no “cloud agnostic” way to setup a kubernetes cluster; doing so would preclude taking advantage of the best capabilities of that cloud provider.
I should have re-phrased that without the agnostic part. That being said, I assume you are using kops w/ terraform output?
actually, we’re not
We use some terraform modules to make it easier to work with though
Terraform module to lookup resources within a Kops cluster for easier integration with Terraform - cloudposse/terraform-aws-kops-metadata
Terraform module to lookup IAM roles within a Kops cluster - cloudposse/terraform-aws-kops-data-iam
Terraform module to lookup network resources within a Kops cluster - cloudposse/terraform-aws-kops-data-network
Even terraform is not cloud agnostic. The way you terraform for AWS is different from GKE, Azure etc.
For AWS we’re still predominantly using #kops due to its support for managing the full lifecycle of the kubernetes cluster including rolling updates/upgrades, which is not well supported by the other options. I think eksctl (by weaveworks) has recently added some support for rolling updates.
@Andriy Knysh (Cloud Posse) , Hello, what do you think about this - https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/issues/9#issuecomment-493318731
launch_template_version is not being used anywhere
2019-05-17
Does anyone know how to manage multiple providers which assume an assumed role (i.e. cross-account assumes)? I have the following setup:
provider "aws" {
region = "eu-central-1"
profile = "${var.aws_profile}"
}
provider "aws" {
version = ">= 2.10"
region = "eu-central-1"
assume_role {
role_arn = "arn:aws:iam::${var.custom_account_id}:role/${var.custom_role_name}"
}
profile = "${var.aws_profile}"
allowed_account_ids = [
"${var.custom_account_id}",
]
alias = "custom.frankfurt"
}
provider "aws" {
version = ">= 2.10"
region = "eu-central-1"
assume_role {
role_arn = "arn:aws:iam::12345:role/12345-admin"
}
allowed_account_ids = [
"12345",
]
profile = "${var.aws_profile}"
alias = "12345.frankfurt"
}
provider "aws" {
version = ">= 2.10"
region = "eu-central-1"
assume_role {
role_arn = "arn:aws:iam::12354:role/12354-admin"
}
allowed_account_ids = [
"12354",
]
alias = "12354.frankfurt"
}
which results in the following error:
the profiles are defined in ~/.aws/config and work just fine with aws sts assume or other awscli commands
I want to be able to provision resources across accounts and regions by different providers which assume a different role for each account essentially
You need to create an alias for each provider, then you reference the alias. With alias euc1 you’d reference with aws.euc1
@Steven thanks! I’m already doing that, but pseudonymized it due to confidentiality reasons
btw, I tried having the aws profiles in both ~/.aws/config and ~/.aws/credentials, but with no luck
OH, different error. Sorry. You have 2 different errors. 1 not finding any credentials for some providers and 1 credential needing MFA, which is a different setup
1 sec. I’ll grab an example
all providers have different aliases
provider "aws" {
  profile                     = "appzen-admin"
  region                      = "us-east-1"
  skip_credentials_validation = true
  skip_get_ec2_platforms      = true
  skip_region_validation      = true
}

# Provider for each account: dev, qa, shared
provider "aws" {
  alias   = "dev"
  profile = "appzen-admin"
  region  = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::<account_id>:role/OrganizationAccountAccessRole"
  }
}

provider "aws" {
  alias   = "infra"
  profile = "appzen-admin"
  region  = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::<account_id>:role/OrganizationAccountAccessRole"
  }
}
This references a single profile in ~/.aws/config
thanks @Steven but I don’t have only 1 profile in ~/.aws/config. I have 1 profile with static creds in ~/.aws/credentials, then approx 5 dynamic profiles (which use source_profile to reference the static one in ~/.aws/credentials, or another profile which is an assumed role)
For example, bob@account-users =assume=> developer@account-dev =assume=> prod-admin@account-prod can be defined in a dynamic profile called account-prod which refs the developer assumed role in its source_profile, while dynamic profile account-dev refs the static profile account-users in its own source_profile = account-users
The profile I use across the providers is an assumed role which can assume all the roles in the other providers (at least via awscli)
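that chain would look roughly like this in ~/.aws/config (profile names and account ids are made up):
[profile account-users]
# static creds for this profile live in ~/.aws/credentials

[profile account-dev]
role_arn       = arn:aws:iam::222222222222:role/developer
source_profile = account-users

[profile account-prod]
role_arn       = arn:aws:iam::333333333333:role/prod-admin
source_profile = account-dev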
i think there is a bug in terraform and the aws provider, where if you use something like source_profile in your aws config, then assume_role does not work in the provider config
they currently have two different logic paths for resolving credentials, and one works with assume_role in the provider, and one does not
we’ve submitted a patch that uses the same logic path to resolve credentials, waiting on review…. https://github.com/hashicorp/aws-sdk-go-base/pull/5
Fixes #4 Cleans up credential obtaining logic. NOTE: I contributed the credential process provider to the underlying AWS SDK Go. Proposal: Ensure creds obtained (e.g., session-derived) before atte…
So far, I’ve been doing the assume role in either the terraform provider or the aws profile, but have not combined them as you’re trying to do. I’d try simplifying the setup, then adding the additional layers one at a time. This will let you prove if it is a current limitation
Is it possible to check if an S3 bucket exists before creating one?
I don’t believe so, but you could import an existing bucket into your terraform state if you want to take over management of that bucket.
If you know a bucket exists and just want to reference it, use a data source.
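e.g. a minimal sketch (the bucket name is made up):
data "aws_s3_bucket" "existing" {
  bucket = "my-existing-bucket"
}

# then reference it elsewhere, e.g. "${data.aws_s3_bucket.existing.arn}"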
I want to use the same bucket for 2 different workspaces and am trying to figure out the best way to do it
@rohit in terraform or just in general?
e.g. aws s3 ls <s3://klnasdlkjasdlka> >/dev/null 2>&1 || some command
in terraform
You could have one tf deployment that defines it and treat the statefile as a datasource in the other two.
@David Nolan Can you please elaborate on this ? I am not following
You can reference objects defined in another tf config by using the statefile as a datasource via the remote_state type. https://www.terraform.io/docs/providers/terraform/d/remote_state.html
Accesses state meta data from a remote backend.
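a minimal S3-backend sketch (names are made up):
data "terraform_remote_state" "shared" {
  backend = "s3"

  config {
    bucket = "my-tf-state"
    key    = "shared/terraform.tfstate"
    region = "us-east-1"
  }
}

# 0.11 syntax: outputs of that state are exposed directly,
# e.g. "${data.terraform_remote_state.shared.bucket_name}"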
The problem is i want to use the same bucket in 2 different workspace. For example: i have 2 qa environments, they are under 2 different terraform workspace and i want to use the same bucket for both
^^ Any initial thoughts? State diff is nice, I guess, but not sure if it’s compelling enough at this point
I’m experiencing an interesting issue with the lambda resource. when I provide the kms_key_arn hardcoded (or any other method), on the first terraform apply the lambda function is created without the kms key. when I run tf apply again, it now has the right cmk key. Is this some bug or am I missing something?
terraform 0.12 checklist (a punch list for everything you need to do to migrate to 0.12) #terraform-0_12 https://github.com/hashicorp/terraform/pull/21241
(Please note that this PR is targeted at the v0.11 maintenance branch, not at the master branch.) There's a small set of tasks that are easier to do if handled before upgrading to Terraform 0.1…
The output of this tool is in GitHub-flavored Markdown format so it can easily be pasted into a Markdown-capable issue tracker, like GitHub issues. Here is an example of output from a tailored configuration I wrote to show off some of the different checklist item types:
neat idea
nice tks!
is there an eta for 0.12 yet?
anyone here terraforming eks and installing a chart w/ the helm provider?
ran into this issue awhile ago: https://github.com/terraform-providers/terraform-provider-helm/issues/195
Terraform Version v0.11.1 Affected Resource(s) helm provider Terraform Configuration Files provider "helm" { version = "~> 0.7.0" //debug = true install_tiller = false servic…
part of me thinks i’m configuring it wrong since i’m the only one that has that github issue in half a year
my helm provider config:
provider "helm" {
install_tiller = true
tiller_image = "gcr.io/kubernetes-helm/tiller:v2.11.0"
service_account = "${kubernetes_service_account.tiller.metadata.0.name}"
namespace = "${kubernetes_service_account.tiller.metadata.0.namespace}"
kubernetes {
host = "${module.eks_cluster.endpoint}"
cluster_ca_certificate = "${base64decode(module.eks_cluster.certificate_authority_data)}"
token = "${data.aws_eks_cluster_auth.eks.token}"
load_config_file = false
}
}
I’ve attempted to comment out the token, but then i get an Unauthorized error
Will doing a make clean on the reference-architecture break things? I’m having an issue completing a make children where it fails on the Security account.
2019-05-18
2019-05-20
guys, trying to use https://github.com/cloudposse/terraform-aws-ec2-instance
Terraform Module for providing a general EC2 instance provisioned by Ansible - cloudposse/terraform-aws-ec2-instance
does anyone have an example of using it? can’t seem to correctly define the ssh_key_pair since it’s not being created as an aws_key_pair resource
Hi guys! On the 28th of May, I will speak about Terraform AWS modules and some best practices at an AWS meetup in Mountain View. RSVP here https://www.meetup.com/awsgurus/events/261055503/ . Please share and join. If you are local around the SF Bay Area, use Terraform, and want to meet and (maybe) like coffee as much as I do - we have to meet 26-28th of May
Tue, May 28, 2019, 6:00 PM: Schedule and Agenda: 6:00 - 6:30: Arrive and Network! 6:30 - 6:40: Announcements and sponsors recognition. 6:40 - 8:00: Presentation and demos: Terraform AWS modules and best-
Anyone have a good resource for blue/green deployments with CodeDeploy?
…specifically related to a TF implementation.
of what though?
e.g. blue/green of ECS tasks, kubernetes deployments, ec2 autoscale groups, etc
apologies. ECS
Aha, most of the solutions I see around that are using terraform to call cloudformation with ASGs
You’ve seen those?
@endofcake has this post https://medium.com/@endofcake/using-terraform-for-zero-downtime-updates-of-an-auto-scaling-group-in-aws-60faca582664
A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…
Will check that out. thx
You may have to roll your own, @johncblandii . Here’s a good overview of the main approaches https://youtu.be/jO_LMD-YAFQ.
CodeDeploy has it. That’s 2015, but I’ll check it out.
2019-05-21
Hi, I want to create a Logic App using terraform. The script should create a “blank logic app” and should mimic the “Logic apps designer”. Is this possible using Terraform?
Hello all, I’m getting into the “random_string” resource and reading the random provider docs here: https://www.terraform.io/docs/providers/random/index.html#resource-quot-keepers-quot-
The Random provider is used to generate randomness.
I’m using “keepers” in order to decide when to generate a new string … but I’m getting some weird errors:
* module.ecs-service.random_string.ecs-service-suffix: keepers (ordered_placement_strategy): '' expected type 'string', got unconvertible type '[]interface {}'
Do all the values in the “keepers” map need to be of type string?
yes
damn
My issue is that ecs_service resources are idempotent
I want to be able to create_before_destroy on an ecs_service … but two ecs services can’t have the same name
I’m solving this by appending a random string to the end of the ecs service name
the doc does say “arbitrary” keys/values, but that error is definitely indicating it must be a string
I’d like to use random_string.keepers to watch all the attributes that would force the recreation of the ecs service
ordered_placement_strategy is a list of map
if you have a list, you can try join(" ", <list>) or somesuch
oi
list of maps
yeah, so i need to convert that list of maps to a string? then back into a list of maps when I’m actually using it?
well no need to convert back as you’d still have the original
but i think otherwise yes
I need to convert it back because it needs to be passed through the random resource
# ["${var.ordered-placement-strategy}"]
ordered_placement_strategy = "${random_string.default.keepers.ordered-placement-strat}"
if that makes sense
I’m trying to set up a Google Cloud Load Balancer and one step requires updating the named ports on the managed instance groups for which I need a formatted string to generate the command-line call…
not sure how to convert a list of maps to a string though
you can try this https://www.terraform.io/docs/configuration-0-11/interpolation.html#jsonencode-value-
Embedded within strings in Terraform, whether you’re using the Terraform syntax or JSON syntax, you can interpolate other values into strings. These interpolations are wrapped in ${}, such as ${var.foo}.
Is there a jsondecode()?
but yeah, the encode did successfully convert it to a string
because yeah, it’s not being read correctly
Error: module.ecs-service.aws_ecs_service.default: "ordered_placement_strategy.0.type": required field is not set
you could use depends_on instead to preserve the tree
yeah, checking the source and the schema looks like it’s just a TypeMap… https://github.com/terraform-providers/terraform-provider-random/blob/master/random/resource_string.go#L21
Terraform random provider. Contribute to terraform-providers/terraform-provider-random development by creating an account on GitHub.
which should allow nested lists…
anyway gotta run, good luck!
Yeah, so jsonencode() worked to convert the list of map to a string, but converting it back is going to be the tough part
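one way to sidestep the round-trip, as a sketch (0.11 syntax; names are made up): serialize the list into the keeper, but keep consuming the original variable in the service, since the keeper only needs to change to force a new suffix:
resource "random_string" "suffix" {
  length  = 8
  special = false

  keepers = {
    # keepers values must be strings, so serialize the list of maps
    placement = "${jsonencode(var.ordered_placement_strategy)}"
  }
}

resource "aws_ecs_service" "default" {
  # a keeper change yields a new suffix => new name => create_before_destroy can swap
  name            = "${var.name}-${random_string.suffix.result}"
  cluster         = "${var.cluster}"
  task_definition = "${var.task_definition}"
  desired_count   = 1

  # consume the original variable directly; no need to decode the keeper
  # (0.11-style list-of-maps assignment, as in the thread above)
  ordered_placement_strategy = ["${var.ordered_placement_strategy}"]

  lifecycle {
    create_before_destroy = true
  }
}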
still willing to entertain solutions
maybe an external data source to do the conversion?
hm
does an update to “ordered_placement_strategy” force a new ecs_service resource?
I believe it does …
I think that’s one of the unchangeable attributes of ecs services, in the aws api
The external data source docs seem to be a little … confusing to me
well uh
I got it figured out
2019-05-22
hey everyone! Is there an easy way to also store/export/save apply outputs to SSM Parameter Store? The main reason being so that they’re consumed by other tools/frameworks which are non-Terraform?
Provides a SSM Parameter resource
Tried this already ?
output is a reference to a resource; as @aaratn pointed out, you can reference the resource in the value parameter and set your ssm param.
good tip @aaratn but I haven’t tried it because it essentially means defining another resource for each param I’d like to create in SSM and that’s too much overhead - I shouldn’t have to think about SSM but automatically upload a set number of outputs that I get from terraform state list and then terraform state show
@Nikola Velkovski yes, but I still need to define in TF as many of those ssm_parameter as VPCs, subnet_ids, etc
I get it now, you want something in between that reads the outputs and stores them in ssm.
well for starters jq is your friend
maybe lambda eventually ? If you store the state in s3, theoretically it can trigger a lambda which will do that for you.
if only terraform state show <module.etc.id> would return JSON
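terraform output -json does return JSON though, so something along these lines could work (a rough sketch; the /tf/ parameter path is made up):
# push every top-level output into SSM as /tf/<output-name>
terraform output -json \
  | jq -r 'to_entries[] | [.key, (.value.value | tostring)] | @tsv' \
  | while IFS=$'\t' read -r key value; do
      aws ssm put-parameter --name "/tf/${key}" --value "${value}" --type String --overwrite
    done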
Hi all, has anyone been able to use the https://github.com/cloudposse/terraform-aws-key-pair module?
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
I would like to see an example of how you created the keypair resource from the module.
This is how I used it but it fails because the key isn’t generated yet.
here https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/main.tf#L36 we use https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair
in your example using https://github.com/cloudposse/terraform-aws-key-pair, you don’t have to use resource "aws_key_pair" since the module itself 1) generates keys; 2) writes them to AWS https://github.com/cloudposse/terraform-aws-key-pair/blob/master/main.tf#L27
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
@Andriy Knysh (Cloud Posse) Thanks, I will give that a try.. I am guessing I can use ${aws_key_pair.generated} as the key name when I am using a launch configuration
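a sketch of the wiring (variable and output names taken from the repo’s README at the time; double-check them against the version you pin):
module "ssh_key_pair" {
  source              = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=master"
  namespace           = "eg"
  stage               = "dev"
  name                = "app"
  ssh_public_key_path = "${path.module}/secrets"
  generate_ssh_key    = "true"
}

resource "aws_launch_configuration" "default" {
  image_id      = "${var.image_id}"
  instance_type = "t2.micro"

  # use the module's key name instead of a separate aws_key_pair resource
  key_name = "${module.ssh_key_pair.key_name}"
}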
Is anyone doing blue/green deployments in AWS/GCP etc with Packer and Terraform?
I remember it being a pain last time I looked… there seem to be some interesting articles on the topic though: https://medium.com/@kemra102/blue-green-deployments-in-aws-with-terraform-2755942d4090
Lately I’ve been using Terraform more and more as we use it in my day job very extensively. I do think Terraform has some niceties over…
depends on where you want to deploy, EC2, ECS, EKS etc.
yea I’ve watched videos all over
it’s a great concept if you can get it going
any ideas on how to jsonencode every value in a list, when the values may not be strings?
* local.encoded: local.encoded: formatlist: list has non-string element (string) in:
${formatlist("${jsonencode("%v")}", local.values)}
Public/Free Office Hours with Cloud Posse starting now!!
tf 0.12.0 has dropped, https://github.com/hashicorp/terraform/releases/tag/v0.12.0
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amon…
oh, and 0.12 has a jsondecode function, nice
@cabrinha think you were looking for that yesterday?
yes, but im not on 0.12
just meant as an fyi, for future use
I’m waiting on depends_on for modules, but it didn’t make the release
Trying to stand up an eks cluster. Everything seems ok but I see this in the events:
kube-system 0s Warning FailedScheduling pod/coredns-7f66c6c4b9-v5g7h no nodes available to schedule pods
if you are using the terraform-aws-eks-cluster module, did you apply this https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/kubectl.tf ?
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
I actually had the same issue yesterday with a new cluster I spawned. Going on the node, I found out that the script that starts Kubelet with the extra args I provided worked just fine, Kubelet was running, but I had an issue with some rights. Still not working but I hadn’t the time to look more into this. (for info, I used exactly the same vars that I used for a cluster I spawned 2 months ago and worked just fine). I will keep you up to date when I’ll find out what the issue is.
Looking more in depth it seems to be an issue with the authenticator, just activate the logs on the EKS cluster and you will see that you can login with your user but the mapping with the EC2 role is kind of broken. I deployed it on another account with the exact same template and it did work… The only difference is that I changed the cluster version for Kubernetes 1.12
I found out my issue: I am deploying this cross-account, and the local-exec that tries to create the config-map aws-auth failed, so the nodes cannot authenticate to the cluster.
This is all because the switch role is misconfigured somewhat
So if you deploy an EKS cluster with an assumed role, you have to add the assumed role arn within a variable like this:
kubeconfig_aws_authenticator_additional_args = ["-r", "arn:aws:iam::<account_id>:role/<role_name>"]
@Andriy Knysh (Cloud Posse) No I did not. I just added that and will try again.
@Andriy Knysh (Cloud Posse) Ok, I added the kubeconf.tf to my apply but still seeing the same issue when I stand up a new cluster.
It may be that our internal tooling is generating an MFA iam arn and it’s not compatible with this cluster setup. It generates iam for kops.
did you apply the k8s config from kubeconf.tf?
I added apply_config_map_aws_auth, should I do something else?
@Andriy Knysh (Cloud Posse) I applied the config_map_aws_auth output manually and I think it’s up and running. I thought it would auto-apply that
you have to set the variable to true and then run terraform apply
2019-05-23
We are very proud to announce that Terraform 0.12 is officially released. Terraform 0.12 is a major update that includes dozens of improvements and features spanning the breadth a…
terraform 0.12 is here
I just installed it, I’m looking forward to playing around with it
I’m looking forward to someone with time and energy writing a script that can transform my .tf’s from 0.11 to 0.12
I believe they are planning to release a converter
The 0.12upgrade subcommand automatically rewrites existing configurations for Terraform 0.12 compatibility.
terraform 0.12 has a 0.12upgrade command
does it work fine?
I tried it on a small example project all was fine
nice! i have hundreds of files to migrate
Christmas in May
Yeah, I saw a demo of it last year and it was pretty legit at converting and suggesting updates.
I look forward to never being able to upgrade because we use >=3 seemingly unsupported providers that may never be upgraded ;(
Hi guys, we have been using some of your modules and first of all thank you for all the hard work
But I have a question about : https://github.com/cloudposse/terraform-terraform-label
what is the purpose of it ? I get the naming convention idea but having to call the module for every instance seems weird
humans are bad at consistently naming things. if we are to support consistent naming whereby the delimiter may be different (since the delimiter is a parameter), we need to invoke it for each resource and cannot just assume - as the delimiter
(maybe i am missing the point though of your question)
I guess where I get confused is:
if we have a naming convention and it’s based on automated pipelines pushing terraform changes, then it will not be possible to fall out of the naming standard; therefore I could use a map variable to do it and populate it from the CD pipeline etc
so I don’t know if this is because you create modules that are used in the community and want to keep consistency?
I don’t know how you guys run your pipelines so there is a lot of assumptions in my comments
I guess where it all started is in this file, which, now that I see it, is using a very old tag of the label module
naming convention and it’s based on automated pipelines pushing terraform changes, then it will not be possible to fall out of the naming standard
the problem with this is nested modules and multiple invocations of modules. Modules need to know how to name things and that cannot be external to the module.
so I don’t know if this is because you create modules that are used in the community and want to keep consistency
Yes, we need to ensure all the modules work with each other. To simplify the calculus, we need to standardize the naming so we don’t cause collisions.
it makes sense
that now that I see it is using a very old tag of the label module
We don’t always keep module versions up to date since we don’t have a feasible way to do regression testing against so many modules.
understood
I think I fail to understand the use case
my team uses (a forked version of) this to ensure that we have consistent tags on AWS resources
ohhh ok, we use a map variable and add var.project-name to the Name tag
like so :
tags = "${merge(var.resource_tags, map("Name", "support-iq"))}"
where support-iq could be var.project-name too
yep: tags = "${module.label.tags}"
But our version adds in a couple extra tags we want on our resources as well
If we were smarter, we probably could have just wrapped it in another module that adds our extra mojo and turtles all the way down…
the label module is for 1) consistent and unique resource ID generation; 2) consistent tagging
if you want to name your resources (across regions, accounts, companies) consistently and uniquely, you need to come up with something similar to namespace(org)-stage-name-additionalAttributes, where - is the delimiter (also configurable)
instead of doing it manually for every module, we invoke the label module to do it
ohhhh I c ok, so it’s like I created my own module to create the map I use in all my modules, to avoid duplication of that code and sanitize etc
yes
I see now
second, uniqueness across companies and accounts, especially for global resources like S3 buckets
so we can name our buckets in diff environments: cp-prod-app, cp-staging-app, cp-dev-app; you can name your buckets using the same modules: pepe-prod-app, pepe-staging-app, pepe-dev-app
exactly
cool, I get it
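for anyone following along, the basic shape is (a sketch; see the repo README for the full set of inputs):
module "label" {
  source    = "git::https://github.com/cloudposse/terraform-terraform-label.git?ref=master"
  namespace = "cp"
  stage     = "prod"
  name      = "app"
  delimiter = "-"
}

resource "aws_s3_bucket" "default" {
  # yields e.g. "cp-prod-app"
  bucket = "${module.label.id}"
  tags   = "${module.label.tags}"
}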
I have another question: we have many aws accounts, one per team; each account will have a number of vpcs peered to the monitoring account, and in different regions too. we have been thinking of creating accounts automatically and creating and peering vpcs automatically, so in that case the S3/dynamo state file will be owned by the automation system atlantis/jenkins (or whatever), but then each team has admin access to their account and starts creating whatever they need (ECS, EKS, fargate etc) on top of those pre-created vpcs. so we are trying to find a good approach to share/query the statefile that created the account, to be able to find the vpc IDs, subnets etc. I was thinking that, using the label module and consistent naming, we could just use a data resource and find the vpcs based on their names. what do you guys do in these cases?
I have seen examples where everything is created in the same main.tf (vpc+infra+policies etc) but in our case we want to layer that so that the user just creates the stuff they need for their app to run
i would suggest remote states, that should let you pick up the vpc id of a vpc that’s in another tfstate
will work on the rest of the new features for terraform-lsp on the weekend
we were thinking of using remote states, but it’s a bit more complicated for us since we all assume-role to different accounts, and each team owns and supports their infra but not the vpc or role settings. so if we use the “management” account to create the remote state s3 bucket, that will mean we have to create a read-only IAM role for each team to be able to read the state file
but maybe that’s not such a bad idea
do any of you have examples of a main.tf that can read state from one s3 bucket and save the state in another one?
we do not want people to be able to modify the state file that was created by us, the “SRE” team
in the terraform config that you manage, you can also push values into an s3 bucket or ssm path, which you then grant their roles access to based on key/path. then they do not need access to your tfstate
that can be ssm/s3 in their account, or in yours, with granular resource-based policies
What about an “exports” terraform configuration, managed by SRE, which can read your main state files and defines outputs to map the important bits from those files into its own state. Then that state file is stored in a bucket you grant everyone who needs it read-only access to. That way you’re not exposing any secrets that might be stored in the actual state files, only a small set of chosen outputs which you propagate through to where others can read them.
Two very good ideas
When I worked at EA we used Consul to store all this so then it was very easy to query the K/V for those values
You could even populate that with terraform directly
exactly, but here we do not use Consul so SSM/KMS is a pretty good option
@David Nolan so your idea is to use outputs to create another state file ?
it would be cool if you could export object values, something like module.vpc, so all attributes are available without having to do it one by one
If there is risk in either granting “everyone” read access to the existing state files, or other contents in the same buckets, then an intermediary tf configuration to essentially export those values that matter should work
but how do you export part of a state file?
output "foo" { value = "${data.terraform_remote_state.some_output_name}" }
and then terraform output foo? to json or something?
and then terraform apply, and your state file will contain that output.
then in the other group’s terraform configs they also use a terraform_remote_state data source to read from the intermediary state file, and all they will be able to see is the values you chose to output to them.
whereas if they can read the original state file they could fetch it directly and read everything.
and the resource terraform_remote_state is just a file on s3?
yes, in a bucket you grant the other teams access to
sorry, I’ve never done this before, so excuse the stupid questions
no worries.
it’s a bit convoluted for sure
There would end up being 3 tiers of terraform configs.
- A) Your central SRE configs that manage the VPC, etc.
- B) A second tier of “exported” configs which merely expose values from tier A to tier C
- C) The configs managed by other teams which pull from B as a data source (see the sketch below)
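for tier C, something like this (again, the names are made up and match the sketch above):
# team config: read only the exported values from the intermediary state
data "terraform_remote_state" "exports" {
  backend = "s3"
  config {
    bucket = "shared-exports-tfstate"
    key    = "exports/terraform.tfstate"
    region = "us-east-1"
  }
}

# use an exported value wherever it's needed
resource "aws_security_group" "app" {
  name   = "team-app"
  vpc_id = "${data.terraform_remote_state.exports.vpc_id}"
}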
ok, if I understand this correctly: my SRE main.tf will have a remote state that is the one I do not want to share, and in the team-owned bucket I will create an output file that will be populated by
output "foo" { value = "${data.terraform_remote_state.sre.some_output_name}" }
-type outputs, and then the team’s project tf file will read that state file as an intermediate file from which they will be able to read the object attributes they need
Replace ‘output file’ with ‘a second statefile’
correct
I think you’re getting it
If your team has write permissions to their bucket, this is even easier: you just write the export module and store the state in their bucket.
I did not know you could output an object
I was assuming one global “exported state” bucket which you would grant every team read access to
I thought it was only individual attributes
I believe you can output strings, lists and maps. Anything you can store in a variable basically.
This isn’t outputting the entire object, just copying an output from A to B so it can be read by C
ohhh I c ok
In TF 0.12 this is clearer because what you see in the data source structure is ${data.terraform_remote_state.vpc.outputs.some_output_name}
I was just reading that
Accesses state meta data from a remote backend.
but that needs to read the SRE state file anyways
data "terraform_remote_state" "vpc" {
backend = "atlas"
config {
name = "hashicorp/vpc-prod"
}
}
The change in the data source object naming, I think, is to make it clear that only explicit outputs are available. But it also means they could add additional functionality side by side later.
this looks like a regression to me:
# Terraform >= 0.12
resource "aws_instance" "foo" {
  # ...
  subnet_id = "${data.terraform_remote_state.vpc.outputs.subnet_id}"
}

# Terraform <= 0.11
resource "aws_instance" "foo" {
  # ...
  subnet_id = "${data.terraform_remote_state.vpc.subnet_id}"
}
in 0.11 you could use the object
in 0.12 only outputs
weird
no it was always just the outputs, but the naming made it look like it was the object
the object would have been data.terraform_remote_state.aws_vpc.some_name.id
or data.terraform_remote_state.aws_subnet.some_name.id
subnet_id is already a defined output from the terraform config, but in 0.12 they changed where that is exposed in a terraform_remote_state data source in order to make that clear.
2019-05-24
ohhh I see ok
I think the confusion you had is the primary reason they changed it, but I also think it allows them to expose the entire object in the future.
exactly
@Andriy Knysh (Cloud Posse) question, I am planning out what else is needed for lsp, and want to ask, do you guys care about completion + inspection of remote state?
(using terraform_remote_state)
that would be fantastic :slightly_smiling_face: many people use terraform_remote_state
2019-05-25
hello, i’ve noticed an issue while using modules. https://hastebin.com/lugodiyiho.cs
@AleksandarN you’re using v0.12. I don’t think any of the cloudposse modules have been upgraded for that yet
Steven. Ok. I’m gonna downgrade to an earlier version.
works on 0.11.14
Should a version marker be added to the cloudposse modules? And maybe start creating branches for 0.12 support
is it possible to use Default CloudFront Certificate (*.cloudfront.net)?
it was resolved by adding acm_certificate_arn
2019-05-26
is it possible to configure the Restrict Bucket Access option on a cloudfront origin?
Restrict access to your Amazon S3 content by creating and using an origin access identity.
is that what you want to do?
@AleksandarN AFAIK it’s not possible, but it could have changed recently. When we tried, AWS said we’d need to open up the origin to pretty much every single AWS IP, which very well could be an attacker, so we never moved to CloudFront because of that
2019-05-27
Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7).
This is an opportunity to ask us questions on terraform
and get to know others in the community on a more personal level. Next one is Mar 20, 2019 11:30AM.
Add it to your calendar
https://zoom.us/j/684901853
#office-hours (our channel)
Posted this in another Slack last night but didn’t receive any answers, so thought I’d ask here too… is there any way to disable marking certain attributes as sensitive in plan output? i want to see what this is going to be changed to
Terraform will perform the following actions:
~ module.server_ini.aws_ssm_parameter.main
value: <sensitive> => <sensitive> (attribute changed)
Plan: 0 to add, 1 to change, 0 to destroy.
I think I found a bug in the docs for this module and some examples
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
the readme says
private_subnet_ids = ["xxxxx", "yyyyy", "zzzzz"]
but the variable input name is subnet_ids
@jose.amengual please open an issue
no problem
done
thanks!
I’ve integrated our test-harness into terraform-null-label, as well as added a basic terratest implementation.
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Collection of Makefiles and test scripts to facilitate testing Terraform modules, Kubernetes resources, Helm charts, and more - cloudposse/test-harness
the test-harness is using bats
this is one step in the direction for supporting 0.12. basically, as we undertake this massive effort, I want to introduce better testing.
2019-05-28
Hello, could you tell me please what I am doing wrong? Where is my mistake? My code looks like this:
- database is created
- using the ReadWrite user, I can create a table and execute an INSERT query into it
CREATE TABLE employee(phone VARCHAR(32), firstname VARCHAR(32), lastname VARCHAR(32), address VARCHAR(64), company VARCHAR(32));
INSERT INTO employee(phone, firstname, lastname, address, company) VALUES('+10000000000', 'John', 'Doe', 'Country', 'Company Ltd.');
But using the user ReadOnly, I can’t execute a SELECT query from the database. Error: permission denied for relation employee
SELECT * FROM employee ORDER BY lastname;
The logic is as follows:
- Create an owner role
- Create a database with this owner
- Create the group roles ReadWrite and ReadOnly (login = false)
- Grant the roles from item #3 rights on the database’s tables and sequences
- Create the ReadWrite and ReadOnly users and add them to the group roles from item #3
Trying this on AWS RDS (PostgreSQL)
what PG client did you use?
via fusillicode#3122 on Discord. Following the instructions here does not work on RDS. Results in error: sql> GRANT SELECT ON ALL TABLES IN SCHEMA information_schema TO hasurauser [2019-02-28 13…
created a postgresql instance on AWS with the username ziggy. I restored a database to that instance. however I cannot even select any of the tables select * FROM mac_childcare_parcels gives me …
maybe some help there ^
A blog written by PoAn (Baron) Chen.
@evgmoskalenko as ^ mentions, the order of operations is important
Thanks.. But I want to manage the database via Terraform. Create a database, users, roles, grant privileges.
And only then start a service that will create the migrations and tables
@Andriy Knysh (Cloud Posse), How do you create and manage databases, users, and roles in your infrastructures? How do you then update a user’s password, or add a new user with ReadOnly permissions on the database?
Via code or manually? Terraform or SQL scripts?
Thanks..
it depends on many things. We did not do all of that with terraform. What we usually do with terraform is to create the infrastructure and the database with the master user/password, and then write the user/password into SSM param store for later consumption by the app (using chamber for example)
how to create other users and have the app use them depends on many things as well
but this is more about database administration, not infrastructure provisioning
could be done using SQL scripts
you could do it in terraform, but the order of operations is important (e.g. you need to create a table first and then give permissions to the user to use the table)
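e.g. with the postgresql provider, something along these lines (just a sketch, untested — the names and references here are made up; the key point is that a plain GRANT only covers tables that already exist, while default privileges cover tables the owner creates later):
provider "postgresql" {
  host     = "${aws_db_instance.default.address}"
  username = "master"
  password = "${var.master_password}"
}

resource "postgresql_role" "readonly" {
  name  = "readonly"
  login = false
}

# SELECT on tables that already exist
resource "postgresql_grant" "readonly_tables" {
  database    = "app"
  role        = "${postgresql_role.readonly.name}"
  schema      = "public"
  object_type = "table"
  privileges  = ["SELECT"]
}

# and make future tables created by the owner readable too
resource "postgresql_default_privileges" "readonly_tables" {
  database    = "app"
  role        = "${postgresql_role.readonly.name}"
  owner       = "app_owner"
  schema      = "public"
  object_type = "table"
  privileges  = ["SELECT"]
}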
Guys, I know of this tool
Small tool to convert an IAM Policy in JSON format into a Terraform aws_iam_policy_document - flosell/iam-policy-json-to-terraform
neat, hadn’t seen this tool
works fine as long as your source policy is written in perfect aws-json format
but does anyone know of a similiar one that can do it the other way around?
Can you just use the TF code to create the policy in IAM and copy from there?
Yes sure, but the idea was to be able to convert multiple already written scripts
Anyone know the new format? I couldn’t wrap my head around the documentation. Something to do with dynamic
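if it’s the 0.12 dynamic blocks you mean, the shape is roughly this (a sketch — the variable and resource names are made up):
variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "example" {
  name = "example"

  # one ingress block is generated per element of var.ingress_ports
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}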
Hi guys, can anyone enlighten me as to why cloudposse/terraform-aws-cloudtrail doesn’t implement an enabled flag? turns out we have a need to selectively enable a cloudtrail in certain accounts, and i thought the enabled flag was kind of the pattern here….?
was wondering if the fact that enabled is not present was intentional…? we could create a PR to add it? cc @Erik Osterman (Cloud Posse)
Not an official response here, but I recommend just creating a PR for that feature. So far I haven’t seen the cloudposse admin’s refuse any reasonable feature submission. (This sounds like it was just an oversight to me.)
@chris the module was created before we started using the enabled flag. PRs are always welcome, thanks
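(for reference, the usual enabled pattern in the 0.11-era modules is a count toggle — a sketch, not the actual module code:)
variable "enabled" {
  default = "true"
}

resource "aws_cloudtrail" "default" {
  # nothing is created when enabled is set to "false"
  count          = "${var.enabled == "true" ? 1 : 0}"
  name           = "default"
  s3_bucket_name = "${var.s3_bucket_name}"
}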
@Andriy Knysh (Cloud Posse) thanks, makes sense! will submit one…
hopefully it’s sane
thanks, reviewed, a few comments
Hi guys, I was playing with and ended up forking it since I plan to use CodeDeploy, but even before that I have a problem with the ALB resource, since it is not done being created before the target group
aws_ecs_service.bluegreen: InvalidParameterException: The target group with targetGroupArn arn:aws:elasticloadbalancing:us-east-1:4444444:targetgroup/dev-fargateecs-app/07be25bc125ab2a0 does not have an associated load balancer.
so I was wondering if I could do something to hack a depends_on at the module level?
we do terraform apply two times
depends_on does not exist at the module level
but, for example, you can add the dependent module ID to the other module tag, creating implicit depends on
mmm could you please give me a quick example ?
what do you mean by module.id?
like a resource id from the module?
moduleA.id add to some tag of moduleB. In this case, Terraform should create moduleA first and only then moduleB, creating implicit depends_on
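something like this (a sketch — the module names and outputs are made up):
module "alb" {
  source = "./alb"
}

module "ecs_service" {
  source = "./ecs-service"

  tags = {
    # referencing an output of module.alb forces Terraform to create
    # the ALB module first — an implicit depends_on between modules
    ALB = "${module.alb.alb_arn}"
  }
}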
ohhh I see ok, I will try that
Another question: in https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/blob/master/main.tf#L192 I can see you guys define only egress and ICMP ingress for the service task, and you pass a list of SGs from var.security_group_ids, which in https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/examples/without_authentication/main.tf#L116 comes from the default VPC security group
But why use the default VPC security group instead of:
resource "aws_security_group_rule" "ecs-servicetask-allow-alb-ingress" {
security_group_id = "${aws_security_group.ecs_service.id}"
type = "ingress"
from_port = 0
to_port = 0
protocol = "-1"
source_security_group_id = "${var.security_group_ids.[0]}"
where the var.security_group_ids.[0] is = to the aws_alb SG id
isn’t that more secure ?
we always delete the default VPC SG
but I wonder if you guys deploy a VPC per app and that is why you prefer to use the default vpc sg instead
there is no particular reason for doing that. It was probably done b/c the module was originally used in terraform-ecs-web-app, which is very opinionated and is deployed in a separate VPC (we use it to deploy atlantis)
if you think of a better way, please submit a PR, we’ll review promptly
awesome, I will
@Andriy Knysh (Cloud Posse): @Daren was wondering if there’s an easy fix for https://github.com/cloudposse/terraform-aws-s3-bucket/issues/11
If I create a new bucket and pass in a policy and setup allow_encrypted_uploads_only the policy is ignored and bucket policy only contains allow_encrypted_uploads_only related statements.
(not urgent)
2019-05-29
public #office-hours starting now! join us here: https://zoom.us/j/684901853
2019-05-30
@Andriy Knysh (Cloud Posse) so intellij-hcl released 0.7.0 for HCL2, but I tested it and it looks like it does not support for_each completion, dynamic content completion, nested variables completion, etc.
@Julio Tain Sueiras i did not test it yet, but thanks for letting me know
Hi everyone, I was using the module terraform-aws-modules/vpc/aws with terraform version 0.12, which gave the error
Error parsing .terraform/modules/359629d31c12c09f870d559d03898da7/terraform-aws-modules-terraform-aws-vpc-e99089a/main.tf: At 2:23: Unknown token: 2:23 IDENT max
but then I read the issue https://github.com/terraform-aws-modules/terraform-aws-vpc/issues/267 so I changed the module version to ~>v1.66
after that I started getting some different errors, described in the snippet.
When running terraform init I am getting: Error downloading modules: Error loading modules: module vpc: Error parsing .terraform/modules/b7ccc849f6df97c277b2bf6e0054b489/terraform-aws-modules-terra…
I am using terraform enterprise and tried downgrading the terraform version but it didn’t work.
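(for reference, pinning a registry module looks something like this — a sketch with placeholder values:)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 1.66"

  name = "my-vpc"
  cidr = "10.0.0.0/16"
}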
@Vidhi Virmani ask in #terraform-aws-modules
2019-05-31
I’m getting both terragrunt and terraform stuck at random_string.bucket_prefix: Refreshing state... (ID: none) when trying to destroy a bunch of terraform-aws-codebuild modules - https://github.com/cloudposse/terraform-aws-codebuild
Terraform Module to easily leverage AWS CodeBuild for Continuous Integration - cloudposse/terraform-aws-codebuild
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) @Igor Rodionov @jamie do any of you know anything about this?
Could be a network issue or something similar. Try again. What does terraform plan show?
@Andriy Knysh (Cloud Posse) solved it! I was calling your modules from another wrapper module of mine that used providers with profiles requiring MFA. I spotted it in the logs