#terraform (2021-11)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-11-02
Hi folks, have any of you used Terraform to bootstrap IAM roles for a new AWS account? We’re deploying an environment using CloudPosse’s Terraform modules but we also need to bootstrap the roles for each new account. It would be great if you could point me to some information about this
@Grubhold we use components to manage this. please see our docs.cloudposse.com
• iam-delegated-roles (note: our components are often out of date relative to current usage)
i thought those components used our upstream role module but it doesn’t appear to do so
basically, primary roles are provisioned in a single (identity) account and delegated roles are assumable roles in child accounts; the primary roles are used to assume the delegated roles.
hope that helps!
@RB thanks for your reply. By components do you mean the CloudPosse modules? Does this also apply to accounts that are totally separated (different clients)? I’m not sure I understood the difference between primary and delegated roles; are those cross-referenced from each other?
Check this document out on how stacks and components are used https://docs.cloudposse.com/reference/stacks/
components are root level modules that use cloudposse upstream modules
I think I just found a hidden gem, thanks for the reference. Going through it right now
dimensions = {
TargetGroup = var.tg_element_number != "" ? aws_lb_target_group.lb_target_group[var.tg_element_number].arn_suffix : aws_lb_target_group.lb_target_group[count.index].arn_suffix
LoadBalancer = data.aws_lb.lb.arn_suffix
}
got error
Error: Invalid index
on ../aws_cloudwatch_metric_alarm.tf line 94, in resource "aws_cloudwatch_metric_alarm" "alarm":
94: TargetGroup = var.tg_element_number != "" ? aws_lb_target_group.lb_target_group[var.tg_element_number].arn_suffix : aws_lb_target_group.lb_target_group[count.index].arn_suffix
|----------------
| aws_lb_target_group.lb_target_group is tuple with 2 elements
| var.tg_element_number is "8"
The given key does not identify an element in this collection value.
why is that?
looks like tg element number is 8 and the tuple only has 2 elements
try changing the tg element number to 0 or 1
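or guard the index; a minimal sketch using the names from the snippet above (tonumber and length are Terraform built-ins):
dimensions = {
  TargetGroup = (
    var.tg_element_number != "" && tonumber(var.tg_element_number) < length(aws_lb_target_group.lb_target_group)
  ) ? aws_lb_target_group.lb_target_group[tonumber(var.tg_element_number)].arn_suffix : aws_lb_target_group.lb_target_group[count.index].arn_suffix
  LoadBalancer = data.aws_lb.lb.arn_suffix
}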
There’s a common pattern in cloud provisioning where you check to see if the resource already exists, and if not, create it. But this doesn’t seem possible to do in Terraform. What I would want is a data source that checks the existence of the resource: if it exists, use that resource, otherwise create it. However, if the resource doesn’t exist, the data source causes the whole run to exit non-zero. Thus you cannot have real idempotence with the behavior of data sources.
Yes, this is not possible at the moment
terraform considers its state as the view of the world: if it’s not in state, it doesn’t exist as far as terraform is concerned. this model obviously comes with trade-offs and benefits.
you can import an existing resource into the state, but there’s no automatic import-or-create as yet
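e.g., importing an existing bucket under a hypothetical resource address:
terraform import aws_s3_bucket.logs my-existing-bucket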
2021-11-03
I’m thinking about using a cloud posse module for cloudfront-s3 in a production environment with pretty strict governance. (https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn) There is already an s3 module in the company that is certified and maintained. I’d like to ensure that I can use any arbitrary s3 bucket with this module. I think that I can, but I wanted to be sure before going too far down this path.
Terraform module to easily provision CloudFront CDN backed by an S3 origin - GitHub - cloudposse/terraform-aws-cloudfront-s3-cdn: Terraform module to easily provision CloudFront CDN backed by an S3…
yes, use this var https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn/blob/master/variables.tf#L45
Terraform module to easily provision CloudFront CDN backed by an S3 origin - terraform-aws-cloudfront-s3-cdn/variables.tf at master · cloudposse/terraform-aws-cloudfront-s3-cdn
Terraform module to easily provision CloudFront CDN backed by an S3 origin - terraform-aws-cloudfront-s3-cdn/main.tf at master · cloudposse/terraform-aws-cloudfront-s3-cdn
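Something like this, assuming the variable at that line is origin_bucket (line numbers drift between versions, so check the pinned release):
module "cdn" {
  source  = "cloudposse/cloudfront-s3-cdn/aws"
  version = "x.y.z" # pin to a real release

  namespace = "eg"
  stage     = "prod"
  name      = "assets"

  # hand the module your certified bucket instead of letting it create one
  origin_bucket = "my-certified-bucket" # hypothetical bucket name
}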
Perfect. Thanks!
Hey comrades, anyone familiar with setting up EKS with EMR via terraform?
Could it be done by using the CP terraform eks stack along with the CP terraform emr cluster stack? Assuming the appropriate perms and policies are set.
Not sure if this got talked about in #office-hours, but some awesome functionality coming to v1.1: Config-driven refactoring. Docs: https://github.com/hashicorp/terraform/blob/1ca10ddbe228f1a166063f907d5198f39e71bdef/website/docs/language/modules/develop/refactoring.html.md
Enables this type of HCL:
# Previous
# resource "aws_instance" "a" {
#   count = 2
#
#   # (resource-type-specific configuration)
# }

# New
resource "aws_instance" "b" {
  count = 2

  # (resource-type-specific configuration)
}

moved {
  from = aws_instance.a
  to   = aws_instance.b
}
Hi all, My name is Korinne, and I’m a Product Manager for Terraform We’re currently working on a project that will allow users to more easily refactor their Terraform modules and configurations, set to be generally available in Terraform v1.1. From a high-level, the goal is to use moved statements to do things like: Renaming a resource Enabling count or for_each for a resource Renaming a module call Enabling count or for_each for a module call Splitting one module into multiple The a…
when is 1.1 going to be released?
I feel like they’re close, but I have no inside info. Before the end of the year would be my guess
v1.1.0-beta1 1.1.0 (Unreleased) UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed…
2021-11-04
Hi fellow terraformers. I am currently working on moving to terraform v1.0.10 from 0.14 and have encountered an issue within terraform-aws-s3-log-storage.
Error: Invalid count argument
on .terraform/modules/this.s3.s3_bucket/main.tf line 163, in resource "aws_s3_bucket_policy" "default":
163: count = module.this.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || var.policy != "") ? 1 : 0
The "count" value depends on resource attributes that cannot be determined
until apply...
This is being referenced by a fairly default configuration of
module "s3" {
source = "cloudposse/cloudtrail-s3-bucket/aws"
version = "0.23.1"
name = "cloudtrail-${random_id.this.hex}"
force_destroy = true
}
Which is using the latest version. From looking at the variables, they should all be known default values.
Is there anything I could be missing? (before I open up a github issue). Thanks in advance!
latest aws provider now supports bottlerocket for EKS
resource/aws_eks_node_group: Support for BOTTLEROCKET_ARM_64 and BOTTLEROCKET_x86_64 ami_type argument values
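a minimal sketch of a node group using one of the new values (the cluster, role, and subnet references are hypothetical):
resource "aws_eks_node_group" "bottlerocket" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "bottlerocket"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.private_subnet_ids
  ami_type        = "BOTTLEROCKET_x86_64" # or BOTTLEROCKET_ARM_64

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }
}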
I wonder if anyone has encountered any solutions that make migrating resources between different terraform workspaces less tedious?
(in Workspace A, i’m doing terraform import blahblah abc123, and in Workspace B, I’m doing terraform state rm blahblah abc123)
not yet, but it has been a discussion topic in the new “config-driven refactoring” feature coming out in tf 1.1… https://discuss.hashicorp.com/t/request-for-feedback-config-driven-refactoring/30730
Yeah no good way. I don’t go the state rm + import route and instead do the state mv -state-out approach, because import is shit and will bite you. I can’t say it makes things easier though.
I was asking for better functionality around this in that config-driven refactoring thread because this is some of the ugliest that terraform gets.
Ugly makefile hack if you’re interested:
## tf/mv-state -- Moves a given resource from one root module to another via `terraform state mv`.
## Example:
## DEST_DIR=monitor \
## DEST_WORKSPACE=dev \
## SRC_DIR=k8s \
## SRC_WORKSPACE=dev \
## RESOURCE=module.datadog_monitors \
## make tf/mv-state;
tf/mv-state:
    cd ./components/terraform/$(DEST_DIR); \
    [[ "$(DEST_WORKSPACE)" != "default" ]] && terraform workspace select $(DEST_WORKSPACE); \
    terraform state pull > $(DEST_WORKSPACE)-`date +"%Y-%m-%d"`.tfstate; \
    cd ../$(SRC_DIR); \
    [[ "$(SRC_WORKSPACE)" != "default" ]] && terraform workspace select $(SRC_WORKSPACE); \
    terraform state mv -state-out=../$(DEST_DIR)/$(DEST_WORKSPACE)-`date +"%Y-%m-%d"`.tfstate $(RESOURCE) $(RESOURCE); \
    cd ../$(DEST_DIR); \
    terraform state push $(DEST_WORKSPACE)-`date +"%Y-%m-%d"`.tfstate;
I didn’t know about state pull and state push. thank you matt, this will make this a lot simpler
2021-11-05
Terraform AWS Provider version 3.64.1
• New Data Source: aws_cloudfront_response_headers_policy (#21620)
• New Data Source: aws_iam_user_ssh_key (#21335)
• New Resource: aws_backup_vault_lock_configuration (#21315)
• New Resource: aws_cloudfront_response_headers_policy (#21620)
• New Resource: aws_kms_replica_external_key (#20533)
• New Resource: aws_kms_replica_key (#20533)
• New Resource: aws_prometheus_alert_manager_definition (#21431)
• New Resource (#21470)
• resource/aws_eks_node_group: Support for BOTTLEROCKET_ARM_64 and BOTTLEROCKET_x86_64 ami_type argument values (#21616) @Erik Osterman (Cloud Posse)
Hi, I’m getting a weird error when trying to create a mysql aurora with module 0.47.2
module "rds_cluster_aurora_mysql" {
source = "cloudposse/rds-cluster/aws"
version = "0.47.2"
engine = "aurora"
cluster_family = "aurora-mysql5.7"
cluster_size = 2
namespace = "eg"
stage = "dev"
name = "db"
admin_user = "admin1"
admin_password = "Test123456789"
db_name = "dbname"
instance_type = "db.t2.small"
vpc_id = "vpc-xxxxxxx"
security_groups = ["sg-xxxxxxxx"]
subnets = ["subnet-xxxxxxxx", "subnet-xxxxxxxx"]
zone_id = "Zxxxxxxxx"
}
Error:
module.rds_cluster_aurora_mysql.aws_rds_cluster.primary[0]: Creating...
Error: error creating RDS cluster: InvalidParameterCombination: The Parameter Group test-stage-rds-20211105125821182400000001 with DBParameterGroupFamily aurora-mysql5.7 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora5.6
status code: 400, request id: 44eac3b8-f377-4d45-a2f7-7a3e95c22297
on .terraform/modules/rds_cluster_aurora_mysql/main.tf line 50, in resource "aws_rds_cluster" "primary":
50: resource "aws_rds_cluster" "primary" {
https://github.com/cloudposse/terraform-aws-rds-cluster/tree/0.47.2
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - GitHub - cloudposse/terraform-aws-rds-cluster at 0.47.2
my guess, having never used this module, is that you need to specify the engine as aurora-mysql to use cluster_family = "aurora-mysql5.7".
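i.e. (the engine and cluster family have to agree; plain "aurora" implies MySQL 5.6):
engine         = "aurora-mysql"
cluster_family = "aurora-mysql5.7"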
Can you please try that and reply with the status?
@managedkaos Worked perfectly! In the GitHub documentation the engine is shown as just "aurora".
I will request the correction.
Thanks a lot for the help!
no problem. glad it worked out
Did anybody look at Hashicorp’s S1/IPO documentation? Quite interesting to read:
• First of all, it’s amazing they created such a big business growing this rapidly, kudos to the entire team there. They’re the first company I know of that replicated Red Hat’s model successfully (everyone else has failed, including Docker).
• Vast majority of revenue is from support at very high margin - which means there’s room to compete with Hashicorp on this (offer support for a far lower price).
• Interesting to see their cloud-hosted services were $4m last year, growing 100% year-over-year, and costing a lot more to run than they’re bringing in. That tells me TFC is probably mostly free users and that the business there is still small (a couple of million a year).
• TFE probably a bigger business (under the License line), which is interesting vis-a-vis TFC. Would love to hear other people’s thoughts on this.
Was it 4m net profit for cloud-hosted services? That’s much smaller than I would expect… but maybe I’m conflating their licensing with their HCP offering. I know that they charge millions outright for Vault licenses and plenty of big enterprises are paying them big bucks for that.
It’s actually 4m in revenue. Net profit was negative (which means they’re spending more on cloud costs than they’re making).
Yeah, the license business is much better than the cloud business.
I wonder how cloud infrastructure providers can make money if they are also running on top of an IaaS?
Software. People pay for the software a lot more than they’d pay for the infra.
OIC, yes, of course…. and I suppose the branding, etc…
Hello team! I’m relatively new to terraform (and your slack, so apologies if posting in the wrong channel), but I think I found a (somewhat critical) bug in the terraform-aws-dynamic-subnets module so wanted to share here. It appears that private_network_acl_id and public_network_acl_id don’t behave consistently with the spec. For example, the description of public_network_acl_id indicates "Network ACL ID that will be added to public subnets". However, I don’t see either getting added anywhere. Instead, it seems to just drop both ACLs entirely and prevent creation of ACLs in the module. Am I mistaken? btw the changes were introduced in PR 15. Also, manually attaching the subnet IDs afterwards using the private_subnet_ids and public_subnet_ids outputs seems to enable/disable the custom ACLs on each update, but that could be my configuration; still investigating.
What Added the ability to create Network ACLs Removed null_resource for cidrsubnet Removed aws_internet_gateway datasource Why We need this in order to consolidate network security rules to secu…
FYI a workaround/hack to prevent this behavior is to provide a random string to private_network_acl_id and public_network_acl_id, which seems to confirm the veracity of the bug.
I see two options:
- Fix the current behavior so it’s consistent with the description
- Modify the private_network_acl_id and public_network_acl_id variables to use a boolean that denotes enablement (i.e. rather than an id)
Regardless of the solution, it might be a good idea to expose private_network_acl_id and public_network_acl_id as outputs, which would allow people to use the ACLs instantiated inside the module in the aws_network_acl_rule resource.
2021-11-07
Hey guys, I’m getting this error when creating a VPC module. A week ago everything worked without any issues and I didn’t change the configuration.
Error:
Error creating Redshift Subnet Group: ClusterSubnetGroupAlreadyExists: The Cluster subnet group 'my-vpc' already exists.
│ status code: 400, request id: ***********
│
│ with module.network.module.vpc.aws_redshift_subnet_group.redshift[0],
│ on .terraform/modules/network.vpc/main.tf line 530, in resource "aws_redshift_subnet_group" "redshift":
│ 530: resource "aws_redshift_subnet_group" "redshift" {
Configuration:
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.63.0"
}
}
}
provider "aws" {
region = "eu-central-1"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["eu-central-1a", "eu-central-1b", "eu-central-1c"]
public_subnets = ["10.0.0.0/24", "10.0.1.0/24", "10.0.2.0/24"]
private_subnets = ["10.0.3.0/24", "10.0.4.0/24", "10.0.5.0/24"]
redshift_subnets = ["10.0.6.0/24", "10.0.7.0/24", "10.0.8.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
}
resource:
<https://github.com/terraform-aws-modules/terraform-aws-vpc>
the error says there is another subnet group with the same name my-vpc. changing the name would remove the conflict
if you use the cloudposse vpc module you can take advantage of its null label which allows you to use namespace, environment, stage, name, and attributes to ensure all the resource identifiers in the module will be unique
I’ve changed the name and it worked! the weird thing is that there is no active resource with this name; this is an empty new account. Thanks for your help and the tip! I’ll explore it :)
Join us tomorrow for The Sydney HashiCorp User Group Monthly Meetup where we will be going over Terraform: https://www.meetup.com/sydney-hashicorp-user-group/events/281535723/
Tue, Nov 9, 2021, 12:00 PM: Hug Sydney is a meetup for all things HashicorpIn this month’s meetup, we will be starting off with an introduction to Terraform.If you ever want to speak at this meetup pl
is this the new hashicorp user group in syd? is the old one gone now? I know old organisers have moved around
hey @Chris Fowles sorry i didn’t see this message. yes it’s a new one, wasn’t aware there was an old one…
2021-11-08
2021-11-09
Is there any way to force a data source to be looked up only during an apply? I have a case where I want to look up a Vault secret via a data source, and I need to ensure it’s picking up the latest secret at apply time, while using a planfile made much earlier
put something in the data source which must be evaluated at apply-time
like timestamp() or a reference to a resource that will always be updated
data "external" "always_refreshed" {
  program = ["/bin/echo", "{}"]
}

data "vault_generic_secret" "secret" {
  path = "/my/path"

  lifecycle {
    depends_on = [data.external.always_refreshed]
  }
}
beautiful, thank you very much! I’ll try this out
please share a working solution in the other places you asked
Will do! I’m initially getting Data resources do not have lifecycle settings, so a lifecycle block is not allowed. But I think with timestamp() I can get something working here, and will post my solution once I have it finalized
Well, depends_on should not be nested in a lifecycle block, so there’s that….
But also depends_on only orders create/destroy dependencies, iirc. On updates, no more guarantee
make the data source echo some static string, use that in the vault data source
This seems to be working: https://github.com/transcend-io/terraform-aws-fargate-container/pull/28/files
It’s pretty ugly though, I’m going to sleep on it so that I don’t have to do the trimprefix + concat to simulate a noop
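The gist of the trick, as a minimal sketch (assuming the vault_generic_secret data source; the external program just has to print a JSON object of strings):
data "external" "always_refreshed" {
  # prints {"noop": ""} on every run, so anything referencing it is deferred to apply time
  program = ["sh", "-c", "echo '{\"noop\": \"\"}'"]
}

data "vault_generic_secret" "secret" {
  # appending the always-empty string forces this read to happen at apply time
  path = "secret/my/path${data.external.always_refreshed.result.noop}"
}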
resource "foo" "bar" { foobar = "${file("foobar")}" } resource "bar" "foo" { depends_on = ["foo.bar"] } bar.foo is not modified…
Thanks for that context!
In this particular case though, even though I see in the plan output that the vault value will be read during the apply (see pic), my apply fails when specifying a plan file from more than 15 minutes ago (see second pic)
Is it possible that terraform would be trying to use the provider from the plan during the apply?
Running just a terraform apply without the planfile succeeds, so it’s definitely something to do with the planfile
looks like something to do with the vault provider
Hi, I hope this isn’t a FAQ. I am trying to use tenant in my labels, but many of the CP modules still have a context.tf using cloudposse/label/null v0.24.1, for example: https://github.com/cloudposse/terraform-aws-named-subnets/blob/master/context.tf – is that intentional? Is there a workaround I can use to avoid this? I have a module that calls a number of CP modules, and as I hit the ones that haven’t been updated, things bomb out trying to use the tenant var.
Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.
These have to be updated to 0.25.0 before tenants can be used
We’ve been upgrading the modules when we need to so we haven’t gotten to all of them yet
what Use context 0.25.0 make github/init why Allow tenant usage references N/A
New version released that supports tenants
https://github.com/cloudposse/terraform-aws-named-subnets/releases/tag/0.11.3
:rocket: Enhancements Chore: Use context.tf from null-label:0.25.0 @nitrocode (#41) what Use context 0.25.0 make github/init why Allow tenant usage references N/A
@RB Thanks for updating that module, the “current” list of the modules that are causing me issues are:
Downloading cloudposse/label/null 0.24.1 for alb.access_logs.s3_bucket.this...
Downloading cloudposse/label/null 0.24.1 for ecs_alb_service_task.exec_label...
Downloading cloudposse/label/null 0.24.1 for ecs_alb_service_task.security_group.this...
Downloading cloudposse/label/null 0.24.1 for ecs_alb_service_task.service_label...
Downloading cloudposse/label/null 0.24.1 for ecs_alb_service_task.task_label...
Downloading cloudposse/label/null 0.24.1 for ecs_alb_service_task.this...
Downloading cloudposse/label/null 0.24.1 for private_subnets.private_label...
Downloading cloudposse/label/null 0.24.1 for private_subnets.public_label...
Downloading cloudposse/label/null 0.24.1 for public_subnets.private_label...
Downloading cloudposse/label/null 0.24.1 for public_subnets.public_label...
Downloading cloudposse/label/null 0.24.1 for route53_url.this...
Downloading cloudposse/label/null 0.24.1 for vpc_endpoints.this...
What’s the best way to get these updated? I could do a PR for each one if that is SOP. Is there a doc on PR requirements I need to follow?
there would need to be a pr for each module to update its context
i know the ecs service one is blocked due to upgrading its security group module to 4.0, which will take some time. there is already someone working on unblocking that
OK, sounds like I should hold off on using tenant for a bit then while that work is going on. If it will help if I generate PRs for the others, let me know. I can copy what you did on the named-subnets one.
I’ll use an attribute for now, instead of tenant.
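e.g. a rough stand-in with null-label 0.24.1 (values are hypothetical):
module "label" {
  source  = "cloudposse/label/null"
  version = "0.24.1"

  namespace  = "eg"
  stage      = "dev"
  name       = "app"
  attributes = ["acme"] # would-be tenant until the modules move to 0.25.0
}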
i simply ran make init && make github/init to update the context
if you put in the prs, I’ll review them
I’ll dig in after lunch.
Ooff – @RB I’m not sure the best way to handle this – looks like a cascading problem. So, for one example:
• terraform-aws-alb version 0.35.3 with submodule access_logs calls terraform-aws-lb-s3-bucket version 0.14.1.
• terraform-aws-lb-s3-bucket version 0.14.1 with submodule s3_bucket calls terraform-aws-s3-log-storage version 0.24.0.
• terraform-aws-s3-log-storage version 0.24.0 calls the null-label module version 0.24.1 in context.tf.
Someone has already updated terraform-aws-s3-log-storage to use null-label v0.25.0 – the version of terraform-aws-s3-log-storage with that change is 0.24.1.
So, if I update terraform-aws-lb-s3-bucket to use terraform-aws-s3-log-storage v0.24.1, then the new version of terraform-aws-lb-s3-bucket will become 0.14.2. Then I’ll have to update terraform-aws-alb to point to that new version. And on-and-on back up the chain.
It’s doable but it seems error-prone and tedious. Is there a better way?
I think my enthusiasm for using tenant in our labels has waned.
i think it shouldn’t take more than a couple hours
2021-11-10
Hey Friends. I stood up an EKS stack with this stack: https://github.com/cloudposse/terraform-aws-eks-cluster , I modified it to set up an ALB with https://github.com/cloudposse/terraform-aws-alb-ingress for ingress… I am not sure where to go from here to use this ingress within my stack though. Do yall have any docs you can point me at or something to help me get unstuck? I’d appreciate it greatly.
For EKS you need https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html instead of creating LB by yourself
The AWS Load Balancer Controller manages AWS Elastic Load Balancers for a Kubernetes cluster. The controller provisions the following resources.
Ha. Okay. This makes way more sense.
We have a component for this here https://github.com/cloudposse/terraform-aws-components/tree/master/modules/alb-controller
Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/modules/alb-controller at master · cloudposse/terraform-aws-components
:wave: hi. I’m slightly dense sometimes and I’m having trouble grokking the note about aws-auth here: https://github.com/cloudposse/terraform-aws-eks-cluster … I’m currently seeing a situation where my cluster’s aws-auth configMap lacks the values set in the terraform for map_additional_iam_roles. I suspect it has to do with what’s in the notes in the readme. can i get an ELI5?
What’s the general consensus on tf config scanning tools. tfsec, checkov, etc… Any of them stand out one way or the other?
Why would one use env0 over TFE?
There are a few main differences between TFE and env0, such as support for Terragrunt, custom flows, drift detection, TTL and scheduling, and much more.
You can check out our website for more details - https://www.env0.com/why-env0
• Disclaimer - I am the CTO and co-founder of env0.
I’ve used checkov and tfsec together; there is some overlap, but pretty useful. it gets annoying if you have to put ignores in for both tools
@Michael Bottoms take a look at https://github.com/iacsecurity/tool-compare
Disclaimer: I’m the CEO of Indeni (company behind Cloudrail)
Contribute to iacsecurity/tool-compare development by creating an account on GitHub.
@Yoni Leitersdorf (Indeni Cloudrail) huge thanks for the link! Found this pretty useful)
Happy to. Let me know if I can help in anything else
v1.0.11 1.0.11 (November 10, 2021) ENHANCEMENTS: backend/oss: Added support for sts_endpoint (#29841) BUG FIXES: config: Fixed a bug in which ignore_changes = all would not work in override files (#29849)
This PR aims to add new attribute sts_endpoint to support setting custom Security Token Service endpoint.
Fixes #21474, in which ignore_changes = all does not survive a module merge.
2021-11-11
Hello friends! quick question, is there any terraform module I can use for creating global aws documentdb clusters?
We’re planning our DR and we wanted to automate infrastructure as much as possible
Hi Daniel, have you seen this module?
https://github.com/cloudposse/terraform-aws-documentdb-cluster
Terraform module to provision a DocumentDB cluster on AWS - GitHub - cloudposse/terraform-aws-documentdb-cluster: Terraform module to provision a DocumentDB cluster on AWS
I have, but I see no mention of global cluster configurations https://aws.amazon.com/es/documentdb/global-clusters/
Amazon DocumentDB Global Clusters provides cross region replication for disaster recovery from region-wide outages and enables low-latency global reads.
is it possible to enable a global cluster in the terraform resource itself? i don’t see it here
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/docdb_cluster
There is a PR in the works https://github.com/hashicorp/terraform-provider-aws/pull/20978
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
ah ok so it doesn’t seem possible yet :(
Too bad. I guess we’ll just have to wait
upvotes help!
Looks like the resource landed in 3.65.0… https://github.com/hashicorp/terraform-provider-aws/pull/20978
Interesting project for anyone consuming a lot of modules: https://github.com/keilerkonzept/terraform-module-versions
CLI tool that checks Terraform code for module updates. Single binary, no dependencies. linux, osx, windows. #golang #cli #terraform - GitHub - keilerkonzept/terraform-module-versions: CLI tool tha…
we got a ticket to add it to packages https://github.com/cloudposse/packages/issues/1674 for any enterprising contributors
https://github.com/keilerkonzept/terraform-module-versions Note: This may pull in pre-release versions so each version bump should be checked to make sure the latest versions are non-pre-release. ✗…
very cool, thanks for sharing, this is a must have for terraform helpers
I’m thinking of adding this to PR checks or cron job
@Mohammed Yahya cool! please do let me know if you find any mismatches with your workflow. my current projects are all not using terraform so it’s not getting any bullet-proofing from my side as a result
hey @Sergey! I’ll bring this up on #office-hours today
hey @Erik Osterman (Cloud Posse), awesome, thank you! Sorry I couldn’t make it, I’m pretty swamped rn unfortunately..
2021-11-12
HashiCorp Terraform Cloud variable sets let you simplify the management of reusable variables across an entire organization. This feature is now available in public beta.
Terraform Cloud workspace variables let you customize configurations, modify Terraform’s behavior, and store information like provider credentials.
similar to spacelift contexts. interesting
very cool ^^
2021-11-13
Hi, I’m using https://github.com/cloudposse/terraform-aws-ecs-web-app/tree/0.65.2 and I’m facing a strange problem. I’m doing it the way that is presented in “without_authentication”
alb_security_group = module.alb.security_group_id
alb_target_group_alarms_enabled = true
alb_target_group_alarms_3xx_threshold = 25
alb_target_group_alarms_4xx_threshold = 25
alb_target_group_alarms_5xx_threshold = 25
alb_target_group_alarms_response_time_threshold = 0.5
alb_target_group_alarms_period = 300
alb_target_group_alarms_evaluation_periods = 1
alb_arn_suffix = module.alb.alb_arn_suffix
alb_ingress_healthcheck_path = "/"
# Without authentication, both HTTP and HTTPS endpoints are supported
alb_ingress_unauthenticated_listener_arns = module.alb.listener_arns
alb_ingress_unauthenticated_listener_arns_count = 2
# All paths are unauthenticated
alb_ingress_unauthenticated_paths = ["/*"]
alb_ingress_listener_unauthenticated_priority = 100
error I got
Error: Invalid count argument
│
│ on .terraform/modules/gateway.alb_ingress/main.tf line 50, in resource "aws_lb_listener_rule" "unauthenticated_paths":
│ 50: count = module.this.enabled && length(var.unauthenticated_paths) > 0 && length(var.unauthenticated_hosts) == 0 ? length(var.unauthenticated_listener_arns) : 0
│
│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform
│ cannot predict how many instances will be created. To work around this, use the -target argument to first
│ apply only the resources that the count depends on.
Do you by chance know what I’m doing wrong here?
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - GitHub - cloudposse/terraform-aws-ecs-web-app at 0.65.2
It’s most likely because of this alb_ingress_unauthenticated_listener_arns
if you do a targeted apply of -target module.alb first, it should work
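i.e. something like:
terraform apply -target=module.alb   # create the ALB and its listeners first
terraform apply                      # then apply the rest normally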
that helped a bit since after that I got this error:
│ Error: error creating application Load Balancer: InvalidSubnet: VPC vpc-024b08d14c04aa553 has no internet gateway
│ status code: 400, request id: a5e9dd93-d1a3-49bf-a8d3-0748dcb1afe7
│
│ with module.alb.aws_lb.default[0],
│ on .terraform/modules/alb/main.tf line 64, in resource "aws_lb" "default":
│ 64: resource "aws_lb" "default" {
with this vpc/subnets code
module "vpc" {
source = "cloudposse/vpc/aws"
version = "0.28.1"
name = "microservices"
cidr_block = "20.0.0.0/16"
assign_generated_ipv6_cidr_block = true
context = module.this.context
}
## Subnets
module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
version = "0.39.7"
name = "microservices"
availability_zones = var.availability_zones
vpc_id = module.vpc.vpc_id
igw_id = module.vpc.igw_id
cidr_block = module.vpc.vpc_cidr_block
nat_gateway_enabled = true
nat_instance_enabled = false
aws_route_create_timeout = "5m"
aws_route_delete_timeout = "10m"
context = module.this.context
}
by my understanding, nat_gateway_enabled = true in that case means an internet gateway, right?
try setting enable_internet_gateway = true in the vpc module
same thing
added both internet_gateway_enabled and enable_internet_gateway but I see that the internet gateway was not created
same goes for the nat gateway
is the alb module using the same vpc from the vpc module ?
yes
so local.internet_gateway_enabled should resolve to true, and then that true should be passed in here to create the internet gateway
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - terraform-aws-vpc/variables.tf at a3c4b1598942f3ae7a259d3f2823761a97befbd4 · cloudposse/…
Terraform Module that defines a VPC with public/private subnets across multiple AZs with Internet Gateways - terraform-aws-vpc/main.tf at a3c4b1598942f3ae7a259d3f2823761a97befbd4 · cloudposse/terra…
can you try a completely fresh terraform module and only provision the vpc and show the plan ?
yeah, gimme few min since i need to manually clear all resources, since terraform destroy doesn’t work with -target
it might help to create a new directory and then create a new main.tf file within that directory
that main.tf should only consume the vpc module
then check the plan and see if it makes sense (it should show the igw is going to be created)
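e.g. a minimal repro main.tf (assuming internet_gateway_enabled is the toggle at this version of the module, per the thread above):
module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "0.28.1"

  namespace  = "eg"
  stage      = "dev"
  name       = "microservices"
  cidr_block = "20.0.0.0/16"

  # per the thread, this should already default to true
  internet_gateway_enabled = true
}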
so the case here, might be that I’m using several files ?
i have no idea what it could be, but if you follow the steps above, you take out a lot of variables
if you can verify the plan above shows the igw then we know the issue is not related to the vpc module.
yea, there’s igw
but with a plan for the module.alb target there’s no igw
same goes for subnets
ok so in the new module, now copy over the alb and have it use the vpc module
in the new module, also provision the subnets. do it one at a time.
so one at a time passed
perfect!
then the issue isn’t with the upstream modules, the issue must be with your other root module directory
yeah maybe, because I’ve got 2 modules there for different microservices?
and on the other hand the thing I also wanted to ask is how to enable ALB just for a single service rather than for all of them
since communication between them is done through queue
one of the terraform principles is to keep modules (directories) as small as possible
are you putting more than one service in the same terraform directory?
there were only two modules; the gateway service is where the alb module existed
and on the other hand the thing I also wanted to ask is how to enable ALB just for a single service rather than for all of them
it depends on how you have it setup. are you using 1 ALB and multiple services via different listener arns ? or 1 ALB per service ?
1 ALB for the gateway; the other microservices don’t have a listener arn because they use rabbitmq for communication between each other
for enabling an ALB for a service, i think you want to create alb listeners for only your api services and no listener arns for your non-api services?
yeah
what are you using to create the listener arns on the alb ?
raw resource or module ? if module, which one ?
hmm i don’t have a listener
the only thing I’ve added for a gateway service was
# Without authentication, both HTTP and HTTPS endpoints are supported
alb_ingress_unauthenticated_listener_arns = module.alb.listener_arns
alb_ingress_unauthenticated_listener_arns_count = 1
# All paths are unauthenticated
alb_ingress_unauthenticated_paths = ["/*"]
alb_ingress_listener_unauthenticated_priority = 100
on a module which uses cloudposse/ecs-web-app/aws
oh ok so ecs-web-app cp module, uses ecs-service-task cp module
if you want to disable the listener arn, you’d have to use the ecs-service-task cp module directly, as the ecs-web-app (with web in the name) implies that the service is an http service
@Marcin Mrotek see this one https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
Terraform module which implements an ECS service which exposes a web service via ALB. - GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service whic…
that will make the service task and will not set the ingress on the alb (like you want)
ok so that one should be used for services where I don’t want an ALB, right?
exactly
but auto scaling for that one would have to be handled on my own, right?
the ecs service task handles auto scaling i believe
oh wait no i dont see autoscaling in the ecs service task
yes i suppose youll have to handle yourself
Terraform module to autoscale ECS Service based on CloudWatch metrics - GitHub - cloudposse/terraform-aws-ecs-cloudwatch-autoscaling: Terraform module to autoscale ECS Service based on CloudWatch m…
you can see how this module and other modules are used directly in the ecs web app module
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - terraform-aws-ecs-web-app/main.tf at ce7c53a7eb0c75592b67098a043ebe7…
2021-11-14
Hi guys. Is there a possibility to use terraform import to module https://github.com/cloudposse/terraform-aws-ec2-instance?
Terraform module for provisioning a general purpose EC2 host - GitHub - cloudposse/terraform-aws-ec2-instance: Terraform module for provisioning a general purpose EC2 host
Sure. What are you trying to import ?
If you’re trying to import the ec2 instance and the name of the module is ec2_instance, it should be as easy as this:
terraform import module.ec2_instance.aws_instance.default i-12345678
The full list of resources and their resource names are documented here
thanks
2021-11-15
Hi everyone, I was wondering if it was possible to stop the regex from stripping “/” from the name of cloudwatch log groups, we use these to create virtual directory paths https://github.com/cloudposse/terraform-aws-cloudwatch-logs
Terraform Module to Provide a CloudWatch Logs Endpoint - GitHub - cloudposse/terraform-aws-cloudwatch-logs: Terraform Module to Provide a CloudWatch Logs Endpoint
this is due to the null label argument regex_replace_chars
try setting that to "/[^a-zA-Z0-9-\/]/" (or similar) to avoid the replacement
Thank you so much, this should work and I should have spotted this earlier, will report back
that’s ok. i think this is a bug. we should create a separate null label reference with this argument just for the log group name.
So i had a look at this, but the above didn’t work; it wasn’t escaping with one slash so I had to add two, which gave me a plan:
"/[^a-zA-Z0-9-\\/]/"
however, it still removes the /
the one \ gave me tf errors
something really wacky going on with this module
# module.cloudwatchlogs.aws_cloudwatch_log_group.default[0] will be created
+ resource "aws_cloudwatch_log_group" "default" {
+ arn = (known after apply)
+ id = (known after apply)
+ name = "dev"
+ retention_in_days = 14
+ tags = {
+ "Name" = "/aws/kinesisfirehose/aws-waf-logs-dev-app"
+ "Stage" = "dev"
}
+ tags_all = {
+ "Name" = "/aws/kinesisfirehose/aws-waf-logs-dev-app"
+ "Stage" = "dev"
}
}
# module.cloudwatchlogs.aws_cloudwatch_log_stream.default[0] will be created
+ resource "aws_cloudwatch_log_stream" "default" {
+ arn = (known after apply)
+ id = (known after apply)
+ log_group_name = "dev"
+ name = "/aws/kinesisfirehose/aws-waf-logs-dev-app"
}
I would’ve expected log_group_name to be /aws/kinesisfirehose/aws-waf-logs-dev-app
the Name tag looks correct. the name argument should match that of the Name tag. looks like something isn’t getting cleared. try consuming this cloudwatch logs module in a separate terraform directory and just run a plan.
# module.cloudwatchlogs.aws_cloudwatch_log_stream.default[0] will be created
+ resource "aws_cloudwatch_log_stream" "default" {
+ arn = (known after apply)
+ id = (known after apply)
+ log_group_name = "awskinesisfirehoseaws-waf-logs-dev-app"
+ name = "/aws/kinesisfirehose/aws-waf-logs-dev-app"
}
this looks better. the reason it’s being changed to ‘dev’ is because ‘stage’ is being populated, which we use elsewhere, but I still can’t get past this issue where the ‘/’ are being stripped by the regex
i could make an MR to the repository so that the log group name/stream name is not validated by the id field?
just let me know thoughts on moving this forward
I’ll have to test this out locally and get back to you. there must be a way to override this. if not, maybe we need a new escape hatch in the module itself
ok let me know if there’s anything i can do to facilitate. i’ve tried setting regex_replace_chars = “”, regex_replace_chars = null, regex_replace_chars = “/[^a-zA-Z0-9-\/]/”, regex_replace_chars = “/[^a-zA-Z0-9-\/]/”
so it looks like it needs a custom label
i put in this pr and it shows the correct log group https://github.com/cloudposse/terraform-aws-cloudwatch-logs/pull/26
what Custom label for cloudwatch log group name why Allow slash in log group names references https://sweetops.slack.com/archives/CB6GHNLG0/p1636985722167300 test provider "aws" { …
we’ll have to wait until it’s approved. for now, you can use the source in the pr description until its merged
@David new version released https://github.com/cloudposse/terraform-aws-cloudwatch-logs/releases/tag/0.6.1
Docs: Fix usage snippet (missing source attribute) @korenyoni (#25) what Fix usage snippet (missing source attribute) Fix module block name in usage snippet (does not match module name) why The …
Hello there! I am looking at forking https://github.com/cloudposse/terraform-aws-ec2-instance-group into terraform-linode-instance-group. I’d love to feed that back into your repo when it’s done. Is that desirable from your end, and if so, are there guidelines for making that as smooth as possible?
Terraform Module for provisioning multiple general purpose EC2 hosts for stateful applications. - GitHub - cloudposse/terraform-aws-ec2-instance-group: Terraform Module for provisioning multiple ge…
usually we allow single feature improvements through a pr. what improvements are you planning?
it sounds like you’re going to make a linode specific module repo?
Yeah the idea is to take this as-is and make it work for Linode, not AWS
we can take all the aws related prs as we are exclusively aws
Good to know. So anything not AWS is basically “feel free to fork but it can’t live with CloudPosse”
Thanks for the clarification!
if we ever branched out to additional clouds, we’d most likely create a separate repo and maintain that like you plan to do
that seems to be the common standard across the terraform ecosystem and that’s why most aws specific repos contain that in the name of the repo
Yep
This’ll be used to spin up first 100 and, when that goes well, 1k instances. AWS pricing for that is a little scary, hence Linode.
2021-11-16
Hi folks, I’m trying to use the terraform-aws-components to bootstrap an account in an already available organisation. I went through the account module as a first step but I’m pretty stuck on understanding how it works and how the yaml found in the README can help with specifying the organization_config etc.
Any hints or examples will be highly appreciated!
Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/modules/account at 0.150.0 · cloudposse/terraform-aws-components
@Grubhold check out the docs in https://docs.cloudposse.com/ — That will help you understand your path forward.
Specifically: https://docs.cloudposse.com/tutorials/atmos-getting-started/
what are you using for inputs? are you getting any error messages?
@Matt Gowie Thanks for pointing out, I will go through the documentation to try and understand how it works.
@RB the default.auto.tfvars currently has these inputs
I did get a successful plan even though the inputs are just generic currently, but as Matt suggested I’ll go through the documentation to understand. Because I’m not sure how I can use this to bootstrap an account in an existing organisation: what will I need to deploy an account to that organisation etc.
As far as I understand Atmos is working like a wrapper. And the components don’t necessarily need atmos to work unless you want to use the stacks structure. Is that correct?
Because in our organisation’s case it’s very hard to get such tools accepted
So I’m assuming that if we don’t use Atmos and stacks, we need to use the .tfvars file to provide the information needed to create an account under the existing organisation?
Therefore, can I not point the service_control_policies_config_paths to the yaml stacks?
you can use the default auto tfvars instead of atmos but you’re giving up all the yaml.
atmos will simply convert the yaml into terraform var inputs for you and then run terraform workspace selection, terraform init/plan/apply
the service control policies config path input for the account component is for yaml that contain service control policies which is very different from the stack yaml configs
@RB Thanks for your reply. It makes sense to me but I’m afraid that we won’t be able to use it on Azure Pipelines, which we use for CICD, because new tasks pretty much don’t get approved at all. I’m afraid Atmos would need that. I might be very mistaken about how Atmos works..
Also, I hope you don’t mind I keep getting this error while trying to provision the account
│ Error: Error creating organization: AlreadyInOrganizationException: The AWS account is already a member of an organization.
│
│ with aws_organizations_organization.this,
│ on main.tf line 91, in resource "aws_organizations_organization" "this":
│ 91: resource "aws_organizations_organization" "this" {
I have the root account credentials. What am I missing here? Can’t I just deploy the account on an already existing org with this component?
it looks like you already have an organization created and will have to import the org here
that’s unfortunate that a tool like atmos wouldn’t be approved because that’s the tool that makes our refarch powerful. however, the cool thing about atmos is that it’s just a wrapper for terraform. all it does is deliver terraform var inputs from yaml and so you can substitute the yaml by using default auto tfvars directly
Yes but I won’t give up on it just yet, I’ll see if we can use it somehow in the pipeline. Otherwise indeed we’d just use tfvars then.
Can you please define “import the org here”? I think that’s where I’m stuck
terraform import your organization resource into your current terraform state
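e.g., using the resource address from the error above (the org id is a placeholder):
terraform import aws_organizations_organization.this o-exampleorgid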
!! Ok that is very good progress. So I imported the organization into my state file, then ran apply and it added outputs as well. But then it says that the real infra matches the current configuration so nothing changes. Why is it not trying to provision the account I have specified?
@RB Sorry for the ping. I would really appreciate your help with this. I can’t seem to manage to skip organization creation and just create an account under this existing one. I’ve been working on https://github.com/cloudposse/terraform-aws-components/blob/master/modules/account/main.tf to try and skip the organization with no use.
2021-11-17
Anyone think this is a bug in terraform-tfe-cloud-infrastructure-automation? Renaming a workspace under a project creates a new workspace with the new name and deletes the workspace with the old name; the expected behavior is an in-place rename of the workspace. I tried renaming it after the workspace was created, and tf plan/apply deleted the workspace and created a new one, which is not the expected behavior per the API the tfe_provider uses, as per the following references:
go-tfe library used by this update call in tfe-provider
Unfortunately, this module is probably pretty stale
No customers are currently using / paying for maintenance on this module
All our effort is currently on https://github.com/cloudposse/terraform-spacelift-cloud-infrastructure-automation
Terraform module to provision Spacelift resources for cloud infrastructure automation - GitHub - cloudposse/terraform-spacelift-cloud-infrastructure-automation: Terraform module to provision Spacel…
@Erik Osterman (Cloud Posse) thanks for the response
Hi Everyone, has anybody tried creating an AWS aurora global db with 5 secondary clusters (or at least more than 2 secondary clusters) using the terraform module https://registry.terraform.io/modules/cloudposse/rds-cluster/aws/latest or any related module?
I just published a terraform provider for Pulumi Cloud, allowing terraform to directly read pulumi stack outputs, like:
terraform {
required_providers {
pulumi = {
version = "0.0.2"
source = "hashicorp.com/transcend-io/pulumi"
}
}
}
provider "pulumi" {}
data "pulumi_stack_outputs" "stack_outputs" {
organization = "transcend-io"
project = "some-pulumi-project"
stack = "dev"
}
output "version" {
value = data.pulumi_stack_outputs.stack_outputs.version
}
output "stack_outputs" {
value = data.pulumi_stack_outputs.stack_outputs.stack_outputs
}
This code has helped my company transition our large terraform codebase to a hybrid model using both terraform and pulumi, and the source is available here: https://github.com/transcend-io/terraform-provider-pulumi
I’ll be writing a blog post soon about our strategy in the migration as well
Contribute to transcend-io/terraform-provider-pulumi development by creating an account on GitHub.
Automated Terraform AWS Provider — Will be great once AWS starts to move a lot of their services to Cloud Control.
This new provider for HashiCorp Terraform — built around the AWS Cloud Control API — is designed to bring new services to Terraform faster.
@Erik Osterman (Cloud Posse) this would be a good one to chat about during #office-hours
wow!
oh, i read “control tower”
but yes, let’s discuss
does anyone have a clean way to generate outputs/variable files?
i dunno how you’d identify outputs to generate them. but here’s a tool that will generate a variables.tf from all var references… https://github.com/YakDriver/scratchrelaxtv
Terraform module development tool. Contribute to YakDriver/scratchrelaxtv development by creating an account on GitHub.
Thanks for that!
Mainly looking for a way to generate all the boilerplate stuff for a module. Say given a main.tf with xyz resource, a tool that will generate a bunch of generic outputs for xyz resource
it’s not for everyone, or every module, but i often just output the entire resource. that might give you a way to generate outputs automatically, since you wouldn’t need to know the attributes… but you’d still have to name the output at least…
resource "aws_iam_role" "this" {}
output "aws_iam_role_this" {
value = aws_iam_role.this
}
That’s a great idea, thanks for that @loren!
I’ve never used either actually
v1.1.0-beta2 1.1.0 (Unreleased) UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed…
2021-11-18
Hi - I have an old PR here which I’ve just got around to fixing. https://github.com/cloudposse/terraform-aws-code-deploy/pull/9 - there is an associated thread which explains the ARN issue. The tagging I hope is self explanatory. https://sweetops.slack.com/archives/CB6GHNLG0/p1633728090408800?thread_ts=1633617559.399300&cid=CB6GHNLG0 cc @RB who last cast his/her eye on the code
@RB - I’ve created a new PR here: https://github.com/cloudposse/terraform-aws-code-deploy/pull/9/files - not sure why my old commits are included in the PR though. I did a merge from upstream so I don’t get it.
if someone could re-run the terratest, I’m sure it will pass this time
Thanks to @RB ( I think ) for commenting on the PR. I’ve made the changes requested and I think this is ready to merge now
I’ve updated the PR again to include the enabled logic!
argh:
Because data.aws_partition.current has "count" set, its attributes must be accessed on specific instances.

For example, to correlate with indices of a referring resource, use:
  data.aws_partition.current[count.index]
not sure exactly what is going on here
@Stephen Tan i pushed an update
thanks
Hey comrades. I’ve been running the EKS stack provided by your TF template. It’s been pretty successful until now. My pipeline appears to be failing when it hits the terraform plan path. The error being:
module.eks_cluster.kubernetes_config_map.aws_auth[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: configmaps "aws-auth" is forbidden: User "system:anonymous" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth[0],
│ on .terraform/modules/eks_cluster/auth.tf line 132, in resource "kubernetes_config_map" "aws_auth":
│ 132: resource "kubernetes_config_map" "aws_auth" {
│
╵
I’ve confirmed that my auth is valid for at least the state resource; it’s pulling from the remote state from the right account. it should be using the same credentials used to deploy the cluster.
a lot has changed in this module since 0.42.0
have you read over these release notes ? https://github.com/cloudposse/terraform-aws-eks-cluster/releases/tag/0.42.0
one of those options may not be toggled in your arguments
This release resumes compatibility with v0.39.0 and adds new features: Releases and builds on PR #116 by providing the create_eks_service_role option that allows you to disable the automatic creat…
I’m currently running 0.43.2 and it’s based on the eks example project. I should be using that preferred method. For posterity, I added it and got the same issue.
I’ll check out kube_exec_auth_enabled… i don’t quite understand though.
try these settings https://github.com/cloudposse/terraform-aws-components/blob/677eb313a7b096aa41025930ba4795849ab677a0/modules/eks/main.tf#L47-L57
Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/main.tf at 677eb313a7b096aa41025930ba4795849ab677a0 · cloudposse/terraform-aws-co…
checking it out.
That did it. What causes this?
I have another cluster that was deployed the same way but doesn’t have these issues.
@Maya Aravot
I’m not sure of the right path forward to resolve this.
Aside from the HashiCorp LMS, what are the best resources for beginners to learn terraform? (e.g. udemy courses, et al)
Learn to provision infrastructure with HashiCorp Terraform
As of a few years ago, https://www.oreilly.com/library/view/terraform-up/9781492046899/ was wonderful. I’m not sure how updated it is for newer tf versions, but it explains the details of why things like statefiles, locking, etc. are parts of terraform
Terraform has become a key player in the DevOps world for defining, launching, and managing infrastructure as code (IaC) across a variety of cloud and virtualization platforms, including AWS, Google … - Selection from Terraform: Up & Running, 2nd Edition [Book]
A Cloud Guru (soon to be integrated with Pluralsight) has a slew of Terraform classes: https://acloudguru.com/search?s=terraform
And Pluralsight as well: https://www.pluralsight.com/courses/terraform-getting-started
I should also mention, Pluralsight has a free weekend starting today!!!!! https://www.pluralsight.com/offer/2021/q4-free-weekend
Learn in the course Terraform – Getting Started, all about the amazing tools for the public cloud in the software terraform. Start developing skills now!
We’re making all of our expert-led video courses, interactive courses and projects free for one weekend only—starting 11/19
Thanks!
2021-11-19
So, how are you all handling bootstrapping roles for CICD in your projects? A service account with admin permissions in multiple projects? I’m taking an approach of scoping the role per service per repo basically. I’m using a cf template to create a role, then assuming that role in my pipeline. Just wondering if that’s a valid approach or if anyone has anything else they do.
Am I right in thinking that the terraform-aws-transit-gateway module doesn’t add any routes for the tgw to the subnet_ids within vpc_attachments? Does the cloudposse tgw module handle this?
this looks as tho it does add subnet table route ids to the transit gateway
oh awesome! I wasn’t expecting it to be so well commented. Will hunt a bit deeper in the repo next time
2021-11-20
2021-11-22
With Terraform 1.0.10 I always see TRACE level logs in /tmp for core & providers regardless of the TF_LOG/TF_LOG_CORE/TF_LOG_PROVIDER setting. With tf 1.1b2 I don’t see them. Anyone else seeing this behaviour?
i personally havent tested the beta yet. It might be worth creating an issue with hashicorp/terraform
I see this behaviour with 1.0.10
I wouldn’t expect these logs, I mean
thats interesting. i never noticed that before. do you see this as a security issue ? or just annoying that you have to remove these logs ?
i checked my /tmp directory and i dont see any terraform logs. im using terraform 1.0.9
They appear while running and are deleted afterwards
Security issue, but also it seems to incur performance cost
Hi everyone! Where can I ask a question about one of your modules?
here seems fine
Came across the terraform-aws-transit-gateway module, and looked for inter-region peering, but couldn’t find it. Do you support it? :)
hmm would that require peering 2 transit gateways ? like this https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ec2_transit_gateway_peering_attachment ?
yeap
we don’t support this resource but if you provision 2 transit gateways (with or without our module), you can use the above resource to peer them together
Great thx!!
plz let me know how it goes. I’m curious if this is the only resource needed or if you also have to update route tables or if they propagate automatically
I will.
Hey all :wave: I was wondering how my approach sounds. I have a repo with a few related services. I would like to run a basic CICD workflow with gh actions. I’m using OIDC to authenticate into our AWS account. I’m running into the typical chicken/egg problem with iac, I’d like to control the iam role used in the CICD workflow, but need to create it prior to triggering a workflow.
My idea was to create a module for the iam role, and bootstrap that resource from my local machine (same backend as cicd) by using the target resource ability: tf plan --target=module.iam_role and tf apply --target=module.iam_role. My thought is this will bootstrap that resource, so my cicd can take over from there. Does this sound like a sane approach? I was going to ask during office hours this week
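(a minimal sketch of what I mean, with illustrative names - the role module here is hypothetical:)
module "iam_role" {
  source = "./modules/cicd-role" # hypothetical local module

  # trust policy for GitHub's OIDC provider, scoped to this repo
  repo = "my-org/my-repo"
}

# first run, from a workstation with admin credentials:
#   terraform apply -target=module.iam_role
# after that, CI assumes the role and runs plan/apply as usual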
This is exactly what we’re doing so seems sane to me
Oh great, awesome to hear. I was spinning my wheels over thinking this
It’s just like the backend problem - you need to build your S3 bucket in order to use it for the state: https://github.com/cloudposse/terraform-aws-tfstate-backend
Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - GitHub - c…
Yep, that’s actually just like it.
Does anyone know of a reliable way to determine at runtime within terraform code if it’s the first time Terraform is being applied? Without using an annoying first_apply variable of course.
maybe terraform state list is empty?
sorry, should have threaded
All good. Could hack around that idea, but it would require the first_apply variable. And I’d need to populate that variable with a true / false according to a pre-plan script.
Which would be a bummer.
So not runtime, but yeah possible.
oh you want to do it in a normal terraform workflow…
Yeah — At runtime being the key bit.
i was figuring some kind of wrapper would be in place to handle the logic
still runtime, to me. just runtime of the wrapper
Good point. I wasn’t thinking a wrapper.
Probably doesn’t matter as TF doesn’t have a date comparison function. Which I’m unpleasantly surprised about.
invoke a lambda as a data source that manages the date comparison or even just returns the “correct” ami id?
Lambdas make me die inside.
But yes.
That would be a fine way of doing it.
hahaha, they are the glue for everything that doesn’t exist out of the box
an external data source would probably work also
But better than the lambda.
write the external data source in python and maybe it can sometimes be a lambda
Hahah yeah, I’ll use the random provider to coin toss determine if it should be invoked using the external data source or via the lambda via the http data source. That’ll be good.
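(a minimal sketch of the external data source idea, assuming bash and GNU date are available; the JSON key is illustrative:)
data "external" "quarter" {
  # emits e.g. {"quarter":"2021-Q4"}; downstream lookups can key off this
  # so they only change when the quarter rolls over
  program = ["bash", "-c", "printf '{\"quarter\":\"%s-Q%s\"}' \"$(date +%Y)\" \"$(( ($(date +%-m) - 1) / 3 + 1 ))\""]
}

output "current_quarter" {
  value = data.external.quarter.result.quarter
}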
Has anyone built rotating AWS AMI IDs on a time schedule? As in, every 3 months I want to update my AMI ID to latest? A colleague and I are working through trying to do this with time_rotating and we’re continually hitting walls due to Terraform’s lack of capability to compare date values and store / trigger updating values outside of locals.
Discussion and likely path forward discussed here: https://sweetops.slack.com/archives/CB6GHNLG0/p1637606453235000
maybe terraform state list is empty?
cc @matt
I am using the firewall manager module to provision (https://github.com/cloudposse/terraform-aws-firewall-manager). To enable firehose, there is a var firehose_enabled which creates a firehose kinesis destination and stores the logs in an S3 bucket. Underneath, it’s using an S3 module (https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) to provision. I am getting an error (probably because the bucket name exists) when provisioning the firewall manager module:
Error: Error creating S3 bucket: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-southeast-1'
│ with module.firewall-manager.module.firehose_s3_bucket[0].aws_s3_bucket.default[0],
│ on .terraform/modules/firewall-manager.firehose_s3_bucket/main.tf line 5, in resource "aws_s3_bucket" "default":
│ 5: resource "aws_s3_bucket" "default" {
Can I declare a S3 module and feed the S3 bucket name from S3 module into the firewall manager module? How would I go about doing that? Thanks in advance for your guidance, much appreciated!
does the firewall manager require the firehose and s3 bucket to be in the same region?
it doesn’t look like you can supply your own s3 bucket if you enable firehose
Terraform module to configure AWS Firewall Manager - terraform-aws-firewall-manager/firehose.tf at 82c2264e4c666ec28467cbca301c587ab4a4731c · cloudposse/terraform-aws-firewall-manager
ya if the s3 bucket already exists, then it will error out when creating the same bucket
@Anand Gautam please feel free to open a pr and we’ll take a look!
hey RB, thanks for digging in. I see that the bucket name is derived from the cloudposse/label/null module - https://github.com/cloudposse/terraform-aws-firewall-manager/blob/82c2264e4c666ec28467cbca301c587ab4a4731c/firehose.tf#L22 . Would that not make it unique?
not if the same bucket already exists
are you saying that the bucket it’s trying to create does not exist yet ?
ok I see that this is hardcoded attributes = ["firehose"]
I also didn’t pass in these vars in firewall module:
namespace = var.namespace
stage = var.stage
Once I pass these in, I assume it will make it unique. I will give that a shot
those are passed in via the context
im not sure i fully understand the issue
is the issue
• you have a bucket that you want to reuse that the module is trying to create ?
• you dont have a bucket that you want to reuse and youd like the module to create it and even tho it doesnt exist, it’s failing to create it ?
in either case, it would help to see a plan of the s3 bucket resource when youre using the module
module "firewall-manager" {
version = "0.2.2"
source = "cloudposse/firewall-manager/aws"
admin_account_enabled = var.admin_account_enabled
admin_account_id = var.admin_account_id
firehose_enabled = var.firehose_enabled
firehose_arn = var.firehose_arn
security_groups_common_policies = var.security_groups_common_policies
security_groups_content_audit_policies = var.security_groups_content_audit_policies
security_groups_usage_audit_policies = var.security_groups_usage_audit_policies
shiled_advanced_policies = var.shield_advanced_policies
waf_policies = var.waf_policies
waf_v2_policies = var.waf_v2_policies
dns_firewall_policies = var.dns_firewall_policies
network_firewall_policies = var.network_firewall_policies
namespace = var.namespace
stage = var.stage
providers = {
aws.admin = aws.admin
aws = aws
}
I didn’t pass in namespace and stage before so the bucket was not unique
Running into a new error now when applying a wafv2 policy:
│ Error: Creating Policy Failed: InvalidInputException: Error in the SecurityServiceData.ManagedServiceData at [Source: (String)"{"defaultAction":{"type":"ALLOW"},"loggingConfiguration":"{\"logDestinationConfigs\":[\"arn:aws:firehose:us-east-1:xxxxxx:deliverystream/aws-waf-logs-xxxx-xxxx\"],\"redactedFields\":[{\"redactedFieldType\":\"SingleHeader\",\"redactedFieldValue\":\"Cookies\"},{\"redactedFieldType\":\"Method\"}]}","overrideCustomerWebACLAssociation":false,"postProcessRuleGroups":[],"preProcessRuleGroups":[{"excludeRules":[],"managedRuleGroupIdentifier":{"managedRuleGroupName":"AWSManagedRulesLinuxRule"[truncated 145 chars]; line: 1, column: 58]
I added this policy as a default: https://github.com/cloudposse/terraform-aws-firewall-manager/blob/82c2264e4c666ec28467cbca301c587ab4a4731c/examples/complete/main.tf#L70
how my variable looks:
variable "waf_v2_policies" {
type = list(any)
default = [{
name = "linux-policy"
resource_type_list = ["AWS::ElasticLoadBalancingV2::LoadBalancer", "AWS::ApiGateway::Stage"]
policy_data = {
default_action = "allow"
override_customer_web_acl_association = false
pre_process_rule_groups = [
{
"managedRuleGroupIdentifier" : {
"vendorName" : "AWS",
"managedRuleGroupName" : "AWSManagedRulesLinuxRuleSet",
"version" : null
},
"overrideAction" : { "type" : "NONE" },
"ruleGroupArn" : null,
"excludeRules" : [],
"ruleGroupType" : "ManagedRuleGroup"
}
]
}
}
]
}
Any idea as to the cause?
2021-11-23
does anyone know if it’s possible to output the GCP service account key as json ?
i was trying to do the below to no avail …
output "github-action-json" {
value = base64decode(google_service_account_key.ci_tools["github-actions"].private_key)
sensitive = true
}
try outputting the full object and see what it returns
note: use underscores instead of dashes in any resource/data/output names (best practice)
output "github_action_json" {
value = google_service_account_key.ci_tools
sensitive = true
}
yeh i normally do, my bad
thanks for pointing that out
@Steve Wade (swade1987) np.
did outputting the full object allow you to figure out how to output the base64’ed private key ?
i actually just wrote the output to a file for the time being
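(for the record, something like this writes the decoded key without echoing it in plan output - sensitive_content is the pre-2.2 local provider argument, so treat as a sketch:)
resource "local_file" "github_actions_key" {
  filename          = "${path.module}/github-actions-key.json"
  sensitive_content = base64decode(google_service_account_key.ci_tools["github-actions"].private_key)
}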
2021-11-24
any idea for a nice way to handle schema creation with terraform on rds mysql?
I find tons of terraform to do this for postgresql but nothing nice for mysql
for now the only solution that I have seen is using the mysql cli from an ec2/ecs instance that has access to the rds instance, with a hash of an sql file (see https://github.com/hashicorp/terraform/issues/10740#issuecomment-267224310 or https://stackoverflow.com/a/59928898/7015902 )
With Terraform being responsible for creating my instances, databases, and container services, the biggest gap right now is schema management. You can use a local-exec provisioner on a mysql_databa…
I created RDS instance using aws_db_instance (main.tf): resource “aws_db_instance” “default” { identifier = “${module.config.database[“db_inst_name”]}” allocated_storage = 20 …
@Grummfy just a clarification: are you interested only in the schema creation but not the DDL objects inside it, is that correct? If so, how are you planning to maintain your DDL objects?
for the data structure, it will be deployed by the application, but the infrastructure must provide the schema
so yes, only schema
ideally, it would create a user with specific privileges for each schema
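(one option, sketched with the community MySQL provider - resource names are from terraform-provider-mysql; the usual catch applies that terraform needs network access to the RDS endpoint:)
provider "mysql" {
  endpoint = aws_db_instance.default.endpoint # hypothetical RDS instance
  username = "admin"
  password = var.admin_password
}

# one schema per application
resource "mysql_database" "app" {
  name = "app"
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password
}

# grant the user privileges on its own schema only
resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = mysql_database.app.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}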
@RB - you recently merged: https://github.com/cloudposse/terraform-aws-code-deploy/pull/9 - thank you for the prompt merge! However, I’ve found an issue with the tagging for which you asked that this line was set: https://github.com/cloudposse/terraform-aws-code-deploy/blob/master/main.tf#L192
for_each = length(var.ec2_tag_set) > 0 ? [] : var.ec2_tag_set
The problem is that when I run terraform, we get the value of ec2_tag_set being NULL - ie not being set at all. The logic can’t ever produce a non-empty result: when the list has elements, the condition returns []. We have other variables checked using:
for_each = var.deployment_style == null ? [] : [var.deployment_style]
so I’m going to create another PR which sets this instead - will you do me the kindness of approving etc? Thank you!
what The tagging for EC2 tag sets is broken. This is now sorted. I have added a variable to allow ec2 filters and not just tag sets. There is a bit where the ARN string for the role is missing &qu…
Terraform module to provision AWS Code Deploy app and group. - terraform-aws-code-deploy/main.tf at master · cloudposse/terraform-aws-code-deploy
sure please submit a pr
could you also put the error that you’re seeing in the pr itself? I’m curious because we’ve seen this work in other places
it’s not causing an error - but the entire variable is now missing, so the created CodeDeploy deployment group is missing the tagging
I’m just checking my change now
it’s throwing another issue strangely
you mean the ec2 tag set argument is not provided to the raw resource?
yah
can you share the inputs and plan in the pr description
sure - thank you
np. this may take me until later to get to but let’s try to resolve this today
Hi - I didn’t get around to sending a PR as I ran into a problem when setting up a conditional for the ec2_tag_filter variable. For some reason, setting up
for_each = var.ec2_tag_filter == null ? [] : [var.ec2_tag_filter]
results in the following error when running a plan:
module.notify_slack.aws_sns_topic_subscription.sns_notify_slack[0]: Refreshing state... [id=arn:aws:sns:eu-central-1:615854049996:stargate:0fd866e3-2b52-438e-819b-73b98ee08701]
╷
│ Error: Invalid index
│
│ on .terraform/modules/code-deploy-stargate/main.tf line 184, in resource "aws_codedeploy_deployment_group" "default":
│ 184: key = ec2_tag_filter.value["key"]
│ ├────────────────
│ │ ec2_tag_filter.value is empty set of object
│
│ Elements of a set are identified only by their value and don't have any separate index or key to select with, so it's only possible to perform operations across all elements of the set.
╵
╷
│ Error: Invalid index
│
│ on .terraform/modules/code-deploy-stargate/main.tf line 185, in resource "aws_codedeploy_deployment_group" "default":
│ 185: type = ec2_tag_filter.value["type"]
│ ├────────────────
│ │ ec2_tag_filter.value is empty set of object
│
│ Elements of a set are identified only by their value and don't have any separate index or key to select with, so it's only possible to perform operations across all elements of the set.
╵
╷
│ Error: Invalid index
│
│ on .terraform/modules/code-deploy-stargate/main.tf line 186, in resource "aws_codedeploy_deployment_group" "default":
│ 186: value = ec2_tag_filter.value["value"]
│ ├────────────────
│ │ ec2_tag_filter.value is empty set of object
│
│ Elements of a set are identified only by their value and don't have any separate index or key to select with, so it's only possible to perform operations across all elements of the set.
╵
╷
│ Error: Unsupported attribute
│
│ on .terraform/modules/code-deploy-stargate/main.tf line 197, in resource "aws_codedeploy_deployment_group" "default":
│ 197: for_each = ec2_tag_set.value.ec2_tag_filter
│ ├────────────────
│ │ ec2_tag_set.value is set of object with 1 element
│
│ Can't access attributes on a set of objects. Did you mean to access an attribute across all elements of the set?
The tag of the repo is here: https://github.com/StephenTan-TW/terraform-aws-code-deploy/blob/6145b27203c832cd66176369dd3d326a55c14a74/main.tf . When I set up ec2_tag_filter without any conditional:
for_each = var.ec2_tag_filter
then the plan works as “normal”. It’s quite bizarre. Here is the plan output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
-/+ destroy and then create replacement
Terraform will perform the following actions:
# module.code-deploy-stargate.aws_codedeploy_deployment_group.default[0] is tainted, so must be replaced
-/+ resource "aws_codedeploy_deployment_group" "default" {
~ arn = "arn:aws:codedeploy:eu-central-1:615854049996:deploymentgroup:stargate/stargate" -> (known after apply)
- autoscaling_groups = [] -> null
~ compute_platform = "Server" -> (known after apply)
~ deployment_group_id = "e0942c61-47bd-4098-903f-c5331d181d7c" -> (known after apply)
~ id = "e0942c61-47bd-4098-903f-c5331d181d7c" -> (known after apply)
tags = {
"Name" = "stargate"
}
# (5 unchanged attributes hidden)
+ blue_green_deployment_config {
+ deployment_ready_option {
+ action_on_timeout = (known after apply)
+ wait_time_in_minutes = (known after apply)
}
+ green_fleet_provisioning_option {
+ action = (known after apply)
}
+ terminate_blue_instances_on_deployment_success {
+ action = (known after apply)
+ termination_wait_time_in_minutes = (known after apply)
}
}
+ ec2_tag_set {
+ ec2_tag_filter {
+ key = "Service"
+ type = "KEY_AND_VALUE"
+ value = "stargate-frontend"
}
+ ec2_tag_filter {
+ key = "Service"
+ type = "KEY_AND_VALUE"
+ value = "stargate-processing"
}
}
# (3 unchanged blocks hidden)
}
Terraform module to provision AWS Code Deploy app and group. - terraform-aws-code-deploy/main.tf at 6145b27203c832cd66176369dd3d326a55c14a74 · StephenTan-TW/terraform-aws-code-deploy
The TLDR is that in order for this to work, we need to remove the conditional from the ec2_tag_filter variable! No idea why this is required
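(a plausible explanation, for the record: var.ec2_tag_filter is a set of objects, so wrapping it as [var.ec2_tag_filter] makes each ec2_tag_filter.value the whole set rather than a single object - which matches the “value is empty set of object” errors above. Iterating the set directly keeps each .value one object; a sketch of the dynamic block inside the deployment group resource:)
dynamic "ec2_tag_filter" {
  # iterate the set itself (the working fix); give the variable a
  # `default = []` so the null case never arises
  for_each = var.ec2_tag_filter
  content {
    key   = ec2_tag_filter.value["key"]
    type  = ec2_tag_filter.value["type"]
    value = ec2_tag_filter.value["value"]
  }
}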
I’ll push a PR based on the WORKING code. I know that this isn’t consistent with Cloud Posse conventions, but then it’s something we can work on. I think you are in the US ( I’m in the UK ) so I won’t expect an answer for some time
here is the PR: https://github.com/cloudposse/terraform-aws-code-deploy/pull/13
what In order to ensure that tagging is processed correctly, I have created a PR of working code. For some reason, we need to disable conditionals for ec2_tag_filter variable and we need to set the…
thanks in advance @RB
thank you for pointing out the flawed logic @RB - I couldn’t see the wood for the trees! It’s so obvious in hindsight. I’ve updated the PR now
this should all just work fine and dandy
this looks pretty interesting … https://nubesgen.com/
With NubesGen, going to production on Azure is only one git push away.
Hey there - I’m trying to use the new SLO module here and passing this yaml for it. But my plan is saying force_delete, groups, message, query, thresholds and validate are required. It suggests to me that it is trying to use type metric instead of monitor, but I don’t know why?
prod/test-slo:
name: Test SLO
type: monitor
description: Test SLO
monitor_ids: []
tags: ["managedby:terraform", "env:prod"]
Terraform module to configure and provision Datadog monitors, custom RBAC roles with permissions, Datadog synthetic tests, Datadog child organizations, and other Datadog resources from a YAML confi…
@Ben Smith (Cloud Posse)
Hey sorry for the late response. So due to how terraform deals with objects we have to pass all those variables - meaning you have to pass those variables as null or a default value. Also at this time the SLO module is more focused on metric-based SLOs - this is because it’s very easy to write a numerator and denominator query. We plan to add monitor-based support soon, but it requires integration with our datadog-monitor module and fetching those monitor IDs.
Hi everyone, I was wondering if anyone has tried to set up vpc peering using this https://registry.terraform.io/modules/cloudposse/vpc-peering-multi-account/aws/latest?
I couldn’t set it up as it’s giving me the below error
Error: query returned no results. Please change your search criteria and try again
│
│ with module.vpc_peering_cross_account.data.aws_route_table.accepter[0],
│ on .terraform/modules/vpc_peering_cross_account/accepter.tf line 67, in data "aws_route_table" "accepter":
│ 67: data "aws_route_table" "accepter" {
This is what I have in my main.tf
module "vpc_peering_cross_account" {
source = "git::<https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=tags/0.17.1>"
namespace = "ex"
stage = "dev"
name = "vpc_peering_cross_account"
requester_aws_assume_role_arn = "arn:aws:iam::xxxx:role/BURoleForCrossAccountVpcPeering"
requester_region = "eu-west-2"
requester_vpc_id = "vpc-04dcfe9aaaxxxxxx"
requester_allow_remote_vpc_dns_resolution = "true"
requester_subnet_tags = { "Name" = "vpc-central-subnet-3"}
accepter_aws_assume_role_arn = "arn:aws:iam::yyyy:role/BURoleForCrossAccountVpcPeering"
accepter_region = "eu-west-1"
accepter_vpc_id = "vpc-0a28ca6a26dyyyyy"
accepter_allow_remote_vpc_dns_resolution = "true"
accepter_subnet_tags = { "Name" = "vpc-private-data-1"}
context = module.this.context
}
Any hints or advice will be highly appreciated! Thanks!
Looks like it’s not able to find the route table in your accepter vpc
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - terraform-aws-vpc-peering-multi-account/accepter.tf at 0cad62a594f9a116a3a2a395e…
It uses the found accepter subnets to query the route table
i’d double check the accepter input vars and verify that you have a route table connected to those subnets
do you have any advice on how i can proceed?
did you double check the accepter inputs and verify the subnet route table matches that of the tag you supplied to the module?
my accepter input vars:
accepter_region = "eu-west-1"
accepter_vpc_id = "vpc-0a28ca6a26d8df84b"
accepter_allow_remote_vpc_dns_resolution = "true"
accepter_subnet_tags = { "Name" = "vpc-pcc-dev-private-data-1"}
this means the route table query should return a result, right?
it should as long as vpc-0a28ca6a26d8df84b is in eu-west-1
yes, vpc-0a28ca6a26d8df84b is in eu-west-1
can you try creating a new module and add a tf file that contains something like this
provider "aws" {
alias = "accepter"
region = "eu-west-1"
}
data "aws_subnet_ids" "accepter" {
provider = aws.accepter
vpc_id = "vpc-0a28ca6a26d8df84b"
tags = { "Name" = "vpc-pcc-dev-private-data-1" }
}
output "subnets" {
value = data.aws_subnet_ids.accepter
}
what does that return when you do an apply ?
Changes to Outputs:
+ subnets = {
+ filter = null
+ id = "vpc-0a28ca6a26d8df84b"
+ ids = [
+ "subnet-085758c0b28a31721",
]
+ tags = {
+ "Name" = "vpc-pcc-dev-private-data-1"
}
+ vpc_id = "vpc-0a28ca6a26d8df84b"
}
You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.
so it looks like it can find it. im at a loss why the module doesn’t find it since its using the same logic
i tried with below:
provider "aws" {
alias = "requester"
region = "eu-west-2"
}
data "aws_route_table" "requester" {
provider = aws.requester
vpc_id = "vpc-04dcfe9aaafa9a768"
subnet_id = "subnet-0d74248d90195f61b"
}
output "requester_routes" {
value = data.aws_route_table.requester
}
The output is
Error: query returned no results. Please change your search criteria and try again
│
│ with data.aws_route_table.requester,
│ on main.tf line 47, in data "aws_route_table" "requester":
│ 47: data "aws_route_table" "requester" {
If i comment out the subnet_id attribute in data.aws_route_table.requester, i’ll get some output
provider "aws" {
alias = "requester"
region = "eu-west-2"
}
data "aws_route_table" "requester" {
provider = aws.requester
vpc_id = "vpc-04dcfe9aaafa9a768"
#subnet_id = "subnet-0d74248d90195f61b"
}
output "requester_routes" {
value = data.aws_route_table.requester
}
The output (partially):
Changes to Outputs:
+ requester_routes = {
+ arn = "arn:aws:ec2:eu-west-2:757726977567:route-table/rtb-0e7c5763cd6513f34"
+ associations = [
+ {
+ gateway_id = ""
+ main = true
+ route_table_association_id = "rtbassoc-03d465d3131e67c64"
+ route_table_id = "rtb-0e7c5763cd6513f34"
+ subnet_id = ""
},
]
+ filter = null
+ gateway_id = null
+ id = "rtb-0e7c5763cd6513f34"
+ owner_id = "757726977567"
+ route_table_id = "rtb-0e7c5763cd6513f34"
+ routes = [
+ {
+ carrier_gateway_id = ""
+ cidr_block = "194.60.191.6/32"
+ destination_prefix_list_id = ""
+ egress_only_gateway_id = ""
+ gateway_id = "vgw-0c0438f78768596fa"
+ instance_id = ""
+ ipv6_cidr_block = ""
+ local_gateway_id = ""
+ nat_gateway_id = ""
+ network_interface_id = ""
+ transit_gateway_id = ""
+ vpc_endpoint_id = ""
+ vpc_peering_connection_id = ""
},
+ {
+ carrier_gateway_id = ""
+ cidr_block = "194.60.191.20/32"
could it be the way the route tables are set up without explicit subnet association, so the query didn’t find them?
after adding the explicit subnet association, i managed to create the vpc peering connection
thanks for your help @RB
another question, how is it possible to make this module work with for_each?
i tried below
module "vpc_peering_cross_account" {
for_each = var.remote_peering_data
source = "git::<https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=tags/0.17.1>"
but encountered the error :
Error: Module module.vpc_peering_cross_account contains provider configuration
│
│ Providers cannot be configured within modules using count, for_each or depends_on.
ah yes it’s not possible because the providers are defined in the module itself
is there any plan to remove the providers from the module itself so that the for_each/count will work?
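(for context, the change that would allow it - a sketch, not what the module currently does; the alias names are illustrative. The module would declare provider aliases instead of configuring providers internally, and the caller would pass configured providers to each instance:)
# inside the module: declare aliases only, no provider blocks
terraform {
  required_providers {
    aws = {
      source                = "hashicorp/aws"
      configuration_aliases = [aws.requester, aws.accepter]
    }
  }
}

# in the caller: configure providers once, pass them to every instance
module "vpc_peering_cross_account" {
  for_each = var.remote_peering_data
  source   = "git::<https://github.com/cloudposse/terraform-aws-vpc-peering-multi-account.git?ref=tags/0.17.1>"

  providers = {
    aws.requester = aws.requester
    aws.accepter  = aws.accepter
  }
}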
for folks who are not using TFE or Spacelift, what do you use as a private module registry?
Am aware that 99% are using GH as a source for their modules, however the lack of support for dynamic versions - ie source = "git::<https://[email protected]/org/tf-db.git?ref=$VERSION>" - is a nightmare to keep updating, especially when you are in the world of poly-repo (each child module with its own git repo).
Am also aware of https://github.com/outsideris/citizen but it feels like it’s not ready for prod usage, thoughts?
I use github. What’s wrong with using github as a private source ? You can always create a private module per repo.
I’ve read of citizen but I haven’t personally used it. TBH github has been so easy that when I worked with TFE or Spacelift, i’ve always used github as the point of truth.
Gitlab is a valid registry so you can hit it as you would terraform cloud and such. https://docs.gitlab.com/ee/user/packages/terraform_module_registry/index.html
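(consuming a module from the GitLab registry then looks like any registry source, which gets you real version constraints - host and path here are illustrative:)
module "db" {
  source  = "gitlab.example.com/my-group/tf-db/aws"
  version = ">= 1.2.0, < 2.0.0"
}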
What’s wrong with using github as a private source
@RB the prob comes with the fact that you can’t have s’thing like
terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.25.0"
    }
  }
}
and so you need to have a mechanism to update the version in the module source ...
And that is not so much fun when you have a dozen root modules with various permutations of child modules (scattered across 100+ git repos)
@aimbotd that is very neat, wasn’t aware of that, thx a bunch !
You got it. I also wasn’t aware of it until recently. I’ve since moved over a bunch of my projects from GH over to GL. They even have a remote state feature if you’d like GL to manage the remote states. Super easy to get up and running. Honestly, migrating over to GL was one of the easier migrations I’ve had in some time.
i’m starting to consider this too, especially when i take into account their CI/CD offering vs GHA, which for the CD part … is limited (to be polite).
I use dependabot to update the git source ref. Renovate also works
I may or may not have purchased Gitlab Ultimate to exercise all of the features. I’m really enjoying their pipeline and various security related features. I honestly thought GL was subpar because I used it many, many moons ago and just didn’t like it. My most recent role has me pretty embedded into it and I have to say, it’s been lovely.
I use dependabot to update the git source ref. Renovate also works
@loren thanks for mentioning , never came across that path but i’ll certainly check it
I use dependabot to update the git source ref. Renovate also works
same, it’s super convenient
had a quick look at dependabot and will need to double check if this works with a non-public / private TF registry (i.e. spacelift i know provides one, and now GL, however not GH). Anyway will crack on with a quick playground to see. thx folks for your help
Be sure to look at renovate also. It’s been moving more quickly with new features than dependabot the last couple years. Dependabot does just enough for me, but sounds like your use case might require more
right, both been marked to look at, much thanks
how do you configure dependabot to do that? Is there specific support for Terraform or can you add your own languages?
fyi @Alex Jurkiewicz - https://sysdogs.com/articles/automating-terraform-updates-with-dependabot although that is using spacelift (and its private registry) which is not what i have/ can use atm . HTH
in .github/dependabot.yml, for each directory with .tf files, add an entry like this (changing the directory as needed):
- package-ecosystem: terraform
directory: "/"
schedule:
interval: daily
open-pull-requests-limit: 10
so that will make dependabot generate updates for public provider versions when they are released?
that’s amazing
I do think dependabot added support for managing provider versions recently… Renovate has done that for a while. But personally I only use it for bumping the version in a module’s source url git ref
(fwiw, it also works for the source url in terragrunt.hcl files)
different thread:
in one of the talks (maybe office hours, i forget) it was mentioned that it’s better to separate the lambda zip creation from the deployment of the function by having the “Build” phase push it to an S3 bucket while the TF lambda module just consumes it from S3.
Here is my q:
the CD part of TF which deploys the new function - how will it know which S3 object to use? To be specific, should the S3 objects be prefixed with a version number which can be consumed in the various env deployments?
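(one common shape, sketched with illustrative names: the build phase uploads lambda/app/<version>.zip to a well-known bucket, and the deploy phase pins that version as a plain input:)
variable "artifact_version" {
  type        = string
  description = "version of the prebuilt zip pushed by CI, e.g. 1.4.2"
}

resource "aws_lambda_function" "app" {
  function_name = "app"
  role          = aws_iam_role.lambda.arn # hypothetical role
  handler       = "index.handler"
  runtime       = "nodejs14.x"

  # the build pipeline owns this bucket/key layout; terraform only reads it
  s3_bucket = "my-artifacts-bucket"
  s3_key    = "lambda/app/${var.artifact_version}.zip"
}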
Have you considered turning your lambda into a Docker container and deploying the lambda as a single container ?
Then you dont have to use a null provider or zip anything up
Have you considered turning your lambda into a Docker container and deploying the lambda as a single container ?
negative, not yet as the cold start is still a problem for us. Maybe in the near future
2021-11-25
2021-11-26
I want to build a new AWS ECS cluster with cloudposse templates. How can I get my ECS cluster code, along with the required terraform modules, onto my local machine? Should I clone the entire cloudposse terraform repository?
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - GitHub - cloudposse/terraform-aws-ecs-web-app: Terraform module that…
2021-11-28
2021-11-29
Hi All, did any of you experience the following terraform auth errors intermittently in the CI/CD pipeline?
Error: error configuring Terraform AWS Provider: no valid credential sources for Terraform AWS Provider found.
Please see <https://registry.terraform.io/providers/hashicorp/aws>
for more information about providing credentials.
Error: WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400
It feels like the problem is network issues inside kubernetes but I am not sure yet. Any idea why this could be happening?
did you read the link provided?
I mentioned it happens intermittently
Hi all, I wanted to use https://github.com/cloudposse/terraform-aws-iam-role to create a policy and add to the role, This is my code:
data "aws_iam_policy_document" "test-cloudwatch-put-metric-data" {
statement {
effect = "Allow"
resources = ["*"]
actions = ["cloudwatch:PutMetricData"]
}
}
module "test-instance-role" {
source = "git::<https://github.com/cloudposse/terraform-aws-iam-role.git?ref=tags/0.13.0>"
enabled = true
namespace = "myproject"
stage = "dev"
name = "test-instance-role"
policy_description = "test policy"
role_description = "test permissions role"
policy_documents = [
data.aws_iam_policy_document.test-cloudwatch-put-metric-data.json
]
}
But after making the plan I get an error:
Error: Unsupported argument
on .terraform/modules/test-instance-role/main.tf line 17, in data "aws_iam_policy_document" "assume_role_aggregated":
17: override_policy_documents = data.aws_iam_policy_document.assume_role.*.json
An argument named "override_policy_documents" is not expected here.
Error: Unsupported argument
on .terraform/modules/test-instance-role/main.tf line 33, in data "aws_iam_policy_document" "default":
33: override_policy_documents = var.policy_documents
An argument named "override_policy_documents" is not expected here.
Where am I making a mistake?
A Terraform module that creates IAM role with provided JSON IAM polices documents. - GitHub - cloudposse/terraform-aws-iam-role: A Terraform module that creates IAM role with provided JSON IAM poli…
what version of the aws provider are you using ?
try terraform init -upgrade
the argument override_policy_documents does exist on that data source.
I’m using:
terraform version
Terraform v0.13.5
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/aws v2.70.0
+ provider registry.terraform.io/hashicorp/local v1.4.0
+ provider registry.terraform.io/hashicorp/null v2.1.2
+ provider registry.terraform.io/hashicorp/template v2.2.0
aws v2.70.0 is old, can you upgrade your provider to 3.x ?
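(for reference, override_policy_documents landed in the 3.x line of the aws provider - 3.28.0 if memory serves - so the root module would need something like:)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.28"
    }
  }
}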
not now, I have big infrastructure
well that’s a thing apparently… https://aws.amazon.com/blogs/aws/new-aws-control-tower-account-factory-for-terraform/
AWS Control Tower makes it easier to set up and manage a secure, multi-account AWS environment. AWS Control Tower uses AWS Organizations to create what is called a landing zone, bringing ongoing account management and governance based on our experience working with thousands of customers. If you use AWS CloudFormation to manage your infrastructure as […]
the module feels like a bit of a mess. aws trying to do terraform, but pulling from concepts only valid in cdk and cfn
oh lord, and the providers are all hard-coded
the more i poke at it, the more frightened i get
AWS Control Tower Account Factory. Contribute to aws-ia/terraform-aws-control_tower_account_factory development by creating an account on GitHub.
Yeah…. no. They can’t even name their module according to the standards. Weak.
Typical aws fauxpen-source… https://github.com/aws-ia/terraform-aws-control_tower_account_factory/blob/main/CONTRIBUTING.md
AWS Control Tower Account Factory. Contribute to aws-ia/terraform-aws-control_tower_account_factory development by creating an account on GitHub.
Thank you for your interest in contributing to the AWS Control Tower Account Factory for Terraform.
At this time, we are not accepting contributions. If contributions are accepted in the future, the AWS Control Tower Account Factory for Terraform is released under the Apache license and any code submitted will be released under that license.
“some big customer asked us enough that we devoted a sprint to this. we don’t plan to touch it again”
My thoughts exactly!
Has anyone migrated an existing organisation into control tower? How did it go? The process seems extremely scary to me
I have migrated an org running the legacy LZ into a new Org with CT.
We are running without any LZ/CT currently
if ur migrating the accounts of the same org to CT - I think AWS has automation in place from the CT console ?
if u run CT checks it will tell you if it can convert the org or not
Would probably advise against CT. There are many caveats, e.g. CT logs expiring after a year