#geodesic (2020-03)
Discussions related to https://github.com/cloudposse/geodesic
Archive: https://archive.sweetops.com/geodesic/
2020-03-02
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions about geodesic, get live demos, and learn from others using it. Next one is Mar 11, 2020 11:30AM. Register for the webinar and join #office-hours (our channel).
2020-03-04
2020-03-05
2020-03-09
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions about geodesic, get live demos, and learn from others using it. Next one is Mar 18, 2020 11:30AM. Register for the webinar and join #office-hours (our channel).
There are no events this week
2020-03-10
tl;dr:
Re: https://github.com/cloudposse/terraform-aws-acm-request-certificate
Do I need to maintain remote state resources (S3 and DynamoDB, aka tfstate-backend) in each region where we need to create ACM certs?
This module works well; however, we’re trying to create a certificate for example.com in two regions: ap-southeast-2 for direct use in ELBs and us-east-1 for CloudFront.
ap-southeast-2 is our default region and is where the remote state is stored.
The first run works, but when changing the provider region I found that, because we’re still using remote state in ap-southeast-2, Terraform thinks the certificate already exists in us-east-1 and fails when trying to refresh state.
I’m using geodesic, so I’m thinking I need to change the region in the following variables (probably in terraform.envc?) to make this work:
TF_VAR_aws_default_region=ap-southeast-2
TF_VAR_tf_bucket_region=ap-southeast-2
TF_CLI_ARGS_init=-backend-config=region=ap-southeast-2 -backend-config=key=app-cert/terraform.tfstate -backend-config=bucket=acme-prod-terraform-state -backend-config=dynamodb_table=acme-prod-terraform-state-lock -backend-config=encrypt=true -from-module=git::ssh://git@bitbucket.org/acme/acme.infra.modules.git//modules/acm-request-certificate?ref=master .module
TF_VAR_aws_region=ap-southeast-2
Thanks!
@Joe Niland that’s how we did it before https://github.com/cloudposse/terraform-root-modules/blob/master/aws/acm-cloudfront/main.tf
or you can provide a separate provider to the module by using
providers = {
  aws = aws.east
}
In Terraform, providers are responsible for managing the lifecycle of a resource: create, read, update, delete.
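For that to work, the aws.east reference needs an aliased provider defined alongside the default one. A minimal sketch (the alias name east is just illustrative):
provider "aws" {
  region = "ap-southeast-2" # default provider, used by everything that doesn't specify another
}

provider "aws" {
  alias  = "east"
  region = "us-east-1"      # referenced from the module via providers = { aws = aws.east }
}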
@Andriy Knysh (Cloud Posse) thanks! I did set the aws provider region to var.aws_region and then it does hit the acm.us-east-1 endpoint, but it seems the issue is that I need to change the Terraform backend to be in us-east-1 also. Would that be correct?
Since we have already created the cert in another region, Terraform expects it to be there and then fails because the API call to acm.us-east-1 returns 404
At least, that’s what I think is happening
backend needs to have the region where you provisioned the bucket and dynamo table for it
backend "s3" {
encrypt = true
bucket = "..."
key = "terraform.tfstate"
dynamodb_table = "...."
region = "us-east-1"
}
I used the cold start process so it’s all defined in the Dockerfile
# Terraform State Bucket
ENV TF_BUCKET_PREFIX_FORMAT="basename-pwd"
ENV TF_BUCKET_ENCRYPT="true"
ENV TF_BUCKET_REGION="${AWS_REGION}"
ENV TF_BUCKET="${NAMESPACE}-${STAGE}-terraform-state"
ENV TF_DYNAMODB_TABLE="${NAMESPACE}-${STAGE}-terraform-state-lock"
so I guess I need to override it for this module?
so for that particular project, you specify the backend with one region, then add a provider for another region and add that provider to the cert module
ok, thanks very much. I will try that.
TF will use that backend but will read the cert resources using that provider
ok, cool. I’ll see what I can do. Thanks @Andriy Knysh (Cloud Posse)!
yea, so in short, you can specify any regions for the backend and for a set of providers, and TF will use them. Or you can even go cross-account (for any resource you want) if you specify assume_role
(granted that the user/role you are using to provision has permissions to assume that cross-account role, and the cross-account role has enabled assuming it in its trust policy)
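For reference, a cross-account provider along those lines might look like the sketch below (the alias, account ID, and role name are hypothetical):
provider "aws" {
  alias  = "prod"
  region = "us-east-1"

  assume_role {
    # your current user/role must be allowed to assume this role,
    # and the role's trust policy must allow that principal
    role_arn = "arn:aws:iam::111111111111:role/terraform"
  }
}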
ok, I see
And is it assumed that I need to run https://github.com/cloudposse/terraform-root-modules/blob/master/aws/tfstate-backend for each region as well?
that’s how you decide to use it. You can provision backend in just one account/region and store states from diff accounts/regions in it
or you can provision backend for each account (or region, if it makes sense)
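To illustrate the single-backend approach, each project can point at the same bucket and lock table (wherever they live) and vary only the key; a sketch reusing the bucket and table names from the earlier env vars:
terraform {
  backend "s3" {
    region         = "ap-southeast-2"             # region where the state bucket and lock table were provisioned
    bucket         = "acme-prod-terraform-state"
    dynamodb_table = "acme-prod-terraform-state-lock"
    encrypt        = true
    key            = "app-cert/terraform.tfstate" # unique key per project/root module
  }
}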
I would prefer to use one region but it seems that this is causing my issue.
So I created a cert for example.com in ap-southeast-2. Then I change the aws provider region to us-east-1 and run the module again to try to create a cert for example.com in us-east-1
I get this error:
Error: Invalid index
on .terraform/modules/acm_request_certificate/main.tf line 16, in locals:
16: domain_validation_options_list = local.process_domain_validation_options ? aws_acm_certificate.default.0.domain_validation_options : []
|----------------
| aws_acm_certificate.default is empty tuple
The given key does not identify an element in this collection value.
And this is because:
2020-03-10T10:38:29.979Z [DEBUG] plugin.terraform-provider-aws_v2.52.0_x4: X-Amz-Target: CertificateManager.DescribeCertificate
2020-03-10T10:38:29.979Z [DEBUG] plugin.terraform-provider-aws_v2.52.0_x4: Accept-Encoding: gzip
2020-03-10T10:38:29.979Z [DEBUG] plugin.terraform-provider-aws_v2.52.0_x4:
2020-03-10T10:38:29.979Z [DEBUG] plugin.terraform-provider-aws_v2.52.0_x4: {"CertificateArn":"arn:aws:acm:ap-southeast-2:xx:certificate/4252bd2a-...-...-...-..."}
...
2020-03-10T10:38:30.949Z [DEBUG] plugin.terraform-provider-aws_v2.52.0_x4: 2020/03/10 10:38:30 [DEBUG] [aws-sdk-go] DEBUG: Validate Response acm/DescribeCertificate failed, attempt 0/25, error ResourceNotFoundException: Could not find Certificate for Account xx
2020/03/10 10:38:30 [ERROR] module.acm_request_certificate: eval: *terraform.EvalLocal, err: Invalid index: The given key does not identify an element in this collection value.
The ARN it’s trying to find is from the ap-southeast-2 region
Maybe the issue is that internally the ACM cert state doesn’t contain region
I guess I can get around this by just using multiple modules actually - since then they’ll have their own state file in s3
If you need two certificates in different regions, you instantiate the module two times and give them different providers, one per region
Where the backend is doesn’t matter - it has its own region param
It could be in one of those regions
Or in a completely different one
Backend is a separate part of terraform
Not related to AWS providers
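A minimal sketch of that two-instance approach (module inputs abbreviated; see the module’s README for the full set):
provider "aws" {
  region = "ap-southeast-2"
}

provider "aws" {
  alias  = "east"
  region = "us-east-1"
}

# cert for the ELBs in the default region
module "acm_apse2" {
  source      = "git::https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=master"
  domain_name = "example.com"
}

# cert for CloudFront, which requires us-east-1
module "acm_use1" {
  source      = "git::https://github.com/cloudposse/terraform-aws-acm-request-certificate.git?ref=master"
  domain_name = "example.com"

  providers = {
    aws = aws.east
  }
}
Each instance then has its own resources in the state file, so the two certificates no longer collide.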
OK, I see.
I think that is how it should work, but it isn’t, because aws_acm_certificate.default.0.domain_validation_options is referencing something that exists in the backend state but isn’t found in the region.
I will keep messing around and see what I can find. Thanks again for your help!
looks like the state is messed up
I would delete everything and start with instantiating the module two times with a diff provider per region
ok good advice - will start from scratch
2020-03-11
We may have a scenario where we need to change our Root AWS Account for our organization. While I’ve done this for accounts without Geodesic, is it recommended for a set of accounts set up against the reference architecture? Any recommended steps to take and things to watch out for?
can’t say we’ve encountered this specific situation
happy to jump on a call to discuss the situation
We decided against going down this path.
The thought process I had begun to work through was:
• Create a new AWS organization and initialize it via the reference architecture (root account only)
• Transfer ownership of all AWS accounts to the new root account
I started thinking through how to have accounts reference a new root, but we changed course before proceeding further down that path.
2020-03-16
There are no events this week
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions about geodesic, get live demos, and learn from others using it. Next one is Mar 25, 2020 11:30AM. Register for the webinar and join #office-hours (our channel).
2020-03-23
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions about geodesic, get live demos, and learn from others using it. Next one is Apr 01, 2020 11:30AM. Register for the webinar and join #office-hours (our channel).
Hi, if I want to start with the reference architecture, shall I follow this https://docs.cloudposse.com/reference-architectures/cold-start/ or this https://github.com/cloudposse/reference-architectures ?
Actually, both guides are quite out of date.
Sorry to say - we don’t have anything updated and public for HCL2
We’ve overhauled the reference architectures entirely for our latest customers
but have not had a chance to open source it yet.
We’ve also changed the strategy and moved away from multiple repos (one per AWS account), as it proved too cumbersome to promote changes across environments with pipelines (gitops)
our current focus is on radical simplification of our reference architecture in a way that makes gitops a first-class consideration.
I see, thanks for the clarification
@Erik Osterman (Cloud Posse) is there a migration path from the current ref arch to what you are working on?
possibly - but not anything we’ll automate. it would largely consist of moving statefiles around into a centralized bucket with workspaces broken out by member account.
I think it would be great if there were a procedure to ensure we can migrate to the new structure, even if it is not automated.
we could arrange a call with you or anyone else to help go over that
2020-03-27
Adding @discourse_forum bot
@discourse_forum has joined the channel
2020-03-30
:zoom: Join us for “Office Hours” every Wednesday 11:30AM (PST, GMT-7) via Zoom. This is an opportunity to ask us questions about geodesic, get live demos, and learn from others using it. Next one is Apr 08, 2020 11:30AM. Register for the webinar and join #office-hours (our channel).