Discussions related to https://github.com/cloudposse/geodesic
RE: Atmos – kudos to you @Joe Hosteny for your advice today in getting my Atmos compiles going. Everything works now. Cheers –
Hi all, I had some more time to work on our account conversion today. I got to the
dns-primary component, but I seem to be missing something now. What is the expectation for how to run terraform plan / apply for the non-bootstrap components using the terraform profiles, rather than the org role ARNs (I am looking at the
all-new-components branch)? For example, it was easy for the
iam-delegated-roles, as that component’s provider performs an
assume_role into the destination account. Is there a short blurb that gives a high level view of how things are to fit together?
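For context, the assume_role pattern being referred to looks roughly like this in a component's provider. This is a sketch only; the account ID and role name are placeholders, not the actual values from the component:

```hcl
# Sketch of the assume_role provider pattern (e.g. as used by
# iam-delegated-roles); the role ARN below is a placeholder.
provider "aws" {
  region = var.region

  assume_role {
    # Role in the destination (delegated) account
    role_arn = "arn:aws:iam::111111111111:role/example-gbl-dns-terraform"
  }
}
```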
Specifically, I kind of blindly configured the stack and started a run, then was surprised to see the zones appearing in the root account. That’s when I noticed that the component was using the named terraform profile (or named terraform role ARN on master)
@Joe Hosteny the current status of assume_role vs profile is a bit in flux. I know Cloud Posse folks are moving towards using profiles for everything since that covers some issue that they were having that I’m not 100% up-to-speed on.
I personally had issues with this so I created a potential abstract aws-provider.tf proposal which is up on PR: https://github.com/cloudposse/terraform-aws-components/pull/322
Does that possibly help you, if it were used more widely throughout the components?
what: Proposes a new pattern for abstractly defining the aws provider via a common providers.tf. why: This enables usage of the following types of AWS auth for components: environment credentials …
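The general shape of an abstract provider that supports either a named profile or an assumed role looks something like the sketch below. To be clear, the variable names here are illustrative assumptions, not necessarily what PR 322 actually uses, and the null-handling behavior depends on the AWS provider version:

```hcl
# Illustrative sketch only; variable names are assumptions, not
# necessarily what the PR uses.
variable "aws_profile" {
  type        = string
  default     = null
  description = "Named AWS profile to use (e.g. a *-terraform profile); null falls back to env credentials"
}

variable "aws_assume_role_arn" {
  type        = string
  default     = null
  description = "IAM role ARN to assume instead of using a profile"
}

provider "aws" {
  region  = var.region
  profile = var.aws_profile

  assume_role {
    # Assumption: in recent AWS provider versions, a null role_arn
    # means no role is assumed and the base credentials are used.
    role_arn = var.aws_assume_role_arn
  }
}
```

The idea is that the same providers.tf works for environment credentials, a named profile, or an assumed role, depending on which variables the stack config sets.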
Thanks @Matt Gowie. I will take a look at this.
Are you just using an IAM user that is able to assume the *-terraform role in the identity account?
@Matt Gowie FYI, I was able to finally get this to work (deployed DNS primary stack in the
dns account using the proper
terraform role). It was death by a thousand papercuts, but I think I have it all down now.
And by papercuts, I mean my mistakes
This was done via a login for a user, not a system account, provisioned via GSuite and using SAML SSO. I did use the
terraform_profile_name too, not the dynamic assume role. It fell into place once I was able to figure out how to get the
iam-delegated-roles to deploy in root
Ah sorry I didn’t see your other reply Joe. Glad you got it figured. Sorry I can’t be of much help — I’m not utilizing this 100% yet, just piecemeal so I don’t use the IAM roles stuff unfortunately.
No worries! I realize I am working on it probably a little too early anyway.
Is tfstate now being saved only in the root account even in a multi-account and multi-region setup? That is what I am understanding based on
Yeah, that’s the pattern going forward. One tfstate bucket with each component workspace separated using ‘workspace_key_prefix’.
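That pattern in backend config looks roughly like this. The bucket, table, and role names are placeholders; the key piece is `workspace_key_prefix`, which keeps each component's workspaces separated within the one bucket:

```hcl
# Sketch of the single-bucket state pattern; names are placeholders.
terraform {
  backend "s3" {
    bucket               = "example-root-tfstate"   # one bucket, in the root account
    key                  = "terraform.tfstate"
    workspace_key_prefix = "dns-primary"            # per-component prefix
    region               = "us-east-1"
    dynamodb_table       = "example-root-tfstate-lock"
  }
}
```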
Thank you. Follow-up question: would that allow us to run terraform to do region failover if S3 (and DDB) in the region with the bucket are down?
Good question… I’m not 100% sure. S3 has 9s out the wazoo, so I’m not sure that is much of a concern, but it’s likely worth looking into. If DDB is down then you can skip the lock, I believe.
You see any issues maintaining a replica s3 and ddb in another region? If a failure does occur, we could set a variable to switch to the failover state bucket and ddb.
Yea, so what we did was add support to our state bucket module to replicate the state to a backup region
In the failover scenario, we would flip to using the backup bucket
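One way to flip over is to re-init against the replica with backend overrides, and to skip locking if only DynamoDB is down. Bucket and region names here are placeholders:

```
# Re-point the S3 backend at the replica bucket in the failover region.
terraform init -reconfigure \
  -backend-config="bucket=example-root-tfstate-replica" \
  -backend-config="region=us-west-2"

# If only DynamoDB is down, locking can be skipped per-run:
terraform plan -lock=false
```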
Sweet. That’s what I figure.
Is it possible to have multiple components based on the same component? Let’s say I wanted multiple vpcs defined in the same stack? I don’t want to create a unique root module directory for each vpc that duplicates the code.
I think this should be pretty straightforward once the
vendir PR is completed. This should allow you to copy the upstream component into multiple locations locally, with different configs.
Issue: #37. Created a function that will copy existing files listed in the destination directory to the staging area prior to deletion, preserving the content. I felt that pulling the content into th…
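For anyone unfamiliar, vendir pulls upstream sources into local directories from a declarative config; the same upstream component can be vendored into multiple local paths. A rough sketch (the paths and ref here are assumptions for illustration):

```yaml
# Hypothetical vendir.yml sketch; local paths and ref are assumptions.
apiVersion: vendir.k14s.io/v1alpha1
kind: Config
directories:
  - path: components/terraform/vpc
    contents:
      - path: .
        git:
          url: https://github.com/cloudposse/terraform-aws-components
          ref: master
        includePaths:
          - modules/vpc/**/*
```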
@Brian Ojeda check out https://docs.cloudposse.com/reference/stacks/#component-inheritance
Oh slick. So in that example, vendir would download to
Vendir is earlier in the lifecycle in this case. Component inheritance comes into play when you have a single component folder and you want to utilize that component multiple times in a particular stack.
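Per the linked docs, that looks roughly like the stack config below: two stack-level component instances both pointing at a single terraform component folder via the `component` attribute. Names and CIDRs here are illustrative:

```yaml
# Sketch of component inheritance; instance names and vars are
# illustrative.
components:
  terraform:
    vpc-main:
      component: vpc        # both instances use components/terraform/vpc
      vars:
        cidr_block: 10.0.0.0/16
    vpc-data:
      component: vpc
      vars:
        cidr_block: 10.1.0.0/16
```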
Right - I think that is what I was envisioning. My prior suggestion was that you would use vendir to download to two locations,
components/terraform/export-data in this case. But this is better!
Yeah, this is one of the more advanced bits that’s nicely hidden unless you find this documentation.
oh that’s rad! I didn’t see that someone opened that PR.
Hi @Erik Osterman (Cloud Posse), yeah, that was someone on my team. We’re waiting for a review on one more point, and also for someone to sign off on the contributor agreement. But I’ve been using it locally.
ohhhhhhhh rad! thanks Joe
cc: @Andriy Knysh (Cloud Posse)
yep I saw the PR, thanks @Joe Hosteny