#terraform (2021-12)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-12-01
hi, I’m trying to use terraform-aws-ec2-client-vpn, but getting Error creating Client VPN endpoint: InvalidParameterValue: Certificate arn::see_no_evil: does not have a domain
which has confused me slightly, as the VPN module’s parameters don’t seem to accept a domain for the cert or to allow one to be given by ARN
v1.1.0-rc1 1.1.0-rc1 (Unreleased) UPGRADE NOTES:
Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported.
The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed…
2021-12-02
hey guys and gals, has anyone done the work to allow for awsfirelens
in fargate https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/variables.tf#L522
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - terraform-aws-ecs-web-app/variables.tf at master · cloudposse/terraf…
Yep at the moment you can only do it if you were to use each of those modules directly
Feel free to submit a PR to override the log configuration with a custom one
tell me what you think of this approach @RB
log_configuration = var.cloudwatch_log_group_enabled ? {
  logDriver = var.log_driver
  options = {
    "awslogs-region"        = coalesce(var.aws_logs_region, data.aws_region.current.name)
    "awslogs-group"         = join("", aws_cloudwatch_log_group.app.*.name)
    "awslogs-stream-prefix" = var.aws_logs_prefix == "" ? module.this.name : var.aws_logs_prefix
  }
  secretOptions = null
} : {
  logDriver = var.log_driver
  options = {
    "papertrail_port" = "40723"
    "papertrail_host" = "logsn.papertrailapp.com"
    "@type"           = "papertrail"
  }
  secretOptions = null
}
except for options i’ll just add a new var called log_options
then use the defaults as the cwl ones
it’s super messy
why not just override all of log_configuration
was trying to minimize the amount of changes
but thinking about it more i might just create the container def outside the web-app module
and leave it be
ya that would work too
sorta wish i would have thought of that earlier
lol me too. i forgot we can override the entire container
while i have you, any thoughts on removing the github provider from the web app module?
its in the codepipeline module
breaks my for_each loop, and from what i was reading, it’s moving away from best practice
seems to me the way the options are configured wouldn’t allow it https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/main.tf#L95
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more. - terraform-aws-ecs-web-app/main.tf at master · cloudposse/terraform-a…
2021-12-03
hi folks, not sure if this is the right channel to ask:
how do you manage the K8s objects?
- tf (which will deploy eks ) + helm + charts ?
- tf (which will deploy eks) + argocd + helm charts ?
- tf + kustomize (no helm)?
• others? anyone have any feedback on choosing helm vs kustomize?
we use the helm terraform provider
Create helm release and common aws resources like an eks iam role - GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam role
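For reference, a minimal helm_release sketch with the helm terraform provider (the chart, repo, namespace, and value shown are just illustrative):
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true

  # override individual chart values as needed
  set {
    name  = "server.service.type"
    value = "ClusterIP"
  }
}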
I would say it depends
We do use this terraform module for backing services including deploying argocd
And argocd for custom apps
right, i see. The challenge i have is that i have a mix of different services / infra
• infra: AWS VPC/TGW/RDS/EC2/ECS/EKS spread across N accounts
• apps: containers running in ECS/EKS, some 3rd-party helm apps, lambdas, API GW, step functions, etc. Ideally i’d like to stick to one CI and one CD to drive it all via TF. I’ve tried, and learnt through pain, that the simple GHA + TF (or TF apply after merge to the main branch) approach doesn’t work (even though the majority of the web raves about it, except folks in this place).
will read more on Argo to see if it can drive non-K8s apps using TF.
I had used Argo + kustomize, and they work perfectly.
Going back on when to choose helm or kustomize question. I’m on a project with a couple of applications (nothing complex), and they made charts for them.
Is it a good idea to use Argo to deploy and update the charts? Or would it be better to go back to kustomize and a central repo?
Thanks
Please upvote if you want a progress bar in terraform https://github.com/hashicorp/terraform/issues/28512
Current Terraform Version v0.15 Use-cases Terraform command like plan/apply/destroy can tell users an estimation of the time needed to finish the operation, so that users can have a sense on how lo…
2021-12-04
Hey guys! I hope you are all doing well! I would like to know what different tasks the terraform-provider-awsutils provider performs. Would someone be able to answer that?
Thanks!
Anyone tried to tf apply an org cloudtrail and got denied because the “cloudtrail service” hasn’t been enabled for the org? Any hack to automate this step and not have to resort to manual clicking in the console?
• seems like it’s not recommended to enable cloudtrail as part of the org…
• but then i don’t see any “toggle” under cloudtrail besides literally creating a new trail and checking “enable for all accounts in my organization”
Reading the messages, it’s really confusing.
isn’t that the case for everything in aws…?
seems like this is the trick with cli
aws organizations enable-aws-service-access --service-principal cloudtrail.amazonaws.com
Create, update, and manage a trail for an organization with the AWS Command Line Interface.
Issue opportunity!
aws_organizations_enable_aws_service_access
hmm come to think of it the cli is still initiating from “organizations” perspective (i.e. aws organizations….
) and not from cloudtrail. So i guess it’s the same button as the one in this screenshot i attached
aws organizations enable-aws-service-access --service-principal cloudtrail.amazonaws.com
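For what it’s worth, if you manage the organization itself in Terraform, trusted access can also be enabled declaratively via the aws_organizations_organization resource; a minimal sketch (the feature_set is an assumption about your org):
resource "aws_organizations_organization" "this" {
  feature_set = "ALL"

  # grants the CloudTrail service trusted access to the organization,
  # equivalent to the enable-aws-service-access CLI call above
  aws_service_access_principals = [
    "cloudtrail.amazonaws.com",
  ]
}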
2021-12-05
2021-12-06
Is anyone familiar with a method to get this dynamic statement
dynamic "statement" {
for_each = var.principals_lambda
content {
effect = "Allow"
actions = [
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
condition {
test = "StringLike"
values = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
variable = "aws:sourceArn"
}
}
}
to return the following:
+ {
+ Action = [
+ "ecr:GetDownloadUrlForLayer",
+ "ecr:BatchGetImage",
]
+ Condition = {
+ StringLike = {
+ aws:sourceArn = [
+ "arn:aws:lambda:*:222222222222:function:*",
+ "arn:aws:lambda:*:333333333333:function:*"
]
}
}
+ Effect = "Allow"
+ Principal = {
+ Service = "lambda.amazonaws.com"
}
+ Sid = "LambdaECRImageCrossAccountRetrievalPolicy"
},
rather than
+ {
+ Action = [
+ "ecr:GetDownloadUrlForLayer",
+ "ecr:BatchGetImage",
]
+ Condition = {
+ StringLike = {
+ aws:sourceArn = [
+ "arn:aws:lambda:*:222222222222:function:*",
+ "arn:aws:lambda:*:333333333333:function:*"
]
}
}
+ Effect = "Allow"
+ Principal = {
+ Service = "lambda.amazonaws.com"
}
+ Sid = ""
},
+ {
+ Action = [
+ "ecr:GetDownloadUrlForLayer",
+ "ecr:BatchGetImage",
]
+ Condition = {
+ StringLike = {
+ aws:sourceArn = [
+ "arn:aws:lambda:*:222222222222:function:*",
+ "arn:aws:lambda:*:333333333333:function:*"
]
}
}
+ Effect = "Allow"
+ Principal = {
+ Service = "lambda.amazonaws.com"
}
+ Sid = ""
},
why use the dynamic block at all?
you are not accessing the dynamic values, so it seems unnecessary
e.g. statement.key
and statement.value
what With the introduction of cross-account ECR for lambda functions, I have put together the necessary code to allow for this functionality why Cross-account ECR is a feature many would use as …
this should help provide the reasoning for the dynamic statement
i don’t actually see any reasoning described in that pr. just code
so we might not necessarily want to add the lambda statement in, i.e. not allowing lambda access to the ECR repo
but for other repositories that statement might be reversed and we do want lambda to have access to it
using a dynamic block will allow us to control whether or not the statement should be added
ahh, ok. so you only want to add the statement if the user specifies a value for var.principals_lambda
, but otherwise do not add it?
exactly
principals_lambda is a list of arns
well account ids
but you don’t care what the value is, and don’t need to create multiple statements?
yes, looking to avoid multiple statements as it’ll clutter the policy needlessly and has the potential to grow out of control and breach the policy length limit set by AWS
try this:
dynamic "statement" {
for_each = len(var.principals_lambda) > 0 ? [1] : []
content {
effect = "Allow"
actions = [
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
condition {
test = "StringLike"
values = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
variable = "aws:sourceArn"
}
}
}
jesus if this works… i didn’t think about length
give me 2 mins
sorry for the run around. just needed to confirm that the shape of var.principals_lambda was not actually pertinent to the number of statements, and that it really was just a 1 or 0 requirement
it’s all good
but it looks to be working
just checking with principals_lambda set to null
yes
this lgtm
dynamic "statement" {
for_each = length(var.principals_lambda) > 0 ? [1] : []
content {
sid = "LambdaECRImageCrossAccountRetrievalPolicy"
effect = "Allow"
actions = [
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
condition {
test = "StringLike"
values = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
variable = "aws:sourceArn"
}
}
}
boom
if the user actually passes var.principals_lambda = null
that will bomb, but if the user lets it pick up a default value of []
or passes []
explicitly then it works
yep
default is set to []
will update MR
what’s your github username
don’t want to steal your thunder
i’ve seen some people test for null
to handle that, but personally i like to let it explode, or use variable validation to improve the error message
@lorengordon
for_each is acting as intended, but we only want it to loop once
did you try updating your foreach to simply run on a match condition instead of iterating?
for_each = var.principals_lambda != {} ? ["statement"] : []
basically, yes, https://sweetops.slack.com/archives/CB6GHNLG0/p1638807818349600?thread_ts=1638806026.343700&cid=CB6GHNLG0
dynamic "statement" {
for_each = length(var.principals_lambda) > 0 ? [1] : []
content {
sid = "LambdaECRImageCrossAccountRetrievalPolicy"
effect = "Allow"
actions = [
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer"
]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
condition {
test = "StringLike"
values = formatlist("arn:${data.aws_partition.current.partition}:lambda:*:%s:function:*", var.principals_lambda)
variable = "aws:sourceArn"
}
}
}
Fellas, I have a module that basically creates an S3 bucket (using the typical terraform resource "aws_s3_bucket") with terraform code that includes properties like “lifecycle_rule, server_side_encryption_configuration & logging” added in my main.tf file, and we use GitLab for the CI setup.
However, I am trying to set up my GitLab merge request process so that it only goes through when the above properties are present in the main.tf file. I am not sure if this can be controlled at terraform plan level or even before.
Has anyone here in the community set up this type of process before?
2021-12-07
If you have time and don’t mind helping promote, a :thumbsup: on a PR for a new terraform aws resource rds_cluster_activity_stream that one of my colleagues is trying to get merged would be greatly appreciated.
Thank you in advance. https://github.com/hashicorp/terraform-provider-aws/pull/22097
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
It is doubtful it will get acted on anytime soon with 491 open PRs, but any thumbs up can’t hurt.
Hi all. I’ve got this code in my main.tf:
module "user" {
source = "cloudposse/iam-user/aws"
version = "0.8.1"
name = "adam"
user_name = "[email protected]"
pgp_key = "keybase:awmckinley"
groups = []
}
Getting this error message:
Do you want to perform these actions in workspace "gbl-root"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.user.aws_iam_user.default[0]: Creating...
Error: Error creating IAM User [email protected]: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: e55b8264-73ea-47d8-865b-712c193054fb
Any suggestions?
AWS is still pooping all over the bed. I wouldn’t really attempt to work on AWS until all issues are resolved.
Got it.
Didn’t see the news about the outage.
Thanks!
Oh yea. It’s been a dumpster fire all morning.
Looking at the Personal Health Dashboard shows issues only in us-east-1
Yes and also very much no.
Various services have dependencies on services within us-east-1, such as STS, iam, auth, etc.
us-east-1 is the center of everything pretty much
what he said
S3 auth, SSL certs, Cloudfront TLS and a bunch of other stuff
I’m in us-east-2 but I put a freeze on all deployments because there’s a chance that a deployment to prod may not come online due to some unseen dependencies within AWS.
For instance, the aws console was down for everyone in every region today for the better part of the day. 5 hours at the very least.
Dang. Well, thanks for educating me about the hidden dependencies on us-east-1. Didn’t know.
my favorite is that within the console (if you had a session from before the outage began), it started reporting that the us-east-1 region was “invalid”
they fixed that maybe 30 minutes ago. i’m really glad that they found it
We always joke about regional failure and what to do, and more often than not, people are like, well, if an entire region goes out, we’re probably at war.
And then AWS comes to the party and says ha, heres your regional outage ya plebs.
lol
For instance, Amazon warehouses and deliveries that were due out by 8am PST today have been grounded since 8am.
https://www.cnbc.com/2021/12/07/amazon-web-services-outage-causes-issues-at-disney-netflix-coinbase.html
The outage also brought down critical tools used inside Amazon. Warehouse and delivery workers, along with drivers for Amazon’s Flex service, reported on Reddit that they couldn’t access the Flex app or the AtoZ app, making it impossible to scan packages or access delivery routes.
I was going to share some tweets about it but they appear to all have been deleted.
At least the ones I had from earlier.
next time someone mentions how “simple” multi-region redundancy is, or how it’s “critical” for your business to grow to the next level, point out AWS has probably 0.1% annual downtime because of us-east-1, and they are doing fine
To be fair, as long as you don’t consider the data requirement, it is pretty simple.
what do you mean by the data requirement?
Where does the data live and how are you going to replicate it across regions.
oh yes! It must be a lot easier to run a service without state
Even then, you can fix that. A gluster cluster with cross region failover can be configured relatively painlessly. It could be pretty expensive though depending on requirements.
I completely agree, if by “relatively painless” you mean “relative to medieval torture” or perhaps “relative to dental work without anaesthetic”
actually, I think it might be in the same “pain ballpark” as the latter
Nah. Not even a day’s worth of work, and most of it can be automated via ansible or the like.
I’ve done a number of cross-region redundancy deployments. The largest hurdle has always been cost. Hot instances vs warm vs cold. Smaller footprint with the ability to scale at the touch of a button.
I’d claim that if cost and service bias weren’t factors, setting up redundancy would be easy. We don’t live in such a world though.
Even if you do consider it, RDS has cross region replication.
its not that hard … but its definitely a time/effort/cost triangle.
Just do not use Databases
it is 2021!!!!
2021-12-08
Hi all. I’m still getting the same problem as last night. So no one has to scroll: I’ve got this code in my main.tf:
module "user" {
source = "cloudposse/iam-user/aws"
version = "0.8.1"
name = "adam"
user_name = "[email protected]"
pgp_key = "keybase:awmckinley"
groups = []
}
Getting this error message:
Do you want to perform these actions in workspace "gbl-root"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.user.aws_iam_user.default[0]: Creating...
Error: Error creating IAM User [email protected]: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: e55b8264-73ea-47d8-865b-712c193054fb
I’m using the root account with access ID and secret key. I know it’s not best practice, but is there any reason this should fail besides the AWS outage?
looks like an issue either with authentication or with aws because the upstream module resource for the iam user is very very simple
Terraform Module to provision a basic IAM user suitable for humans. - terraform-aws-iam-user/main.tf at 5d953db7244b2cf81bb6f29813a03ccbe76b8684 · cloudposse/terraform-aws-iam-user
you can try copying and pasting that resource into its own main.tf and applying it; i’m sure you’d be able to reproduce the issue
Other modules seem to work. For example:
module "tfstate_backend" {
source = "cloudposse/tfstate-backend/aws"
version = "0.33.0"
force_destroy = var.force_destroy
prevent_unencrypted_uploads = var.prevent_unencrypted_uploads
enable_server_side_encryption = var.enable_server_side_encryption
context = module.this.context
}
module "s3_bucket" {
source = "cloudposse/s3-bucket/aws"
version = "0.44.1"
acl = "private"
enabled = true
user_enabled = false
versioning_enabled = false
allowed_bucket_actions = ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"]
name = "bar432"
stage = "root"
namespace = "foo253"
}
Ran fine just now with this result:
Do you want to perform these actions in workspace "uw2-root"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.s3_bucket.aws_s3_bucket.default[0]: Creating...
module.s3_bucket.aws_s3_bucket.default[0]: Creation complete after 5s [id=foo-root-bar]
module.s3_bucket.data.aws_iam_policy_document.bucket_policy[0]: Reading...
module.s3_bucket.data.aws_iam_policy_document.bucket_policy[0]: Read complete after 0s [id=561002259]
module.s3_bucket.aws_s3_bucket_public_access_block.default[0]: Creating...
module.s3_bucket.data.aws_iam_policy_document.aggregated_policy[0]: Reading...
module.s3_bucket.data.aws_iam_policy_document.aggregated_policy[0]: Read complete after 0s [id=561002259]
module.s3_bucket.aws_s3_bucket_public_access_block.default[0]: Creation complete after 1s [id=foo-root-bar]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Creating...
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Still creating... [10s elapsed]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Still creating... [20s elapsed]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Still creating... [30s elapsed]
module.s3_bucket.time_sleep.wait_for_aws_s3_bucket_settings[0]: Creation complete after 30s [id=2021-12-08T15:55:08Z]
module.s3_bucket.aws_s3_bucket_ownership_controls.default[0]: Creating...
module.s3_bucket.aws_s3_bucket_ownership_controls.default[0]: Creation complete after 1s [id=foo-root-bar]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
Outputs:
tfstate_backend_dynamodb_table_arn = "arn:aws:dynamodb:us-west-2:{redacted}:table/{redacted}-uw2-root-tfstate-lock"
tfstate_backend_dynamodb_table_id = "{redacted}-uw2-root-tfstate-lock"
tfstate_backend_dynamodb_table_name = "{redacted}-uw2-root-tfstate-lock"
tfstate_backend_s3_bucket_arn = "arn:aws:s3:::{redacted}-uw2-root-tfstate"
tfstate_backend_s3_bucket_domain_name = "{redacted}-uw2-root-tfstate.s3.amazonaws.com"
tfstate_backend_s3_bucket_id = "{redacted}-uw2-root-tfstate"
Is there anything I can check in my AWS settings to see why IAM user creation fails?
not all modules are the same tho
you can try to create an iam user from the cli with similar inputs and see if you get a failure
aws iam create-user --user-name [email protected]
resulted in
An error occurred (InvalidClientTokenId) when calling the CreateUser operation: The security token included in the request is invalid
I’m using an access key for the root user on an almost brand-new account (created Monday). Are there any possibilities besides AWS outage-related issues?
im not sure tbh. their status page shows all green checks
what does aws sts get-caller-identity
return for you ?
√ . [default] app ⨠ aws sts get-caller-identity
{
"UserId": "158459863977",
"Account": "158459863977",
"Arn": "arn:aws:iam::158459863977:root"
}
wow that looks right
must be aws api issues
I have a situation where I need to manually set up the .terraform directory for locally running init and validate. I’d like init to not change (at least) the two modules that I set up. Is there a way to do that?
if you run terraform init again, it shouldn’t overwrite it unless you run terraform init -upgrade
(i think)
Part of the issue is that I’m not able to run init the first time, for a few of the modules, which are pulled from repos where the key isn’t available locally, and mangling .terraform/modules.json to match. But, for some reason, even though I think this tactic worked once before, this time, it erased the repo I had set up when init was run.
Can’t edit the above. I’m going to try again, that didn’t quite come out right.
.. isn’t available locally. So, instead, I tried mangling ..
so the root issue is that terraform init doesn’t work without updating the module json file
doesn’t make sense but i can’t say more without looking at the code
I can’t provide code in this case. How does it work, as far as you know?
Maybe I missed a small detail.
¯\_(ツ)_/¯
Fair enough.
i can’t really say tbh, sorry. the modules can be pulled directly from the registry if they are posted there
if they are private modules then they need to be sourced using git ssh
either way, tf init should work out of the box
It’s a corporate thing — policies prevent direct access to repo used in the pipeline, which includes a way to read from the repo that init attempts to pull. Can get around it with *_override.tf files, but that doesn’t work for child modules.
** from the repo that init attempts to pull locally
maybe instead of using a locally-inaccessible remote path for the module, use a directory path. Then for local dev, you can stub the module, and in your real pipeline, you can manually install the module to the same path
eg from
module "foo" {
  source = "github.com/privateorg/foo"
}
to
module "foo" {
  source = "./foo-module"
}
I think the main issue is that it’s the child modules that are causing the problems. That, and the (human) policies at the company where I’m working mean the child modules aren’t readable locally. I don’t think I can see the above approach working for child modules without changing the calling modules in the pipeline. But it’s been a long day. I’ll revisit this tomorrow to see if I missed something. Thanks!
Yeah, you are right, you’d have to change the modules including those child modules
v1.1.0 1.1.0 (December 08, 2021) Terraform v1.1.0 is a new minor release, containing some new features and some bug fixes whose scope was too large for inclusion in a patch release. NEW FEATURES:
moved blocks for refactoring within modules: Module authors can now record in module source code whenever they’ve changed the address of a resource or resource instance, and then during planning Terraform will automatically migrate existing objects in the state to new addresses. This therefore avoids the…
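A minimal sketch of the new moved block, assuming a resource was renamed within a module (the addresses here are hypothetical):
moved {
  # tells Terraform the object at the old address should be migrated
  # in state to the new address instead of being destroyed/recreated
  from = aws_instance.app
  to   = aws_instance.web
}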
hi folks, in case someone has real experience with TF and CDK, would you mind sharing your thoughts on why you’d embrace one or the other?
Bit of context:
Am familiar with CFN and TF but not much with CDK (even though i know the closer you get to L1 constructs, the more you’ll hit the same issues as with CFN yaml), and i’d like to try it out and not judge based on:
• CDK is vendor lock-in -> not a concern, as it’s an AWS-only shop for now and forever
• marketing or personal preference in picking one or the other. In essence, i’m thinking for a CDK AWS shop: what would be required to move away from it to TF, and what business value would that bring (not just for the sake of doing it because it’s… cool)?
These don’t need to be mutually exclusive.
Here’s how we describe it in our “4 layers of infrastructure”
To do everything we do at cloudposse, we have no choice but to use something like Terraform or Pulumi, since we’re provisioning way more than what’s in CDK. In our model, we acknowledge that for different purposes better tools exist. Primarily, this affects layer-4, which is application deployments where developers are using other tools like the serverless framework, or CDK. That’s fine. Everything can co-exist.
My belief is you need a strong foundation, the sort of which we deliver for our customers. We use terraform for that, since we have 180+ terraform modules for that today. But just because we use terraform for the foundation and platform, doesn’t mean it’s required all the way to the top of the stack.
i see, never thought of having CDK at the top layer for applications; i’ve always seen it in very close proximity to TF, overlapping it. The separation of duties between tools is hard work, i’d say…
I don’t see that necessarily as the case. In this model, any parameters you need to share between them, store in SSM. Anything you deploy via CDK/CFT would have a much, much smaller scope than what’s been deployed with terraform. It’s building on top of, rather than injecting in the middle. Deploying lambdas with terraform has historically sucked. On the other hand, serverless and SAM were built to make this very easy. Typically you’ll just need to know things like account IDs, VPC IDs, and the rest should plug and play.
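A minimal sketch of that SSM handoff on the Terraform side, assuming a VPC ID is the value to share (the parameter path and module output are hypothetical):
resource "aws_ssm_parameter" "vpc_id" {
  name  = "/platform/vpc_id" # hypothetical path; CDK/serverless reads this at deploy time
  type  = "String"
  value = module.vpc.vpc_id
}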
I would say terraform is a great language for platform engineering teams. It definitely works for everyone, but once you get into the realm of software development, opinions on languages run strong, and I like to stay out of it. For example, deploying containers on kubernetes, we don’t dictate that every image must only deploy rust apps on alpine. Instead, we provide a platform that enables anyone to ship a container, regardless of what’s inside.
Much the same, for things outside of kubernetes & containers, there should be similar analogs to other technologies. That’s why I like to say we provision a solid foundation with terraform, for everyone else to build on, however they need to build it.
Wow, you’ve touched so many points, thanks for opening yourself on all this topics, i can see your philosophy here.
Where do stateful services like database & s3 fit into your infrastructure layers? Are they just part of “Backing services”?
It depends, a database is frequently shared by more than one service, therefore it’s a platform service and deployed in a separate phase
S3 buckets can be deployed with the service, but then there’s a theoretical question: if the service is deleted, is the bucket deleted? if not, then it’s a platform-level service, since the lifecycle is different
2021-12-09
Hi, how can I output ip addresses of the nlb created with terraform?
look at the docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#import. You have all the attribute and argument references there (and see the sketch after this list for reading the actual assigned IPs):
• ipv6 : https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#ipv6_address
• ipv4 https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#private_ipv4_address
• allocation id https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb#allocation_id (so you can load it with the data of eip)
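Since the aws_lb resource doesn’t export the assigned IPs directly, one common workaround is to look up the NLB’s network interfaces; a sketch, assuming the load balancer is declared as aws_lb.nlb:
data "aws_network_interfaces" "nlb" {
  filter {
    name = "description"
    # NLB ENIs carry a description of the form "ELB net/<name>/<id>"
    values = ["ELB ${aws_lb.nlb.arn_suffix}"]
  }
}

data "aws_network_interface" "nlb" {
  for_each = toset(data.aws_network_interfaces.nlb.ids)
  id       = each.value
}

output "nlb_private_ips" {
  value = [for eni in data.aws_network_interface.nlb : eni.private_ip]
}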
Question about the CloudWatch Log Groups that get created when RDS log exports are enabled (i.e. ["alert", "audit", "listener", "trace"]): do those log groups get tagged by terraform? I’m finding that they’re not. Is there a param I’m supposed to set?
resource "aws_db_instance" "default" {
  enabled_cloudwatch_logs_exports = ["alert", "audit", "listener", "trace"]
  tags                            = var.tags
  # ...
}
https://www.cnbc.com/amp/2021/12/09/cloud-software-maker-hashicorp-hcp-starts-trading-on-nasdaq.html
Almost all of the company’s revenue comes from subscriptions, but just 7% comes from cloud-based services, although that’s the fast-growing part of the company.
OMG, it must be a very good time for all the Hashi employees who joined a while ago… after hard work and high risks… a well-deserved reward
instant billionaire. I hope they sell a lot of shares before it slumps. Can’t see the value staying so high
Yeah 14 billion is a lot. I wonder what ARR they currently have?
2021-12-10
This looks like a breaking change was introduced where replica DNS for RDS clusters is only created if the configuration is serverless? https://github.com/cloudposse/terraform-aws-rds-cluster/blob/master/main.tf#L331 would this have been intentional for any reason? trying to pull my configuration forward; it’s not serverless, and this will remove the DNS record.
what Add performance_insights_retention_period Add ca_cert_identifier Add preferred_maintenance_window to instances Add timeout to instances why Performance insights retention Add a ca cert iden…
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - terraform-aws-rds-cluster/main.tf at master · cloudposse/terraform-aws-rds-cluster
nice catch
created quick pr here https://github.com/cloudposse/terraform-aws-rds-cluster/pull/128
what Restore original logic why Previous logic was to create the record when module was not serverless references Previous PR #124
the orig logic was to create the dns record only if the engine is not serverless
sweet!
@Jamie K please use the 0.49.2 release: https://github.com/cloudposse/terraform-aws-rds-cluster/releases
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres - cloudposse/terraform-aws-rds-cluster
2021-12-12
Hi! I’m looking for someone who can help me with small (paid) ad-hoc projects involving terraform and kubernetes. I’m a devops beginner launching a new product. The terraform/kubernetes side of it is really small and simple, but occasionally challenges come up that I can’t solve, (or which take much too long for me to learn how to solve).
For example, right now, I have kubernetes ingress working fine with the AWS Load Balancer Controller. (AWS Load balancers are created to serve the k8 ingresses). But I’m having trouble installing certmanager and solving letsencrypt http challenges. The first project would be to get the tls features working.
Onboarding is really fast. It’s all on github with a remote backend, and I’ve set up a docker container command-line tools environment that has everything you need; onboarding takes just a couple of minutes.
I’d really appreciate (and enjoy) having a friend/consultant to help out when it’s too hard for me. Thank you in advance for reaching out.
I wonder if you’d find it easier to use AWS ACM certificates, which you can very easily create programmatically with Terraform
Any way to do that from within kube?
@Alex Jurkiewicz, that’s a very nice suggestion. I believe ACM can also update certificates automatically? How would one then use the certificate in a kubernetes ingress?
@steenhoven, yes, one can set it up using yaml and kubectl. But then history/configuration is not clear to others coming later to the project, hence my desire to use terraform for it.
(Mostly I’m interested in finding someone willing to help out as a paid consultant).
Right. You say its possible to create an ACM cert from a kube manifest?
@Benjamin Boyle you can create ACM in terraform and set ACM certificate ARN in ingress annotations.
alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:xyz:certificate/bfbfa4ab-6b51-4575-92f1-56e2a31f0fbd
full example
annotations:
  kubernetes.io/ingress.class: alb
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/tags: app=platform
  alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-west-1:xyz:certificate/bfbfa4ab-6b51-4575-92f1-56e2c21f0fbd
  alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}, {"HTTP": 8080}, {"HTTPS": 8443}]'
  alb.ingress.kubernetes.io/ip-address-type: ipv4
  alb.ingress.kubernetes.io/backend-protocol: HTTP
  alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
  alb.ingress.kubernetes.io/healthcheck-path: /
  alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
  alb.ingress.kubernetes.io/load-balancer-attributes: routing.http.drop_invalid_header_fields.enabled=true
  alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.type=lb_cookie,stickiness.lb_cookie.duration_seconds=172800,load_balancing.algorithm.type=least_outstanding_requests
spec:
  rules:
  - http:
      paths:
Terraform module to request an ACM certificate for a domain name and create a CNAME record in the DNS zone to complete certificate validation - GitHub - cloudposse/terraform-aws-acm-request-certifi…
I was just trying with this one, but it doesn’t validate: https://github.com/terraform-aws-modules/terraform-aws-acm
Terraform module which creates and validates ACM certificate - GitHub - terraform-aws-modules/terraform-aws-acm: Terraform module which creates and validates ACM certificate
@ismail yenigul does the module create the required CNAME records for the validation?
I am using another terraform module for acm, but for this module, yes it does: https://github.com/terraform-aws-modules/terraform-aws-acm/blob/master/main.tf#L34 https://github.com/terraform-aws-modules/terraform-aws-acm/blob/master/variables.tf#L7 Double-check that you provided the correct zone id.
Terraform module which creates and validates ACM certificate - terraform-aws-acm/main.tf at master · terraform-aws-modules/terraform-aws-acm
Terraform module which creates and validates ACM certificate - terraform-aws-acm/variables.tf at master · terraform-aws-modules/terraform-aws-acm
Thanks
This is the .tf code I’m having trouble with (as you can see, it’s very small) https://github.com/FastFinTech/FFT.Signals.GitOps/blob/main/ingress.tf
Terraform definition for the FFT.Signals infrastructure - FFT.Signals.GitOps/ingress.tf at main · FastFinTech/FFT.Signals.GitOps
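For anyone following along, a minimal sketch of the DNS-validated ACM pattern those modules implement (the domain is a placeholder, and the zone is assumed to be passed in):
variable "zone_id" {
  type = string
}

resource "aws_acm_certificate" "this" {
  domain_name       = "example.com"
  validation_method = "DNS"
}

# create the validation CNAMEs in the hosted zone
resource "aws_route53_record" "validation" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      type   = dvo.resource_record_type
      record = dvo.resource_record_value
    }
  }

  zone_id = var.zone_id
  name    = each.value.name
  type    = each.value.type
  ttl     = 60
  records = [each.value.record]
}

# waits until ACM sees the records and issues the certificate
resource "aws_acm_certificate_validation" "this" {
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for r in aws_route53_record.validation : r.fqdn]
}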
Hi, Can someone please take a look at this reddit post and help me out? https://www.reddit.com/r/Terraform/comments/rf2473/access_value_from_map_of_list/ I am trying to remove a duplicate var in the main.tf.
Hello Folks, ​ I am working on developing an S3 module based on the TF community module. As part of that, I am trying to access a value…
2021-12-13
Is it possible to create multiple Client VPNs in the same region with different VPCs using https://github.com/cloudposse/terraform-aws-ec2-client-vpn? I’m using the SSM functionality to store the cert information and it seems like there’s no way to specify new key names for the SSM keys, and collisions occur?
Contribute to cloudposse/terraform-aws-ec2-client-vpn development by creating an account on GitHub.
Have you tried changing the context inputs for each module ref ?
for example
module "ec2_client_vpn_blue" {
source = "cloudposse/ec2-client-vpn/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
name = "blue"
# ... etc
}
module "ec2_client_vpn_orange" {
source = "cloudposse/ec2-client-vpn/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"
name = "orange"
# ... etc
}
I have, in my digging, the only thing that would actually change the SSM key names is the secret_path_format in https://github.com/cloudposse/terraform-aws-ssm-tls-self-signed-cert but I don’t think I can affect that without the client vpn module supporting it directly?
This module creates a self-signed certificate and writes it alongside with its key to SSM Parameter Store (or alternatively AWS Secrets Manager). - GitHub - cloudposse/terraform-aws-ssm-tls-self-si…
This module creates a self-signed certificate and writes it alongside with its key to SSM Parameter Store (or alternatively AWS Secrets Manager). - terraform-aws-ssm-tls-self-signed-cert/ssm.tf at …
the name contains module.this.name
so if you put a diff name for each module ref of ec2 client vpn, it would feed a diff name to each ssm tls self signed cert, which would create a separate ssm resource
I thought I’d tried it, but did it again just to double check and it doesn’t appear to affect it… am I missing something? Trying both with context & name… The private key name is still /self-signed-cert-server.key
hmmm that’s very strange. it’s like the name is completely skipped from the formatting
could you create an issue in the client vpn github repo and our sme will get to it ? or if you figure it out, feel free to put in a pr :)
will do…
is this overriding whatever I pass in and ‘hardcoding’ it? https://github.com/cloudposse/terraform-aws-ec2-client-vpn/blob/master/main.tf#L24
as well as lines 58, 93, etc
That appears to have been it. Didn’t see it earlier, https://github.com/cloudposse/terraform-aws-ec2-client-vpn/pull/24
what The certificate names are all hardcoded, not allowing modification via context. why In order to have multiple Client VPNs in the same region, the keys stored in SSM need to be unique, the hard…
cc: @Leo Przybylski
What are the options for keeping secrets out of the state file?
Would anyone have a working example of an AWS IAM module that uses the resource aws_iam_instance_profile and is able to produce a password? Ideally with pgp and not Keybase
gpg --gen-key
gpg --export MyKey | base64 > pgp_key
then in TF:
resource "aws_iam_user_login_profile" "me" {
pgp_key = file("./pgp_key")
# ...
}
i’ll give that a go, tyvm!
this worked, appreciate the response a lot. Cheers!
@Andriy Knysh (Cloud Posse) is it possible to create resources without tags? ( using cloudposse module that uses context)
@jose.amengual did you try setting tags = {}
?
module.this.tags
always returns tags
I did
what if you create a label outside of the module and then pass that label’s context into the module ?
because the null label appends the tags based on namespace etc
this is my problem : https://github.com/cloudposse/terraform-aws-ec2-instance/blob/master/main.tf#L106
it is already embedded
what happens if you pass in
tags = {
  pepe = "true"
}
does it overwrite the tags or append them to each resource ?
append
well merge
have you tried playing with the other context inputs like labels_as_tags
?
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - terraform-null-label/context.tf at 488ab91e34a24a86957e397d9f7262ec5925586a · cloudposse/terraf…
I have not
maybe try setting it to ["unset"]
awesome
w00t
Hi all, does anyone know the correct process for creating a Terraform AWS keypair for a Windows instance? I have the resource/parameters correct, but consistently get an error:
error importing EC2 Key Pair (KP-production-Management-0): MissingParameter: The request must contain the parameter PublicKeyMaterial
I am just unsure of what I’m missing in regard to PublicKeyMaterial. TIA
@Lloyd O’Brien are you using https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/key_pair?
To answer your question, though, the PublicKeyMaterial is the content of the public key associated with a key pair. so you need to provide the actual contents of the file containing the public key that you want to associate with the private key.
You can paste the contents of the file directly into your terraform or you can reference the file externally.
The docs give a good example of adding the public key material to the terraform code:
resource "aws_key_pair" "deployer" {
key_name = "deployer-key"
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 [email protected]"
}
Hey @managedkaos thanks for taking a look at my post. Yes I am using that resource you linked. I think my issue is generating the key (for Windows) and having that key meet one of the 3 bullet points in the doc you linked. The example seems to be for SSH, but hard to find material on keys for Windows.
The key material will be the same. How are you generating your key?
Ideally you’d use ssh-keygen and do something like the following (note: for Windows instances AWS requires an RSA key; ED25519 keys aren’t supported for Windows password decryption):
/usr/bin/ssh-keygen -t rsa -b 4096 -C "This is a comment" -f this_is_the_key_name
the .pub
part of that output is what you provide.
For Windows servers in AWS, you provide the private key when you decrypt the admin password, so it’s not really an SSH operation, but the features of the key are used for encryption/decryption.
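A minimal sketch tying that together (the key name matches the error above; the file path is hypothetical):
resource "aws_key_pair" "management" {
  key_name = "KP-production-Management-0"
  # the PublicKeyMaterial the error complains about: the contents of the .pub file
  public_key = file("${path.module}/this_is_the_key_name.pub")
}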
Anyway, I hope you get it worked out!
2021-12-14
We really need some kind of Cloud Posse published artifact list for helping https://github.com/cloudposse/terraform-external-module-artifact with “broken defaults”… because https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder/issues/13 seems to be back, but since the module doesn’t log the actual curl request anywhere I have no log of the actually attempted url, and also no way to know what the correct one to try explicitly setting it to is… because it’s just an S3 bucket and if i don’t know the key… well then I just get 404.
Terraform module to fetch any kind of artifacts using curl (binary and text okay) - GitHub - cloudposse/terraform-external-module-artifact: Terraform module to fetch any kind of artifacts using cur…
When running the example hcl from the readme I'm getting the following error: Error: failed to execute "curl": curl: (22) The requested URL returned error: 404 on .terraform/modules/s…
I think the better way forward for us on this module is to rewrite it to use a dockerized lambda
and publish a public ECR image for it
we do have some plans for developing a module for that in the next month or so as part of another requirement.
@Leo Przybylski you are having this problem
This turns out to be a problem with the github action
it’s failing on deploy to S3. something changed with how the aws cli runs under GHA. exploring options.
This was fixed in https://github.com/cloudposse/terraform-aws-ses-lambda-forwarder/pull/35
what Setting environment variable AWS_EC2_METADATA_DISABLED: true as a solution why github actions is unable to push artifacts to s3 because of an error with the awscli. references aws/aws-cli…
for the record, the published artifact should always match the commit of the release. if there’s no artifact, there’s a problem with the pipeline.
“dynamic subgraph encountered errors: failed to execute “curl”: curl: (22) The requested URL returned error: 404” is not a lot to go on for debugging…
Any thoughts/opinions on kitchen-terraform?
2021-12-15
Hi, using the https://github.com/cloudposse/terraform-aws-sso repo and getting some strange errors when applying changes:
ConflictException: Could not delete because PermissionSet has ApplicationProfile associated with it
Has anyone ever seen those?
hmm interesting. could you create an issue with all of your inputs?
Hey it looks like permission sets were associated manually to accounts OUTSIDE of TF, which caused the issue, sorry nothing to see here
v1.1.1 1.1.1 (December 15, 2021) BUG FIXES: core: Fix crash with orphaned module instance due to changed count or for_each value (#30151) core: Fix regression where some expressions failed during validation when referencing resources expanded with count or for_each (<a href=”https://github.com/hashicorp/terraform/issues/30171“…
Fixes #30110. These commits are also on a shared working branch, which I've rebased and squashed so that we don't have a broken commit on main if this is merged. From @apparentlymart's …
Revert the evaluation change from #29862. While returning a dynamic value for all expanded resources during validation is not optimal, trying to work around this using unknown maps and lists is cau…
2021-12-16
Hi. I need to iterate through a list of objects that I’m using to build WAF ACLs. The trick here is that the order in the list absolutely matters. I’ve tried googling a number of keyword combinations of “for_each preserve order”, “for_each dynamic guarantee order”, and so on, and haven’t really found anything that can answer my question. How might I go about doing this? Any code that includes for / for_each and dynamic inside of a resource should give me what I need to get it implemented in my use-case.
What have you tried that is not preserving order the way you expect?
maps/objects are unordered by design.. you may need to use a list/tuple
I was under the impression that for_each didn’t handle lists, that toset() needed to be used.
correct, for_each
cannot.. do you have the option of using a list?
Is that not true for dynamic blocks? I wasn’t able to verify that it wasn’t true for dynamic blocks.
Option of using a list? Yes. But how to set that up?
Well, I don’t know. It needs to be within a dynamic block.
otherwise, create an intermediate map and specify an ordered key
I considered doing that. But I really don’t want to force future users to manually order the keys in a map in order to preserve the order …
yeah unfortunately not much else you can do
Hmm… in that case, is there a way to abstract that kind of ugly behavior, so that the user doesn’t need to order the list elements into a map?
any chance you can share what your object looks like?
I just found evidence that a dynamic block for_each can accept lists. It seems that I’ve confused the syntax.
Thanks!
yeah, although they look similar, dynamic block for_each and resource for_each can accept different data types
What do you need to preserve order for with WAF?
The rules are processed top down.
First one to match terminating action wins.
So the order matters.
( that’s how I understand it, anyway )
correct, a for_each expression on a dynamic block will accept a list, and if you use a list then the resulting generated blocks will maintain the order of the list
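A minimal sketch of that behavior (the resource and rules are illustrative; a security group is used only because its blocks are short). The generated ingress blocks come out in the same order as the list:
variable "vpc_id" {
  type = string
}

variable "ingress_rules" {
  # a list, not a map/set, so ordering is significant and preserved
  type = list(object({
    port        = number
    cidr_blocks = list(string)
  }))
  default = [
    { port = 443, cidr_blocks = ["0.0.0.0/0"] },
    { port = 80, cidr_blocks = ["0.0.0.0/0"] },
  ]
}

resource "aws_security_group" "example" {
  name   = "dynamic-order-example"
  vpc_id = var.vpc_id

  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}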
good to know. We use WAF but only with managed rulesets, so I’ve never noticed that
I’m implementing WAF for custom rule groups, and wanted to be able to allow for more strictly managed rule groups to be combined with custom rule groups, as needed.
Under some conditions, you can replace dynamic
with primitive types btw (link):
resource aws_foo bar {
  block {
    name = "one"
  }
  block {
    name = "two"
  }
}
equivalent to
resource aws_foo bar {
  block = [
    { name = "one" },
    { name = "two" },
  ]
}
Not sure if that still works, or if it ever worked globally or was implemented per-provider or even per-resource. I often wish we could do this when writing hairy dynamic logic that is hard to understand
i dearly miss that syntax Alex. they’ve definitely been moving away from supporting attribute assignment as an alternative syntax for blocks
their reasoning being something about making the json format a first-class citizen and distinguishing between null values vs absence, blah blah blah. still annoys me
oh yeah, what’s the issue with json representation for assignment syntax? I can’t see it immediately
it was too deep in the internals for me to really understand it all, especially not well enough to regurgitate, hence blah blah blah
For historical reasons, certain arguments within resource blocks can use either block or attribute syntax.
i will admit i haven’t done much with dynamic blocks.. i’ve gotten around all my problems by creating maps as required.. maybe i should start looking into dynamic blocks..
Or better yet, check out https://github.com/cloudposse/terraform-aws-waf
Contribute to cloudposse/terraform-aws-waf development by creating an account on GitHub.
This also supports ordered lists of rules
hey guys.. when authoring modules, do you prefer putting the complexity of your module in the frontend or the backend? e.g. assume i’m writing a module to create security groups. The expectation is that another team member can directly reference this module and pass it a YAML file with all the ingress/egress rules and a vpc_id.
should i be building my module to handle a single vpc_id and expect the user/consumer to handle all the front-end logic, i.e. create for_each loops within their reference to the module? or should i make the front end simple (where they can pass me either a single vpc_id or a list of vpc_ids) and then do my for_each loops etc within the module?
i keep switching back and forth between the 2 different ways.. part of me wants to keep the front end simple so consuming it is easy, but then i’m having to write additional logic to handle the various permutations, so i’m thinking of shifting the complexity back to the user/consumer.. any ideas? cheers guys!
we’ve gone a separate route where we use yaml to define inputs, use the tool atmos to convert the yaml inputs into a terraform varfile string/map/list/etc, then define individual arguments for a specific terraform root module, which then consumes modules like our sg module to define rules.
anyway long story short, you should build the module to take a single vpc id
here is our sg module where the vpc id can be set
https://github.com/cloudposse/terraform-aws-security-group
and heres atmos https://github.com/cloudposse/atmos
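A sketch of the consumer-side fan-out that a single-vpc_id module enables (module-level for_each requires Terraform 0.13+; the inputs shown are illustrative):
module "sg" {
  source = "cloudposse/security-group/aws"
  # version = "x.x.x"

  # one module instance per VPC, each receiving a single vpc_id
  for_each = toset(var.vpc_ids)

  vpc_id = each.value
  # ... rules and other inputs per the module's interface
}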
thanks RB. I think it’s reasonable to expect consumers of our modules to have some understanding of Terraform, so if our module is built to take a single vpc_id, the consumer of that module should align to that.
2021-12-17
Anyone here using Terraform Enterprise? We are migrating from atlantis to TFE and was wondering how people deal with the issue where you can’t do targeted applies when using VCS integration. Sometimes terraform has issues with planning and needs some help with targeted applies.
v1.1.2 1.1.2 (December 17, 2021) If you are using Terraform CLI v1.1.0 or v1.1.1, please upgrade to this new version as soon as possible. Terraform CLI v1.1.0 and v1.1.1 both have a bug where a failure to construct the apply-time graph can cause Terraform to incorrectly report success and save an empty state, effectively “forgetting” all existing infrastructure. Although configurations that already worked on previous releases should not encounter this problem, it’s possible that incorrect future…
the joy of being a public company … pressure to “show” new stuff ….
2021-12-18
2021-12-19
Good morning, how can I use terraform to change configuration items that were not created by terraform?
The best you can hope to achieve is to define the resources in terraform, then run terraform import on those resources.
Terraform is not optimized to modify resources managed outside of terraform
only after you import them https://www.terraform.io/cli/import
Terraform can import and manage existing infrastructure. This can help you transition your infrastructure to Terraform.
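A sketch of that flow with a hypothetical pre-existing bucket; you describe the object in code first, then adopt it into state from the CLI:
# main.tf: declare the existing object as if Terraform had created it
resource "aws_s3_bucket" "legacy" {
  bucket = "my-existing-bucket"
}

# then, from the CLI:
#   terraform import aws_s3_bucket.legacy my-existing-bucket
# afterwards, run terraform plan and reconcile any remaining drift in code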
2021-12-20
is there a way to pass a permission boundary to TF provider instead of to a resource?
i’m not really sure what that means exactly… can you expand on the desired outcome?
ok, never mind, this only applies to role creation
Terraform module for provisioning a general purpose EC2 host - GitHub - cloudposse/terraform-aws-ec2-instance: Terraform module for provisioning a general purpose EC2 host
yeah, it’s an argument for iam principals, so roles and users
I was having trouble creating some resources and I thought it was related to the boundary (which is very restrictive)
but it was not
it’s kinda an interesting idea to be able to apply a permissions boundary to an assume-role call though…
so i wasn’t sure where you were going…
I mean, if there were a lot of resources that needed the boundary then it would make sense to pass it as a parameter on the provider, but there aren’t
ahh, not quite what i mean… you can set the policy on an assume-role call. so regardless of what the role is, you can pass a more restrictive policy… but, what if you could pass a permissions-boundary? that way you could say, instead, basically, “this temporary credential should be constrained by the permissions in this boundary policy”….
ahhhhhh I c
interesting
but how does the boundary get applied to the role? via aws organizations?
(I have not done it myself)
the boundary right now is only an argument of the actions iam:CreateRole and iam:CreateUser. a similar but different feature in organizations is “service control policies” or SCPs
so right now, the permissions boundary is at the account level, on every role or user where you want the boundary
ahhhh I thought boundaries were made in a central location
I do not see the usefulness of them in that case
it seems to be made to avoid creating multiple policies in one account, but no matter what, you need to manage the boundary per account, so there is not much difference
yeah, both permissions boundaries and SCPs have a lot of warts when it comes to the user experience
i believe the primary use case for SCPs is to disable entire services and regions
and the primary use case for permissions boundaries is to prevent privilege escalation. you have to grant developers the ability to create roles/users (so a blanket SCP deny won’t work), but you also want to prevent them from granting more permissions than they have themselves. so you write their role such that they have to attach the boundary to any role/user they create
ahhhhh I see ok
so that they do not become admins themselves
yep exactly, here’s the setup for that, https://aws.amazon.com/premiumsupport/knowledge-center/iam-permission-boundaries/
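A sketch of the two halves of that setup (the policy ARN and trust policy are hypothetical): any role a developer creates must carry the boundary, and the developer’s own permissions only allow CreateRole when it is attached:
# a role created by a developer must carry the boundary
resource "aws_iam_role" "app" {
  name                 = "app-role"
  permissions_boundary = "arn:aws:iam::123456789012:policy/developer-boundary"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# the developers' policy gates role creation on the boundary being present
data "aws_iam_policy_document" "allow_create_role" {
  statement {
    actions   = ["iam:CreateRole"]
    resources = ["*"]
    condition {
      test     = "StringEquals"
      variable = "iam:PermissionsBoundary"
      values   = ["arn:aws:iam::123456789012:policy/developer-boundary"]
    }
  }
}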
2021-12-21
Hello guys, I am working on a Postgres Flexible Server Terraform module on Azure. I have found documentation on Terraform but I am a little bit lost. Can anybody help me with that, please? Has anybody ever worked on the same or a similar project? Thanks
hi. i’m using
https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn
and wondering about error_document
. The doc says the value used here is used for all 4xx errors.
When i set this value to a string like “error.html”, however, it doesn’t seem to do anything.
What is the expected result when using this input?
Terraform module to easily provision CloudFront CDN backed by an S3 origin - GitHub - cloudposse/terraform-aws-cloudfront-s3-cdn: Terraform module to easily provision CloudFront CDN backed by an S3…
2021-12-22
This message was deleted.
2021-12-29
has anyone used https://registry.terraform.io/providers/paultyng/sql/latest/docs/resources/migrate to initialize a DB (in RDS or Redshift) after creating it in AWS?
It seems a much nicer, state-based solution than using local-exec (which relies on the local OS to provide a db client and ssh client) or remote-exec (which requires an ec2 to ssh into, and it must have a db client like psql; not really appropriate for a bastion).
I’m guessing the provider tracks migrations via a table in the db but I have not checked the code.
I’ve not used that provider before but taking a quick look at it I can definitely ask “where have you been all my life?”
Looking at the schema example, and not having delved into the Go code yet, I’m going to assume that it’s storing the statements like anything else in state and using that to determine whether they’ve been executed or not if something changes. I could see where that might have some downfalls though, if the statement itself is modified after having been executed rather than a new migration block being created
good points
there is also the issue that typically, the DB is not reachable from the machine on which terraform runs
I would still have to create a tunnel through a bastion, but at least no host OS dependency / need to install DB client locally
like a null_resource with local exec for the ssh tunnel, then the sql migrate with a dependency on that null resource
(and would need different solution if we did not already have a bastion and did not want to create one just for this)
yeah network topology would need to be accounted for. Digging around in the code a bit and I saw:
func completeMigrationsAttribute() *tfprotov5.SchemaAttribute {
    return &tfprotov5.SchemaAttribute{
        Name:     "complete_migrations",
        Computed: true,
        Description: "The completed migrations that have been run against your database. This list is used as " +
            "storage to migrate down or as a trigger for downstream dependencies.",
        DescriptionKind: tfprotov5.StringKindMarkdown,
        ...
in the internal/provider/resource_migrate_common.go
code so it does appear it’s keeping track somewhere in the code
looks like it copies internally from the migrations struct to complete_migrations while processing, so I’m assuming it would then compare the latter against the former to decide if it should be executed again or not due to change. Whether that accounts for the up/down SQL changing or not, I’ve not determined yet
Hello, I am creating and deleting eks clusters using the https://github.com/cloudposse/terraform-aws-eks-cluster complete example and have run into this error multiple times. Is there any way to resolve it properly? My workaround has been terraform state rm module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes
.
module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]: Refreshing state... [id=kube-system/aws-auth]
╷
│ Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: connect: connection refused
│
│ with module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0],
│ on .terraform/modules/eks_cluster/auth.tf line 115, in resource "kubernetes_config_map" "aws_auth_ignore_changes":
│ 115: resource "kubernetes_config_map" "aws_auth_ignore_changes" {
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
if you get that when deleting yah that’s about the only solution that I’ve found
got it thanks
@Andriy Knysh (Cloud Posse) do we have any work arounds for this?
that is a very common error, and it could be anything related to kubeconfig
e.g. can’t connect to the cluster to load KUBECONFIG, and then the aws provider tries to connect to the local cluster
also depends on the module version
@Alec Fong did you try to use the latest example (which uses the latest module) https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete ?
the example gets executed by terratest automatically every time we open a PR, and it gets deleted after that (so it’s working, but prob does not cover all the use-cases you could encounter)
this latest PR https://github.com/cloudposse/terraform-aws-eks-cluster/pull/138 was working ok for both creating the cluster and destroying it after the test
what Update to use the Security Group module Add migration doc Update README and GitHub workflows why Standardize on using https://github.com/cloudposse/terraform-aws-security-group in all modu…
That’s pretty much how mine is setup (minus the vpc setup and adding in some security group and IAM stuff) but I’ve had the aws-auth issue on destroy very recently, on 0.44.0 of the cluster and 0.27.0 of the node-group modules
the recent versions did solve the random “can’t find the aws-auth” issue on plan/apply though
so the error you see is (almost always) b/c the provider could not access the cluster to load KUBECONFIG to get the keys and creds, and by default the provider (if you check the Go code) will try to access a (non-existing) local cluster (dial tcp [::1]:80
)
sure, just only see that on a destroy now. I don’t know if terraform is removing something in an unexpected sequence or what
what TF version are you using?
(destroy was always an issue with Tf with count
logic, it was extremely bad in TF 0.13)
1.0.11 (held off on 1.1 after encountering some very nasty bugs in it)
2021-12-30
Q: Does anybody have any examples of AWS RDS Events to Pagerduty terraform code/modules? Lot of examples I see are RDS cloudwatch alarms, but not necessarily RDS events. Am I supposed to search for “SNS to Pagerduty” instead?
right, once the event is on SNS you just need to wire that to pagerduty. Doesn’t matter that it’s an RDS or EC2 or lambda event at that point
this is how the cloudposse RDS alarms module sets it up
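A sketch of that wiring (the PagerDuty integration URL is a placeholder and the event categories are illustrative):
resource "aws_sns_topic" "rds_events" {
  name = "rds-events"
}

# RDS publishes instance events to the topic
resource "aws_db_event_subscription" "default" {
  name             = "rds-events"
  sns_topic        = aws_sns_topic.rds_events.arn
  source_type      = "db-instance"
  event_categories = ["availability", "failover", "failure"]
}

# PagerDuty's CloudWatch/SNS integration endpoint subscribes to the topic
resource "aws_sns_topic_subscription" "pagerduty" {
  topic_arn              = aws_sns_topic.rds_events.arn
  protocol               = "https"
  endpoint               = "https://events.pagerduty.com/integration/<integration-key>/enqueue"
  endpoint_auto_confirms = true
}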