#terraform (2023-02)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2023-02-01
Greetings, I am trying to use your https://github.com/cloudposse/terraform-aws-service-control-policies module with Terragrunt. My question is: how can I supply the value of service_control_policy_statements without using your cloudposse/config/yaml module? The terragrunt.hcl file looks like this.
include {
  path = find_in_parent_folders()
}

locals {
  common_vars                        = yamldecode(file(find_in_parent_folders("common_vars.yaml")))
  service_control_policy_description = "OwnerTeam Tag Require Service Control Policy"
  service_control_policy_statements  = here_should_come_the_statement
  name                               = "service-control-policy"
  target_id                          = "141757035671"
}

terraform {
  source = "../../..//modules/terraform-aws-service-control-policies"
}

inputs = {
  name                                = "${local.common_vars.namespace}-${local.common_vars.environment}-${local.name}"
  service_control_policy_description  = local.service_control_policy_description
  service_control_policy_statements   = local.service_control_policy_statements
  target_id                           = local.target_id
}
Terraform module to provision Service Control Policies (SCP) for AWS Organizations, Organizational Units, and AWS accounts
Might be more of a terragrunt question?
The module itself was designed to use a YAML catalog.
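If you don't want to pull the statements from a YAML catalog, one option is to define them directly in HCL in the Terragrunt locals. A rough sketch, assuming the module accepts a plain list of statement maps (the key names and the example statement below are illustrative; check the module's variables.tf for the exact shape):

locals {
  service_control_policy_statements = [
    {
      sid       = "RequireOwnerTeamTag"
      effect    = "Deny"
      actions   = ["ec2:RunInstances"]
      resources = ["arn:aws:ec2:*:*:instance/*"]

      condition = [
        {
          test     = "Null"
          variable = "aws:RequestTag/OwnerTeam"
          values   = ["true"]
        }
      ]
    }
  ]
}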
Hi all, I’m using the cloudposse/rds-cluster/aws module to create an rds aurora cluster. I now have the requirement to restore from a snapshot.
How do I do so?
I tried changing the snapshot_identifier, but the plan shows an update (and the execution does nothing) when it should show a destroy/create for the cluster.
Terraform module to provision an RDS Aurora cluster for MySQL or Postgres
Discovered this as well: https://github.com/hashicorp/terraform-provider-aws/issues/15563. Looks like I might be out of luck with this.
When the snapshot identifier for an aws_rds_cluster resource is updated, the resource is not recreated.
Expected Behavior
New cluster created with the snapshot.
Actual Behavior
Terraform apply does not replace the cluster
Steps to Reproduce
• Create an aws_rds_cluster resource • add a snapshot identifier • run terraform apply
Terraform v0.13.3
2023-02-02
So I could use some assistance with understanding the whole “aws-teams” “aws-teams-roles” and “aws-sso” components.. Currently I have an account created in my org that I would like to enable a set of users access to create aws resources in using atmos.. I have setup this group with the AdministratorAccess permissions set tied to the account in question.. My issue arises in how I configure them with the identity account’s roles which is what is used by atmos. Now I only want this team to have Admin access for this one account only. Suggestions as to how I could go about this?
Let’s move this to a new channel I created called refarch
2023-02-03
@Erik Osterman (Cloud Posse) can the terraform-yaml-config module be used outside of cloudposse's opinionated workflow, or is it coupled in one way or another that will ultimately make it a pain?
It would be better to use our terraform provider instead
This module should be updated to use it, but it hasn’t been updated yet.
The Cloud Posse Terraform Provider for various utilities (e.g. deep merging, stack configuration management)
The assumption is you are attempting to deep merge YAML in terraform
I opted for a simple yamldecode because I don’t even want any of the deep merge features at this point, it’ll just be overkill given we’re not really doing yaml tf anywhere outside this one library. thanks for the insight!
Makes sense
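For reference, the plain approach being described amounts to something like this (a sketch; the file name is hypothetical):

locals {
  # read and decode a single YAML file, no deep merging
  config = yamldecode(file("${path.module}/config.yaml"))
}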
Does anyone know how to get every non-master account bar a specific one?
I want to execute the following, but not in our security account:
resource "aws_guardduty_member" "this" {
provider = aws.security-account
count = length(data.aws_organizations_organization.this.non_master_accounts)
account_id = data.aws_organizations_organization.this.non_master_accounts[count.index].id
detector_id = aws_guardduty_detector.security_account.id
email = data.aws_organizations_organization.this.non_master_accounts[count.index].email
lifecycle {
ignore_changes = [email]
}
}
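One way to exclude the security account is to filter the non-master accounts with a for expression and switch to for_each. A sketch, assuming the security account ID is available in a variable (var.security_account_id is hypothetical):

locals {
  # every non-master account except the security (delegated admin) account
  guardduty_member_accounts = {
    for account in data.aws_organizations_organization.this.non_master_accounts :
    account.id => account if account.id != var.security_account_id
  }
}

resource "aws_guardduty_member" "this" {
  provider = aws.security-account
  for_each = local.guardduty_member_accounts

  account_id  = each.value.id
  detector_id = aws_guardduty_detector.security_account.id
  email       = each.value.email

  lifecycle {
    ignore_changes = [email]
  }
}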
I'm trying to do Route 53 VPC associations to zones as below with aws_route53_vpc_association_authorization, but it is unable to identify the depends_on attribute. Can anyone help me understand why I am not able to do this?
resource "aws_route53_vpc_association_authorization" "master" {
for_each = { for entry in local.zone_appvpc: "${entry.zone_id}.${entry.app_vpc}" => entry }
vpc_id = each.value.app_vpc
zone_id = each.value.zone_id
depends_on = [
aws_route53_zone.private
]
}
resource "aws_route53_zone" "private" {
count = "${length(var.r53_zones)}"
name = "${element(var.r53_zones, count.index)}"
force_destroy = true
vpc {
vpc_id = var.master_account_vpc_id
}
}
locals {
zone_ids = aws_route53_zone.private.*.id
app_vpcs = var.app_accounts[*].app_vpc_id
zone_appvpc = distinct(flatten([
for app_vpc in local.app_vpcs : [
for zone_id in local.zone_ids : {
zone_id = zone_id
app_vpc = app_vpc
}
]
]))
}
r53_zones = ["aws.org","google.org"]
master_account_vpc_id = "vpc-0b887bb1ccgh0a"
app_accounts = [
{ "account" : "49563775878", "app_vpc_id" : "vpc-0c8aeb58gty6be45e"},
{ "account" : "49556775878", "app_vpc_id" : "vpc-0c8adhghgty6be45e"}
]
I'm getting the error below, as it's unable to identify the dependency:
for_each = { for entry in local.zone_appvpc: "${entry.zone_id}.${entry.app_vpc}" => entry }
│ ├────────────────
│ │ local.zone_appvpc is a list of object, known only after apply
│
│ The "for_each" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work
│ around this, use the -target argument to first apply only the resources that the for_each depends on.
It's not related to depends_on.
This is a common "problem" with Terraform.
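The usual workaround is to build the for_each keys only from values known at plan time (here the zone names in var.r53_zones and the VPC IDs in var.app_accounts) rather than from resource attributes. An untested sketch reusing the names from the snippet above:

locals {
  zone_appvpc = distinct(flatten([
    for app_vpc in var.app_accounts[*].app_vpc_id : [
      for idx, zone_name in var.r53_zones : {
        key      = "${zone_name}.${app_vpc}" # built only from variables, so known at plan time
        zone_idx = idx
        app_vpc  = app_vpc
      }
    ]
  ]))
}

resource "aws_route53_vpc_association_authorization" "master" {
  for_each = { for entry in local.zone_appvpc : entry.key => entry }

  vpc_id  = each.value.app_vpc
  zone_id = aws_route53_zone.private[each.value.zone_idx].id # unknown values are fine here
}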
2023-02-05
Hey guys! What do you think about having a visual Terraform editor?
It looks like No Code tools are becoming quite popular so I was thinking how awesome it would be if you could create your Terraform configuration with No Code approach!
Would anyone be interested in using such a tool?
That might be tricky to maintain, but yeah, I'd be interested if I could just get the output I want without having to make code changes.
Would it support gitops or would it be console ops but for terraform?
Of course, you see both GitHub and GitLab icons in the corner? It would sync your Terraform code directly to your repo. So you can work both from the UI and VS Code.
I actually started building this tool recently, so if you want to play with it, it's available here: https://codesmash.studio/schema
CodeSmash is the scalable no-code platform that offers you full transparency to your code and the freedom to continue building.
It's still in early development, so you can't save your code just yet! :)
As a DevOps/Infra engineer I find it hard to see a real use case where I would want to use this tool over my current setup using VS Code + Terraform CLI + Terraform extensions for VS Code + Copilot/Codex. I do wonder if this is something that would appeal to devs more than infra engineers.
Maybe ask in #office-hours ?
Okay, one question, what does your current setup give you that the no-code approach doesn’t?
If I knew how you are using your system, I might be able to implement that experience into my tool out of the box! :)
Sure! Happy to share. My typical workflow looks like this…
• learn about the underlying resource I am trying to provision (ex. Mysql rds from AWS)
• Look at the terraform docs for that resource/provider
• Implement happy path of that resource to get something deployed quickly
• Iterate and fine tune the resource with the settings I want
• Convert it into a module for reuse
• Pin all my versions to help ensure things remain consistent/reproducible.
• Deploy using the module into one or more environments (dev/prod).
• Ensure it’s all living in source control. Currently I use terraform cloud for deployments. And in some cases I run it locally and have a backend in s3 or another storage provider.
Oh yeah I’ve also added chatGPT to a lot of my workflows as well. So an AI assistant would be nice too but I imagine I could still use chatGPT with your tool as I do today.
OK, the first thing that comes to mind is that you mentioned the documentation. I should add a link to each resource pointing to the official Terraform documentation page.
All the other things you mentioned would be supported, since the whole thing saves everything in a git repo by default. So your CICD process can run as usual
But one thing it would offer you which you currently might not have is a package manager like NPM. You would actually be able to install several repos together into a larger repo, which you would then simply provision into a new infrastructure. Do you currently have something like that?
This is different than using modules, correct? I could say deploy this repo and then this other repo right after?
Yes, so here's the idea. You would have many modules in many different repos. But you could then combine them into another repo based on your needs. Then you would just launch this larger repo and have your infrastructure up and ready.
For example, let's say you have an API repo and a DB repo. Then you create an APP1 repo which consists of both the API and DB repos, and then create an APP2 repo which also consists of the API and DB repos but with different parameters.
Sorry. I can’t see much use for this compared to using terraform-lsp & Copilot (for autocompletion) and modules (for composition)
Visual display could be useful for visualising dependencies, which are hard to track when reading linear code spread over multiple files. But this is a niche use-case for me.
Visual editors can make sense for people less familiar with the underlying tooling. But then you end up needing to support people/code ejecting from your system once they get more experienced/complex.
And more broadly, I am all for lowering the barrier to entry for Terraform, but I don’t want it more widely used at work. The hard part about Terraform is understanding the underlying AWS (et al) APIs. Not the language itself.
the idea is more that you would have a packaging system for your modules. The UI really does lower the barrier of entry but isn’t the main feature. The main feature is that you would be able to compose your whole infrastructure from smaller modules in a more maintainable way.
I think I’d need to see a demo of this to understand. But I wouldn’t say composition is a painful part of the Terraform experience
okay, that’s good. I’ll be sure to let you know once I have an MVP ready.
This reminds me of working with CAD tools from my days designing circuits with VHDL. Same scenario: we had code that represented circuits (like TF modules) and we connected them using inputs and outputs (like TF variables and outputs).
We used code editors to do the design work but it was nice to have tools that visualized the code as a system, showing what modules were connected and how.
I see the same benefit from this tool. Not sure I would do the design work in a GUI but I would appreciate being able to see the complete system as the design came together.
In the GUI view, links to the source of a module (or TF docs for base modules) would be a nice touch.
One thing I would suggest is putting the inputs along the left edge of the block and the outputs along the right edge of the block. considering flow diagrams, that may make it easier to read (for left to right reading) than having the inputs and output in a top to bottom list.
I will be interested to see what are the benefits but as other said I will need to see a demo to compare with my current setup and see the benefits
Another UI/UX point I would suggest: indicate if a module is a resource module or if it contains other modules.
I imagine a tool like this (again, thinking about CAD tools I’ve used) would start at the top level of the system. The top level might be just one block with inputs and outputs. inside that block (say, double click to get into it) would be other modules that are either base resources or perhaps contain other modules, and so on going down until there are only base resources.
not sure you have that in mind but I think that would be a useful feature for super modules that collect multiple submodules into a single, callable module.
@managedkaos yup, that's the idea. But in my case, I would totally leave it up to the user how their module behaves. The community can create any type of module they want and use the git URL as a source for it. Once it's loaded into the UI, only the top-level input and output variables will be picked up. Then you can connect them with other repos to form a cohesive infrastructure.
You would then be able to save this template to your own repo and reuse it in other projects.
@jose.amengual thanks, I’ll contact you when my MVP is ready
Just to clarify some things. This is not a Terraform-specific project; Terraform is just the starting point. The community would be open to making their own providers which would be able to handle other tools as well. So you would then be able to chain multiple repos and process them using this system. You would then package them together to create full-stack apps where you would have all your code available in your repos. So, no more lock-in like with Bubble. This is more of an App Package Manager, but with a visual UI editor. Also, there would be API/CLI access as well.
2023-02-06
Hello Terraformers! I just finished the 2nd part of my AWS PrivateLink series, Check it out and let me know what you think. Also, are you currently leveraging Privatelink? https://www.taccoform.com/posts/aws_pvt_link_2/
Overview In the previous PrivateLink post, we went through the separate resources that make up AWS PrivateLink. In this post, we will be provisioning a PrivateLink configuration which will allow resources in one VPC to connect to a web service in another VPC. You can use several different AWS services to accomplish the same goal, but PrivateLink can simplify some setups and meet security expectations with its standard one-way communication.
2023-02-07
What are people’s opinions regarding storing state (not in a TACO)? This would be for a monorepo of ops code and there are maybe 20 accounts. So one option would be one-state-account-to-rule-them-all or basically a state bucket in every account. Personally I like the latter, but the former definitely seems easier in the beginning. Curious if people have hard-won experience in both these areas that have helped them form opinions?
I've always thought a single bucket is the easiest. I've had clients that required multiple buckets for compliance reasons.
We use a monorepo which contains both the application code and most of the supporting infra. We have a state bucket in each of our accounts. So far we are happy with that setup.
+1 for the state bucket per account. Within that bucket, each workload has its own folder. Teams I’ve worked with used this approach for dozens of accounts and it helped with knowing where to look for things.
I went with per-account. To the folks who replied: wow, I APPRECIATE the input.
No problem! I wanted to share an example at the time but was away from a computer. Here’s a sample of how we set the backend up with S3:
terraform {
  required_version = "~> 1"

  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }

  backend "s3" {
    bucket = "acme-terraform-state-files"
    key    = "project-name-here/infrastructure.tfstate"
    region = "us-west-2"
  }
}
each project uses the same format and updates the key section
I’d also add a version constraint on the aws provider and add dynamodb locking
If you decide to use multiple S3 buckets, I'd still use a single locking DynamoDB table
If you decide to use multiple s3 buckets, you may also want to decide where you want to keep those buckets (all in one account or one in each account), and the bucket naming convention
If you're not using a Terraform wrapping tool like atmos (made by Cloud Posse), or Terragrunt (by Gruntwork), or Terramate, then you may also want to define the key naming convention and/or the workspace key structure (if using workspaces)
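Putting those suggestions together, the backend block above might grow into something like this (a sketch; the lock table name and the exact provider constraint are placeholders):

terraform {
  required_version = "~> 1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # pin the provider's major version
    }
  }

  backend "s3" {
    bucket         = "acme-terraform-state-files"
    key            = "project-name-here/infrastructure.tfstate"
    region         = "us-west-2"
    dynamodb_table = "acme-terraform-locks" # single lock table shared by all projects
    encrypt        = true
  }
}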
Does anyone have a good way to deal with a null value in a ternary that is breaking higher up in the dictionary?
bucket = var.build_config.source.storage_source.bucket != null ? var.build_config.source.storage_source.bucket : google_storage_bucket.bucket.name
=>
Error: Attempt to get attribute from null value
│
│ on ../../../modules/cloud_function/main.tf line 26, in resource "google_cloudfunctions2_function" "function":
│ 26: bucket = var.build_config.source.storage_source.bucket != null ? var.build_config.source.storage_source.bucket : google_storage_bucket.bucket.name
│ ├────────────────
│ │ var.build_config.source is null
│
│ This value is null, so it does not have any attributes.
In other words, it's breaking because before it even gets to the final key to check, the key above it is null, so it has no attribute to even look for.
This is a block list with a max of 1. I don't think using a dynamic block would really help. I have marked var.build_config.source as optional because it is, as are all other values below it.
I guess I could use a nested ternary lol where I first check source and then check storage for nulls but oof, man that is kludgy.
try(var.build_config.source.storage_source.bucket, null) != null
The try function tries to evaluate a sequence of expressions given as arguments and returns the result of the first one that does not produce any errors.
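Applied to the expression above, that suggestion might look like this (a sketch):

bucket = (
  try(var.build_config.source.storage_source.bucket, null) != null
  ? var.build_config.source.storage_source.bucket
  : google_storage_bucket.bucket.name
)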
@Andriy Knysh (Cloud Posse) you're a mensch, buddy.
nice to see you again @Jonas Steinberg. Looks like you are working with GCP now
always a pleasure to see you my friend – yes GCP for the time being, but AWS is always in the background.
Does anyone know how to create an IAM role that allows federated SSO users access in the "assume role policy"?
data "aws_iam_policy_document" "trust" {
statement {
effect = "Allow"
actions = ["sts:AssumeRoleWithWebIdentity", "sts:TagSession"]
principals {
type = "Federated"
identifiers = [aws_iam_openid_connect_provider.this.arn]
}
}
}
This sort of trust policy is what you need
I am using IAM Identity Center though.
If I can help it, I don't want to have to add AWSReservedSSO_AdministratorAccess_REDACTED/[email protected] for each of the people I want to have access.
oh, whoops. https://aws.amazon.com/premiumsupport/knowledge-center/iam-identity-center-federation/ might be related
2023-02-08
is this valid …
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::aws:policy/AWSServiceRoleForSSO"
      }
    }
  ]
}
No, policies are not principals
It doesn't seem that a permission set in IAM Identity Center allows inline policies that include assume roles.
at present we have …
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::REDACTED:role/atlantis-ecs_task_execution",
          "arn:aws:iam::REDACTED:root"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I want to remove the root
and add a specific group from SSO
It is not possible to add a group from SSO; the principal will be either a role ARN or an STS session ARN.
The combo of the group and account can be assigned to a permission set. That assignment will create a role named for the permission set in that account. When the user accesses the permission set, they assume that role in that account. That is the role you can use as a principal for assume-role actions.
gotcha that makes sense
The issue I have is that the SSO role ARN is random in its naming per account.
do you have many of them you want to trust? or just one?
It's just one, but I can't find a way of "finding the ARN" with some regex.
if you have many, you can use a condition instead of a principal…
"Condition": {
"ArnLike": {
"aws:PrincipalARN": [
"arn:*:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_ROLENAME_*",
]
}
}
if you have just one, create the assignment, and then hardcode the value
in the Principal field, you cannot use wildcards, that’s not allowed
but it works fine in conditions
What do I set the identifier block to? Sorry, a little bit of an IAM n00b.
probably leave it as:
"AWS": [
"arn:aws:iam::REDACTED:root",
]
where REDACTED is the current account?
REDACTED is the account to which the permission set is assigned. that’s where the role will be created
The permission set lives in the users account.
So in that case it needs to be the user account ID?
Oh, the role is created inside the specific account itself.
permission sets “live” in the sso account, nowhere else. the assignment of a permission set to a group:account creates a role in that account. roles live in accounts
Yeah, so REDACTED should be the current account ID.
let me take that for a spin, thanks @loren
cheers
I need to allow users (in a specific group from SSO) the ability to assume a specific role inside each account.
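Putting loren's suggestions together, the trust policy might be expressed in Terraform roughly like this (a sketch; the account ID and role-name pattern are placeholders to replace with your own):

data "aws_iam_policy_document" "trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::REDACTED:root"] # the account where the permission set role is created
    }

    condition {
      test     = "ArnLike"
      variable = "aws:PrincipalArn"
      values   = ["arn:*:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_ROLENAME_*"]
    }
  }
}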
hello
Is there a doc anywhere on how you make sure your modules remain compliant at cloudposse?
define compliant
The readme of each module
In which sense is it not compliant?
2023-02-09
Anyone experiencing Terraform Cloud hiccups? Pending jobs etc.?
v1.4.0-beta1
Hi,
Is this https://github.com/cloudposse/terraform-aws-tfstate-backend still recommended or is there a better one to use on Cloudposse ?
Terraform module that provisions an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption.
yep, that is the standard
Thankx
Thanks
Though it is not aws 4.x provider compatible
or 3.x
A terraform module to set up remote state management with S3 backend for your account.
why does it have to be compatible with 4?
To use a supported and actively developed version of the provider.
3.x is still supported
3.76.1 PUBLISHED a month ago
By the way, did you check versions.tf? It is not pinned to version 3.
fresh install :
Initializing provider plugins...
- Finding hashicorp/local versions matching ">= 1.3.0"...
- Finding hashicorp/aws versions matching ">= 2.0.0, >= 3.0.0"...
- Finding hashicorp/time versions matching ">= 0.7.0"...
- Installing hashicorp/local v2.3.0...
- Installed hashicorp/local v2.3.0 (signed by HashiCorp)
- Installing hashicorp/aws v4.53.0...
- Installed hashicorp/aws v4.53.0 (signed by HashiCorp)
- Installing hashicorp/time v0.9.1...
- Installed hashicorp/time v0.9.1 (signed by HashiCorp)
The apply is failing, not the init, because of the S3 migrate.
Anyway, I used nozaq.
I use this every day with no issues; if you can't post more details, it's hard to tell.
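For what it's worth, if the goal is to hold the provider back while a module catches up, one option is to pin it in the root module, which constrains what terraform init selects (a sketch):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.76" # hold the provider at 3.x until the module supports 4.x
    }
  }
}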
I’m having trouble getting YAML into a heredoc:
# files.yml
file1:
  key1: "value1"
  key2:
    - subkey1: "subvalue1"
      subkey2: "subvalue2"
...

# module instantiation
module "my-mod" {
  source = "../foo"
  files  = yamldecode(file(files.yml))
}

# module source
locals {
  service_definitions = { magic }
}

resource "datadog_service_definition_yaml" "service_definition" {
  for_each = local.service_definitions

  service_definition = <<EOF
"${each.value}"
EOF
}
The error is:
Error: Invalid template interpolation value
│
│ on ../../../modules/datadog-service-definitions/main.tf line 12, in resource "datadog_service_definition_yaml" "service_definition":
│ 11: service_definition = <<EOF
│ 12: "${each.value}"
│ 13: EOF
│ ├────────────────
│ │ each.value is object with 9 attributes
│
│ Cannot include the given value in a string template: string required.
I’ve outputted the object: it does indeed have 9 attributes and looks correct as far as an object goes. Any hacks for this? I recognize it’s a type thing.
I have the raw YAML approach working:
# works
resource "datadog_service_definition_yaml" "service_definition" {
  service_definition = <<EOF
schema-version: v2
...
can you just use yamlencode?
service_definition = yamlencode(each.value)
I just cracked it.
Thanks @loren
It turns out that the file built-in function returns a string literal, which removes the need for a heredoc. So all I have to do is simply pass my YAML file to the file function and then return my value to the input variable; no heredoc needed.
yeah i assumed you were doing some manipulation on the object, since you read in the file and decode it above…
module "my-mod" {
source = "../foo"
files = yamldecode(file(files.yml))
}
if not, then yeah, just file()
will give you your serialized yaml string
Try this:
# files.yml
file1:
  key1: "value1"
  key2:
    - subkey1: "subvalue1"
      subkey2: "subvalue2"

# module instantiation
module "my-mod" {
  source = "../foo"
  files  = yamldecode(file("files.yml"))
}

# module source
locals {
  service_definitions = { magic = "magic" }
}

resource "datadog_service_definition_yaml" "service_definition" {
  for_each = local.service_definitions

  service_definition = <<EOF
${each.value}
EOF
}
ah – files is the for_each source. anyway thanks && fwiw I solved it.
turns out file returns a string lol. makes sense!
v1.4.0-beta1 1.4.0 (Unreleased) UPGRADE NOTES:
config: The textencodebase64 function when called with encoding “GB18030” will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding “GBK” or “CP936” will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this…
v1.3.8 1.3.8 (February 09, 2023) BUG FIXES: Fixed a rare bug causing inaccurate before_sensitive / after_sensitive annotations in JSON plan output for deeply nested structures. This was only observed in the wild on the rancher/rancher2 provider, and resulted in glitched display in Terraform Cloud’s structured plan log view. (#32543) A…
Go’s append() reserves the right to mutate its primary argument in-place, and expects the caller to assign its return value to the same variable that was passed as the primary argument. Due to what…
2023-02-10
Selenium C# WebDriver not resizing browser window correctly I have to make visual tests for test automation purposes, with window browser size being (x=1366,y=668). When I’m running these tests in a headless mode, everything works fine and the window is as large as it should be. However, when I run the same tests WITHOUT headless mode, the browser window always is a bit smaller than it should be. How can it be made that the window is being sized as it is specified? Here we can see that the size of the window is not as expected - (1366, 668) <a…
How do you resize images css [closed] I thought i had everything correctly but i can’t get CSS to resize the image, someone please help I was trying to resize the image
Angular Material 15: How to resize focused components such as autocomplete or datepicker? i updated to material 15, i want ask how i can change height, or overall size of focused component such as autocomplete or datepicker, https://m2.material.io/design/layout/applying-density.html#usage , this page is telling me i can’t use density on such components so what is the other way of making my components smaller? before this i used material 12 and all i had to do was change…
Cypress drag and drop by offset I am trying to make DnD work in Cypress and React app. At first I was trying all examples around those with simulating by mouse events but it is not working in general in my environment. :( I was able to write a test where I know the drag element and drop element using this code. const dataTransfer = new DataTransfer();
cy.get(dragSelector).trigger(“dragstart”, { dataTransfer, force: true, });
cy.get(dropSelector).trigger(“drop”, { dataTransfer, force:…
CSS img won’t resize Total newbie here, tried to google my issue, even get on the second page of results… I’m stuck on resizing image and I have no idea why it is not working, hoping someone can help. By using Inspect on Chrome I can see that element for img is not being connected to the img and I have no idea why. Block below is within
Here is CSS that I have for the whole part. main { display: inline-block; background-color: var(–main-bgc);…
How do I resize views when using UITabBarController as popover I have a UITabBarController that is created as a popover view on an iPad screen. There are three tabs associated with it. I’m trying to make one popover (the first tab) smaller than the other two. Once the popover is created and displayed how can I change the size of the popover window when the first tab is selected vs the other two views? This is the code that creates the popover. let storyboard = UIStoryboard(name: “Flight”, bundle: nil) let tbc =…
How to make columns of a table resizable with CSS without property of table width 100% This question is similar to that you find here Would it be possible to do the same without using width 100%? I have a large table and I want to keep it scrollable horizontally without squeezing other columns as I enlarge one… Thank you so much!
In Flutter, how to reduce the image data before saving it to a file? I’m working on a screen recording app in Flutter. In order to keep the quality of the captured screen image, I have to scale the image as shown below. However, it makes the image data very large. How can I reduce the data size (scale back to the original size) before saving it to a file? For performance reasons, I want to work with raw RGBA instead of encoding it PNG. double dpr = ui.window.devicePixelRatio; final boundary = key.currentContext!.findRenderObject() as…
Dependencies array issue with useMemo and a debounced function
I have the following code that uses lodash.debounce:
// Get Dimensions
// Capture dimensions when App
first mounts, immediately
// Listen for resize events and trigger ‘getDimensions’ (debounced)
const getDimensions = () => dispatch(setDimensions({
viewportWidth: window.innerWidth,
viewportHeight: window.innerHeight
}))
const handleResize = useMemo(() => debounce(getDimensions, 250) , [getDimensions])
useEffect(() => { handleResize() window.addEventListener(‘resize’,…
How to insert a picture from an existing table with file path and desired placement I’m trying to create the following template: The user creates a table in a “Data Entry” worksheet that lists the following: File path ie: P:\Phone Camera Dump\20121224_111617.jpg Range where the picture is to be placed in the “PICS” worksheet. https://i.stack.imgur.com/bvsJs.jpg Once the list is finalized, the user executes and images are placed within the ranges specified on the “PICS” worksheet and dynamically re-sized. Presently the range has a set…
How to make SVG image to use available space? (and to avoid size-hardcoding) I have an SVG image as a child (at the begin) of a Row. I have wrapped it in a FractionallySizedBox() to make sure that image takes a certain amount of space in a Row. Now, I want Image to use that space, without setting fix width or height (since it can make problems on different devices). I have used attribute fit: BoxFit.scaleDown, but it is not working. Any suggestions? :) here my code: Row( children: [ Flexible( child: FractionallySizedBox(…
Flutter Web - Widgets not resizing with browser zoom I am currently building a Flutter web application and have encountered an issue where some of my widgets do not resize properly with the browser zoom. When the user decreases the zoom level, some of the widgets become cutoff and disappear from the screen. I would like to request help in finding a solution to ensure that all of my widgets remain fully visible and properly resized regardless of the user’s browser zoom level. Steps to Reproduce Open the Flutter web application in a browser. Zoom…
resize2fs not resizing the filesystem size equal to disk size? While trying online file system resize. I see the file system size is less than the disk size by 2MB. Can not figure out what is the issue. Tried with ext3/ext4 file system. I see same issue. I see these logs in dmesg again and again. dmesg, [Fri Feb 10 0411 2023] EXT4-fs (nvme1n1): resizing filesystem from 168460288 to 168460800 blocks [Fri Feb 10 0411 2023] EXT4-fs (nvme1n1): resized filesystem to 168460800 [Fri Feb 10 0434 2023] EXT4-fs (nvme1n1): resizing filesystem from…
I see
finally
resource/aws_lambda_function: Add replace_security_groups_on_destroy and replacement_security_group_ids attributes (#29289)
4.54.0 (Unreleased)
NOTES:
• provider: Resolves provider crashes reporting Error: Plugin did not respond and fatal error: concurrent map writes with updated upstream package (terraform-plugin-log) (#29269)
• resource/aws_networkmanager_core_network: The policy_document attribute is being deprecated in favor of the new aws_networkmanager_core_network_policy_attachment resource. (#29097)
FEATURES:
• New Resource: aws_evidently_launch (#28752)
• New Resource: aws_lightsail_bucket_access_key (#28699)
• New Resource: aws_networkmanager_core_network_policy_attachment (#29097)
ENHANCEMENTS:
• data-source/aws_cloudtrail_service_account: Add service account ID for ap-southeast-4 AWS Region (#29103)
• data-source/aws_elb_hosted_zone_id: Add hosted zone ID for ap-southeast-4 AWS Region (#29103)
• data-source/aws_lb_hosted_zone_id: Add hosted zone IDs for ap-southeast-4 AWS Region (#29103)
• data-source/aws_s3_bucket: Add hosted zone ID for ap-south-2 AWS Region (#29103)
• data-source/aws_s3_bucket: Add hosted zone ID for ap-southeast-4 AWS Region (#29103)
• provider: Support ap-southeast-4 as a valid AWS region (#29329)
• resource/aws_dynamodb_table: Add arn, stream_arn, and stream_label attributes to replica to obtain this information for replicas (#29269)
• resource/aws_efs_mount_target: Add configurable timeouts for Create and Delete (#27991)
• resource/aws_lambda_function: Add replace_security_groups_on_destroy and replacement_security_group_ids attributes (#29289)
• resource/aws_networkfirewall_firewall: Add ip_address_type attribute to the subnet_mapping configuration block (#29010)
• resource/aws_networkmanager_core_network: Add base_policy_region and create_base_policy arguments (#29097)
BUG FIXES:
• data-source/aws_kms_key: Reinstate support for KMS multi-Region key ID or ARN values for the key_id argument (#29266)
• resource/aws_cloudwatch_log_group: Fix IAM eventual consistency error when setting a retention policy (#29325)
• resource/aws_dynamodb_table: Avoid recreating table replicas when enabling PITR on them (#29269)
• resource/aws_ec2_client_vpn_endpoint: Change authentication_options from TypeList to TypeSet as order is not significant (#29294)
• resource/aws_kms_grant: Retries until valid principal ARNs are returned instead of not updating state (#29245)
• resource/aws_opsworks_permission: stack_id and user_arn are both Required and ForceNew (#27991)
• resource/aws_prometheus_workspace: Create a logging configuration on resource update if none existed previously (#27472)
• resource/aws_s3_bucket: Fix crash when logging is empty (#29243)
• resource/aws_sns_topic: Fixes potential race condition when reading policy document. (#29226)
• resource/aws_sns_topic_policy: Fixes potential race condition when reading policy document. (#29226)
4.53.0 (February 3, 2023)
ENHANCEMENTS:
• provider: Adds structured fields in logging (#29223)
• provider: Masks authentication fields in HTTP header logging (#29223)
4.52.0 (January 27, 2023)
NOTES:
• resource/aws_dynamodb_table: In the past, in certain situations, kms_key_arn could be populated with the default DynamoDB key alias/aws/dynamodb. This was an error because it would then be sent back to AWS and should not be. (#29102)
• resource/aws_dynamodb_table: In the past, in certain situations, server_side_encryption.0.kms_key_arn or replica.*.kms_key_arn could be populated with the default DynamoDB key alias/aws/dynamodb. This was an error because it would then be sent back to AWS and should not be. (#29102)
• resource/aws_dynamodb_table: Updating replica.*.kms_key_arn or replica.*.point_in_time_recovery, when the replica's kms_key_arn is set, requires recreating the replica. (#29102)
• resource/aws_dynamodb_table_replica: Updating kms_key_arn forces replacement of the replica now as required to re-encrypt the replica (#29102)
FEATURES:
• New Data Source: aws_auditmanager_framework (#28989)
• New Resource: aws_auditmanager_assessment_delegation (#29099)
• New Resource: aws_auditmanager_framework_share (#29049)
• New Resource: aws_auditmanager_organization_admin_account_registration (#29018)
ENHANCEMENTS:
• resource/aws_wafv2_rule_group: Add oversize_handling argument to body block of the field_to_match block (#29082)
BUG FIXES:
• resource/aws_api_gateway_integration: Prevent drift of connection_type attribute when aws_api_gateway_deployment triggers are used (#29016)
• resource/aws_dynamodb_table: Fix perpetual diffs when using default AWS-managed keys (#29102)
• resource/aws_dynamodb_table: Fix to allow updating of replica.*.kms_key_arn (#29102)
• resource/aws_dynamodb_table: Fix to allow updating of replica.*.point_in_time_recovery when a replica has kms_key_arn set (#29102)
• resource/aws_dynamodb_table: Fix unexpected state 'DISABLED' error when waiting for PITR to update (#29086)
• resource/aws_dynamodb_table_replica: Fix to allow creation of the replica without errors when kms_key_arn is set (#29102)
• resource/aws_dynamodb_table_…
2023-02-12
Does cloudposse provide a secure account baseline module with all the other modules stitched together?
see atmos
2023-02-13
Is it possible to pass a variable to downstream modules via context.tf, or do they have to be explicitly defined as variables in the module? Specifically I'm trying to use the module cloudposse/s3-bucket/aws with user_enabled = true and also passing path as a variable to the downstream cloudposse/iam-s3-user/aws module
The path variable would need to be passed within the s3 module. If it’s not then PR time.
Alternative would be passing the user into the module.
Right, thanks. I took out the S3 user part and do that separately. Had to create a PR for passing a permission boundary on the S3 user module, but I'll let that follow its intended workflow.
I'm hitting a wall here and I'm pretty sure it's something stupid. I'm using this module https://github.com/cloudposse/terraform-aws-iam-role and passing policy_documents, and with one policy it's all good, but I need to pass the policy names based on the names entered as inputs from a variable and it does not work.
A Terraform module that creates IAM role with provided JSON IAM polices documents.
like :
locals {
  permissions_policies = [for policyname in var.permissions_policies : format("data.aws_iam_policy_document.%s.json", policyname)]
}
if I pass the local to the module
policy_documents      = local.permissions_policies
policy_document_count = local.policy_document_count
I get :
Error: "override_policy_documents.0" contains an invalid JSON: invalid character 'd' looking for beginning of value
│
│ with module.github_dev_role.data.aws_iam_policy_document.default[0],
│ on .terraform/modules/github_dev_role/main.tf line 42, in data "aws_iam_policy_document" "default":
│ 42: override_policy_documents = var.policy_documents
try?
locals {
  permissions_policies = [for policyname in var.permissions_policies : data.aws_iam_policy_document[policyname].json]
}
and I think it's because it's taking the names as strings instead of resource names
but I have 2 data resources
2 different policies
it's not one policy
I don’t use that module, so am not sure what it expects. Just commenting on what you’re sharing here…
what I mean is :
permissions_policies = [ data.aws_iam_policy_document.AllowLambda.json, data.aws_iam_policy_document.AllowECS.json]
if I pass that it works
Yes, I believe my edit will get you to that
the module input var is a list(string)
so I thought it should work
But the way you’re using format() does not work
effectively, your approach using format() is passing this:
permissions_policies = [
  "data.aws_iam_policy_document.AllowLambda.json",
  "data.aws_iam_policy_document.AllowECS.json"
]

not this:

permissions_policies = [
  data.aws_iam_policy_document.AllowLambda.json,
  data.aws_iam_policy_document.AllowECS.json
]
my approach gets you to the latter
let me try
main.tf line 2, in locals:
│ 2: permissions_policies = [for policyname in var.permissions_policies : data.aws_iam_policy_document[policyname].json]
│
│ The "data" object must be followed by two attribute names: the data source type and the resource name.
I added the ., but it did not work either.
Oh I see what you’re doing. You have two separate, named data sources. You’ll need to use one, with for_each to handle both. Or just hardcode the named references
I was trying to build a list of resource IDs, basically.
There’s no way to dynamically generate/resolve the data sources name like you’re trying
no issues, I thought there was a way
Would you include your full or partial data source definition?
sure
data "aws_iam_policy_document" "GithubActionsAssumeLambdaPolicy" {
statement {
effect = "Allow"
actions = [
"lambda:*",
"logs:*",
"iam:PassRole",
"iam:GetRole",
"iam:ListRoles",
"iam:GetRoles",
"iam:CreateRole",
"iam:PutRolePolicy",
"iam:AttachRolePolicy",
"iam:DetachRolePolicy",
"iam:DeleteRole",
"iam:ListPolicies",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:DeleteObject",
"cloudformation:CreateStack",
"cloudformation:CreateChangeSet",
"cloudformation:ListStacks",
"cloudformation:UpdateStack",
"cloudformation:DescribeChangeSet",
"cloudformation:ExecuteChangeSet",
"cloudformation:DescribeStacks",
"cloudformation:GetTemplateSummary",
"cloudformation:DescribeStackEvents"
]
resources = [
"arn:aws:logs:${var.region}:${var.account_number}/*",
"arn:aws:lambda:${var.region}:${var.account_number}:function:{namespace}-{environment}-*",
"arn:aws:iam:${var.account_number}:role/*",
"arn:aws:s3:*",
"arn:aws:cloudformation:${var.region}:${var.account_number}:stack/*"
]
}
provider = aws.automation
}
data "aws_iam_policy_document" "GithubActionsAssumeECSDeployPolicy" {
statement {
sid = "RegisterTaskDefinition"
effect = "Allow"
actions = [
"ecr:CompleteLayerUpload",
"ecr:GetAuthorizationToken",
"ecr:UploadLayerPart",
"ecr:InitiateLayerUpload",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecs:RegisterTaskDefinition",
"iam:PassRole",
"ecs:UpdateService",
"ecs:DescribeServices"
]
resources = [
"arn:aws:iam:${var.account_number}:role/*",
"arn:aws:ecr:${var.region}:${var.account_number}:pepe/*",
"arn:aws:ecs:${var.region}:${var.account_number}:*"
]
}
}
Why not just combine those policies into an aggregate then consume the aggregate?
data "aws_iam_policy_document" "aggregated" {
override_policy_documents = [
data.aws_iam_policy_document.base.json,
data.aws_iam_policy_document.resource_full_access.json
]
}
Then you can use this
data.aws_iam_policy_document.aggregated.json
because this component can create roles that can have many different policies attached
Try this
provider "aws" {
region = "us-east-1"
}
variable "policies" {
type = set(string)
default = []
}
locals {
policies = {
"1" = data.aws_iam_policy_document.example_1.json
"2" = data.aws_iam_policy_document.example_2.json
}
override_policy_documents = compact([
for p in var.policies:
lookup(local.policies, p, "")
])
}
data "aws_iam_policy_document" "example_1" {
statement {
sid = "1"
actions = [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation",
]
resources = [
"arn:aws:s3:::*",
]
}
}
data "aws_iam_policy_document" "example_2" {
statement {
sid = "2"
actions = [
"s3:ListAllMyBuckets",
"s3:GetBucketLocation",
]
resources = [
"arn:aws:s3:::*",
]
}
}
data "aws_iam_policy_document" "aggregated" {
count = length(local.override_policy_documents) > 0 ? 1 : 0
override_policy_documents = local.override_policy_documents
}
output "aggregated" {
value = one(data.aws_iam_policy_document.aggregated[*].json)
}
I tested this with the following inputs
terraform plan -var='policies=[]'
terraform plan -var='policies=[""]'
terraform plan -var='policies=["1"]'
terraform plan -var='policies=["1","2"]'
2023-02-14
peeps any experienced scalr users in here? would love to setup a time to talk and get our takes. I’m a spacelift fan myself, but have some counter scalr price points, as well and would love to get a sense of product maturity, feature release schedule, pricing, ease of setup and use, plan/apply latency, things like this. thanks!
posting this once more lol:
peeps any experienced scalr users in here? would love to setup a time to talk and get our takes. I’m a spacelift fan myself, but have some counter scalr price points, as well and would love to get a sense of product maturity, feature release schedule, pricing, ease of setup and use, plan/apply latency, things like this. thanks!
How would you guys handle an API-driven TFC workspace which needs to call multiple modules? E.g. a module for security group creation, a module for EC2 instance creation, etc. Thinking of uploading different configuration versions and triggering the run to reference the relevant configuration version.
2023-02-15
Hello team, I am using a Terraform module for Lambda. Here's my code:
module "c6_emr_failed_jobs_lambda" {
source = “terraform-aws-modules/lambda/aws”
function_name = “test”
description = “Lambda function to monitor emr failed jobs”
handler = “failed.lambda_handler”
runtime = “python3.7”
architectures = [“x86_64”]
vpc_subnet_ids = module.vpc.private_subnets
vpc_security_group_ids = [“sg-xxxxxxxxxxxx”]
environment_variables = {
ENVIRONMENT = “${local.env_name}”,
REGION = var.region
}
create_package = false
timeout = 30
package_type = “Zip”
s3_existing_package = {
s3_bucket = “bucket-test-temp”
s3_key = “emr-failed-job-monitoring/karan.zip”
}
tags = local.tags
}
I am getting this error –> Error: filename, s3_* or image_uri attributes must be set
Can anyone help with what I am missing?
I want to pick up my Lambda code from S3.
as I see in docs
s3_existing_package = {
  bucket = aws_s3_bucket.builds.id
  key    = aws_s3_object.my_function.id
}
keys are bucket and key, not s3_bucket and s3_key
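So the block from the snippet above would become something like this (same values, just the renamed map keys):

s3_existing_package = {
  bucket = "bucket-test-temp"
  key    = "emr-failed-job-monitoring/karan.zip"
}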
Oh, I completely missed that. Thanks @Andrey Taranik
posting this once more lol:
peeps any experienced scalr users in here? would love to setup a time to talk and get our takes. I’m a spacelift fan myself, but have some counter scalr price points, as well and would love to get a sense of product maturity, feature release schedule, pricing, ease of setup and use, plan/apply latency, things like this. thanks!
@Sebastian Stadil can maybe make some intros or connect you with someone
nice, thanks erik.
@Jonas Steinberg Sorry for the delay, I’m happy to help
Quick question: let's say I need to create a few secrets, and the content for those secrets is tokens for external services like Facebook and Twitter. How do you guys handle that kind of stuff? Create the secret and then manually add the value there?
Yes, we call those “integration secrets” and they are typically (manually) stored in SSM or ASM.
Alternatively you can use something like sops secrets with KMS+IAM.
There are a number of new tools on the market (open source) that slap a UI on top of the secrets to make them easier for developers/teams to manage.
see announcements in #office-hours
v1.4.0-beta2 1.4.0 (Unreleased) UPGRADE NOTES:
config: The textencodebase64 function when called with encoding “GB18030” will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding “GBK” or “CP936” will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this…
v1.3.9 1.3.9 (February 15, 2023) BUG FIXES: Fix crash when planning to remove already-deposed resource instances. (#32663)
This is a highly local fix for a panic which seems to arise when planning orphaned deposed instances. There are two other things to investigate in response to this issue, likely addressing each in …
2023-02-16
Short question on multi-account VPC peering: is there a possibility to split the requester part into one Terraform Cloud workspace and the accepter part into another one, so no cross-account permissions are needed?
I think you can do this by using the respective IAM access credentials per workspace?
It seems that the resource aws_vpc_peering_connection_accepter can't work without aws_vpc_peering_connection, as it tries to spin up a new VPC peering connection, or am I wrong?
Did you test this kind of workflow?
I have not tested this. Have you tried using a data resource instead? https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/vpc_peering_connection
good point, will test it and give you feedback
It was a good idea, but there are 2 different remote states, and the accepter part doesn't know about the requester.
So I think the idea of splitting the requester and accepter into 2 different Terraform Cloud workspaces, where one can create the request and the second one can accept it at any time, is not doable.
So looking at the TF resources, the vpc_peering_connection is a requirement for the accepter; however, you should be able to fetch that data using data resources. So you will definitely have to create the connection request first, but I imagine you will be able to accept any time after in the separate workspace using that data resource I shared. If you want to accept before the connection request was made, I do not believe this would be doable.
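A rough sketch of the accepter-side workspace using that data source, assuming the requester side has already created the pending connection (the variables are hypothetical):

# Accepter workspace: look up the pending peering connection created by the requester workspace
data "aws_vpc_peering_connection" "requested" {
  vpc_id      = var.requester_vpc_id # hypothetical variable
  peer_vpc_id = var.accepter_vpc_id  # hypothetical variable
  status      = "pending-acceptance"
}

resource "aws_vpc_peering_connection_accepter" "this" {
  vpc_peering_connection_id = data.aws_vpc_peering_connection.requested.id
  auto_accept               = true
}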
I have a question regarding the terraform-aws-ecs-alb-service-task module… I'm trying to create a service associated with two target groups (through the ecs_load_balancers variable), but I'm having a dependency problem. There is no problem if I just try to associate a single target group; I think it's due to this condition: enable_ecs_service_role = module.this.enabled && var.network_mode != "awsvpc" && length(var.ecs_load_balancers) >= 1, because of the check length(var.ecs_load_balancers) >= 1
2023-02-17
I have a silly question for some people, but I've been banging my head on this for a while.
I'm using a laptop that uses Zscaler. I was able to export the self-signed cert for Python using python -m certifi.
I'm seeing this issue when doing terraform init. This is using Terraform Enterprise and modules from Terraform Enterprise; the TFE is self-hosted on an internal domain.
This error repeats for every module. Let me know your thoughts.
╷
│ Error: Failed to download module
│
│ Could not download module "r53_private_zone" (.terraform/modules/tfvpc-us-west-2-devqa/modules/dns/main.tf:9) source code from
│ "git::<https://github.com/terraform-aws-modules/terraform-aws-route53?ref=v2.10.2>": error downloading
│ '<https://github.com/terraform-aws-modules/terraform-aws-route53?ref=v2.10.2>': /opt/homebrew/bin/git exited with 128: Cloning into
│ '.terraform/modules/tfvpc-us-west-2-devqa.vpc_nfw_nsfw-required.dns.r53_private_zone'...
│ fatal: unable to access '<https://github.com/terraform-aws-modules/terraform-aws-route53/>': error setting certificate verify locations: CAfile: <name of my ca file>
│ CApath: none
│ .
╵
2023-02-18
Any opinions on what to do when a provider is incomplete (missing certain resources) or the provider doesn't exist at all? It seems some people use curl with local-exec, but that is pretty hacky.
Other options:
• improve the provider
• Don’t use Terraform for this, deal with it your current way and do other higher value work for a year
I’d actually encourage you to engage with the creators of the provider. If you can, fix/improve it so the community can benefit.
Thanks for the responses; they confirm what I suspected were the only options. Too bad the mastercard/restapi provider is somewhat clunky; it could be good to use while a better solution (like improving the providers) is in progress.
@jonjitsu just saw this one also, terracurl… https://www.hashicorp.com/blog/writing-terraform-for-unsupported-resources
TerraCurl is a utility Terraform provider that allows engineers to make managed and unmanaged API calls in their Terraform code.
@Erik Osterman (Cloud Posse) that one is new, might be good for #office-hours and a linkedin share
2023-02-19
2023-02-20
Hi Team, I am facing the same problem as this issue, any update guys? https://github.com/cloudposse/terraform-aws-cloudfront-cdn/issues/74
2023-02-21
Been thinking about the null-label module and maybe adapting/forking it to make an Ansible equivalent. Our projects use the null provisioner to run Ansible code on the machine running Terraform, but different projects requiring stuff like group_vars, SSH configuration options, etc. make any form of shared module across our org brittle.
However, using a shared context object with Ansible config options (that can be injected into small modules and manipulated if required) seems like a novel use of the null-label style of module. Love to hear folks' thoughts; it'd be something I'd be open to open sourcing if others might benefit from such a thing.
Ultimately leading to something like this:
// default project ansible object
module "ansible" {
source = "org/ansible/null"
ssh_user = "centos"
ssh_extra_args = "-o IdentitiesOnly=yes -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
}
// override the default ansible module with some specific option
module "ansible_al2" {
source = "org/ansible/null"
context = module.ansible.context
ssh_user = "ec2-user"
}
// pass the overriden module into module making ec2 instances etc.
// this would still include the ssh_extra_args value due to inheritance
module "ec2_al2_instances {
instance_type = "t2.small"
instance_count = 5
ansible_context = module.ansible_al2.context
}
IMO, the context part of null-label is extremely complex and mostly useful when you are doing something multi-tenanted or with an extremely large number of practitioners.
Are you sure your needs are complex enough that simply composing values isn’t sufficient?
variable "ansible_defaults" {
type = object({
ssh_user = string
ssh_extra_args = string
})
}
module "ansible" {
ssh_user = "centos"
ssh_extra_args = var.ansible_defaults.ssh_extra_args
}
module "ansible2" {
ssh_user = "ec2-user"
ssh_extra_args = var.ansible_defaults.ssh_extra_args # duplicated from the above module, but is that so bad?
}
I’ve been thinking about the consequences of the “wrong abstraction.” My RailsConf 2014 “all the little things” talk included a section where I asserted: > duplication is far cheaper than the wrong abstraction And in the summary, I went on to advise: >
Yeah, I guess that’s what I’m tossing up right now. In our org we essentially have multi-tenants with respect to the different teams leveraging some of these shared modules.
I’m trying to avoid other teams complaining that they need a specific argument in Ansible to get it to work, and trying to shield the module author from having to take that into consideration. It could probably be avoided with a good set of input vars and some locals logic, but as we approach potentially a hundred or so of these modules, a shared module responsible for generating a valid ansible-playbook command is appealing.
Good feedback though - seems like more thought is required.
hey everyone, I reported a bug that we’ve been seeing in our Geodesic Shell builds when installing Terraform. https://github.com/cloudposse/packages/issues/3339 has anyone else seen it?
Found a bug? Maybe our Slack Community can help.
Describe the Bug
in our Dockerfile to build a Geodesic Shell we have a line to add terraform
RUN apk add terraform_0.12@cloudposse
this is now giving us an error
fetch https://alpine.global.ssl.fastly.net/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch https://alpine.global.ssl.fastly.net/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
fetch https://apk.cloudposse.com/3.11/vendor/x86_64/APKINDEX.tar.gz
fetch https://alpine.global.ssl.fastly.net/alpine/edge/testing/x86_64/APKINDEX.tar.gz
Segmentation fault (core dumped)
Expected Behavior
the line should run terraform
Steps to Reproduce
Steps to reproduce the behavior:
- Add the cloudposse package
- Run ‘apk add terraform_0.12@cloudposse’
- See error
Screenshots
Environment (please complete the following information):
Anything that will help us triage the bug will help. Here are some ideas:
• OS: Linux Alpine
Additional Context
We’ve also tried multiple versions of terraform (0.13, 0.14, 0.15) and they all fail with it
Probably incompatible with your OS and/or architecture
If you download the same binary from terraform releases, does it work?
All we do is repackage that.
It’s an older version of the shell; seems like it’s a problem with the apk from that. We were on a very old version (0.136.1); I upgraded to 0.152.5-alpine and everything worked.
I’ll close the ticket with the solution. thanks!
Ok, great!
Anyone know if you can use terraform (module or resource) to customize the AWS access portal URL? By default, you access the AWS access portal using a URL that follows this format: [d-xxxxxxxxxx.awsapps.com/start](http://d-xxxxxxxxxx.awsapps.com/start). But I want to customize it with TF.
don’t believe there is a resource for that. you can set it easily in the console as part of the bootstrap process
Yep, looks like there isn’t an AWS api endpoint for it.
Hey,
We are seeing this error in datadog:
Failed to process: error unmarshaling JSON: json: cannot unmarshal string into Go struct field GitHubOptions.appID of type int64
from our argocd-notifications-controller. Does someone know how to fix this?
what does this have to do with Terraform?
The error seems straightforward even if the use case isn’t. The field is passing a string instead of an int.
2023-02-22
Hello, I am trying to set up an AWS SQS dead letter queue using this simple example from the resource doc:
resource "aws_sqs_queue" "terraform_queue_deadletter" {
name = "terraform-example-deadletter-queue"
redrive_allow_policy = jsonencode({
redrivePermission = "byQueue",
sourceQueueArns = [aws_sqs_queue.terraform_queue.arn]
})
}
However, when I open the console I can see that the queue was not set up as a DLQ (screenshot attached).
Is there something I am missing, or is something in the AWS docs not very clear?
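If it helps, a hedged sketch of a complete pair: the redrive_policy (deadLetterTargetArn + maxReceiveCount) goes on the source queue and is what actually makes it use the DLQ, while the redrive_allow_policy on the dead-letter queue only restricts which queues may redrive into it. The standalone allow-policy resource below needs AWS provider v4+ and avoids a reference cycle between the two queues:
resource "aws_sqs_queue" "terraform_queue_deadletter" {
  name = "terraform-example-deadletter-queue"
}

resource "aws_sqs_queue" "terraform_queue" {
  name = "terraform-example-queue"

  # This is what wires the source queue to the DLQ
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.terraform_queue_deadletter.arn
    maxReceiveCount     = 4
  })
}

# Optional: restrict which source queues may use the DLQ
resource "aws_sqs_queue_redrive_allow_policy" "terraform_queue_deadletter" {
  queue_url = aws_sqs_queue.terraform_queue_deadletter.id

  redrive_allow_policy = jsonencode({
    redrivePermission = "byQueue"
    sourceQueueArns   = [aws_sqs_queue.terraform_queue.arn]
  })
}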
Anyone using dynamodb_endpoint in your backend configuration? I’m running Atlantis in account A and our current DynamoDB table is in account B. If I disable dynamodb_endpoint and dynamodb_table in my backend configuration, everything works fine. The S3 bucket is in account B, so I know I have my IAM roles and whatnot configured right. Basically everything works right until I enable DynamoDB state locking. The error I get from Atlantis is
ResourceNotFoundException: Required resource not found
I’m assuming it can’t find my table, since it works otherwise. Here are the values I’m using in my config…
dynamodb_endpoint = "dynamodb.us-east-1.amazonaws.com" <- Is this the correct value for the endpoint, assuming I’m in us-east-1?
dynamodb_table = "tf-state-lock"
As background, if I manually run terraform plans and applies from within account B as a terraform user, it also works. Is there some trick here that I’m missing?
did you set the correct Dynamo permissions? https://developer.hashicorp.com/terraform/language/settings/backends/s3#dynamodb-table-permissions
Terraform can store state remotely in S3 and lock that state with DynamoDB.
also, for cross-account access, the permissions must be on both sides
The role that Atlantis assumes must have permissions to the Dynamo table
and the table must allow the other role to access it
The permissions are correct in account B. I’ll have to double check in account A tomorrow. thanks for the tip
this is usually done using roles: RoleA (in account A) has the permissions to assume RoleB (in accountB). RoleB has the permissions to access the bucket and Dynamo, and also has a trust policy to allow RoleA to assume it
then for S3 backend, you specify RoleB
Yep… exactly how I’m doing it
so when TF runs under RoleA, it assumes RoleB to access the backend
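A minimal sketch of that trust relationship; the role name and account ID are hypothetical:
# RoleB lives in account B (the account that owns the state bucket and lock table)
# and trusts RoleA from account A.
resource "aws_iam_role" "role_b" {
  name = "tfstate-backend-access"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::111111111111:role/RoleA" # RoleA in account A
      }
    }]
  })
}
# RoleA also needs an identity policy allowing sts:AssumeRole on RoleB's ARN,
# and RoleB needs the S3/DynamoDB permissions from the backend docs linked above.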
Hey @Andriy Knysh (Cloud Posse) I checked all the permissions in my roles and they are all fine. I used the policy simulator in Account A and checked the role there, and it shows that I have the correct permissions. Same thing in Account B. I already know the role in Account B works because that is what our current Jenkins uses for terraform plans/applies. Is there some other mechanism that might keep someone from accessing a table from outside an account? From what I’m reading, DynamoDB doesn’t use ACLs, so it’s not that. The table is actually there and works; again, it’s what we already use for state locking. I just keep running into this resource not found
error. I even tried describing the table from my admin user that assumed the role in Account A, and it wasn’t able to access it either. Kind of unsure what else to check or try. Any ideas?
This is what I have in a policy in both accounts and those policies are attached to the roles in each account.
{
"Sid": "ReadWriteLocks",
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:GetItem",
"dynamodb:DeleteItem",
"dynamodb:DescribeTable"
],
"Resource": [
"arn:aws:dynamodb:us-east-1:111111111:table/table-name"
]
}
I’m wondering now if you can even use DynamoDB state locking if that table is in another account. There isn’t a way to specify a different account in the backend provider.
Dynamo table in another account works ok. You need to make sure you add a role to the backend config
{
"terraform": {
"backend": {
"s3": {
"acl": "bucket-owner-full-control",
"bucket": "xxx-use1-root-tfstate",
"dynamodb_table": "xxx-use1-root-tfstate-lock",
"encrypt": true,
"key": "terraform.tfstate",
"region": "us-east-1",
"role_arn": "arn:aws:iam::xxxxxxxx:role/xxxxx-gbl-root-tfstate",
"workspace_key_prefix": "xxxxxxx"
}
}
}
}
in the example above, the bucket and the table are in the root account in us-east-1
role_arn is added to the backend config of the component in accountA
so TF itself assumes RoleA to provision the resources in accountA, but for the backend it assumes role_arn in the other account to access the S3 bucket and Dynamo table (role_arn must have all the required permissions as described here https://developer.hashicorp.com/terraform/language/settings/backends/s3)
Terraform can store state remotely in S3 and lock that state with DynamoDB.
so having the S3 and Dynamo in another account works all the time
1) add a role to the backend, 2) check the role’s permissions, 3) make sure RoleA can assume the role_arn for the backend, and 4) role_arn has a trust policy to allow RoleA to assume it
dumb question but when doing stuff like
module "ops_siem_label" {
source = "../../../modules/null-label"
name = "ops-siem"
context = module.this.context
}
Is there a preferred way by CP for how you’d do the name, so that the - uses the separator? We often have 2-3 word names, and my OCD says this should respect the separator option the owner requested.
Use attributes and label_order if required.
using the name like that is perfectly ok
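For illustration, a sketch of the attributes approach using the same label as above; null-label joins name and attributes with the context’s delimiter, so whatever separator the owner requested is respected:
module "ops_siem_label" {
  source  = "../../../modules/null-label"
  context = module.this.context

  # "ops" and "siem" are joined with the context's delimiter (e.g. "-" or "_")
  name       = "ops"
  attributes = ["siem"]
}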
2023-02-23
2023-02-24
Hi all, what’s the best way to learn Terraform? Is there any course with good hands-on content?
I would agree with using the docs. The best way to learn Terraform is to:
- Think about what you want to build (ie, website with a loadbalancer, webserver, and external database for example)
- Decide what cloud you want to build it on
- Read the docs to start building the resources you need for step 1
When using this EKS module (terraform-aws-modules/eks/aws) you need to specify a kubernetes provider in order to modify the aws-auth configmap. I’m trying to create a kubernetes provider with an alias, but I’m not sure how to pass the alias information to the EKS module. Anyone know how to do this?
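For reference, a hedged sketch of passing an aliased kubernetes provider into a module via the providers meta-argument; the output names here match recent versions of terraform-aws-modules/eks and may differ in yours:
provider "kubernetes" {
  alias                  = "eks"
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...cluster inputs...

  # Map the module's kubernetes provider requirement to the aliased provider above
  providers = {
    kubernetes = kubernetes.eks
  }
}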
v1.4.0-rc1 1.4.0 (Unreleased) UPGRADE NOTES:
config: The textencodebase64 function when called with encoding “GB18030” will now encode the euro symbol € as the two-byte sequence 0xA2,0xE3, as required by the GB18030 standard, before applying base64 encoding.
config: The textencodebase64 function when called with encoding “GBK” or “CP936” will now encode the euro symbol € as the single byte 0x80 before applying base64 encoding. This matches the behavior of the Windows API when encoding to this…
2023-02-25
2023-02-26
Hi folks, I’m trying to create an EKS cluster using cloudposse/eks-cluster/aws. I tried to refer to eks_cloudposse\terraform-aws-eks-cluster\examples\complete\README.yaml but it seems outdated. Where can I get the latest example?
Thanks in advance
the example https://github.com/cloudposse/terraform-aws-eks-cluster/tree/master/examples/complete is a working example, it gets deployed to AWS on every PR, and we don’t merge a PR unless the example gets deployed successfully. Please try it
I tried it and it is working now. Earlier I was referring to the README.yaml file; I made a custom main.tf using the README and was using the latest cloudposse modules for vpc (2.0.0), subnets, etc.
I noticed that in vpc 2.0.0, ipv4_primary_cidr_block is used instead of the cidr_block used in vpc 1.1.0.
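A small sketch of that rename; everything except the CIDR input is illustrative:
module "vpc" {
  source  = "cloudposse/vpc/aws"
  version = "2.0.0"

  ipv4_primary_cidr_block = "10.0.0.0/16" # was `cidr_block` in 1.x

  context = module.this.context
}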
2023-02-27
https://medium.com/p/cc2d1d148fcd maybe this article helps
Lets learn to build a EKS cluster using Terraform in a simple method.
Hi Radha, this article is using the terraform-aws-modules/eks/aws module, and I was trying to do it with the cloudposse/eks-cluster/aws module.
2023-02-28
Hi folks, I was trying to combine the eks-cluster module with the bastion server module (cloudposse/ec2-bastion-server/aws) and I have done that successfully.
But I need to pass some user_data to the bastion server. I tried passing file("my_file.sh"), but it is not working because the user_data variable is of type list.
How can I achieve this?
I also tried using the user_data_template variable, but it tries to fetch the file from the module itself rather than from my files.
And how can I do kubeconfig with the output of the eks-cluster module?
Can I use the remote-exec provisioner for this purpose?
Thanks in Advance
But I need to pass some user_data to bastion server
I also tried using user_data_template variable.
I think you’ll want to use an absolute path that points to your template file.
https://github.com/cloudposse/terraform-aws-ec2-bastion-server/blob/master/user_data/amazon-linux.sh
#!/usr/bin/env bash
exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1
##
## Setup SSH Config
##
cat <<"__EOF__" > /home/${ssh_user}/.ssh/config
Host *
StrictHostKeyChecking no
__EOF__
chmod 600 /home/${ssh_user}/.ssh/config
chown ${ssh_user}:${ssh_user} /home/${ssh_user}/.ssh/config
##
## Enable SSM
##
if [ "${ssm_enabled}" = "true" ]
then
systemctl enable amazon-ssm-agent
systemctl start amazon-ssm-agent
systemctl status amazon-ssm-agent
else
systemctl disable amazon-ssm-agent
systemctl stop amazon-ssm-agent
systemctl status amazon-ssm-agent
fi
${user_data}
how can I do kubeconfig with the output of eks-cluster module
aws eks update-kubeconfig --region <region> --name <cluster-name>
I tried like this: user_data_template = "my_user_data/install_kubectl.sh"
But I’m getting this error:
are you using the module variable user_data?
https://github.com/cloudposse/terraform-aws-ec2-bastion-server/blob/master/examples/complete/fixtures.us-east-2.tfvars#L17 is an example of how to add more lines to the boot-time script
user_data = [
Actually I tried both user_data and user_data_template.
In user_data, how can I add multiple lines?
My user data is : (file attached)
@tamsky Bro you got any idea ?
Seeing this now. Can you share your attempts to use user_data – share your full terraform config, either inline or as a gist?
And have you tried using filebase64("install_kubectl.sh") and providing that output to the module using the module parameter user_data_base64?
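A hedged sketch of the two approaches discussed in this thread; the paths are illustrative, user_data is treated as the list of extra boot-script lines referenced above, and user_data_base64 is only the parameter suggested here, so check the module’s inputs before relying on it:
module "bastion" {
  source = "cloudposse/ec2-bastion-server/aws"
  # ...other inputs...

  # Option 1: split a local script into lines for the list-typed user_data input
  user_data = split("\n", file("${path.module}/my_user_data/install_kubectl.sh"))

  # Option 2 (alternative, if the module exposes it):
  # user_data_base64 = filebase64("${path.module}/my_user_data/install_kubectl.sh")

  context = module.this.context
}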
Somewhat of a generic question about how cross-account transit gateway resources work…. (module or no module) https://github.com/cloudposse/terraform-aws-transit-gateway
Our org currently creates each environment’s vpc from a separate terraform directory using distinct account credentials. We have a separate-aws-accounts-per-environment type of setup (dev/stage/prod).
So far, I’ve only found examples that are structured around a single terraform config and a single terraform apply operation.
The single apply requires the credentials for two accounts, creates resources in both accounts, and connects the transit gateways with full knowledge of both sides.
We’d prefer to keep terraform configs separate (per account) and run separate terraform apply actions to stand up transit gateways.
If we can’t do it that way, I’m hoping to understand why not.
The setup we’re looking to create is two transit gateway(s) in two regions :
• existing vpc1, in region1, in accountA
• existing vpc2, in region2, in accountB
Long story short… my question:
I’m looking for an example where this is being done. *
My guesses:
So far I’ve come to believe that accountA must be able to discover accountB’s transit_gateway_id after it gets created there (guessing it would need to read its remote_state?) – any other tips folks can offer?
* (I’ll be happy to contribute it as another multi-account example in the module once I get it working.)
Terraform module to provision AWS Transit Gateway, AWS Resource Access Manager (AWS RAM) Resource, and share the Transit Gateway with the Organization or another AWS Account.
It’s something of an intricate dance, creating the vpc attachment (tgw member), accepting the attachment (tgw owner), and creating routes in the tgw route tables (tgw owner), but it should be possible.
We’ve published our own module for tgw, which has individual sub-modules for each of those steps. You’re welcome to see if it suits your use case. Here’s the cross-account module, which is just composed of calls to the other submodules, which you could certainly split out in your own configs… https://github.com/plus3it/terraform-aws-tardigrade-transit-gateway/blob/master/modules/cross-account-vpc-attachment/main.tf
module "vpc_attachment" {
source = "../vpc-attachment"
subnet_ids = var.subnet_ids
transit_gateway_id = var.transit_gateway_id
cross_account = true
appliance_mode_support = var.appliance_mode_support
dns_support = var.dns_support
ipv6_support = var.ipv6_support
tags = var.tags
vpc_routes = [for route in var.vpc_routes : route if route.provider == "aws"]
}
module "vpc_accepter" {
source = "../vpc-accepter"
providers = {
aws = aws.owner
}
transit_gateway_attachment_id = module.vpc_attachment.vpc_attachment.id
vpc_routes = [for route in var.vpc_routes : route if route.provider == "aws.owner"]
tags = var.tags
transit_gateway_route_table_association = var.transit_gateway_route_table_association
transit_gateway_route_table_propagations = var.transit_gateway_route_table_propagations
# default association and propagation values must be:
# `false` if the transit gateway has no default route table (== "disable")
transit_gateway_default_route_table_association = (
data.aws_ec2_transit_gateway.this.default_route_table_association == "disable") ? false : (
var.transit_gateway_default_route_table_association
)
transit_gateway_default_route_table_propagation = (
data.aws_ec2_transit_gateway.this.default_route_table_propagation == "disable") ? false : (
var.transit_gateway_default_route_table_propagation
)
}
module "routes" {
source = "../route"
for_each = { for route in var.routes : route.name => route }
providers = {
aws = aws.owner
}
destination_cidr_block = each.value.destination_cidr_block
transit_gateway_attachment_id = module.vpc_accepter.vpc_attachment_accepter.id
transit_gateway_route_table_id = each.value.transit_gateway_route_table_id
}
data "aws_ec2_transit_gateway" "this" {
provider = aws.owner
id = var.transit_gateway_id
}
the tests/ directory has complete working examples
I’m wondering how much resource_id leakage there is across accounts… and if I’ll need to read state from the other account to accomplish what I’ve described.
(thank you for the link to your module, btw)
in the vpc attachment, it’s pretty much just the transit_gateway_id, and any vpc routes you want added to the member vpc.
and then in the vpc_accepter, you need the attachment id from the member vpc_attachment
I think I’m also wishing someone had a timeline-type diagram that describes the order of operations. It’s all still a bit confusing to me.
you could mock up an environment that uses that cross-account module, and generate a terraform graph, and use graphviz to plot it?
less of a resource graph, and more of a sequence diagram
mostly the same thing, since terraform is a DAG
but I guess a resource graph would show that
did you have an opinion on whether I’d need to use remote_state to read resource values from the other account?
it is confusing though, took me quite a while to work out the kinks in that module
no opinion really, no. i use terragrunt. you could use terraform_remote_state, or parameter store. or just hardcode the value (unless you’re expecting the tgw id to change frequently)
i ended up just hardcoding the value in terragrunt, rather than messing with the dependency block
yeah - that’s the notion I have coming into this – that some number of values/strings need to be shared across terraform runs
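For completeness, a hedged sketch of the terraform_remote_state option mentioned above; the bucket, key, and output names are hypothetical and depend on how the owning account’s config is laid out:
# In the member-account config: read the TGW id exported by the owner account's state
data "terraform_remote_state" "tgw_owner" {
  backend = "s3"

  config = {
    bucket = "owner-account-tfstate"
    key    = "network/transit-gateway/terraform.tfstate"
    region = "us-east-1"
  }
}

locals {
  # assumes the owner config defines: output "transit_gateway_id" { ... }
  transit_gateway_id = data.terraform_remote_state.tgw_owner.outputs.transit_gateway_id
}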
sure thing!
Our root module implementation of TGW using our modules is here: https://github.com/cloudposse/terraform-aws-components/tree/master/modules/tgw
Back in the day, we did it the way you described, but the DX was poor and more complicated. Setting it up this way allows us to keep the state of each TGW spoke separate, but also automate the end-to-end connection in one fell swoop.
Using IAM assumed roles, it’s able to apply the changes in both accounts.
Absolutely, this:
Setting it up this way allows us to keep the state of each TGW spoke separate, but also automate the end-to-end connection in one fell swoop.
That is also why we keep this cross-account piece together, in one tfstate
Thanks for sharing your experience @Erik Osterman (Cloud Posse)! – it’s super useful to hear (ahead of time) that my design is actually a footgun…