#terraform (2023-01)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2023-01-01

Adnan avatar

Hi. With sso and cross-account access, some of the options are:

• use users, groups and permission sets

• use users, groups and custom roles across accounts

Both can be managed via Terraform, but what might be the benefits or drawbacks of each? Why would we choose one over the other?

aimbotd avatar
aimbotd


use users, groups and permission sets
This generally works well when you have one or two accounts, but more than that and you’re gonna start running into management issues. Operations will have a hard time keeping that list curated as people come and go. Security would be cumbersome. As a user, you’d have to maintain multiple API keys and passwords. Switching accounts would suck.

It’s easier to manage policies and roles than users, so just deploy those via TF into the accounts.
use users, groups and custom roles across accounts
IMO, a much better way to travel. One account manages only the users. Accessing accounts is much cleaner since you have a single set of credentials but n roles to leverage. This helps curate least-privileged access across the board. You can move around accounts from within the UI using the roles: log into the identity account and then use a collection of links to different accounts to seamlessly navigate around. If a user’s creds get compromised, the threat is reduced to the point where it doesn’t really matter much, unless the attacker knows what roles they have access to and what accounts they can access. (I am forgetting if the user needs permission to list roles to assume a role.)
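For reference, the target-account side of that pattern is just a role whose trust policy points back at the identity account; a minimal sketch (the account ID and role name are hypothetical):

resource "aws_iam_role" "cross_account_admin" {
  name = "cross-account-admin"

  # Trust the identity account so its users can assume this role (with MFA)
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { AWS = "arn:aws:iam::111111111111:root" }
      Condition = { Bool = { "aws:MultiFactorAuthPresent" = "true" } }
    }]
  })
}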

Adnan avatar

@aimbotd thanks for the feedback. Is this all related to users and groups with AWS SSO (IAM Identity Center)?

2023-01-02

Vinko Vrsalovic avatar
Vinko Vrsalovic

~ task_definition = "arn:aws:ecs:eu-north-1:116778842250:task-definition/app-back-dev:23" -> "arn:aws:ecs:eu-north-1:116778842250:task-definition/app-back-dev:20"

I added some CI/CD deployment of ECS tasks on Github Actions. It creates the new version based on the code using the aws CLI. When I come back to terraform to change something else, I see the above, so it downgrades the task version, even after a terraform refresh

Vinko Vrsalovic avatar
Vinko Vrsalovic

I think the problem is that the value is stored in the task definition, which is what’s rebuilt in Github Actions and Terraform is not catching up even after a refresh. Can I somehow import the Github Actions task definition into my terraform state? Refresh is not cutting it

RB avatar

You can use a lifecycle ignore_changes block in terraform to ignore those changes

Vinko Vrsalovic avatar
Vinko Vrsalovic

Apply the ignore to the task_definition, you mean?

Vinko Vrsalovic avatar
Vinko Vrsalovic

on the ecs_service I can add ignore task_definition
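A minimal sketch of that (the service arguments are placeholders):

resource "aws_ecs_service" "app" {
  name            = "app-back-dev"                  # hypothetical
  cluster         = aws_ecs_cluster.main.id         # hypothetical reference
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 1

  lifecycle {
    # Let CI/CD own the task definition revision; Terraform will no longer
    # plan a rollback to the revision recorded in state.
    ignore_changes = [task_definition]
  }
}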

Vinko Vrsalovic avatar
Vinko Vrsalovic

seems to work for now

Vinko Vrsalovic avatar
Vinko Vrsalovic

thanks

2023-01-03

Jonas Steinberg avatar
Jonas Steinberg

In case anyone has ever wondered: a Terraform Cloud annual contract provisioned for about 10K - 15K applies per month works out to approximately *$40 USD/apply*. That’s not a guess.

Giomar Garcia avatar
Giomar Garcia

is that 40 USD/apply directly proportional to the 15,000? Doing the math, that's 600K USD; is that correct?

btw happy new year man! glad to hear from you!

Jonas Steinberg avatar
Jonas Steinberg

@Giomar Garcia haha hey brother, glad to hear from you as well. Yep, you got it. Literally $40/apply => $600K USD.

Jonas Steinberg avatar
Jonas Steinberg

Does anyone have recommendations on undrifting a “medium-sized” number of resources across numerous environments? This would be something on the order of say between 5 and 10 resource configurations that are drifted from current state across about 15 environments. I’m not sure if terraform import will help in this case? Also not sure if there is a better option.

Chris Dobbyn avatar
Chris Dobbyn

Drift detection is important (Terraform Cloud has this); beyond that it’s a matter of whack-a-mole till a plan has no changes. Sometimes this will be a state rm and import, sometimes a state mv, but hopefully it’s just changing config to align. I haven’t found a better way across hundreds of states. Make liberal use of --target on your plans to limit their scope (and increase speed).

Jonas Steinberg avatar
Jonas Steinberg

@Chris Dobbyn Drift detection is important. So for you this isn’t an idealism? By this I mean: you have an alerting system set up where your TACO runs and posts to a slack channel or whatever every time state drifts? Or you have drift detection set up to auto-resolve back to what’s in state storage? I’m definitely not a fan of the latter, because I have repeatedly found that what happens in that case is people learn to shut off the --auto-resolve feature, and it becomes drift detection in name only, and then things get out of control.

I really appreciate the state surgery takes above because I’m thinking that’s really my only option!

Chris Dobbyn avatar
Chris Dobbyn

Drift detection will detect and notify, I do not trust it to resolve.

Jonas Steinberg avatar
Jonas Steinberg

Hm. Well in spacelift I don’t think it notifies out of the box, I believe you have to hand-jam a webhook, effectively. I think it’s the same for TFC, but I could be wrong there.

Jonas Steinberg avatar
Jonas Steinberg

So in terraform > 0.15.3 there is terraform plan -refresh-only, which will apparently print only the drift to STDOUT. FYI.

David avatar

Hi @Jonas Steinberg I’m a noob in this and don’t have any solutions to your question but I’m curious as to how this drift occurs

David avatar

From a very simplistic view, I’d assume if the state is hosted remotely, a few people given access to the state and TF server and only the TF server has credentials to apply changes, there shouldn’t be any drift

David avatar

by a few people, I mean the team, and since the state is shared, then each change is applied from a single point/ state

Jonas Steinberg avatar
Jonas Steinberg

@David There is a difference between theory/ideology and practice. In theory and ideology, yes: state, as you’ve implied above, should never drift. In practice, “humans gonna human” and will change stuff in the cloud console or wherever for various reasons: during an outage, because they didn’t understand how those cloud objects were created but still had access to the cloud console via federation, or because people were “testing stuff” and forgot to revert.

It is possible to have orgs without drift. One such department at my current shop has no drift and they’re a large group; they have a sibling department that is much smaller and has tremendous drift.

If you work with terraform for any length of time you’re going to encounter drift.

Here are some resources in case you’re interested in learning about what it is, how it happens and how to deal with it:

  1. https://developer.hashicorp.com/terraform/tutorials/state/resource-drift
  2. https://www.hashicorp.com/resources/the-state-of-infrastructure-drift-what-we-learned-from-100-devops-teams
  3. https://developer.hashicorp.com/terraform/cli/commands/plan#planning-modes
  4. https://developer.hashicorp.com/terraform/tutorials/state/refresh
  5. https://developer.hashicorp.com/terraform/cli/state

The most important aspect of recovering from drift is understanding how to safely manipulate state.
David avatar

Oh, very true, thanks a lot for the clarity

David avatar

and awesome saying, “humans gonna human”

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, AWS changes/improves their API all the time, terraform changes their API often, some things go obsolete or get deprecated, and some things that are in TF state change inside AWS (e.g. a resource state changes and TF sees the diff). With many stacks you will always have drift almost every day, and that’s a big issue b/c all of those reasons require different approaches to reconcile the drift

David avatar

oh, true, thanks for the info @Andriy Knysh (Cloud Posse)

David avatar

did some research on generating terraform code from existing infra, seems there are a number of articles on this, but no real solution except for terraformer for GCP

David avatar

the above might be helpful @Jonas Steinberg

Jonas Steinberg avatar
Jonas Steinberg

@David Thanks. I’m very familiar with terraformer. FYI GCP has a native --export-to-terraform command that can be used as a bulk export operation, so terraformer isn’t even needed.

RO avatar
I am sure you guys know all of it, so here is my bit; if there is any mistake I would happily like to see your comments so I can learn more or correct a misconception:

  1. Understand the current state of the resources across all environments: Before attempting to recover from drift, it is essential to understand the current state of the resources in each environment. This can be done by using Terraform’s “terraform state show” command to view the state file for each environment. This will give an overview of the current state of the resources and their configurations in each environment.
  2. Identify and document the resources that are drifted: After understanding the current state of the resources, the next step is to identify and document the resources that have drifted. This can be done by comparing the state files for each environment against the configuration files that were used to create the resources in the first place.
  3. Create a plan of action: Once the resources that have drifted have been identified, the next step is to create a plan of action to recover from the drift. This plan should include a list of the resources that need to be updated, the desired state for each resource, and the steps that will be taken to bring the resources back to their desired state.
  4. Use the terraform import command to recover from drift: The terraform import command can be used to safely manipulate state in order to recover from drift. The import command can be used to update the state file with the desired state of the resources. This can be done by providing the import command with the resource address and the id of the resource. For example: “terraform import <resource_address> <resource_id>”
  5. Test the resources in a staging environment: Before implementing the plan in all environments, it’s a good practice to test the changes in a staging environment first. This will allow you to verify that the changes will work as intended and fix any issues that may arise before implementing the plan in the production environment.
  6. Implement the plan in all environments: Once the changes have been
RO avatar

terraform state list will return a list of all the resources that terraform is managing. If you find that a specific resource has drifted, you can use the terraform import command to update the state file to the desired state. For example, to import an S3 bucket with the resource address aws_s3_bucket.example and the resource ID example-bucket, you would use the following command: terraform import aws_s3_bucket.example example-bucket (screenshot, as it didn’t fit the line here)
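For the import to succeed, a matching resource block already has to exist in the configuration; a minimal sketch (bucket name taken from the example above):

resource "aws_s3_bucket" "example" {
  # `terraform import aws_s3_bucket.example example-bucket` attaches the
  # existing bucket to this block; arguments are then filled in to match it
  bucket = "example-bucket"
}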

RO avatar

In this example, it’s creating a plan of action for the S3 bucket resource that has drifted. To create the plan, you might want to take a look at the current state of the resource, so you can check what changes to make to reach the desired state of the resource.

RO avatar

Once you know what changes need to be made to bring the resources back to the desired state, you can test the changes in a staging environment before implementing them in production. For example, you can use the terraform plan command to check what changes will be made to the S3 bucket resource, then run terraform apply when confident that it’s good to go. After implementing the plan, you should continuously monitor and maintain the resource to ensure that it remains in the desired state. You can run the terraform plan command periodically to check for drift and terraform apply if necessary. Also, before importing it is always a good idea to make a backup of your state file, so that in case things go wrong you can roll back to a working state: terraform state pull > tf-state-backup.json

RO avatar

For those of you who have more experience: is this process correct?

2023-01-04

Eric avatar

Is there a good mechanism for “booping” a bug report?

Release notes from terraform avatar
Release notes from terraform
03:43:37 PM

v1.3.7 1.3.7 (January 04, 2023) BUG FIXES: Fix exact version constraint parsing for modules using prerelease versions (#32377) Prevent panic when a provider returns a null block value during refresh which is used as configuration via ignore_changes (#32428)…

Use the apparentlymart/go-versions library to parse module constraints by liamcervante · Pull Request #32377 · hashicorp/terraform

Update the module constraint checking logic to use the apparentlymart/go-versions library instead of the hashicorp/go-versions library. This copies the logic of the provider installer which relies …

don't panic with a null list block value in config by jbardin · Pull Request #32428 · hashicorp/terraform

Using ignore_changes with a list block, where the provider returned an invalid null value for that block, can result in a panic when validating the plan. Future releases may prevent providers from …

Ray Botha avatar
Ray Botha

Is anyone managing google workspace in terraform? Noticed they released a provider but it’s not really clear to me in what situation that would be a good pattern

Eric avatar

Seems like it’d be nice for managing user on- and off-boarding at a minimum. Especially if you’re using it for auth for external services

2023-01-05

2023-01-06

James Clinton avatar
James Clinton

Hello * - does anyone know how to configure this ALB module in Terraform to set Target Group attributes such as stickiness? I have multiple instances in the group and need to pin requests to the healthy instance they first land on. https://github.com/terraform-aws-modules/terraform-aws-alb

James Clinton avatar
James Clinton

Thank you Warren, if this helps others using this module - great.

target_groups = [
    {
      .
      .
      .
      stickiness = {
        type            = "lb_cookie"
        cookie_duration = 86400
        enabled         = true
      }

      health_check = {
        healthy_threshold   = 2
        unhealthy_threshold = 2
        timeout             = 15
        interval            = 6
        path                = "/healthCheck"
        port                = 80
        matcher             = "200"
      }
    }
  ]
RO avatar

• To configure an Application Load Balancer (ALB) in Terraform to set Target Group attributes such as stickiness, you’ll need to use the terraform-aws-alb module. This module allows you to create an ALB and its associated resources, such as listeners and target groups, in Terraform.
• To use the terraform-aws-alb module, you’ll need to:

  1. Add the module to your terraform configuration file main.tf:

module "alb" {
  source  = "terraform-aws-modules/alb/aws"
  version = "2.2.0"
}

  2. Then configure the attributes of the target group according to your needs:

resource "aws_alb_target_group" "example" {
  name     = "example"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  stickiness {
    type            = "lb_cookie"
    enabled         = true
    cookie_duration = 1800
  }
}

• name: the name of the target group
• port: the port the target group listens on
• protocol: the protocol the target group uses
• vpc_id: the VPC the target group belongs to
• stickiness: the stickiness attributes of the target group
  ◦ type: determines the type of stickiness that is used. In this case, it’s “lb_cookie”, which uses a cookie to ensure that requests are sent to the same instance.
  ◦ enabled: enable stickiness for the target group
  ◦ cookie_duration: the time, in seconds, for which requests are sent to the same instance.

  3. Add instances to the target group:

resource "aws_alb_target_group_attachment" "example" {
  target_group_arn = aws_alb_target_group.example.arn
  target_id        = aws_instance.example.id
  port             = 80
}

• target_group_arn: the ARN of the target group to which the instances are added.
• target_id: the ID of the instance that is added to the target group.
• port: the port the instances listen on.

  4. Define a listener on the ALB that forwards requests to the target group:

resource "aws_alb_listener" "example" {
  load_balancer_arn = aws_alb.example.arn
  protocol          = "HTTP"
  port              = 80

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.example.arn
  }
}

• load_balancer_arn: the ARN of the ALB
• protocol: the protocol the listener uses
• port: the port the listener listens on
• default_action: the action that is taken when a request is received
  ◦ type: the type of action. In this case, it’s “forward”, which forwards requests to the target group.

2023-01-08

olad avatar

Hello,

Is there any mechanism in Terraform that lets you run some tasks in between terraform plan and terraform apply? Trying to enforce some policies (not via Sentinel) before provisioning AWS resources.

An example task could be validating a Change Request number, or validating that an image in ECR is scanned & blessed; once validated successfully, then proceed from plan to apply. Appreciate any info.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not really a good way to do this natively in terraform. There’s a shell provider that might do the job

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

The way most people do this is with their workflow automation tool.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can use atmos for this or #terragrunt or things like Makefiles

olad avatar

thanks @Erik Osterman (Cloud Posse) we have some controls already in Jenkins before triggering plan/apply. But there are some folks using VCS directly to connect to Terraform Cloud. So I was wondering how to bake in controls on VCS runs. sounds like we have to force the VCS folks to use workflow automation tool like jenkins

jsreed avatar

bash script

2023-01-09

2023-01-10

jose.amengual avatar
jose.amengual

Has anyone here implemented/used drift detection for IaC (terraform-based)? What was the user flow, did it work well, and if not, why? Was auto-remediation a thing?

Evanglist avatar
Evanglist

I have implemented drift detection in my org. The flow is:

• there is a script (bash) which gets the module name, account name, and product (unique) name, and sends an asynchronous call to lambda.

• Lambda receives the payload, runs the terraform plan, and stores the plan output in an s3 bucket which is rewritten every day (the drift detection framework runs every day). Our lambda code is written using a custom runtime (https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html)

Custom AWS Lambda runtimes - AWS Lambda

You can implement an AWS Lambda runtime in any programming language. A runtime is a program that runs a Lambda function’s handler method when the function is invoked. You can include a runtime in your function’s deployment package in the form of an executable file named

jose.amengual avatar
jose.amengual

and how do you deal with the drift?

jose.amengual avatar
jose.amengual

what is the User interaction if any

Evanglist avatar
Evanglist

planning and applying, so it will remove any drift. There are some places where there is a genuine need for change, so that gets coded in terraform regularly based on reviews

jose.amengual avatar
jose.amengual

I see, ok, so it auto-remediated it, cool

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We do in spacelift, does that count? It shows up as a proposed terraform run that should get applied. Separately, we have policies to auto confirm changes in non-production environments. So that would then auto remediate.

jose.amengual avatar
jose.amengual

it counts, any product workflow is welcome in this thread

Joe Perez avatar
Joe Perez

if you’re using github actions to run terraform, you can extend it to include drift detection on a schedule. I wrote about it here https://www.taccoform.com/posts/tfg_p3/

Drift Detection with Github Actions

Overview It’s Friday at 5PM and an urgent request comes in from the Chief Infotainment Officer. You know exactly what you have to update in terraform and create a new feature branch to make the changes. You’re rolling through the lower environments and finally get to production. You run a terraform plan and unexpected resource deletions show up in the output. You panic a little, then realize that your work didn’t trigger the deletion.

Joe Perez avatar
Joe Perez

you also don’t need github actions and can use the same solution in your preferred automation system as a cron-like task

loren avatar

A scheduled plan is how we do it also, using -detailed-exitcode to fail the run if any drift is detected

jose.amengual avatar
jose.amengual

interesting

jose.amengual avatar
jose.amengual

we are planning on adding this to Atlantis and I’m gathering ideas to implement it

Mohammed Yahya avatar
Mohammed Yahya

I used https://driftctl.com/ and passed down multiple state files, across all project stacks. Seems promising, even knowing that it still has a big list of unsupported services, but for the basics it’s working really well

driftctl

Catch infrastructure drift

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)


A scheduled plan is how we do it also, using -detailed-exitcode to fail the run if any drift is detected
@loren how are you reporting on it? Are you just relying on status check badges (:x: ) in the commit log against main or did you do some other instrumentation?

loren avatar

nothing so fancy… no “dashboard” where we’d get a visual for such a failed status check… no, just a webhook to a slack channel. job runs in the ~morning, so we see it as we sign in

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Makes sense - that’s a simple enough solution

SergeiV avatar
SergeiV

Jenkins job that runs on a schedule and goes through the terraform repository looking for version files. The existence of such a file means the location holds a terraform plan. For each folder where it finds that file, it runs terraform plan -detailed-exitcode and sends a message to Slack with a summary of failed or drifting plans

sripe avatar

hello all, looking for some guidance here. I’m trying to deploy a k8s resource using terraform as below and have been sending karpenter_instance_family = ["r5d","r5ad","r6gd","r6id"] as input to this, and I’m struggling to get this to work. Any suggestions please? Error in the thread

resource "kubectl_manifest" "default-pa" {
  yaml_body = <<YAML
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default-pa
spec:
  requirements:
    - key: karpenter.k8s.aws/instance-family
      operator: In
      values: ${var.karpenter_instance_family}
  limits:
    resources:
      cpu: 1000
  provider:
    launchTemplate: 'karpenter'
    subnetSelector:
      kubernetes.io/cluster/karpenter: '*'
  labels:
    type: karpenter
    provisioner: default-pa
  ttlSecondsAfterEmpty: 30
  ttlSecondsUntilExpired: 2592000
  YAML
}

variable "karpenter_instance_family" {
  description = "Instance family"
  type        = list(string)
  default     = []
}
sripe avatar
│  177:     type: karpenter
│  178:     provisioner: default-pa
│  179:   ttlSecondsAfterEmpty: 30
│  180:   ttlSecondsUntilExpired: 2592000
│  181:   YAML
│     ├────────────────
│     │ var.karpenter_instance_family is list of string with 4 elements
│ 
│ Cannot include the given value in a string template: string required.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
jsonencode(["r5d","r5ad","r6gd","r6id"])
yamlencode(["r5d","r5ad","r6gd","r6id"])
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
values: ${jsonencode(var.karpenter_instance_family)}
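Applied to the original manifest, that looks something like this (a sketch of the relevant portion only):

resource "kubectl_manifest" "default-pa" {
  yaml_body = <<YAML
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default-pa
spec:
  requirements:
    - key: karpenter.k8s.aws/instance-family
      operator: In
      # jsonencode renders ["r5d","r5ad","r6gd","r6id"], which is valid
      # YAML flow-sequence syntax for a list of strings
      values: ${jsonencode(var.karpenter_instance_family)}
  # ... rest of the spec unchanged ...
YAML
}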
RO avatar

I know way less about it than everyone here, but researching that I found the following; don’t know if it is useful…

RO avatar

The issue is that the karpenter_instance_family variable is defined as a list of strings, and the provisioner resource yaml_body uses it directly as a list without any manipulation; this breaks the YAML formation, and it will not work. To fix this issue, I would suggest modifying the resource yaml_body: you need to convert the karpenter_instance_family list into a string and then add it to the yaml_body. Here is an example of how you can fix it.

RO avatar

resource "kubectl_manifest" "default-pa" {
  yaml_body = <<YAML
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default-pa
spec:
  requirements:
    - key: karpenter.k8s.aws/instance-family
      operator: In
      values: [${join(",", var.karpenter_instance_family)}]
  limits:
    resources:
      cpu: 1000
  provider:
    launchTemplate: 'karpenter'
    subnetSelector:
      kubernetes.io/cluster/karpenter: '*'
  labels:
    type: karpenter
    provisioner: default-pa
  ttlSecondsAfterEmpty: 30
  ttlSecondsUntilExpired: 2592000
YAML
}

variable "karpenter_instance_family" {
  description = "Instance family"
  type        = list(string)
  default     = []
}

RO avatar

In case the above is confusing: to fix the issue, I added the [${join(",", var.karpenter_instance_family)}] statement in the values field. This statement joins the list of values in the karpenter_instance_family variable and separates them with commas. This ensures that the values are passed as a list of strings instead of a single string, which is the correct format for this field, and makes sure the manifest receives input matching the format the field expects.

sripe avatar

thank you, it worked both ways

RO avatar

Let me know if you have more questions; it’s good because it gives me more scenarios that I can learn from as well.

2023-01-11

David Lundgren avatar
David Lundgren

Would I be able to get a review for https://github.com/cloudposse/terraform-aws-ec2-instance/pull/148 I currently have to manually set the ssm_patch_manager_iam_policy_arn and can’t use the ssm patch s3 bucket (as-is) easily on govcloud instances

David Lundgren avatar
David Lundgren

Sorry, not sure on the best place to get eyes on this PR.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

approved and merged, thanks @David Lundgren

mado avatar

I want to prepare for terraform certification, any recommendations? Material or guide welcome!

2023-01-12

Bart Coddens avatar
Bart Coddens

Hi all, I am using the module terraform-aws-modules/vpc/aws and here I define one public subnet:

Bart Coddens avatar
Bart Coddens

public_subnets = ["10.10.1.0/24"]

Bart Coddens avatar
Bart Coddens

When I want to refer to this in my ec2 instance, it fails

Bart Coddens avatar
Bart Coddens

subnet = module.vpc.public_subnets

Bart Coddens avatar
Bart Coddens

Am I missing something?
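For reference, public_subnets is a list output and the aws_instance argument is subnet_id, so the reference usually needs to look something like this (a sketch; the other arguments are placeholders):

resource "aws_instance" "this" {
  ami           = data.aws_ami.ubuntu.id # hypothetical
  instance_type = "t3.micro"

  # public_subnets is a list of subnet IDs: pick one element,
  # and note the argument name is subnet_id, not subnet
  subnet_id = module.vpc.public_subnets[0]
}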

Marco Frömbgen avatar
Marco Frömbgen

Hello everyone! I would like to get your opinion on the following issue and proposed implementation if anyone has time(https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/184).

#184 make aws_ecs_service `name` configurable

Describe the Feature

It would be a benefit if you could change the label_order for all aws_ecs_service resources.

Expected Behavior

ECS service name should be independent.

Use Case

If you pass a lot of context down to your module, it will bloat your service name very quickly and make it unfriendly to read (especially inside the AWS console). Modifying the whole context isn’t a solution because it would change the other resource names and tags as well, which is not ideal, or even not possible if you have, for example, multiple environments with the same service name.

Describe Ideal Solution

I would like to have a variable that allows me to modify only the name of the ecs service name.

Alternatives Considered

Alternatives to #183 could be to just introduce a variable only for the aws_ecs_service name.

Additional Context

Scenario: You have one AWS account for your SDLC(software development life cycle) in which you have an ECS cluster for each stage or environment. Your cluster name would be something like namespace-environment or namespace-environment-stage.

Current implementation

Input:

  namespace   = "test"
  environment = "sandbox1"
  stage       = "stage1"
  name        = "myservice1"
  attributes  = ["consumer"]

Result:

name = test-sandbox1-stage1-myservice1-consumer

In that case, you will have a lot of bloated prefixes that don’t provide any value.

Recommended implementation

Input:

  namespace   = "test"
  environment = "sandbox1"
  stage       = "stage1"
  name        = "myservice1"
  attributes  = ["consumer"]
  ecs_service_label_order = ["name", "attributes"]

Result:

name = myservice1-consumer

In that case, you would modify the id without losing out on the tags but have a much more useful and easy-to-view name

Important
You don’t want to modify the inputs in general because you will need them for other resources like aws_iam_role with their full name (id) on a multi-environment or stage account.

Ramon de la Cruz Ariza avatar
Ramon de la Cruz Ariza

Hello!! Will this issue be addressed? https://github.com/cloudposse/terraform-aws-documentdb-cluster/issues/19 we can’t attach more than 1 SG in the VPC side for DocumentDB clusters

Thanks!!

#19 Add possibility for adding several security groups

Describe the Feature

Currently there is no way to have several security groups on the created cluster.
I would like to be able to add extra security groups, not only as rules in the main security group but as separate rules.

Expected Behavior

The resource for security group can take a list, so the current only security group that is added to the cluster should basically be made into a merged list of that and an input list variable with more ids.

So solution is an optional list variable with security groups, so that the resource can be setup/modified both with and without those extra groups.

Use Case

For example Glue requires that both the job and the database have the same security group (with ALL traffic open) attached to both. So more security groups need to be added. Now we have to add them manually, and they get removed with each terraform apply and need to be re-added; quite annoying..

Alex Jurkiewicz avatar
Alex Jurkiewicz

this isn’t official support, so we can’t answer questions like “will this issue be addressed?”

venkata.mutyala avatar
venkata.mutyala

I think the cloudposse team is pretty quick to review PRs. Might be worth asking for it to be assigned to you and then just submitting a PR with the proposed changes.

2023-01-15

2023-01-16

Stephen Bennett avatar
Stephen Bennett

Hi, is there a simple way to make a terraform resource with a dynamic lifecycle policy?

ie: I want my production s3 buckets to have prevent_destroy = true and all others to have it false. I’ve tried variables and adding a count = to create it, but both error.

RB avatar

lifecycles cannot be related to an input, afaik
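i.e. something like this fails at plan time (a minimal sketch):

variable "protect" {
  type    = bool
  default = true
}

resource "aws_s3_bucket" "this" {
  bucket = "example-bucket" # hypothetical

  lifecycle {
    # Terraform rejects this with the infamous "Variables may not be used
    # here." error: lifecycle arguments must be literal values
    prevent_destroy = var.protect
  }
}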

Stephen Bennett avatar
Stephen Bennett

i got a working example going with something like:

variable "destroy" {
  default = false
}

resource "aws_security_group" "safe_sg" {
  count = var.destroy ? 1 : 0
  lifecycle {
    prevent_destroy = true
  }
  name        = "allow_tls"
  ...
}

resource "aws_security_group" "unsafe_sg" {
  count = var.destroy ? 0 : 1
  lifecycle {
    prevent_destroy = false
  }

  name        = "allow_tls"
  ...
}
Stephen Bennett avatar
Stephen Bennett

but now trying to take this into an s3 bucket is fun

RB avatar

oh I thought you were trying to do something like prevent_destroy = var.some_input

Stephen Bennett avatar
Stephen Bennett

yeah that failed real quick! I was trying to find a workaround since that doesn’t work..

RB avatar

yes, if you instantiate a completely new resource with a hard coded prevent_destroy then it should work.

Stephen Bennett avatar
Stephen Bennett

sorry, probably poorly worded

RB avatar

would a local like this work?

locals {
  security_group_id = var.destroy ? one(aws_security_group.safe_sg).id : one(aws_security_group.unsafe_sg).id
}

output "security_group_id" {
  value = local.security_group_id
}
Alex Atkinson avatar
Alex Atkinson

Checkout this dynamic backend configurator with env0. Can’t put vars in backend config, it’s fine. :) https://github.com/env0/custom-flows-examples/blob/main/dynamic-backend/README.md

env0 Custom Flow: Dynamic Backend

This project custom flow facilitates dynamically generated backend configurations, enabling a one-to-many posture for a Terraform codebase and mitigates all discipline dependency surrounding workspace naming schemas, as workspaces are not used with this kit.

Features:

• Normalization of env0 “Environment”/stack names for use in the backend key/prefix.
• Dynamically constructed s3 bucket name. IE: ‘--tf-state-'.
• Dynamically constructed key/prefix. IE: '//state'
• Dependency Checks.
• Backend key/prefix collision prevention.
• Disabling of env0 default workflow handling.
• Garbage collection upon "Environment"/stack destroy action.

This will be of immediate value to anyone familiar with the infamous “Variables may not be used here.” error thrown when attempting to use a variable in the backend configuration.

One point of particular value is how this enables permission policies to be formed. For example, an s3 bucket policy might permit access to specific workload environment states based on prefix, while restricting others. You can also create two env0 projects for each workload env, one for C&C where sensitive state data might reside, and one for less sensitive stacks. For example, you might run an iam user module in a project called dev-cmd, and a vpc module in dev, allowing all to see the outputs for the vpc, as they’re typically required as data sources for other modules.

Note specifically that adjusting this kit to work with other remote backends, or as stack specific kit should be trivial.

Remember, this is only an example. While it will work out of the box as described, feel free to adjust it as necessary to suit your particular use case.

Usage

This example directory structure is setup to mimic what an actual Terraform repository might look like, including a directory for the env0 custom flow, and a modules directory. Yes, a single output is a valid module, and will suffice to demonstrate this kit.

  1. Include the custom-flow code in your repo as described below.
  2. Add the env0 project level environment variables as described below.
  3. Configure the custom flow as a project policy as described below.
  4. Run the project.

Dependencies

Assuming you have a project setup with the necessary AWS credentials to use an s3 backend, the remaining dependencies are as follows:

Code Repository

Add this example code to the repository containing your Terraform modules.

Suggested location:

/env0/custom_flows/dynamic_backend/

NOTE: If another path is used, adjust ‘env0.yaml’ and ‘dynamic_backend_configurator.sh’ accordingly.

Project Environment Variables

The following environment variables MUST be set at the project level.

• ENV0_SKIP_WORKSPACE
  ◦ Must be set to ‘true’
• BACKEND_S3_BUCKET
  ◦ The s3 bucket to use.
  ◦ The credential configured for the project must have appropriate permissions.
• PRODUCT_GROUP_STUB
  ◦ The OU or product group initials. This will be used to construct the key/prefix.
• WORKLOAD_ENVIRONMENT
  ◦ The workload environment, such as dev, qa, etc. This will be used to construct the key/prefix.

Project Policy: Custom Flow

Configure the project level custom flow as described here.

Note specifically that env0 requires a specific yaml file to be defined, and not a directory.

Assuming this custom flow was included in the code repository as described here, the ‘ENV0_ROOT_DIR’ envar will ensure relativistic sanity between the custom flow working directory and the dependent scripts.

Soren Jensen avatar
Soren Jensen

I’ve got an issue with the S3 etag. I’m building a custom authorizer lambda function like this, to make sure it’s always rebuilt. The build.sh checks out another repo and builds the lambda zip.

resource "null_resource" "custom_authorizer_build" {
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    command = "bash ${path.module}/../build.sh"

The zip is then uploaded to an S3 bucket

resource "aws_s3_object" "custom_authorizer_s3_object" {
  bucket = aws_s3_bucket.lambda_bucket.id

  key    = "custom-authorizer.zip"
  source = "${path.module}/../dist/custom-authorizer/custom-authorizer.zip"

  etag = fileexists("${path.module}/../dist/custom-authorizer/custom-authorizer.zip") ? filemd5("${path.module}/../dist/custom-authorizer/custom-authorizer.zip") : ""
  
  depends_on = [
    null_resource.custom_authorizer_build
  ]
}

In the aws_lambda_function I’ve got the following line: source_code_hash = aws_s3_object.custom_authorizer_s3_object.etag. Applying the terraform code I end up with the following error on the etag

╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for aws_s3_object.custom_authorizer_s3_object to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an invalid
│ new value for .etag: was cty.StringVal("2be93e215ac8a6f3b243ceac4550e124"), but now cty.StringVal("36b50bace3da160d46f259118d41d359").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵ 

Anyone got an idea why I get this error? I’ve got a feeling it’s because the zip exists and the value is taken before the build is complete, but I’m not sure.. Any help will be much appreciated

Alex Jurkiewicz avatar
Alex Jurkiewicz

Generally, trying to run procedural things like this inside Terraform is an anti-pattern. It’s not designed for this and leads to pain like you’re experiencing.

Instead, I’d suggest you wrap your build process in a script like ./deploy.sh which does:

../build-lambda.sh
terraform apply -var lambda_zipfile=../dist/lambda.zip

Then Terraform can statically know the etag and so on.
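On the Terraform side that might look like this (a sketch; the variable name follows the example above):

variable "lambda_zipfile" {
  description = "Path to the pre-built lambda zip"
  type        = string
}

resource "aws_s3_object" "custom_authorizer_s3_object" {
  bucket = aws_s3_bucket.lambda_bucket.id
  key    = "custom-authorizer.zip"
  source = var.lambda_zipfile

  # The zip exists before plan starts, so the hash is known up front and
  # the provider can't produce an inconsistent final plan
  etag = filemd5(var.lambda_zipfile)
}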

Soren Jensen avatar
Soren Jensen

We already got a deploy script wrapping the terraform actions. Interesting idea to move the build out.

John Stilia avatar
John Stilia

same here on terraform cloud with aws provider on 4.50.0

John Stilia avatar
John Stilia
#17860 Provider produced inconsistent final plan `aws_s3_bucket_object`

Community Note

• Please vote on this issue by adding a :+1: reaction to the original issue to help the community and maintainers prioritize this request • Please do not leave “+1” or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

terraform - 0.13.5
terragrunt - 0.25.5
AWS provider - latest, even tested with v3.24.0 also.

Affected Resource(s)

• aws_s3_bucket_object

Terraform Configuration Files

Example Terraform code

terraform {
  required_version = ">= 0.13"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">=3.15.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}


resource "aws_security_group" "cluster_security_group" {
  name        = "cluster-sg-connectivity"
  vpc_id      = var.vpc_id
  tags = {
    Name   = "cluster-sg-connectivity",
    source = "terraform"
  }
}


module "primary_subcluster_node" {
  source = "git::<https://github.com/gruntwork-io/terraform-aws-server.git//modules/single-server?ref=v0.9.4>"

  count                         = var.primary_subcluster_node_count
  name                          = "primary_subcluster_node"
  iam_role_name                 = "primary-role-${count.index}"
  instance_type                 = "c5d.4xlarge"
  ami                           = <replace_ami>
  keypair_name                  = var.instance_key_pair_name
  user_data_base64              = data.template_cloudinit_config.userdata.rendered
  root_volume_type              = "standard"
  root_volume_size              = 30
  vpc_id                        = var.vpc_id
  subnet_id                     = var.subnet_id
  additional_security_group_ids = [aws_security_group.cluster_security_group.id]
  allow_ssh_from_cidr_list      = []
  allow_all_outbound_traffic    = false
  attach_eip                    = false
  tags = {
    Name   = "${local.primary_subcluster_node_name}-${count.index}"
    source = "terraform"
  }
}

module "secondary_subcluster_node" {
  source = "git::<https://github.com/gruntwork-io/terraform-aws-server.git//modules/single-server?ref=v0.9.4>"

  count                         = var.secondary_subcluster_node_count
  name                          = "secondary_subcluster_node"
  iam_role_name                 = "secondary-role-${count.index}"
  instance_type                 = "c5d.4xlarge"
  ami                           = <replace_ami>
  keypair_name                  = var.instance_key_pair_name
  root_volume_type              = "standard"
  root_volume_size              = 30
  vpc_id                        = var.vpc_id
  subnet_id                     = var.subnet_id
  additional_security_group_ids = [aws_security_group.cluster_security_group.id]
  allow_ssh_from_cidr_list      = []
  allow_all_outbound_traffic    = false
  attach_eip                    = false
  user_data_base64              = data.template_cloudinit_config.userdata.rendered
  tags = {
    Name   = "${local.secondary_subcluster_node_name}-${count.index}",
    source = "terraform"
  }
}

resource "aws_s3_bucket_object" "s3_default_subcluster_instance_ip_list" {
  bucket       = var.s3_bucket_name
  acl          = "private"
  key          = "cluster/config/default_subcluster_instance_ip_list"
  content_type = "text/plain"
  content      = join("\n", [for v in module.primary_subcluster_node : v.private_ip], [""])
}

resource "aws_s3_bucket_object" "s3_analytics_subcluster_instance_ip_list" {
  bucket       = var.s3_bucket_name
  acl          = "private"
  key          = "cluster/config/default_subcluster_instance_ip_list"
  content_type = "text/plain"
  content      = join("\n", [for v in module.secondary_subcluster_node : v.private_ip], [""])
}

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

terraform {
  source = "<our-repo>"
}

include {
  path = find_in_parent_folders("")
}

inputs = {
  aws_region                      = "us-east-2"
  primary_subcluster_node_count   = 3
  secondary_subcluster_node_count = 0
  instance_key_pair_name          = <required_keypair>
  vpc_id                          = <required_vpc_id>
  subnet_id                       = <required_subnet_id>
  s3_bucket_name                  = <require_s3_bucket_name>
}

Panic Output

When expanding the plan for
aws_s3_bucket_object.s3_secondary_subcluster_instance_ip_list to include new
values learned so far during apply, provider
"registry.terraform.io/hashicorp/aws" produced an invalid new value for
.version_id: was known, but now unknown.

This is a bug in the provider, which should be reported in the provider’s own
issue tracker.

Expected Behavior

Ideally when the value ‘secondary_subcluster_node_count’ changed from 0 to any non-zero value and then when we re-apply, the newly created nodes under the secondary subcluster private ip address need to write into “cluster/config/secondary_subcluster_instance_ip_list” file.

Actual Behavior

Instead of writing a new node private IP address into “cluster/config/secondary_subcluster_instance_ip_list” file under s3 bucket, it is throwing provider error when we do re-apply after changing the ‘secondary_subcluster_node_count’ value.

Steps to Reproduce

  1. With the provided values need to do terragrunt apply.
  2. When the first time applied, it will deploy primary subcluster nodes with specified value ‘primary_subcluster_node_count’ and secondary subcluster won’t create any nodes as its ‘secondary_subcluster_node_count’ value is 0.
  3. After primary subcluster nodes created, change ‘secondary_subcluster_node_count’ value from 0 to non-zero.
  4. Again do terragrunt apply and will throw an error.
John Stilia avatar
John Stilia

also @Soren Jensen I might need your help, as it seems that we wrote the same code


2023-01-17

Rayen avatar

Hello everyone, I’m new to terraform and I’m using cloudposse modules. My question is: how do I create infrastructure when I have an architecture like this?

RB avatar

Have you seen the terraform-aws-components repo? That’s how all the root tf directories are structured

https://github.com/cloudposse/terraform-aws-components/tree/master/modules

RB avatar

You may also want to look at https://atmos.tools since it’s helpful to separate configuration from code

Rayen avatar

not separate folders with each module inside

brad.whitcomb97 avatar
brad.whitcomb97

Hi All,

I’m looking to output the aws_instance public IP address via the terminal following a TF config apply.

However, I’m using a terraform-aws-modules/ec2-instance/aws module and am unsure how to output the ec2-instance IP address due to the instance being within an external module and not entered as a resource. Here’s what I entered to attempt to get the intended output, but to no avail -

output "Public IP"
    value = module.ec2_instance.aws_instance.public_ip

Please let me know if what I’m trying to achieve is possible and if so, how?!

Warren Parad avatar
Warren Parad

if the module doesn’t output it, you can’t get it

Warren Parad avatar
Warren Parad

what are all the outputs of the module, could you try using one with an AWS data resource to pull it back in?

John Stilia avatar
John Stilia

you could, kind of. Do a data lookup based on the instance arn or id and take it from there?
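A sketch of that data-source approach, assuming the module exposes the instance id as an output (check its outputs):

data "aws_instance" "this" {
  instance_id = module.ec2_instance.id # assumes an `id` output exists
}

output "public_ip" {
  value = data.aws_instance.this.public_ip
}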

John Stilia avatar
John Stilia

aw, great minds think alike @Warren Parad

brad.whitcomb97 avatar
brad.whitcomb97

Thanks for the suggestions!


2023-01-18

Andres De Castro avatar
Andres De Castro

Hi all! Is there someone actively using the VPN cloudposse module? https://github.com/cloudposse/terraform-aws-ec2-client-vpn I’m running into an issue when it tries to create the aws_ec2_client_vpn_endpoint resource: it fails because the certificate (also created by the same module) does not have a domain. Any idea how I can fix this one?

RB avatar

Could you create an issue with the appropriate inputs you’re providing it?

RB avatar

The client vpn endpoint uses another module to create the ssl cert btw

RB avatar

I wonder if this issue is in a recent aws provider version ?

Andres De Castro avatar
Andres De Castro

Yeah.. I inspected the child module and it never sets the domain. Probably it should be set somewhere here? https://github.com/cloudposse/terraform-aws-ssm-tls-self-signed-cert/blob/master/acm.tf

resource "aws_acm_certificate" "default" {
  count             = local.acm_enabled ? 1 : 0
  private_key       = local.tls_key
  certificate_body  = local.tls_certificate
  certificate_chain = var.basic_constraints.ca ? null : var.certificate_chain.cert_pem
}

RB avatar

if this succeeds, then the module is still working as expected https://github.com/cloudposse/actions/actions/runs/3950558796

Andres De Castro avatar
Andres De Castro

Once I set the values ca_common_name, root_common_name, and server_common_name as shown in the example, it works; however those values have a default in place. Maybe the module should enforce those inputs? I commented back on the issue for visibility: https://github.com/cloudposse/terraform-aws-ec2-client-vpn/issues/57

#57 InvalidParameterValue: Certificate does not have a domain

Describe the Bug

When using this module the server certificate is created successfully, but when trying to create the aws_ec2_client_vpn_endpoint.default[0] resource it fails as the created certificate does not have a domain:

Error: error creating EC2 Client VPN Endpoint: InvalidParameterValue: Certificate <certificate_arn> does not have a domain

My configuration is very simple:

module "ec2_client_vpn" {
  source  = "cloudposse/ec2-client-vpn/aws"
  version = "0.13.0"

  associated_subnets  = var.private_subnets
  client_cidr         = var.client_cidr
  logging_stream_name = null
  organization_name   = <my_org_name>
  vpc_id = var.vpc_id

  additional_routes = [
    {
      destination_cidr_block = "0.0.0.0/0"
      description            = "Internet Route"
      target_vpc_subnet_id   = element(var.private_subnets, 0)
    }
  ]
}

Expected Behavior

The module.ec2_client_vpn.aws_ec2_client_vpn_endpoint.default[0] resource should be created.

Steps to Reproduce

Steps to reproduce the behavior:

  1. Provide private_subnets, vpc_id, client_cidr and a valid org_name to the following snippet
module "ec2_client_vpn" {
  source  = "cloudposse/ec2-client-vpn/aws"
  version = "0.13.0"

  associated_subnets  = var.private_subnets
  client_cidr         = var.client_cidr
  logging_stream_name = null
  organization_name   = <my_org_name>
  vpc_id = var.vpc_id

  additional_routes = [
    {
      destination_cidr_block = "0.0.0.0/0"
      description            = "Internet Route"
      target_vpc_subnet_id   = element(var.private_subnets, 0)
    }
  ]
}
  1. Run terraform apply
  2. The module should fail with the error Error: error creating EC2 Client VPN Endpoint: InvalidParameterValue: Certificate <certificate_arn> does not have a domain

Environment (please complete the following information):

• Using Mac OS silicon • Monterey v12.3.1 • terraform 1.1.9

2023-01-19

Soren Jensen avatar
Soren Jensen

Hi all, I’m a happy user of the cloudposse/s3-bucket/aws module. We are tightening security where we can, and trying to update everything to the latest best practice and recommendations ahead of an audit. My understanding from the AWS Documentation is to keep away from using ACL on new buckets. Still it looks like the module is using the ACL. Is there a more up to date example of how to use this module without ACL enabled?

source = "cloudposse/s3-bucket/aws"
# Cloud Posse recommends pinning every module to a specific version
version = "3.0.0"
enabled = true

bucket_name = random_pet.upload_bucket_name[each.key].id

acl                     = "private"
block_public_acls       = true
block_public_policy     = true
ignore_public_acls      = true
restrict_public_buckets = true
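For what it’s worth, newer versions of the module expose an s3_object_ownership input, and setting it to BucketOwnerEnforced disables ACLs on the bucket entirely; a sketch (verify the input exists in the module version you pin):

module "bucket" {
  source  = "cloudposse/s3-bucket/aws"
  version = "3.0.0" # check the changelog for when this input was added

  bucket_name = "example-bucket" # hypothetical

  # With BucketOwnerEnforced, S3 ignores ACLs and the `acl` input can be dropped
  s3_object_ownership = "BucketOwnerEnforced"

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}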
Radha avatar

Hi all, I am using the redis terraform module from https://github.com/cloudposse/terraform-aws-elasticache-redis.git. I want to add the prevent_destroy lifecycle setting to Redis, but I don’t see this option available as an input in the terraform module. Could someone let me know if there is an option to add it using this module?

Alex Atkinson avatar
Alex Atkinson

As far as state migrations go, is there any downside to simply moving the state file from the old location to the new, updating the backend config, and verifying no change detected with tf apply?… I can’t think of any, and you get to skip tf init -migrate-state, etc.

RB avatar

Nope, that’s what we all do
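Concretely: copy the state object to the new location, point the backend block at it, then re-run terraform init and confirm a clean plan; a sketch with hypothetical names:

terraform {
  backend "s3" {
    # updated to the new location; the old state file was copied here first
    bucket = "new-state-bucket"
    key    = "project/terraform.tfstate"
    region = "us-east-1"
  }
}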

Alex Atkinson avatar
Alex Atkinson

Somehow I’ve never had to migrate state until now. Been lucky I guess. Lucky again with its simplicity.

Alex Atkinson avatar
Alex Atkinson

I googled state migration…. The internet would have you think it’s something complicated, unless I’m missing something.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It’s considered complicated because mistakes can cause massive problems. Also, I think what’s generally considered complicated is moving resources within state

Sandro Manke avatar
Sandro Manke

even that’s not actually complicated, but more like a royal pain in the **

Joe Perez avatar
Joe Perez

State migrations are not fun and can be error prone. The import commands in the documentation are also incomplete, so there’s extra fumbling there

Joe Perez avatar
Joe Perez

I’ve blogged a little bit about these areas https://www.taccoform.com/posts/tfg_p1/

Terraform Import

Overview Terraform is a great tool for managing infrastructure, but let’s be honest, sometimes it can put you into precarious situations. After working with Terraform for a while, you will start to develop some kind of muscle memory for what works and what doesn’t. One thing you’ll start to notice is that your Terraform workspace is taking longer and longer to run. You need to address this problem, but you also can’t delete production resources and recreate them in a different terraform workspace.

Joe Perez avatar
Joe Perez
Terraform Upgrades

Overview You’ve started down a path to implement a change requested by your product team and you are quick to realize that your provider version and/or terraform version are too old to make the change. What seemed like a minor change turned into hours or days of yak-shaving. To help avoid these issues and security vulnerabilities, it’s best to keep your terraform code up to date. Lesson Why Should I Upgrade?

2023-01-20

HS avatar

Also on the subject of terraform => redis: I’m building a dashboard for our engineers where they can perform some configuration from the dashboard, and when they click deploy, we get the json payload, convert it into a tfvars file, and use it to run terraform apply. However I’m looking for a better approach to this.

Idea 1: Save all these configs into redis, and create a terraform provider that gets the variable values from the redis key store instead of using tfvars or terraform environment variables.

What do you guys think of this? Does it make sense, or is it a waste of time? Or perhaps there’s a public provider that already does this?

Fizz avatar

AWS proton does this

HS avatar

Oh yes, but we use a multicloud env, hence why we need a self-served platform. However the focus is actually on the idea of getting terraform variable values from redisdb, whether it makes sense or not, or if there are other smarter ways to go about it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Fwiw, we do something similar with atmos. All configuration is defined as YAML and then atmos writes the .tfvar.json files which get passed to terraform. We don’t yet have a UI, as we’ve optimized more for a gitops workflow. Is the UI the selling point for this? or automation?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Atmos Stacks | atmos

Use JSON Schema and OPA policies to validate Stacks.

HS avatar

Yes, the UI is the selling point, the automation bit has pretty much been figured out, just a way to slap a dashboard to it

2023-01-22

Fredistelroy avatar
Fredistelroy

Hello, it is stated here that the Cloudposse team no longer uses AWS Beanstalk all that much anymore: https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment Does anybody know the reasoning? What do you use instead these days?

cloudposse/terraform-aws-elastic-beanstalk-environment

Terraform module to provision an AWS Elastic Beanstalk Environment

Warren Parad avatar
Warren Parad

EB is terrible, Lambda always and sometimes ECS, you never need anything else

Brandon Cruz avatar
Brandon Cruz

this: migrate to lambda

Fredistelroy avatar
Fredistelroy

Thank you

2023-01-23

Christopher Pieper avatar
Christopher Pieper

So having this issue now with my spacelift-workers.. And I am not exactly sure what's happening, methinks it's generating a new workspace after it inits the backend, and maybe needs to reverse the order: gen workspace, then gen backend, then init.. But this hasn't worked for me..

Alex Jurkiewicz avatar
Alex Jurkiewicz

it looks like your stack is not configured with a backend

Alex Jurkiewicz avatar
Alex Jurkiewicz

jq: error: Could not open file backend.tf.json: No such file or directory

Alex Jurkiewicz avatar
Alex Jurkiewicz

Another win for reading the error message

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Maybe better for spacelift channel

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Christopher Pieper are you using atmos with spacelift?

Christopher Pieper avatar
Christopher Pieper

So I noticed the jq error after sharing, and fixed it.. Still having the issue.. And yes we are using atmos with spacelift..

Christopher Pieper avatar
Christopher Pieper
Christopher Pieper avatar
Christopher Pieper

I was thinking this was a terraform issue, re: workspace state mgmt

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) @Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if in atmos.yaml you have

auto_generate_backend_file: true
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then you need to execute a command to generate the backend for the component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in .spacelift/config.yml you need to have something like this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
stack_defaults:
  before_init:
    - spacelift-configure
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in the spacelift-configure script you need to have this:

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#!/bin/bash

# Add -x for troubleshooting
set -ex -o pipefail

function main() {
  if [[ -z $ATMOS_STACK ]] || [[ -z $ATMOS_COMPONENT ]]; then
    echo "Missing required environment variable" >&2
    echo "  ATMOS_STACK=$ATMOS_STACK" >&2
    echo "  ATMOS_COMPONENT=$ATMOS_COMPONENT" >&2
    return 3
  fi

  echo "Writing Stack variables to spacelift.auto.tfvars.json for Spacelift..."

  atmos terraform generate varfile "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" -f spacelift.auto.tfvars.json >/dev/null
  jq . <spacelift.auto.tfvars.json

  echo "Writing Terraform state backend configuration to backend.tf.json for Spacelift..."

  atmos terraform generate backend "$ATMOS_COMPONENT" --stack="$ATMOS_STACK"
  jq . <backend.tf.json

  atmos terraform workspace "$ATMOS_COMPONENT" --stack="$ATMOS_STACK" --auto-generate-backend-file=false || {
    printf "%s\n" "$?"
    set +x
    printf "\n\nUnable to select workspace\n"
    false
  }
}

main

# Remove -x for security
set +x
Christopher Pieper avatar
Christopher Pieper

hmmm so what I have in place currently is the three scripts found in terraform-aws-components/modules/spacelift/bin: spacelift-configure, spacelift-tf-workspace, spacelift-write-vars

set up to run in that order.. I will give this a shot and report back with results momentarily

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

check that you have the atmos generate backend command in your script

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the number of scripts doesn’t matter

Christopher Pieper avatar
Christopher Pieper

wasn't suggesting that they did.. just providing context as to what is in place currently.. from the looks of it your script is a consolidation of those 3

Christopher Pieper avatar
Christopher Pieper

so that appears to have broken past the issue

Christopher Pieper avatar
Christopher Pieper

things appear to be working now

Christopher Pieper avatar
Christopher Pieper

thank you SO MUCH

1
Christopher Pieper avatar
Christopher Pieper

So I made the suggested modifications to spacelift-configure and it seemed to work, but once again I am running into the same issue.. It appears that terraform workspace is somehow screwing with the backend setup.. I am not sure.. Been searching Google for anything similar and nothing is jumping out at me.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

not clear why it's looking for the backend file in the .terraform folder (where it definitely should not be)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

please show what you have in .spacelift/config.yml file

Christopher Pieper avatar
Christopher Pieper

so I don’t seem to have a .spacelift/config.yml file

Christopher Pieper avatar
Christopher Pieper

everything seems to work if I add the “nobackend” label to the stack

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I'm not sure that is the correct way of doing things. This is a working example of the file, you can try adding it to the repo

version: "1"

stack_defaults:
  before_init:
    - spacelift-configure

  before_plan:
    - spacelift-configure

  before_apply:
    - spacelift-configure

  environment:
    AWS_SDK_LOAD_CONFIG: true
    AWS_CONFIG_FILE: /etc/aws-config/aws-config-spacelift
    ATMOS_BASE_PATH: /mnt/workspace/source
    TF_CLI_ARGS_plan:  -parallelism=50
    TF_CLI_ARGS_apply:  -parallelism=50
    TF_CLI_ARGS_destroy: -parallelism=50

stacks:
  infra:
    before_init: []
    before_plan: []
    before_apply: []
Christopher Pieper avatar
Christopher Pieper

where do I add it? rootfs?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

.spacelift/config.yml in the root of the repo, not in rootfs

Christopher Pieper avatar
Christopher Pieper

ok

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
Configuration - Spacelift Documentation

Collaborative Infrastructure For Modern Software Teams

Christopher Pieper avatar
Christopher Pieper

and this will need to be deployed by pushing the docker container up to artifacts ?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Spacelift reads the file when it downloads the repo

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so the file needs to be in the repo, not in the container

Christopher Pieper avatar
Christopher Pieper

ok just making sure…

Christopher Pieper avatar
Christopher Pieper

so no luck with that change

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

in cases like that (when it’s not easy to understand what’s going on), I usually say please zip up the repo and DM me - I could look if there is something obvious

Christopher Pieper avatar
Christopher Pieper

honestly I would be tremendously grateful.. let me get that for you

techpirate avatar
techpirate

anyone know how to resolve this error?

techpirate avatar
techpirate

i am using terragrunt to create aurora-data

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

maybe use #terragrunt

2023-01-24

2023-01-25

Vladimir avatar
Vladimir

Hi, colleagues, does anyone know the best practice to create resources in different regions? Context: some services are not available in certain regions and need to be created in other regions

Evanglist avatar
Evanglist

use the alias option in the provider block and use it to create resources in another region

this2
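
A minimal sketch of the alias approach (regions and the resource type are illustrative):

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "euc1"
  region = "eu-central-1"
}

# resources use the default provider unless told otherwise
resource "aws_sns_topic" "primary" {
  name = "alerts-use1"
}

# the provider meta-argument opts a resource into the other region
resource "aws_sns_topic" "secondary" {
  provider = aws.euc1
  name     = "alerts-euc1"
}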
RB avatar

We use atmos to reuse root modules and expose the region as an input

RB avatar

Then we use tfvars that are populated from YAML and pass them to the root module in a custom region-account workspace, e.g. ue1-dev, ue2-prod, etc

RB avatar

Here is the website https://atmos.tools/ and the repo if you’re interested https://github.com/cloudposse/atmos

shamwow avatar
shamwow

hello folks, weird issue: I have a list of CIDRs (type list(string)) and a string variable that, if set (!= null), should be added to the list. I'm trying to do this in locals {} to no avail. I'm just playing with vars for now and testing locally with outputs set, but it keeps adding the null to the array:

variable "aaa" {
  type        = string
  default     = null
}

variable "bbb" {
  description = "List of vpc cidr blocks to route this endpoint to"
  type        = list(string)
  default     = ["10.1.0.0/16", "10.2.0.0/16"]
}

locals {
  # new_list = "${coalescelist(tolist(var.aaa),var.bbb)}"
  newlist = (contains(var.bbb, var.aaa) ? var.bbb : concat(var.bbb, [var.aaa]))
}

I have tried a bunch of different methods here like try(var.aaa, false) and != null etc but ultimately it just keeps appending a null value to the list, any ideas?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
newlist = var.aaa == null || contains(var.bbb, var.aaa) ? var.bbb : concat(var.bbb, [var.aaa])
1
shamwow avatar
shamwow

Thank you @Andriy Knysh (Cloud Posse)! It worked without the OR part:

locals {
  newlist = var.aaa == null ? var.bbb : concat(var.bbb, [var.aaa])
}

otherwise if I include the OR

│ Error: Invalid function argument
│ 
│   on main.tf line 27, in locals:
│   27:   newlist = var.aaa == null || contains(var.bbb, var.aaa) ? concat(var.bbb, [var.aaa]) : var.bbb
│     ├────────────────
│     │ var.aaa is null
│ 
│ Invalid value for "value" parameter: argument must not be null.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this newlist = var.aaa == null ? var.bbb : concat(var.bbb, [var.aaa]) will allow duplicates to be added to the list

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

here

on main.tf line 27, in locals:
│   27:   newlist = var.aaa == null || contains(var.bbb, var.aaa) ? concat(var.bbb, [var.aaa]) : var.bbb
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like you reversed the logic, and that's why it's not working

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

try

newlist = var.aaa == null || contains(var.bbb, var.aaa) ? var.bbb : concat(var.bbb, [var.aaa])
1
shamwow avatar
shamwow

same error. I actually kept the logic but removed your dupe check (the || part), which seemed to cause the problem; but adding it as a list worked:

newlist = var.aaa == null || contains(var.bbb, [var.aaa]) ? var.bbb : concat(var.bbb, [var.aaa])
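
Side note on why the null check alone didn't help: Terraform's || does not short-circuit, so contains(var.bbb, var.aaa) is still evaluated even when var.aaa == null is true, and contains() rejects a null value. Wrapping it as contains(var.bbb, [var.aaa]) avoids the error only because a list never equals a string element, so that dupe check is effectively always false. A sketch that handles both null and duplicates without contains():

locals {
  # append var.aaa only when it is set; distinct() drops any duplicate
  newlist = distinct(concat(var.bbb, var.aaa == null ? [] : [var.aaa]))
}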
Wil avatar

Hey there, I hope this is an easy question but I don’t see how to do this.

Using the datadog module for synthetics here: https://github.com/cloudposse/terraform-datadog-platform/tree/master/examples/synthetics

RB avatar

Your alerts require certs?

Could you explain more @Wil

Wil avatar

They’re synthetic checks, one of them has a cert associated yes

Wil avatar
doozers_status_api:
  name:  Doozers Status
  message: Operational health warning on Doozers status page
  type: api
  tags:
    app: doozers
  locations: 
    ["pl:my-fancy-location"]
  status: live
  request_definition:
    method: GET
    url: <https://example.net:9090/ws/rest/v1/Employee?id=00000>
    timeout: 0
  options_list:
    allow_insecure: true
    tick_every: 300
    follow_redirects: false
    retry:
      count: 2
      interval: 10000
    monitor_options:
      renotify_interval: 0
  request_client_certificate:
    key:
      filename: svc.okta-datadog-nopass.key
      content: ${ var.doozers_nopasskey }
      #content: hello
    cert:
      filename: svc.okta-datadog_synthetics.crt
      content: ${ var.doozers_cert }
Wil avatar

see the request_client_certificate key and cert

Wil avatar
#711 [Synthetics] Add support to dnsServer and client certificate options

This PR updates the go-client to add support to dnsServer and client certificate for synthetics tests.

Wil avatar

I suppose request_basicauth would have the same issue here

Wil avatar

Inside we've got all these nice YAML files that create our alerts.. but some require certs that are in our secret store. I can get them using a data.external command but can't seem to figure out how to get that injected into the YAML files inside the catalog.

Wil avatar
simple:
  name: "cloudposse.com website test"
  message: "Cloud Posse website is not reachable"
  type: api
  subtype: http
  tags:
    ManagedBy: Terraform
  locations:
    - "aws:us-west-2"
  status: "live"
  request_definition:
    url: "<https://cloudposse.com>"
    method: GET
  request_headers: {}
  request_query: {}
  set_cookie: ""
  options_list:
    tick_every: 1800
  assertion:
    - type: statusCode
      operator: is
      target: "200"

complex:
  name: "cloudposse.com complex test"
  message: "Cloud Posse website is not reachable (advanced test)"
  type: api
  subtype: http
  tags:
    ManagedBy: Terraform
  locations:
    - "all"
  status: "live"
  request_definition:
    url: "<https://cloudposse.com>"
    method: GET
  request_headers:
    Accept-Charset: "utf-8, iso-8859-1;q=0.5"
    Accept: "text/html"
  request_query:
    hello: world
  set_cookie: ""
  options_list:
    tick_every: 1800
    follow_redirects: true
    retry:
      count: 3
      interval: 5
    monitor_options:
      renotify_interval: 1440
  assertion:
    - type: statusCode
      operator: is
      target: "200"
    - type: body
      operator: validatesJSONPath
      targetjsonpath:
        operator: is
        targetvalue: true
        jsonpath: status
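
One pattern that fits the ${ var.doozers_cert }-style placeholders already in those YAML files is to render the catalog through templatefile() before decoding it, passing the secrets in as template variables. A sketch, assuming the data.external wiring and the file path; inside the template the placeholders would then reference the names directly, e.g. ${doozers_cert} rather than ${var.doozers_cert}:

locals {
  synthetics = yamldecode(templatefile("${path.module}/catalog/synthetics.yaml", {
    doozers_nopasskey = data.external.secrets.result["key"]  # hypothetical data source
    doozers_cert      = data.external.secrets.result["cert"]
  }))
}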

2023-01-26

Oleksandr Lytvyn avatar
Oleksandr Lytvyn

Hello, I would like to ask advice regarding deployment of AWS CloudFormation stacks in multiple regions (in this case it is about regular stacks, not StackSets). Use case: I've deployed AWS Config inside an AWS Organization via CloudFormation StackSets. Config was deployed in all selected regions and all AWS Org member accounts (except the Management account, which is expected). Now I'm trying to deploy AWS Config inside the AWS Org Management account in multiple regions, ideally without too much ctrl+c ctrl+v.

Looking at the Terraform resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack I don't see a way to specify a region (and therefore can't iterate on it). The only method that came to mind is to use more aws providers (with aliases) and then just ctrl+c & ctrl+v the aws_cloudformation_stack resource block with different providers/aliases.

But maybe there is a better way ?
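
One way to cut the copy-paste down to a small block per region is to wrap the stack in a local module and hand each aliased provider through the providers meta-argument. A sketch (module path, template, and regions are hypothetical):

provider "aws" {
  alias  = "euw1"
  region = "eu-west-1"
}

# ./modules/aws-config-stack contains the single aws_cloudformation_stack
# resource, written once
module "config_use1" {
  source = "./modules/aws-config-stack"
  # inherits the default aws provider
}

module "config_euw1" {
  source = "./modules/aws-config-stack"
  providers = {
    aws = aws.euw1
  }
}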

sripe avatar

hi, I'm trying to create a few resources reading data from YAML and am unable to read the values; getting errors. any ideas? error in comments

module "yaml_json_multidecoder" {
  source  = "levmel/yaml_json/multidecoder"
  version = "0.2.1"
  filepaths = ["./examples/*.yaml"]
}

locals {
  team_members = module.yaml_json_multidecoder.files.team.team_members
  project_team_mapping = module.yaml_json_multidecoder.files.project.project_team_mapping
}


resource "mongodbatlas_project" "test" {
  for_each = local.project_team_mapping

  name   = each.key
  org_id = var.organization_id

  dynamic "teams" {
    for_each = {
      for team in each.value: team.team_name => team.roles
    }
    content {
      team_id =   each.key
      role_names = each.value
    }
  }
}

and the YAML file is as below:

project_team_mapping:
  Terraform-Project-1:
    - team_name: dx-example-team-1
      roles:
        - GROUP_READ_ONLY
        - GROUP_DATA_ACCESS_READ_WRITE
    - team_name: dx-example-team-2
      roles:
        - GROUP_READ_ONLY
        - GROUP_DATA_ACCESS_READ_WRITE
  Terraform-Project-2:
    - team_name: dx-example-team-3
      roles:
        - GROUP_READ_ONLY
        - GROUP_DATA_ACCESS_READ_WRITE
    - team_name: dx-example-team-4
      roles:
        - GROUP_READ_ONLY
        - GROUP_DATA_ACCESS_READ_WRITE
sripe avatar
│ Error: Incorrect attribute value type
│ 
│   on teams.tf line 37, in resource "mongodbatlas_project" "test":
│   37:       role_names = each.value
│     ├────────────────
│     │ each.value is tuple with 2 elements
│ 
│ Inappropriate value for attribute "role_names": element 0: string required.
╵
╷
│ Error: Incorrect attribute value type
│ 
│   on teams.tf line 37, in resource "mongodbatlas_project" "test":
│   37:       role_names = each.value
│     ├────────────────
│     │ each.value is tuple with 2 elements
│ 
│ Inappropriate value for attribute "role_names": element 0: string required.
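
The error comes from the dynamic block rather than the YAML: inside dynamic "teams" the iterator is named after the block (teams), while each still refers to the resource-level for_each, so each.value is the whole list of team objects for the project. A sketch of the corrected block (note that team_id expects an Atlas team ID, so mapping team_name to an ID is assumed to happen elsewhere):

  dynamic "teams" {
    for_each = {
      for team in each.value : team.team_name => team.roles
    }
    content {
      team_id    = teams.key   # the dynamic block's iterator, not each.key
      role_names = teams.value # the list of role strings for this team
    }
  }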
techpirate avatar
techpirate

can anyone help me with this? why is it throwing an error

RB avatar

The error says that it’s trying to save to a directory instead of a file

RB avatar

Try changing to a file

techpirate avatar
techpirate

if I give a directory, it shows the same error

techpirate avatar
techpirate
RB avatar

Try changing to a file

techpirate avatar
techpirate

ex?

techpirate avatar
techpirate

it shows this one

techpirate avatar
techpirate

I have tried it in many ways

techpirate avatar
techpirate

but no progress; if anyone can help, that would be a great help to me

RB avatar

It’s hard to say without being able to see the code

RB avatar

Looks like you’re trying to create a file in a read only directory

RB avatar

Try commenting it out and see if this example works

resource "local_file" "foo" {
  content  = "foo!"
  filename = "${path.module}/foo.bar"
}
techpirate avatar
techpirate

ssh key path

2023-01-27

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know if it's possible to leverage aws_cloudwatch_log_subscription_filter but where the kinesis firehose is in a different account?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want to run a firehose for datadog from our security account but have cloudwatch log groups push messages to it
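
For reference, the standard cross-account pattern is a CloudWatch Logs destination in the firehose (security) account with a destination policy allowing the sender accounts; the subscription filters in the source accounts then point at the destination ARN. A sketch, with the names, IAM role, and policy document assumed:

# in the security account: expose the firehose via a logs destination
resource "aws_cloudwatch_log_destination" "datadog" {
  name       = "datadog-firehose"
  role_arn   = aws_iam_role.cwl_to_firehose.arn # role CloudWatch Logs assumes to write to the firehose
  target_arn = aws_kinesis_firehose_delivery_stream.datadog.arn
}

resource "aws_cloudwatch_log_destination_policy" "datadog" {
  destination_name = aws_cloudwatch_log_destination.datadog.name
  access_policy    = data.aws_iam_policy_document.allow_source_accounts.json
}

# in each source account: subscribe log groups to the destination ARN
resource "aws_cloudwatch_log_subscription_filter" "to_datadog" {
  name            = "to-datadog"
  log_group_name  = "/aws/lambda/my-app"
  filter_pattern  = ""
  destination_arn = "arn:aws:logs:us-east-1:111111111111:destination:datadog-firehose"
}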

2023-01-29

2023-01-30

Active questions tagged cloudposse - Stack Overflow avatar
Active questions tagged cloudposse - Stack Overflow

[feed-bot residue: roughly thirty unrelated Android WorkManager questions from Stack Overflow flooded the channel here]

loren avatar

ahh what is happening please stop

jose.amengual avatar
jose.amengual

I was about to mute the channel

2
Alex Atkinson avatar
Alex Atkinson

Is there a limit to the size of a variable map?

Alex Atkinson avatar
Alex Atkinson

4MB too much?

Alex Jurkiewicz avatar
Alex Jurkiewicz

I don’t think there’s a limit, but that much data sounds a little unusual and would count as a code smell IMO

1
Alex Atkinson avatar
Alex Atkinson

It turned out to be more like 2MB and change. It's a vpc cidr map and subnet cidr map. Together they lay out all the breakdowns for 10.0.0.0/8 into a combo of /20 and /19 subnets. Now when folks get assigned a new cidr it's just updating the key to the assignment group's name and they're good to go.

2023-01-31

Bart Coddens avatar
Bart Coddens

Hi all, while using the dynamic subnets module, I want to assign a route table to all the subnet IDs in the vpc

Bart Coddens avatar
Bart Coddens

I have this:

Bart Coddens avatar
Bart Coddens

resource "aws_route_table_association" "all" {
  subnet_id      = [module.subnets.private_subnet_ids[*]]
  route_table_id = aws_route_table.other.id
}

Bart Coddens avatar
Bart Coddens

but terraform requires a string

Bart Coddens avatar
Bart Coddens

what am I doing wrong ?

Bart Coddens avatar
Bart Coddens

can anyone give me some assistance with this ? I am getting confused

Si Walker avatar
Si Walker

@Bart Coddens if subnet_id is singular not plural, the value would normally be a single object or string, not contained in square brackets. Brackets are to pass it a list or a tuple or map or whatever. In this case, without complicating things, you might have to have a code block for each az/subnet, something like:

resource "aws_route_table_association" "all" {
  subnet_id      = module.subnets.private_subnet_ids[0]
  route_table_id = aws_route_table.other.id
}
resource "aws_route_table_association" "all" {
  subnet_id      = module.subnets.private_subnet_ids[1]
  route_table_id = aws_route_table.other.id
}
Bart Coddens avatar
Bart Coddens

well I tried doing this:

Bart Coddens avatar
Bart Coddens

resource "aws_route_table_association" "gatewayendpoints" {
  count          = length(var.availability_zones)
  subnet_id      = module.subnets.private_subnet_ids[count.index]
  route_table_id = aws_route_table.other.id
}

Bart Coddens avatar
Bart Coddens

but this fails with :

Bart Coddens avatar
Bart Coddens

Resource.AlreadyAssociated: the specified association for route table

Si Walker avatar
Si Walker

And is it already associated? In the AWS console, if you view the subnet, does it have the route(s) in question? If so you’ve got 2 options, either import it (bit of a faff), or remove it in the console and let terraform re-add it, wouldn’t be doing anything like that for production systems though

Bart Coddens avatar
Bart Coddens

no it’s a demo setup

Bart Coddens avatar
Bart Coddens

but nothing is associated

Bart Coddens avatar
Bart Coddens

that confuses me a bit

Si Walker avatar
Si Walker

Hi all, didn't know if this classed as a bug or a feature request, so joined here instead. We previously had a local copy of the Label module and we had a lot of security alerts from Dependabot on github.com with regard to the gopkg version being used.

Our github alerts are an easy fix: map to the module instead of having a local copy and don't push .terraform up to github. But I believe the security issue still exists, with 3 or 4 references to versions less than v2.2.4

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes])

Si Walker avatar
Si Walker

@Bart Coddens Try this

resource "aws_route_table_association" "gatewayendpoints" {
  count = length(var.availability_zones)
  subnet_id = module.subnets[count.index].private_subnet_ids
  route_table_id = aws_route_table.other.id
}

or

resource "aws_route_table_association" "gatewayendpoints" {
  count = length(module.subnets.private_subnet_ids)
  subnet_id = module.subnets[count.index].private_subnet_ids
  route_table_id = aws_route_table.other.id
}

As long as AZ’s match private subnet numbers i.e. 1 to 1, then the top one should work, don’t think the count.index should be at the end though, often it’s before the element

Bart Coddens avatar
Bart Coddens

I think I solved it

Bart Coddens avatar
Bart Coddens

by default the dynamic subnets module creates a route table for the private subnets, and you cannot associate two route tables with one subnet

Bart Coddens avatar
Bart Coddens

so I disabled the creation of the default route table for the subnet, created one with my gateway endpoints in and assigned that

Bart Coddens avatar
Bart Coddens

now it works fine

1
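
For reference, a for_each form avoids hand-indexing per AZ once the module's default route table is disabled. A sketch; it assumes the subnet IDs are known at plan time (otherwise fall back to count over length()):

resource "aws_route_table_association" "gatewayendpoints" {
  for_each       = toset(module.subnets.private_subnet_ids)
  subnet_id      = each.value
  route_table_id = aws_route_table.other.id
}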
aimbotd avatar
aimbotd

Hey friends. I'm peeping at the dynamic subnets tf module. I notice that the NAT instances cannot be set up as spot instances. I am ignorant in this space so just looking for clarity. Is it a bad idea to set up NATs as spot instances? If so, why is that?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

Spot instances can be interrupted and removed at any time (although how often that happens depends on the instance types and many other factors)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you prob don’t want your NAT traffic to be interrupted

aimbotd avatar
aimbotd

That's… actually a super valid point that I don't know why I didn't consider.

aimbotd avatar
aimbotd

Thanks for the sanity.

RB avatar

This is worth looking into if you’re trying to cut nat costs

https://github.com/1debit/alternat

1
aimbotd avatar
aimbotd

Thanks @RB

jose.amengual avatar
jose.amengual

CloudPosse folks, is there an elegant way to populate the entire module.this.context output to parameter store?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

jsonencode ?

jose.amengual avatar
jose.amengual

mmm that could work
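
A sketch of the jsonencode approach (the parameter name is hypothetical; module.this.context and module.this.id are the null-label outputs):

resource "aws_ssm_parameter" "context" {
  name  = "/platform/${module.this.id}/context"
  type  = "String"
  value = jsonencode(module.this.context)
}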

jose.amengual avatar
jose.amengual

How does CloudPosse deal with ECS service task definitions and the deployment of services? (From the point of view of using atmos and the null-label module to infer information to then deploy the task and custom image from a github action; related to the previous question.)

jose.amengual avatar
jose.amengual

the idea is to avoid hardcoding values in the github action task definition and potentially discover those values from another source

jose.amengual avatar
jose.amengual

one idea I had was to output my ECS module output to parameter store and then search parameter store to find the cluster based on a service name, plus all the values related to that service
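
And the read side of that idea, so the deploy job can look up everything stored under a service. A sketch; the path convention is hypothetical:

data "aws_ssm_parameters_by_path" "service" {
  path = "/platform/ecs/my-service"
}

locals {
  # names and values line up by index
  service_config = zipmap(
    data.aws_ssm_parameters_by_path.service.names,
    data.aws_ssm_parameters_by_path.service.values,
  )
}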

jose.amengual avatar
jose.amengual

for example

jose.amengual avatar
jose.amengual

I thought it was possible to not fail on data lookups and get empty results instead, but it looks like that's only for some resources? Is that correct? (I was looking through some TF PRs and it looks that way)

jose.amengual avatar
jose.amengual

like this error :

 no matching Route53Zone found
loren avatar

Correct. Singular data sources will fail if none found. Plural data sources will return an empty list. This is specific to the provider, not general/enforced terraform core semantics
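
Concretely (zone name illustrative):

# singular: the plan fails with "no matching Route53Zone found" if the zone is absent
data "aws_route53_zone" "this" {
  name = "example.internal."
}

# plural: returns an empty list of zone IDs instead of failing
data "aws_route53_zones" "all" {}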

jose.amengual avatar
jose.amengual

mmm ok, I was hoping this changed but nope

jose.amengual avatar
jose.amengual

anyone know if aws_service_discovery_private_dns_namespace is just a route53 private zone or if there is some hidden metadata to it?
