#terraform (2023-11)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2023-11-01

Release notes from terraform avatar
Release notes from terraform
03:53:33 PM

v1.6.3 1.6.3 (November 1, 2023) ENHANCEMENTS:

backend/s3: Adds the parameter skip_s3_checksum to allow users to disable checksum on S3 uploads for compatibility with “S3-compatible” APIs. (#34127)

backend/s3: Adds parameter `skip_s3_checksum` to skip checksum on upload by gdavison · Pull Request #34127 · hashicorp/terraform

Some “S3-compatible” APIs do not support the header x-amz-sdk-checksum-algorithm. In the S3 API, a checksum is recommended and is required when Object Lock is enabled. Allow users to disable the he…

2023-11-02

Bart Coddens avatar
Bart Coddens

Hi All, I would like to use a value fetched with a datasource in my terraform.tfvars

Bart Coddens avatar
Bart Coddens

Like this:

Bart Coddens avatar
Bart Coddens

aws-accounts = {

Bart Coddens avatar
Bart Coddens

"0" = ["NAME", "ARN", "data.aws_ssm_parameter.iam-external-id.value"]

Bart Coddens avatar
Bart Coddens

}

Bart Coddens avatar
Bart Coddens

But that does not seem to work

Dominique Dumont avatar
Dominique Dumont

remove the quotes around data.xxx
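For reference (an editor's sketch, not from the thread): `.tfvars` files only accept literal values, so even without quotes a `data.` reference won't be evaluated there. The usual workaround, with hypothetical names, is to keep the literals in `terraform.tfvars` and merge the data source value in a `locals` block:

variable "aws-accounts" {
  type = map(list(string))
}

# terraform.tfvars -- literal values only
aws-accounts = {
  "0" = ["NAME", "ARN"]
}

# main.tf -- merge the SSM value in HCL, where expressions are allowed
data "aws_ssm_parameter" "iam-external-id" {
  name = "/example/iam-external-id" # hypothetical parameter path
}

locals {
  aws_accounts = {
    for k, v in var.aws-accounts :
    k => concat(v, [data.aws_ssm_parameter.iam-external-id.value])
  }
}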

Josh Pollara avatar
Josh Pollara

Head on over to opentofu for OpenTofu-related conversations and release notifications

3
smaranankari.devops avatar
smaranankari.devops

Hello all, in a terraform module’s locals.tf we are hard-coding the UUIDs of the RDS cluster’s secrets. Can we accomplish this with a data lookup instead?

1
smaranankari.devops avatar
smaranankari.devops

thanks for the response @Dominique Dumont. To provide more context: the secret is named after the cluster UUID, so I want to find out if this can be accomplished using the secret’s tags (in the data source, using tags instead of name or ARN)

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

It’s always best to not hard-code any secrets in terraform. There are many ways to retrieve an existing secret from data source, but I prefer using AWS SSM. Check out this blog post: https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1

A comprehensive guide to managing secrets in your Terraform code

One of the most common questions we get about using Terraform to manage infrastructure as code is how to handle secrets such as passwords…
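For illustration, a minimal sketch of the SSM-based lookup Dan mentions (the parameter path is hypothetical):

data "aws_ssm_parameter" "db_password" {
  name            = "/prod/db/password" # hypothetical parameter path
  with_decryption = true
}

# Reference it elsewhere as data.aws_ssm_parameter.db_password.value (treated as sensitive)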

smaranankari.devops avatar
smaranankari.devops

Hey Dan - the secret is not hard coded. Not sure if I put my question right.

the only way to retrieve a secret is thru name or arn and cannot be done via tags right?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Where is the secret stored?

smaranankari.devops avatar
smaranankari.devops

secrets manager

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

with secrets manager you can only use name or arn to find the secret: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
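A minimal sketch of the name-based lookup (the secret name is hypothetical; per the thread, the secrets are named after the cluster UUID):

data "aws_secretsmanager_secret" "rds" {
  name = "example-cluster-uuid" # lookup is by name or ARN only; tags are not supported
}

data "aws_secretsmanager_secret_version" "rds" {
  secret_id = data.aws_secretsmanager_secret.rds.id
}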

smaranankari.devops avatar
smaranankari.devops

thank you @Dan Miller (Cloud Posse) for your responses and @Gabriela Campana (Cloud Posse) for looping in Dan to address questions of many that were left unanswered.

np2
Kris Musard avatar
Kris Musard

Hi! I’ve run Atmos on AWS for the last 18 months. I have now landed on Azure and need to refactor some TF. Anybody using Atmos on Azure? Tips? Examples?

jose.amengual avatar
jose.amengual

Atmos is cloud agnostic

jose.amengual avatar
jose.amengual

atmos does not know anything about the cloud; it only understands YAML and renders input variables used for different things

jose.amengual avatar
jose.amengual

if you want cloudposse-like modules for azure, we have a few here: https://github.com/slalombuild/terraform-accelerator

slalombuild/terraform-accelerator

An opinionated, multi-cloud, multi-region, best-practice accelerator for Terraform.

2
2
Kris Musard avatar
Kris Musard

Hi @jose.amengual This is exactly it! Starting with the azure state module! Thanks!

Hao Wang avatar
Hao Wang

Thanks Pepe for the helpful project

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Kris Musard sorry I missed your message.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We have an enormous enterprise customer using Atmos on Azure.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual super cool!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Here are some examples for Digital Ocean https://github.com/leb4r/terraform-do-components

leb4r/terraform-do-components

Terraform modules that can be re-used in other projects. Built specifically for DigitalOcean.

1

2023-11-03

OliverS avatar
OliverS

Any advice on how to support more than 1 person updating the same terraform state? eg one team member is editing some iam roles, and another some s3 buckets, both defined in the same stack; they can each create their plan, but sometimes it can be useful to apply the plan as some issues only emerge during apply.

So basically to avoid having a PR accepted only to find during apply that some changes are broken, it is good to apply the plan, if only for a short period (say 10-30 minutes). There is a window of time where the tfstate has changed but the shared tf code has not.

Similarly, once a PR is merged, there is a window of time where the code has changed but the plan on master has not yet been applied. Esp if terraform is not yet under github actions.

RB avatar

dynamodb locking should force only one user to plan and apply
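For context, the lock RB refers to comes from the S3 backend’s DynamoDB table (a standard configuration sketch; bucket and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"
    key            = "stack/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking and consistency checks
    encrypt        = true
  }
}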

OliverS avatar
OliverS

that’s not what I’m referring to, not at the same moment, in the same time window

if I apply my changes it locks, makes the changes to tfstate in s3, unlocks; I haven’t committed the code from my branch yet because I’m still testing that my changes do what I think (eg, for IAM changes, I am using the policy simulator on the new group I created via my tf apply)

then another devops person wants to test their changes so they apply, but tf will plan to remove the resources that my apply created because my code changes are not on that user’s checkout

OliverS avatar
OliverS

what I’m doing right now is making my changes in a submodule and terraform apply there without a backend; however this only works because my changes are pure creation of new resources, no edits to existing resources

it’s like the only way to do this is to share branches or base my branch on the other person’s, but this only helps me see the other user’s changes; it does not help them see my changes

Fizz avatar
  1. Maintain the lock until you are ready to merge
  2. TF Cloud does a plan in the PR and runs the apply on merge into master
  3. Run the apply in the PR, merge immediately, and if you need to make changes open a new PR
Those are the three ways I have seen it done. Presumably you are running your changes in a lower env first, so a broken env is tolerable for a short time.
OliverS avatar
OliverS

how do you do #1? AFAIK the lock is automatically released at the end of an apply

Fizz avatar

You would have to relock after the apply. So technically you are releasing and then reacquiring the lock. https://github.com/minamijoyo/tflock

minamijoyo/tflock

Lock your Terraform state manually

OliverS avatar
OliverS

Interesting, I’ve often wished for a terraform lock command for this reason… I’ll have to check how robustly this can work for our team.

OliverS avatar
OliverS

Thanks for sharing your ideas. #2 and 3 I had thought of but no gha in place yet so had to eliminate (or at least delay till other things have been done)

kallan.gerard avatar
kallan.gerard

So much to unpack here

kallan.gerard avatar
kallan.gerard
  1. I’d get your TF applying exclusively through a workflow triggered by a push to main.
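A minimal sketch of that setup (workflow name, steps, and triggers are assumptions, not from the thread): plan on pull requests, apply only on pushes to main:

name: terraform
on:
  pull_request:
  push:
    branches: [main]
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v2
      - run: terraform init
      - run: terraform plan -out=tfplan
      # Apply only for pushes to main, never for PR plans
      - if: github.event_name == 'push'
        run: terraform apply -auto-approve tfplan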
kallan.gerard avatar
kallan.gerard

Secondly, it seems like you are worried about people creating some configuration, doing a pull request, waiting for approval, then merging it in and seeing something didn’t work and having to do that all over again

kallan.gerard avatar
kallan.gerard

If that’s your main concern I would just create processes to let people merge their own PRs without approval or push straight to main.

There’s no law that says pull requests require code reviews before merging. And if people are applying from their laptops, all your PRs are just empty ceremony anyway.

OliverS avatar
OliverS

can’t do that because that would be like not having PRs

kallan.gerard avatar
kallan.gerard

Pull requests != Code Reviews

kallan.gerard avatar
kallan.gerard

And Code Reviews != Approval Gates

2023-11-04

suzuki-shunsuke avatar
suzuki-shunsuke

tfprovidercheck is a simple command line tool to prevent malicious Terraform Providers from being executed. You can define an allow list of Terraform Providers and their versions, and verify that no disallowed providers are used. https://github.com/suzuki-shunsuke/tfprovidercheck

suzuki-shunsuke/tfprovidercheck

CLI to prevent malicious Terraform Providers from being executed. You can define the allow list of Terraform Providers and their versions, and check if disallowed providers aren’t used

2
RB avatar

Neat! Is there an out of the box action for this tool?

RB avatar

It would be nice to add it to this list too

https://github.com/shuaibiyy/awesome-terraform

suzuki-shunsuke avatar
suzuki-shunsuke
#233 Add tfprovidercheck

https://github.com/suzuki-shunsuke/tfprovidercheck

tfprovidercheck is a command line tool for security.
It prevents malicious Terraform Providers from being executed.
You can define the allow list of Terraform Providers and their versions, and check if disallowed providers aren’t used.

1
suzuki-shunsuke avatar
suzuki-shunsuke


Is there an out of the box action for this tool?
There is no GitHub Action or anything like that, but you can install and run it easily.
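A rough sketch of wiring it into CI (the pipe from `terraform version -json` is the tool’s documented entry point; the config details below are assumptions worth checking against the README):

# .tfprovidercheck.yaml (allow list; keys are assumptions -- see the README)
# providers:
#   - name: registry.terraform.io/hashicorp/aws
#     version: ">= 5.0.0"

terraform init
terraform version -json | tfprovidercheck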

suzuki-shunsuke avatar
suzuki-shunsuke

Wow! The pull request was merged in a flash!

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Also cool would be to integrate with GitHub CodeQL.

See example: https://aquasecurity.github.io/tfsec/v1.0.11/getting-started/configuration/github-actions/github-action/

      - name: Upload SARIF file
        uses: github/codeql-action/upload-sarif@v1
        with:
          # Path to SARIF file relative to the root of the repository
          sarif_file: tfsec.sarif  
Github Action - Code Scanning - tfsec

Adding tfsec to your public GitHub project

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Then it would show up on the security tab

2023-11-05

2023-11-06

Brian Adams avatar
Brian Adams

Hey all, I have a super basic question here. The docs for digitalocean kubernetes provider say the following:

terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "2.31.0"
    }
  }
}

When I run terraform init I get the following error:

│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ digitalocean/digitalocean: no available releases match the given constraints
│ 2.31.0

Anyone have any recommendations here?

RB avatar

try this instead

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}
RB avatar

what you have should work… the 2.31.0 version is published on GitHub and the TF registry. If ~> 2.0 works, then the issue is something else.

OliverS avatar
OliverS

@Brian Adams are you sure that this is the only place the version is constrained? Double-check the output of the init for contradictory version constraints.
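A quick way to surface every constraint in play (a general tip, not from the thread):

# Prints the provider requirement tree, including constraints from nested modules
terraform providers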

Brian Adams avatar
Brian Adams

I am just going to download the provider and use it locally.

RB avatar

did using ~> 2.0 not work?

Brian Adams avatar
Brian Adams

Nope.

1
OliverS avatar
OliverS

If you can share a screenshot of the init output and the output of terraform version, it might help us help you

2023-11-07

James Stallings avatar
James Stallings

is the github_oauth_token variable in https://github.com/cloudposse/terraform-aws-cicd a standard PAT or some other token I need to generate?

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Max Lobur (Cloud Posse)

cloudposse/terraform-aws-cicd

Terraform Module for CI/CD with AWS Code Pipeline and Code Build

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)
Appendix A: GitHub version 1 source actions - AWS CodePipeline

View procedures for managing OAuth-based apps and webhooks for GitHub version 1 actions

Max Lobur (Cloud Posse) avatar
Max Lobur (Cloud Posse)

Note that cicd and ecs-codepipeline might not work with the current AWS APIs because GitHub version 1 source actions are deprecated: https://docs.aws.amazon.com/codepipeline/latest/userguide/update-github-action-connections.html . We will be doing this refactoring in the related modules soon

Update a GitHub version 1 source action to a GitHub version 2 source action - AWS CodePipeline

In AWS CodePipeline, there are two supported versions of the GitHub source action:

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Anyone know what the behavior is of the Terraform Registry when a repo is transferred from one GitHub organization to another? specifically, for modules that have been registered with the registry.

Josh Pollara avatar
Josh Pollara
Renaming Github Namespace with modules published

If I have a Github team/account which I have published modules from and I rename the team/account, will it break the module? How does Terraform registry handle the renaming? Thanks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ugh, that sucks.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Josh Pollara


2023-11-08

Renesh reddy avatar
Renesh reddy

Hi Team,

When doing terraform plan we are getting the error below; does anyone have any idea?

Error: Unsupported block type

  on .terraform/modules/codebuild-build/aws-modules/aws-codebuild/main.tf line 321, in resource "aws_codebuild_project" "default":
 321:   dynamic "auth" {

Blocks of type "auth" are not expected here.

Joe Perez avatar
Joe Perez

guessing the aws_codebuild_project resource doesn’t have an auth parameter, but you might want to double-check the terraform resource docs on that one

ag4ve.us avatar
ag4ve.us

Never used CodeBuild, but guessing that’s someone else’s module and that it worked at some point, so you may want to check when that feature changed and pin the provider better. Also, dynamic is generally used because some data is getting pulled out of a dataset; if you’re not defining that part of the data, you can just remove it. You might also be pinned to (or pulling from) an older version of the module that has since been fixed, so look at the module’s repo and your source line.

Michael Dizon avatar
Michael Dizon
05:24:57 PM

sorry for posting twice, not sure if people are still using the refarch channel

i’ve used the dns-delegated component to launch a hosted zone in one of my workload accounts (prod), but I’m not sure how to create dns records in the subdomain (service.prod.example.net). am i supposed to deploy dns-primary in the prod account in addition to the dns account?

Release notes from terraform avatar
Release notes from terraform
08:53:34 PM

v1.7.0-alpha20231108 1.7.0-alpha20231108 (November 8, 2023) UPGRADE NOTES:

Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:

Users…

Release v1.7.0-alpha20231108 · hashicorp/terraform

1.7.0-alpha20231108 (November 8, 2023) UPGRADE NOTES:

Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlie…

2023-11-09

2023-11-10

Joe Perez avatar
Joe Perez
Armon Dadgar on X

Awesome to see the @github Octoverse show HCL (@HashiCorp Config Language) as the top language along with Shell, with 36% YoY growth. I guess this whole Infrastructure as Code thing was a decent idea. https://t.co/STdir7ti0u

2023-11-11

muhaha avatar

How can I configure the aws-auth config map in the cloudposse/terraform-aws-eks-cluster or cloudposse/terraform-aws-eks-node-group modules to allow the Karpenter node role (https://karpenter.sh/docs/getting-started/migrating-from-cas/#update-aws-auth-configmap)? Thanks

Migrating from Cluster Autoscaler

Migrate to Karpenter from Cluster Autoscaler

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  # If Karpenter IAM role is enabled, add it to the `aws-auth` ConfigMap to allow the nodes launched by Karpenter to join the EKS cluster
muhaha avatar

ah, thanks, trying to figure out how to use cloudposse/terraform-aws-components. It isn’t a TF module like cloudposse/terraform-aws-eks-cluster, right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we have TF modules in our repos, and TF components in https://github.com/cloudposse/terraform-aws-components. TF components are top-level Terraform root modules: https://developer.hashicorp.com/terraform/language/modules#the-root-module

Modules Overview - Configuration Language | Terraform | HashiCorp Developer

Modules are containers for multiple resources that are used together in a configuration. Find resources for using, developing, and publishing modules.

cloudposse/terraform-aws-components

Opinionated, self-contained Terraform root modules that each solve one, specific problem

muhaha avatar

I see, but how can I enable the aws-auth ConfigMap with the Karpenter role in a cluster created by cloudposse/terraform-aws-eks-cluster, without Atmos?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the components use the modules (one or many) to define a deployable configuration. For example, the eks component (root module) https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/cluster uses the eks-cluster child module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cluster/main.tf#L131C25-L131C36, the eks-node-group module https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cluster/modules/node_group_by_az/main.tf#L34C25-L34C39 and many others to define a complete configuration to deploy an EKS cluster

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


I see, but how can I enable the aws-auth ConfigMap with the Karpenter role in a cluster created by cloudposse/terraform-aws-eks-cluster, without Atmos?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it has nothing to do with Atmos

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

variable "region" {
  type        = string
  description = "AWS Region"
}

variable "availability_zones" {
  type        = list(string)
  description = <<-EOT
    AWS Availability Zones in which to deploy multi-AZ resources.
    Ignored if `availability_zone_ids` is set.
    Can be the full name, e.g. `us-east-1a`, or just the part after the region,
    e.g. `a` to allow reusable values across regions.
    If not provided, resources will be provisioned in every zone with a private subnet in the VPC.
    EOT
  default     = []
  nullable    = false
}

variable "availability_zone_ids" {
  type        = list(string)
  description = <<-EOT
    List of Availability Zone IDs where subnets will be created. Overrides `availability_zones`.
    Can be the full name, e.g. `use1-az1`, or just the part after the AZ ID region code, e.g. `-az1`,
    to allow reusable values across regions. Consider contention for resources and spot pricing
    in each AZ when selecting. Useful in some regions when using only some AZs and you want to
    use the same ones across multiple accounts.
    EOT
  default     = []
}

variable "availability_zone_abbreviation_type" {
  type        = string
  description = "Type of Availability Zone abbreviation (either `fixed` or `short`) to use in names. See https://github.com/cloudposse/terraform-aws-utils for details."
  default     = "fixed"
  nullable    = false

  validation {
    condition     = contains(["fixed", "short"], var.availability_zone_abbreviation_type)
    error_message = "The availability_zone_abbreviation_type must be either \"fixed\" or \"short\"."
  }
}

variable "managed_node_groups_enabled" {
  type        = bool
  description = "Set false to prevent the creation of EKS managed node groups."
  default     = true
  nullable    = false
}

variable "oidc_provider_enabled" {
  type        = bool
  description = "Create an IAM OIDC identity provider for the cluster, then you can create IAM roles to associate with a service account in the cluster, instead of using kiam or kube2iam. For more information, see https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html"
  default     = true
  nullable    = false
}

variable "cluster_endpoint_private_access" {
  type        = bool
  description = "Indicates whether or not the Amazon EKS private API server endpoint is enabled. Default to AWS EKS resource and it is false"
  default     = false
  nullable    = false
}

variable "cluster_endpoint_public_access" {
  type        = bool
  description = "Indicates whether or not the Amazon EKS public API server endpoint is enabled. Default to AWS EKS resource and it is true"
  default     = true
  nullable    = false
}

variable "cluster_kubernetes_version" {
  type        = string
  description = "Desired Kubernetes master version. If you do not specify a value, the latest available version is used"
  default     = null
}

variable "public_access_cidrs" {
  type        = list(string)
  description = "Indicates which CIDR blocks can access the Amazon EKS public API server endpoint when enabled. EKS defaults this to a list with 0.0.0.0/0."
  default     = ["0.0.0.0/0"]
  nullable    = false
}

variable "enabled_cluster_log_types" {
  type        = list(string)
  description = "A list of the desired control plane logging to enable. For more information, see https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html. Possible values [api, audit, authenticator, controllerManager, scheduler]"
  default     = []
  nullable    = false
}

variable "cluster_log_retention_period" {
  type        = number
  description = "Number of days to retain cluster logs. Requires `enabled_cluster_log_types` to be set. See https://docs.aws.amazon.com/en_us/eks/latest/userguide/control-plane-logs.html."
  default     = 0
  nullable    = false
}

variable "apply_config_map_aws_auth" {
  type        = bool
  description = "Whether to execute kubectl apply to apply the ConfigMap to allow worker nodes to join the EKS cluster"
  default     = true
  nullable    = false
}

variable "map_additional_aws_accounts" {
  type        = list(string)
  description = "Additional AWS account numbers to add to aws-auth ConfigMap"
  default     = []
  nullable    = false
}

variable "map_additional_worker_roles" {
  type        = list(string)
  description = "AWS IAM Role ARNs of worker nodes to add to aws-auth ConfigMap"
  default     = []
  nullable    = false
}

variable "aws_team_roles_rbac" {
  type = list(object({
    aws_team_role = string
    groups        = list(string)
  }))

  description = "List of aws-team-roles (in the target AWS account) to map to Kubernetes RBAC groups."
  default     = []
  nullable    = false
}

variable "aws_sso_permission_sets_rbac" {
  type = list(object({
    aws_sso_permission_set = string
    groups                 = list(string)
  }))

  description = <<-EOT
    (Not Recommended): AWS SSO (IAM Identity Center) permission sets in the EKS deployment account
    to add to aws-auth ConfigMap. Unfortunately, aws-auth ConfigMap does not support SSO permission sets,
    so we map the generated IAM Role ARN corresponding to the permission set at the time Terraform runs.
    This is subject to change when any changes are made to the AWS SSO configuration, invalidating the
    mapping, and requiring a terraform apply in this project to update the aws-auth ConfigMap and restore access.
    EOT

  default  = []
  nullable = false
}

variable "map_additional_iam_roles" {
  type = list(object({
    rolearn  = string
    username = string
    groups   = list(string)
  }))

  description = "Additional IAM roles to add to config-map-aws-auth ConfigMap"
  default     = []
  nullable    = false
}

variable "map_additional_iam_users" {
  type = list(object({
    userarn  = string
    username = string
    groups   = list(string)
  }))

  description = "Additional IAM users to add to aws-auth ConfigMap"
  default     = []
  nullable    = false
}

variable "allowed_security_groups" {
  type        = list(string)
  description = "List of Security Group IDs to be allowed to connect to the EKS cluster"
  default     = []
  nullable    = false
}

variable "allowed_cidr_blocks" {
  type        = list(string)
  description = "List of CIDR blocks to be allowed to connect to the EKS cluster"
  default     = []
  nullable    = false
}

variable "subnet_type_tag_key" {
  type        = string
  description = "The tag used to find the private subnets to find by availability zone. If null, will be looked up in vpc outputs."
  default     = null
}

variable "color" {
  type        = string
  description = "The cluster stage represented by a color; e.g. blue, green"
  default     = ""
  nullable    = false
}

variable "node_groups" {
  # will create 1 node group for each item in map
  type = map(object({
    # EKS AMI version to use, e.g. "1.16.13-20200821" (no "v").
    ami_release_version = optional(string, null)
    # Type of Amazon Machine Image (AMI) associated with the EKS Node Group
    ami_type = optional(string, null)
    # Additional attributes (e.g. 1) for the node group
    attributes = optional(list(string), null)
    # will create 1 auto scaling group in each specified availability zone
    # or all AZs with subnets if none are specified anywhere
    availability_zones = optional(list(string), null)
    # Whether to enable Node Group to scale its AutoScaling Group
    cluster_autoscaler_enabled = optional(bool, null)
    # True to create new node_groups before deleting old ones, avoiding a temporary outage
    create_before_destroy = optional(bool, null)
    # Desired number of worker nodes when initially provisioned
    desired_group_size = optional(number, null)
    # Set of instance types associated with the EKS Node Group. Terraform will only perform drift detection if a configuration value is provided.
    instance_types = optional(list(string), null)
    # Ke…

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you use Atmos to manage those variables for diff environments, accounts, OUs, regions etc. to make the config DRY

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you don’t use Atmos, you provide all those variables in some other TF files in your repo
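For example (values are placeholders, not from the thread), the non-Atmos equivalent is a plain tfvars file passed to the root module:

region                      = "us-east-2"
availability_zones          = ["a", "b", "c"]
oidc_provider_enabled       = true
managed_node_groups_enabled = true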

muhaha avatar

oh, I see, so the component is actually wrapping tf modules, the problem is I used the eks module directly

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

components are wrappers on top of modules, yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

components are also TF root modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they are more high level than modules

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

they can be used with Atmos and without. As mentioned, Atmos is used to manage the configuration for the components in a reusable and DRY way for many diff environments

muhaha avatar

then I am screwed; I have a lot of infra created directly via cloudposse modules, and the only thing left is to edit the aws-auth configmap, so migrating to the components would not make sense now

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can definitely copy some parts of the code to deal with the karpenter role and auth map from our EKS component into your code

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you have infra created from the EKS and karpenter modules, which means you have already created your own component

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can add any code to your component

muhaha avatar

What is editing the aws-auth CM? I see that every terraform-aws-eks-node-group is added there, but I’m not sure how exactly it’s done. The aws_eks_node_group TF resource?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

are you using the node groups or karpenter (they are diff things), or both?

muhaha avatar

I have the default one, trying to replace cluster-autoscaler. Karpenter needs some node for its controller, right? I have created the roles and policies; the only thing needed is to edit that aws-auth CM.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Add this file into your folder https://github.com/cloudposse/terraform-aws-components/blob/main/modules/eks/cluster/karpenter.tf - it creates the karpenter role with all the required permissions

# IAM Role for EC2 instance profile that is assigned to EKS worker nodes launched by Karpenter

# <https://aws.amazon.com/blogs/aws/introducing-karpenter-an-open-source-high-performance-kubernetes-cluster-autoscaler/>
# <https://karpenter.sh/>
# <https://karpenter.sh/v0.10.1/getting-started/getting-started-with-terraform/>
# <https://karpenter.sh/v0.10.1/getting-started/getting-started-with-eksctl/>
# <https://www.eksworkshop.com/beginner/085_scaling_karpenter/>
# <https://karpenter.sh/v0.10.1/aws/provisioning/>
# <https://www.eksworkshop.com/beginner/085_scaling_karpenter/setup_the_environment/>
# <https://ec2spotworkshops.com/karpenter.html>
# <https://catalog.us-east-1.prod.workshops.aws/workshops/76a5dd80-3249-4101-8726-9be3eeee09b2/en-US/autoscaling/karpenter>

locals {
  karpenter_iam_role_enabled = local.enabled && var.karpenter_iam_role_enabled

  karpenter_instance_profile_enabled = local.karpenter_iam_role_enabled && !var.legacy_do_not_create_karpenter_instance_profile

  # Used to determine correct partition (i.e. - `aws`, `aws-gov`, `aws-cn`, etc.)
  partition = one(data.aws_partition.current[*].partition)
}

data "aws_partition" "current" {
  count = local.karpenter_iam_role_enabled ? 1 : 0
}

module "karpenter_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  enabled    = local.karpenter_iam_role_enabled
  attributes = ["karpenter"]

  context = module.this.context
}

data "aws_iam_policy_document" "assume_role" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

# IAM Role for EC2 instance profile that is assigned to EKS worker nodes launched by Karpenter
resource "aws_iam_role" "karpenter" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  name               = module.karpenter_label.id
  description        = "IAM Role for EC2 instance profile that is assigned to EKS worker nodes launched by Karpenter"
  assume_role_policy = data.aws_iam_policy_document.assume_role[0].json
  tags               = module.karpenter_label.tags
}

resource "aws_iam_instance_profile" "default" {
  count = local.karpenter_instance_profile_enabled ? 1 : 0

  name = one(aws_iam_role.karpenter[*].name)
  role = one(aws_iam_role.karpenter[*].name)
  tags = module.karpenter_label.tags
}

# AmazonSSMManagedInstanceCore policy is required by Karpenter
resource "aws_iam_role_policy_attachment" "amazon_ssm_managed_instance_core" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  role       = one(aws_iam_role.karpenter[*].name)
  policy_arn = format("arn:%s:iam::aws:policy/AmazonSSMManagedInstanceCore", local.partition)
}

resource "aws_iam_role_policy_attachment" "amazon_eks_worker_node_policy" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  role       = one(aws_iam_role.karpenter[*].name)
  policy_arn = format("arn:%s:iam::aws:policy/AmazonEKSWorkerNodePolicy", local.partition)
}

resource "aws_iam_role_policy_attachment" "amazon_ec2_container_registry_readonly" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  role       = one(aws_iam_role.karpenter[*].name)
  policy_arn = format("arn:%s:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly", local.partition)
}

# Create a CNI policy that is a merger of AmazonEKS_CNI_Policy and required IPv6 permissions
# <https://github.com/SummitRoute/aws_managed_policies/blob/master/policies/AmazonEKS_CNI_Policy>
# <https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html#cni-iam-role-create-ipv6-policy>
data "aws_iam_policy_document" "ipv6_eks_cni_policy" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  statement {
    effect = "Allow"
    actions = [
      "ec2:AssignIpv6Addresses",
      "ec2:AssignPrivateIpAddresses",
      "ec2:AttachNetworkInterface",
      "ec2:CreateNetworkInterface",
      "ec2:DeleteNetworkInterface",
      "ec2:DescribeInstances",
      "ec2:DescribeInstanceTypes",
      "ec2:DescribeTags",
      "ec2:DescribeNetworkInterfaces",
      "ec2:DetachNetworkInterface",
      "ec2:ModifyNetworkInterfaceAttribute",
      "ec2:UnassignPrivateIpAddresses"
    ]
    resources = ["*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "ec2:CreateTags"
    ]
    resources = [
      "arn:${local.partition}:ec2:*:*:network-interface/*"
    ]
  }
}

resource "aws_iam_policy" "ipv6_eks_cni_policy" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  name        = "${module.this.id}-CNI_Policy"
  description = "CNI policy that is a merger of AmazonEKS_CNI_Policy and required IPv6 permissions"
  policy      = data.aws_iam_policy_document.ipv6_eks_cni_policy[0].json
  tags        = module.karpenter_label.tags
}

resource "aws_iam_role_policy_attachment" "ipv6_eks_cni_policy" {
  count = local.karpenter_iam_role_enabled ? 1 : 0

  role       = one(aws_iam_role.karpenter[*].name)
  policy_arn = one(aws_iam_policy.ipv6_eks_cni_policy[*].arn)
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

• Add this code

worker_role_arns = compact(concat(
    var.map_additional_worker_roles,
    [local.karpenter_role_arn]
  ))
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  workers_role_arns = local.worker_role_arns
muhaha avatar

Oh, got it,

  module "eks_cluster" {
    source = "cloudposse/eks-cluster/aws"
    workers_role_arns = [my-karpenter-role]
  }

right?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

worker_role_arns is then used in the EKS module to add it to the auth map https://github.com/cloudposse/terraform-aws-eks-cluster/blob/main/auth.tf#L52

    for role_arn in var.workers_role_arns : {
muhaha avatar

This will definitely help, thanks a lot

2023-11-13

setheryops avatar
setheryops

I think I know the answer to this, but I’ll ask anyway… There’s not a way to sort resources in terraform state list by creation date, is there?

Hao Wang avatar
Hao Wang
Comment on #18366 How do I get the creation time of a terraform run/state file?

Hi @red8888,

Terraform doesn’t currently retain metadata about creation and update times itself. Users don’t usually want to automatically delete Terraform-managed infrastructure, but I can see that there are some more unusual use-cases where that would be desirable; it sounds like you’re using Terraform to manage some development or other temporary infrastructure here?

Since Terraform itself doesn’t understand the relationships between your state files (Terraform just reads/writes that individual object from S3, ignoring everything else in the bucket, including the metadata), I think indeed something like what you proposed here is the best option. S3 itself knows when those resources are created, and your external program can “know” how you structure the objects in that bucket to know what is and is not a Terraform state.

This is one way in which Terraform’s design differs from CloudFormation’s: Terraform doesn’t have a first-class concept of a “stack”, so each user defines their own organizational structure for storing the different states, depending on their organizational needs. An implication of that difference is that Terraform itself can’t list “stacks”, and so unfortunately a tool that operates “for each stack” is outside of Terraform’s scope.

Destroying all of the resources in state does require access to the configuration today, because certain objects (provider configurations in particular, but also destroy-time provisioners, etc) are kept only there and not in the state. The primary use-case for Terraform is applying changes to configuration files managed in version control, and Terraform is less suited for situations where configuration is being dynamically generated on the fly and where, in particular, it’s not possible to reproduce the configuration that created a particular set of resources.

It feels to me like CloudFormation may be a better fit for your use-case, since it stores the whole configuration and state within the stack object and so makes it easier to do this “for each stack, destroy it conditionally” sort of operation on temporary/transient infrastructure.

1
aj_baller23 avatar
aj_baller23

Hi all, running into an issue where Windows is detecting terraform changes to our AWS infrastructure, but Mac is not detecting the changes (which is what I expect). All the TF files were created on a Mac machine. Has anyone experienced this behavior before? How can I resolve it?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Jeremy White (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)
cloudposse/geodesic

Geodesic is a DevOps Linux Toolbox in Docker. We use it as an interactive cloud automation shell. It’s the fastest way to get up and running with a rock solid Open Source toolchain. ★ this repo! https://slack.cloudposse.com/

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

docker solves the “it works on my machine” issue. Everyone can have the same dev environment

Dominique Dumont avatar
Dominique Dumont

Can you show us an example of the changes detected by Terraform on Windows?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse)

aj_baller23 avatar
aj_baller23

I figured out what it was… it had to do with the newline going from Mac to Windows.. thanks!
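(As an aside, not from the thread: a common guard against this class of drift is a committed `.gitattributes` that normalizes line endings across operating systems:)

# .gitattributes -- let git normalize text file line endings on checkout/checkin
* text=auto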

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

great! glad you worked it out

Joe Perez avatar
Joe Perez

Hello all, I’ve been trying to get this IAM policy to render with a single statement and a list of ARNs that needs to include a line item for just the ARN and the ARN with a wildcard (arn + "/*"). When I try to do this with a loop, it looks like the loop can only return a single string or a map object. I was hoping the two ARN line items could be added during the same loop iteration. This is similar to what I have so far, which only adds the single ARN each time:

variable "bucket_arns" {
    default = ["arn:aws:s3:::bucket1","arn:aws:s3:::bucket2","arn:aws:s3:::bucket3"]
}

data "aws_iam_policy_document" "example" {

  statement {
    actions   = ["s3:*"]
    effect = "Allow"
    resources = [for bucket in var.bucket_arns : bucket]
  }
}

output "policy" { value = data.aws_iam_policy_document.example.json}
Joe Perez avatar
Joe Perez

I’ve successfully gotten this to work with a dynamic block statement, but then there’s a statement block for each and figure this can easily hit the character limit with a lot of ARNs. Thinking it would be nice to have this collapsed into a single policy statement

Joe Perez avatar
Joe Perez

I’ve also done this in a hacky way by iterating over the variable list twice as local variables and then combining the lists

loren avatar

In that particular example, you can write just, resources = var.bucket_arns

Joe Perez avatar
Joe Perez

lol I might be missing something. bucket_arns won’t produce a list like:

    resources = [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2",
        "arn:aws:s3:::bucket2/*",
        "arn:aws:s3:::bucket3",
        "arn:aws:s3:::bucket3/*"
        ] 
loren avatar

True, but neither will your example! But ok I understand what you’re going for now. That’s a little trickier. You need each loop to produce a list of two items, the bucket arn and the object arn. And you need flatten to collapse the list of lists into one list

1
1
loren avatar

On my phone, so can’t provide code. And I’m off to bed. If you don’t get it by morning, I’ll expand more with code

Joe Perez avatar
Joe Perez

all good, no rush

Joe Perez avatar
Joe Perez

just a problem I’ve been working on today and was wondering how others have addressed this

Joe Perez avatar
Joe Perez

another acceptable answer would be “you’re overthinking this, just put each line item in the statement resource and call it a day”

loren avatar

[ bucket, "${bucket}/*" ] creates the list of two items

loren avatar

Then wrap the whole for expression inside flatten([ for ... ])

Joe Perez avatar
Joe Perez

you’re right, that works

Joe Perez avatar
Joe Perez

for anyone following along:

data "aws_iam_policy_document" "example" {

  statement {
    actions   = ["s3:*"]
    effect = "Allow"
    resources = flatten([for bucket in var.bucket_arns : [ bucket, "${bucket}/*" ]])
  }
}
1
loren avatar

That’s it, yeah

1

2023-11-14

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hey everyone! Big favor to ask….

Cloud Posse is very close to becoming an AWS Advanced Partner, but we need your help!

If you’ve found any of our Terraform modules helpful, including this Slack community or office hours, please let AWS know by leaving a review. We need these reviews to level up in their partner ecosystem.

https://cloudposse.com/apn-review

10
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thank you all so much!

2
1

2023-11-15

Release notes from terraform avatar
Release notes from terraform
04:13:33 PM

v1.6.4 1.6.4 (November 15, 2023) ENHANCEMENTS:

backend/s3: Add the parameter endpoints.sso to allow overriding the AWS SSO API endpoint. (#34195)

BUG FIXES:

terraform test: Fix bug preventing passing sensitive output values from previous run blocks as inputs to future run blocks. (<a…

backend/s3: Adds SSO endpoint override by gdavison · Pull Request #34195 · hashicorp/terraform

Adds S3 backend parameter endpoints.sso to override the AWS SSO API endpoint. Closes #34142 Depends on release of hashicorp/aws-sdk-go-base#741 Target Release

1.6.4 Draft CHANGELOG entry

ENHANCEM…

2023-11-16

RB avatar

Does anyone know of any corporate terraform classes that do remote instructor-led trainings?

RB avatar

Looking for a 2-day class

Fizz avatar

only in panama. have you tried contacting hashicorp to see if they keep a registry of trainers?

Joe Niland avatar
Joe Niland

@Matt Gowie didn’t you used to do something in this space?

Matt Gowie avatar
Matt Gowie

Yeah I used to do terraform trainings for a couple big companies, Salesforce and Deloitte. It was through DevelopIntelligence. Happy to put you in touch with them @RB

Software developer training with DevelopIntelligence

DevelopIntelligence helps enterprise organizations onboard, upskill and reskill tech talent with innovative, custom software developer training.

RB avatar

Thanks Matt!

2023-11-17

tamsky avatar

https://github.com/hashicorp/terraform/issues/19932#issuecomment-1817043906 - I hope someday to be in a conversation where I can use Eric’s word terralythic.

Comment on #19932 Instantiating Multiple Providers with a loop

Fwiw, we’ve solved this problem using atmos with Terraform and regularly deploy Terraform root modules (what we call components) to dozens or more of accounts in every single AWS region (e.g. for compliance baselines), without HCL code generation. IMO, even if this requested feature existed in terraform, we would not use it because it tightly couples the terraform state to multiple regions, which not only breaks the design principle for DR (that regions share nothing), but makes the blast radius of any change massive and encourages terralythic root module design. In our design pattern, we instantiate a root module once per region, ensuring that each instantiation is decoupled from the other. The one exception to this is when we set up things like transit gateways (with hubs and spokes), then we declare two providers, so we can configure source and target destinations. This ensures no two gateways are tightly coupled to each other.

TL;DR: I acknowledge why at first glance not supporting multiple providers in a loop seems like an awful limitation and why some things would be simplified if it were supported; that said, we deploy enormous infrastructures at Cloud Posse in strict enterprise environments, and don’t feel any impact of this inherent limitation.

1
2
loren avatar

i’d like to see the multi-account, multi-region project structure lol. struggling with that myself at the moment (again)…

RB avatar

Check out this as a multi account / region reference if you use atmos.

https://github.com/cloudposse/atmos/tree/master/examples/complete

If you use Terragrunt, they have a similar reference architecture too

loren avatar

hmm, ok, thanks! iiuc, the stacks tenant1/dev and tenant1/test1 have a multi-region setup

RB avatar

Yes so to deploy the same component (tf root dir) across 2 regions (us-east-2, us-west-2), it would be like this

atmos terraform plan vpc --stack tenant1-ue2-dev
atmos terraform plan vpc --stack tenant1-uw2-dev

https://github.com/cloudposse/atmos/tree/master/examples/complete/stacks/orgs/cp/tenant1/dev

loren avatar

Another question on that @RB… how is the account created in the first place? I assume it is still a two-step process, create the account, then deploy the stack to the account. Is the account creation done with terraform or something else? And is creation plus assignment of a stack all part of one workflow, or two separate ones?

RB avatar

yes, account creation is done with a separate component. i think it’s the account component.

https://github.com/cloudposse/terraform-aws-components/tree/main/modules/account

RB avatar

let’s say you’re adding a new dev account here under the plat tenant

components:
  terraform:
    account:
      vars:
        organizational_units:
          - name: plat
            accounts:
              - name: plat-dev
                tenant: plat
                stage: dev
                tags:
                  eks: false
RB avatar

then the command to create the account

atmos terraform plan account --stack core-gbl-root
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@loren would love to show a demo sometime

1
jose.amengual avatar
jose.amengual

I was looking at aws-sso and aws-ssosync since I will be using Google. Reading the docs of the aws-sso component, I was surprised to see you guys deploy it on the root account. But then a question came up: do you deploy all the assume roles on the root account too, or do you do that in the identity account?

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Dan Miller (Cloud Posse) @Jeremy G (Cloud Posse)

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

if you’re using Google you should know that Amazon now supports Google natively, so you don’t need ssosync: https://aws.amazon.com/about-aws/whats-new/2023/06/aws-iam-identity-center-automated-user-provisioning-google-workspace/

jose.amengual avatar
jose.amengual

ohhhh that is awesome news!!!

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

regarding deployment, you can deploy it to root or any other account that is delegated as an “administrator”. For example we’ve deployed it to identity in the past. But now we only deploy to root, since it’s generally easier to manage. It’s preference

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

the component is only deployed once to whichever account you choose as the administrator

jose.amengual avatar
jose.amengual

but then the user roles, poweruser, admin etc you guys deploy it to identity or the root?

jose.amengual avatar
jose.amengual

basically the aws-teams etc and so

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

sso only deploys the PermissionSets to the single account. then we use aws-teams/team-roles to deploy roles for terraform, etc

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)
  1. sso to the root (or identity) account (1 deployment)
  2. aws-teams to identity account (1 deployment)
  3. aws-team-roles to all other accounts (many deployments)
jose.amengual avatar
jose.amengual

ok so the teams permissions sets are on root, then they assume a role in identity and then role chaining into the other accounts

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

yeah exactly. We also deploy PermissionSets directly with sso for each account entirely for convenience sake

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

those Permission Sets will allow access from SSO to the specific account directly. In UI for example

jose.amengual avatar
jose.amengual

ahhh so if a developer needs console access to the dev account then a permission set is deployed into that account which matches the Gsuite group and then the user can log in directly?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

yes or they can use role chaining. But direct access by the SSO login page tends to be easier

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

but to apply terraform and access state, you need to access the identity aws-team

jose.amengual avatar
jose.amengual

but in that case why bother deploying permission sets to the root account for dev access using role chaining?

jose.amengual avatar
jose.amengual

ahhh ok, you answered my question

1
jose.amengual avatar
jose.amengual

well in my case a dev might not have ever access to the state

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

also in the case that you’re using both SAML and SSO to authenticate users

jose.amengual avatar
jose.amengual

only the automation account which will run the pipelines

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

if devs only ever need access to 1 account and do not need to access state in some other account, then they probably do not need the chaining set up

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

entirely with SSO is likely good enough

jose.amengual avatar
jose.amengual

yes, agree

jose.amengual avatar
jose.amengual

Question @Dan Miller (Cloud Posse): do the iam-roles need to exist before deploying the aws-sso component? I’m getting errors looking things up while trying to deploy it

jose.amengual avatar
jose.amengual

I’m in debug mode now…

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

I don’t believe it needs roles at all. The trust relationship is created by aws-teams after sso is deployed

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

What’s the error?

jose.amengual avatar
jose.amengual

many…

jose.amengual avatar
jose.amengual

basically I do not use tenant so I need to do this part

## The overridable_* variables in this file provide Cloud Posse defaults.
## Because this module is used in bootstrapping Terraform, we do not configure
## these inputs in the normal way. Instead, to change the values, you should
## add a `variables_override.tf` file and change the default to the value you want.
jose.amengual avatar
jose.amengual

for the iam-role submodule

jose.amengual avatar
jose.amengual

and my root account is not called root

jose.amengual avatar
jose.amengual

(I did not create it, someone else did)

jose.amengual avatar
jose.amengual

so my account map is like this:

organization_config:
  root_account:
    name: PePe-Org
    stage: root
  accounts: []
  # organization:
  #   service_control_policies:
  #     # - DenyEC2InstancesWithoutEncryptionInTransit
  organizational_units:
    - name: sandbox
      accounts: []
      service_control_policies:
        - DenyLeavingOrganization
jose.amengual avatar
jose.amengual

etc…

jose.amengual avatar
jose.amengual

and I do not use glb, I use global

jose.amengual avatar
jose.amengual

so then i get this:

Error: Invalid index
│ 
│   on ../account-map/modules/iam-roles/main.tf line 44, in locals:
│   44:   static_terraform_role  = local.account_map.terraform_roles[local.account_name]
│     ├────────────────
│     │ local.account_map.terraform_roles is object with 8 attributes
│     │ local.account_name is "root"
│ 
│ The given key does not identify an element in this collection value.
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Are you using the latest versions of account-map, tfstate, and teams? Also, I just stepped away from the computer and am a little busy tonight, but I can take a more thorough look in the morning

jose.amengual avatar
jose.amengual

not in a rush, I just pulled from main

1
jose.amengual avatar
jose.amengual

so it looks like the code expects the root account to be called root in the account component

organization_config:
  root_account:
    name: root
    stage: root
  accounts: []
  # organization:
  #   service_control_policies:
  #     # - DenyEC2InstancesWithoutEncryptionInTransit
jose.amengual avatar
jose.amengual

After changing the name to root I was able to get further into aws-sso, but I think my assumption that the roles need to exist in the accounts beforehand was right.

jose.amengual avatar
jose.amengual

because now the aws-sso component is trying to assume

╷
│ Error: Cannot assume IAM Role
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on providers.tf line 28, in provider "aws":
│   28: provider "aws" {
│ 
│ IAM Role (arn:aws:iam::22222222:role/pepe-global-root-admin) cannot be assumed.
jose.amengual avatar
jose.amengual

I never deployed any roles to any of the accounts

jose.amengual avatar
jose.amengual

they re all empty

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

You can set the root account name to whatever you like. For example, we use core-root

        organization_config:
          root_account:
            name: core-root
            stage: root
            tenant: core

Then in account-map, you can specify both the account alias and account name however you like

        root_account_aws_name: root
        root_account_account_name: core-root
jose.amengual avatar
jose.amengual

if I use any name that is not root, then it fails with those errors I posted earlier

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Why is terraform trying to assume a role? In our case we use the SuperAdmin user, which has complete access, and we disable the backend

jose.amengual avatar
jose.amengual

maybe is because I do not use tenant

jose.amengual avatar
jose.amengual

I have no idea why; is my first time trying to deploy aws-sso

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)
# Note that you cannot update the aws-sso component if you are logged in via aws-sso.
# Use the SuperAdmin credentials (as of now, stored in 1password)
components:
  terraform:
    aws-sso:
      backend:
        s3:
          role_arn: null
jose.amengual avatar
jose.amengual

in my case I’m using root account creds with a role I created, not via SSO

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

If backend role is configured, then atmos will attempt to use that role to access tfstate. We set to null when we want to use the current role/user and not assume another role

jose.amengual avatar
jose.amengual

I tried that, and I got the same error

jose.amengual avatar
jose.amengual

now just 2 errors instead of 4

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

What’s your error?

jose.amengual avatar
jose.amengual
Error: Cannot assume IAM Role
│ 
│   with provider["registry.terraform.io/hashicorp/aws"],
│   on providers.tf line 28, in provider "aws":
│   28: provider "aws" {
│ 
│ IAM Role (arn:aws:iam::222222:role/pepe-global-root-admin) cannot be assumed.
│ 
│ There are a number of possible causes of this - the most common are:
│   * The credentials used in order to assume the role are invalid
│   * The credentials do not have appropriate permission to assume the role
│   * The role ARN is not valid
│ 
│ Error: operation error STS: AssumeRole, https response error StatusCode: 403, RequestID: acf3c25b--49a0-84bd-, api error
│ AccessDenied: User: arn:aws:iam::222222:user/automation-root is not authorized to perform: sts:AssumeRole on resource:
│ arn:aws:iam::222222:role/pepe-global-root-admin
│ 
╵
╷
│ Error: Cannot assume IAM Role
│ 
│   with provider["registry.terraform.io/hashicorp/aws"].root,
│   on providers.tf line 50, in provider "aws":
│   50: provider "aws" {
│ 
│ IAM Role (arn:aws:iam::222222:role/pepe-global-root-admin) cannot be assumed.
│ 
│ There are a number of possible causes of this - the most common are:
│   * The credentials used in order to assume the role are invalid
│   * The credentials do not have appropriate permission to assume the role
│   * The role ARN is not valid
│ 
│ Error: operation error STS: AssumeRole, https response error StatusCode: 403, RequestID: 45a433ce--4347-a76a, api error
│ AccessDenied: User: arn:aws:iam::2222222:user/automation-root is not authorized to perform: sts:AssumeRole on resource:
│ arn:aws:iam::22222222:role/pepe-global-root-admin
jose.amengual avatar
jose.amengual

this role:

arn:aws:iam::2222222:user/automation-root

is the one I’m currently using to deploy this

jose.amengual avatar
jose.amengual

it's not picking up my current role

jose.amengual avatar
jose.amengual

ohh wait, I put the backend setting in the wrong place

jose.amengual avatar
jose.amengual

mmm same

jose.amengual avatar
jose.amengual

should I set privileged: false?

jose.amengual avatar
jose.amengual

interesting. If I set it to true, then it complains about locals and other stuff, but no longer about the assume role

jose.amengual avatar
jose.amengual

what is the difference between dynamic and static roles? how “dynamic” are they?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

privileged: true is used to tell remote-state to use the current role to pull from the s3 backend

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

static roles are the previous behavior, where account-map always expected the Terraform role to exist. Dynamic roles use the configuration in aws-teams/aws-team-roles to dynamically find the name of the role
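
To make the static path concrete: the “Invalid index” error at the top of this thread comes from account-map publishing a map of Terraform role ARNs keyed by account name, which iam-roles indexes directly. A simplified sketch of that lookup (account names and ARNs are made up):

locals {
  # Shape of the account-map output the error message refers to:
  # account name => role ARN that Terraform should assume.
  terraform_roles = {
    "root"    = "arn:aws:iam::111111111111:role/acme-global-root-admin"
    "sandbox" = "arn:aws:iam::222222222222:role/acme-global-sandbox-terraform"
  }

  account_name = "root"

  # This is the indexing that raised "Invalid index" above, because the
  # root account was registered under a different name.
  static_terraform_role = local.terraform_roles[local.account_name]
}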

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

more on that with the comments I added with this PR: https://github.com/cloudposse/terraform-aws-components/pull/870

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

so TL;DR, if you use dynamic roles, you need to have aws-teams and aws-team-roles configured in code. They don't necessarily need to be applied, but the following command should work:

atmos describe stacks --components=aws-teams,aws-team-roles --component-types=terraform --sections=vars
jose.amengual avatar
jose.amengual

ohhh so the error I’m getting could be related to that

jose.amengual avatar
jose.amengual

mmm

jose.amengual avatar
jose.amengual

I don’t have those configured

jose.amengual avatar
jose.amengual

but those components will be deployed on identity and all other accounts, not root, right?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

aws-teams is always deployed to identity, aws-team-roles is deployed to root (optionally) and all other accounts

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

if you don't have those configured, then I'd disable dynamic roles
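
(If it helps: disabling dynamic roles is a stack-level setting on account-map. A hedged sketch; the variable name below is assumed from the dynamic-roles PR above and should be verified against your component version:)

components:
  terraform:
    account-map:
      vars:
        # Assumed variable name; check account-map's variables.tf
        terraform_dynamic_role_enabled: false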

jose.amengual avatar
jose.amengual

I see , ok

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(@Dan Miller (Cloud Posse) see the thread I tagged you in. TL;DR AWS SSO doesn’t work properly with Terraform with delegated administrator accounts.)
@Jeremy G (Cloud Posse): We were following the general pattern of delegating control to core-identity, but as you noticed, the delegated IAM administrator cannot itself manage the Permission Sets deployed to core-root. That means anyone running Terraform for aws-sso has to have admin access to core-root, which negates any benefit of the delegation. Since the delegation adds complexity without a security benefit, we gave up on it.

1
Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

@Jeremy White (Cloud Posse) were you able to get the SSO integration working with GSuite? @jose.amengual has had some issues and was asking about it

jose.amengual avatar
jose.amengual

Groups were not syncing automatically, only users

jose.amengual avatar
jose.amengual

from docs:

SCIM automatic synchronization from Google Workspace only supports provisioning users; groups aren't automatically provisioned. You can't create groups for your Google Workspace users using the AWS Management Console. After provisioning users, you can create groups using a CLI or API operation
jose.amengual avatar
jose.amengual

so if you guys got this working without the ssosync lambda, I would like to know

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Dan Miller (Cloud Posse) can you chime in? I think we were just discussing this yesterday. Due to the aforementioned problems with groups, I think ssosync is still necessary.

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

@Jeremy White (Cloud Posse) implemented this recently and said that he was able to get it working but had to follow the steps very precisely. @Jeremy White (Cloud Posse) can you chime in please?

1
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

Sure, so there are two sets of instructions: one on the Google side and another on the AWS side, both of which ought to guide you through the steps of enabling the integration. The trickiest part is getting the automatic provisioning dialog to show up in Google Workspace. I understand that others have gotten it working through these steps, and I've been on calls to guide people through them; most of the time the reason it doesn't work is a minuscule misstep here or there.

Configure Amazon Web Services (AWS) auto-provisioning - Google Workspace Admin Help

You can set up automated user provisioning (autoprovisioning) so that any changes you make to user accounts in Google Workspace are automatically synced with this third-party app. Before you begin:&

Use Google Workspace with IAM Identity Center - AWS IAM Identity Center

Learn how to set up Google Workspace and IAM Identity Center.

jose.amengual avatar
jose.amengual

I got the automatic provisioning working fine for users, but groups did not sync, and based on the note on step 10 it looks like it's not supported. Did you get it working with groups?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

so, two things:

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)
  1. group creation, update, and deletion is definitely not supported
Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)
  2. users in groups works AFAIK. That is, once the group is created manually in Identity Center, users will be assigned/unassigned correctly
jose.amengual avatar
jose.amengual

ok, that confirms what the docs say about 1. Users worked fine but groups did not, so I had to deploy ssosync

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

just a clarifying statement: ‘had to deploy’ ssosync is for groups only; users can be added to the groups by auto-provisioning. Is that how you have it configured now?

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

There is a small risk in this. Customers have reported back that ssosync can get confused and will delete/recreate groups. Unfortunately, such an action disconnects all permission sets and SAML apps.

jose.amengual avatar
jose.amengual

auto-provisioning only synced users to Identity center

jose.amengual avatar
jose.amengual

to be able to sync groups from google I had to deploy the ssosync

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

gotcha. And I’ll try to remember to tag you if auto-provisioning eventually adds group support

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

I think AWS/Google know this is still non-ideal

jose.amengual avatar
jose.amengual

definitely, and the problem is that the comment is buried in a step-by-step guide

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

yes. To be clear, you need to scroll down to the very bottom of the AWS Guide (https://docs.aws.amazon.com/singlesignon/latest/userguide/gs-gwp.html) and click “next steps” before AWS spells out that they -do- in fact let you create the groups. (If you try to do it in the AWS Web Console, it blocks such interactions due to automated provisioning being enabled)

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

@Jeremy G (Cloud Posse) please make note of ^. Dan brought it up that it could be unclear if you don't follow every step of the guide

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

Basically, you can only create/delete groups via API once the automated provisioning is turned on

this1
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Now, I’m confused. @Jeremy White (Cloud Posse) what is current best practice for keeping Google groups in sync with AWS?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

we’re discussing this right now on a call if you want to join, but TLDR the groups have to be created with API. They cannot be created manually

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

we can create them with Terraform

Jeremy White (Cloud Posse) avatar
Jeremy White (Cloud Posse)

AWS tells you in the article that this is what they recommend

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

I’m adding it to the aws-sso component

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

That is creating them, but what about keeping them in sync?

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

Google integration for Identity Center, same as Okta/JumpCloud/etc., but group creation is not supported

Dan Miller (Cloud Posse) avatar
Dan Miller (Cloud Posse)

once the groups are created, then the integration will automatically sync
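
To make “create them with Terraform” concrete, here is a minimal sketch using the AWS provider's aws_identitystore_group resource (the group name is illustrative, and the assumption that the display name must match the Google group for membership sync is ours, not from the thread):

data "aws_ssoadmin_instances" "this" {}

resource "aws_identitystore_group" "engineers" {
  identity_store_id = data.aws_ssoadmin_instances.this.identity_store_ids[0]

  # Assumption: the display name should match the Google Workspace group
  # name so that SCIM user assignments land in the right group.
  display_name = "engineers"
  description  = "Created via Terraform; membership synced from Google Workspace"
}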

2023-11-19

Elad Levi avatar
Elad Levi

Anyone know if GitHub organization repo rulesets can be managed with Terraform?
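
(No one answered in-thread, but the integrations/github provider does expose ruleset resources as of late 2023. A hedged sketch of an organization-level branch ruleset; verify the block names against your provider version:)

terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

# Blocks branch deletion and force pushes on every repo's default branch.
resource "github_organization_ruleset" "default_branch_protection" {
  name        = "default-branch-protection"
  target      = "branch"
  enforcement = "active"

  conditions {
    ref_name {
      include = ["~DEFAULT_BRANCH"]
      exclude = []
    }
  }

  rules {
    deletion         = true
    non_fast_forward = true
  }
}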

2023-11-20

2023-11-21

Martin Helfert avatar
Martin Helfert

Is there some way to prevent local-exec from showing sensitive values in case of an error? It suppresses the value if the commands run without issues, showing module.test.null_resource.this (local-exec): (output suppressed due to sensitive value in config) in the logs, but if the resource fails with local-exec provisioner error, all the commands are shown in plain text, including the sensitive values

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

define a variable, mark it sensitive, and use it in the command of the local-exec
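
A minimal sketch of that pattern (variable name and command are illustrative): because the interpolated value is marked sensitive, Terraform suppresses the provisioner's command in its log output, per the PR linked below.

variable "db_password" {
  type      = string
  sensitive = true
}

resource "null_resource" "set_password" {
  provisioner "local-exec" {
    # The sensitive mark on var.db_password propagates to the whole command,
    # so Terraform logs "(output suppressed due to sensitive value in config)"
    # instead of the command text.
    command = "psql -h db.internal -c \"ALTER USER app PASSWORD '${var.db_password}'\""
  }
}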

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
#26611 Fixes for sensitive values used as input to provisioners

Two separate commits in this pull request, which I’d suggest reviewing separately. The first is a bug fix which I’m pretty confident about, and the second is a UI change that I’d really appreciate feedback on.

Unmark provisioner arguments

If provisioner configuration or connection info includes sensitive values, we need to unmark them before calling the provisioner. Failing to do so causes serialization to error.

Unlike resources, we do not need to capture marked paths here, so we just discard the marks.

Hide maybe-sensitive provisioner output

If the provisioner configuration includes sensitive values, it’s a reasonable assumption that we should suppress its log output. Obvious examples where this makes sense include echoing a secret to a file using local-exec or remote-exec.

This commit adds tests for both logging output from provisioners with non-sensitive configuration, and suppressing logs for provisioners with sensitive values in configuration.

Note that we do not suppress logs if connection info contains sensitive information, as provisioners should not be logging connection information under any circumstances.

Screenshot

after

Martin Helfert avatar
Martin Helfert

Thanks. I’m using a token from a data resource so I can’t set a variable to sensitive. I tried to achieve something similar by using

locals {
  ecr_password = sensitive(data.aws_ecr_authorization_token.default.password)
}

But this does not seem to work. The token is still printed in the logs in case of an error
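
One workaround worth trying (a hedged sketch; not verified against this exact failure mode): pass the token through local-exec's environment argument instead of interpolating it into command, so the string Terraform echoes on error contains only the variable names.

data "aws_ecr_authorization_token" "default" {}

resource "null_resource" "docker_login" {
  provisioner "local-exec" {
    # Only this command string is echoed on failure; the token itself
    # lives in the process environment, not in the command text.
    command = "echo \"$ECR_PASSWORD\" | docker login --username AWS --password-stdin \"$ECR_URL\""

    environment = {
      ECR_PASSWORD = data.aws_ecr_authorization_token.default.password
      ECR_URL      = data.aws_ecr_authorization_token.default.proxy_endpoint
    }
  }
}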

Frank avatar

I am trying to create users and databases in my RDS instance. This needs to be done via a bastion instance, which is only reachable over Systems Manager. I came across terraform-ssh-tunnel, which seems to support SSM, but in our case it needs to assume a role in the target account first before reaching the instance and tunneling through it.

Has anyone ever attempted to do something like this?

flaupretre/terraform-ssh-tunnel

Manage your private resources via a tunnel

Gabriela Campana (Cloud Posse) avatar
Gabriela Campana (Cloud Posse)

@Matt Calhoun


Frank avatar

I got it to work with the aforementioned module by making a few changes to it: https://github.com/flaupretre/terraform-ssh-tunnel/pull/30

#30 feat: add aws assume role support for ssm

This adds an optional aws_assume_role input which is used in the SSM tunnel.
It will assume a given role before starting the tunnel.

In our use-case we run Terraform in a separate account, which needs to assume a role into the target account before it can do any actions.
For normal Terraform this is handled using the assume_role functionality of the aws provider, but in this particular case we need to explicitly assume the role ourselves.

With these changes we were able to use the module without getting a “Permission Denied” error.

Igor Rodionov avatar
Igor Rodionov

We never had an issue like that as we provision AWS VPN primarily or run Terraform on instances inside of the perimeter. Your solution is great for achieving the goal.

Ola Bello avatar
Ola Bello

Hi, I have been getting this error when trying to use the eks_workers module. Anyone know a way around this?

Error: Unsupported argument
│ 
│   on .terraform/modules/eks_workers.autoscale_group/main.tf line 244, in resource "aws_autoscaling_group" "default":
│   244:   tags = flatten([
│ 
│ An argument named "tags" is not expected here.

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  version = "0.30.1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

0.30.1 is an old version

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  dynamic "tag" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

PRs are welcome

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
  version = "0.30.1"
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then please run the following commands and commit the changes

make init
make github/init
make readme
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

also, why are you using the EKS workers and not Managed Node Groups (they are easier to deal with) or karpenter?

Ola Bello avatar
Ola Bello

Thank you so much! The managed node groups seem to apply much better to my use case.

Ola Bello avatar
Ola Bello

@Andriy Knysh (Cloud Posse) I used EKS workers because I'd like to set the maximum-pods-per-node setting (use-max-pods) to false on a node; do managed node groups have the same feature?

2023-11-22

2023-11-24

muhaha avatar

Hi :wave: Is there any option to NOT lowercase the role name in the cloudposse/iam-role/aws module? Thanks

2023-11-26

Juan Pablo Lorier avatar
Juan Pablo Lorier

Hi, I've started using cloudposse modules to manage ECS clusters and I'm having a hard time with them. After several days of moving forward error by error, I'm stuck on this new error.

Error: creating ECS Service (XXXXXXXX): InvalidParameterException: Classic Load Balancers are not supported with Fargate.

The load balancer block:

ecs_load_balancers = [
  {
    target_group_arn = null
    elb_name         = module.alb.alb_name
    container_name   = module.container_definition["${var.environment}-${each.value.service_name}"].json_map_object.name # lookup(each.value, "container_name", each.key)
    container_port   = module.container_definition["${var.environment}-${each.value.service_name}"].json_map_object.portMappings[0].containerPort # lookup(each.value, "container_port", 5000)
  }
]

The elb_name is from an ALB (confirmed), so no clue why this is complaining about the ELB type.

Any hints?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like you are trying to provision the cluster on Fargate. Check these variables https://github.com/cloudposse/terraform-aws-ecs-cluster/blob/main/variables.tf#L34

variable "capacity_providers_fargate" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
capacity_providers_fargate
default_capacity_strategy
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

or you are saying that you want fargate and you have ALB (not classic), but the error says that the load balancer is classic?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

look at this example which provisions ECS cluster and ALB https://github.com/cloudposse/terraform-aws-components/blob/main/modules/ecs/main.tf

locals {
  enabled = module.this.enabled

  dns_enabled = local.enabled && var.route53_enabled

  acm_certificate_domain = try(length(var.acm_certificate_domain_suffix) > 0, false) ? format("%s.%s.%s", var.acm_certificate_domain_suffix, var.environment, module.dns_delegated.outputs.default_domain_name) : coalesce(var.acm_certificate_domain, module.dns_delegated.outputs.default_domain_name)

  maintenance_page_fixed_response = {
    content_type = "text/html"
    status_code  = "503"
    message_body = file("${path.module}/${var.maintenance_page_path}")
  }
}

# This is used due to the short limit on target group names, i.e. 32 characters
module "target_group_label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  name = "default"

  tenant      = ""
  namespace   = ""
  stage       = ""
  environment = ""

  context = module.this.context
}

resource "aws_security_group" "default" {
  count       = local.enabled ? 1 : 0
  name        = module.this.id
  description = "ECS cluster EC2 autoscale capacity providers"
  vpc_id      = module.vpc.outputs.vpc_id
}

resource "aws_security_group_rule" "ingress_cidr" {
  for_each          = local.enabled ? toset(var.allowed_cidr_blocks) : []
  type              = "ingress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = [each.value]
  security_group_id = join("", aws_security_group.default.*.id)
}

resource "aws_security_group_rule" "ingress_security_groups" {
  for_each                 = local.enabled ? toset(var.allowed_security_groups) : []
  type                     = "ingress"
  from_port                = 0
  to_port                  = 65535
  protocol                 = "tcp"
  source_security_group_id = each.value
  security_group_id        = join("", aws_security_group.default.*.id)
}

resource "aws_security_group_rule" "egress" {
  count             = local.enabled ? 1 : 0
  type              = "egress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = join("", aws_security_group.default.*.id)
}

module "cluster" {
  source  = "cloudposse/ecs-cluster/aws"
  version = "0.4.1"

  context = module.this.context

  container_insights_enabled      = var.container_insights_enabled
  capacity_providers_fargate      = var.capacity_providers_fargate
  capacity_providers_fargate_spot = var.capacity_providers_fargate_spot
  capacity_providers_ec2 = {
    for name, provider in var.capacity_providers_ec2 :
    name => merge(
      provider,
      {
        security_group_ids          = concat(aws_security_group.default.*.id, provider.security_group_ids)
        subnet_ids                  = var.internal_enabled ? module.vpc.outputs.private_subnet_ids : module.vpc.outputs.public_subnet_ids
        associate_public_ip_address = !var.internal_enabled
      }
    )
  }

  # external_ec2_capacity_providers = {
  #   external_default = {
  #     autoscaling_group_arn          = module.autoscale_group.autoscaling_group_arn
  #     managed_termination_protection = false
  #     managed_scaling_status         = false
  #     instance_warmup_period         = 300
  #     maximum_scaling_step_size      = 1
  #     minimum_scaling_step_size      = 1
  #     target_capacity_utilization    = 100
  #   }
  # }
}

# Commented-out EC2 capacity provider example (user data + autoscale group):
#
# locals {
#   user_data = <<EOT
# #!/bin/bash
# echo ECS_CLUSTER="${module.cluster.name}" >> /etc/ecs/ecs.config
# echo ECS_ENABLE_CONTAINER_METADATA=true >> /etc/ecs/ecs.config
# echo ECS_POLL_METRICS=true >> /etc/ecs/ecs.config
# EOT
# }
#
# data "aws_ssm_parameter" "ami" {
#   name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
# }
#
# module "autoscale_group" {
#   source  = "cloudposse/ec2-autoscale-group/aws"
#   version = "0.31.1"
#
#   context = module.this.context
#
#   image_id                    = data.aws_ssm_parameter.ami.value
#   instance_type               = "t3.medium"
#   security_group_ids          = aws_security_group.default.*.id
#   subnet_ids                  = var.internal_enabled ? module.vpc.outputs.private_subnet_ids : module.vpc.outputs.public_subnet_ids
#   health_check_type           = "EC2"
#   desired_capacity            = 1
#   min_size                    = 1
#   max_size                    = 2
#   wait_for_capacity_timeout   = "5m"
#   associate_public_ip_address = true
#   user_data_base64            = base64encode(local.user_data)
#
#   # Auto-scaling policies and CloudWatch metric alarms
#   autoscaling_policies_enabled           = true
#   cpu_utilization_high_threshold_percent = "70"
#   cpu_utilization_low_threshold_percent  = "20"
#
#   iam_instance_profile_name = module.cluster.role_name
# }

resource "aws_route53_record" "default" {
  for_each = local.dns_enabled ? var.alb_configuration : {}
  zone_id  = module.dns_delegated.outputs.default_dns_zone_id
  name     = format("%s.%s", lookup(each.value, "route53_record_name", var.route53_record_name), var.environment)
  type     = "A"

  alias {
    name                   = module.alb[each.key].alb_dns_name
    zone_id                = module.alb[each.key].alb_zone_id
    evaluate_target_health = true
  }
}

data "aws_acm_certificate" "default" {
  count       = local.enabled ? 1 : 0
  domain      = local.acm_certificate_domain
  statuses    = ["ISSUED"]
  most_recent = true
}

module "alb" {
  source  = "cloudposse/alb/aws"
  version = "1.5.0"

  for_each = local.enabled ? var.alb_configuration : {}

  vpc_id          = module.vpc.outputs.vpc_id
  subnet_ids      = lookup(each.value, "internal_enabled", var.internal_enabled) ? module.vpc.outputs.private_subnet_ids : module.vpc.outputs.public_subnet_ids
  ip_address_type = lookup(each.value, "ip_address_type", "ipv4")

  internal = lookup(each.value, "internal_enabled", var.internal_enabled)

  security_group_enabled = lookup(each.value, "security_group_enabled", true)
  security_group_ids     = [module.vpc.outputs.vpc_default_security_group_id]

  http_enabled             = lookup(each.value, "http_enabled", true)
  http_port                = lookup(each.value, "http_port", 80)
  http_redirect            = lookup(each.value, "http_redirect", true)
  http_ingress_cidr_blocks = lookup(each.value, "http_ingress_cidr_blocks", var.alb_ingress_cidr_blocks_http)

  https_enabled             = lookup(each.value, "https_enabled", true)
  https_port                = lookup(each.value, "https_port", 443)
  https_ingress_cidr_blocks = lookup(each.value, "https_ingress_cidr_blocks", var.alb_ingress_cidr_blocks_https)
  certificate_arn           = lookup(each.value, "certificate_arn", one(data.aws_acm_certificate.default[*].arn))

  access_logs_enabled                             = lookup(each.value, "access_logs_enabled", true)
  alb_access_logs_s3_bucket_force_destroy         = lookup(each.value, "alb_access_logs_s3_bucket_force_destroy", true)
  alb_access_logs_s3_bucket_force_destroy_enabled = lookup(each.value, "alb_access_logs_s3_bucket_force_destroy_enabled", true)

  lifecycle_rule_enabled = lookup(each.value, "lifecycle_rule_enabled", true)

  expiration_days                    = lookup(each.value, "expiration_days", 90)
  noncurrent_version_expiration_days = lookup(each.value, "noncurrent_version_expiration_days", 90)
  standard_transition_days           = lookup(each.value, "standard_transition_days", 30)
  noncurrent_version_transition_days = lookup(each.value, "noncurrent_version_transition_days", 30)

  enable_glacier_transition = lookup(each.value, "enable_glacier_transition", true)
  glacier_transition_days   = lookup(each.value, "glacier_transition_days", 60)

  stickiness                        = lookup(each.value, "stickiness", null)
  cross_zone_load_balancing_enabled = lookup(each.value, "cross_zone_load_balancing_enabled", true)

  target_group_name = join(module.target_group_label.delimiter, [module.target_group_label.id, each.key])
  target_group_port = lookup(each.value, "target_group_port", 80)
  …

Juan Pablo Lorier avatar
Juan Pablo Lorier

I'm saying that I'm creating a Fargate ECS cluster and an ALB, but I can't get the services to deploy correctly. Let me look at those examples and I'll get back here. Thanks!

Juan Pablo Lorier avatar
Juan Pablo Lorier

module "ecs_cluster" {
  source = "cloudposse/ecs-cluster/aws"

  enabled = var.enabled
  context = module.label.context

  container_insights_enabled = var.container_insights_enabled
  logging                    = var.logging
  log_configuration          = var.log_configuration
  capacity_providers_fargate = true
  tags                       = var.tags

  depends_on = [module.label]
}

module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace       = var.namespace
  name            = var.cluster_name
  environment     = var.environment
  tenant          = var.tenant
  id_length_limit = 255
}

Juan Pablo Lorier avatar
Juan Pablo Lorier

For the ECS example, it deploys the cluster and ALB, but no services. The issue is with the service creation.

Juan Pablo Lorier avatar
Juan Pablo Lorier

Looking at: modules/ecs-service/main.tf

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
load_balancer supports the following:

elb_name - (Required for ELB Classic) Name of the ELB (Classic) to associate with the service.
target_group_arn - (Required for ALB/NLB) ARN of the Load Balancer target group to associate with the service.
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so this is not correct

target_group_arn = null
elb_name         = module.alb.alb_name,
Juan Pablo Lorier avatar
Juan Pablo Lorier

that’s it

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you need the other way around

Juan Pablo Lorier avatar
Juan Pablo Lorier

thanks! I will provide the target_groups instead

1
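
For reference, the corrected shape for an ALB looks roughly like this (a sketch; default_target_group_arn is assumed to be the output exposed by the cloudposse/alb/aws module):

ecs_load_balancers = [
  {
    # ALB/NLB: pass the target group ARN and leave elb_name unset.
    elb_name         = null
    target_group_arn = module.alb.default_target_group_arn
    container_name   = module.container_definition["${var.environment}-${each.value.service_name}"].json_map_object.name
    container_port   = module.container_definition["${var.environment}-${each.value.service_name}"].json_map_object.portMappings[0].containerPort
  }
]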
Doug Bergh avatar
Doug Bergh

I created a role using cloudposse/iam-role/aws and the role's ARN has the name lower-cased, i.e. my name is blahBlahBlah and the ARN is arn:iam:role/blahblahblah. My CloudFormation resources that use it can't find it!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all resource names/IDs/ARNs are controlled by the label module https://github.com/cloudposse/terraform-aws-iam-role/blob/main/main.tf#L29

module "role_name" {
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and the module accepts a bunch of variables to control its behavior, in this case you can use https://github.com/cloudposse/terraform-aws-iam-role/blob/main/context.tf#L242

variable "label_value_case" {
Doug Bergh avatar
Doug Bergh
module "iam-role" {
  source = "cloudposse/iam-role/aws"
  version     = "0.19.0"

  enabled   = true
  name      = var.wave_code_deploy_role_name

Thanks for the quick reply, Andriy! I'm not sure how to connect it all - how do I make this name use label_value_case == "none"?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
module "role" {
  source = "......"

 
  label_value_case = "none"
  context = module.this.context
}
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

so when instantiating the module, you can provide label_value_case

Doug Bergh avatar
Doug Bergh

got it! Thanks Andriy! (btw minor feedback - “none” as default would seem better to me since it would match the AWS Console behavior)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, maybe it would be better, but for historical reasons it's set to lower and it's already everywhere, so it would be difficult to change

2023-11-27

2023-11-29

Release notes from terraform avatar
Release notes from terraform
10:33:33 AM

v1.6.5 1.6.5 (November 29, 2023) BUG FIXES:

backend/s3: Fixes parsing errors in shared config and credentials files. (#34313)
backend/s3: Fixes error with AWS SSO when using FIPS endpoints. (#34313)

Updates `aws-sdk-go-base` by gdavison · Pull Request #34313 · hashicorp/terraform

Updates dependencies to include upstream bug fixes, including fixes to shared config file parsing and SSO with FIPS endpoints Fixes #34277 Target Release

1.6.5 Draft CHANGELOG entry

BUG FIXES

Hao Wang avatar
Hao Wang
#1302 Re-release with Helm 3.13.2 (3.13.1 cannot download charts)

I am a terraform noob, sorry if this isn’t in the right place! Originally: awslabs/data-on-eks#376

While following this blog to deploy the jupyterhub blueprint, Terraform fails with the error:

module.eks_blueprints_addons.module.karpenter.helm_release.this[0]: Creating...
╷
│ Error: could not download chart: unexpected status from HEAD request to <https://public.ecr.aws/v2/karpenter/karpenter/manifests/v0.32.1>: 400 Bad Request
│ 
│   with module.eks_blueprints_addons.module.karpenter.helm_release.this[0],
│   on .terraform/modules/eks_blueprints_addons.karpenter/main.tf line 9, in resource "helm_release" "this":
│    9: resource "helm_release" "this" {
│ 
╵

I was eventually able to replicate this with Helm 3.13.1, but 3.13.2 works as expected:

helm install <oci://public.ecr.aws/karpenter/karpenter>  --version v0.32.1 --generate-name

Error: INSTALLATION FAILED: unexpected status from HEAD request to <https://public.ecr.aws/v2/karpenter/karpenter/manifests/v0.32.1>: 400 Bad Request

However, the local version of helm didn’t seem to affect Terraform, so I’m guessing this provider needs to be re-released with https://github.com/hashicorp/terraform-provider-helm/pull/1300/files#diff-33ef32bf6c23acb95f5902d7097b7a1d5128ca061167ec0716715b0b9eeaa5f6R13

Terraform, Provider, Kubernetes and Helm Versions

terraform -v
Terraform v1.6.5
on linux_amd64

• provider registry.terraform.io/hashicorp/aws v5.27.0
• provider registry.terraform.io/hashicorp/cloudinit v2.3.2
• provider registry.terraform.io/hashicorp/helm v2.12.0
• provider registry.terraform.io/hashicorp/kubernetes v2.24.0
• provider registry.terraform.io/hashicorp/random v3.1.0
• provider registry.terraform.io/hashicorp/time v0.9.2
• provider registry.terraform.io/hashicorp/tls v4.0.5


Steps to Reproduce

git clone <https://github.com/awslabs/data-on-eks.git>
cd data-on-eks/ai-ml/jupyterhub && chmod +x install.sh

Expected Behavior

Helm installs chart

Actual Behavior

Error: INSTALLATION FAILED: unexpected status from HEAD request to https://public.ecr.aws/v2/karpenter/karpenter/manifests/v0.32.1: 400 Bad Request

References

Looks like this version bump was just fixed, all we need is a new release?
https://github.com/hashicorp/terraform-provider-helm/pull/1300/files#diff-33ef32bf6c23acb95f5902d7097b7a1d5128ca061167ec0716715b0b9eeaa5f6R13

Community Note

• Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request
• If you are interested in working on this issue or have submitted a pull request, please leave a comment

Hao Wang avatar
Hao Wang

I believe this impacts a wide range of deployments


Dominique Dumont avatar
Dominique Dumont

the AWS doc on Karpenter indicates this URL:

<oci://public.ecr.aws/karpenter/karpenter>

You may want to try that one. HTH

ECR Public Gallery

Amazon ECR Public Gallery is a website that allows anyone to browse and search for public container images, view developer-provided details, and see pull commands

1
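
In helm_release terms, Dominique's suggestion splits the OCI reference into repository and chart (a sketch against the hashicorp/helm provider; the namespace and version pin are copied from the thread above):

resource "helm_release" "karpenter" {
  name      = "karpenter"
  namespace = "karpenter"

  # For OCI registries, repository is the registry path and chart is the chart name.
  repository = "oci://public.ecr.aws/karpenter"
  chart      = "karpenter"
  version    = "v0.32.1"
}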
TechHippie avatar
TechHippie

Hello Team - I am using the fully-private-cluster Terraform blueprint to create a private EKS cluster with 2 managed node groups. I am trying to restrict the ECR repositories the EKS cluster can pull images from by modifying the AmazonEC2ContainerRegistryReadOnly policy (as a custom policy) to contain specific repositories instead of all. This setup works for the first node group, but for the second node group it fails, saying a policy with the same name exists. How can I make it use the existing IAM policy if it exists? I tried to use the aws_iam_policy data source, but then it fails on the first node group's execution, as the IAM policy doesn't exist with that name yet. Any guidance on troubleshooting will be of great help.

Hao Wang avatar
Hao Wang

should work by attaching the same policy to 2 node groups
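
A sketch of that arrangement (role names and repository ARNs are placeholders): define the restricted ECR read policy once, outside the node group module, and attach it to both node group roles so the second instantiation doesn't try to recreate it.

data "aws_iam_policy_document" "ecr_read_restricted" {
  statement {
    # GetAuthorizationToken does not support resource-level permissions.
    actions   = ["ecr:GetAuthorizationToken"]
    resources = ["*"]
  }

  statement {
    actions = [
      "ecr:BatchCheckLayerAvailability",
      "ecr:BatchGetImage",
      "ecr:GetDownloadUrlForLayer",
    ]
    resources = ["arn:aws:ecr:us-east-1:111111111111:repository/allowed-*"]
  }
}

resource "aws_iam_policy" "ecr_read_restricted" {
  name   = "eks-ecr-read-restricted"
  policy = data.aws_iam_policy_document.ecr_read_restricted.json
}

resource "aws_iam_role_policy_attachment" "node_groups" {
  # Attach the single shared policy to each node group role.
  for_each   = toset(["eks-node-group-1-role", "eks-node-group-2-role"])
  role       = each.value
  policy_arn = aws_iam_policy.ecr_read_restricted.arn
}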

TechHippie avatar
TechHippie

Yeah, I am not able to do that during the second instantiation of the node group module. In the blueprints, the node group's policies and roles are created within the node group module.

Hao Wang avatar
Hao Wang

do you have the link to the blueprint? Is it possible to pass a variable for the name of both node groups' policies?

Hao Wang avatar
Hao Wang

ok, this will use a community module, I guess you may customize the code?

2023-11-30

Josh B. avatar
Josh B.

This might be very premature and maybe just announced at re:Invent, but I noticed Redis has a serverless option now and was wondering if Terraform updated the resource for it (couldn't find it). If so, I would be willing to put in a PR to the cloudposse module.

Josh B. avatar
Josh B.
Amazon ElastiCache Serverless for Redis and Memcached is now available | Amazon Web Services

Today, we are announcing the availability of Amazon ElastiCache Serverless, a new serverless option that allows customers to create a cache in under a minute and instantly scale capacity based on application traffic patterns. ElastiCache Serverless is compatible with two popular open-source caching solutions, Redis and Memcached. You can use ElastiCache Serverless to operate a […]

ikar avatar

that’s lovely!

Release notes from terraform avatar
Release notes from terraform
04:43:33 PM

v1.7.0-alpha20231130 1.7.0-alpha20231130 (November 30, 2023) UPGRADE NOTES:

Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earlier versions, users that require interaction between different minor series should ensure they have upgraded to the following patches:

Users…

Release v1.7.0-alpha20231130 · hashicorp/terraform

1.7.0-alpha20231130 (November 30, 2023) UPGRADE NOTES:

Input validations are being restored to the state file in this version of Terraform. Due to a state interoperability issue (#33770) in earli…
