#terraform (2021-02)

terraform Discussions related to Terraform or Terraform Modules

Archive: https://archive.sweetops.com/terraform/

2021-02-01

Laurynas avatar
Laurynas

I’m updating the Terraform version used by our team from 0.13.5 to 0.14.5. So as I understand it, with terraform 0.14 the .terraform.lock.hcl file should be committed to git?

1
RB avatar


Terraform CLI to create a dependency lock file and commit it to version control along with your configuration

loren avatar

i think the lockfile workflows are still a “work in progress”… for example, in reusable modules, where perhaps you run tests, it probably doesn’t make sense to commit the lockfile alongside the tests

1
loren avatar

and if you are deploying the same config, same version across many many accounts each with their own tfstate, it doesn’t really make sense to commit the lock file for each one of them. a little orchestration is needed to create/update the lock file centrally, and put it in place before init/plan/apply
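(Editor's sketch of that orchestration, assuming a hypothetical layout with one root module directory per account; terraform providers lock is the 0.14+ subcommand:)

# generate the lock file once, centrally
cd roots/_template
terraform providers lock

# put it in place in each root before init/plan/apply
for d in ../*/; do cp .terraform.lock.hcl "$d"; done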

loren avatar

i posted my thoughts on a similar upgrade a couple weeks back… https://sweetops.slack.com/archives/CB6GHNLG0/p1610817829068500

fwiw, updated a decent number of tf states from 0.13.5 to 0.14.4 over the last week… no significant issues, but a few things took a little while to understand:

• sensitive values may be marked in the provider, i.e. an iam access/secret key. you cannot for_each over objects containing these values, but you can for_each over non-sensitive keys and index into the object. any outputs containing provider-marked sensitive values must also be marked sensitive (a sketch follows below)
• some of the output handling is a little odd, particularly with conditional resources/modules and accordingly conditional outputs. in some places, outputting null as the false condition caused a persistent diff. worked fine in tf 0.13.5, but not in tf 0.14.4. changing it to "" fixed it :man-shrugging::skin-tone-2:
• the workflow around the new lock file, .terraform.lock.hcl, is quite cumbersome. it really clutters up the repo when you have a lot of root modules, and means you have to init each root somehow to generate the file, and commit it, anytime you want to update providers? no thanks! but, unfortunately, there is no way to disable it. the file is mandatory for a plan/apply. i’m using terraform-bundle already, setting up the plugin-cache in advance, restricting versions, and restricting network connectivity in CI. so i thought i could just remove the file after init, but no dice. you can remove it after apply, and don’t have to commit it (but that means CI will need to generate it)
• if you are updating from 0.12, you’ll likely want to (or need to) first update to tf 0.13 for the new provider/registry syntax, to get the old syntax out of your tf 0.12 tfstate
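As an editor's illustration of the first bullet, a minimal sketch (the IAM user/access-key resources and user names are assumptions, not from the thread): iterate over non-sensitive keys, index into the objects, and mark the output sensitive:

resource "aws_iam_user" "this" {
  for_each = toset(["alice", "bob"]) # for_each over non-sensitive keys
  name     = each.key
}

resource "aws_iam_access_key" "this" {
  for_each = aws_iam_user.this # the user map contains no sensitive values
  user     = each.value.name
}

output "secret_keys" {
  # index into objects whose .secret attribute is provider-marked sensitive
  value     = { for k, v in aws_iam_access_key.this : k => v.secret }
  sensitive = true # tf 0.14 requires this when the value is sensitive
}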

Laurynas avatar
Laurynas

Yes, your thoughts make a lot of sense. Our infrastructure is composed of 100s of small terraform components, each with its own state file. Until now we always used the default versions of terraform providers and didn’t worry about versioning. The new approach makes sense but I probably want to keep the same version of providers for all my infra.

I’m not sure: with this new approach, am I supposed to read the changelogs of all providers I use and only update manually? This seems like quite a lot of work because I’m a single devops engineer on our team…

loren avatar

if you weren’t bitten by provider versions before, i wouldn’t feel bad about just adding the lockfile to .gitignore

loren avatar

another pain point is that init only adds hashes for the platform you are on now. if your platform and your CI/teammate’s platforms are different (osx vs linux vs windows), the hash changes! so you actually need to run providers lock for each platform
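(A sketch of that, using terraform 0.14's providers lock subcommand with the usual platform triples:)

terraform providers lock \
  -platform=darwin_amd64 \
  -platform=linux_amd64 \
  -platform=windows_amd64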

1
2
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

fortunately, everyone uses geodesic to run terraform so this never happens. *hiding*

1
Vincent Van der Kussen avatar
Vincent Van der Kussen

Hi, does anyone here know of a way to handle data resources with nil values? I’m using google_kms_key_ring and it seems to return nil when a non-existing keyring name is provided

Rogerio Goncalves avatar
Rogerio Goncalves

hey wave

https://github.com/cloudposse/terraform-aws-efs/commit/53847b81f887f13a7cfec6132bf362bde6dd3788#diff-05b5a57c136b6ff5965[…]d184d9daa9a65a288eR41-R43 shouldn’t this change be a major release? changing encrypted from false to true will force recreation of the EFS resource

Bc compliance (#71) · cloudposse/terraform-aws-efs@53847b8
  • workflows updated * readme updated, file system encryption enabled by default
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please see our explanation here:

Bc compliance (#71) · cloudposse/terraform-aws-efs@53847b8
  • workflows updated * readme updated, file system encryption enabled by default
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For 0.x releases, it’s not conventional to bump major.

Rogerio Goncalves avatar
Rogerio Goncalves

thanks Erik.

roth.andy avatar
roth.andy
Announcing Version 2.0 of the Kubernetes and Helm Providers for HashiCorp Terraform

Version 2.0 of the Kubernetes and Helm providers includes a more declarative authentication flow, alignment of resource behaviors and attributes with upstream APIs, normalized wait conditions across several resources, and removes support for Helm v2.

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)
terraform-aws-modules/terraform-aws-pricing

Terraform module which calculates price of AWS infrastructure (from Terraform state and plan) - terraform-aws-modules/terraform-aws-pricing

Mohammed Yahya avatar
Mohammed Yahya

I’m putting my faith here https://www.infracost.io/

Cloud cost estimates for Terraform in pull requests | Infracost

Infracost shows cloud cost estimates for Terraform projects. It integrates into pull requests and allows developers and DevOps to see cost breakdowns and compare options upfront.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Interesting, how’d you find it?

Mohammed Yahya avatar
Mohammed Yahya

still in early development but with awesome community support, and it looks promising. i’m using it already; it doesn’t cover that many resources yet, but more than the one above ^^

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

We are all in AWS and they look like they have a fair amount of resources covered

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Now to work out how to hook this into Atlantis

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

You beat me to it

1
Mansoor Ebrahim avatar
Mansoor Ebrahim

trying to set up my terraform project with v0.14.4… however i’m getting an error:

Error: Unsupported argument

  on main.tf line 21, in provider "kubernetes":
  21:   load_config_file = false

here is my code:

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
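(Editor's note: the kubernetes provider v2.0, announced elsewhere in this archive, removed the load_config_file argument. A hedged sketch of the two usual fixes: drop the argument, or pin the provider below 2.0:)

# option 1: drop the removed argument (the 2.x provider no longer reads a
# kubeconfig by default, so the explicit credentials below are sufficient)
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

# option 2: pin the provider to the 1.x line and keep load_config_file
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.13"
    }
  }
}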

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Announcement: terraform-null-label v0.23.0 is now out. It allows setting the letter case of tag names and labels. Previously we forced tag names to be Title case and labels to be lower case. Now we allow some configuration. The primary impetus for this is that GCE does not allow uppercase letters in tag names, but we have taken it a step further based on other requests we have had over the years.

Note that with this version of null label, we have dropped support for Terraform 0.12. All future versions of our Terraform modules (once they are updated to terraform-null-label v0.23.0) are for TF 0.13.0 or above.

Release v0.23.0 Allow control of letter case of outputs · cloudposse/terraform-null-label

With this release, you gain control over the letter case of generated tag names and supplied labels, which means you also have control over the letter case of the ultimate id. Labels are the elemen…

2
sytten avatar

Is it possible to list the files uploaded in a remote run?

sytten avatar

I feel the upload part takes longer than it should, and since I work in a monorepo it might be due to some random stuff being uploaded

sytten avatar

I added this .terraformignore

sytten avatar
*
!infrastructure/
**/.terraform/
.git/
sytten avatar

thanks in advance if someone has a tip

sweetops avatar
sweetops

is there a more efficient way to do this?

content_type = lookup(var.mime_types, split(".", each.value)[length(split(".", each.value)) - 1])

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Can’t think of a better way off the top of my head. I don’t think terraform supports negative indexes like in some languages (e.g. split(".", each.value)[-1])

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

That said, I think it would be a lot more readable if you broke it down into locals.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hrmm… but with the each there, guess that won’t work either

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suppose you could call reverse and then pick index 0

loren avatar

since you aren’t assigning a default with lookup, you can at least just use the index…

var.mime_types[split(".", each.value)[length(split(".", each.value)) - 1]]

(not setting a default with lookup is deprecated, anyway)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You can also use:

var.mime_types[replace(each.value, "/^.*\\./", "")]
loren avatar

beat me to the replace option… i’m sure there’s some regex that’ll work

loren avatar

relying on the greediness of .* like that ought to work

sweetops avatar
sweetops

I think I like the reverse option, it’s readable to me

content_type = lookup(var.mime_types, reverse(split(".", each.value))[0])

loren avatar

either way, maybe leave a comment for your future self or teammates explaining what is happening

1
sweetops avatar
sweetops

haha, good point

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

alternatively… (to beat a dead horse)

locals {
  mime_extension = "/^.*\\./"
}

...

var.mime_types[replace(each.value, local.mime_extension, "")]
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that feels pretty readable

sweetops avatar
sweetops

i can’t predict how many . will be in the filename

sweetops avatar
sweetops

and I want to grab the last characters after the last . and use that in the lookup to determine the mime type
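(Editor's hedged sketch of one more option: regex() can grab everything after the last dot directly, and a lookup default avoids the deprecation loren mentioned; the fallback content type is an assumption:)

# regex() errors if the filename has no extension at all
content_type = lookup(var.mime_types, regex("[^.]+$", each.value), "application/octet-stream")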

Alex Jurkiewicz avatar
Alex Jurkiewicz

So much functionality is missing from the core functions. I’ve been waiting for endswith() for years now

loren avatar

that’s how golang devs think… you get only the most basic tools in stdlib, and from there everyone reimplements common utils functions over and over in slightly different ways

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

What would endswith do?

Alex Jurkiewicz avatar
Alex Jurkiewicz

endswith("abcdef", "f") == true
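(Until such a function exists, a minimal sketch of the same check in plain terraform, assuming the suffix contains no regex metacharacters:)

locals {
  # behaves like endswith("abcdef", "def"); anchor the suffix with $
  ends_with = length(regexall("def$", "abcdef")) > 0 # true
}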

loren avatar

this is a bit of a legit question… example from python:

>>> 'abcdef'.endswith('fed')
False
>>> 'abcdef'.rstrip('fde')
'abc'

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Well, you can do that easily enough with Terraform. Yes they do not have that exact function, but my feeling is that if you can do it without too much trouble then it is good enough.

> length(regexall("[fed]$", "abcdef")) > 0
true
> substr(trim("Xabcdef","fde"),1,-1)
abc

I will grant that the substr is kind of a hack, but it works, and I would rather see HashiCorp add something that cannot be hacked together, such as tr

$ echo 0123456789 | tr '[0-9]' '[g-p]'
ghijklmnop

loren avatar

any ideas on how to accept a “template string” as a variable, and template it, without using the deprecated template_file (which actually accepted strings not files)? the function templatefile() actually requires a file… for example, i used to template arns like this, so the user wouldn’t have to hard-code values if they didn’t want to:

data "template_file" "policy_arns" {
  count = length(var.policy_arns)

  template = var.policy_arns[count.index]

  vars = {
    partition  = data.aws_partition.current.partition
    region     = data.aws_region.current.name
    account_id = data.aws_caller_identity.current.account_id
  }
}
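(A data-source-free sketch from the editor, assuming the only placeholders are the three shown: chain replace() over the literal ${...} tokens; $${ is HCL's escape for a literal ${:)

locals {
  policy_arns = [
    for arn in var.policy_arns : replace(
      replace(
        replace(arn, "$${partition}", data.aws_partition.current.partition),
        "$${region}", data.aws_region.current.name
      ),
      "$${account_id}", data.aws_caller_identity.current.account_id
    )
  ]
}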
RB avatar

That would be a good feature request. template_string

RB avatar

You could use a tmp file: dump the template string to it, use templatefile(), and then delete the file

confusdcodr avatar
confusdcodr

Very convoluted but I imagine you could create the file using the local_file resource (https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) and then use templatefile() to template it

loren avatar

ideally the solution would not involve a resource as that involves state and the create/destroy lifecycle

Sarath Pantala avatar
Sarath Pantala
Error: Incorrect attribute value type

  on .terraform/modules/eks/workers_launch_template.tf line 40, in resource "aws_autoscaling_group" "workers_launch_template":
  40:   vpc_zone_identifier = lookup(

Inappropriate value for attribute "vpc_zone_identifier": set of string
required

can someone help me how to handle this

Alex Jurkiewicz avatar
Alex Jurkiewicz

The type of value you pass to this argument must be a set of strings. What value do you think the lookup is returning?

Sarath Pantala avatar
Sarath Pantala

i didn’t pass any value for *vpc_zone_identifier*

Sarath Pantala avatar
Sarath Pantala

values i am passing from a separate file called *clusterProperties.env*

TF_VAR_worker_groups_launch_template=$(cat <<EOF
[{
    name                                     = "worker"
    asg_desired_capacity                     = "1"                                           # Desired worker capacity in the autoscaling group.
    asg_max_size                             = "5"                                           # Maximum worker capacity in the autoscaling group.
    asg_min_size                             = "0"                                           # Minimum worker capacity in the autoscaling group.
    on_demand_base_capacity                  = "15"                                           # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances
    on_demand_percentage_above_base_capacity = "75"                                          # Percentage split between on-demand and Spot instances above the base on-demand capacity
    subnets                                  = "${TF_VAR_subnets}"                           # A comma delimited string of subnets to place the worker nodes in. i.e. subnet-123,subnet-456,subnet-789
    ami_id                                   = "${TF_VAR_ami}"                               # AMI ID for the eks workers. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI.
    asg_desired_capacity                     = "3"                                           # Desired worker capacity in the autoscaling group.
    asg_force_delete                         = false                                         # Enable forced deletion for the autoscaling group.
    instance_type                            = "${TF_VAR_worker_instance_type}"              # Size of the workers instances.
    override_instance_type                   = "t3.2xlarge"                                   # Need to specify at least one additional instance type for mixed instances policy. The instance_type holds  higher priority for on demand instances.
    on_demand_allocation_strategy            = "prioritized"                                 # Strategy to use when launching on-demand instances. Valid values: prioritized.
    on_demand_base_capacity                  = "7"                                           # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances
    on_demand_percentage_above_base_capacity = "75"                                          # Percentage split between on-demand and Spot instances above the base on-demand capacity
    spot_allocation_strategy                 = "lowest-price"                                # The only valid value is lowest-price, which is also the default value. The Auto Scaling group selects the cheapest Spot pools and evenly allocates your Spot capacity across the number of Spot pools that you specify.
    spot_instance_pools                      = 10                                            # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
    #spot_max_price                           = ""                                            # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
    #spot_price                               = ""                                            # Cost of spot instance.
    placement_tenancy                        = "default"                                     # The tenancy of the instance. Valid values are "default" or "dedicated".
    root_volume_size                         = "50"                                         # root volume size of workers instances.
    root_volume_type                         = "gp2"                                         # root volume type of workers instances, can be 'standard', 'gp2', or 'io1'
    root_iops                                = "0"                                           # The amount of provisioned IOPS. This must be set with a volume_type of "io1".
    key_name                                 = "${TF_VAR_ssh_key_name}"                      # The key name that should be used for the instances in the autoscaling group
    pre_userdata                             = "sudo usermod -l peks ec2-user && sudo usermod -d /home/peks -m peks && echo 'peks ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/90-cloud-init-users && sudo groupmod -n peks ec2-user && sudo mkdir -p /goshposh/log && chmod -R 777 /goshposh && chown -R 1000:1000 /goshposh && echo ${TF_VAR_friday_pub_key} >> /home/peks/.ssh/authorized_keys && echo ${TF_VAR_peks_pub_key} >> /home/peks/.ssh/authorized_keys"       # userdata to pre-append to the default userdata.
    additional_userdata                      = "yum install -y <https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm> && systemctl start amazon-ssm-agent"                                            # userdata to append to the default userdata.
    ebs_optimized                            = true                                          # sets whether to use ebs optimization on supported types.
    enable_monitoring                        = true                                          # Enables/disables detailed monitoring.
    public_ip                                = false                                         # Associate a public ip address with a worker
    kubelet_extra_args                       = "--node-labels=kubernetes.io/role=worker --kube-reserved=cpu=200m,memory=256Mi,ephemeral-storage=1Gi --system-reserved=cpu=200m,memory=256Mi,ephemeral-storage=3Gi --eviction-hard=memory.available<500Mi,nodefs.available<10%"
    autoscaling_enabled                      = true                                          # Sets whether policy and matching tags will be added to allow autoscaling.
    additional_security_group_ids            = "${TF_VAR_worker_additional_security_group_ids}"                                            # A comma delimited list of additional security group ids to include in worker launch config
    protect_from_scale_in                    = true                                          # Prevent AWS from scaling in, so that cluster-autoscaler is solely responsible.
    #suspended_processes                      = ""                                            # A comma delimited string of processes to to suspend. i.e. AZRebalance,HealthCheck,ReplaceUnhealthy
    #target_group_arns                        = ""                                            # A comma delimited list of ALB target group ARNs to be associated to the ASG
    #enabled_metrics                          = ""                                            # A comma delimited list of metrics to be collected i.e. GroupMinSize,GroupMaxSize,GroupDesiredCapacity

  }
]
EOF

)

Alex Jurkiewicz avatar
Alex Jurkiewicz

is this from a public terraform module?

Sarath Pantala avatar
Sarath Pantala

Yes

Sarath Pantala avatar
Sarath Pantala

for this value, where should i pass the variable *vpc_zone_identifier*?

Alex Jurkiewicz avatar
Alex Jurkiewicz

can you link to the public terraform module you are using?

Sarath Pantala avatar
Sarath Pantala

i can’t send it, it is confidential

Alex Jurkiewicz avatar
Alex Jurkiewicz

ok, so the module is private then.

In your EKS module, the value being provided to vpc_zone_identifier needs to be a set of strings. You will need to read the module’s code to understand what value is currently being used and how to fix it.

Sarath Pantala avatar
Sarath Pantala

as of now I am not passing this variable anywhere

Sarath Pantala avatar
Sarath Pantala

variable "vpc_zone_identifier" {
  type = string
}

Alex Jurkiewicz avatar
Alex Jurkiewicz

Two things:

  1. The code snippet in your original error message shows the value is being set with a lookup function. That function has a default value which is probably getting used in this case.
  2. The variable specification you just posted requires a value to be set. Terraform would error if you didn’t provide a value. So you must be setting something.
Gideon Bar avatar
Gideon Bar

I used https://github.com/cloudposse/terraform-aws-emr-cluster to terraform an EMR cluster. I then tried to ssh using the auto-generated key and couldn’t. How do I gain access to the master shell (ssh?) and view spark and zeppelin UI safely (ssh tunneling?)

cloudposse/terraform-aws-emr-cluster

Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Could you be using the wrong username?


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(don’t ssh as root user)

Gideon Bar avatar
Gideon Bar

which user should I be using?

Gideon Bar avatar
Gideon Bar

AWS documentation says hadoop@…

Gideon Bar avatar
Gideon Bar

In the applications I have Spark and Zeppelin

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I think amazon linux uses ec2-user

Sarath Pantala avatar
Sarath Pantala

can anyone connect for a *Zoom call or google meet* to solve the terraform error i am getting

Error: Incorrect attribute value type

  on .terraform/modules/eks/workers_launch_template.tf line 40, in resource "aws_autoscaling_group" "workers_launch_template":
  40:   vpc_zone_identifier = lookup(

Inappropriate value for attribute "vpc_zone_identifier": set of string
required
jose.amengual avatar
jose.amengual

copy your terraform code here

jose.amengual avatar
jose.amengual

and your input variables

jose.amengual avatar
jose.amengual

without that it’s pretty hard to help you

Sarath Pantala avatar
Sarath Pantala

I have Terraform scripts for eks which are on terraform v0.11.14; i need to upgrade to v0.12.0

Sarath Pantala avatar
Sarath Pantala

I need someone’s help

2021-02-02

OliverS avatar
OliverS

I have an EKS cluster and an EKS node group both created with your modules. Instances of that node group by default have the security group listed under “Cluster Security Group” in the AWS Console’s EKS cluster view tab called Networking. I’d like these instances to have an additional security group. How to do this? The workers_security_group_ids adds SG to the security group listed under “Additional Security Groups” of the cluster, so this will not work as instances do not have that security group.

Helder Dias avatar
Helder Dias

Hello Guys, one question regarding module https://github.com/cloudposse/terraform-aws-eks-iam-role

cloudposse/terraform-aws-eks-iam-role

Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role

Helder Dias avatar
Helder Dias

I can’t use it if the service account doesn’t already exist at the time of apply

Helder Dias avatar
Helder Dias
service_account_name      = var.external_secrets_service_acount 
Helder Dias avatar
Helder Dias

This account must already exist

Helder Dias avatar
Helder Dias

which is bad in case you want to recreate from scratch, and you can’t plan

Helder Dias avatar
Helder Dias

Even with depends_on it doesn’t work

Helder Dias avatar
Helder Dias

Any workaround for this ?

OliverS avatar
OliverS

I need to allow an ALB to communicate with pod that has an ingress and a nodeport service, in an EKS cluster that uses nodegroup. It seems like I have to add the ALB’s security group to that of the EKS instances, which were created by AWS EKS NodeGroup. But this does not seem possible out of the box with your EKS cluster module (at least at version 0.4). Am I going about this incorrectly?

rei avatar

I have this solution working but using the https://github.com/kubernetes-sigs/aws-load-balancer-controller from within the cluster. Works like a charm

kubernetes-sigs/aws-load-balancer-controller

A Kubernetes controller for Elastic Load Balancers - kubernetes-sigs/aws-load-balancer-controller

rei avatar
sajid2045/eks-alb-nginx-ingress

Helm chart for two layer ingress controller with alb doing ssl termination and nginx doing dynamic host based routing. - sajid2045/eks-alb-nginx-ingress

OliverS avatar
OliverS

The aws LB controller is what I use. Do you mind posting what your ingress yaml looks like?

rei avatar

@OliverS I am using a helmfile based on the cloudposse helmfiles. Take a look here: https://gist.github.com/reixd/914a19f2835690cca36db306025dcc85

OliverS avatar
OliverS

Thanks but this is not the AWS LB Controller, it is the nginx-ingress controller (which is used by the older AWS ALB Ingress Controller).

David van Ginneken avatar
David van Ginneken

Hello everyone.

David van Ginneken avatar
David van Ginneken

Trying to use this module and I’m banging my head trying to get access_points set up. What would the variable look like?

cloudposse/terraform-aws-efs

Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs

David van Ginneken avatar
David van Ginneken

Right now I have it set this way.

David van Ginneken avatar
David van Ginneken

access_points = {
  example = {
    posix_user = {
      gid = "55007"
      uid = "55007"
    },
    root_directory = {
      creation_info = {
        gid         = "55007"
        uid         = "55007"
        permissions = "0755"
      }
    }
  }
}

loren avatar

the var is defined as type map(map(map(any)))… believe it or not, any does not actually mean absolutely anything… each object in the map needs to be the same type and have the same attributes

loren avatar

though, i think the root_directory key should not be specified like that… only keys should be posix_user and creation_info

loren avatar
access_points = {
  example = {
    posix_user = {
      gid = "55007"
      uid = "55007"
    },
    creation_info = {
      gid = "55007"
      uid = "55007"
      permissions = "0755"
    }
  }
}
loren avatar

this is a strange way to type the object IMO, since each attribute is required. it would be much more clear to use an actual object({...}) type
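(For illustration, an editor's sketch of what that clearer typing could look like; attribute names are taken from the snippet above, not from the module's actual definition:)

variable "access_points" {
  type = map(object({
    posix_user = object({
      gid = string
      uid = string
    })
    creation_info = object({
      gid         = string
      uid         = string
      permissions = string
    })
  }))
}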

loren avatar

the indexing into var.access_points while also using for_each is also strange… i feel like it should just be:

gid            = each.value.posix_user.gid
uid            = each.value.posix_user.uid
secondary_gids = each.value.posix_user.secondary_gids
David van Ginneken avatar
David van Ginneken

Thanks for the pointers! I’ll look into it tomorrow; it is getting late here

David van Ginneken avatar
David van Ginneken

Just confirming that variable formatting works perfectly. Many thanks

David van Ginneken avatar
David van Ginneken

Plus a closing “}” of course

jose.amengual avatar
jose.amengual

Thinking of getting a Mac with an M1 chip, anyone developing in terraform running one?

jose.amengual avatar
jose.amengual

BigSur killed my screen so I’m a bit skeptical that all the tools will work on an M1

pjaudiomv avatar
pjaudiomv

I would wait, all of the tooling isn’t quite ready yet. Somewhere there is a GitHub issue tracking brew support

pjaudiomv avatar
pjaudiomv

My friend got one and ended up sending it back because it was too much of a pain

pjaudiomv avatar
pjaudiomv

However if you do end up getting one, please update us on your hard-won experience :)

jose.amengual avatar
jose.amengual

it is already sounding far from ideal

aaratn avatar

Agree with what @pjaudiomv said, haven’t seen/heard a success story for the M1 mac used by dev/devops teams. I have heard the same return stories, just like Patrick’s friend

loren avatar

i’d go with the m1 and use a local VM or a remote dev environment as an interim solution. the speed and power improvements seem worth it IMO

aaratn avatar
If you need to install Rosetta on your Mac

Rosetta 2 enables a Mac with Apple silicon to use apps built for a Mac with an Intel processor.

Helder Dias avatar
Helder Dias

I think apple wants paying customers to do the beta test and rollout lol

aaratn avatar

Lessons learned on day 1 of an Apple M1 device (not my primary, just for testing):

  • arch -x86_64 <cmd> to force an arch
  • if you run your shell x86_64, everything else will automatically be Intel
  • Go works! Just build for amd64 and run it like normal, Rosetta does its job.
jose.amengual avatar
jose.amengual

yes, I could do that too

Nick Marchini avatar
Nick Marchini

wave

wave1
Nick Marchini avatar
Nick Marchini

Getting an error message when trying to set up a cluster using the latest version 0.27.0 of the module for Elasticsearch.

Error: invalid value for domain_name (must start with a lowercase alphabet and be at least 3 and no more than 28 characters long. Valid characters are a-z (lowercase letters), 0-9, and - (hyphen).)

  on .terraform/modules/elasticsearch-cluster/main.tf line 102, in resource "aws_elasticsearch_domain" "default":
 102:   domain_name           = module.this.id

I can see in main.tf the following code

resource "aws_elasticsearch_domain" "default" {
  count                 = module.this.enabled ? 1 : 0
  domain_name           = module.this.id
 

But the context.tf file doesn’t contain anything for id

module "this" {
  source  = "cloudposse/label/null"
  version = "0.22.1" // requires Terraform >= 0.12.26

  enabled             = var.enabled
  namespace           = var.namespace
  environment         = var.environment
  stage               = var.stage
  name                = var.name
  delimiter           = var.delimiter
  attributes          = var.attributes
  tags                = var.tags
  additional_tag_map  = var.additional_tag_map
  label_order         = var.label_order
  regex_replace_chars = var.regex_replace_chars
  id_length_limit     = var.id_length_limit

  context = var.context
}

I want to use a string that is 17 chars long but can only use one that is 10 or the error occurs. I am passing my domain name to the module variable for name. Is this the right way to set the domain name?

Alex Jurkiewicz avatar
Alex Jurkiewicz

are you setting id_length_limit to 28?

Alex Jurkiewicz avatar
Alex Jurkiewicz

(for the null label module)

Nick Marchini avatar
Nick Marchini

Nope, not setting that. i figured out it was the name variable that forms part of the id. I changed that and it works, but I need to figure out how to construct the id the way I want

Alex Jurkiewicz avatar
Alex Jurkiewicz

If you want a specific thing for the id, maybe just hardcode it

Alex Jurkiewicz avatar
Alex Jurkiewicz

I’m trying to write variable validation to ensure a list of type = list(number) contains no null values. So I need a test that returns false if the list contains null.

This doesn’t work (!): contains([null,1,2], null) (“Invalid value for “value” parameter: argument must not be null.”)

This does, but is much uglier: length([for i in [null,1,2] : i if i == null]) == 0.

Any better suggestion?
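(An editor's hedged sketch, assuming terraform >= 0.14 where alltrue() exists:)

variable "numbers" {
  type = list(number)

  validation {
    condition     = alltrue([for i in var.numbers : i != null])
    error_message = "The numbers list must not contain null values."
  }
}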

Mohammed Yahya avatar
Mohammed Yahya

Does anyone have Architectural Decision Records for Terraform as an example? I want to learn more about why you picked Terraform.

aaratn avatar

Following this, @Mohammed Yahya please do let me know if you get this from other sources

1
jose.amengual avatar
jose.amengual

is there anything better than terraform yet? that’s the question I’d ask

jose.amengual avatar
jose.amengual

when comparing community sizes there is nothing to compare to TF

jose.amengual avatar
jose.amengual

there are many other things to include in the comparison table when evaluating

Mohammed Yahya avatar
Mohammed Yahya

I agree with you. Also, we need this ADR to show C-level why they need Terraform, and to compare it to CDK and Pulumi, which are starting to gain popularity

Zach avatar

I get the impression a lot of pulumi’s popularity is pulumi hyping itself

2
Mohammed Yahya avatar
Mohammed Yahya

true

Zach avatar

I can’t prove that of course, but I see a lot of blog spam and ads from them

Mohammed Yahya avatar
Mohammed Yahya

well, they pioneered the CDK approach, that counts for them

Zach avatar

Oh yah, it might be a great product! I’m not even disputing that

roth.andy avatar
roth.andy

That was an easy one for us. As far as open source tools go, it has the highest market share, the highest level of community support, and the least amount of vendor lock-in

roth.andy avatar
roth.andy

I’ve learned to word this stuff as “This is the tool your technical team is recommending.” rather than “Is it okay if we use this?“.

1
Andrew Nazarov avatar
Andrew Nazarov

Sorry for the off-topic, but I would like to ask @Mohammed Yahya if you could share some details about your ADR workflow. I’m interested in how it works for you and your team, how often you do this (and for which types of decisions), where you store records (git/confluence, a dedicated repo/with the code), and any tooling you are using. Basically, the question is about your feelings on whether it really helps you and your team/org. If you have any examples of your ADRs that you can share I will appreciate it a lot.

One of our clients tried the ADR approach, but it didn’t last long. Probably the majority didn’t see any value in following it, and the ones who did gave up because they were only a few. But it’s a different story. As for me, I like the idea.

this1
Mohammed Yahya avatar
Mohammed Yahya

We use ADRs for breaking the ice and destroying the silo around a specific technology. The team can disagree on choosing between two technologies, like the famous one: should we use Jenkins, and why (some hate Jenkins, others don’t, so we break the tie using an ADR). It helps a lot knowing what drives the company’s choices from the start. We initially used a repo called technology-adrs, with a bunch of md files inside, each describing an ADR for a choice we need to make

tech-adr
|-- cloud.md
|-- iac.md
|-- cicd.md
|-- frontend.md
|-- backend.md
|-- sec.md

now we are thinking of moving to stackshare.io for our stack listing and choices; here’s a sample I just created https://stackshare.io/mhmdio/decisions/105668378243793712 this will make us transparent, ease on-boarding, and help the team understand why we chose something over the other.

but I could not find any decent ADR for Terraform. I want to know why enterprises are using it; working with enterprise clients, it is hard to convince them to move and change the way they deal with the cloud or migrate to it.

2
3

2021-02-03

Adrian Wnuk avatar
Adrian Wnuk

Hello guys,

How can I define separate configuration for blocks of cors rules with this module? https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn

I have module configuration something like this

cors_allowed_origins = ["https://example.com", "*"]

cors_allowed_methods = ["PUT", "DELETE"]
cors_allowed_headers = ["*"]
cors_expose_headers = []
cors_max_age_seconds = 300

and this configuration creates 3 cors blocks for me with origins example.com, *, and the aliases defined in the module (this is pretty good), but I can’t edit any options for these origins like allowed_methods or allowed_headers. All of the created cors blocks have the same configuration

Is there any solution to do this using only this module?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Anyone using the terraform-aws-vpc module? You’re probably running into this now:

https://github.com/terraform-aws-modules/terraform-aws-vpc/issues/581

Terrafrom 11 refresh failed due multiple VPC Endpoint Services matched · Issue #581 · terraform-aws-modules/terraform-aws-vpc

Started getting error for terraform 11 and module version 1.72 Error: Error refreshing state: 1 error occurred: * module.vpc.data.aws_vpc_endpoint_service.s3: 1 error occurred: * module.vpc.data.aw…

loren avatar

i was, but i updated to the latest module version and it was fixed


Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Ah! Didn’t notice.

Frank avatar

I’m trying to upgrade our Terraform from 0.13.5 to 0.14.5 but I’m running into an issue. All outputs of the terraform-aws-ecs-container-definition module are giving me an Error: Output refers to sensitive values

Is anyone familiar with this error and how it could be fixed? Should the outputs of the module be changed with sensitive = true or is there something on my end I have to change?

loren avatar

yes, in tf 0.14, the provider has the ability to mark attributes of resources as sensitive. if you output such an attribute, the output must also be marked as sensitive

Frank avatar

These outputs are only generated in the ecs-container-definition module and are then fed into an ecs-alb-service-task

Frank avatar

We aren’t outputting them ourselves though

loren avatar

same deal, the modules need to treat them as sensitive

Frank avatar

I see some modules do have sensitive = true for some of their outputs

Frank avatar

Seems like I’ll have to make some PRs then to add this to the ones I mentioned

1
Frank avatar

I would have assumed that the tests would catch this, or my usecase is just completely different from those tests

loren avatar

tests are hard

Frank avatar

It’s hard to cover all use cases, I know all too well I’m afraid haha

RB avatar

how does one set the required_version to be picked up automatically by atlantis for 0.13 or 0.14 ?

RB avatar

my default version in atlantis is set to 0.12.30 but i’m trying to run a plan in a module that has the following block, and atlantis cannot seem to interpret it as using 0.13 or 0.14. the module is applied using tf 0.14.5.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}
RB avatar

error from atlantis.

Warning: Provider source not supported in Terraform v0.12

  on versions.tf line 3, in terraform:
   3:     aws = {
   4:       source = "hashicorp/aws"
   5:     }

A source was declared for provider aws. Terraform v0.12 does not support the
provider source attribute. It will be ignored.


Error: Unsupported Terraform Core version
RB avatar

Maybe just setting it in the repo atlantis.yaml is a workaround for now
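(For reference, a sketch of that workaround in a repo-level atlantis.yaml; the dir and pinned version are assumptions:)

version: 3
projects:
  - dir: .
    terraform_version: v0.14.5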

Patrick Jahns avatar
Patrick Jahns

@RB Struggled with this today as well - atlantis can only work with = and not with >= right now. https://github.com/runatlantis/atlantis/issues/1217

atlantis does not interpret terraform.required_version specifiers other than '=' · Issue #1217 · runatlantis/atlantis

Currently, it appears that the code in project_command_builder.go only accepts terraform.required_version specifications that exactly specify a single version, rather than a range or other version …

1
OliverS avatar
OliverS

I’m upgrading some modules that work fine in terraform 0.12, to terraform 0.13. Got the terraform init to complete. Had to up the version on some third-party modules. The terraform apply gives me several errors “Provider configuration not present”. Unfortunately I do not know how to address this:

To work with
module.eks_main.module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3]
its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy
module.eks_main.module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3],
after which you can remove the provider configuration again.

How do I re-add the provider: in what file (the eks_main main.tf? the vpc module main.tf? etc), and would it just be like

provider "aws" {
    region = "us-east-1"
}
RB avatar

There’s a secret replace command that needs to be run

pjaudiomv avatar
pjaudiomv

terraform state replace-provider "registry.terraform.io/-/aws" "registry.terraform.io/hashicorp/aws"

2
1
RB avatar

Looks like op might need to replace the null provider tho

loren avatar

this ought to show you all providers from the config and the tfstate, so you can map out what to replace:

terraform providers
2
OliverS avatar
OliverS

geez, thanks so much guys, I did not know these commands, they did the job (I did the replace on null, local, template, and aws)

1
OliverS avatar
OliverS

For upgrade from 0.12 to 0.14, the docs say to first upgrade to 0.13. Does this mean for 0.13 just the init + validate + verify that plan created, or does it also require apply?

RB avatar

i do this and have had good luck

rm -rf .terraform/

# upgrade from tf12 to tf13
tfenv use latest:^0.13
terraform init -upgrade
terraform state replace-provider "registry.terraform.io/-/aws" "hashicorp/aws" -yes
terraform apply
terraform 0.13upgrade -yes
terraform init
terraform apply

# upgrade from tf13 to tf14
tfenv use latest:^0.14
terraform init
terraform apply
1
RB avatar

some of it might be extra

RB avatar
RB
03:40:01 PM

¯\_(ツ)_/¯

OliverS avatar
OliverS

what’s tfenv

RB avatar
tfutils/tfenv

Terraform version manager. Contribute to tfutils/tfenv development by creating an account on GitHub.

1
khabbabs avatar
khabbabs
Install - TFSwitch

A command line tool to switch between different versions of terraform (install with homebrew and more)

RB avatar

more chars in the command line and fewer stars tho

RB avatar

what are some of the bennies with tfswitch over tfenv ?

khabbabs avatar
khabbabs

haven’t seen tfutils before but it seems very similar

khabbabs avatar
khabbabs

and as you said, fewer chars haha

OliverS avatar
OliverS

Just had a look, both have excellent capabilities if you have to switch between several terraform versions regularly

OliverS avatar
OliverS

BTW just saw this in the upgrade docs for 0.14:
Terraform v0.14 does not support legacy Terraform state snapshot formats from prior to Terraform v0.13, so before upgrading to Terraform v0.14 you must have successfully run terraform apply at least once with Terraform v0.13 so that it can complete its state format upgrades.

1
RB avatar

seems to match my commands above ^

Tomek avatar

is there a way to get information on the last terraform run (apply/plan)? Basically trying to do something like this feature in terraform cloud https://www.terraform.io/docs/cloud/run/manage.html

RB avatar

if you have a tfstate s3 with versioning on, you can look at previous versions

RB avatar

i havent used it but this might come in handy https://github.com/camptocamp/terraboard

camptocamp/terraboard

A web dashboard to inspect Terraform States - camptocamp/terraboard

Tomek avatar

ah nice, thanks! yea, I was inspecting the tfstate hoping to see if there was a lastRun type param that might also include any errors that occurred, but I don’t think that exists

Tomek avatar

(we do use versioned s3 state backends)

1
RB avatar

for last run information, you’d need a CICD for terraform like atlantis

khabbabs avatar
khabbabs

@RB have you run terraboard in aws? what’s the recommended way… fargate? ECS?

khabbabs avatar
khabbabs

ah, I went straight to the github page rather than reading

np1
RB avatar

it’s on my todo when time permits

RB avatar

ill be free by Q3 maybe haha

Mohammed Yahya avatar
Mohammed Yahya

+1 for Terraboard, run it locally for a start using docker

2
bruno avatar

We use terraboard in dev environments, very convenient for developers to understand what’s in their TF states or search resources by type/name

1
Nikola Milic avatar
Nikola Milic

Hello everyone! wave I have spent around a week or two trying to set up a basic terraform configuration base for my example project and heard many opinions (yes/no to Terragrunt, yes/no to workspaces), so in the abundance of conflicting information and incomplete tutorials (tutorials which advocate an idea but do not showcase it fully) I’ve kind of lost focus. This is when I decided a Stack Overflow post might be a good idea, but that has also backfired since I haven’t gotten any answers to my broad questions, even though people replied.

TLDR from SO: I’d like to have a multi-env (dev, stage) Terraform IaC setup that uses modules and clearly separates prod and non-prod state management. (For the time being, the resources provisioned do NOT matter to me; that can be as simple as an s3 bucket, which I tried to illustrate.)

Is it okay if I post it here? I’m looking for help in understanding how to set this up, and of course to change my approach if it is too restrictive/plain wrong. Thanks!

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Go ahead. Suggest you post the content in replies on a thread and not as direct messages in the channel, to avoid creating noise for people.

Nikola Milic avatar
Nikola Milic

Sure, thanks! Here’s the post on SO. I’m posting it just to avoid duplicating text since I tried my best to describe scenario I’m in there, but I’ll gladly discuss this topic here if anyone is for it, and post my resolution as an answer on SO later on. https://stackoverflow.com/questions/66024950/how-to-organize-terraform-modules-for-multiple-environments

1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)
1
johntellsall avatar
johntellsall

Nikola thanks for posting! I have the same question: what’s the simplest real-world solution to a basic Terraform setup? The SO answers seem to all say “it’s complicated” Yoni thanks for answering.

Nikola Milic avatar
Nikola Milic

Yeah, I have added my own answer since my avoidance of terraform workspaces led me to this rat race for which I didn’t find an answer. So, as one person in the comments replied, workspaces were created to solve this issue of multi-env projects. I successfully did what I wanted to do, but for the moment with local state keeping. This is all very much a work in progress (more R&D), but I think I’ve realized the path I want to go down. I’ll try to expand my answer with an example on SO a bit later on.

1
Julian avatar

Hey everybody - trying to find the best way to import / generate baseline configurations from an AWS environment into terraform code to then edit. I’ve been under a small rock, so are we still in the days of the predefined resource + import or is there a more streamlined solution I’ve been unaware of?

loren avatar

i recall someone using a combination of import -allow-missing-config and something else, maybe state show?, to write out a near-working config…
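(An editor's hedged sketch of that combination; the resource address and bucket name are made up, and -allow-missing-config still existed in terraform 0.14:)

terraform import -allow-missing-config aws_s3_bucket.assets my-bucket-name
terraform state show -no-color aws_s3_bucket.assets > imported.tf
# then prune read-only attributes from imported.tf until plan is clean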

Julian avatar
Julian
09:09:19 PM

much rejoicing

Julian avatar

Thanks @loren I’ll take a look

Julian avatar

yup, simple, and works wonderfully

1
1

2021-02-04

Michael Koroteev avatar
Michael Koroteev

Hi, we are trying to update the “eks-cluster” module (version 0.32) and we started encountering this error when running terraform plan:

Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: i/o timeout

We suspect it is because of the kubernetes provider version, which was upgraded recently, but in their docs we don’t see any breaking changes regarding this existing configuration:

provider "kubernetes" {
  token                  = join("", data.aws_eks_cluster_auth.eks.*.token)
  host                   = join("", data.aws_eks_cluster.eks.*.endpoint)
  cluster_ca_certificate = base64decode(join("", data.aws_eks_cluster.eks.*.certificate_authority.0.data))
}

Did anyone encounter this issue ? thanks

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Do you have var.apply_config_map_aws_auth = false?

Michael Koroteev avatar
Michael Koroteev

nope, we are using the default “true” value. we found a workaround where removing the resource from the state solves the issue, but we would obviously need a better option. If I set that variable to false, I still need to apply the configMap (only this time separately or manually), right? so what is the difference?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

If you were setting apply… to false then the provider would not get configured, so I thought that might be what was happening. Seems like a bug in the Kubernetes provider. Did you check for open issues?

Michael Koroteev avatar
Michael Koroteev

yes, in that case the configMap will not be created and the provider won’t do anything. In the open issues regarding this problem I only found the workaround with terraform state rm. I can try working with this var, but I still need the configmap executed

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You might try setting var.apply_config_map_aws_auth = false to create the cluster, then set it to true and update the cluster. It is possible there is a new race condition or something. @Andriy Knysh (Cloud Posse) would you please review the EKS cluster auth in light of the new Kubernetes provider?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Michael Koroteev You might try updating to the current eks-cluster module, v0.34.1. I just tried it and it worked for me.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this

dial tcp [::1]:80
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

looks like it tries to connect to a local host cluster (and obviously fails)

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Right. Question is: does this seem something that could be new due to the new v2.0 Kubernetes Terraform Provider ?

Announcing Version 2.0 of the Kubernetes and Helm Providers for HashiCorp Terraform

Version 2.0 of the Kubernetes and Helm providers includes a more declarative authentication flow, alignment of resource behaviors and attributes with upstream APIs, normalized wait conditions across several resources, and removes support for Helm v2.

nnsense avatar
nnsense

This looks like the same issue I’m having; in my case it isn’t ipv6 but it’s still trying to call localhost:

GET /api/v1/namespaces/kube-system/configmaps/aws-auth HTTP/1.1
Host: localhost
User-Agent: HashiCorp/1.0 Terraform/0.14.6
Accept: application/json, */*
Accept-Encoding: gzip
---: timestamp=2021-02-10T00:25:35.225Z
2021-02-10T00:25:35.226Z [INFO]  plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/02/10 00:25:35 [DEBUG] Kubernetes API Resp
onse Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 503 Service Unavailable
Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) looks like a lot of people are having issues with EKS-cluster since the new v2.0 Kubernetes Terraform Provider. I tried but could not reproduce the problem. Let’s put our thinking caps on.


Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Behavior suggests

data.aws_eks_cluster.eks.*.endpoint

Is null. Could be a bug where the provider is not waiting for the data.

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)
Partial/Progressive Configuration Changes · Issue #4149 · hashicorp/terraform

For a while now I've been wringing my hands over the issue of using computed resource properties in parts of the Terraform config that are needed during the refresh and apply phases, where the …

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)
Token not being set in provider when trying to upgrade the cluster · Issue #1095 · hashicorp/terraform-provider-kubernetes

Terraform Version, Provider Version and Kubernetes Version Terraform version: 0.14.1 Kubernetes provider version: 1.13.0 Kubernetes version: 1.15 Affected Resource(s) Terraform Configuration Files …

Michael Koroteev avatar
Michael Koroteev

Yes, I’m working with terraform 0.14. I will try using the latest version of the module and let you guys know. anyway, I checked in the state itself and the data.aws_eks_cluster.eks.*.endpoint field contains the actual value.

nnsense avatar
nnsense

Me too (14.5 or something). It’s strange: the ENDPOINT variable, if set as an output, is indeed showing the right value, but if I enable TRACE it clearly shows localhost as the endpoint (it even shows my local server’s answer, the same I get if I run a curl POST myself locally). Basically I have the same config as your complete examples, with the addition of the iam role thing. If I apply, all good. If I then refresh, it shows unauthorised with the 503 to (it seems) localhost. If I destroy, exactly after having destroyed the nodes, it fails again with unauthorised; then I run it again and it destroys the rest of the things until only one module exists, which I usually delete with rm: module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0] (leaving kubernetes_config_map_ignore_role_changes = true). Should I try with v0.13?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@nnsense we did test the new changes with k8s provider on TF 0.13 (did not test it completely on 0.14)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, please try 0.13, we are going to deploy with 0.14 and find the issues if they exist

nnsense avatar
nnsense

Thanks! I’m trying as we speak

nnsense avatar
nnsense

“As we speak” was incredibly optimistic… 10 minutes to destroy the old one, 10 minutes to create the one with tf 0.13… :D

1
nnsense avatar
nnsense

Still creating... [6m20s elapsed]

nnsense avatar
nnsense

Hey, with tf 0.13 refresh worked without throwing that error… mmmmhh… Interesting!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ok thanks @nnsense

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

we are going to deploy/destroy with TF 0.14

nnsense avatar
nnsense

Thanks!!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Re: kubernetes provider issues with terraform 0.14

Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: i/o timeout

Posted more updates here: https://github.com/cloudposse/terraform-aws-eks-cluster/issues/104#issuecomment-792520725

Seems like no good fix is available yet. Anyone solve this?

Fail with I/O timeout due to bad configuration of the Kubernetes provider · Issue #104 · cloudposse/terraform-aws-eks-cluster

Describe the Bug Creating an EKS cluster fails due to bad configuration of the Kubernetes provider. This appears to be more of a problem with Terraform 0.14 than with Terraform 0.13. Error: Get "…

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Note the fact that Terraform has an example for how to do auth with the v2 of the provider: https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples/eks

Creation, deletion, and upgrades worked without any issues for me, using that code. Buuuuuuut I have no idea how complex the migration path is

hashicorp/terraform-provider-kubernetes

Terraform Kubernetes provider. Contribute to hashicorp/terraform-provider-kubernetes development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

thanks @Vlad Ionescu (he/him) I’ll look into that today

Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

Note the fact that my comment saying that got 3 reactions on GitHub. No idea why, but beware!

I still maintain that it worked fine for me on a new cluster scenario. And it worked fine for the students on my “Running containers on AWS” course

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

no worries, we’ll figure it out. Our current module also works in many cases, but does not work in some cases for some people

1
Vlad Ionescu (he/him) avatar
Vlad Ionescu (he/him)

If you have any questions, I’m here!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

We have released terraform-aws-eks-cluster v0.37.0 which resumes support for Kubernetes Terraform provider v1. We have rolled back to using the v1 provider in our root modules until the connectivity issue with the v2 provider is resolved. That is the best resolution to this issue we have to offer at this time.

We recommend using terraform-aws-eks-cluster v0.38.0 and terraform-aws-eks-node-group v0.19.0, and editing the versions.tf in your root module to include

terraform {
  ...
  required_providers {
  ...
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 1.13"
    }
  }
}
1
Vincent Sheffer avatar
Vincent Sheffer

I’m getting

ERROR: Post “http://localhost/api/v1/namespaces/kube-system/configmaps”: dial tcp 127.0.0.1 connect: connection refused

reliably on initial creation.

I’m using 0.38.0 of the module and the kubernetes provider is 1.13.1. I’m only creating the cluster (no nodegroups) initially as I’d like to keep my workspaces smaller and focused in Terraform Enterprise.

I used an earlier version of the module and never had this issue.

Module is pretty challenging to use in the current state.

Oh, and the error is in the resource “aws_auth_ignore_changes”.

Vincent Sheffer avatar
Vincent Sheffer

My issue turned out to be wrong versions. Specifically, I switched to the versions in examples/complete of version 0.38.0. I think the change that fixed it for me was kubernetes provider >= 2.0.

aaratn avatar
Infrastructure as code with Terraform and GitLab | GitLab

Documentation for GitLab Community Edition, GitLab Enterprise Edition, Omnibus GitLab, and GitLab Runner.

1
1
Kevin Huff avatar
Kevin Huff

Hey all, I have a question regarding the terraform-aws-elastic-beanstalk-environment module. We’re in the process of upgrading from a really old version (0.11.0), and I’m trying to get the environment name in Elastic Beanstalk to match what that version generated, which was just the stage. Looks like we were maybe setting it through the Environment tag. Now it’s some combination of namespace-name-stage. I assumed setting environment = var.stage would do it, but I can’t see what effect that has. Any assistance would be greatly appreciated.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

each of namespace, environment, stage, and name is optional

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can set just one of them, or a few, or all of them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module will generate IDs based on the pattern namespace-environment-stage-name, but the order of these is configurable as well

Kevin Huff avatar
Kevin Huff

So if I exclude any of them they won’t be used?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

at least one is required

Kevin Huff avatar
Kevin Huff

Sweet, thanks so much for the advice.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

note that we use the pattern namespace-environment-stage-name to uniquely identify all the resources for an organization (hence the namespace which is usually an abbreviation for the org)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

this is useful for consistency (ALL your resources are named the same way), and also for naming AWS global resources like S3 buckets (which are global, and the names cannot be reused between accounts)

Kevin Huff avatar
Kevin Huff

I also assume namespace comes into play for the EB URL?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for all global resources. if we name all of them using the pattern, there is very little chance of naming conflicts (only if somebody else is using the same pattern and the same values, which is very unlikely)

Kevin Huff avatar
Kevin Huff

is there a way to use a different pattern for env name and global resources?

Kevin Huff avatar
Kevin Huff

I’m just a little stuck since I’m working in old code, and a whole lot else will need to change if the env name has to change.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

all the values are optional

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can just use name which can be anything

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

you can also change the delimiter from - to whatever you want

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You can also use label_order to change the order of the labels.

Release notes from terraform avatar
Release notes from terraform
06:04:22 PM

v0.14.6 0.14.6 (February 04, 2021) ENHANCEMENTS: backend/s3: Add support for AWS Single-Sign On (SSO) cached credentials (#27620) BUG FIXES: cli: Rerunning init will reuse installed providers rather than fetching the provider again (#27582)

deps: Bump github.com/aws/aws-sdk-go by bflad · Pull Request #27620 · hashicorp/terraform

Changes: * backend/s3: Support for AWS Single-Sign On (SSO) cached credentials. Updated via: go get github.com/aws/aws-sdk-go@… and go mod tidy. Please note that Terraform CLI will not initiate o…

Reuse installed target dir providers in init by pselle · Pull Request #27582 · hashicorp/terraform

In init, we can check to see if the target dir already has the provider we are seeking and skip further querying/installing of that provider. This will help address concerns users are having where …

party_parrot2
curious deviant avatar
curious deviant

Hello,

I am creating an SSH key pair in TF and storing it in Secrets Manager for further use by related resources. While checking out support for SSH key generation via TF code, I came across the warning that the solution is not production grade, since the private key would be stored in the TF state file. How are others solving such use cases?

Mohammed Yahya avatar
Mohammed Yahya

everything in the state file is not production-grade ready

Mohammed Yahya avatar
Mohammed Yahya

I would secure the S3 bucket that hosts the state file very tightly, and enjoy using the Terraform hype - the TLS Terraform provider will make it easy to generate keys/certs when needed

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

How about AWS secrets manager?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Or use vault?

curious deviant avatar
curious deviant

Thank you for your responses. I want to generate the SSH key pair and store in secrets manager. Maybe I’ll use the local-exec
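
For reference, a minimal sketch of that approach with the tls and aws providers: generating the key pair in TF and storing the private key in Secrets Manager. The secret name is hypothetical, and note the private key still ends up in the state, which is exactly the warning being discussed:

resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_secretsmanager_secret" "ssh_key" {
  name = "example/ssh-private-key" # hypothetical name
}

resource "aws_secretsmanager_secret_version" "ssh_key" {
  secret_id     = aws_secretsmanager_secret.ssh_key.id
  secret_string = tls_private_key.ssh.private_key_pem
}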

2021-02-05

Thomas Hoefkens avatar
Thomas Hoefkens

Hi everyone! I have created an EKS cluster with the terraform_aws_eks module, and the cluster was created with a particular access key and secret key. On a client machine, I cannot use that access key but have to use another set of access keys and then assume a role using the aws sts command. After assuming the role, I have “admin access”. When I then call kubectl get pods, I do not have access. I thought I could solve this by including this bit in the cluster creation:

map_roles = [
  {
    rolearn  = "arn:aws:iam::<account-id>:role/my-role"
    username = "my-role"
    groups   = ["system:masters"]
  }
]

where rolearn is the role that I assumed… but when executing kubectl get pods, I still have no access. Could someone point me to a solution?

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You still need to use aws eks update-kubeconfig to generate your kubeconfig file, and you need to generate it with the --profile you want to use to access the cluster.

1
1
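
For reference, a sketch of that command; the cluster name, region, and profile are placeholders, and --role-arn is only needed when the kubeconfig should assume the role explicitly:

aws eks update-kubeconfig --name my-cluster --region us-east-1 --profile my-profile --role-arn arn:aws:iam::<account-id>:role/my-role
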
charlesz avatar
charlesz

wanted to auto scale aurora using https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appautoscaling_target but i cannot see an option on how to scale down

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

You force it to scale down by reducing max_capacity

charlesz avatar
charlesz

any pointers you can give me or better yet examples that i can play with?

charlesz avatar
charlesz

my goal is to scale my rds instances up/down depending on time
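
For time-based scaling specifically, a scheduled action on the Application Auto Scaling target can adjust min/max on a cron schedule. A rough sketch for an Aurora cluster; the cluster id and schedule are illustrative:

resource "aws_appautoscaling_target" "aurora" {
  service_namespace  = "rds"
  resource_id        = "cluster:my-aurora-cluster" # hypothetical cluster id
  scalable_dimension = "rds:cluster:ReadReplicaCount"
  min_capacity       = 1
  max_capacity       = 4
}

resource "aws_appautoscaling_scheduled_action" "scale_down_nightly" {
  name               = "aurora-scale-down"
  service_namespace  = aws_appautoscaling_target.aurora.service_namespace
  resource_id        = aws_appautoscaling_target.aurora.resource_id
  scalable_dimension = aws_appautoscaling_target.aurora.scalable_dimension
  schedule           = "cron(0 20 * * ? *)" # every evening, UTC

  scalable_target_action {
    min_capacity = 1
    max_capacity = 1 # forces the replica count back down
  }
}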

Vincent Van der Kussen avatar
Vincent Van der Kussen

Is there a way to remove a provider in the remote state that has been added but has a typo?

uselessuseofcat avatar
uselessuseofcat

hi,

how can I convert [""] to [] ?

loren avatar

or maybe a for loop if you need more conditions:

[for item in <list> : item if !contains(["", null, false], item)]
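
For the simple [""] case, compact() on its own may be enough, since it drops empty strings from a list of strings. A quick sketch:

locals {
  cleaned = compact([""])      # yields []
  example = compact(["", "a"]) # yields ["a"]
}
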
uselessuseofcat avatar
uselessuseofcat

Thanks,

But I already have it set here:

target_groups = join(", ", compact([try("${aws_alb_target_group.internal_ecs_tg[0].arn}", null),try("${aws_alb_target_group.external_ecs_tg[0].arn}", null),]))

That was set in locals, but when I’m calling it from aws_cloudformation_stack and there are internal and external target groups, I got [""] .

uselessuseofcat avatar
uselessuseofcat
TargetGroupsArns: ["${local.target_groups}"]
loren avatar

i can’t really follow that

uselessuseofcat avatar
uselessuseofcat

OK, thanks, indeed it’s complicated, oh love Terraform

Thanks again!

uselessuseofcat avatar
uselessuseofcat

How can I skip a property when calling a CloudFormation stack from Terraform? An empty value does not work

RB avatar

Try null

RB avatar

That’s worked for me in the past

chrism avatar

@Erik Osterman (Cloud Posse) https://github.com/cloudskiff/driftctl might be interesting to you

cloudskiff/driftctl

Detect, track and alert on infrastructure drift. Contribute to cloudskiff/driftctl development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, something to help manage it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Where I think we’d want to see this is in the TACOS

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so i didn’t look that closely last time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
 driftctl scan --from <tfstate+s3://acmecorp/states/terraform.tfstate>
Mohammed Yahya avatar
Mohammed Yahya

not working as expected BTW

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s cool - it will literally consume the statefile and compare that with what’s running.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

did you kick the tires?

Mohammed Yahya avatar
Mohammed Yahya

yes

Mohammed Yahya avatar
Mohammed Yahya
  drift:
    desc: driftctl - Detect, track and alert on infrastructure drift.
    cmds:
      - | 
        driftctl scan \
        --from tfstate+s3://{{.ACCOUNT_ID}}-{{.REGION}}-tf-state/state/dev-app.tfstate \
        --from tfstate+s3://{{.ACCOUNT_ID}}-{{.REGION}}-tf-state/state/dev-data.tfstate \
        --from tfstate+s3://{{.ACCOUNT_ID}}-{{.REGION}}-tf-state/state/dev-network.tfstate
    silent: true
    vars:
      ACCOUNT_ID:
        sh: aws sts get-caller-identity | yq e  ".0.Account" -
Mohammed Yahya avatar
Mohammed Yahya

the issue is it gets 97% of the state file as drift, whereas it should be the opposite

Mohammed Yahya avatar
Mohammed Yahya
[aws-vault] [drift] Found 148 resource(s)
[aws-vault] [drift]  - 4% coverage
[aws-vault] [drift]  - 7 covered by IaC
[aws-vault] [drift]  - 141 not covered by IaC
[aws-vault] [drift]  - 0 deleted on cloud provider
[aws-vault] [drift]  - 0/7 drifted from IaC
Mohammed Yahya avatar
Mohammed Yahya

looks like it does not recognize resources in modules, just resources outside modules

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

we’ve been using Fugue for this

Igor avatar

Looks like the modules issue is being fixed in the next release.

1
Igor avatar

I would love it if it could just grab all state files from the S3 location, instead of having to specify them one-by-one using the --from cli attribute

this1
Gerald avatar

Hi all, I’m Gerald, part of the driftctl team. Only just joined this slack and thrilled to see your discussions! If you don’t mind, I’ll update some of the posts with more recent information. Feel free to comment if needed

1
Gerald avatar

https://sweetops.slack.com/archives/CB6GHNLG0/p1612544451305500?thread_ts=1612543469.302600&cid=CB6GHNLG0 @Mohammed Yahya the tool now reads resources from modules within the tfstate (which was indeed not the case before the 0.5.0 release). So you should probably get significantly lower drift % if you retry it.

looks like it does not recognize resources in modules, just resources outside modules

Mohammed Yahya avatar
Mohammed Yahya

Thank you, I will give it a shot, and awesome tool BTW

1
Gerald avatar

https://sweetops.slack.com/archives/CB6GHNLG0/p1612801916339600?thread_ts=1612543469.302600&cid=CB6GHNLG0 @Igor it’s now possible to read all tfstate files within an S3 bucket, or within a folder if stored locally. Much more convenient

I would love it if it could just grab all state files from the S3 location, instead of having to specify them one-by-one using the --from cli attribute

1
Gerald avatar

BTW we had a recurring bug that caused the tool to hang; it seemed to take ages to run while it was basically just stuck. We finally fixed it in last week’s release, so I hope you’ll get a better experience now. (We still have issues with SSO causing freezes, though, as you can see in some of our issues on GH. Working on it.)

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Thanks @Gerald - welcome and glad you’re keeping us up to date.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Ping me and maybe we can get a demo on #office-hours

Gerald avatar

Thanks @Erik Osterman (Cloud Posse)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have a recommended guide for configuring AWS SSO (using Azure AD) with Terraform?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(we’ll have a terraform module coming out soon - but no specific instructions for azure)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Thanks guys

1
Mohammed Yahya avatar
Mohammed Yahya
aaratn/terraenv

Terraform & Terragrunt Version Manager. Contribute to aaratn/terraenv development by creating an account on GitHub.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Why is this needed vs tfenv tgenv?

aaratn avatar

terraenv author here, I created this tool to solve the problems below:

• single tool to do terraform and terragrunt version management

• available as pip, brew, docker image, osx and linux binaries ( tfenv and tgenv are bash scripts and not binaries )

1
1
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Why not simply work with them to create a single tool based on their existing code? I don’t have any issue with your tool, just trying to reduce duplicate work.

Patrick Jahns avatar
Patrick Jahns

Sometimes these tools get to their limitations and it is good to create a new tool. Personally speaking, I’ve now migrated from tfenv to tfswitch ( https://tfswitch.warrensbox.com/ ). The reason being that tfenv did not properly parse *.tf files to detect the terraform version from there for me/us. Additionally, tfenv added considerable overhead in time for executing terraform - tfswitch (in combination with direnv) works almost instantly to switch

TFSwitch

A command line tool to switch between different versions of terraform (install with homebrew and more)

Patrick Jahns avatar
Patrick Jahns

Quite often https://xkcd.com/927/ happens

Standardsattachment image

[Title text] “Fortunately, the charging one has been solved now that we’ve all standardized on mini-USB. Or is it micro-USB? Shit.”

2021-02-06

loren avatar
loren
03:45:56 PM

This could be handy, for generating minimal iam policies… https://github.com/iann0036/iamlive

1
1
1
Emmanuel Gelati avatar
Emmanuel Gelati

Hi, why do I need to use DynamoDB with the aws remote state?

msharma24 avatar
msharma24
How to manage Terraform state

A guide to file layout, isolation, and locking for Terraform projects

Emmanuel Gelati avatar
Emmanuel Gelati

But I could use Azure Blob Storage without any DB help and locking works

Emmanuel Gelati avatar
Emmanuel Gelati

is it not enough to create a lock file on s3?

msharma24 avatar
msharma24

I’m afraid I don’t have any knowledge of azure services. I have always used AWS, and with TF I’ve been using S3 as the remote backend for state files and dynamodb for locking.
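
For reference, the usual shape of that setup; the bucket, key, and table names here are placeholders, and the DynamoDB table just needs a LockID string partition key:

terraform {
  backend "s3" {
    bucket         = "my-tf-state-bucket"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking
    encrypt        = true
  }
}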

Zach avatar


This backend also supports state locking and consistency checking via native capabilities of Azure Blob Storage.

Zach avatar

that’s why

kskewes avatar
kskewes

Don’t talk too loud or the next version of the provider will require a lambda too troll

Alex Jurkiewicz avatar
Alex Jurkiewicz

Azure blob has a native “locking” concept, while S3 does not. That’s why the AWS backend uses a second service to implement homegrown locking

Alex Jurkiewicz avatar
Alex Jurkiewicz
Lease Blob (REST API) - Azure Storage

The Lease Blob operation creates and manages a lock on a blob for write and delete operations.

Ofek Solomon avatar
Ofek Solomon
07:46:13 AM

I also have this issue, is there any solution for this? thanks!

Hi, we are trying to update the “eks-cluster” module (version 0.32) and we started encountering this error when running terraform plan:

Error: Get "<http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth>": dial tcp [::1]:80: i/o timeout

We suspect it is because of the kubernetes provider version, which was upgraded recently, but in their docs we don’t see any breaking changes regarding this existing configuration:

provider "kubernetes" {
  token                  = join("", data.aws_eks_cluster_auth.eks.*.token)
  host                   = join("", data.aws_eks_cluster.eks.*.endpoint)
  cluster_ca_certificate = base64decode(join("", data.aws_eks_cluster.eks.*.certificate_authority.0.data))
}

Did anyone encounter this issue ? thanks

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Do you have var.apply_config_map_aws_auth = false?

uselessuseofcat avatar
uselessuseofcat

Hi, I have a CloudFormation template on which I lost 2 days trying to solve a problem, but I am a CF noob. I want to handle this on the Terraform side, where one value can be either a string or a list, depending on a true or false value.

For example:

false ? "test" : tolist(["test2"])

But I got an error:

The true and false result expressions must have consistent types. The given
expressions are string and list of string, respectively.

Is there any workaround for this?

Many thanks!

Jeremy G (Cloud Posse) avatar
Jeremy G (Cloud Posse)

Generally, no, Terraform uses moderately strict typing most of the time. There is very little you can do with a value that might be a string or a list. Usually with these options you use a list of strings and just have one string in the list.
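
A small sketch of that workaround: keep both branches of the conditional as lists so the types are consistent, and consumers always iterate a list (the condition variable here is hypothetical):

locals {
  # instead of: var.flag ? "test" : tolist(["test2"])  (inconsistent types)
  result = var.flag ? ["test"] : ["test2"]
}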

2021-02-07

RB avatar

There isn’t. Each variable has to be a single type

RB avatar

What are you trying to accomplish with this technique

uselessuseofcat avatar
uselessuseofcat

I have one value from CF template that can be either a string AWS::NoValue or a list depending on other values.

RB avatar

Oh i thought you were trying to do it in terraform

uselessuseofcat avatar
uselessuseofcat

Is there any way to omit a line in a Terraform template?

RB avatar

You can set it to null

charlesz avatar
charlesz

hi all, i have a question: i was asked to make a spot fleet that scales in/out according to a specific time. is this still applicable in the 0.12.x version of terraform? - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/spot_fleet_request#iam_fleet_role
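
Assuming the fleet’s capacity is what needs to change on a schedule, one approach (available with the 0.12-era AWS provider) is an Application Auto Scaling scheduled action against the spot fleet request’s TargetCapacity. A rough sketch, where the aws_spot_fleet_request named "example" is hypothetical:

resource "aws_appautoscaling_target" "spot_fleet" {
  service_namespace  = "ec2"
  resource_id        = "spot-fleet-request/${aws_spot_fleet_request.example.id}"
  scalable_dimension = "ec2:spot-fleet-request:TargetCapacity"
  min_capacity       = 2
  max_capacity       = 10
}

resource "aws_appautoscaling_scheduled_action" "scale_out_morning" {
  name               = "spot-fleet-scale-out"
  service_namespace  = aws_appautoscaling_target.spot_fleet.service_namespace
  resource_id        = aws_appautoscaling_target.spot_fleet.resource_id
  scalable_dimension = aws_appautoscaling_target.spot_fleet.scalable_dimension
  schedule           = "cron(0 8 * * ? *)" # scale out at 08:00 UTC

  scalable_target_action {
    min_capacity = 5
    max_capacity = 10
  }
}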

uselessuseofcat avatar
uselessuseofcat

Hi! When I’m loading template_file through data, how can I specify the list to iterate through? For example https://www.terraform.io/docs/language/functions/templatefile.html

I know that the templatefile function renders the template, but how can I implement that?

For example, this is where I specify template_file:

data "template_file" "cf" {
  template = "${file("${path.module}/templates/cf-asg.tpl")}"
  vars = {
    service_name = "${var.service_name}"
    subnets = join("\",\"", var.subnets)
    availability_zones = join("\",\"", var.availability_zones)
    lc_name = "${aws_launch_configuration.ecs_config_launch_config.name}"
    min_instances = "${var.min_instances}"
    max_instances = "${var.max_instances}"
    desired_instances = "${var.desired_instances}"
    asg_health_check_type = "${var.asg_health_check_type}"
    no_of_resource_signals = "${var.no_of_resource_signals}"
    #tgs = [local.tg]
    region_tag = var.region_tag
    env_tag = var.env_tag
    newrelic_infra_tag = var.newrelic_infra_tag
    purpose_tag = var.purpose_tag
    patch_group_tag = var.patch_group_tag
  }
}

This is where I’m loading it:

resource "aws_cloudformation_stack" "autoscaling_group" {
  name = "${var.service_name}-asg"

  template_body = data.template_file.cf.rendered

  depends_on = [aws_launch_configuration.ecs_config_launch_config]
}

And this is a part of cf-asg.tpl:

      MinSize: "${min_instances}"
      MaxSize: "${max_instances}"
      %{ for tg in tgs ~}
      TargetGroupARNs: ${tg}
      %{ endfor ~}

So, how can I specify the list tgs to iterate through?

templatefile - Functions - Configuration Language - Terraform by HashiCorp

The templatefile function reads the file at the given path and renders its content as a template.

loren avatar

Instead of doing this by terraform-templating the CF, can you write the CF to accept an input parameter and use a CF condition to set AWS::NoValue?

uselessuseofcat avatar
uselessuseofcat

No, because both values in the condition must exist, but in some cases I do not have a target group.

loren avatar

the CF parameter list-type accepts a comma-separated string. you can detect an “empty” list in a CF condition with something like this:

Conditions:
  UseTargetGroupArns: !Not
    - !Equals
      - !Join
        - ''
        - !Ref TargetGroupArns
      - ''
loren avatar

and then in the CF ASG resource:

      TargetGroupARNs: !If
        - UseTargetGroupArns
        - !Ref TargetGroupArns
        - !Ref 'AWS::NoValue'
loren avatar

on the TF side, you pass in either an empty string, or a comma-separated string. because it’s a string either way, the TF conditional syntax will work

uselessuseofcat avatar
uselessuseofcat

Thanks, loren, but it’s too late for that, I’ve managed to set it up via templates.

I find CF very hard and do not understand it at all.

uselessuseofcat avatar
uselessuseofcat

Thanks a lot!

loren avatar

the CF parameter for the target group looks like this:

  TargetGroupArns:
    Default: ''
    Description: >-
      Comma-separated string of Target Group ARNs to associate with the
      Autoscaling Group; conflicts with LoadBalancerNames
    Type: CommaDelimitedList
uselessuseofcat avatar
uselessuseofcat

Empty strings didn’t work for me for example, I’ve tried that

loren avatar

i guarantee it works, been doing this for years

loren avatar

i’m literally copying from a template i already have

uselessuseofcat avatar
uselessuseofcat

damn, wish I had this 2 days ago, spent my whole weekend on it. Are you also creating the ASG via CF from Terraform?

loren avatar

but i totally agree on CF being difficult and hard to understand, especially as the use case becomes more advanced

loren avatar

yes, the ASG is defined in CFN, for exactly the kind of use case you describe, where we want to use the CFN UpdatePolicy, which is a service-side feature of CFN that terraform on its own cannot implement…

loren avatar

here’s the template. pick out what you need, ignore what you don’t… feel free to ask questions if you need a hand… https://github.com/plus3it/terraform-aws-watchmaker/blob/master/modules/lx-autoscale/watchmaker-lx-autoscale.template.cfn.yaml

plus3it/terraform-aws-watchmaker

Terraform module for Watchmaker. Contribute to plus3it/terraform-aws-watchmaker development by creating an account on GitHub.

uselessuseofcat avatar
uselessuseofcat

This is gold, loren! Thanks, I’ll save this for the future. But it looks like the template works for my use case too!

I just need to sort out some evaluation, but it works with a little hard coding!

Thanks again

1
uselessuseofcat avatar
uselessuseofcat

I’ve managed to set it up with the template. Here’s part of the template file:

%{ for tg in tgs ~}
      TargetGroupARNs: [${tg}]
%{ endfor ~}

And in templatefile function I have:

tgs = try(tolist([internal_tg.*.arn[0]]), [])

So if there is no internal_tg, it will skip TargetGroupArns!

loren avatar

that works also… we made it a point to avoid templating in the CF, so folks could, if they want, just use the CF directly without terraform

uselessuseofcat avatar
uselessuseofcat

Lost 3 days. But, as I said, I find CF very hard to debug, and to be honest I am a little afraid of it. But I guess some people will use your module. I would have, if I had known.

RB avatar

data "template_file" "cf" is deprecated in favor of templatefile

RB avatar

why are you using a template file to dynamically create a cf stack using tf…

RB avatar

why not just create the resources purely in tf

uselessuseofcat avatar
uselessuseofcat

Let me explain man, since I’ve been doing it for 3 days straight

First of all, I am creating a CF stack for the ASG because I need rolling updates, and CF can bump the max number of instances on the fly

RB avatar

isn’t there a way to do that in pure tf ?

uselessuseofcat avatar
uselessuseofcat

nope man

uselessuseofcat avatar
uselessuseofcat

not yet

uselessuseofcat avatar
uselessuseofcat

and there is one stupid CF property that can be a list or the value “AWS::NoValue”, which tells CF to skip that property. But the thing is - in some cases I need to set a list, in some cases a string…

uselessuseofcat avatar
uselessuseofcat

so my last resort is a template: I can iterate the list and, if it’s empty, skip a line in the template file

RB avatar

oh man i didn’t know that

RB avatar

is there a module that does all this for you

RB avatar

cause that would be amazing

uselessuseofcat avatar
uselessuseofcat
Using Terraform for zero downtime updates of an Auto Scaling group in AWS

A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@endofcake

1
endofcake avatar
endofcake

I’m glad this old blog is still useful

1
uselessuseofcat avatar
uselessuseofcat

@endofcake saved my ass a year ago, but I wanted to improve it to have ALB health checks, and also to set the grace period to 300 secs

uselessuseofcat avatar
uselessuseofcat

Thanks!

1
uselessuseofcat avatar
uselessuseofcat

I already have that set up, but I want to have ALB healthchecks in place

Zach avatar

You might be better off doing a blue green swap of the ASGs then

Zach avatar

unless you have a hard requirement for the rolling update

Zach avatar

or another alternative, update the ASG with your new config but use the instance-refresh CLI command to do the rolling update

Chris Fowles avatar
Chris Fowles

Terraform supports instance refresh now

Chris Fowles avatar
Chris Fowles
Support ASG Instance Refresh · Issue #13785 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

2
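
For reference, the shape of that feature in the AWS provider (v3.22.0+): a rolling refresh is configured directly on the ASG, and kicks in when the launch configuration/template changes. Values here are illustrative:

resource "aws_autoscaling_group" "example" {
  name                 = "example-asg"
  min_size             = 2
  max_size             = 4
  availability_zones   = ["us-east-1a"]
  launch_configuration = aws_launch_configuration.example.name # hypothetical

  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 90
    }
  }
}
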
Bob avatar

Hi~ Anyone have experience/recommendations for keeping your Terraform code DRY? Like how Terragrunt does it, but using Terraform Cloud, Scalr, or Spacelift? We have a few environments we “promote” infrastructure changes to (dev -> test -> prod) and would like to get away from “copying” the same terraform code/modules. I notice env0 has support for terragrunt, but want to know what others have done

TIA!

Peter Huynh avatar
Peter Huynh

When I last looked at this, I ended up doing something similar to https://terraspace.cloud/ and the roles/profiles pattern from puppet. The general ideas are:

  • app (your tf code for your application)
  • stack (your environment)
  • config (it has the per-environment setup)
  • modules (shared modules across multiple projects)

The difference between app and modules for me is that the app defines the infrastructure specific to your application, whereas a module can be shared across multiple apps (for example a tagging module or a label module).

The stack contains the instantiation code for the app per environment. This gets duplicated across multiple environments, depending on the parameters passed.

I haven’t had much time since then to review this pattern, but hopefully it can help a bit.

Mohammed Yahya avatar
Mohammed Yahya

You still need to use modules, calling the same module multiple times with different values; IMHO that is not repeating yourself - it’s like calling the same function in your app code. I tried various ways to implement my IaC; all of them have pros and cons. See what fits your needs. I would start with the question: mono repo vs multiple repos for architecting my IaC?

Mohammed Yahya avatar
Mohammed Yahya

then you can choose the tools: like Terraform vs Terragrunt vs [env0 - scalr - spacelift - TFC]. Choosing the tool will affect how you lay out your repo/repos. Personally I would go with vanilla Terraform, and a stacks approach like:

  • App stack
  • Data stack
  • Network stack

I call it the micro stacks approach - for different envs my code is admittedly not fully DRY, but that gives me more control over my envs and that’s fine with me

Troy Taillefer avatar
Troy Taillefer
Terraform Workflow Best Practices at Scale

What is the optimal HashiCorp Terraform workflow as you get more teams within your organization to adopt it?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Patrick Jahns avatar
Patrick Jahns

@Mohammed Yahya - the approach you described reminds me of terraservices - https://www.hashicorp.com/resources/evolving-infrastructure-terraform-opencredo

Is it similar or am I misunderstanding your approach ?

5 Common Terraform Patterns—Evolving Your Infrastructure with Terraform

Nicki Watt, OpenCredo’s CTO, explains how her company uses HashiCorp’s stack—and particularly Terraform—to support its customers in moving to the world of CI/CD and DevOps.

1
Mohammed Yahya avatar
Mohammed Yahya

@Patrick Jahns the slides look very helpful for explaining how anyone starting with Terraform evolves into their own patterns. Also read this book, it’s awesome, I strongly recommend it https://infrastructure-as-code.com/book/

Book

Exploring better ways to build and manage cloud infrastructure

1
Patrick Jahns avatar
Patrick Jahns

Thanks for sharing - will add it into my reading list. Just thought your approach sounded similar to the terraservice approach

Mohammed Yahya avatar
Mohammed Yahya

Thanks, I’m glad you like it; actually my journey was similar to theirs

Patrick Jahns avatar
Patrick Jahns

I suppose we all go through different (similar) stages of learning - by being part of communities like these here I try to skip some of the learnings - only to find myself tipping over some of the pain points eventually at a different stage

1

2021-02-08

loren avatar

if i could beg for a favor and get some folks to the linked issues and the pr, i would truly be grateful… https://github.com/hashicorp/terraform-provider-aws/issues/4426#issuecomment-775504542

Feature request: Exclusive policy attachment list for users, groups, and roles · Issue #4426 · hashicorp/terraform-provider-aws

Terraform Version +$ terraform -v Terraform v0.11.7 + provider.aws v1.16.0 Affected Resource(s) aws_iam_user_policy_attachment aws_iam_group_policy_attachment aws_iam_role_policy_attachment Expecte…

5
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I may be misunderstanding, but isn’t this what aws_iam_policy_attachment does?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Specifically, if you look at the documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment

This means that even any users/roles/groups that have the attached policy via any other mechanism (including other Terraform resources) will have that attached policy revoked by this resource.
loren avatar

aws_iam_policy_attachment manages all attachments of a policy, but does not manage all the policies attached to a role. it’s flipped

loren avatar

i care more about the role than where the policies are attached. and being able to detect drift if someone comes along and attaches a new policy to the role and thereby changes its permission-set

Thomas Hoefkens avatar
Thomas Hoefkens

Hi everyone! I have a strange issue and wonder whether any of you have encountered it or managed to solve it. I deploy an EKS cluster with fargate profiles using terraform, and this works perfectly the first time round. Then I issue a TF destroy and all resources are gone, so far so good. Now, when applying the TF scripts again, with the same cluster name, the creation gets stuck on creating fargate profiles, as if something is hindering AWS from recreating the same fargate profile names (which have been correctly deleted by TF): module.eks.module.fargate.aws_eks_fargate_profile.this[“default”]: Still creating… [44m50s elapsed] Is this a bug, or is there a workaround for this? Often I can see that the profile got created for the cluster, yet TF is somehow not “seeing” that the creation is complete…

Alex Jurkiewicz avatar
Alex Jurkiewicz

you might need to run with trace logging on so you can see what API request/response data the AWS provider is sending. Perhaps there’s a bug and it’s not looking for the same resource you are seeing in the console

Alex Jurkiewicz avatar
Alex Jurkiewicz

please don’t double post questions in different channels. You can link to a message instead to consolidate responses

1

2021-02-09

Adnan avatar

Hi Everyone! Is anyone using porter.sh in prod? Specifically as a bridge between terraform and helm?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

is there a way in terraform-compliance to test outputs?

Bart Coddens avatar
Bart Coddens

Hi all, I am using this module: https://github.com/cloudposse/terraform-aws-tfstate-backend I would like to create this: terraform_state_file = "s3state/var.tier/terraform.tfstate" where var.tier is a variable. The state file should then be stored as: s3state/test/terraform.tfstate (the variable is tier=test)

cloudposse/terraform-aws-tfstate-backend

Terraform module that provision an S3 bucket to store the terraform.tfstate file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…

Andy avatar

If that variable is within a string you’ll need to use ${var.tier}:

terraform_state_file               = "s3state/${var.tier}/terraform.tfstate"

Bart Coddens avatar
Bart Coddens

thanks Andy, I will check

Bart Coddens avatar
Bart Coddens

wee that works: + “terraform_state_file” = “s3state/test/terraform.tfstate”

Bart Coddens avatar
Bart Coddens

thx a lot !

1
Bart Coddens avatar
Bart Coddens

With the above module the s3 state backend is configured properly, thanks all for this excellent module

Bart Coddens avatar
Bart Coddens

How do you manage multiple state files, do you generate the backend files by hand ?

gcw-sweetops avatar
gcw-sweetops

Is there a simple ‘how to’ on using terraform-aws-cloudfront-s3-cdn? The .tf under examples/complete doesn’t seem to run for me when I change the relevant parameters to match a brand new AWS setup (AWS hosted domain with R53). I haven’t been able to find one, but then again I’m fried

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best bet is to share the output for the error you’re getting

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

All our modules are continually tested with integration tests to verify they work

gcw-sweetops avatar
gcw-sweetops

I understand that. I’m bumping up against an issue of not knowing exactly what’s required to get a working setup. I set logging_enabled to “false” but it still complains of not being able to create the logging s3 bucket

gcw-sweetops avatar
gcw-sweetops

Thank you for the reply BTW

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Please share the literal example/error. E.g. https://sweetops.slack.com/archives/CUJPCP1K6/p1612902288011800

@Erik Osterman (Cloud Posse) Hi all,

I am getting the following from the ECS web app module using webhooks. I am guessing its coming from the webhooks module. It seems GitHub there are breaking changes with the GitHub provider.

Warning: Additional provider information from registry

The remote registry returned warnings for
registry.terraform.io/hashicorp/github:
- For users on Terraform 0.13 or greater, this provider has moved to
integrations/github. Please update your source in required_providers.



Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/github: no available releases match the given constraints ~> 2.8.0,
3.0.0
gcw-sweetops avatar
gcw-sweetops
Error: Error creating S3 bucket: BucketAlreadyExists: The requested bucket name - Pastebin.com

Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.

gcw-sweetops avatar
gcw-sweetops
gcw/foosa

Contribute to gcw/foosa development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So this is in no way a terraform problem. The bucket already exists… often this happens if you provision a root module without a state backend

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the example above doesn’t include a state backend, increasing the odds you’ll accidentally reprovision the same resource

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

HCP provides free state backends for terraform. That will be the easiest way to get up and running.

gcw-sweetops avatar
gcw-sweetops

Ok. Thank you

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

hot off the press: https://github.com/cloudposse/terraform-aws-sso

cc: @Mohammed Yahya

cloudposse/terraform-aws-sso

Terraform module to configure AWS Single Sign-On (SSO) - cloudposse/terraform-aws-sso

5
Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Now just need to figure out how to take our non-Terraform-managed-AWS-SSO configuration and migrate it

Mohammed Yahya avatar
Mohammed Yahya

@Erik Osterman (Cloud Posse) Awesome !! very clean and simple

Mohammed Yahya avatar
Mohammed Yahya

@Yoni Leitersdorf (Indeni Cloudrail) create the terraform templates and terraform import them. I did it last week, here is a sample script I used:

Mohammed Yahya avatar
Mohammed Yahya
terraform import module.permission_set_power_access.aws_ssoadmin_permission_set.this arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
terraform import module.permission_set_power_access.aws_ssoadmin_account_assignment.this\[0\] zzzzzzzzzzz-4edbad0a-1509-4f26-8876-aaaaaaaaaaaaa,GROUP,1111111111111111,AWS_ACCOUNT,arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
terraform import module.permission_set_power_access.aws_ssoadmin_account_assignment.this\[1\] zzzzzzzzzzz-4edbad0a-1509-4f26-8876-aaaaaaaaaaaaa,GROUP,222222222222,AWS_ACCOUNT,arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
terraform import module.permission_set_power_access.aws_ssoadmin_managed_policy_attachment.this\[0\] arn:aws:iam::aws:policy/PowerUserAccess,arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
1
Mike Martin avatar
Mike Martin

I took a slightly different approach when creating my module to combine assignments and permission sets all in one. https://github.com/glg-public/terraform-aws-single-sign-on

glg-public/terraform-aws-single-sign-on

Terraform module to provision AWS SSO permission sets, assignments, managed and inline policies. - glg-public/terraform-aws-single-sign-on

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt would it make sense to do this maybe in the root of our terraform-aws-sso module? … combining the submodules?

loren avatar

well that’s nifty, tailscale has a community terraform provider already… https://registry.terraform.io/providers/davidsbond/tailscale/latest/docs

1
Mohammed Yahya avatar
Mohammed Yahya

Thank you for sharing. I have been looking into an OpenVPN replacement; I took a look at Hashicorp Boundary, but I chose AWS Client VPN. This one looks awesome though, I will give it a shot

Andy avatar

@Mohammed Yahya out of interest what EC2 instance type do you use for OpenVPN? We’re using a t3.small with around 50-80 users connecting throughout the day. We’ve had latency issues reported over lockdown (!) but looking at network monitoring from the server, nothing stands out as an obvious cause. I’m considering just trying to increase the instance type to a t3.medium or t3.large to get better baseline network performance.

Mohammed Yahya avatar
Mohammed Yahya

@Andy I’m using the managed AWS VPN service, AWS Client VPN. Much better performance than ec2 openvpn, and more secure

Andy avatar

OK. Looks expensive though. Or are there smart ways to manage that?

# 100 users
# $0.05 per user per hour
# 8 working hours in a day
# 253 working days in a year
100 * 0.05 * 8 * 253 = $10,120 per year
Mohammed Yahya avatar
Mohammed Yahya

yes it is, but worth it for the simplicity, performance and integration with SSO like Okta or AWS SSO

1
Mohammed Yahya avatar
Mohammed Yahya

or check Tailscale, I just learned about it today

loren avatar

follow @Tailscale on twitter also, the devs are quite active

Matt Gowie avatar
Matt Gowie

@loren this is awesome. Stoked to see this — Thanks for sharing.

1
Matt Gowie avatar
Matt Gowie

Unfortunately for me, Tailscale got declined during a security review by a client’s auditing team, so I don’t get to use it with that client. But this will be sweet, because I was hoping to have a better way to manage those ACLs than just copy / pasta a json document around.

Matt Gowie avatar
Matt Gowie

One day.

loren avatar

bummer on the security review. did they at least say what they would have needed to accept it? i wonder if the tailscale team would be interested in that?

Matt Gowie avatar
Matt Gowie

Tailscale failed the security review because one of the founders refused to fill out a 90-question security review questionnaire. I talked with him about it and he just said they didn’t have the time, and my client’s team just refused to try after that and considered them too small of a company. It was a sad way for them to get rejected.

nnsense avatar
nnsense

Help!!

nnsense avatar
nnsense

I mean, hi everyone! ok, now, HELP!!

3
loren avatar
LMGTFY - Search Made Easy

For all those people who find it more convenient to bother you with their question rather than search it for themselves.

1
loren avatar

i kid of course! put the problem out there, this community is awesome

nnsense avatar
nnsense

AhhhA! Caution, I’ve spent the last 3 days between google and cloudposse git, I could bite

2
nnsense avatar
nnsense

I really hope somebody can help me with the cloudposse EKS cluster.. I really don’t know why… first time, it creates the cluster.. second apply…

Error: the server is currently unable to handle the request (get configmaps aws-auth)
nnsense avatar
nnsense

It’s SO annoying.. reading the TRACE, it seems to be trying to call localhost (?) which answers with a 503…

nnsense avatar
nnsense
HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 299
Content-Type: text/html; charset=iso-8859-1
Date: Wed, 10 Feb 2021 00:25:35 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips PHP/5.4.16

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
</body></html>
nnsense avatar
nnsense

All this seems related to the F. map_additional_iam_roles and also map_additional_iam_users (tried both)

nnsense avatar
nnsense

That unauthorized thing seems to be related to module.eks_cluster.kubernetes_config_map.aws_auth[0]

nnsense avatar
nnsense

This, if I set kubernetes_config_map_ignore_role_changes to true

nnsense avatar
nnsense

If I set it to false, then the module is instead module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes

nnsense avatar
nnsense

I’ve tried almost everything, but I really don’t understand why it runs perfectly the first time, but if I run a refresh, an apply, or a destroy, it throws that error and byebye

nnsense avatar
nnsense

Tried.

nnsense avatar
nnsense

I forgot to say, I’m not using any modules other than cloudposse eks cluster and node-groups

loren avatar

now i feel bad about speaking up so flippantly, because i know nothing about eks

nnsense avatar
nnsense

Oh don’t worry, me too

loren avatar

but i know one big difference between an initial apply and subsequent terraform actions is that terraform will actually attempt to describe the running resources and compare them to the config. so i’d guess it is that part of the execution that is throwing the error. i have no idea how to use that to help you though

nnsense avatar
nnsense

Yep, that’s correct, but I didn’t change anything between the two applies (or the apply and the refresh). The readme clearly says to change kubernetes_config_map_ignore_role_changes if I want to change the nodes or the users, but I don’t want to; it’s throwing that error even if I just run tf apply -auto-approve && tf refresh

nnsense avatar
nnsense

I really don’t know what to do, I’ve tried reading the code of the module, but it looks fine to me

nnsense avatar
nnsense

@Erik Osterman (Cloud Posse)… I know you know the answer…

loren avatar

i would recommend threading, at least, to give others a chance with their own questions…

2
nnsense avatar
nnsense

Oops.. You’re right

loren avatar

it’s hard to pick where to start a thread when a lot of convo happens in the channel, but you can be explicit about starting a thread, just post start thread here, or something. and sorry i can’t help more with the eks problem. there are quite a few eks users here though, so i do think someone will be able to help

loren avatar

there is also #kubernetes, and sometimes cross-posting can help, in moderation

nnsense avatar
nnsense

Cool, thanks I’ll keep this thread

nnsense avatar
nnsense

Yeah, I really hope somebody can help. I really don’t know what else to try; the next step would be to fork the module git and try to fix it, but I don’t want to end up using my repo, and anyway I have the feeling the fix is simple

nnsense avatar
nnsense

Let’s see in the kubernetes channel

nnsense avatar
nnsense

Why not bombard everyone with my problems, after all

loren avatar

it is very easy to fork and use your own work with git:// sources, so that is very viable

nnsense avatar
nnsense

For testing yes, to use it at work not so much

loren avatar

and also sets you up to pr the fix, if you figure out it is a bug upstream!

loren avatar

create a work org, and fork it there, then use that for work

nnsense avatar
nnsense

Trust me, If I don’t find an easy fix, I will do exactly that

nnsense avatar
nnsense

I need these modules to work, I’m not going to rewrite the whole thing to change them

Mr.Devops avatar
Mr.Devops

hello - has anyone come up with a solution to use a list of instance ids in the target_id for the resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment#target_id

I find it annoying to have to create multiple lb_target_group_attachment resources, one for every instance

2021-02-10

Victor Hugo dos Santos avatar
Victor Hugo dos Santos

Hi… Is the aws-ssm-iam-role module supposed to work with terraform 0.12?

cloudposse/terraform-aws-ssm-iam-role

Terraform module to provision an IAM role with configurable permissions to access SSM Parameter Store - cloudposse/terraform-aws-ssm-iam-role

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

modules without automated tests have not been upgraded


Victor Hugo dos Santos avatar
Victor Hugo dos Santos

I’m getting this warning: Quoted type constraints are deprecated

on .terraform/modules/ssm_role.label/variables.tf line 19, in variable "delimiter": 19: type = "string"

Terraform 0.11 and earlier required type constraints to be given in quotes, but that form is now deprecated and will be removed in a future version of Terraform. To silence this warning, remove the quotes around "string".

(and 13 more similar warnings elsewhere) But it looks like this module isn’t updated…
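
The warning points at the old 0.11-style quoted type; the fix in the module source is dropping the quotes, which is the 0.12+ syntax. A sketch of the corrected variable (the default shown is an assumption):

variable "delimiter" {
  type    = string # was: type = "string" (0.11-style, deprecated)
  default = "-"
}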

Patrick Doyle avatar
Patrick Doyle

Hello. I’m trying to use the CIS config rules module but getting an error that no source URL was returned when running terraform init with the latest version (0.14.6). I am using the URL defined in the example, https://github.com/cloudposse/terraform-aws-config.git//modules/cis-1-2-rules?ref=master. The module URL seems correct based on the terraform docs, so I’m not sure if this is an issue with the repo or with terraform…

cloudposse/terraform-aws-config

This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt

matt avatar

I would suggest using the Terraform module syntax to simplify…

matt avatar
module "config_cis-1-2-rules" {
  source  = "cloudposse/config/aws//modules/cis-1-2-rules"
  version = "0.7.2"
  # insert the 11 required variables here
}
Patrick Doyle avatar
Patrick Doyle

Awesome that works, thanks!

Release notes from terraform avatar
Release notes from terraform
06:14:29 PM

v0.15.0-alpha20210210 0.15.0 (Unreleased) BREAKING CHANGES:

The list and map functions, both of which were deprecated since Terraform v0.12, are now removed. You can replace uses of these functions with tolist([…]) and tomap({…}) respectively. (#26818)

Terraform now requires UTF-8 character encoding and virtual terminal support when running on…

lang/funcs: Remove the deprecated "list" and "map" functions by apparentlymart · Pull Request #26818 · hashicorp/terraform

Prior to Terraform 0.12 these two functions were the only way to construct literal lists and maps (respectively) in HIL expressions. Terraform 0.12, by switching to HCL 2, introduced first-class sy…

1
Mr.Devops avatar
Mr.Devops
07:16:28 PM

please anyone?

hello - has anyone come up with a solution to use a list of instance ids in the target_id for the resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment#target_id

I find it annoying to have to create multiple lb_target_group_attachment resources, one for every instance

loren avatar

use for_each? still technically multiple resources, but you don’t have to spell them out individually

Mr.Devops avatar
Mr.Devops

thx @loren i will check this out.

Mr.Devops avatar
Mr.Devops

but wouldn’t it fail, since target_id only accepts a single value (string)?

loren avatar

with for_each, the entire resource is duplicated, and the attribute remains a string:

resource "aws_lb_target_group_attachment" "test" {
  for_each = toset([<your list of instance IDs>])

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = each.key
}
Mr.Devops avatar
Mr.Devops

i see

Mr.Devops avatar
Mr.Devops

thx for the knowledge

loren avatar

if you are creating the instances in the same state, do not use the instance ID in the for_each expression. instead use an identifier that maps back to each instance:

resource "aws_instance" "test" {
  for_each = toset(["foo", "bar"])

  ...
}

resource "aws_lb_target_group_attachment" "test" {
  for_each = toset(["foo", "bar"])

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = aws_instance.test[each.key].id
}

note how the for_each expression is using the same keys for both resources, and how in the attachment we index into the instance resource object

Mr.Devops avatar
Mr.Devops

I originally attempted to use the data source below and feed that into the target_id. i guess I can still do this and feed that into the for_each?

data "aws_instances" "test" {
  instance_tags = {
    env = "stage"
  }

  filter {
    name   = "tag:name"
    values = "xyz*"
  }
loren avatar

technically data sources are ok, as long as they do not themselves depend on resources created in the same tfstate
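
A minimal sketch of that combination, assuming the data source above: aws_instances exposes an ids attribute, which is known at plan time as long as the instances already exist, so it is safe to feed into for_each:

resource "aws_lb_target_group_attachment" "test" {
  # one attachment per instance ID resolved by the data source
  for_each = toset(data.aws_instances.test.ids)

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = each.key
}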

Mr.Devops avatar
Mr.Devops

got it

Mr.Devops avatar
Mr.Devops

very helpful Loren thx you!

1
Asis avatar

Our team runs terragrunt modules locally. What are the best solutions/practices to run modules in a more unified pattern? Note: we have an S3 bucket + DynamoDB for state locking

Mohammed Yahya avatar
Mohammed Yahya

run it in CICD

Mohammed Yahya avatar
Mohammed Yahya

add security checks to the pipeline also

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

also try #terragrunt

Asis avatar

@Erik Osterman (Cloud Posse) thanks

uselessuseofcat avatar
uselessuseofcat

Is someone still using this Lambda https://registry.terraform.io/modules/blinkist/airship-ecs-instance-draining/aws/latest?

It looks like it’s not working anymore :disappointed:

Lambda logs look something like this: Event needs-retry.autoscaling.CompleteLifecycleAction: calling handler <botocore.retryhandler.RetryHandler object at 0x7fe662775b10> and Event request-created.autoscaling.CompleteLifecycleAction: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fe6627a99d0>>

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

See #airship @maarten

johntellsall avatar
johntellsall

This tool visualizes Terraform state files! Has anyone played with Pluralith? https://www.pluralith.com/

3

2021-02-11

Mohammed Yahya avatar
Mohammed Yahya
Provision AWS infrastructure using Terraform (By HashiCorp): an example of running Amazon ECS tasks on AWS Fargate | Amazon Web Services

AWS Fargate is a serverless compute engine that supports several common container use cases, like running micro-services architecture applications, batch processing, machine learning applications, and migrating on premise applications to the cloud without having to manage servers or clusters of Amazon EC2 instances. AWS customers have a choice of fully managed container services, including […]

1
Balazs Varga avatar
Balazs Varga

hello, can I ask about vault? Is there a way to autogenerate missing passwords and store them in Vault, so I don’t need to provide them from helm/helmfile?

uselessuseofcat avatar
uselessuseofcat

Hi, I have the rolling update set with Terraform and CF for ECS clusters. This is how it works:

  • I have ECS cluster behind ALB
  • When there is an AMI change, Terraform applies it
  • ASG, which was created with a CloudFormation template from Terraform, adds a new instance (this was not possible with the TF module)

Here it becomes funky:

  • Target group sees the status of the old instance as “initial draining” 30 seconds after I run terraform apply
  • Healthchecks are failing because, of course, the container on my new EC2 instance is not started yet and the Target Group sees it as unhealthy, but it doesn’t continue to serve traffic from the old instance.
  • Then I get a bunch of 503s, and then 502s, until the container on the new instance is up

These parts are ok:

  • I have Lambda function that drains ECS containers
  • After draining finishes, the instance is killed

This worked before, when I had EC2 checks on ASG. Now I want to use TargetGroupArns to check HTTP and to see if I’ll get 200 and if application is really running.

Is there any workaround on this?

Like to set draining of instances with a delay of few minutes?

andrewAtX82 avatar
andrewAtX82

Hi guys, I was looking through the terraform-aws-ses-lambda-forwarder code as I was intrigued to see a system close to one that we devised. I see that listed under the limitations is the use of a verified domain as the sender. We use SRS to forward email without breaking SPF. I’ve had success using senrews to do SRS0 and SRS1 rewrites.

Another thing to note is the additional cleanup of the email. SES very loosely accepts emails, but is very strict with what it sends out. You will need to clean up duplicate headers, remove DKIM signatures and return-paths, etc. when forwarding. The aws-lambda-ses-forwarder has some problems with sending bounce messages and a host of other minor bugs. Just a heads up.

cloudposse/terraform-aws-ses-lambda-forwarder

This is a terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the aws-lambda-ses-forwarder NPM module. - cloudposse/terraform-aws-ses-lambda-forwarder

Sender Rewriting Scheme

For a mail transfer agent (MTA), the Sender Rewriting Scheme (SRS) is a scheme for rewriting the envelope sender address of an email message, in view of remailing it. In this context, remailing is a kind of email forwarding. SRS was devised in order to forward email without breaking the Sender Policy Framework (SPF), in 2003.

Sender Policy Framework

Sender Policy Framework (SPF) is an email authentication method designed to detect forging sender addresses during the delivery of the email. SPF alone, though, is limited to detecting a forged sender claim in the envelope of the email, which is used when the mail gets bounced. Only in combination with DMARC can it be used to detect the forging of the visible sender in emails (email spoofing), a technique often used in phishing and email spam. SPF allows the receiving mail server to check during mail delivery that a mail claiming to come from a specific domain is submitted by an IP address authorized by that domain’s administrators. The list of authorized sending hosts and IP addresses for a domain is published in the DNS records for that domain. Sender Policy Framework is defined in RFC 7208 dated April 2014 as a “proposed standard”.

senrews

Sender Rewriting Scheme module for emails

arithmetric/aws-lambda-ses-forwarder

Serverless email forwarding using AWS Lambda and SES - arithmetric/aws-lambda-ses-forwarder

Vincent Sheffer avatar
Vincent Sheffer

Can I get some guidance on the difference between terraform-aws-eks-workers and terraform-aws-eks-node-group ? They both seem very similar and both are actively being maintained. When should we use one over the other?

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

one is considered self-managed and uses ASGs the other is for AWS managed node pools


Vincent Sheffer avatar
Vincent Sheffer

Thanks, got it. Despite my comment, I prefer to use workers, but the nodes aren’t joining the cluster. I’ve tracked it down to this error:

Tag "KubernetesCluster" nor "[kubernetes.io/cluster/](http://kubernetes.io/cluster/)..." not found; Kubernetes may behave unexpectedly

Vincent Sheffer avatar
Vincent Sheffer

The “kubernetes.io/cluster/${var.cluster_name}” = “owned”

tag is not propagating to my nodes.

Is this a bug or user error?

nnsense avatar
nnsense

@Erik Osterman (Cloud Posse) QQ: which module would you advise adding if I need autoscaling with node-group like the workers module does? https://github.com/cloudposse/terraform-aws-ec2-autoscale-group ? Apparently the node-group module isn’t creating a scaling policy, or am I misunderstanding how that works?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

autoscaling in kubernetes requires a controller. Doesn’t matter which node group flavor.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

look up the aws kubernetes cluster autoscaler

nnsense avatar
nnsense

Thanks for taking the time to answer :slightly_smiling_face: I will check that. Anyway, the issue was my fault in the end: I deployed the autoscaler using the Helm chart and didn’t set the region
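
For reference, a minimal sketch of that deployment via the Terraform helm provider (chart values per the upstream cluster-autoscaler chart; the variables are assumptions):

resource "helm_release" "cluster_autoscaler" {
  name       = "cluster-autoscaler"
  repository = "https://kubernetes.github.io/autoscaler"
  chart      = "cluster-autoscaler"
  namespace  = "kube-system"

  set {
    name  = "autoDiscovery.clusterName"
    value = var.cluster_name
  }

  # forgetting this setting was the root cause above
  set {
    name  = "awsRegion"
    value = var.region
  }
}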

Vincent Sheffer avatar
Vincent Sheffer

I’ve tried both. eks-node-group seems to work better for me, but wondering what the experience is like for others.

jose.amengual avatar
jose.amengual

is this correct?

dynamic "custom_header" {
        for_each = lookup(origin.value, "custom_header", [])
        content {
          name  = custom_header.value.name
          value = custom_header.value.value
        }
      }
jose.amengual avatar
jose.amengual

the question is more whether custom_header.value.value might be using a reserved keyword?

jose.amengual avatar
jose.amengual

I wonder if this has to be something like custom_header.value.custom_header_value or something like that

jose.amengual avatar
jose.amengual

This is tf 0.12.24

jose.amengual avatar
jose.amengual

I figured it out

jose.amengual avatar
jose.amengual

the other items in the list need to have all the values set
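
In other words, a hypothetical origin input for illustration: every object in the custom_header list has to carry both keys, since custom_header.value.name and custom_header.value.value are looked up on each element:

custom_header = [
  { name = "X-Env",   value = "prod" },
  { name = "X-Owner", value = "platform" },
]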

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Cloud automation startup Spacelift raises $6M Series A led by Blossom Capital – TechCrunch

Spacelift, a startup that automates the management of cloud infrastructure, has raised $6 million in a Series A funding round led by London’s Blossom Capital. Polish fund Inovo Venture Partners and Hoxton Ventures are also investing. The Polish and U.S.-based startup is taking advantage of the oppo…

5
1
2
Paweł Hytry - Spacelift avatar
Paweł Hytry - Spacelift

it’s @marcinw


Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

eek thanks!

Mohammed Yahya avatar
Mohammed Yahya

https://www.manatee.app/

Manatee alerts you the instant your infrastructure drifts from Terraform. It's free to use, supports all major clouds, and takes minutes to set up.
1
Marcin Brański avatar
Marcin Brański

As much as I like such an idea, I think I could never trust a SaaS with RO credentials to my whole AWS account. This could quickly go wrong and be exploited. Are there any open-source solutions like this?

this1
Mohammed Yahya avatar
Mohammed Yahya

I agree, I tested it, it’s early alpha, check driftctl

Mohammed Yahya avatar
Mohammed Yahya
How to catch your infrastructure drift in minutes using driftctl.

Learn how to use driftctl in a real-life environment, with multiple Terraform states and output filtering.

2021-02-12

Mohammed Yahya avatar
Mohammed Yahya

one way to keep secrets out of your state file https://secrethub.io/docs/guides/terraform/

Secrets Management for Terraform - Documentation - SecretHub

A step-by-step guide to manage secrets in Terraform.

Patrick Jahns avatar
Patrick Jahns

what’s the difference to using a data source with e.g. SSM Parameter Store?
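
For comparison, the data-source approach mentioned here looks roughly like this (the parameter name is hypothetical); note that the fetched value still ends up in the state file, which is exactly what the SecretHub approach tries to avoid:

data "aws_ssm_parameter" "db_password" {
  name = "/myapp/prod/db_password"
}

resource "aws_db_instance" "example" {
  # ...
  password = data.aws_ssm_parameter.db_password.value
}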


Tim Schwenke avatar
Tim Schwenke

Are the Terraform modules https://github.com/cloudposse/terraform-null-label and https://github.com/cloudposse/terraform-terraform-label aimed at usage in any Terraform module or specifically made for Cloud Posse modules? I’m asking this because the docs mention [context.tf](http://context.tf) files that are part of all Cloud Posse modules that use terraform-null-label

cloudposse/terraform-null-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label

cloudposse/terraform-terraform-label

Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label

1
mfridh avatar

I use null-label and [context.tf](http://context.tf) also in some non-cloudposse things where it makes sense to be able to pass just that (the full context) between various modules. It’s neat.


Tim Schwenke avatar
Tim Schwenke

Ah ok. But you can also just use it without [context.tf](http://context.tf) , right? Or does that not make any sense? Though in that case one would have to define stuff like the namespace and so on as variables manually

mfridh avatar

I add [context.tf](http://context.tf) to every module I want to be able to accept incoming context

It does both the variable + output additions all-in-one, which is the neat thing about it.

1
Tim Schwenke avatar
Tim Schwenke

Alright, thanks for the help

mfridh avatar

This probably doesn’t mean much to you but an example:

module "db_context" {
  source = "../db-context"

  kubernetes_cluster = var.kubernetes_cluster
  cluster_identifier = var.cluster_identifier
}

module "service_context" {
  source = "../service-context"

  context            = module.this.context
  delimiter          = "_"
  kubernetes_cluster = var.kubernetes_cluster
}

resource "random_password" "default" {
  length  = 32
  special = false
}

resource "mysql_database" "default" {
  name = module.service_context.id
}

# ...
1
mfridh avatar

all of the involved modules have the [context.tf](http://context.tf) copied in. Makes it really convenient.

Tim Schwenke avatar
Tim Schwenke

I was asking because I’m setting up the infrastructure for a product where every env/stage has its own account, and I was wondering if I should adapt my TF code to use null-labels. It kind of becomes redundant to still have the environment in the ID values if everything is already separated by account.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
SweetOps Slack Archive

SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the [context.tf](http://context.tf) pattern is a game changer for working with a lot of modules.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

We’re probably at or near the point where we can sunset terraform-terraform-label. I believe 100% of our HCL2 modules use terraform-null-label now, so we can use the [context.tf](http://context.tf) pattern. @Maxim Mironenko (Cloud Posse)?

Mohammed Yahya avatar
Mohammed Yahya

terraform-provider-aws v3.28.0

  • +7 NEW FEATURES including aws_securityhub_organization_admin_account
  • +35 NEW ENHANCEMENTS
  • +10 BUG FIXES

https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.28.0

Release v3.28.0 · hashicorp/terraform-provider-aws

FEATURES: New Data Source: aws_cloudfront_cache_policy (#17336) New Resource: aws_cloudfront_cache_policy (#17336) New Resource: aws_cloudfront_realtime_log_config (#14974) New Resource: aws_confi…

Mohammed Yahya avatar
Mohammed Yahya
Release v3.28.0 · hashicorp/terraform-provider-aws

FEATURES: New Data Source: aws_cloudfront_cache_policy (#17336) New Resource: aws_cloudfront_cache_policy (#17336) New Resource: aws_cloudfront_realtime_log_config (#14974) New Resource: aws_confi…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@matt heads up

2021-02-13

Mohammed Yahya avatar
Mohammed Yahya
CloudGram

Draw architecture diagrams in the browser declaratively.

2
1
Bob avatar

Hello guys, this might have been asked before, but what criteria do you guys use when evaluating if something needs to be created as a module?

I recently joined a company with 6 cloud engineers who have been discussing maturing their terraform deployment, and modules have been brought up. The legacy engineers wanted to create a module for everything, even simple ones, for example Azure resource groups, and the arguments were:

  1. Takes me 10 mins to write it anyway
  2. I can make it accept a comma-delimited name, and it creates multiple resource groups for me
  3. If you want to create 1 resource group, the module can handle it anyway
  4. I can ask for required tags on the resource groups, and I’m sure we’re going to need something else on those resource groups in the future

Our goal is to eventually allow our app dev teams to create their terraform code to deploy the infrastructure for their apps. They originally managed the deployment by creating standalone deployments for each resource - like 1 deployment for a resource group, 1 for SQL PaaS, 1 for a storage account - all separate repositories and “pipelines”. We would like to move to more application-based repositories that contain all the terraform code/infrastructure needed for the said application (shared services infrastructure like AKS will be separately managed)

I feel this is a case of over-engineering/YAGNI, but being new, I may be biased. I don’t feel simple/standalone terraform resources should have another wrapper (module) on top of them. Is there a compelling reason why this pattern can bite us in the future (aka a very bad idea)?

Zach avatar

• does it create multiple resources that all work together to create a final “thing”

• am I going to use this again

• does it need standardization of names, tags, etc

2
Alex Jurkiewicz avatar
Alex Jurkiewicz

Similar logic to converting code to a library. The first time, write it in-place, hardcoded. The second time, copy and paste. The third time, consider common code.

5
kskewes avatar
kskewes

If we need to change this later, what are the extension points and possible migration options (tfstate?)

kskewes avatar
kskewes

We haven’t wrapped any Cloud Posse modules but considered it… until for_each for modules came along, so now we definitely won’t.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)


Takes me 10 mins to write it anyway

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if it takes 10 mins, then there is no need for such a module

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

a good useful module takes a day to write + examples, tests, docs etc.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

and since modules are supposed to be reusable in many projects by many people, there is no way around that ^

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

otherwise it will create more problems than it solves

loren avatar

Modules give you an opportunity to write tests and version smaller components of your infrastructure. I feel that is very valuable. But I would recommend reusing high quality community modules as much as possible. As @Andriy Knysh (Cloud Posse) mentions, maintaining good modules takes a fair amount of work, and specialization/experience in some obscure terraform details that may not add business value for you

1
Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

yes, it’s similar to any other programming language. In your own code, you use functions to aggregate some common functionality and make the rest of the code DRY(er). But functions in a public library are completely diff things, they are for public consumption and everything else matters even more than the code itself (docs, examples, tests, tutorials, etc.)

Alex Jurkiewicz avatar
Alex Jurkiewicz


If we need to change this later, what are the extension points and possible migration options (tfstate?)
Well, there are good manual escape hatches for converting a resource from manually managed to part of a repo, eg terraform state mv. The workflow looks like this:

  1. Migrate your TF code from aws_rds_cluster resources to cloudposse-rds-cluster module
  2. Run terraform plan and see where your RDS resources get shifted, eg from aws_rds_cluster.main to module.rds.aws_rds_cluster.primary[0]
  3. Run terraform state mv $old $new (sketched in command form below) and then re-run terraform plan to see how many changes the module still wants to make. You’ll often find modules want to change things that require a rebuild, like the name or other important stuff. If you own the module it’s easy to add lifecycle directives to ignore changes in name. If you don’t own the module, this brings me to my more important point:

It’s not always worth migrating existing pre-module uses to use the module, unless the fit is 100%. Nothing is worse than an in-house module with 100 options each used by a single consumer.

2
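
Step 3 in command form (the addresses are the hypothetical ones from the example above):

terraform state mv 'aws_rds_cluster.main' 'module.rds.aws_rds_cluster.primary[0]'
terraform plan   # re-check what the module still wants to change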
2021-02-14

2021-02-15

Yashodhan Ghadge avatar
Yashodhan Ghadge

hey guys! I’ve got a VPC that some other team has made via terraform. Can I define a vpc module and pass the vpc id in to it to add a few more subnets?

roth.andy avatar
roth.andy

Use one of the subnet modules

roth.andy avatar
roth.andy

A vpc module will make a new vpc

Yashodhan Ghadge avatar
Yashodhan Ghadge

Oh I never thought of that

Yashodhan Ghadge avatar
Yashodhan Ghadge

Let me try to use the subnet modules

Yashodhan Ghadge avatar
Yashodhan Ghadge
cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

cloudposse/terraform-aws-named-subnets

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

Yashodhan Ghadge avatar
Yashodhan Ghadge

as far as I can tell, they are identical to a large degree

Yashodhan Ghadge avatar
Yashodhan Ghadge

which one should I use?

roth.andy avatar
roth.andy

Use whichever fits your use case best. I usually use the dynamic one

Yashodhan Ghadge avatar
Yashodhan Ghadge

ah got it thanks

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

unrelated: @roth.andy do you think we should create a terraform-aws-static-subnets module for manually defining subnets?

roth.andy avatar
roth.andy

Is that not what the named subnets module does?

roth.andy avatar
roth.andy

Idk I’ve never used it

2021-02-16

Reinholds Zviedris avatar
Reinholds Zviedris
09:44:46 AM

Maybe someone here has encountered something like this?

Hey all! Having an issue with running terraform init / terraform plan as a service account on Google Cloud. It has the necessary rights for the backend bucket where state is stored. I have authenticated against GCP with the SA key and the account is set as default. gcloud auth list output:

                       Credentialed Accounts
ACTIVE  ACCOUNT
*       [email protected]
        [[email protected]](mailto:[email protected])

gcloud config list output:

[compute]
region = us-east1
zone = us-east1-d
[core]
account = [email protected]
disable_usage_reporting = True
project = project

Your active configuration is: [default]

When I run terraform init / terraform plan, it runs using [[email protected]](mailto:[email protected]) instead of the SA (which I can see from the activity log about infra bucket access in the GCP console). Has anyone had something similar and could advise what to do and where to proceed? Any help would be appreciated. I already tried a couple of suggestions I found on the net, but no luck.

Reinholds Zviedris avatar
Reinholds Zviedris

Solved - using GOOGLE_APPLICATION_CREDENTIALS env variable pointing to SA key file.

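In command form (the key path is hypothetical):

export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/terraform-sa.json"
terraform init
terraform plan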

1
itsmebharat.gcp avatar
itsmebharat.gcp

Hi, I have an EC2 cluster, and there are multiple Name tags associated with the cluster instances. I want to fetch these Name tags and pass them to a module that accepts a list of EC2 instances. Any suggestions?

loren avatar

Congrats to the BridgeCrew folks? “Prisma Cloud Shifts Left With Proposed Acquisition of Bridgecrew” https://blog.paloaltonetworks.com/2021/02/prisma-cloud-bridgecrew/

Prisma Cloud Shifts Left With Proposed Acquisition of Bridgecrew

The proposed acquisition of Bridgecrew will expand Prisma Cloud with leading Infrastructure as Code (IaC) security.

4
Mohammed Yahya avatar
Mohammed Yahya
06:57:20 PM

no freebies


barak avatar

@Mohammed Yahya No worries. Palo Alto Networks will continue to invest in Bridgecrew’s open-source initiatives as part of its ongoing commitment to the DevOps community. OSS tools are here to stay and grow, now at an even faster pace.

3
1
Mohammed Yahya avatar
Mohammed Yahya

@barak Thanks, since I’m heavily using it.

barak avatar

Great to hear. I’ll continue to maintain and create those. Now under the PANW umbrella.

2
kgib avatar

I’d like to use the EKS module to deploy EKS with workers in private subnet

kgib avatar

what is the simplest method to accomplish?

melissa Jenner avatar
melissa Jenner

I used terragrunt (terraform) to provision a VPC a while ago. But today, when I re-ran the script, I got: “Remote state S3 bucket blue-green-terraform-state does not exist or you don’t have permissions to access it.” I logged in to the AWS console, and the S3 bucket blue-green-terraform-state is there. I have no clue. Can someone help?

$ terragrunt init
[terragrunt] [/depot/infra/dev/Oregon/nsm/green/vpc] Running command: terraform –version
[terragrunt] Terraform version: 0.13.5
[terragrunt] Reading Terragrunt config file at /depot/infra/dev/Oregon/nsm/green/vpc/terragrunt.hcl
[terragrunt] Initializing remote state for the s3 backend
[terragrunt] Remote state S3 bucket blue-green-terraform-state does not exist or you don’t have permissions to access it. Would you like Terragrunt to create it? (y/n)

$ cat terragrunt.hcl 
remote_state {
  backend = "s3"

  config = {
    encrypt        = false
    bucket         = "blue-green-terraform-state"
    key            = "infra/Oregon/green/vpc/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "green-vpc-lock-table"
  }
}

$ env | grep AWS
AWS_SECRET_ACCESS_KEY=#####################
AWS_ACCESS_KEY_ID=############
loren avatar

doublecheck everything around the credential and bucket… for example, is the region correct? is the access key disabled/deleted? does aws sts get-caller-identity return the expected account/user info?

melissa Jenner avatar
melissa Jenner

$ aws sts get-caller-identity

Could not connect to the endpoint URL: “https://sts.amazonaws.com/”

AWS Identity & Access Management - Amazon Web Services

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.

melissa Jenner avatar
melissa Jenner

What does it mean?

loren avatar

sounds like your networking is broken

melissa Jenner avatar
melissa Jenner

Thank you.

loren avatar

or your ca bundle is out of date

melissa Jenner avatar
melissa Jenner

What is ca bundle?

melissa Jenner avatar
melissa Jenner

And how to update if it is out dated?

loren avatar

ca is certificate authority. basically, your system may not “trust” the remote endpoint

melissa Jenner avatar
melissa Jenner

Oh

loren avatar

how to update it depends on your system/platform

loren avatar

you can add the --debug flag to get more details: aws sts get-caller-identity --debug

melissa Jenner avatar
melissa Jenner

The “aws sts get-caller-identity” command returned a proper value, but it still complains the S3 bucket does not exist.

$ aws sts get-caller-identity
{
    "UserId": "#################",
    "Account": "############",
    "Arn": "arn:aws:iam::############:user/albert"
}

$ terragrunt init
[terragrunt] Terraform version: 0.13.5
[terragrunt] Reading Terragrunt config file at depot/infra/dev/Oregon/nsm/blue/vpc/terragrunt.hcl
[terragrunt] Initializing remote state for the s3 backend
[terragrunt] Remote state S3 bucket blue-green-terraform-state does not exist or you don’t have permissions to access it. Would you like Terragrunt to create it? (y/n)

Any clue?

melissa Jenner avatar
melissa Jenner

I found out. The command “aws sts get-caller-identity” returned the wrong ID. It is not the ID I use.

melissa Jenner avatar
melissa Jenner

I have no clue how that happens.

loren avatar

because you have the wrong access/secret key exported into the environment

loren avatar

an access/secret key is tied incontrovertibly to a specific iam user and account. wrong key, wrong user, wrong account

melissa Jenner avatar
melissa Jenner

I use Ubuntu. I exported the access key ID and secret access key:

AWS_SECRET_ACCESS_KEY=############################
AWS_ACCESS_KEY_ID=#################AFA

aws sts get-caller-identity
{
    "UserId": "###########BAP",
    "Account": "9999999934618",
    "Arn": "arn:aws:iam::9999999934618:user/albert"
}

But aws sts get-caller-identity gives the wrong ID.

loren avatar

then that key is not the one you think it is

2021-02-17

Thomas Windell avatar
Thomas Windell

Hi there! Does anyone know if there would be a way to use this module but have more than one service and task definition? Having multiple services seems like a common architecture with AWS ECS - is there perhaps another module that is more suitable?

cloudposse/terraform-aws-ecs-alb-service-task

Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task

RB avatar

The container_definition_json var has this description
A string containing a JSON-encoded array of container definitions
(“[{ “name”: “container1”, … }, { “name”: “container2”, … }]”).
See AWS docs, https://github.com/cloudposse/terraform-aws-ecs-container-definition, or
Terraform docs

cloudposse/terraform-aws-ecs-container-definition

Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition

RB avatar

So the answer is yes, you can use multiple json documents in this module

Thomas Windell avatar
Thomas Windell

@RB Thanks for your response!

Although I am a bit confused - my understanding is that the container_definition_json var would allow multiple containers to be created for the task definition, which is not what I am asking.

A container definition is a subset of a task definition though. And then a service contains a task definition.

ContainerDefinition - Amazon Elastic Container Service

Container definitions are used in task definitions to describe the different containers that are launched as part of a task.

Service - Amazon Elastic Container Service

Details on a service within a cluster

TaskDefinition - Amazon Elastic Container Service

The details of a task definition which describes the container and volume definitions of an Amazon Elastic Container Service task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task.

RB avatar

Oh woops, my bad

RB avatar

For multiple services with different task definitions, wouldn’t you simply use another reference to that same module and then repeat for however many services you want?
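
A sketch of that pattern (inputs abbreviated; everything beyond the module source is an assumption):

module "web_service" {
  source = "cloudposse/ecs-alb-service-task/aws"
  name   = "web"
  # cluster, task definition, and ALB inputs for the first service ...
}

module "worker_service" {
  source = "cloudposse/ecs-alb-service-task/aws"
  name   = "worker"
  # ... and a separate set of inputs for the second service
}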

RB avatar

@Thomas Windell ^

Thomas Windell avatar
Thomas Windell

@RB You are right! I spoke to a colleague and he helped me improve my understanding of modules. Tbh I am quite new to terraform. Thanks for your help

np1
Bart Coddens avatar
Bart Coddens

Hey all with the s3 user module: https://github.com/cloudposse/terraform-aws-iam-s3-user I want to do this:

cloudposse/terraform-aws-iam-s3-user

Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket - cloudposse/terraform-aws-iam-…

Bart Coddens avatar
Bart Coddens
module "s3_user" {
  source       = "cloudposse/iam-s3-user/aws"
  label_order  = ["namespace", "name", "environment", "stage", "attributes"]
  namespace    = "dspace"
  name         = var.name
  environment  = "s3"
  stage        = var.tier
  s3_actions   = ["s3:GetBucketAcl", "s3:GetBucketVersioning", "s3:ListBucket", "s3:GetBucketLocation"]
  s3_resources = ["arn:aws:s3:::cloudposseisawesome"]
  s3_actions   = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
  s3_resources = ["arn:aws:s3:::cloudposseisawesome/*"]
}
Bart Coddens avatar
Bart Coddens

This does not work because s3_actions is defined twice

Bart Coddens avatar
Bart Coddens

a bit puzzled how to do this

pjaudiomv avatar
pjaudiomv

why don’t you just do this

module "s3_user" {
  source       = "cloudposse/iam-s3-user/aws"
  label_order  = ["namespace", "name", "environment", "stage", "attributes"]
  namespace    = "dspace"
  name         = var.name
  environment  = "s3"
  stage        = var.tier
  s3_actions   = ["s3:GetBucketAcl", "s3:GetBucketVersioning", "s3:ListBucket", "s3:GetBucketLocation", "s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
  s3_resources = ["arn:aws:s3:::cloudposseisawesome", "arn:aws:s3:::cloudposseisawesome/*"]
}
Bart Coddens avatar
Bart Coddens

aha that’s the correct syntax

Bart Coddens avatar
Bart Coddens

now I need this for another bucket as well:

Bart Coddens avatar
Bart Coddens

{
  "Effect": "Allow",
  "Action": ["s3:GetObject"],
  "Resource": "arn:aws:s3:::cloudposseisawesome-prod/*"
},

Bart Coddens avatar
Bart Coddens

the user needs to have read and write permissions on it’s own bucket but only read permissions on the production bucket

Bart Coddens avatar
Bart Coddens

the original policy looked like this:

pjaudiomv avatar
pjaudiomv

ok, in that case the module doesn’t support that. Most likely you’ll have to take the user output from the module and attach a policy outside the module, most likely using aws_iam_user_policy_attachment

Bart Coddens avatar
Bart Coddens

"Statement": [ { “Effect”: “Allow”, “Action”: [ “s3:GetBucketAcl”, “s3:GetBucketVersioning”, “s3:ListBucket”, “s3:GetBucketLocation” ], “Resource”: “arnawss3:::cloudposseisawesome-prod” }, { “Effect”: “Allow”, “Action”: [ “s3:GetBucketAcl”, “s3:GetBucketVersioning”, “s3:ListBucket”, “s3:GetBucketLocation” ], “Resource”: “arnawss3:::cloudposseisawesome-test” }, { “Effect”: “Allow”, “Action”: [ “s3:GetObject” ], “Resource”: “arnawss3:::cloudposseisawesome-prod/” }, { “Effect”: “Allow”, “Action”: [ “s3:PutObject”, “s3:GetObject”, “s3:DeleteObject” ], “Resource”: “arnawss3:::cloudposseisawesome-test/” } ]

Bart Coddens avatar
Bart Coddens

I see

Bart Coddens avatar
Bart Coddens

for your reference, this worked:

Bart Coddens avatar
Bart Coddens

module "s3_user" { source = “cloudposse/iam-system-user/aws” # Cloud Posse recommends pinning every module to a specific version # version = “x.x.x” label_order = [“namespace”, “name”, “environment”, “stage”, “attributes”] namespace = “dspace” name = var.name environment = “s3”

inline_policies_map = { s3 = data.aws_iam_policy_document.s3_policy.json } }

data “aws_iam_policy_document” “s3_policy” { statement { actions = [ “s3:GetBucketAcl”, “s3:GetBucketVersioning”, “s3:ListBucket”, “s3:GetBucketLocation” ] resources = [ “arnawss3:::dspace-${var.name}-s3-prod”, ] } statement { actions = [ “s3:PutObject”, “s3:GetObject”, “s3:DeleteObject”] resources = [ “arnawss3:::dspace-${var.name}-s3-prod/” ] } statement { actions = [ “s3:PutObject”, “s3:GetObject”, “s3:DeleteObject” ] resources = [ “arnawss3:::dspace-${var.name}-s3-test/” ] } }

faith.isims avatar
faith.isims

Hello there! Please, I have a question: I’m using the terraform-aws-nlb module and I’m trying to add 2 listeners to the same NLB. Is there a way to do that? Thanks!

Matt Gowie avatar
Matt Gowie

Hey Bradai, you can create your 2nd listener and target group outside of the module. Something like this:

module "nlb" {
...
}

resource "aws_lb_listener" "your_listener_name" {
  load_balancer_arn = module.nlb.load_balancer_arn
  ...
}

...
faith.isims avatar
faith.isims

Thank you very much, I will try this :)

Daniel avatar

Hi, I was hoping to understand the background on the sensitive output change introduced in terraform-aws-ecs-container-definition#118.

The PR mentions an issue with the terraform-aws-ecs-alb-service-task module but I cannot find any references or examples of the actual issue’s code or error. Are there any examples of the actual error and the use-case? While I understand 0.14’s sensitive flagging behavior, I’m confused as to what values were being used in the OP’s container definition that were flagged as sensitive and caused this issue. In my modules, all the secrets are dumped into SM/SSM Parameters and only their ARN references are exposed in the container definition. I’ve been using TF 0.14 without issue in this manner. To my knowledge, those are not sensitive values.

My concern is that sensitive outputs are infectious, for lack of a better word. Some outputs are indeed sensitive but I don’t see how the container definitions are.

fix: mark outputs as sensitive by syphernl · Pull Request #118 · cloudposse/terraform-aws-ecs-container-definition

what Marks the outputs as sensitive Update workflows etc. missed by #119 why Otherwise TF 0.14 would give an Error: Output refers to sensitive values when using these outputs to feed into other …

Bart Coddens avatar
Bart Coddens

Hi all, I am iplementing replication with this module: https://github.com/cloudposse/terraform-aws-s3-bucket

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Bart Coddens avatar
Bart Coddens

in my original configuration I had:

Bart Coddens avatar
Bart Coddens
replication_configuration {
  role = "cloudposseisthebest-role"

  rules {
    id       = "Replicate to DEEP_ARCHIVE on target"
    priority = 0
    status   = "Enabled"


    destination {
      bucket        = "arn:aws:s3:::cloudposseisthebest-role"
      storage_class = "DEEP_ARCHIVE"
    }
  }
}
Bart Coddens avatar
Bart Coddens

can you set the storage class with this module ?

Joe Hosteny avatar
Joe Hosteny
cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Joe Hosteny avatar
Joe Hosteny

Looks like it from the vars, and the implementation in main.tf too

Bart Coddens avatar
Bart Coddens

it confuses me a bit

Bart Coddens avatar
Bart Coddens

when I try:

Bart Coddens avatar
Bart Coddens

replication_rules = "storage_class=DEEP_ARCHIVE"

Bart Coddens avatar
Bart Coddens

it gives back:

Bart Coddens avatar
Bart Coddens

Error: Invalid value for module argument

on main.tf line 67, in module “s3_bucket”: 67: replication_rules = “storage_class=DEEP_ARCHIVE”

Joe Hosteny avatar
Joe Hosteny

Your input needs to be a list of maps, and destination is a nested map within that

Bart Coddens avatar
Bart Coddens

you lost me here

Joe Hosteny avatar
Joe Hosteny

Something like the example here for the module "subnets" section:

Joe Hosteny avatar
Joe Hosteny
HashiCorp Terraform 0.12 Preview: Rich Value Types

As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The post this week is on the addition of rich value types in variables and outputs. Terraform variables and outputs today support basic primitives and simple lists and maps. Lists and maps in particular have surprising limitations that lead to unintuitive and frustrating errors. Terraform 0.12 allows the use of arbitrarily complex values for both input variables and outputs, and the types of these values can be exactly specified.

Joe Hosteny avatar
Joe Hosteny

Or see the example in the repo where the grants are configured. It is a similar idea:

Joe Hosteny avatar
Joe Hosteny
cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Bart Coddens avatar
Bart Coddens

hmmm it sill confuses me

Bart Coddens avatar
Bart Coddens

when I do this:

Bart Coddens avatar
Bart Coddens

replication_rules = [ { id = "Replicate to DEEP_ARCHIVE on target" } ]

Bart Coddens avatar
Bart Coddens

the plan picks it up:

Bart Coddens avatar
Bart Coddens

+ replication_configuration {
    + role = (known after apply)

    + rules {
        + id       = "Replicate to DEEP_ARCHIVE on target"
        + priority = 0

        + destination {
            + bucket        = "arn:aws:s3:::dspace-allegheny-s3-backup"
            + storage_class = "STANDARD"
          }
      }
  }
Bart Coddens avatar
Bart Coddens

when I try to set it:

Bart Coddens avatar
Bart Coddens

replication_rules = [ { id = "Replicate to DEEP_ARCHIVE on target" storage_class = "DEEP_ARCHIVE" } ]

Bart Coddens avatar
Bart Coddens

it’s not picked up

Bart Coddens avatar
Bart Coddens

thx Joe, my answer is in the main channel

1
Bart Coddens avatar
Bart Coddens

hah this does the trick:

Bart Coddens avatar
Bart Coddens
  replication_rules = [
    {
      id = "Replicate to DEEP_ARCHIVE on target"
      destination = {
        bucket        = "arn:aws:s3:::cloudposse-${var.name}-is-awesome"
        storage_class = "DEEP_ARCHIVE"
      }
    }
  ]
1
Mohammed Yahya avatar
Mohammed Yahya
Checkov - Visual Studio Marketplace

Extension for Visual Studio Code - Find and fix misconfigurations in infrastructure-as-code manifests like Terraform, Kubernetes, Cloudformation, Serverless framework, Arm templates using Checkov - static analysis for infrastructure as code .

1
Release notes from terraform avatar
Release notes from terraform
06:34:26 PM

v0.14.7 0.14.7 (February 17, 2021) ENHANCEMENTS: cli: Emit an “already installed” event when a provider is found already installed (#27722) provisioner/remote-exec: Can now run in a mode that expects the remote system to be running Windows and executing commands using the Windows command interpreter, rather than a Unix-style shell. Specify…

Emit ProviderAlreadyInstalled when provider installed by pselle · Pull Request #27722 · hashicorp/terraform

Emit the ProviderAlreadyInstalled event when we successfully verify that we&#39;ve already installed this provider and are skipping installation. Before: $ terraform init Initializing the backend….

mikesew avatar
mikesew

Question about the branches I see in several of the cloudposse TF modules, i.e. https://github.com/cloudposse/terraform-aws-rds/branches. I see master, 0.11/master, and 0.12/master branches. Is the intention to maintain separate branches for each major TF version? Update: never mind, I see that support was dropped for 0.12, so master = tf0.13, 0.12/master = tf0.12 (which I presume is at a feature standstill), and 0.11/master = tf0.11 (which I presume was stopped back when you guys moved to 0.12)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Not really - we had to do this for the HCL1 → HCL2 cut-over

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

but it’s too much overhead for us to manage multiple versions of our modules for backwards compatibility

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Probably one for @Erik Osterman (Cloud Posse) and the rest of the posse: has there been consideration for the use of Semantic Versioning (https://semver.org/ for those readers who haven’t seen it before) for the various modules? With the recent moves around AWS provider updates and more recently the minimum Terraform versions changing, it’s been a little harder than I’d like to use pessimistic versioning to track releases without surprise breaking changes.

1
David Lozano avatar
David Lozano

Hi everyone,

Does anyone know why this condition is returning false, and what would be the right expression to compare with to get true?

main.tf

variable "empty_list" {
  type = list(string)
  default = []
  }

console

tf console
> var.empty_list
tolist([])
> 

> var.empty_list == []
false
> var.empty_list == tolist([])
false
1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Looks like 0.14-specific behaviour, and not working as intended, because

> [] == []
true
> tolist([]) == tolist([])
true
Alex Jurkiewicz avatar
Alex Jurkiewicz

however I’d say nobody has really noticed because a more intuitive test is length(var.empty_list) == 0

2
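
As a quick sketch of that idiom (the variable and resource names are hypothetical), length() sidesteps the list-type comparison entirely:

locals {
  # fall back to a default SG when the caller passes an empty list
  security_group_ids = length(var.security_group_ids) == 0 ? [aws_security_group.default.id] : var.security_group_ids
}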
Alex Jurkiewicz avatar
Alex Jurkiewicz

it might be because tolist([]) doesn’t generate an object which is the same type as an empty list(string)

David Lozano avatar
David Lozano

I don’t have this problem if I change the var type to list() instead of list(string), but I don’t wanna lose the extra data validation. There is something weird when specifying the data type in the list.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It looks like if there is at least a single element in the list, the comparison works as long as you include the ugly tolist

Alex Jurkiewicz avatar
Alex Jurkiewicz
> var.empty_list
tolist([
  "a",
])
> var.empty_list == tolist(["a"])
true
Alex Jurkiewicz avatar
Alex Jurkiewicz

another probably-unfixable HCL wart

David Lozano avatar
David Lozano

Yeah, it works with a non-empty list without issues. I don’t remember this problem in previous TF versions. I’m currently using v0.14.5. Thank you @Alex Jurkiewicz

1
loren avatar

I’d report that as a bug

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

link the issue if you do, I’m interested

1

2021-02-18

Bart Coddens avatar
Bart Coddens

Hi all, I am a bit confused with tagging my root volume:

Bart Coddens avatar
Bart Coddens
resource "aws_instance" "bladibla" {
  disable_api_termination     = true

  tags = {
    "Tier"        = "DEV"
    "Application" = "DSpace"
    "Name"        = "UPGRADE-EXPRESS"
    "Terraform"   = "True"
    "Patch Group" = "patch-dev"
  }

  root_block_device {
    volume_type           = "standard"
    volume_size           = 30
    delete_on_termination = false
      tags = {
        "Application" = "DSpace"
        "Data"        = "HOME"
        "Name"        = "UPGRADE-EXPRESS-HOME"
        "Tier"        = "DEV"
        }
      }
    }
Mohammed Yahya avatar
Mohammed Yahya
resource "aws_instance" "bladibla" {
  disable_api_termination     = true
  tags = var.tags
  volume_tags = var.tags
  root_block_device {
    volume_type           = "standard"
    volume_size           = 30
    delete_on_termination = false
      }
    }
Bart Coddens avatar
Bart Coddens

Yeah but the warning shows:

Bart Coddens avatar
Bart Coddens

Do not use volume_tags if you plan to manage block device tags outside the aws_instance configuration, such as using tags in an aws_ebs_volume resource attached via aws_volume_attachment. Doing so will result in resource cycling and inconsistent behavior.

Bart Coddens avatar
Bart Coddens

which is the case:

Bart Coddens avatar
Bart Coddens

resource "aws_ebs_volume" "UPGRADE-HOME" {
  availability_zone = aws_instance.DE-UPGRADE.availability_zone
  size              = 400
  type              = "standard"

  tags = {
    "Application" = "DSpace"
    "Data"        = "HOME"
    "Name"        = "UPGRADE-EXPRESS-HOME"
    "Tier"        = "DEV"
  }
}

Mohammed Yahya avatar
Mohammed Yahya

is the volume already there ?

Bart Coddens avatar
Bart Coddens

So I combine the two

Mohammed Yahya avatar
Mohammed Yahya

ah, I see

Bart Coddens avatar
Bart Coddens

yes it’s the root volume

Mohammed Yahya avatar
Mohammed Yahya

ok then use it in one place

Bart Coddens avatar
Bart Coddens

well the docs show:

Mohammed Yahya avatar
Mohammed Yahya

either in the aws_ebs_volume or in aws_instance

Bart Coddens avatar
Bart Coddens

yes true, so when I look here:

Bart Coddens avatar
Bart Coddens

it shows:

Bart Coddens avatar
Bart Coddens

tags - (Optional) A map of tags to assign to the device.

Mohammed Yahya avatar
Mohammed Yahya

Yes, that’s obvious, but what you are trying to do is define the same attribute from two arguments in two different resources; that’s why a race condition will occur

Bart Coddens avatar
Bart Coddens

but you cannot convert the root block device into an aws_ebs_volume config, right?

Mohammed Yahya avatar
Mohammed Yahya
Currently, changes to the ebs_block_device configuration of existing resources cannot be automatically detected by Terraform. To manage changes and attachments of an EBS block to an instance, use the aws_ebs_volume and aws_volume_attachment resources instead. If you use ebs_block_device on an aws_instance, Terraform will assume management over the full set of non-root EBS block devices for the instance, treating additional block devices as drift. For this reason, ebs_block_device cannot be mixed with external aws_ebs_volume and aws_volume_attachment resources for a given instance.
Mohammed Yahya avatar
Mohammed Yahya

so use the aws_ebs_volume and aws_volume_attachment resources and add the tags there, not in aws_instance
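
To complete that advice with the aws_ebs_volume from earlier in the thread, the attachment would look roughly like this (device_name is an assumption):

resource "aws_volume_attachment" "UPGRADE-HOME" {
  device_name = "/dev/sdh" # assumption
  volume_id   = aws_ebs_volume.UPGRADE-HOME.id
  instance_id = aws_instance.DE-UPGRADE.id
}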

Bart Coddens avatar
Bart Coddens

good

Bart Coddens avatar
Bart Coddens

so better to leave it untouched

1
Bart Coddens avatar
Bart Coddens

when I do this, the tool refuses with:

Bart Coddens avatar
Bart Coddens

tags is not expected here

Nikola Milic avatar
Nikola Milic

If I use the s3 backend for my terraform state, how should I fetch that information for use in my web application or some job in the pipeline?

Use these assumptions:

  1. terraform did provision my resources (let’s say - RDS) and saved the state remotely on s3
  2. my web application needs that resource information for the provisioned RDS (some of it is secret)

Here is what’s coming to my mind:

  1. write a shell script that uses the terraform CLI to fetch these secrets from the state and write them to a .env file so that the web app can load them
  2. use some secret management software from AWS? Vault (overkill?)

Take note that I use Gitlab CI for the pipeline, and I know that there is a Terraform integration present there, but I want to know the correct way of managing this if I were to transition to Github pipelines some day or something else.
Adnan avatar
$ terraform output
lb_url = "<http://lb-5YI-project-alpha-dev-2144336064.us-east-1.elb.amazonaws.com/>"
vpc_id = "vpc-004c2d1ba7394b3d6"
web_server_count = 4

there is also a json output flag

terraform output -json
1
Ofir Rabanian avatar
Ofir Rabanian

though he mentioned that some are secrets

1
Ofir Rabanian avatar
Ofir Rabanian

regarding the secrets you should probably save them in aws secret manager, and access them using the right permissions from your application

Nikola Milic avatar
Nikola Milic

Thanks, both of you, one more question regarding that:

“The sensitive argument for outputs can help avoid inadvertent exposure of those values. However, you must still keep your Terraform state secure to avoid exposing these values.”

If I keep my S3 bucket private and use this sensitive flag, is that enough protection so that I can use terraform output instead of messing with a secrets manager?

Nikola Milic avatar
Nikola Milic

My logic is this: if a shell can access remote state, it is privileged (my CI executor). If it’s privileged, why bother using secrets management? Just don’t print secrets to the console.

Ofir Rabanian avatar
Ofir Rabanian

when you use “sensitive” on an output, you wouldn’t see it in plan/apply.

Ofir Rabanian avatar
Ofir Rabanian

and your shell probably shouldn’t access your state and take secrets from it..

Ofir Rabanian avatar
Ofir Rabanian

I mean, it’s probably possible - but definitely not recommended

Nikola Milic avatar
Nikola Milic

What is your proposed approach? Let’s say that my provisioned resources are RDS database URL (not secret), and some secrets like username/password (secrets)

Ofir Rabanian avatar
Ofir Rabanian

aws secret manager

Nikola Milic avatar
Nikola Milic

and with that I would manage both non-secret provisioning variables AND secret provisioning vars?

Ofir Rabanian avatar
Ofir Rabanian

either, or use SSM for the non-secret ones

Nikola Milic avatar
Nikola Milic


and your shell probably shouldn’t access your state and take secrets from it..
also I’m interested in more explanation behind this

Ofir Rabanian avatar
Ofir Rabanian

the reason to not use secret manager for everything would probably be the pricing of it

Nikola Milic avatar
Nikola Milic

because at some point I’ll also automate “terraform apply” and give that power to Gitlab CI to do it automatically

Nikola Milic avatar
Nikola Milic

So if runner shell can provision my infrastructure in an automated way that I set up, why wouldn’t it have access to those created secrets?

Ofir Rabanian avatar
Ofir Rabanian

that’s ok, I just wouldn’t make the app go and fetch secrets from the state file

1
Nikola Milic avatar
Nikola Milic

Alright, thanks for the help!

Nikola Milic avatar
Nikola Milic

@Adnan as well for the documentation

1
Ofir Rabanian avatar
Ofir Rabanian

sure np

Alex Jurkiewicz avatar
Alex Jurkiewicz

Yes, accessing remote state is a bit of an anti-pattern. It couples things together a little closer than is comfortable, and there’s no way to implement access control. You can use a dedicated secret sharing tool like AWS Secrets Manager (or AWS SSM Parameter Store, if cost is a concern) instead
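
A sketch of that hand-off (the resource names and secret path are assumptions): Terraform generates and stores the credentials in Secrets Manager once, and the application reads them by name at runtime, so nothing has to be pulled out of the state:

resource "random_password" "db" {
  length  = 32
  special = false
}

resource "aws_secretsmanager_secret" "db" {
  name = "myapp/rds" # hypothetical path
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = "app" # hypothetical
    password = random_password.db.result
  })
}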

Bart Coddens avatar
Bart Coddens

Hi all, I used the iam-system-user module to create a user with an access key and secret. Could you hand this data over to Ansible to store it on the machine? I know this is not best practice, but the legacy application cannot work without it

Bart Coddens avatar
Bart Coddens

the output generates this:

Mohammed Yahya avatar
Mohammed Yahya
Building a secure CI/CD pipeline for Terraform Infrastructure as Code

We created a model for automatically delivering infrastructure changes with robust security practices, and used it to build a secure Terraform CI/CD solution for AWS at OVO.

Bart Coddens avatar
Bart Coddens

Hi all, how do you guys manage the state backend on s3? When I try to do something like this:

Bart Coddens avatar
Bart Coddens
terraform {
  backend "s3" {
    bucket = "bla-test-tfstate"
    key    = "s3/${var.name}/terraform.tfstate"
    region = "eu-west-1"
  }
}
pjaudiomv avatar
pjaudiomv

I don’t think you can use a variable there

pjaudiomv avatar
pjaudiomv

you can pass it in as an arg on init though

Bart Coddens avatar
Bart Coddens

I can use ansible to fix this

pjaudiomv avatar
pjaudiomv
terraform init -backend-config="bucket=bla-test-tfstate" -backend-config="key=whatever"
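The backend block then stays partial, a minimal sketch:

terraform {
  backend "s3" {
    region = "eu-west-1"
    # bucket and key are supplied at init time via -backend-config
  }
}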
Bart Coddens avatar
Bart Coddens

this writes the config and you can use the same variable here

Bart Coddens avatar
Bart Coddens

hmmm let me think

Bart Coddens avatar
Bart Coddens

it fails

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have a module (or know of one) that can easily configure the necessary subnet CIDRs for the upstream VPC module?

i know the VPC CIDR block I would like to use and will only ever go across 3 AZs, so I will need 12 CIDR blocks from the VPC CIDR provided

David Lozano avatar
David Lozano

have you tried using terraform-aws-dynamic-subnets?

cloudposse/terraform-aws-dynamic-subnets

Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets

this1
Alex Jurkiewicz avatar
Alex Jurkiewicz

That will only deploy one subnet per AZ (per public/private).. it sounds like you want multiple subnets per AZ?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

nope never seen it before

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want one of each type of subnet per AZ

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so one public, one private, one intra, one database per AZ

Alex Jurkiewicz avatar
Alex Jurkiewicz

what is “intra” in this context? dynamic-subnets only understands “public” and “private”

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

intra is a type of subnet the upstream module leverages

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it’s basically a private subnet that has no Internet routing

David Lozano avatar
David Lozano

those 3 types of subnets you mention are more similar to the concepts in this module, but this module creates the VPC and everything. It’ll take a while to go through all the input variables you can use to customize it, but you can start with the examples easily.

terraform-aws-modules/terraform-aws-vpc

Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc

Alex Jurkiewicz avatar
Alex Jurkiewicz

dynamic-subnets can’t do that for you, sadly. I think your best bet is to calculate the CIDR ranges yourself using the cidrsubnet function, and then create the subnets “by hand”. Possibly using https://github.com/cloudposse/terraform-aws-named-subnets

cloudposse/terraform-aws-named-subnets

Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yeh right now i hard-code the subnet cidr blocks

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

but i think its too easy to mess up

Alex Jurkiewicz avatar
Alex Jurkiewicz
> [for i in [0,1,2,3,4,5,6,7,8,9,10,11] : cidrsubnet("10.0.0.0/20", 4, i)]
[
  "10.0.0.0/24",
  "10.0.1.0/24",
  "10.0.2.0/24",
  "10.0.3.0/24",
  "10.0.4.0/24",
  "10.0.5.0/24",
  "10.0.6.0/24",
  "10.0.7.0/24",
  "10.0.8.0/24",
  "10.0.9.0/24",
  "10.0.10.0/24",
  "10.0.11.0/24",
]
1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yeh i was looking at cidrsubnet before and it just confused me

Alex Jurkiewicz avatar
Alex Jurkiewicz

I find it confusing too. I read the above example as “take 10.0.0.0/20 and divide it into /24s (20+4), and give me the i’th one of those”

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so my VPC is 10.128.0.0/19

then i was planning on dividing that into /21s to get me 4 AZs (i only need 3)

then from there dividing each AZ block again but have it biased towards 2 of the 4 subnets i require

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

that would then give me the four subnets for AZ1 then do the same for AZ2 and AZ3

Alex Jurkiewicz avatar
Alex Jurkiewicz

biasing the sub-division sounds unnecessarily complicated, IMO

1
Alex Jurkiewicz avatar
Alex Jurkiewicz
[public_cidr, private_cidr, intra_cidr] = [for i in [0,1,2] : cidrsubnet("10.128.0.0/19", 4, i) ]

[public_az1, public_az2, public_az3] = [for i in [0,1,2] : cidrsubnet(public_cidr, 4, i) ]
etc for private/intra
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

agreed but we need more IPs in the private subnets as that is where EKS runs

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

and EKS takes an IP per pod

Alex Jurkiewicz avatar
Alex Jurkiewicz

then start with a bigger cidr than /19

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i can’t

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

as we need to handle 16 regions

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

with 64 VPCs per region

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

having 5xx IPs for the public subnets is literally overkill as it’s only ever going to contain 2 load-balancers

Alex Jurkiewicz avatar
Alex Jurkiewicz

It might be worth you writing out what cidr ranges you want for each sort of subnet. This has been a lot of me suggesting something and then you coming back with another requirement

David Lozano avatar
David Lozano

you can make subnets of different sizes using cidrsubnets instead. You can add more newbits to the function to create small subnets for public and bigger ones for private. You don’t need to make all subnets the same size.

> cidrsubnets("10.1.0.0/16", 4, 4, 8, 4)
[
  "10.1.0.0/20",
  "10.1.16.0/20",
  "10.1.32.0/24",
  "10.1.48.0/20",
]
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

my issue is the way we chunk up the subnets

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

as in, is my logic correct?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

VPC -> AZs -> subnets

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

my initial thinking was …

VPC 0 = 10.128.0.0/19

4 AZ blocks
10.128.0.0/21, 10.128.8.0/21, 10.128.16.0/21, 10.128.24.0/21

4 Subnets in AZ 1
10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, 10.128.6.0/23

4 Subnets in AZ 2
10.128.8.0/23, 10.128.10.0/23, 10.128.12.0/23,  10.128.14.0/23
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

this makes every subnet an equal size

David Lozano avatar
David Lozano

the AZs are not tied to any specific IP range; the subnets are, and they exist in one specific AZ regardless of their CIDR blocks. The subnet breakdown looks more like this: VPC -> Subnets(AZ)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so what you’re saying is i just need to carve up the VPC CIDR into 12 subnets

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

and not worry about the AZ specifics

Alex Jurkiewicz avatar
Alex Jurkiewicz

Might be easier to break things up the other way.

[private_cidr, intra_cidr, public_cidr] = cidrsubnets("10.128.0.0/19", 2,2,5) # /21, /21, /24

[private_az1, private_az2, private_az3] = cidrsubnets(private_cidr, 2,2,2) # /23 each
[public_az1, public_az2, public_az3] = cidrsubnets(public_cidr, 2,2,2) # /26 each
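
A valid-HCL version of that sketch (the bracketed destructuring above is pseudocode), assuming the /19 from this thread:

locals {
  groups       = cidrsubnets("10.128.0.0/19", 2, 2, 5) # /21, /21, /24
  private_cidr = local.groups[0]
  intra_cidr   = local.groups[1]
  public_cidr  = local.groups[2]

  private_azs = cidrsubnets(local.private_cidr, 2, 2, 2) # three /23s
  public_azs  = cidrsubnets(local.public_cidr, 2, 2, 2)  # three /26s
}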
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

so i need four main cidr groups

public (tiny), private (biggest), database (medium), intra (small)

David Lozano avatar
David Lozano

maybe this tool can help us visualize the breakdown better. Something like this?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yeh i was using that earlier

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

and trying to make it to cidrsubnets

David Lozano avatar
David Lozano

specifically the IP group example in the screenshot can be generated with cidrsubnets("10.128.0.0/19", 5, 5, 4, 3, 2, 5, 5, 4, 3, 2)

David Lozano avatar
David Lozano

you could create lists of public, private and intra from that list based on the indexes, or something like that, creating sublists (see the sketch below).
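
As a sketch, that slicing could look like this (the grouping is illustrative, since the actual index layout came from the screenshot):

locals {
  cidrs = cidrsubnets("10.128.0.0/19", 5, 5, 4, 3, 2, 5, 5, 4, 3, 2)

  # hypothetical grouping into per-AZ sublists
  az1 = slice(local.cidrs, 0, 5)
  az2 = slice(local.cidrs, 5, 10)
}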

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am trying to automate this away as much as possible to make it super simple for people

David Lozano avatar
David Lozano

Hi everyone,

Has anyone encountered this issue before? I think it has to do with the way terraform processes list values.

If I make the first cidr_blocks an empty list [] to match the second cidr_blocks type, it throws a different error: "source_security_group_id": conflicts with cidr_blocks, since cidr_blocks and source_security_group_id cannot be present in the same rule.

main.tf

module "sg" {
  source = "github.com/cloudposse/terraform-aws-security-group?ref=0.1.3"

  rules = [
    {
      type                     = "ingress"
      from_port                = 22
      to_port                  = 22
      protocol                 = "tcp"
      cidr_blocks              = null
      self                     = null
      source_security_group_id = "sg-0000aaaa1111bbb"
    },

    {
      type                     = "egress"
      from_port                = 0
      to_port                  = 65535
      protocol                 = "all"
      cidr_blocks              = ["0.0.0.0/0"]
      self                     = null
      source_security_group_id = null
    }
  ]
  vpc_id  = "vpc-0000aaaa1111bbb"
  context = module.this.context
}

ERROR

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@sweetops556 heads up

David Lozano avatar
David Lozano
tf apply
panic: inconsistent list element types (cty.Object(map[string]cty.Type{"cidr_blocks":cty.DynamicPseudoType, "from_port":cty.Number, "protocol":cty.String, "self":cty.DynamicPseudoType, "source_security_group_id":cty.String, "to_port":cty.Number, "type":cty.String}) then cty.Object(map[string]cty.Type{"cidr_blocks":cty.Tuple([]cty.Type{cty.String}), "from_port":cty.Number, "protocol":cty.String, "self":cty.DynamicPseudoType, "source_security_group_id":cty.String, "to_port":cty.Number, "type":cty.String}))

goroutine 545 [running]:
github.com/zclconf/go-cty/cty.ListVal(0xc000e784c0, 0x2, 0x2, 0xc0005465e0, 0x1, 0x1, 0x1)
        /go/pkg/mod/github.com/zclconf/[email protected]/cty/value_init.go:166 +0x5a8
github.com/zclconf/go-cty/cty/convert.conversionTupleToList.func2(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x0, 0x0, 0x0, 0x3860320, 0x2cebaef0, 0x10, ...)
        /go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/conversion_collection.go:327 +0x794
github.com/zclconf/go-cty/cty/convert.getConversion.func1(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x0, 0x0, 0x0, 0xc001009c50, 0xc0005465d0, 0x3860360, ...)
        /go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/conversion.go:46 +0x433
github.com/zclconf/go-cty/cty/convert.retConversion.func1(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0xc0005465d0, 0x0, 0x0, 0x0, 0xc00030c270, 0x10000c001c70000)
        /go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/conversion.go:188 +0x6b
github.com/zclconf/go-cty/cty/convert.Convert(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x3860360, 0xc000877040, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x0, ...)
        /go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/public.go:51 +0x1b9
github.com/hashicorp/terraform/terraform.(*nodeModuleVariable).EvalModuleCallArgument(0xc000594900, 0x389bce0, 0xc001c441a0, 0xc0005ca301, 0x0, 0x0, 0x0)
        /home/circleci/project/project/terraform/node_module_variable.go:238 +0x265
github.com/hashicorp/terraform/terraform.(*nodeModuleVariable).Execute(0xc000594900, 0x389bce0, 0xc001c441a0, 0xc00003a004, 0x30ada40, 0x3202b60)
        /home/circleci/project/project/terraform/node_module_variable.go:157 +0x7f
github.com/hashicorp/terraform/terraform.(*ContextGraphWalker).Execute(0xc000ebc270, 0x389bce0, 0xc001c441a0, 0x2da00048, 0xc000594900, 0x0, 0x0, 0x0)
        /home/circleci/project/project/terraform/graph_walk_context.go:127 +0xbc
github.com/hashicorp/terraform/terraform.(*Graph).walk.func1(0x3202b60, 0xc000594900, 0x0, 0x0, 0x0)
        /home/circleci/project/project/terraform/graph.go:59 +0x962
github.com/hashicorp/terraform/dag.(*Walker).walkVertex(0xc000594960, 0x3202b60, 0xc000594900, 0xc000e78340)
        /home/circleci/project/project/dag/walk.go:387 +0x375
created by github.com/hashicorp/terraform/dag.(*Walker).Update
        /home/circleci/project/project/dag/walk.go:309 +0x1246



!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

SECURITY WARNING: the "crash.log" file that was created may contain 
sensitive information that must be redacted before it is safe to share 
on the issue tracker.

[1]: <https://github.com/hashicorp/terraform/issues>

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

2021-02-19

pericdaniel avatar
pericdaniel

Hello! Is there a way to pass a lifecycle ignore_changes in the inputs section when you are trying to point to a source module? (Using Terragrunt.hcl)

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

can anyone tell me what i am doing wrong please?

public_subnets   = var.subnet_cidrs == {} ? local.subnet_cidr_map["public"] : var.subnet_cidrs["public"]
private_subnets  = var.subnet_cidrs == {} ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
intra_subnets    = var.subnet_cidrs == {} ? local.subnet_cidr_map["intra"] : var.subnet_cidrs["intra"]
database_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["database"] : var.subnet_cidrs["database"]
var.subnet_cidrs is empty map of dynamic
Error: Invalid index

  on .terraform/modules/base.vpc/modules/vpc/main.tf line 20, in module "vpc":
  20:   private_subnets  = var.subnet_cidrs == {} ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
    |----------------
    | var.subnet_cidrs is empty map of dynamic

The given key does not identify an element in this collection value.
loren avatar

what is the error?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the error is the last block

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
Error: Invalid index

  on .terraform/modules/base.vpc/modules/vpc/main.tf line 20, in module "vpc":
  20:   private_subnets  = var.subnet_cidrs == {} ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
    |----------------
    | var.subnet_cidrs is empty map of dynamic

The given key does not identify an element in this collection value.
loren avatar

hmmm, version? someone was just posting that equality on [] wasn’t working. betting the bug is impacting {} also?

loren avatar

try length(var.subnet_cidrs) > 0
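
Applied to the failing line, that suggestion would read something like this (lookup() with a default is an equivalent spelling):

private_subnets = length(var.subnet_cidrs) == 0 ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]

# or, equivalently:
private_subnets = lookup(var.subnet_cidrs, "private", local.subnet_cidr_map["private"])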

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

0.13.4

loren avatar

empty map of dynamic is interesting phrasing… what’s the type definition on the variable?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
  subnet_cidrs = {
    database = ["10.60.25.0/24", "10.60.26.0/24", "10.60.27.0/24"]
    intra    = ["10.60.10.0/24", "10.60.11.0/24", "10.60.12.0/24"]
    private  = ["10.60.4.0/24", "10.60.5.0/24", "10.60.6.0/24"]
    public   = ["10.60.1.0/24", "10.60.2.0/24", "10.60.3.0/24"]
  }
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it could look like that

loren avatar

well that’s a value, what’s the type?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

map(any)

loren avatar

any must be what it means by “dynamic”

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am going to use length like you suggested

1
loren avatar

here’s the other thread on [] equality… https://sweetops.slack.com/archives/CB6GHNLG0/p1613610796013400

Hi everyone,

Does anyone know why is this conditions returning false? and what would be the right expression to compare with to get true ?

main.tf

variable "empty_list" {
  type = list(string)
  default = []
  }

console

tf console
> var.empty_list
tolist([])
> 

> var.empty_list == []
false
> var.empty_list == tolist([])
false
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

interesting thanks for this

loren avatar

yeah, something is whack:

> tomap({}) == {}
false
loren avatar

i bet if you changed your condition to == tomap({}) that would work also, but i can’t see why it should be necessary

loren avatar

the open/closed issues i’m finding seem pretty user-hostile… instead of making the behavior work, they’re modifying the output to help show why it doesn’t work

loren avatar
empty list comparison does not work if the list contains an object · Issue #23562 · hashicorp/terraform

Terraform Version Terraform v0.12.17 Terraform Configuration Files Thanks @dpiddockcmp for a simpler example. #23562 (comment) variable &quot;object&quot; { type = list(object({ a = string })) defa…

Dangerous 0.12 change: strings and numbers no longer compare equal · Issue #21978 · hashicorp/terraform

TL;DR 0 == &quot;0&quot; ? &quot;foo&quot; : &quot;bar&quot; In Terraform 0.11, we get &quot;foo&quot; In Terraform 0.12, we get &quot;bar&quot; Terraform Version Terraform v0.12.3 + provider.googl…

local and variable lists treated differently · Issue #26673 · hashicorp/terraform

Terraform Version Terraform v0.13.4 Terraform Configuration Files variable testlist { type = list(string) default = [&quot;NOTSET&quot;] } variable teststring { type = string default = &quot;NOTSET…

1
loren avatar

they’re also based on lists, where [] is not actually a list! of course. but {} is definitely a map, so what’s their excuse for that?

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yeh its weird right

Bart Coddens avatar
Bart Coddens

hi all while using the module:

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Bart Coddens avatar
Bart Coddens

I would like to create a prefix: so the name of the bucket should be cloudposseisawesome-prod/var.name

pjaudiomv avatar
pjaudiomv

You can’t have a / in a bucket name

Bart Coddens avatar
Bart Coddens

correct

pjaudiomv avatar
pjaudiomv

Oh so is this just for tags

Bart Coddens avatar
Bart Coddens

it’s ok we can have a single bucket per customer in this case

Bart Coddens avatar
Bart Coddens

no need to fiddle around

Bart Coddens avatar
Bart Coddens

the module does not support migration to DEEP_ARCHIVE right ?

Mohammed Yahya avatar
Mohammed Yahya

https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.29.0
aws_securityhub_invite_accepter is finally out

Release v3.29.0 · hashicorp/terraform-provider-aws

FEATURES: New Resource: aws_cloudwatch_event_archive (#17270) New Resource: aws_elasticache_global_replication_group (#15885) New Resource: aws_s3_object_copy (#15461) New Resource: aws_securityhu…

1
Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

1
Bart Coddens avatar
Bart Coddens

does not seem to support a transition to DEEP_ARCHIVE yet; how can I request this?

Matt Gowie avatar
Matt Gowie

@Bart Coddens You can put up an issue in that repo and see if anyone gets around to it. But if you want it done, the main way to do so is fork, update, and PR back. We gladly accept these types of contributions and you’ll get a quick feedback and turnaround if you post your PR in #pr-reviews.

1
mikesew avatar
mikesew

Just curious how people have managed terraform version upgrades with modules? It seems that since the state is not backwards compatible, we have several workspaces all at some version of 0.12.x.

Alex Jurkiewicz avatar
Alex Jurkiewicz

You mean, updating modules to work with a newer version of TF?

mikesew avatar
mikesew

Well, having a module be backwards compatible with multiple ‘consumer’ TF configs which are often at weird versions.

# ./modules/rds/main.tf   , Git Tag is at 1.0
terraform {
  required_version = ">= 0.12"
}
# ./myapp/database.tf
# workspace is at TF 0.12.10
module "rds" {
  source  = "terraform.mycompany.com/mycompany/rds/aws"
  version = "~>1.0"

.. but other workspaces might be at 12.20, or 12.24, or others want to try using 0.13.

Alex Jurkiewicz avatar
Alex Jurkiewicz

huh. I’ve never really seen people use different versions of Terraform for different workspaces. Certainly not more than two versions

Alex Jurkiewicz avatar
Alex Jurkiewicz

generally at my company we only roll forward. Modules have a single development branch, and it works with the current stable Terraform. If you are using an older version of the module and you want newer functionality, you have to update your Terraform version to stable

1
Alex Jurkiewicz avatar
Alex Jurkiewicz

the maintenance burden of modules is so high I think this is the only realistic approach. Not even CloudPosse can support multiple versions of their modules

1
mikesew avatar
mikesew

re: different versions, from what I see, one app may ‘upgrade’ terraform environment by environment (dev, qa, uat, stg, then prod) so there would be some minor drift. But then there might be another app (with its dev/qa/uat/stg/prod terraform workspaces) which is pinned at an older TF version. Me as the module maintainer… is that my concern?

Matt Gowie avatar
Matt Gowie

@mikesew likely useful to check out Hashi’s suggestions on this: https://www.terraform.io/docs/language/expressions/version-constraints.html#best-practices

The gist is: reusable/child modules should pin a minimum version constraint using >=.

Root modules should pin a specific version, or, if you’re more willing for things to break, you can pin using the pessimistic constraint operator (~>).

2
mikesew avatar
mikesew

thanks. But I see, for example with anton B’s terraform-aws-rds module (https://github.com/terraform-aws-modules/terraform-aws-rds), it’s got:
• a master branch at tf0.12, tagged with a 2.x.x semver
• a terraform011 branch that’s released on a 1.x.x semver
…and they’re both seeing updates. And my particular modules are horribly written, so I seem to always make MANY breaking changes (like renaming variable inputs, sorry), therefore I feel like I should be bumping my semver constantly. So I feel like that doesn’t really scale.

Matt Gowie avatar
Matt Gowie

Are you talking about how you version your module itself or are you talking about how your modules version their providers / terraform?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is this an internal module you are talking about? Or one you’ve published and want to become popular

mikesew avatar
mikesew

this is an internal module (0.12) that I’m just trying to upgrade to .13 or .14 simply to stay with the times, and not break any workspaces that are using it. What I’m going to try: install terraform 0.13, then
• goto my module, checkout a new branch
• run terraform 0.13upgrade
• try it with a test terraform spin
• set required_version to at least 0.13

terraform {
  required_version = ">= 0.13"
}

• Pull Request back to master branch
• tag/release that commit with a new breaking version (2.0.0)
• create old branch named master/terraform012 for any legacy hotfixes
• put out announcement or release notes saying this is now tf0.13, no support for .12?

Alex Jurkiewicz avatar
Alex Jurkiewicz

Sounds good, but why bother with a branch. Main/master can be evergreen branch. You can create a branch for older version if you have future need to hotfix

mikesew avatar
mikesew

.. sorry you’re right. I just mod’d above. THANK YOU for the discussion/advice.

2021-02-20

nnsense avatar
nnsense

QQ: is this null_data_source still required as a workaround for nodes to wait for EKS module and cm to be in place? Yesterday I’ve seen this message from a deployment using it:

Warning: Deprecated Resource

The null_data_source was historically used to construct intermediate values to
re-use elsewhere in configuration, the same can now be achieved using locals

What if I move the two values from the null_data_source shown in the examples into a locals { cluster_name = module.eks_cluster.eks_cluster_id }? Would that achieve the same thing (waiting for the aws-auth cm to exist)? On the same subject, what’s the second variable (kubernetes_config_map_id) for? I cannot find it anywhere in the code, so how are the two tied together if set in locals (provided that’s the right option if we want to make terraform happy and stop using null_data_source)? I even tried to move the two vars into locals, and the deployment completed successfully… but I have the strong feeling I’m missing something here… but, if I’m not, and moving those into locals is everything we need to get rid of that message, I’m happy to update and send a PR.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Jeremy G (Cloud Posse)

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

waiting for the cluster AND the config map to be created first is required https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L79

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

i did not know that we can use locals instead of null data source

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if we can, that’s great
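
A sketch of the locals replacement being discussed, using the names from the example (referencing the module outputs carries the same implicit dependency on the cluster and the aws-auth config map):

locals {
  cluster_name             = module.eks_cluster.eks_cluster_id
  kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
}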

2021-02-21

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone know of a tool that can mass terraform import resources from AWS?

i need to get our route53 hosted zones and records under terraform management and away from people using ClickOps to update them all.

i swear there was something from Google that I have seen before but can’t for the life of me find it

i think its https://github.com/GoogleCloudPlatform/terraformer

RB avatar

Yep terraformer is nice

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Going to need it to get this clickops nonsense under control

loren avatar

import their iam users and roles also, and make them all readonly

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i have in the new platform i only really need this to do sub-domain delegation

Steve Wade (swade1987) avatar
Steve Wade (swade1987)
➜  Desktop terraformer import aws --resources=aws_route53_zone
2021/02/21 16:23:04 aws importing default region
2021/02/21 16:23:04 open /Users/stevewade/.terraform.d/plugins/darwin_amd64: no such file or directory
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

am i missing something obvious here :point_up: i install terraform using tfenv

loren avatar
4. Run terraform init against a versions.tf file to install the plugins required for your platform

?

loren avatar


Or alternatively
Copy your Terraform provider’s plugin(s) to folder ~/.terraform.d/plugins/{darwin,linux}_amd64/, as appropriate.
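
A minimal versions.tf for that first option might look like this (run terraform init next to it so the AWS plugin lands where terraformer can find it):

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}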

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

makes sense i was just trying to do this from an empty directory

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

makes sense i need to set it up correctly first

1
Hao Wang avatar
Hao Wang

just a quick update, the RDS wouldn’t be recreated if a snapshot is used, https://github.com/hashicorp/terraform-provider-aws/issues/17037

RDS instance got recreated after another apply without any change · Issue #17037 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

Hao Wang avatar
Hao Wang

by the way, recently I’ve been thinking about whether there is a package manager for Cloud Posse modules, so the modules can be upgraded in TF files: just bump the version, something like that

Mohamed Habib avatar
Mohamed Habib

Thinking the same! Do you know if any package managers for Terraform exist?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you referring to root modules? or child modules?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

we use Renovate bot to manage upgrades

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you can look into vendir to do vendoring of root modules

Hao Wang avatar
Hao Wang

@Erik Osterman (Cloud Posse) do you have some docs around how Cloud Posse uses Renovate bot? Not sure if I understand root/child modules. The case I run into, for example: if I use the DynamoDB module from Cloud Posse, when a new version gets released, what I do is go to the TF registry, find the new version, and update the version in the code

Hao Wang avatar
Hao Wang

If there is a way to update the versions in an automatic way, that will save some time

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yep, that’s what Renovate bot does. It then opens a PR with the update.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

that’s the configuration that we use.

Hao Wang avatar
Hao Wang

amazing, thanks Erik

2021-02-22

Bart Coddens avatar
Bart Coddens

Hi all, I made some changes to the s3 bucket module to support transition to deep archive storage class

Bart Coddens avatar
Bart Coddens

where can I submit my code ?

Alex Jurkiewicz avatar
Alex Jurkiewicz

as a pull request to the repository. I guess you aren’t familiar with those – GitHub’s help pages have good intros/explanations

Bart Coddens avatar
Bart Coddens

I forked the main repo and pushed my changes to my own branch

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

General guidelines here: https://github.com/cloudposse/terraform-aws-s3-bucket#developing

After opening the PR, you can promote it in #pr-reviews for expedited review

cloudposse/terraform-aws-s3-bucket

Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket

Bart Coddens avatar
Bart Coddens

I cannot access #pr-reviews I guess

Frank avatar

Anyone else experienced this issue when updating the AWS provider from v3.28.0 -> v3.29.0 (with the terraform-aws-rds module)?

Error: ConflictsWith
  on .terraform/modules/rds_postgres_db/main.tf line 44, in resource "aws_db_instance" "default":
  44:   snapshot_identifier         = var.snapshot_identifier
"snapshot_identifier": conflicts with username
Releasing state lock. This may take a few moments...

Not sure what the issue is here. snapshot_identifier is not set (so defaults to "") and database_username is set to a custom value so I don’t see why it would conflict.

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

1
1
Ankit Rathi avatar
Ankit Rathi

I am getting the same issue while trying to create simple AWS-RDS MySQL

$ ./run.sh plan

Error: ConflictsWith

  on .terraform/modules/rds_instance/main.tf line 44, in resource "aws_db_instance" "default":
  44:   snapshot_identifier         = var.snapshot_identifier

"snapshot_identifier": conflicts with username
cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Ankit Rathi avatar
Ankit Rathi

I see a workaround is keeping it as null @Frank
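
A sketch of that workaround, assuming the module passes the value straight through to aws_db_instance:

module "rds_instance" {
  source = "cloudposse/rds/aws"
  # ...
  # pass null rather than "" so the provider treats snapshot_identifier as unset
  snapshot_identifier = null
}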

Frank avatar

@Ankit Rathi Ah wow that’s an “old” issue, weird that it suddenly surfaced with a new AWS Provider even though that version has no RDS changes mentioned in the changelog

1
Ankit Rathi avatar
Ankit Rathi

yeah I agree, I think it’s because we use aws_db_instance as a dependency, which depends on hashicorp/aws

Frank avatar

@Ankit Rathi It appears to have been resolved in AWS Provider v3.29.1

2
Nikola Milic avatar
Nikola Milic

I’ve successfully created a Gitlab CI pipeline v0.1 where I test, build and publish my docker image to an ECR repository. Also in this codebase, Terraform is fully set up (with a remote s3 backend) but it’s not automated (connected with CI); provisioning is version controlled but done manually.

I’m ready to step up and create v0.2 - the same thing as above, but where CI actually does provisioning if there are changes to infra. Can you give me some guidelines on where to start?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Are you referring to using the plan command and then applying that plan?

Nikola Milic avatar
Nikola Milic

Yep

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Here’s an example for how to do a plan and then scan it for vulnerabilities (all in GitLab): https://github.com/indeni/cloudrail-demo/blob/master/.gitlab-ci.yml

You’d add another “stage” after that does apply of the plan if it passed the Cloudrail step.

indeni/cloudrail-demo

This repository contains the instructions for how to use Cloudrail, as well as specific scenarios to test Cloudrail with. - indeni/cloudrail-demo

2
Asis avatar

Hi :wave: Any HashiCorp Vault experts here? I’m unable to unseal Vault using the 3 master keys. I had the backend storage as Consul. Is there a way I can kill the existing Vault and recreate it, attaching Consul as the backend storage?

mikesew avatar
mikesew

what’s the error message you see? Just doing a google search, i came up with https://dev.to/v6/how-to-reset-a-hashicorp-vault-back-to-zero-state-using-consul-ae .

# so assume in your consul config file, you have:
  "data_dir": "/opt/consul/data",

^^^ so delete whatever is in your data dir.

Bart Coddens avatar
Bart Coddens

Hi all, I have a tag on the root volume that I want terraform to ignore

Bart Coddens avatar
Bart Coddens

in my config I have:

Bart Coddens avatar
Bart Coddens

lifecycle { ignore_changes = [tags, ami] }

Bart Coddens avatar
Bart Coddens

the plan says:

Bart Coddens avatar
Bart Coddens
      ~ root_block_device {
            delete_on_termination = false
            device_name           = "/dev/xvda"
            encrypted             = false
            iops                  = 0
          ~ tags                  = {
              - "Name" = "IOWA-TEST-ROOT" -> null
            }
            throughput            = 0
            volume_id             = "vol-04e6d26cb3fd7a43a"
            volume_size           = 8
            volume_type           = "standard"
        }
    }
RB avatar

try

ignore_changes = [root_block_device.tags, tags, ami]
Bart Coddens avatar
Bart Coddens

ha ok, but then you cannot modify the size of the root volume, right?

RB avatar

i dont believe so

Bart Coddens avatar
Bart Coddens

but that’s ok, changing the root volume size is rare

Leon Garcia avatar
Leon Garcia

hi, I’m facing an issue with the latest version of terraform-aws-cloudfront-s3-cdn. I have set values for custom_origins and now it asks for custom_headers; after adding a blank object list, I get other errors related to path, domain, etc… so I pinned my version to 0.48.1 and it works fine. Should I open a ticket in GitHub?

jose.amengual avatar
jose.amengual

yes, open the ticket please and add the output and error messages

jose.amengual avatar
jose.amengual

@Leon Garcia

Leon Garcia avatar
Leon Garcia

thanks, it’s done

jose.amengual avatar
jose.amengual

link?

Leon Garcia avatar
Leon Garcia
Unable to update CDN settings when using custom_origins · Issue #135 · cloudposse/terraform-aws-cloudfront-s3-cdn

Describe the Bug After updating a value in a current custom_origins object, terraform throws error of missing custom_headers that currently is not being used. Expected Behavior Apply changes withou…

jose.amengual avatar
jose.amengual

Thanks

Leon Garcia avatar
Leon Garcia

i see some related changes to custom_headers recently.. but I can’t find why I get the errors for other stuff..

Mike Robinson avatar
Mike Robinson

Hello team. I’m working with the eks-iam-role module. We have other modules that are responsible for, among other things, adding policies to existing IAM roles when resources (ie. SQS) are created. Thus I do not have a policy to pass into this module, so eks-iam-role cannot plan because aws_iam_policy_document is a required value, and I’d prefer our SQS module handle the IAM policy.

However, this line leads me to think that aws_iam_policy_document was intended to be optional. If I pass “{}” into the module, similar to this coalesce(), the plan works.

Should I file a bug to get aws_iam_policy_document made optional? Hopefully all those words I wrote make sense to someone.

cloudposse/terraform-aws-eks-iam-role

Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role

cloudposse/terraform-aws-eks-iam-role

Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role

Fred Torres avatar
Fred Torres

Are folks having issues downloading providers right now?

could not query provider
registry for registry.terraform.io/hashicorp/aws: failed to retrieve
authentication checksums for provider: 404 Not Found
Mike Robinson avatar
Mike Robinson

@Fred Torres Yes, same here and on multiple providers and sources

Error verifying checksum for provider "aws"
The checksum for provider distribution from the Terraform Registry
did not match the source. This may mean that the distributed files
were changed after this version was released to the Registry.
1
RB avatar

tf cloud is having issues, it appears.

Mike Robinson avatar
Mike Robinson
Terraform Cloud Outage

HashiCorp Services’s Status Page - Terraform Cloud Outage.

joshmyers avatar
joshmyers

#terraform I think your S3 bucket policy is hosed, getting 403s trying to download any release from https://www.terraform.io/downloads.html

Robert Horrox avatar
Robert Horrox

This outage is also affecting regular terraform runs

Robert Horrox avatar
Robert Horrox
Error: Failed to install provider

Error while installing hashicorp/aws v3.29.0: unsuccessful request to
<https://releases.hashicorp.com/terraform-provider-aws/3.29.0/terraform-provider-aws_3.29.0_linux_amd64.zip>:
404 Not Found
Robert Horrox avatar
Robert Horrox

that provider is missing on their releases site

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Have you guys been getting “File is not a zip file” too?

melissa Jenner avatar
melissa Jenner

Question on the module cloudposse/elasticache-redis/aws. I used this module to create a Redis cluster. See the output below.

  1. Why is the word “replicas” part of the endpoint? Is the endpoint the Redis primary endpoint or the replica endpoint?
  2. Why is the output of cluster_host empty?

cluster_host           =
cluster_id             = redis-replicas-blue
cluster_port           = 6379
redis_cluster_endpoint = clustercfg.redis-replicas-blue.ujhy8y.usw2.cache.amazonaws.com

module "redis" {
  source                               = "cloudposse/elasticache-redis/aws"
  availability_zones                   = data.terraform_remote_state.vpc.outputs.azs
  vpc_id                               = data.terraform_remote_state.vpc.outputs.vpc_id
  enabled                              = var.enabled
  name                                 = var.name
  tags                                 = var.tags
  allowed_security_groups              = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  subnets                              = data.terraform_remote_state.vpc.outputs.elasticache_subnets
  cluster_size                         = var.redis_cluster_size # number_cache_clusters
  instance_type                        = var.redis_instance_type
  apply_immediately                    = true
  automatic_failover_enabled           = true
  engine_version                       = var.redis_engine_version
  family                               = var.redis_family
  cluster_mode_enabled                 = true
  replication_group_id                 = var.replication_group_id
  replication_group_description        = var.replication_group_description
  at_rest_encryption_enabled           = var.at_rest_encryption_enabled
  transit_encryption_enabled           = var.transit_encryption_enabled
  cloudwatch_metric_alarms_enabled     = var.cloudwatch_metric_alarms_enabled
  cluster_mode_num_node_groups         = var.cluster_mode_num_node_groups
  snapshot_retention_limit             = var.snapshot_retention_limit
  snapshot_window                      = var.snapshot_window
  dns_subdomain                        = var.dns_subdomain
  cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group
}

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for cluster_host to be populated, you need to provide a Zone ID https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/main.tf#L168

cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it will create a record in the DNS zone pointing to the cluster endpoint (endpoint is what AWS generates, cluster_host is pointing to it via DNS)
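
A sketch of wiring that up, assuming you have a Route53 zone to point at (the zone resource name is hypothetical):

module "redis" {
  source = "cloudposse/elasticache-redis/aws"
  # ...
  zone_id = aws_route53_zone.main.zone_id
}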

melissa Jenner avatar
melissa Jenner

Thank you. How about the endpoint? Why does it have the word “replicas”?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

if you use the endpoint in your apps and then update/recreate the cluster, you’ll have to change the URL in all the apps

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

by using the cluster_host, it will always be the same

melissa Jenner avatar
melissa Jenner

I have not registered a domain, therefore I do not have a zone_id. I am a bit confused by the output. The word “replicas” is part of the endpoint. Is this endpoint the primary or the replica endpoint?

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

redis-replicas-blue is something you provided in the variables

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

the module does not have that

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)
cloudposse/terraform-aws-elasticache-redis

Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis

melissa Jenner avatar
melissa Jenner
variable "replication_group_id" {
  type        = string
  description = "The replication group identifier. This parameter is stored as a lowercase string."
  default     = "redis-replicas-blue"
}
melissa Jenner avatar
melissa Jenner

Let me destroy the redis, re-create it with redis-blue, and see how it will be.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

that’s your variable

joshmyers avatar
joshmyers
awslabs/tecli

In a world where everything is Terraform, teams use Terraform Cloud API to manage their workloads. TECLI increases teams productivity by facilitating such interaction and by providing easy commands…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I don’t fully grok the need for the cli, especially as presented. The terraform provider for TFE exists for a reason

awslabs/tecli

In a world where everything is Terraform, teams use Terraform Cloud API to manage their workloads. TECLI increases teams productivity by facilitating such interaction and by providing easy commands…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-tfe-cloud-infrastructure-automation

Terraform Enterprise/Cloud Infrastructure Automation - cloudposse/terraform-tfe-cloud-infrastructure-automation

2
1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Terraform can provision TFC/TFE

joshmyers avatar
joshmyers

Yeah, thought tool looked interesting. Solves a bit of a chicken n egg maybe

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So we run one command to provision a workspace that looks for new workspaces. So literally, terraform cloud terraforms terraform cloud for all new workspaces. Only the initial “command and control” workspace is done with terraform .

joshmyers avatar
joshmyers

terraform cloud terraforms terraform cloud. meta.

joshmyers avatar
joshmyers

Alex Jurkiewicz avatar
Alex Jurkiewicz

dang. I get the long-term economic incentives of AWS supporting their ecosystem with contributions like this. But it’s so rare to see, that it’s still a little

btai avatar

stupid question, but I’m using a cloudposse module for the first time (surprising) and I can’t seem to get it to provision resources, as it currently says No changes. Infrastructure is up-to-date. when I try to run terraform apply.

I have the modules set up as so:

module "monitor_configs" {
  source  = "cloudposse/config/yaml"
  version = "0.7.0"
  enabled = true

  map_config_paths           = ["catalog/monitors/kube.yaml"]
  
  context = module.this.context
}

module "synthetic_configs" {
  source  = "cloudposse/config/yaml"
  version = "0.7.0"
  enabled = true

  map_config_paths           = []
  
  context = module.this.context
}

module "datadog_monitors" {
  source = "git::<https://github.com/cloudposse/terraform-datadog-monitor.git?ref=master>"
  enabled = true

  datadog_monitors     = module.monitor_configs.map_configs
  datadog_synthetics   = module.synthetic_configs.map_configs
  # alert_tags           = var.alert_tags
  # alert_tags_separator = var.alert_tags_separator
  
  context = module.this.context
}

and a context.tf file that is just copypasted this and set var.enabled to true: https://github.com/cloudposse/terraform-datadog-monitor/blob/master/examples/complete/context.tf

am I missing something obvious?

cloudposse/terraform-datadog-monitor

Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor

Alex Jurkiewicz avatar
Alex Jurkiewicz

is module.context.enabled true?

cloudposse/terraform-datadog-monitor

Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor

btai avatar

module.this.context.enabled = true and module.this.enabled = true

Matt Gowie avatar
Matt Gowie

Is it possible that your map_config_paths is misconfigured? Can you terraform console and check that module.monitor_configs.* includes anything?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You have a file called kube.yaml?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect because we have try here, that file loading errors are squashed.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(not deliberate btw)

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

the config we ship is k8s.yaml

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-datadog-monitor

Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So I think it’s just a case of the file not existing in the default path.

btai avatar

i renamed it to kube.yaml

btai avatar

i can name it back to k8s. I also tried with wildcard initially

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

I suspect the path is relative to the module

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

you might try to use ./catalog/...

btai avatar

yeah terraform console doesn’t give me map config

> module.monitor_configs.*
[
  {
    "all_imports_list" = []
    "all_imports_map" = {
      "1" = []
      "10" = []
      "2" = []
      "3" = []
      "4" = []
      "5" = []
      "6" = []
      "7" = []
      "8" = []
      "9" = []
    }
    "list_configs" = []
    "map_configs" = {}
  },
]
btai avatar

got it

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

btai avatar

@Erik Osterman (Cloud Posse) yah must be that

btai avatar

doh

btai avatar

so it’s not relative to module path, but I incorrectly put map_config_paths = ["catalog/monitors/k8s.yaml"] when the actual file was catalog/k8s.yaml. thanks guys

btai avatar

i assumed i was having trouble w/ context enabled

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so I suspect the error handling for this will get fixed :point_up: when we upgrade the module to use our terraform-provider-utils provider (cc: @Andriy Knysh (Cloud Posse))

1
Steve Wade (swade1987) avatar
Steve Wade (swade1987)

does anyone have a recommend module or starting place to implement https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/aws-multi-accounts-tutorial ?

Tutorial: Azure Active Directory integration with Amazon Web Services to connect multiple accounts

Learn how to configure single sign-on between Azure AD and Amazon Web Services (legacy tutorial).

loren avatar

that article misuses the term “AWS SSO”… that’s a whole ‘nother service. the article is just putting an iam identity provider in every account you want Azure AD SSO to connect to

Tutorial: Azure Active Directory integration with Amazon Web Services to connect multiple accounts

Learn how to configure single sign-on between Azure AD and Amazon Web Services (legacy tutorial).

loren avatar

step 1 is to decide how you want users to auth into accounts. do you want a single identity account for principals? SSO or otherwise, this requires users to assume-role to authenticate to their target account

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i want to put azure AD as the identity provider to our new users account

loren avatar

or do you want users to auth directly into the target account and role, using only their SSO identity

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

then allow them from there to assume roles in other accounts

loren avatar

that sounds like the former to me

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

yes it will be

loren avatar

the next step is to decide whether you want to use AWS SSO, or use the IAM identity provider. the latter is what is described in that doc

loren avatar

you can use Azure AD SSO -> AWS SSO, so you still maintain identities in a single place

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

this is where i am looking for peoples recommendations

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i would like to do it “properly”

loren avatar

the doc you linked has a link to this one, for using actual AWS SSO… https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial

Tutorial: Azure Active Directory single sign-on (SSO) integration with Amazon Web Services (AWS)

Learn how to configure single sign-on between Azure Active Directory and Amazon Web Services (AWS).

loren avatar

nope, that’s wrong too, that’s still the iam identity provider. dang azure docs

loren avatar

here’s the aws sso doc, using azure ad as the IdP, https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html

Azure AD - AWS Single Sign-On

Learn how to set up SCIM provisioning between Azure AD and AWS SSO.

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

would this be your recommended approach

loren avatar

i honestly don’t have a recommendation on this topic. the point of mgmt shifts around between the two, and if you don’t try both then it’s just really hard to say what will best suit your environment and users

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

makes sense

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i will trial both over the coming days and see

loren avatar

one thing i do not trust much is the cli integration of the first approach, azure ad sso -> iam identity provider. if you need cli credentials, and if azure ad is your IdP, then i might lean into the aws sso connection as an intermediary. then you can use awscliv2 to get credentials

loren avatar

if you were using okta, that would not be a concern, as okta has a very strong api and developer community maintaining all sorts of cli utilities for authenticating against the okta api

loren avatar

if you only/primarily need console access, or if the new “cloud shell” is a sufficient working environment for cli users, then that’s not a concern either

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

I will need console and CLI as I need to allow assuming roles locally to encrypt secrets using SOPS via KMS

loren avatar

i’m only finding this utility for that integration, when using azure ad -> iam identity provider… https://github.com/sportradar/aws-azure-login

sportradar/aws-azure-login

Use Azure AD SSO to log into the AWS via CLI. Contribute to sportradar/aws-azure-login development by creating an account on GitHub.

loren avatar

many blog posts, but they all come back to that

loren avatar

but i call hot garbage on any sso tool for aws cli auth that doesn’t mention credential_process

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

Yeh I saw that repo earlier today

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

my current place have done it before but I am really not convinced with the TF code as it seems pretty hacky

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

anyone know what happens in the terraform registry when you rename a terraform module git repo? does it keep the stats? does it pick up the redirect automatically? do we need to resubmit it, etc?

pjaudiomv avatar
pjaudiomv

Not sure. Try it and let us know. I would think it would maintain the reference because a git clone does, but yeah, who knows what else they are doing.

loren avatar

Oh my gosh, been begging for this for years, was just merged! I might actually shed tears of joy/relief… https://github.com/hashicorp/terraform-provider-aws/issues/17510

Exclusive management of inline & managed policies for IAM roles · Issue #17510 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

3
loren avatar

Of course now I need the same for users and groups, but roles first is good with me

Exclusive management of inline & managed policies for IAM roles · Issue #17510 · hashicorp/terraform-provider-aws

Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave &quot;+1&quot; or other comme…

Alex Jurkiewicz avatar
Alex Jurkiewicz

hm, so exclusive management is now the only approach possible? Or is there some way to enable/disable exclusive management

Alex Jurkiewicz avatar
Alex Jurkiewicz

oh, I get it now. It’s like security groups

Alex Jurkiewicz avatar
Alex Jurkiewicz

The aws_iam_role can specify managed policies / inline policies as part of itself, for exclusive management.

Or you can specify them as separate resources, if you want non-exclusive management

loren avatar

Yes, same idea as how security groups work… You can manage attachments as separate resources (non-exclusive), or as part of the role resource itself (exclusive)

1
loren avatar

I always pursue architectures and operating models that work with exclusive management, so I can use it to enforce drift remediation
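
A sketch of the two styles (the role name and policy ARN are illustrative; managed_policy_arns is the new argument from the issue above, so it needs a provider version that includes it):

data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

# exclusive: attachments are declared on the role itself, so Terraform
# detaches anything it doesn't know about (drift remediation)
resource "aws_iam_role" "foo" {
  name                = "foo"
  assume_role_policy  = data.aws_iam_policy_document.assume.json
  managed_policy_arns = ["arn:aws:iam::aws:policy/ReadOnlyAccess"]
}

# non-exclusive: the attachment is its own resource and other attachments
# are left alone (don't mix the two styles for the same role)
resource "aws_iam_role_policy_attachment" "bar_readonly" {
  role       = "bar" # a role managed elsewhere
  policy_arn = "arn:aws:iam::aws:policy/ReadOnlyAccess"
}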

kskewes avatar
kskewes

Ideal. We use attachments where we have dependency ordering. Can’t make exclusive bastion security group without creating application security groups for example…

Alex Jurkiewicz avatar
Alex Jurkiewicz

i just found a use case for this just now! Damn… now I’m hanging out for 3.30

loren avatar

The wonderful thing about security groups, is that you can attach more than one to the things. If you sketch it out, you can use exclusive rules for every scenario. A group with rules for this set of things, a group with rules for that set of things, and a group for the relationship between those things…

Alex Jurkiewicz avatar
Alex Jurkiewicz

not totally true. I’ve hit the 5 SG limit many times

loren avatar

I hear you on that! That limit is frustrating, and cause for much design reflection

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

We’re about to release an update to Cloudrail that includes this:

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

And it works for users, roles and groups @loren

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

(remediation steps will be updated )

loren avatar

thanks @Yoni Leitersdorf (Indeni Cloudrail)! i was actually thinking about you the other day. i was figuring the approach could be generalized a bit to most resources that work with “attachment” concepts… security groups and rules, routing tables and routes, etc…

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Ah good point!

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

We hope to get this out in the coming days and would love your feedback once it’s available.

Alex Jurkiewicz avatar
Alex Jurkiewicz

Is https://github.com/indeni/cloudrail-demo still the best intro to cloudrail?

indeni/cloudrail-demo

This repository contains the instructions for how to use Cloudrail, as well as specific scenarios to test Cloudrail with. - indeni/cloudrail-demo

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Wait a few hours and I’ll give you a new URL - we’re updating the website, launching a web UI for the tool, etc.

2
Alex Jurkiewicz avatar
Alex Jurkiewicz

ha ha. I added

managed_policy_arns = [
  "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
]

to a lambda execution role… and Terraform tried to detach this from every other Lambda in the account

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

WHATTTT?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

You reported it?

Alex Jurkiewicz avatar
Alex Jurkiewicz

that’s how it’s meant to work, right?

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

I might be misunderstanding you, but you attempted to set the policy on a specific role and it ended up trying to remove that role from all the lambdas?

Alex Jurkiewicz avatar
Alex Jurkiewicz

The new functionality Loren linked in this thread lets you write:

resource "aws_iam_role" "foo" {
  name                = "foo"
  managed_policy_arns = [ ... ]
}

And Terraform will remove any role attachments of the managed policies that aren’t to foo

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Wait… I thought it was the other way around. I thought the goal was to ensure foo was only attached to the policies you want, and if someone else attached a policy you didn’t want to the role, it would be removed.

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

In other words, if your code has foo with managed_policy_arns = [1, 2, 3] and someone attached policy 4 to foo, Terraform will detach it.

Alex Jurkiewicz avatar
Alex Jurkiewicz

I thought so too! The behaviour I saw seemed a little useless

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Weird…

loren avatar

Other way around, or it’s supposed to be. There is already a resource that does what you describe, I think: managing the roles a policy is attached to

loren avatar

Check what resource you were using

loren avatar
What you’re describing is this resource: aws_iam_policy_attachment (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment)
Alex Jurkiewicz avatar
Alex Jurkiewicz

dang, you’re right

2021-02-23

Prasad Reddy avatar
Prasad Reddy

Hi, can anyone please let me know how to pass a variables.tfvars file on the command line?

1
Rajiv Ranjan avatar
Rajiv Ranjan

terraform plan -var-file=variables.tfvars

Prasad Reddy avatar
Prasad Reddy

I am running this command to pass the tfvars file: terraform apply -var=variables.tfvars

msharma24 avatar
msharma24

use
-var-file=

Prasad Reddy avatar
Prasad Reddy

Thankyou it is working for me

Rajiv Ranjan avatar
Rajiv Ranjan

-var-file, you can use this

Prasad Reddy avatar
Prasad Reddy

ok sure

Prasad Reddy avatar
Prasad Reddy

now it is working: terraform apply -var-file=variables.tfvars. Thank you!

1
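
For reference, the end-to-end -var-file workflow looks like this (file and variable names illustrative); the variable must be declared before a tfvars file can assign it:

# variables.tf: declare the variable first
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

# variables.tfvars: assign a value to the declared variable
instance_type = "t3.small"

terraform plan -var-file=variables.tfvars
terraform apply -var-file=variables.tfvars
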
Prasad Reddy avatar
Prasad Reddy

I am writing a Terraform script to launch an MSK cluster in AWS. Does anyone have reference scripts to share?

Rajiv Ranjan avatar
Rajiv Ranjan
cloudposse/terraform-aws-msk-apache-kafka-cluster

Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.

Abrar avatar

Hey guys, I’m writing an ECR Terraform module for use with my EKS clusters. I believe I need to add this policy to the worker NodeInstanceRole for the cluster to be able to pull images from the ECR repo: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_EKS.html

Using Amazon ECR Images with Amazon EKS - Amazon ECR

You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.

Abrar avatar

In the EKS module vars, I cannot find a way to add this policy to the node role: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/0.32.1/variables.tf

cloudposse/terraform-aws-eks-cluster

Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.

bazbremner avatar
bazbremner

This module creates the cluster and the master nodes - you want to look at something like https://github.com/cloudposse/terraform-aws-eks-workers and supply aws_iam_instance_profile_name

cloudposse/terraform-aws-eks-workers

Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers

1
Abrar avatar

Is there any way to add an additional node role policy via the cloudposse/eks repo, or will I have to do this externally to the module?

Abrar avatar

Oh I can see the cloudposse/ecr tf module already caters for this, will try it out. Nice! https://github.com/cloudposse/terraform-aws-ecr/tree/0.31.1

cloudposse/terraform-aws-ecr

Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr

Ankit Rathi avatar
Ankit Rathi

Hi amazing people, I have one question about https://github.com/cloudposse/terraform-aws-rds: why do we need subnet_ids here? Is it just for making the database available in two or more availability zones? (Does it fulfill any other requirement?)

Mike Robinson avatar
Mike Robinson

I’m not with Cloudposse but according to this code block, it takes a list of subnet IDs to create a unique subnet group. Normally when calling the db_instance resource, you’d need to provide a subnet group ID and not just subnet IDs, that’s just how RDS was designed.

1
Ankit Rathi avatar
Ankit Rathi

ah okay, thanks a lot Mike, so it’s a requirement when creating an RDS instance

Mike Robinson avatar
Mike Robinson

Yup. If you were creating an RDS instance through AWS console, you’d need to provide a subnet group ID, which you’d need to have already created.

1
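
For reference, a minimal sketch of the pair of resources involved (identifiers and subnet IDs illustrative):

resource "aws_db_subnet_group" "this" {
  name       = "example"
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"] # at least two subnets in different AZs
}

resource "aws_db_instance" "this" {
  identifier           = "example"
  engine               = "mysql"
  instance_class       = "db.t3.micro"
  allocated_storage    = 20
  username             = "admin"
  password             = "change-me"
  db_subnet_group_name = aws_db_subnet_group.this.name # RDS wants the group, not raw subnet IDs
  skip_final_snapshot  = true
}
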
Zach avatar

cross-posting from the hangops slack - HashiCorp has reversed course and decided to allow the use of ‘undeclared vars’ in tfvars going forward. https://github.com/hashicorp/terraform/issues/22004

"Warning: Value for undeclared variable" · Issue #22004 · hashicorp/terraform

Current Terraform Version Terraform v0.12.3 Use-cases In our current project, we use a common.tfvars file to store shared configurations across multiple modules and plans. Up until recently, this h…

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

wow!

"Warning: Value for undeclared variable" · Issue #22004 · hashicorp/terraform

Current Terraform Version Terraform v0.12.3 Use-cases In our current project, we use a common.tfvars file to store shared configurations across multiple modules and plans. Up until recently, this h…

joshmyers avatar
joshmyers

Anyone noticed anything like https://github.com/hashicorp/terraform/issues/27214#issuecomment-784229902 ? terraform plan vs terraform show plan in 0.14.X

Silence refresh output from plans and applies · Issue #27214 · hashicorp/terraform

Current Terraform Version 0.14.2 Use-cases Silence all of the module.abc: Refreshing state… [id=abc] output in plans and applies so that the output is more concise and easier to review. This is e…

joshmyers avatar
joshmyers
Remove sensitive outputs by nitrocode · Pull Request #122 · cloudposse/terraform-aws-ecs-container-definition

what Revert sensitive = true outputs why Cannot see the difference in task definitions in terraform plan due to sensitive = true references Revert #118

joshmyers avatar
joshmyers
Terraform will perform the following actions:

  # module.ecs_alb_service_task.aws_ecs_task_definition.default[0] will be updated in-place
  ~ resource "aws_ecs_task_definition" "default" {
      # Warning: this attribute value will be marked as sensitive and will
      # not display in UI output after applying this change
      ~ container_definitions    = (sensitive)
        id                       = "userservices-global-build-info-service"
      + ipc_mode                 = ""
      + pid_mode                 = ""
        tags                     = {
            "Environment"       = "global"
            "Name"              = "userservices-global-build-info-service"
            "Namespace"         = "userservices"
            "bamazon:app"       = "build-info-service"
            "bamazon:env"       = "global"
            "bamazon:namespace" = "bamtech"
            "bamazon:team"      = "userservices"
        }
        # (9 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

------------------------------------------------------------------------
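
For context, the whole value is hidden whenever an output or attribute is marked sensitive; a minimal sketch of the pattern the PR reverts (names illustrative):

locals {
  container_definition = { name = "app", image = "nginx:1.19" }
}

output "container_json" {
  value     = jsonencode(local.container_definition)
  sensitive = true # hides the value, and any diff that flows through it, in plan output
}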

joshmyers avatar
joshmyers

Awesome @jose.amengual

RB avatar

we’re reverting the sensitive output

RB avatar

the pr you pointed to reverts it

RB avatar
fix: mark outputs as sensitive by syphernl · Pull Request #118 · cloudposse/terraform-aws-ecs-container-definition

what Marks the outputs as sensitive Update workflows etc. missed by #119 why Otherwise TF 0.14 would give an Error: Output refers to sensitive values when using these outputs to feed into other …

joshmyers avatar
joshmyers

Aye, seen that and commented

joshmyers avatar
joshmyers

Thanks for being so quick to spot that!

np1
loren avatar

new to me: a colleague just pointed out this project, kind of a python-pytest equivalent of terratest? https://github.com/GoogleCloudPlatform/terraform-python-testing-helper

GoogleCloudPlatform/terraform-python-testing-helper

Simple Python test helper for Terraform. Contribute to GoogleCloudPlatform/terraform-python-testing-helper development by creating an account on GitHub.

1
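
A minimal sketch of what a test looks like with that package (pip install tftest; the fixture directory and output name are illustrative):

import tftest

def test_outputs():
    # points at a Terraform fixture directory, e.g. examples/simple
    tf = tftest.TerraformTest("examples/simple")
    tf.setup()    # runs terraform init
    tf.apply()    # runs terraform apply -auto-approve
    outputs = tf.output()
    assert outputs["bucket_name"].startswith("test-")
    tf.destroy()  # clean up the fixture's resources
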
Scott Cochran avatar
Scott Cochran

I may have found a bug in service-control-policies/aws, unless I’m simply doing something wrong. Adding policies is working great, and adding to that policy by adding additional policies is also working. However, when I try to remove something from a policy, that is not working. For example: I currently have 2 policy files in use. I add a 3rd, and I can see the additions in terraform plan. However, if I remove one of the policy files from list_config_paths, leaving only one policy file, then terraform plan says no changes are to be applied.

Alex Jurkiewicz avatar
Alex Jurkiewicz

It sounds like this is similar to the aws_security_group / aws_security_group_ingress_rule setup.

Are you using two sorts of resources, one to manage the base resource, and another to manage policy attachments to it?

Scott Cochran avatar
Scott Cochran

I created my own module that contains this:

module "this" { source = "cloudposse/label/null" version = "0.22.1"

enabled = var.enabled namespace = var.namespace environment = var.environment stage = var.stage name = var.name delimiter = var.delimiter attributes = var.attributes tags = var.tags additional_tag_map = var.additional_tag_map label_order = var.label_order regex_replace_chars = var.regex_replace_chars id_length_limit = var.id_length_limit

context = var.context }

module "yaml_config" { source = "cloudposse/config/yaml" version = "0.1.0"

list_config_local_base_path = var.list_config_local_base_path != "" ? var.list_config_local_base_path : path.module list_config_paths = var.list_config_paths

context = module.this.context }

data "aws_caller_identity" "this" {}

module "service-control-policies" { source = "cloudposse/service-control-policies/aws" version = "0.4.0"

service_control_policy_statements = module.yaml_config.list_configs service_control_policy_description = var.service_control_policy_description target_id = var.target_id

context = module.this.context }

I’m calling the module like this:

module "create_and_apply_scp" { source = "git::<my bitbucket repo>" enabled = true environment = "sandbox" stage = "ou" name = "nonpci" # No underscores allowed list_config_local_base_path = "" list_config_paths = [ "scp_templates/deny_marketplace.yaml", "scp_templates/default_scps.yaml", "scp_templates/deny_eks.yaml" ] service_control_policy_description = "Non-PCI OU SCPs" target_id = module.create_ou.id }

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Scott Cochran I think @jose.amengual has run into this problem

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Did you figure it out?

Scott Cochran avatar
Scott Cochran

Unfortunately, no.

jose.amengual avatar
jose.amengual

Scott, you could add outputs for what the yaml_config module feeds into the service-control-policies module, to see if the YAML is actually correct
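
Something like this, reusing the names from the config above:

output "scp_statements" {
  value = module.yaml_config.list_configs
}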

jose.amengual avatar
jose.amengual

I had a problem yesterday using the raw.github URL, which did not update the file right away, and my plan was empty

jose.amengual avatar
jose.amengual

I realized after a few hours that the problem was the raw URL

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

(basically, seems to be an eventual consistency problem)

loren avatar

Discussion of a “test” command becoming native to terraform… https://twitter.com/mitchellh/status/1364273416178556928?s=19

This is an experiment (can’t stress this enough!) but I think folks will be really really happy to hear that the core Terraform team is researching and working on official integration test support. https://github.com/hashicorp/terraform/pull/27873 (stressing again: experimental, research phase)

5
1
Matt Gowie avatar
Matt Gowie

Yeah, this needs to happen. apparentlymart has had this repo up for years, and it provides a possible approach that seems like it would add something.

Matt Gowie avatar
Matt Gowie
apparentlymart/terraform-provider-testing

An experimental Terraform provider to assist in writing tests for Terraform modules - apparentlymart/terraform-provider-testing

Matt Gowie avatar
Matt Gowie

Ah and now that I read that PR — the work they merged is actually just an extension of that provider. Awesome.

loren avatar

posted a question about one of my pain points with testing modules… https://discuss.hashicorp.com/t/question-about-the-prototype-test-command-workflows/21375

Question about the prototype "test command" workflows

I posted this question in the PR adding the prototype for the new test command, but was directed here. One thing I’ve run into using terratest, is tests of modules that use count/for_each, and issues where the test config generates the resources passed to those expressions in a way where the index/label cannot be determined until apply. My workaround has been to support a “prereq” config for each test that is applied first, and to read the outputs via the data source terraform_remote_state . H…

2021-02-24

Yoni Leitersdorf (Indeni Cloudrail) avatar
Yoni Leitersdorf (Indeni Cloudrail)

Today’s an exciting day for us as we officially launch Cloudrail - a second generation security analysis tool for Terraform: http://indeni.com/cloudrail/

Basically, we looked at the good work done by the guys at checkov (congrats btw), tfsec and others, and decided to take it one step further. Cloudrail takes the TF plan, merges it in memory with a snapshot of the cloud account, and then runs context-aware rules on it. A few things this allows us to do:

  1. When we look at an S3 bucket’s ACLs, we know if the account has public access block set or not. This allows us to ignore “public-acl” type settings if the account blocks it anyway.
  2. When we look at an IAM role/user/group, we can tell what policies are attached to it, even outside the TF code (in the cloud).
  3. When an RDS database is defined without specific VPC information, we can calculate what the default VPC looks like (if there is one), what its default security group is, and whether that will cause a problem. And a bunch more examples… Basically, Cloudrail was built to be used in the CI pipeline from day one, so it’s meant to be very dev/devops friendly.

As a token of appreciation for this amazing forum, we will be giving access to Cloudrail for free until the end of June to any member of this Slack forum. Just DM me for access after you’ve signed up to Cloudrail. (after June, it will be 30-evaluations/month for free, though that is also expanded to unlimited if you’re part of an open source project)

2
1
1
Matt Gowie avatar
Matt Gowie

Looking forward to checking this tool out when I have more time! Sounds awesome!

2
Matt Gowie avatar
Matt Gowie

Hey, does anyone here create DataDog dashboards using Terraform? I’ve just tasked an engineer on a client team with moving some of our dashboards to Terraform so we can create them for our dozen or so environments… and now I’m finding out that they don’t accept raw JSON and instead require that you write TF blocks for each widget. Seems excessive to me… and I’m wondering if any folks have a good workaround for that.

Matt Gowie avatar
Matt Gowie

And I realize we can go the route of creating the dashboard via their API… but wondering if there is some middle ground / workaround that would make it nicer than a curl request via local-exec.

Mohammed Yahya avatar
Mohammed Yahya

oh I see, you’re trying to pass down JSON instead of a block

Mohammed Yahya avatar
Mohammed Yahya
borgified/terraform-datadog-dashboard

autogenerate dashboards based on metric prefix. Contribute to borgified/terraform-datadog-dashboard development by creating an account on GitHub.

1
Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Matt Gowie sounds like something that would fit nicely into the datadog module and catalog pattern using the looping

Matt Gowie avatar
Matt Gowie

@Mohammed Yahya good stuff — thanks for sharing.

2
Matt Gowie avatar
Matt Gowie

@Erik Osterman (Cloud Posse) Yeah — thinking the same. I will have my client’s team prove out the concept and then we can discuss open sourcing it.

Matt Gowie avatar
Matt Gowie

But it would be awesome to provide catalogs for RDS, ElasticSearch, ALBs, EKS, etc. There is a ton of great reuse we could do there with catalogs to allow folks to pick and choose their own custom dashboards. I really like the idea.

this2
Mohammed Yahya avatar
Mohammed Yahya

it uses external data sources

data "external" "list_metrics" {
  program = ["bash", "${path.module}/list_metrics.sh"]
  query = {
    api_key = var.api_key
    app_key = var.app_key
    prefix  = var.prefix
  }
}

you could have a script for each catalog or something like that, I’m not sure. Can you paste a JSON sample of the dashboard?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Oh, I missed that script. Thought it was native HCL.

jose.amengual avatar
jose.amengual

@Matt Gowie https://gist.github.com/jamengual/4c7dfd0c5ec957d4f33c6a34b28d8b81 I did to create a custom dashboards using the module shared by @Mohammed Yahya

1
jose.amengual avatar
jose.amengual

it is still a script and it needs a bit of work

jose.amengual avatar
jose.amengual

no native HCL yet

jose.amengual avatar
jose.amengual

in the past I used terraformer after I created everything

jose.amengual avatar
jose.amengual

potentially you could do that, templatize the dashboards, and pass a YAML config to fill them in, I guess

Matt Gowie avatar
Matt Gowie

Huh… I didn’t see that external script either. That throws a wrench into things. We could still provide value from a local-exec based module / catalog, but it’s much less attractive.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

yea, local-exec is something we really try to avoid in our public modules

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

just need to add jq now to our utils provider.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Mohammed Yahya avatar
Mohammed Yahya

jq and yq

this1
Matt Gowie avatar
Matt Gowie

I’ll dig into this further when I get some time… it should be possible to pull in the exported JSON from a dashboard and use dynamic to build the blocks that we’d want.
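
A rough sketch of that direction, assuming an exported dashboard JSON file at a hypothetical path; the per-widget-type mapping is where the effort goes:

locals {
  dash = jsondecode(file("${path.module}/dashboards/rds.json")) # hypothetical export
}

resource "datadog_dashboard" "rds" {
  title       = local.dash.title
  layout_type = local.dash.layout_type

  # one dynamic block per widget type you care about; this only handles
  # timeseries widgets and ignores everything else
  dynamic "widget" {
    for_each = [for w in local.dash.widgets : w.definition if w.definition.type == "timeseries"]
    content {
      timeseries_definition {
        title = widget.value.title
        request {
          q = widget.value.requests[0].q
        }
      }
    }
  }
}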

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Yea, I think it’s best just to store the raw dashboards in VCS and parse them with HCL.

Alex Jurkiewicz avatar
Alex Jurkiewicz

For about a year, I was creating CloudWatch dashboards with Terraform. They use pure JSON. I ripped this out about a month ago; in my opinion it was a huge mistake, and you should author dashboard code in its native system.

The productivity loss and barrier to entry from writing JSON instead of using the native editor UI is so high it put people off changing the dashboard.

Now, the dashboard is managed by hand. A daily job backs it up to a GitHub repo, so we have some DR/revision control. For dynamic data in the dashboard (e.g., the production ALB ID) which changes, we read the dashboard data and update a hardcoded list of locations ("update the ALB ARN in the first metric of the graph called 'Live 5xx rate'"), and that's it. In some ways it is uglier. But it's much easier to add new graphs, etc.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

so, authoring dashboards in JSON is not sustainable.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

But for distributing shared dashboards / parameterizing them, I think the pattern of developing them in the UI, exporting to JSON, and distributing with terraform is reasonable

this1
Alex Jurkiewicz avatar
Alex Jurkiewicz

Yeah. I tried that. But whenever you want to edit the dashboard again, the process is still convoluted:

  1. Edit dashboard in UI
  2. Export it to JSON
  3. Paste over your current template in Terraform
  4. Convert all the hardcoded values to template variables again, using git diff

Step 4 always took ages

Matt Gowie avatar
Matt Gowie

That’s a solid point, but if we treat the dashboards as catalogs that are not overly templatized then it would be possible to break them up into small enough chunks that they’d be reusable across organizations without updates. As in we can come up with an RDS dashboard that is complete enough that it is useful regardless of which organization you’re coming from and you won’t need to do updates to that dashboard’s configuration.

jose.amengual avatar
jose.amengual

yes, pretty much every dashboard will be the same most of the time, so changing the formulas in a configurable manner based on a set template should work fine

kgib avatar

Using this module, I’d like to add another group to existing cluster https://github.com/cloudposse/terraform-aws-eks-node-group

cloudposse/terraform-aws-eks-node-group

Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.

kgib avatar

anyone have example? I’m not sure how to get the existing role and add another group to it

kgib avatar
module "eks_node_group_driver" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.3"

  subnet_ids        = module.subnets.private_subnet_ids
  cluster_name      = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
  existing_workers_role_policy_arns = ["module.eks_node_group.node_role_arn"]
  # cluster_name      = aws_eks_cluster.cluster.id
  # node_group_name   = module.label.id
  # node_role_arn     = 
  instance_types    = ["r5.4xlarge"]
  desired_size      = 1
  min_size          = 1
  max_size          = 1
  kubernetes_labels = var.kubernetes_labels
  disk_size         = 100
  resources_to_tag  = ["instance"]

  context = module.this.context
Matt Gowie avatar
Matt Gowie

@kgib are you referring to "module.eks_node_group.node_role_arn"?

Matt Gowie avatar
Matt Gowie

That is just a role that would add supplemental permissions to the node group. It’s not required.

kgib avatar

ok yea, it isn’t working the way I figured. It has no effect on the outcome. I’m just wondering how to get this node group associated with existing role

Matt Gowie avatar
Matt Gowie

I’m a bit confused of your issue without more context. Are you passing a role to your existing module.eks_node_group?

kgib avatar

I’m trying to understand how to pass the role, it’s been unsuccessful thus far

Matt Gowie avatar
Matt Gowie

Can you share your usage of module.eks_node_group?

kgib avatar
module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.3"

  subnet_ids        = module.subnets.private_subnet_ids
  cluster_name      = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
  instance_types    = ["c5d.4xlarge"]
  desired_size      = 8
  min_size          = 8
  max_size          = 8
  kubernetes_labels = var.kubernetes_labels
  disk_size         = 100
  resources_to_tag  = ["instance"]

  context = module.this.context
Matt Gowie avatar
Matt Gowie

You’re passing to eks_node_group_driver the role from eks_node_group which may be your issue. If you’re passing a specific role to eks_node_group then you can likely use that same role.

Matt Gowie avatar
Matt Gowie

Oh it looks like your issue is that you’re passing the same context without any changes.

Matt Gowie avatar
Matt Gowie

Try adding the following argument to eks_node_group_driver:

attributes = ["driver"]
kgib avatar

ok…yea, I’m looking to add a node group, just to clarify

kgib avatar

where eks_node_group is exsting and eks_node_group_driver is addition

Matt Gowie avatar
Matt Gowie

Yeah, got that. What is likely happening is that you’re running into an issue with name collisions, due to passing the same names (bundled together via module.this.context) to both node group module usages. They have to be named differently within AWS to allow you to move forward.

Matt Gowie avatar
Matt Gowie

I gotcha — Try out the above code re attributes. I believe that will be enough to get you moving forward.

kgib avatar

yea, that change seems to help

kgib avatar

so then they each have their own policy ARN?

kgib avatar

is that a desirable outcome?

Matt Gowie avatar
Matt Gowie

Yeah, that’ll be the result regardless. And you can customize each node group’s policies by passing any external policies to them via existing_workers_role_policy_arns.

kgib avatar

thank you

np1
kgib avatar

gives

Error: Error creating IAM Role existing-cluster-workers: EntityAlreadyExists: Role with name existing-cluster-workers already exists.
	status code: 409, request id: c577f222-6e43-43e0-aa23-ae2848ecaa81
Release notes from terraform avatar
Release notes from terraform
08:14:17 PM

v0.15.0-beta1 Version 0.15.0-beta1

Release notes from terraform avatar
Release notes from terraform
08:34:20 PM

v0.15.0-beta1 0.15.0-beta1 (Unreleased) BREAKING CHANGES:

Empty provider configuration blocks should be removed from modules. If a configuration alias is required within the module, it can be defined using the configuration_aliases argument within required_providers. Existing module configurations which were accepted but could produce incorrect or undefined behavior may now return errors when loading the configuration. (https://github.com/hashicorp/terraform/issues/27739) …

Provider configuration_aliases and module validation by jbardin · Pull Request #27739 · hashicorp/terraform

Here we have the initial implementation of configuration_aliases for providers within modules. The idea here is to replace the need for empty configuration blocks within modules as a "proxy"…

larry kirschner avatar
larry kirschner

Apologies if this isn’t the right forum for my question regarding the terraform-aws-efs module (https://github.com/cloudposse/terraform-aws-efs)

…it looks like when I upgrade the module version from 0.27.0 to the current 0.30.0 and then apply, my existing EFS filesystem gets destroyed/replaced and I get a new fs id.

Is this by design and/or unavoidable? Is there any way I can upgrade module version and not have my fs replaced?

cloudposse/terraform-aws-efs

Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

It’s not enough information - we need to see what is prompting the change.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

For example, we updated all of our modules to use secure defaults. By default encrypted is true now.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Perhaps this change is causing it. In that case, the fix is to set encrypted to false; that way you’re being explicitly insecure

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

If you share the exact output from the terraform plan as a snippet, we can take a look

larry kirschner avatar
larry kirschner

ah ok, so if I go back and look at the changes in apply, there will be something (in EFS or related)

this1
larry kirschner avatar
larry kirschner

yes, let me try that now

larry kirschner avatar
larry kirschner

thanks for responding! I’m not really a devops guy by trade

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

ah, no worries, everyone is welcome here

larry kirschner avatar
larry kirschner

yes I see…like you said:

~ encrypted                       = false -> true # forces replacement
larry kirschner avatar
larry kirschner

so it sounds like I would have to force this deployment to encrypted = false in order to upgrade and keep the existing FS?

larry kirschner avatar
larry kirschner

thx for help w this! I think I get it now

Jeff Dyke avatar
Jeff Dyke

Greetings. I had posted a question on /r/terraform, based on a response from u/CrimeInBlink47, who mentioned that I should check in here, and that cloudposse had published a module/provider that would allow for Terragrunt-type YAML (2 levels max) merging, which is the final reason (I think) I’m still using TG and not plain TF. BTW, love the videos I’ve seen so far, thanks for the content. And again, hello!

2
Matt Gowie avatar
Matt Gowie

Hey @Jeff Dyke — I was the one responding to you on Reddit

Here is the provider I was talking about: https://github.com/cloudposse/terraform-provider-utils. Here is an example of usage for deep merging YAML: https://github.com/cloudposse/terraform-provider-utils#examples

Not sure if that is what you were asking for or if that’s what you need, but figured it might help.

cloudposse/terraform-provider-utils

The Cloud Posse Terraform Provider for various utilities (E.g. deep merging) - cloudposse/terraform-provider-utils

Jeff Dyke avatar
Jeff Dyke

Thanks for confirming, I was looking at these today and they would seem to solve my problem. Another thing I do quite often is make use of TG’s path_relative_to_include; is there something for that? I’ve only done limited Go programming, but if not, that could be a great addition.

Jeff Dyke avatar
Jeff Dyke

Just to give you more of an idea (I’m not asking you for a solution): it allows me to have a single remote_state per VPC, by using key = "prod/${path_relative_to_include()}/terraform.tfstate". I also marry the YAML to the directory structure, allowing me to load the correct server configs by using

inputs = {
  servers = local.servers[path_relative_to_include()]["servers"]
}

The merits of doing that could be argued. Thanks for the pointers.

Matt Gowie avatar
Matt Gowie

Huh I don’t know if I know enough about TG to know what you’re referring to… but I don’t believe that has been needed so far. SweetOps has the idea of “Stacks” which utilize Yaml imports and that might solve your problem, but unfortunately it’s not well documented yet (though you can check out https://github.com/Cloudposse/atmos, https://github.com/cloudposse/terraform-yaml-config, and https://github.com/cloudposse/reference-architectures/tree/master/stacks for some idea of what I’m talking about).

cloudposse/atmos

Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos

cloudposse/terraform-yaml-config

Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config

cloudposse/reference-architectures

[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures

Matt Gowie avatar
Matt Gowie

Your question might be a good one to chat about during #office-hours too — Like “Hey I have X and Y problems in Terraform that TG solves… how are ya’ll doing that without TG?”

Jeff Dyke avatar
Jeff Dyke

ok, cool, thanks for the advice. It’s nice to be politely introduced to the norms. Appreciate it.

np1
Matt Gowie avatar
Matt Gowie

Also, I’m sure a proposal and a PR would definitely get some attention on that utils provider — it’s new and it has been discussed that it will be built out as more use-cases come up.

1

2021-02-25

mikesew avatar
mikesew

Q: Has anybody encountered problems running tfenv on WSL? I’ve tried Ubuntu and CentOS 7, both getting similar errors. It works in Windows 10 git-bash, but not in WSL.

msew@NOTEBOOK:~ $ which tfenv
/c/users/msew/.local/bin/tfenv
msew@NOTEBOOK:~ $ tfenv
/usr/bin/env: 'bash\r': No such file or directory

^^^ this seems like a Windows/Unix CRLF error.

mikesew avatar
mikesew

siiiigh… I had to recursively sed search-and-replace any \r’s out of all the files. It has to do with the way my Windows git core.autocrlf is set (true).

cd ~/.tfenv
sed -i.bak 's/\r$//' ./bin/*
sed -i.bak 's/\r$//' ./lib/*
sed -i.bak 's/\r$//' ./libexec/*
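
An alternative that avoids reintroducing the \r on the next checkout is to tell git not to write CRLF endings in the WSL clone:

git config --global core.autocrlf input
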
Ankit Rathi avatar
Ankit Rathi

Hi amazing folks, I am trying to create a very simple RDS MySQL instance - https://github.com/cloudposse/terraform-aws-rds

module "rds_instance" {
  source = "cloudposse/rds/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "v0.33.0"
  namespace                   = "backend"
  stage                       = "dev"
  name                        = "somename"
  dns_zone_id                 = var.somezoneid
  host_name                   = "somehostname"
  security_group_ids          = [module.security-group-mysql-www.this_security_group_id]
//  ca_cert_identifier          = "rds-ca-2021"
  allowed_cidr_blocks         = var.dev-vpc-all-bsae-cidr-blocks
  database_name               = "mysqlwww1"
  database_user               = "goodone"
  database_password           = "nicetry"
  database_port               = 3306
  multi_az                    = false
  storage_type                = "gp2"
  allocated_storage           = 100
  storage_encrypted           = false
  engine                      = "mysql"
  engine_version              = "8.0.20"
  major_engine_version        = "8.0"
  instance_class              = "db.t3.medium"
  db_parameter_group          = "mysql8.0"
  //  option_group_name           = "mysql-options"
  publicly_accessible         = false
  subnet_ids                  = [var.dev-vpc-public-subnets[0], var.dev-vpc-public-subnets[1]]
  vpc_id                      = var.dev-vpc-id
  snapshot_identifier         = null
  auto_minor_version_upgrade  = true
  allow_major_version_upgrade = false
  apply_immediately           = false
  maintenance_window          = "Mon:03:00-Mon:04:00"
  skip_final_snapshot         = false
  copy_tags_to_snapshot       = false
  backup_retention_period     = 7
  backup_window               = "22:00-03:00"

  db_parameter = [
    { name  = "myisam_sort_buffer_size"   value = "1048576" },
    { name  = "sort_buffer_size"          value = "2097152" }
  ]
}

Strangely, it’s giving an error for the DB parameters

Error: Missing attribute separator

  on 100-rds.tf line 65, in module "rds_instance":
  65:     { name  = "myisam_sort_buffer_size"   value = "1048576" },

Expected a newline or comma to mark the beginning of the next attribute.

Do you think the syntax is wrong somewhere? Anything?

cloudposse/terraform-aws-rds

Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.

Andy avatar

Think the key value pairs within db_parameter need to be on separate lines. See the full example: https://github.com/cloudposse/terraform-aws-rds/blob/master/examples/complete/main.tf#L48
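
i.e., each attribute needs its own line (or an explicit comma between them):

db_parameter = [
  {
    name  = "myisam_sort_buffer_size"
    value = "1048576"
  },
  {
    name  = "sort_buffer_size"
    value = "2097152"
  }
]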

2
Ankit Rathi avatar
Ankit Rathi

yes, it works now… the README should also be updated accordingly, I guess

Ankit Rathi avatar
Ankit Rathi

Thanks a lot @Andy

kumar k avatar
kumar k

Hello… I have upgraded one of my Terraform modules from 0.12.29 to 0.14.5. Now I want to check if I can restore the old state using an older statefile from S3 with the 0.12.29 version. Is this doable?

bazbremner avatar
bazbremner

If you’ve got versioning enabled on the S3 bucket, shouldn’t be hard. If not, good luck.

kumar k avatar
kumar k

Versioning is enabled on S3. Are there steps for the restore?

kumar k avatar
kumar k

All set. Thanks

2
mikesew avatar
mikesew

What ended up being the fix? Delete the current version of the S3 object? Copy from the prior version? Did you have to do anything with your DynamoDB lock for the rollback process?

kumar k avatar
kumar k

I copied from the prior version. Nothing related to DynamoDB.

Tomek avatar

:wave: I have two separate terraform projects (with their own terraform state file). Project A creates a lambda that Project B wants to reference. I was going to use the aws_lambda_function data source. E.g.

data "aws_lambda_function" "existing" {
  function_name = var.function_name
}

How are you meant to handle the situation where Project A may have not yet created that lambda and so it won’t exist?

loren avatar

orchestration. ensure B only runs after A

1
Harold Reinstein avatar
Harold Reinstein

What are you using to manage tf ? terraform cloud, scalr, env0 pulumi or ?

Alex Jurkiewicz avatar
Alex Jurkiewicz

You mean where do we execute it? We use CircleCI and a bit of Jenkins

Harold Reinstein avatar
Harold Reinstein

yes, as well as how to manage a large env with maybe 1500 workspaces

Matt Gowie avatar
Matt Gowie

If you have that many workspaces, I would definitely not use Terraform Cloud. Their business tier pricing is complete robbery.

Check out spacelift.io — Great product company who is going to own the space very soon.

Harold Reinstein avatar
Harold Reinstein

@Matt Gowie Thank you will look into spacelift.io. was looking at scalr, env0, pulumi, morpheus, fylamynt, and cloudify as options as well.

Matt Gowie avatar
Matt Gowie

@Harold Reinstein I don’t know anything about Pulumi, Morpheus, Fylamynt, or Cloudify, but Scalr, Env0, TFC, and Spacelift did an #office-hours sessions a few weeks back: https://www.youtube.com/watch?v=4MLBpBqZmpM. If you’re evaluating then that is definitely worth looking at.

Those are the current mainstream TF automation tools out there today. My opinion on all of them —

  1. TFC is a bad offering right now — they’re not really providing much functionality and their pricing is BS.
  2. I’m still confused on what problem Scalr is solving.
  3. Env0 doesn’t have a Terraform Provider to automate their solution so they’re out.
  4. Spacelift looks great and checks all the boxes from what I can tell, so I’m looking towards steering my clients in that direction going forward.
ohad avatar

Hi @Harold Reinstein, I am co-founder and CEO of env0. Feel free to reach out to me personally if you need any help or any questions.

ohad avatar

Hi @Matt Gowie - we indeed do not yet have a Terraform provider, but we do have an API + CLI in order to trigger env0 without our GUI. Also, our custom flows https://docs.env0.com/docs/custom-flows allow lots of flexibility to automate actions before/after terraform init/plan/apply/destroy. That being said, a TF provider is definitely on our short-term product roadmap.

Custom Flows

You can create custom flows for a template, that allow you to run whatever you want (bash, python, gcloud, ansible, cloudformation, etc.), whenever you want in the deployment process (before or after Terraform init/plan/apply, and even destroy/error).Create a file named env0.yml to define a custom …

2021-02-26

Bart Coddens avatar
Bart Coddens

hi all, I am a bit confused by this module:

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

Bart Coddens avatar
Bart Coddens

can it be used to create the keypair to provision/boot a machine?

pjaudiomv avatar
pjaudiomv

Sure can

Bart Coddens avatar
Bart Coddens

curently I use something like this:

Bart Coddens avatar
Bart Coddens

aws ec2 --profile "$customer" create-key-pair --key-name cloudposseisawesome-"$customer" --query 'KeyMaterial' --output text

Bart Coddens avatar
Bart Coddens

and I store that on my local machine

Bart Coddens avatar
Bart Coddens

the public key is auto stored in AWS though

pjaudiomv avatar
pjaudiomv

If you set generate_ssh_key to true on the module, it will use the tls resource to generate a key for you. Otherwise it will use a local one you specify to create the AWS key.

Bart Coddens avatar
Bart Coddens

yeah true the private and public key gets generated on the workstation

Bart Coddens avatar
Bart Coddens

how can I push this public key to the EC2 machine ?

pjaudiomv avatar
pjaudiomv

Only on the initial provisioning: you specify the module’s output key name on your instance
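
A sketch of that wiring (the version pin, AMI, and names are illustrative):

module "ssh_key_pair" {
  source  = "cloudposse/key-pair/aws"
  version = "0.16.1" # illustrative pin

  namespace           = "eg"
  stage               = "dev"
  name                = "app"
  ssh_public_key_path = "secrets"
  generate_ssh_key    = true
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # placeholder
  instance_type = "t3.micro"
  key_name      = module.ssh_key_pair.key_name # the EC2 key pair created by the module
}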

Bart Coddens avatar
Bart Coddens

true

Bart Coddens avatar
Bart Coddens

so if I use this, what does it do? Does the key get stored on the machine?

Bart Coddens avatar
Bart Coddens

what we normally do here is generate a keypair with a command like the one above, use that private key in our Ansible framework, and our other admin keys get pushed via Ansible

pjaudiomv avatar
pjaudiomv

There’s an output for the private key on the module if you choose to have it generate one for you

Bart Coddens avatar
Bart Coddens

hmmm, I am still a bit confused. We need the private key, because that’s the one I will be using to connect to the machine that gets provisioned

Bart Coddens avatar
Bart Coddens

but how does the public key get on the machine ?

Bart Coddens avatar
Bart Coddens

if you specify it in the terraform config, it gets uploaded?

pjaudiomv avatar
pjaudiomv

In the case of Ansible, most of my Ansible is deployed through a bastion, and I like using this module, which stores the keypair in SSM Parameter Store. This way it’s not local and you can programmatically retrieve it: https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair

cloudposse/terraform-aws-ssm-tls-ssh-key-pair

Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair

pjaudiomv avatar
pjaudiomv

The pub key is stored in AWS, yes, as an EC2 key

Bart Coddens avatar
Bart Coddens

ah ok, so you can discard it when the machine is set up?

pjaudiomv avatar
pjaudiomv
cloudposse/terraform-aws-key-pair

Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair

Bart Coddens avatar
Bart Coddens

excellent, can Ansible query SSM to get the private key?

pjaudiomv avatar
pjaudiomv

There’s probably a way, but I have a bash script that calls Ansible; before Ansible runs, it retrieves the key from SSM
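
e.g., something like this before invoking Ansible (the parameter name is illustrative):

aws ssm get-parameter \
  --name "/ssh/example/private-key" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text > key.pem
chmod 600 key.pem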

Bart Coddens avatar
Bart Coddens

that’s even more secure, thanks for the info Patrick !

Bart Coddens avatar
Bart Coddens

I will lookup if I can fetch this via ansible, if so, I will let you know

pjaudiomv avatar
pjaudiomv

Cool thanks

Bart Coddens avatar
Bart Coddens

Thanks for your patience

1
Bart Coddens avatar
Bart Coddens

Hey all, I was checking out this module:

Bart Coddens avatar
Bart Coddens
cloudposse/terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

Bart Coddens avatar
Bart Coddens

it does not support creation of the groups, right?

Rene avatar
cloudposse/terraform-aws-iam-user

Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user

RB avatar

if you like terraform-docs and want to see anchors supported, upvotes plz

https://github.com/terraform-docs/terraform-docs/issues/408

Anchor support for resources, modules, inputs, outputs, etc · Issue #408 · terraform-docs/terraform-docs

What problem are you facing? I cannot link to a specific variable in the markdown using a link How could terraform-docs help solve your problem? I&#39;d like each variable to have an anchor associa…

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

talking of terraform-docs how do people handle using it with pre-commit hooks?

roth.andy avatar
roth.andy
antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform

loren avatar

it’s painful when terraform-docs makes a change to its default sections or formats in a new version, and the team is suddenly using different versions

1
roth.andy avatar
roth.andy

All of my pipelines run a validate stage that runs pre-commit install && pre-commit run -a. So the team can run whatever they want, but if it is different from what CI is running, then they are responsible for fixing it so that the CI pipeline runs clean (which usually just requires syncing the dependency version up with what CI is running)

roth.andy avatar
roth.andy

We offer the ability to run the same docker image locally as the CI engine runs in the pipeline

bazbremner avatar
bazbremner

.pre-commit-config.yaml can be used to pin the versions of the pre-commit plugins, which should avoid the “surprise update” problem

1
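
e.g., in .pre-commit-config.yaml (the rev shown is illustrative; pin whatever the team standardizes on):

repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.45.0 # illustrative pin; everyone gets the same hook version
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
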
loren avatar

we also use a docker image with baked in tool versions. pinning the pre-commit versions also makes sense, though updating those pins could be annoying across a lot of repos

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

i am wondering now if having a docker image that people mount their local directories into would be ideal, that way the versions are the same as the versions used in CI

loren avatar

that’s how we do it, yep

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

it’s slightly annoying though, as it needs to handle git config, AWS config, etc.

roth.andy avatar
roth.andy

Here’s ours: https://github.com/saic-oss/anvil

We have a new docker-compose based pattern for local devs to use which is better than just running docker run, just haven’t gotten it open sourced yet

saic-oss/anvil

DevSecOps tools container, for use in local development and as a builder/runner in CI/CD pipelines. Not to be used to run production workloads. - saic-oss/anvil

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

we have a toolkit image we use with CI

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

the issue is how we could use this locally to keep the versions consistent

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

as you’d really need to mount ~ to ~/local or something like that

zeid.derhally avatar
zeid.derhally

How do you guys handle the pre-commit hooks when there are devs using different OSes? Windows, MacOS, Linux?

mfridh avatar

I’d recommend the “pinning” happens from an automatically updated “build harness”. Only locally pin in individual repositories explicitly for next-gen testing or pinning due to “legacy” reasons.

@zeid.derhally that part… we recommend everyone uses our maintained docker “tools” container as their shell. It’s too painful to maintain all.

Once WSL works better under our corporate bastardized Windows machines maybe we can think about doing it that way but so far WSL doesn’t perform well enough

Steve Wade (swade1987) avatar
Steve Wade (swade1987)

we have them enabled, but when someone does a brew update, all hell breaks loose

Alex Jurkiewicz avatar
Alex Jurkiewicz

I wrote up a proposal for the KMS module about supporting more flexible ways of customising key policy. I’m interested in feedback from maintainers and users who use non default policies: https://github.com/cloudposse/terraform-aws-kms-key/issues/25

Provide canned policies · Issue #25 · cloudposse/terraform-aws-kms-key

This module currently creates KMS keys with a policy stating "any IAM user/role can do anything with this key". If you want a more restrictive policy, you have to write it yourself. I thi…

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Andriy Knysh (Cloud Posse) seems reasonable to me

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@jose.amengual

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

@Alex Jurkiewicz sounds good to me as well, thanks

2021-02-27

2021-02-28

Mohammed Yahya avatar
Mohammed Yahya

anyone using https://github.com/localstack/localstack/releases/tag/v0.12.7 with Terraform for offline testing?

Release LocalStack release 0.12.7 · localstack/localstack

Change Log: LocalStack release 0.12.7 1. New Features initial support for Kinesis consumers and SubscribeToShard via HTTP2 push events add LS_LOG option to customize the default log level add Clou…
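
For reference, the usual pattern is to point the AWS provider at LocalStack’s edge endpoint (port 4566 by default); a minimal sketch:

provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test" # LocalStack accepts dummy credentials
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  s3_force_path_style         = true

  endpoints {
    s3  = "http://localhost:4566"
    sts = "http://localhost:4566"
    # add an entry per service the configuration uses
  }
}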
