#terraform (2021-02)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-02-01
I’m updating the Terraform version used by our team from 0.13.5 to 0.14.5. So as I understand it, with Terraform 0.14 .terraform.lock.hcl
should be committed to git?
Terraform CLI to create a dependency lock file and commit it to version control along with your configuration
i think the lockfile workflows are still a “work in progress”… for example, in reusable modules, where perhaps you run tests, it probably doesn’t make sense to commit the lockfile alongside the tests
and if you are deploying the same config, same version across many many accounts each with their own tfstate, it doesn’t really make sense to commit the lock file for each one of them. a little orchestration is needed to create/update the lock file centrally, and put it in place before init/plan/apply
i posted my thoughts on a similar upgrade a couple weeks back… https://sweetops.slack.com/archives/CB6GHNLG0/p1610817829068500
fwiw, updated a decent number of tf states from 0.13.5 to 0.14.4 over the last week… no significant issues, but a few things took a little while to understand:
• sensitive values may be marked in the provider, i.e. an iam access/secret key. you cannot for_each
over objects containing these values, but you can for_each
over non-sensitive keys and index into the object. any outputs containing provider-marked sensitive values must also be marked sensitive
• some of the output handling is a little odd, particularly with conditional resources/modules and accordingly conditional outputs. in some places, outputting null
as the false condition caused a persistent diff. worked fine in tf 0.13.5, but not in tf 0.14.4. changing it to ""
fixed it :man-shrugging::skin-tone-2:
• the workflow around the new lock file, .terraform.lock.hcl
, is quite cumbersome. it really clutters up the repo when you have a lot of root modules, and means you have to init
each root somehow to generate the file, and commit it, anytime you want to update providers? no thanks! but, unfortunately, there is no way to disable it. the file is mandatory for a plan/apply. i’m using terraform-bundle already, setting up the plugin-cache in advance, restricting versions, and restricting network connectivity in CI. so i thought i could just remove the file after init
, but no dice. you can remove it after apply
, and don’t have to commit it (but that means CI will need to generate it)
• if you are updating from 0.12, you’ll likely want to (or need to) first update to tf 0.13 for the new provider/registry syntax, to get the old syntax out of your tf 0.12 tfstate
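To make the first bullet concrete, here is a minimal sketch of the for_each workaround (the resource type, local.users map, and secret attribute are illustrative, not from the thread):

```hcl
# Hypothetical example: local.users contains provider-marked sensitive
# values. Iterating the (non-sensitive) keys avoids the
# "for_each over a sensitive value" error in TF 0.14.
resource "aws_ssm_parameter" "secret" {
  for_each = toset(keys(local.users))

  name  = "/users/${each.key}/secret"
  type  = "SecureString"
  value = local.users[each.key].secret # index into the object instead
}

# any output containing a provider-marked sensitive value must itself
# be marked sensitive
output "secrets" {
  value     = { for k in keys(local.users) : k => local.users[k].secret }
  sensitive = true
}
```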
Yes your thoughts make a lot of sense. Our infrastructure is composed of 100s of small terraform components, each with its own state file. Until now we always used the default version of terraform providers and didn’t worry about versioning. The new approach makes sense, but I probably want to keep the same version of providers for all my infra.
I’m not sure with this new approach if I’m supposed to read the changelogs of all providers I use and only update manually? This seems like quite a lot of work because I’m the single devops engineer on our team…
if you weren’t bit by provider versions before, i wouldn’t feel bad about just adding the lockfile in .gitignore
another pain point is that init
only adds hashes for the platform you are on now. if your platform and your CI/teammate’s platforms are different (osx vs linux vs windows), the hash changes! so you actually need to run providers lock
for each platform
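For reference, the per-platform hashes can be pre-populated in one step with the providers lock subcommand (TF 0.14+); the platform list below is just an example set:

```shell
# record provider hashes for every platform the team/CI runs on,
# so the lock file validates everywhere, not just on your machine
terraform providers lock \
  -platform=linux_amd64 \
  -platform=darwin_amd64 \
  -platform=windows_amd64
```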
fortunately, everyone uses geodesic
to run terraform, so this never happens.
Hi, does anyone here know of a way to handle data
resources with nil values? I’m using google_kms_key_ring
and it seems to return nil
when a non-existing keyring name is provided
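One hedged workaround (an untested sketch; the variable names are assumed, and it only helps if the error surfaces when the attribute is evaluated) is to guard downstream references with try():

```hcl
# sketch: try() returns the fallback if evaluating the attribute
# fails, so a missing key ring need not blow up downstream references
data "google_kms_key_ring" "this" {
  name     = var.key_ring_name # assumed variable names
  location = var.location
}

locals {
  key_ring_id = try(data.google_kms_key_ring.this.id, null)
  ring_exists = local.key_ring_id != null
}
```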
hey
https://github.com/cloudposse/terraform-aws-efs/commit/53847b81f887f13a7cfec6132bf362bde6dd3788#diff-05b5a57c136b6ff5965[…]d184d9daa9a65a288eR41-R43 shouldn’t this change be a major release? changing encrypted from false to true will enforce recreation of the EFS resource
- workflows updated * readme updated, file system encryption enabled by default
Please see our explanation here:
For 0.x
releases, it’s not conventional to bump major.
thanks Erik.
Version 2.0 of the Kubernetes and Helm providers includes a more declarative authentication flow, alignment of resource behaviors and attributes with upstream APIs, normalized wait conditions across several resources, and removes support for Helm v2.
is anyone using or planning to use https://github.com/terraform-aws-modules/terraform-aws-pricing ?
Terraform module which calculates price of AWS infrastructure (from Terraform state and plan) - terraform-aws-modules/terraform-aws-pricing
I’m putting my faith here https://www.infracost.io/
Infracost shows cloud cost estimates for Terraform projects. It integrates into pull requests and allows developers and DevOps to see cost breakdowns and compare options upfront.
Interesting, how’d you find it?
still in early development but awesome community support and it looks promising; I’m using it already, not with many resources yet, but more than the ^^ one
We are all in AWS and they look like they have a fair amount of resources covered
Atlantis: Terraform Pull Request Automation
trying to set up my terraform project with v0.14.4… however i’m getting an error Error: Unsupported argument on main.tf line 21, in provider "kubernetes": 21: load_config_file = false
here is my code:
provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}
Announcement: terraform-null-label
v0.23.0 is now out. It allows setting the letter case of tag names and labels. Previously we forced tag names to Title case and labels to lower case. Now we allow some configuration. The primary impetus for this is that GCE does not allow uppercase letters in tag names, but we have taken it a step further based on other requests we have had over the years.
Note that with this version of null label, we have dropped support for Terraform 0.12. All future versions of our Terraform modules (once they are updated to terraform-null-label
v0.23.0) are for TF 0.13.0 or above.
With this release, you gain control over the letter case of generated tag names and supplied labels, which means you also have control over the letter case of the ultimate id. Labels are the elemen…
Is it possible to list the files uploaded in a remote run?
I feel the upload part is longer than it should and since I work in a monorepo it might be due to some random stuff being uploaded
I added this terraformignore
*
!infrastructure/
**/.terraform/
.git/
thanks in advance if someone has a tip
is there a more efficient way to do this? content_type = lookup(var.mime_types, split(".", each.value)[length(split(".", each.value)) - 1])
Can’t think of a better way off the top of my head. I don’t think terraform supports negative indexes like in some languages (e.g. split(".", each.value)[-1]
)
That said, I think it would be a lot more readable if you broke it down into locals.
Hrmm… but with the each
there, guess that won’t work either
I suppose you could call reverse
and then pick index 0
since you aren’t assigning a default with lookup
, you can at least just use the index…
var.mime_types[split(".", each.value)[length(split(".", each.value)) - 1]]
(not setting a default with lookup is deprecated, anyway)
You can also use:
var.mime_types[replace(each.value, "/^.*\\./", "")]
beat me to the replace
option… i’m sure there’s some regex that’ll work
relying on the greediness of .*
like that ought to work
I think I like the reverse option, it’s readable to me content_type = lookup(var.mime_types, reverse(split(".", each.value))[0])
either way, maybe leave a comment for your future self or teammates explaining what is happening
haha, good point
alternatively… (to beat a dead horse)
locals {
  mime_extension = "/^.*\\./"
}
...
var.mime_types[replace(each.value, local.mime_extension, "")]
that feels pretty readable
i can’t predict how many .
will be in the filename
and I want to grab the last characters after the last .
and use that in the lookup to determine the mime type
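Putting the thread’s suggestions together, a sketch of the regex variant factored into locals (var.files and var.mime_types are assumed names):

```hcl
locals {
  # capture everything after the last dot; regex() with a capture
  # group returns a list of captures, so [0] is the extension itself
  extension = { for f in var.files : f => regex("\\.([^.]+)$", f)[0] }
}

# usage, e.g. inside the resource:
#   content_type = var.mime_types[local.extension[each.value]]
```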
So much functionality is missing from the core functions. I’ve been waiting for endswith()
for years now
that’s how golang devs think… you get only the most basic tools in stdlib, and from there everyone reimplements common utils functions over and over in slightly different ways
What would endswith
do?
endswith("abcdef", "f") == true
this is a bit of a legit question… example from python:
>>> 'abcdef'.endswith('fed')
False
>>> 'abcdef'.rstrip('fde')
'abc'
Well, you can do that easily enough with Terraform. Yes they do not have that exact function, but my feeling is that if you can do it without too much trouble then it is good enough.
> length(regexall("[fed]$", "abcdef")) > 0
true
> substr(trim("Xabcdef","fde"),1,-1)
abc
I will grant that the substr
is kind of a hack, but it works, and I would rather see Hashicorp add something that cannot be hacked together, such as tr
$ echo 0123456789 | tr '[0-9]' '[g-p]'
ghijklmnop
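For what it’s worth, the regexall trick generalizes into a reusable endswith-style check (a sketch; note that a literal suffix would need its regex metacharacters escaped, as with the dot below):

```hcl
locals {
  filename = "archive.tar.gz"

  # endswith() stand-in: regexall returns a list of all matches,
  # so a non-empty result means the suffix matched at end-of-string
  is_gzip = length(regexall("\\.gz$", local.filename)) > 0
}
```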
Lol, I repeat, https://sweetops.slack.com/archives/CB6GHNLG0/p1612219807176600?thread_ts=1612219729.176500&cid=CB6GHNLG0
that’s how golang devs think… you get only the most basic tools in stdlib, and from there everyone reimplements common utils functions over and over in slightly different ways
any ideas on how to accept a “template string” as a variable, and template it, without using the deprecated template_file
(which actually accepted strings not files)? the function templatefile()
actually requires a file… for example, i used to template arns like this, so the user wouldn’t have to hard-code values if they didn’t want to:
data "template_file" "policy_arns" {
count = length(var.policy_arns)
template = var.policy_arns[count.index]
vars = {
partition = data.aws_partition.current.partition
region = data.aws_region.current.name
account_id = data.aws_caller_identity.current.account_id
}
}
That would be a good feature request. template_string
You could use a tmp file that you use, dump the template string to, use template file, and then delete the file
Very convoluted but I imagine you could create the file using the local_file resource (https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file) and then use templatefile() to template it
ideally the solution would not involve a resource as that involves state and the create/destroy lifecycle
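One state-free workaround, assuming the set of placeholders is known up front, is chained replace() calls instead of template_file (a sketch, not a drop-in for arbitrary templates):

```hcl
# $${...} is the literal escape for ${...} inside an HCL string,
# so each replace() swaps one known placeholder for its value
locals {
  policy_arns = [
    for arn in var.policy_arns :
    replace(
      replace(
        replace(arn, "$${partition}", data.aws_partition.current.partition),
        "$${region}", data.aws_region.current.name
      ),
      "$${account_id}", data.aws_caller_identity.current.account_id
    )
  ]
}
```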
Error: Incorrect attribute value type
on .terraform/modules/eks/workers_launch_template.tf line 40, in resource "aws_autoscaling_group" "workers_launch_template":
40: vpc_zone_identifier = lookup(
Inappropriate value for attribute "vpc_zone_identifier": set of string
required
can someone help me how to handle this
The type of value you pass to this argument must be a set of strings. What value do you think the lookup
is returning?
i didn’t pass any value for *vpc_zone_identifier*
i am passing values from a separate file called *clusterProperties.env*
TF_VAR_worker_groups_launch_template=$(cat <<EOF
[{
name = "worker"
asg_desired_capacity = "1" # Desired worker capacity in the autoscaling group.
asg_max_size = "5" # Maximum worker capacity in the autoscaling group.
asg_min_size = "0" # Minimum worker capacity in the autoscaling group.
on_demand_base_capacity = "15" # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances
on_demand_percentage_above_base_capacity = "75" # Percentage split between on-demand and Spot instances above the base on-demand capacity
subnets = "${TF_VAR_subnets}" # A comma delimited string of subnets to place the worker nodes in. i.e. subnet-123,subnet-456,subnet-789
ami_id = "${TF_VAR_ami}" # AMI ID for the eks workers. If none is provided, Terraform will search for the latest version of their EKS optimized worker AMI.
asg_desired_capacity = "3" # Desired worker capacity in the autoscaling group.
asg_force_delete = false # Enable forced deletion for the autoscaling group.
instance_type = "${TF_VAR_worker_instance_type}" # Size of the workers instances.
override_instance_type = "t3.2xlarge" # Need to specify at least one additional instance type for mixed instances policy. The instance_type holds higher priority for on demand instances.
on_demand_allocation_strategy = "prioritized" # Strategy to use when launching on-demand instances. Valid values: prioritized.
on_demand_base_capacity = "7" # Absolute minimum amount of desired capacity that must be fulfilled by on-demand instances
on_demand_percentage_above_base_capacity = "75" # Percentage split between on-demand and Spot instances above the base on-demand capacity
spot_allocation_strategy = "lowest-price" # The only valid value is lowest-price, which is also the default value. The Auto Scaling group selects the cheapest Spot pools and evenly allocates your Spot capacity across the number of Spot pools that you specify.
spot_instance_pools = 10 # "Number of Spot pools per availability zone to allocate capacity. EC2 Auto Scaling selects the cheapest Spot pools and evenly allocates Spot capacity across the number of Spot pools that you specify."
#spot_max_price = "" # Maximum price per unit hour that the user is willing to pay for the Spot instances. Default is the on-demand price
#spot_price = "" # Cost of spot instance.
placement_tenancy = "default" # The tenancy of the instance. Valid values are "default" or "dedicated".
root_volume_size = "50" # root volume size of workers instances.
root_volume_type = "gp2" # root volume type of workers instances, can be 'standard', 'gp2', or 'io1'
root_iops = "0" # The amount of provisioned IOPS. This must be set with a volume_type of "io1".
key_name = "${TF_VAR_ssh_key_name}" # The key name that should be used for the instances in the autoscaling group
pre_userdata = "sudo usermod -l peks ec2-user && sudo usermod -d /home/peks -m peks && echo 'peks ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers.d/90-cloud-init-users && sudo groupmod -n peks ec2-user && sudo mkdir -p /goshposh/log && chmod -R 777 /goshposh && chown -R 1000:1000 /goshposh && echo ${TF_VAR_friday_pub_key} >> /home/peks/.ssh/authorized_keys && echo ${TF_VAR_peks_pub_key} >> /home/peks/.ssh/authorized_keys" # userdata to pre-append to the default userdata.
additional_userdata = "yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm && systemctl start amazon-ssm-agent" # userdata to append to the default userdata.
ebs_optimized = true # sets whether to use ebs optimization on supported types.
enable_monitoring = true # Enables/disables detailed monitoring.
public_ip = false # Associate a public ip address with a worker
kubelet_extra_args = "--node-labels=kubernetes.io/role=worker --kube-reserved=cpu=200m,memory=256Mi,ephemeral-storage=1Gi --system-reserved=cpu=200m,memory=256Mi,ephemeral-storage=3Gi --eviction-hard=memory.available<500Mi,nodefs.available<10%"
autoscaling_enabled = true # Sets whether policy and matching tags will be added to allow autoscaling.
additional_security_group_ids = "${TF_VAR_worker_additional_security_group_ids}" # A comma delimited list of additional security group ids to include in worker launch config
protect_from_scale_in = true # Prevent AWS from scaling in, so that cluster-autoscaler is solely responsible.
#suspended_processes = "" # A comma delimited string of processes to to suspend. i.e. AZRebalance,HealthCheck,ReplaceUnhealthy
#target_group_arns = "" # A comma delimited list of ALB target group ARNs to be associated to the ASG
#enabled_metrics = "" # A comma delimited list of metrics to be collected i.e. GroupMinSize,GroupMaxSize,GroupDesiredCapacity
}
]
EOF
)
is this from a public terraform module?
Yes
for this value, where should i pass the variable *vpc_zone_identifier*
can you link to the public terraform module you are using?
i can’t send it, it’s confidential
ok, so the module is private then.
In your EKS module, the value being provided to vpc_zone_identifier
needs to be a set of strings. You will need to read the module’s code to understand what value is currently being used and how to fix it.
as of now I am not passing this variable anywhere
variable "vpc_zone_identifier" { type = string }
Two things:
- The code snippet in your original error message shows the value is being set with a lookup
function. That function has a default value which is probably getting used in this case.
- The variable specification you just posted requires a value to be set. Terraform would error if you didn’t provide a value. So you must be setting something.
I used https://github.com/cloudposse/terraform-aws-emr-cluster to terraform an EMR cluster. I then tried to ssh using the auto-generated key and couldn’t. How do I gain access to the master shell (ssh?) and view spark and zeppelin UI safely (ssh tunneling?)
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
Could you be using the wrong username?
(don’t ssh as root
user)
which user should I be using?
AWS documentation says hadoop@…
In the applications I have Spark and Zeppelin
I think amazon linux uses *ec2*-*user*
can anyone connect for a *Zoom call or google meet* to solve the terraform error i am getting
Error: Incorrect attribute value type
on .terraform/modules/eks/workers_launch_template.tf line 40, in resource "aws_autoscaling_group" "workers_launch_template":
40: vpc_zone_identifier = lookup(
Inappropriate value for attribute "vpc_zone_identifier": set of string
required
copy your terraform code here
and your input variables
without that it’s pretty hard to help you
I have Terraform scripts for EKS written for terraform v0.11.14 and i need to upgrade to v0.12.0
I need someone’s help
2021-02-02
I have an EKS cluster and an EKS node group both created with your modules. Instances of that node group by default have the security group listed under “Cluster Security Group” in the AWS Console’s EKS cluster view tab called Networking. I’d like these instances to have an additional security group. How to do this? The workers_security_group_ids adds SG to the security group listed under “Additional Security Groups” of the cluster, so this will not work as instances do not have that security group.
Hello Guys, one question regarding module https://github.com/cloudposse/terraform-aws-eks-iam-role
Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role
I can’t use it if the service account doesn’t already exist at the time of apply
service_account_name = var.external_secrets_service_acount
This account must already exist
which is bad in case you want to recreate from scratch, and you can’t plan
Even with depends_on it doesn’t work
Any workaround for this ?
I need to allow an ALB to communicate with pod that has an ingress and a nodeport service, in an EKS cluster that uses nodegroup. It seems like I have to add the ALB’s security group to that of the EKS instances, which were created by AWS EKS NodeGroup. But this does not seem possible out of the box with your EKS cluster module (at least at version 0.4). Am I going about this incorrectly?
I have this solution working but using the https://github.com/kubernetes-sigs/aws-load-balancer-controller from within the cluster. Works like a charm
A Kubernetes controller for Elastic Load Balancers - kubernetes-sigs/aws-load-balancer-controller
Take a look at this https://github.com/sajid2045/eks-alb-nginx-ingress
Helm chart for two layer ingress controller with alb doing ssl termination and nginx doing dynamic host based routing. - sajid2045/eks-alb-nginx-ingress
The aws LB controller is what I use. Do you mind posting what your ingress yaml looks like?
BTW it looks like I’m not the only one to face this issue: https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/1791.
Hi, The alb controller provides two ingress annotations to work with AWS security groups to restrict traffic access. alb.ingress.kubernetes.io/security-groups alb.ingress.kubernetes.io/inbound-cidr…
@OliverS I am using a helmfile based on the cloudposse helmfiles. Take a look here: https://gist.github.com/reixd/914a19f2835690cca36db306025dcc85
Thanks but this is not the AWS LB Controller, it is the nginx-ingress controller (which is used by the older AWS ALB Ingress Controller).
Hello everyone.
Trying to use this module and I bang my head trying to get access_points set up. What would the variable look like?
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs
Right now I have it set this way.
access_points = { example = { posix_user = { gid = "55007" uid = "55007" }, root_directory = { creation_info = { gid = "55007" uid = "55007" permissions = "0755" } } }
the var is defined as type map(map(map(any)))
… believe it or not, any
does not actually mean absolutely anything… each object in the map needs to be the same type and have the same attributes
though, i think the root_directory
key should not be specified like that… only keys should be posix_user
and creation_info
access_points = {
example = {
posix_user = {
gid = "55007"
uid = "55007"
},
creation_info = {
gid = "55007"
uid = "55007"
permissions = "0755"
}
}
}
this is a strange way to type the object IMO, since each attribute is required. it would be much more clear to use an actual object({...})
type
the indexing into var.access_points while also using for_each is also strange… i feel like it should just be:
gid = each.value.posix_user.gid
uid = each.value.posix_user.uid
secondary_gids = each.value.posix_user.secondary_gids
Thanks for the pointers! I’ll look into tomorrow: it is getting late here
Just confirming that variable formatting works perfectly. Many thanks
Plus a closing “}” of course
Thinking on getting a Mac with a M1 chip, anyone developing in terraform running one?
BigSur killed my screen so I’m a bit skeptical that all the tools will work on a M1
I would wait all of the tooling isn’t quite ready yet. Somewhere there is a GitHub ticket tracker with brew
My friend got one and ended up sending it back because it was too much of a pain
However if you do end-up getting one please update us on your hard won experience :)
it is already sounding far from ideal
Agree with what @pjaudiomv said, haven’t seen / heard a success story for an M1 mac used by dev / devops teams. I have heard the same return stories, just like Patrick’s friend
i’d go with the m1 and use a local VM or a remote dev environment as an interim solution. the speed and power improvements seem worth it IMO
Also there’s Rosetta 2 https://support.apple.com/en-us/HT211861
Rosetta 2 enables a Mac with Apple silicon to use apps built for a Mac with an Intel processor.
I think apple wants paying customers to do the beta test and rollout lol
Lessons learned on day 1 of an Apple M1 device (not my primary, just for testing):
- arch -x86_64 <cmd> to force an arch. If you run your shell as x86_64, everything else will automatically be Intel.
- Go works! Just build for amd64 and run it like normal, Rosetta does its job.
parallels appears to work also… https://twitter.com/bradfitz/status/1354874746953814016
yes, I could do that too
Getting an error message when trying to setup a cluster using the latest version 0.27.0 of the module for Elasticsearch.
Error: invalid value for domain_name (must start with a lowercase alphabet and be at least 3 and no more than 28 characters long. Valid characters are a-z (lowercase letters), 0-9, and - (hyphen).)
on .terraform/modules/elasticsearch-cluster/main.tf line 102, in resource "aws_elasticsearch_domain" "default":
102: domain_name = module.this.id
I can see in main.tf the following code
resource "aws_elasticsearch_domain" "default" {
count = module.this.enabled ? 1 : 0
domain_name = module.this.id
But the context.tf file doesn’t contain anything for id
module "this" {
source = "cloudposse/label/null"
version = "0.22.1" // requires Terraform >= 0.12.26
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
context = var.context
}
I want to use a string that is 17 chars long but can only use one that is 10 or the error occurs.
I am passing my domain name to the module variable for name
Is this the right way to set the domain name?
are you setting id_length_limit to 28?
(for the null label module)
Nope, not setting that. I figured out it was the name variable that forms part of the id. I changed that and it works, but I need to figure out how to construct the id to what I want
If you want a specific thing for the id, maybe just hardcode it
I’m trying to write variable validation to ensure a list of type = list(number)
contains no null
values. So I need a test that returns false if the list contains null
.
This doesn’t work (!): contains([null,1,2], null)
(“Invalid value for “value” parameter: argument must not be null.”)
This does, but is much uglier: length([for i in [null,1,2] : i if i == null]) == 0
.
Any better suggestion?
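For what it’s worth, on TF 0.14 the alltrue() function makes this kind of validation read more directly (a sketch; the variable name is illustrative):

```hcl
variable "ports" {
  type = list(number)

  validation {
    # true only if every element of the list is non-null
    condition     = alltrue([for p in var.ports : p != null])
    error_message = "The list must not contain null values."
  }
}
```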
Does anyone have Architectural Decision Records
for Terraform as an example?
I want to learn more about why you picked Terraform.
Following this, @Mohammed Yahya please do let me know if you get this from other sources
is there anything better than terraform yet? that is a question I ask
when comparing community sizes there is nothing to compare to TF
there are many other things to include in the comparison table when evaluating
I agree with you; also we need this ADR to show C-Level why they need Terraform, and to compare it to CDK and Pulumi, which are gaining popularity
true
I can’t prove that of course, but I see a lot of blog spam and ads from them
well they pioneered CDK, that counts for them
Oh yah, it might be a great product! I’m not even disputing that
That was an easy one for us. As far as open source tools go, it has the highest market share, the highest level of community support, and the least amount of vendor lock
I’ve learned to word this stuff as “This is the tool your technical team is recommending.” rather than “Is it okay if we use this?“.
Sorry for the off-topic, I would like to ask @Mohammed Yahya if you could share some details about your ADR workflow. I’m interested in how it works for you and your team, how often you do this(for which type of decision), where you store records (git/confluence, dedicated repo/with the code), any tooling you are using. Basically, the question is about your feelings if it really helps you and your team/org. If you have any examples of your ADRs that you can share I will appreciate it a lot.
One of our clients tried the ADR approach, but it didn’t last long. Probably the majority didn’t see any value in following it, and the ones who did gave up because they were only a few. But that’s a different story. As for me, I like the idea.
We use ADRs for breaking the ice and destroying the silo around a specific technology. The team can disagree when choosing between two technologies, like the famous one: should we use Jenkins and why (some hate Jenkins, others don’t, so we break the tie using an ADR). It helps a lot knowing what drives the company’s choices from the start
we initially used a repo called technology-adrs
inside that, a bunch of md
files describe an ADR for each choice we need to make
tech-adr
|-- cloud.md
|-- iac.md
|-- cicd.md
|-- frontend.md
|-- backend.md
|-- sec.md
now we are thinking of moving to stackshare.io for our stack listing and choices; here’s a sample I just created https://stackshare.io/mhmdio/decisions/105668378243793712 this will make us transparent, ease on-boarding, and the team will understand why we chose something over the other.
but I could not find any decent ADR for Terraform. I want to know why enterprises are using it; working with enterprise clients, it is hard to convince them to move and change the way they deal with the cloud or cloud migration.
2021-02-03
Hello guys,
How I can define separately configuration for blocks of cors rules with this module? https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn
I have configuration module something like this
cors_allowed_origins = ["<https://example.com>","*"]
cors_allowed_methods = ["PUT", "DELETE"]
cors_allowed_headers = ["*"]
cors_expose_headers = []
cors_max_age_seconds = 300
and this configuration creates 3 cors blocks for me with origins example.com, * and the aliases defined in the module (this is pretty good), but I can’t edit any options for these origins like allowed_methods or allowed_headers. All of the created cors blocks have the same configuration
Is there any solution to do this using only this module?
Anyone using the terraform-aws-vpc
module? You’re probably running into this now:
https://github.com/terraform-aws-modules/terraform-aws-vpc/issues/581
Started getting error for terraform 11 and module version 1.72 Error: Error refreshing state: 1 error occurred: * module.vpc.data.aws_vpc_endpoint_service.s3: 1 error occurred: * module.vpc.data.aw…
i was, but i updated to the latest module version and it was fixed
Ah! Didn’t notice.
I’m trying to upgrade our Terraform from 0.13.5 to 0.14.5 but I’m running into an issue.
All outputs of the terraform-aws-ecs-container-definition
module are giving me a Error: Output refers to sensitive values
Is anyone familiar with this error and how it could be fixed? Should the outputs of the module be changed with sensitive = true
or is there something on my end I have to change?
yes, in tf 0.14, the provider has the ability to mark attributes of resources as sensitive. if you output such an attribute, the output must also be marked as sensitive
These outputs are only generated in the ecs-container-definition
module and are then fed into an ecs-alb-service-task
We aren’t outputting them ourselves though
same deal the modules need to treat them as sensitive
I see some module do have the sensitive = true
for some of the outputs
I would have assumed that the tests would catch this, or my use case is just completely different from those tests
tests are hard
It’s hard to cover all use cases, I know all too well I’m afraid haha
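For reference, the caller-side shape ends up something like this (the module and output names are illustrative, not from the actual modules):

```hcl
# re-exporting a provider-marked sensitive value: the wrapping
# output must also be flagged, or TF 0.14 errors at plan time
output "container_json" {
  value     = module.container_definition.json_map_encoded
  sensitive = true
}
```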
how does one set the required_version
to be picked up automatically by atlantis for 0.13 or 0.14 ?
my default version in atlantis is set for 0.12.30 but im trying to run a plan in a module that has the following block and atlantis cannot seem to interpret as using 0.13 or 0.14. module is applied using tf 0.14.5.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
required_version = ">= 0.13"
}
Atlantis: Terraform Pull Request Automation
error from atlantis.
Warning: Provider source not supported in Terraform v0.12
on versions.tf line 3, in terraform:
3: aws = {
4: source = "hashicorp/aws"
5: }
A source was declared for provider aws. Terraform v0.12 does not support the
provider source attribute. It will be ignored.
Error: Unsupported Terraform Core version
Maybe just setting it in the repo atlantis.yaml is a workaround for now
@RB
Struggled with this today as well - atlantis can only work with =
and not with >=
right now.
https://github.com/runatlantis/atlantis/issues/1217
Currently, it appears that the code in project_command_builder.go only accepts terraform.required_version specifications that exactly specify a single version, rather than a range or other version …
I’m upgrading some modules that work fine in terraform 0.12, to terraform 0.13. Got the terraform init
to complete. Had to up the version on some third-party modules. The terraform apply
gives me several errors “Provider configuration not present”. Unfortunately I do not know how to address this:
To work with
module.eks_main.module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3]
its original provider configuration at
provider["registry.terraform.io/-/null"] is required, but it has been removed.
This occurs when a provider configuration is removed while objects created by
that provider still exist in the state. Re-add the provider configuration to
destroy
module.eks_main.module.vpc.module.label.data.null_data_source.tags_as_list_of_maps[3],
after which you can remove the provider configuration again.
How do I re-add the provider: in what file (the eks_main main.tf? the vpc module main.tf? etc), and would it just be like
provider "aws" {
region = "us-east-1"
}
There’s a secret replace command that needs to be run
terraform state replace-provider "registry.terraform.io/-/aws" "registry.terraform.io/hashicorp/aws"
Looks like op might need to replace the null provider tho
this ought to show you all providers from the config and the tfstate, so you can map out what to replace:
terraform providers
geez, thanks so much guys, I did not know these commands, they did the job (I did the replace on null, local, template, and aws)
For upgrade from 0.12 to 0.14, the docs say to first upgrade to 0.13. Does this mean for 0.13 just the init + validate + verify that plan created, or does it also require apply?
i do this and have had good luck
rm -rf .terraform/
# upgrade from tf12 to tf13
tfenv use latest:^0.13
terraform init -upgrade
terraform state replace-provider "registry.terraform.io/-/aws" "hashicorp/aws" -yes
terraform apply
terraform 0.13upgrade -yes
terraform init
terraform apply
# upgrade from tf13 to tf14
tfenv use latest:^0.14
terraform init
terraform apply
some of it might be extra
¯\_(ツ)_/¯
what’s tfenv
Terraform version manager. Contribute to tfutils/tfenv development by creating an account on GitHub.
there’s also https://tfswitch.warrensbox.com/Install/
A command line tool to switch between different versions of terraform (install with homebrew and more)
more chars in the command line and fewer stars tho
what are some of the bennies with tfswitch over tfenv ?
haven’t seen tfutils before but seems like its very similar
and as you said, fewer chars haha
Just had a look, both have excellent capabilities if you have to switch between several terraform versions regularly
BTW just saw this in the upgrade docs for 0.14:
Terraform v0.14 does not support legacy Terraform state snapshot formats from prior to Terraform v0.13, so before upgrading to Terraform v0.14 you must have successfully run terraform apply at least once with Terraform v0.13 so that it can complete its state format upgrades.
seems to match my commands above ^
is there a way to get information on the last terraform run (apply/plan)? Basically trying to do something like this feature in terraform cloud https://www.terraform.io/docs/cloud/run/manage.html
if you have a tfstate s3 with versioning on, you can look at previous versions
i havent used it but this might come in handy https://github.com/camptocamp/terraboard
A web dashboard to inspect Terraform States - camptocamp/terraboard
ah nice, thanks! yea I was inspecting the tfstate hoping to see if there was a lastRun type param that might also include any errors that occurred but I don’t think that exists
for last run information, you’d need a CICD for terraform like atlantis
@RB have you run terraboard in aws? what’s the recommended way.. Fargate? ECS?
i havent used it but this might come in handy https://github.com/camptocamp/terraboard
it’s on my todo when time permits
ill be free by Q3 maybe haha
We use terraboard in dev environments, very convenient for developers to understand what’s in their TF states or search resources by type/name
Hello everyone! I have spent around a week or two trying to set up basic terraform configuration base for my example project and heard many opinions (yes/no? to Terragrunt, yes/no? to workspaces) so in abundance of various conflicting information and incomplete tutorials (tutorials which advocate an idea but do not showcase it fully) I’ve kind of lost focus. This is when I decided a Stack-overflow post might be a good idea, but that has also backfired since I haven’t got any answers to my broad questions, even though people replied.
TLDR from SO: I’d like to have multi-env (dev,stage) Terraform IaC setup that uses modules, that clearly separates prod and non-prod state management. (for the time being, resources provisioned do NOT matter to me, that can be as simple as an s3 bucket which I tried to illustrate).
Is it okay if I post it here, I’m looking for help in understanding how to set this up, and of course to change my approach If it is too restrictive/plain wrong. Thanks!
Go ahead. Suggest you post the content in replies on a thread and not as direct messages in the channel, to avoid creating noise for people.
Sure, thanks! Here’s the post on SO. I’m posting it just to avoid duplicating text since I tried my best to describe scenario I’m in there, but I’ll gladly discuss this topic here if anyone is for it, and post my resolution as an answer on SO later on. https://stackoverflow.com/questions/66024950/how-to-organize-terraform-modules-for-multiple-environments
Nikola thanks for posting! I have the same question: what’s the simplest real-world solution to a basic Terraform setup? The SO answers seem to all say “it’s complicated” Yoni thanks for answering.
Yeah, I have added my own answer since my avoidance of terraform workspaces led me into this rat race for which I didn’t find an answer. So as one person in the comments replied, workspaces were created to solve this issue of multi-env projects. I successfully did what I wanted to do, but at the moment with local state keeping. This is all very much a work (more R&D) in progress, but I think I’ve realized the path I want to go down. I’ll try to expand my answer on SO with an example a bit later on.
Hey everybody - trying to find the best way to import / generate baseline configurations from an AWS environment into terraform code to then edit. I’ve been under a small rock, so are we still in the days of the predefined resource + import or is there a more streamlined solution I’ve been unaware of?
i recall someone using a combination of import -allow-missing-config
and something else, maybe state show
?, to write out a near-working config…
here you go: https://asciinema.org/a/VVW2jx7jtFahLEyPObB0mABJX
Recorded by juliosueiras
found it in the slack archive… https://archive.sweetops.com/terraform-0_12/2019/08/#68874db3-6e97-4810-a3e5-6797c15eeab4
SweetOps Slack archive of #terraform-0_12 for August, 2019.
much rejoicing
Thanks @loren I’ll take a look
2021-02-04
Hi, we are trying to update the “eks-cluster” module (version 0.32) and we started encountering this error when running terraform plan:
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: i/o timeout
We suspect it is because of the kubernetes provider version, which was upgraded recently, but in their docs we don’t see any breaking changes regarding this existing configuration:
provider "kubernetes" {
token = join("", data.aws_eks_cluster_auth.eks.*.token)
host = join("", data.aws_eks_cluster.eks.*.endpoint)
cluster_ca_certificate = base64decode(join("", data.aws_eks_cluster.eks.*.certificate_authority.0.data))
}
Did anyone encounter this issue ? thanks
Do you have var.apply_config_map_aws_auth = false
?
nope, we are using the default “true” value. We found a workaround where removing the resource from the state solves the issue, but we would obviously need a better option. If I set that variable to false, I still need to apply the configMap (only this time separately or manually) right? So what is the difference?
If you were setting apply… to false then the provider would not get configured, so I thought that might be what was happening. Seems like a bug in the Kubernetes provider. Did you check for open issues?
yes in that case the configMap will not be created and the provider won’t do anything, In the open issues regarding this problem I only found the workaround with terraform state rm
. I can try working with this var, but I still need the configmap executed
You might try setting var.apply_config_map_aws_auth = false
to create the cluster, then set it to true
and update the cluster. It is possible there is a new race condition or something. @Andriy Knysh (Cloud Posse) would you please review the EKS cluster auth in light of the new Kubernetes provider?
@Michael Koroteev You might try updating to the current eks-cluster module, v0.34.1. I just tried it and it worked for me.
this
dial tcp [::1]:80
looks like it tries to connect to a local host cluster (and obviously fails)
Right. Question is: does this seem something that could be new due to the new v2.0 Kubernetes Terraform Provider ?
Version 2.0 of the Kubernetes and Helm providers includes a more declarative authentication flow, alignment of resource behaviors and attributes with upstream APIs, normalized wait conditions across several resources, and removes support for Helm v2.
This looks like the same issue I’m having, in my case isn’t ipv6 but still trying to call localhost:
GET /api/v1/namespaces/kube-system/configmaps/aws-auth HTTP/1.1
Host: localhost
User-Agent: HashiCorp/1.0 Terraform/0.14.6
Accept: application/json, */*
Accept-Encoding: gzip
---: timestamp=2021-02-10T00:25:35.225Z
2021-02-10T00:25:35.226Z [INFO] plugin.terraform-provider-kubernetes_v2.0.2_x5: 2021/02/10 00:25:35 [DEBUG] Kubernetes API Resp
onse Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 503 Service Unavailable
@Andriy Knysh (Cloud Posse) @Erik Osterman (Cloud Posse) looks like a lot of people are having issues with EKS-cluster since the new v2.0 Kubernetes Terraform Provider. I tried but could not reproduce the problem. Let’s put our thinking caps on.
Version 2.0 of the Kubernetes and Helm providers includes a more declarative authentication flow, alignment of resource behaviors and attributes with upstream APIs, normalized wait conditions across several resources, and removes support for Helm v2.
Behavior suggests
data.aws_eks_cluster.eks.*.endpoint
is null. Could be a bug where the provider is not waiting for the data.
I wonder, @Michael Koroteev @nnsense Are you using Terraform 0.14? Seems it is more susceptible to https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#stacking-with-managed-kubernetes-cluster-resources See also: https://github.com/hashicorp/terraform/issues/4149 .
For a while now I've been wringing my hands over the issue of using computed resource properties in parts of the Terraform config that are needed during the refresh and apply phases, where the …
Terraform Version, Provider Version and Kubernetes Version Terraform version: 0.14.1 Kubernetes provider version: 1.13.0 Kubernetes version: 1.15 Affected Resource(s) Terraform Configuration Files …
Yes I’m working with terraform 0.14
I will try using the latest version of the module and let you guys know.
anyway, I checked in the state itself and the data.aws_eks_cluster.eks.*.endpoint
field contains the actual value.
Me too (14.5 or something). It’s strange, yes the ENDPOINT variable if set as output is indeed showing the right value, but if I enable TRACE it clearly shows localhost as endpoint (it even shows my local server answer, the same I get if I run curl post myself locally).
Basically I have the same config as your complete examples, with the addition of the iam role thing. If I apply
all good. If I then refresh
, it shows unauthorised
with the 503
to (it seems) localhost. If I destroy, exactly after having destroyed the nodes, it fails again with unauthorised
, then I run it again and it destroys the rest of the things until only one module exist, that I usually delete with rm
: module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes[0]
(leaving kubernetes_config_map_ignore_role_changes = true
)
Should I try with v0.13?
@nnsense we did test the new changes with k8s provider on TF 0.13 (did not test it completely on 0.14)
yes, please try 013, we are going to deploy with 0.14 and find the issues if they exist
Thanks! I’m trying as we speak
“As we speak” was incredibly optimistic… 10 minutes to destroy the old one, 10 minutes to create the new one with tf 0.13… :D
Still creating... [6m20s elapsed]
Hey, with tf 0.13 refresh
worked without throwing that error… mmmmhh… Interesting!
ok thanks @nnsense
we are going to deploy/destroy with TF 0.14
Thanks!!
Re: kubernetes provider issues with terraform 0.14
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: i/o timeout
Posted more updates here: https://github.com/cloudposse/terraform-aws-eks-cluster/issues/104#issuecomment-792520725
Seems like no good fix is available yet. Anyone solve this?
Describe the Bug Creating an EKS cluster fails due to bad configuration of the Kubernetes provider. This appears to be more of a problem with Terraform 0.14 than with Terraform 0.13. Error: Get &qu…
Note the fact that Terraform has an example for how to do auth with the v2
of the provider: https://github.com/hashicorp/terraform-provider-kubernetes/tree/master/_examples/eks
Creation, deletion, and upgrades worked without any issues for me, using that code. Buuuuuuut I have no idea how complex the migration path is
Terraform Kubernetes provider. Contribute to hashicorp/terraform-provider-kubernetes development by creating an account on GitHub.
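For reference, the auth flow in that example wires the provider to an exec block rather than a pre-fetched token, roughly like this (a sketch; the data source and variable names are assumptions):

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)

  # fetch a fresh token at plan/apply time instead of caching one in state
  exec {
    api_version = "client.authentication.k8s.io/v1alpha1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```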
@Andriy Knysh (Cloud Posse)
thanks @Vlad Ionescu (he/him) I’ll look into that today
Note the fact that my comment saying that got 3 reactions on GitHub No idea why, but beware!
I still maintain that it worked fine for me on a new cluster scenario. And it worked fine for the students on my “Running containers on AWS” course
no worries, we’ll figure it out. Our current module also works in many cases, but does not work in some for some people
If you have any questions, I’m here!
We have released terraform-aws-eks-cluster
v0.37.0 which resumes support for Kubernetes Terraform provider v1. We have rolled back to using the v1 provider in our root modules until the connectivity issue with the v2 provider is resolved. That is the best resolution to this issue we have to offer at this time.
We recommend using terraform-aws-eks-cluster
v0.38.0, terraform-aws-eks-node-group
v0.19.0, and edit the versions.tf
in your root module to include
terraform {
...
required_providers {
...
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 1.13"
}
}
}
I’m getting
ERROR: Post “http://localhost/api/v1/namespaces/kube-system/configmaps”: dial tcp 127.0.0.1 connect: connection refused
reliably on initial creation.
I’m using 0.38.0 of the module and the kubernetes provider is 1.13.1. I’m only creating the cluster (no nodegroups) initially as I’d like to keep my workspaces smaller and focused in Terraform Enterprise.
I used an earlier version of the module and never had this issue.
Module is pretty challenging to use in the current state.
Oh, and the error is in the resource “aws_auth_ignore_changes”.
My issue turns out to be wrong versions. Specifically, I switched to the versions in the examples/complete version 0.38.0. I think the change that fixed it for me was kubernetes provider >= 2.0.
Hey all I have a question regarding the terraform-aws-elastic-beanstalk-environment
module. We’re in the process of upgrading from a real old version. (0.11.0), and I’m trying to get the environment name in elastic beanstalk to match what that version generated, which was just the stage. Looks like were maybe setting it through the Environment tag. Now it’s some combination of namespace-name-stage. I assumed setting environment = var.stage
would do it but I can’t see the effect that has. Any assistance would be greatly appreciated.
any of namespace
, environment
, stage
, name
are optional
you can set just one of them, or a few, or all of them
the module will generate IDs based on the pattern namespace-environment-stage-name
, but the order of these are configurable as well
So if I exclude any of them they won’t be used?
yes
at least one is required
Sweet, thanks so much for the advice.
note that we use the pattern namespace-environment-stage-name
to uniquely identify all the resources for an organization (hence the namespace
which is usually an abbreviation for the org)
this is useful for consistency (ALL your resources are named the same way), and also for naming AWS global resources like S3 buckets (which are global and the names cannot be reused between accounts)
I also assume namespace comes into play for eb url?
for all global resources. if we name all of them using the pattern, there is very little chance of naming conflicts (only if somebody else is using the same pattern and the same values, which is very unlikely)
is there a way to use a different pattern for env name and global resources?
I’m just a little stuck since I’m working in old code, and a whole lot else will need to change if the env name has to change.
all the values are optional
you can just use name
which can be anything
you can also change the delimiter
from -
to whatever you want
You can also use label_order
to change the order of the labels.
v0.14.6 (February 04, 2021) ENHANCEMENTS: backend/s3: Add support for AWS Single-Sign On (SSO) cached credentials (#27620) BUG FIXES: cli: Rerunning init will reuse installed providers rather than fetching the provider again (#27582)
Changes: * backend/s3: Support for AWS Single-Sign On (SSO) cached credentials Updated via: go get github.com/aws/[email protected] go mod tidy Please note that Terraform CLI will not initiate o…
In init, we can check to see if the target dir already has the provider we are seeking and skip further querying/installing of that provider. This will help address concerns users are having where …
Hello,
I am creating an SSH key pair in TF and storing in secrets manager for further use by related resources. While checking out support for SSH key generation via TF code, I came across the warning that the solution is not production grade since the private key would be stored in TF state file. How are others solving for such use cases ?
everything in the state file is stored in plain text, so none of it is production-grade secure
I would secure the S3 bucket that hosts the state file very tightly, and enjoy using Terraform - the TLS Terraform provider makes it easy to generate keys/certs when needed
How about AWS secrets manager?
Or use vault?
Thank you for your responses. I want to generate the SSH key pair and store in secrets manager. Maybe I’ll use the local-exec
2021-02-05
Hi everyone! I have created an EKS cluster with the terraform_aws_eks module and the cluster was created with a particular access key and secret key. On a client machine, I cannot use that access key but have to use another set of accesskeys and then assume a role using the aws sts command. After assuming the role, I have “admin access”. When I then call kubectl get pods, I do not have access. I thought I could solve this by including this bit in the cluster creation:
map_roles = [
  {
    rolearn  = "arn:aws:iam::<account-id>:role/my-role"
    username = "my-role"
    groups   = ["system:masters"]
  }
]
where rolearn is the role that I assumed… but when executing kubectl get pods, I still have no access. Could someone point me to a solution ?
You still need to use aws eks update-kubeconfig
to generate your kubeconfig file, and you need to generate it with the --profile
you want to use to access the cluster.
wanted to auto scale aurora using https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appautoscaling_target but i cannot see an option on how to scale down
You force it to scale down by reducing max_capacity
any pointers you can give me or better yet examples that i can play with?
my goal is to scale up/down my rds instances depending on time
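For time-based scaling, the autoscaling target can be paired with aws_appautoscaling_scheduled_action; here is a sketch for Aurora read replicas (cluster name and schedules are placeholders):

```hcl
# Register the Aurora cluster's replica count as a scalable target
resource "aws_appautoscaling_target" "aurora" {
  service_namespace  = "rds"
  resource_id        = "cluster:my-aurora-cluster"
  scalable_dimension = "rds:cluster:ReadReplicaCount"
  min_capacity       = 1
  max_capacity       = 4
}

# Raise the floor every morning; a mirror-image action with lower
# capacities would scale back down in the evening
resource "aws_appautoscaling_scheduled_action" "scale_up" {
  name               = "aurora-scale-up"
  service_namespace  = aws_appautoscaling_target.aurora.service_namespace
  resource_id        = aws_appautoscaling_target.aurora.resource_id
  scalable_dimension = aws_appautoscaling_target.aurora.scalable_dimension
  schedule           = "cron(0 8 * * ? *)"

  scalable_target_action {
    min_capacity = 2
    max_capacity = 4
  }
}
```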
Is there a way to remove a provider in the remote state that has been added but has a typo?
hi,
how can I convert [""]
to []
?
The compact function removes empty string elements from a list.
or maybe a for loop if you need more conditions:
[for item in <list> : item if !contains(["", null, false], item)]
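As a quick sketch of the compact() suggestion:

```hcl
locals {
  raw     = ["", "a", ""]
  cleaned = compact(local.raw) # ["a"] - and [""] becomes []
}
```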
Thanks,
But I already have it set here:
target_groups = join(", ", compact([
  try(aws_alb_target_group.internal_ecs_tg[0].arn, null),
  try(aws_alb_target_group.external_ecs_tg[0].arn, null),
]))
That was set in locals, but when I’m calling it from aws_cloudformation_stack
and there are internal and external target groups, I got [""]
.
TargetGroupsArns: ["${local.target_groups}"]
i can’t really follow that
OK, thanks, indeed it’s complicated, oh love Terraform
Thanks again!
How can I skip a property when calling cloudformation stack from Terraform? Empty value does not work
Try null
That’s worked for me in the past
@Erik Osterman (Cloud Posse) https://github.com/cloudskiff/driftctl might be interesting to you
Detect, track and alert on infrastructure drift. Contribute to cloudskiff/driftctl development by creating an account on GitHub.
yea, something to help manage it
@loren stumbled across this too https://sweetops.slack.com/archives/CB6GHNLG0/p1610492345159400
might be a cool tool… https://driftctl.com/2020/12/22/announcing-driftctl
Where I think we’d want to see this is in the TACOS
so i didn’t look that closely last time
driftctl scan --from <tfstate+s3://acmecorp/states/terraform.tfstate>
not working as expected BTW
that’s cool - it will literally consume the statefile and compare that with what’s running.
did you kick the tires?
yes
drift:
desc: driftctl - Detect, track and alert on infrastructure drift.
cmds:
- |
driftctl scan \
--from tfstate+s3://{{.ACCOUNT_ID}}-{{.REGION}}-tf-state/state/dev-app.tfstate \
--from tfstate+s3://{{.ACCOUNT_ID}}-{{.REGION}}-tf-state/state/dev-data.tfstate \
--from tfstate+s3://{{.ACCOUNT_ID}}-{{.REGION}}-tf-state/state/dev-network.tfstate
silent: true
vars:
ACCOUNT_ID:
sh: aws sts get-caller-identity | yq e ".0.Account" -
the issue is it reports 97% of the state file as drift, when it should be the opposite
[aws-vault] [drift] Found 148 resource(s)
[aws-vault] [drift] - 4% coverage
[aws-vault] [drift] - 7 covered by IaC
[aws-vault] [drift] - 141 not covered by IaC
[aws-vault] [drift] - 0 deleted on cloud provider
[aws-vault] [drift] - 0/7 drifted from IaC
looks like it does not recognize resources in modules, just resources outside of modules
we’ve been using Fugue for this
I would love it if it could just grab all state files from the S3 location, instead of having to specify them one-by-one using –from cli attribute
Hi all, I’m Gerald, part of the driftctl team. Only just joined this slack and thrilled to see your discussions! If you don’t mind, I’ll update some of the posts with more recent information. Feel free to comment if needed
https://sweetops.slack.com/archives/CB6GHNLG0/p1612544451305500?thread_ts=1612543469.302600&cid=CB6GHNLG0 @Mohammed Yahya the tool now reads resources from modules within the tfstate, (which was indeed not the case before the 0.5.0 release). So you should probably get significantly lower drift % if you retry it.
looks like it does not recognize resources in modules, just resources outside of modules
https://sweetops.slack.com/archives/CB6GHNLG0/p1612801916339600?thread_ts=1612543469.302600&cid=CB6GHNLG0 @Igor it’s now possible to read all tfstate files within a S3 or a folder if stored locally. Much more convenient
I would love it if it could just grab all state files from the S3 location, instead of having to specify them one-by-one using –from cli attribute
BTW we had a recurring bug that caused the tool to hang and it seemed to take ages to run while it was basically just stuck. We finally fixed it last week in the last release so I hope you’ll get a better experience now. (we still have issues with SSO causing freezes though as you can see in some of our issues on GH. Working on it )
Thanks @Gerald - welcome and glad you’re keeping us up to date.
Ping me and maybe we can get a demo on #office-hours
Looks like @antonbabenko is doing an episode this week dedicated to this. https://www.youtube.com/watch?v=-BS65owCCmQ
Thanks @Erik Osterman (Cloud Posse)
does anyone have a recommended guide for configuring AWS SSO (using Azure AD) with Terraform?
(we’ll have a terraform module coming out soon - but no specific instructions for azure)
Terraform & Terragrunt Version Manager. Contribute to aaratn/terraenv development by creating an account on GitHub.
Why is this needed vs tfenv tgenv?
terraenv author here, I created this tool to solve the problems below
• single tool to do terraform and terragrunt version management
• available as pip, brew, docker image, osx and linux binaries ( tfenv and tgenv are bash scripts and not binaries )
Why not simply work with them to create a single tool based on their existing code? I don’t have any issue with your tool, just trying to reduce duplicate work.
Sometimes these tools get to their limitations and it is good to create a new tool. Personally speaking I’ve now migrated from tfenv to tfswitch ( https://tfswitch.warrensbox.com/ )
Reason being that tfenv did not properly parse *.tf
files to detect the terraform version from them for me/us.
Additionally, tfenv added considerable time overhead when executing terraform - tfswitch (in combination with direnv
) switches almost instantly
A command line tool to switch between different versions of terraform (install with homebrew and more)
Quite often https://xkcd.com/927/ happens
[Title text] “Fortunately, the charging one has been solved now that we’ve all standardized on mini-USB. Or is it micro-USB? Shit.”
2021-02-06
This could be handy, for generating minimal iam policies… https://github.com/iann0036/iamlive
Hi, Why do I need to use DynamoDB with aws remote state?
A guide to file layout, isolation, and locking for Terraform projects
But I can use Azure Blob Storage without any DB help and locking works
isn’t it enough to create a lock file on S3?
I’m afraid I don’t have any knowledge of azure services. I have always used AWS and with TF been using S3 as remote backend for state files and dynamodb for locking.
Terraform can store state remotely in Azure Blob Storage.
This backend also supports state locking and consistency checking via native capabilities of Azure Blob Storage.
thats why
Don’t talk too loud or the next version of the provider will require a lambda too
Azure blob has a native “locking” concept, while S3 does not. That’s why the AWS backend uses a second service to implement homegrown locking
The Lease Blob operation creates and manages a lock on a blob for write and delete operations.
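For comparison, the AWS-side setup needs both services in the backend block (bucket and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tf-state"      # state storage
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"      # lock table; Azure Blob needs no equivalent
    encrypt        = true
  }
}
```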
I also have this issue, is there any solution for this? thanks!
Hi, we are trying to update the “eks-cluster” module (version 0.32) and we started encountering this error when running terraform plan:
Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp [::1]:80: i/o timeout
We suspect it is because of the kubernetes provider version, which was upgraded recently, but in their docs we don’t see any breaking changes regarding this existing configuration:
provider "kubernetes" {
token = join("", data.aws_eks_cluster_auth.eks.*.token)
host = join("", data.aws_eks_cluster.eks.*.endpoint)
cluster_ca_certificate = base64decode(join("", data.aws_eks_cluster.eks.*.certificate_authority.0.data))
}
Did anyone encounter this issue ? thanks
Do you have var.apply_config_map_aws_auth = false
?
Hi, I have a CloudFormation template on which I lost 2 days trying to solve a problem, but I am a CF noob. I want to handle this on the Terraform side, where one value can be either a string or a list - depending on a true or false value.
For example:
false ? "test" : tolist(["test2"])
But I got an error:
The true and false result expressions must have consistent types. The given
expressions are string and list of string, respectively.
Is there any workaround for this?
Many thanks!
Generally, no, Terraform uses moderately strict typing most of the time. There is very little you can do with a value that might be a string or a list. Usually with these options you use a list of strings and just have one string in the list.
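A sketch of the one-string-in-a-list approach, which keeps both branches of a conditional the same type:

```hcl
variable "use_first" {
  type    = bool
  default = false
}

locals {
  # both branches are list(string), so the conditional type-checks
  value = var.use_first ? ["test"] : ["test2"]
}
```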
2021-02-07
There isn’t. Each variable has to be a single type
What are you trying to accomplish with this technique
I have one value in the CF template that can be either the string AWS::NoValue or a list, depending on other values.
Oh i thought you were trying to do it in terraform
Is there any way to omit a line in a Terraform template?
You can set it to null
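For instance, an optional argument set to null behaves as if the line were omitted (the resource and argument here are only illustrative):

```hcl
resource "aws_cloudformation_stack" "example" {
  name          = "example"
  template_body = file("${path.module}/template.yaml")

  # rendered as if the line were absent when the condition is false
  timeout_in_minutes = var.set_timeout ? 30 : null
}
```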
hi all, i have a question, i was asked to make a spot fleet that scales in / out according to specific time. is this still applicable in 0.12.x version of terraform? - https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/spot_fleet_request#iam_fleet_role
Hi! When I’m loading template_file through data, how can I specify the list to iterate through? For example https://www.terraform.io/docs/language/functions/templatefile.html
I know that the templatefile function renders the template, but how can I implement that?
For example, this is where I specify template_file:
data "template_file" "cf" {
template = "${file("${path.module}/templates/cf-asg.tpl")}"
vars = {
service_name = "${var.service_name}"
subnets = join("\",\"", var.subnets)
availability_zones = join("\",\"", var.availability_zones)
lc_name = "${aws_launch_configuration.ecs_config_launch_config.name}"
min_instances = "${var.min_instances}"
max_instances = "${var.max_instances}"
desired_instances = "${var.desired_instances}"
asg_health_check_type = "${var.asg_health_check_type}"
no_of_resource_signals = "${var.no_of_resource_signals}"
#tgs = [local.tg]
region_tag = var.region_tag
env_tag = var.env_tag
newrelic_infra_tag = var.newrelic_infra_tag
purpose_tag = var.purpose_tag
patch_group_tag = var.patch_group_tag
}
}
This is where I’m loading it:
resource "aws_cloudformation_stack" "autoscaling_group" {
name = "${var.service_name}-asg"
template_body = data.template_file.cf.rendered
depends_on = [aws_launch_configuration.ecs_config_launch_config]
}
And this is a part of cf-asg.tpl:
MinSize: "${min_instances}"
MaxSize: "${max_instances}"
%{ for tg in tgs ~}
TargetGroupARNs: ${tg}
%{ endfor ~}
So, how can I specify the list tgs to iterate through?
The templatefile function reads the file at the given path and renders its content as a template.
Instead of doing this by terraform-templating the CF, can you write the CF to accept an input parameter and use a CF condition to set AWS::NoValue
?
No, because both values in the condition must exist, but in some cases I do not have a target group.
the CF parameter list-type accepts a comma-separated string. you can detect an “empty” list in a CF condition with something like this:
Conditions:
  UseTargetGroupArns: !Not
    - !Equals
      - !Join
        - ''
        - !Ref TargetGroupArns
      - ''
and then in the CF ASG resource:
TargetGroupARNs: !If
  - UseTargetGroupArns
  - !Ref TargetGroupArns
  - !Ref 'AWS::NoValue'
on the TF side, you pass in either an empty string, or a comma-separated string. because it’s a string either way, the TF conditional syntax will work
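Put together, the TF side might look roughly like this (a minimal sketch; the variable and resource names here are hypothetical, not from the actual template):

```hcl
# hypothetical sketch: passing the target group list to CF as one string
variable "target_group_arns" {
  type    = list(string)
  default = []
}

resource "aws_cloudformation_stack" "asg" {
  name          = "example-asg"
  template_body = file("${path.module}/asg.cfn.yaml")

  parameters = {
    # an empty list becomes "", which the UseTargetGroupArns condition detects
    TargetGroupArns = join(",", var.target_group_arns)
  }
}
```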
Thanks, Ioren, but it’s too late for that, I’ve managed to set it up via templates.
I find CF very hard and do not understand it at all.
Thanks a lot!
the CF parameter for the target group looks like this:
TargetGroupArns:
  Default: ''
  Description: >-
    Comma-separated string of Target Group ARNs to associate with the
    Autoscaling Group; conflicts with LoadBalancerNames
  Type: CommaDelimitedList
Empty strings didn’t work for me for example, I’ve tried that
i guarantee it works, been doing this for years
i’m literally copying from a template i already have
damn, wish I had this 2 days ago, spent my whole weekend on it. Are you also creating ASG via CF from Terraform?
but i totally agree on CF being difficult and hard to understand, especially as the use case becomes more advanced
yes, the ASG is defined in CFN, for exactly the kind of use case you describe, where we want to use the CFN UpdatePolicy
, which is a service-side feature of CFN that terraform on its own cannot implement…
here’s the template. pick out what you need, ignore what you don’t… feel free to ask questions if you need a hand… https://github.com/plus3it/terraform-aws-watchmaker/blob/master/modules/lx-autoscale/watchmaker-lx-autoscale.template.cfn.yaml
Terraform module for Watchmaker. Contribute to plus3it/terraform-aws-watchmaker development by creating an account on GitHub.
This is gold Ioren! Thanks, I’ll save this for future. But it looks like template works also for my use!
I just need to sort out some evaluation, but it works with a little hard coding!
Thanks again
I’ve managed to set it up with template. Here’s part of template file:
%{ for tg in tgs ~}
TargetGroupARNs: [${tg}]
%{ endfor ~}
And in templatefile function I have:
tgs = try(tolist([internal_tg.*.arn[0]]), [])
So if there is no internal_tg, it will skip TargetGroupArns!
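Put together, the surrounding templatefile call might look roughly like this (a sketch; the tgs expression and template path are from the snippets above, the rest is assumed):

```hcl
# sketch: rendering the CF template with an optional target group list
resource "aws_cloudformation_stack" "autoscaling_group" {
  name = "${var.service_name}-asg"

  template_body = templatefile("${path.module}/templates/cf-asg.tpl", {
    # empty list => the %{ for tg in tgs } block emits nothing,
    # so the TargetGroupARNs line is skipped entirely
    tgs = try(tolist([internal_tg.*.arn[0]]), [])
    # ...other vars from the earlier data "template_file" block...
  })
}
```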
that works also… we made it a point to avoid templating in the CF, so folks could, if they want, just use the CF directly without terraform
Lost 3 days. But, as I said, I find CF very hard to debug, and to be honest I am a little afraid. But I guess some people will use your module. I would if I knew.
data "template_file" "cf"
is deprecated in favor of templatefile
why are you using a template file to dynamically create a cf stack using tf…
why not just create the resources purely in tf
Let me explain man, since I’m doing it for 3 days straight
First of all, I am creating CF stack for ASG because I need rolling updates, and CF can bump max number of instances on the fly
isnt there a way to do that in pure tf ?
nope man
not yet
and there is one stupid CF property that can be a list or a value “AWS::NoValue” which tells CF to skip that property. But the thing is - in some cases I need to set a list, in other cases a string…
so my last resort is a template, I can iterate the list and, if empty, skip a line in the template file
oh man i didnt know that
is there a module that does all this for you
cause that would be amazing
Nope, only one page on the whole internet: https://medium.com/@endofcake/using-terraform-for-zero-downtime-updates-of-an-auto-scaling-group-in-aws-60faca582664
A lot has been written about the benefits of immutable infrastructure. A brief version is that treating the infrastructure components as…
@endofcake
@endofcake saved my ass a year ago, but I wanted to improve to have ALB healthchecks, and also to set grace period to 300 secs
I already have that set up, but I want to have ALB healthchecks in place
You might be better off doing a blue green swap of the ASGs then
unless you have a hard requirement for the rolling update
or another alternative, update the ASG with your new config but use the instance-refresh CLI command to do the rolling update
Terraform supports instance refresh now
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
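For reference, a sketch of what the instance_refresh block looks like on an ASG (values and names here are illustrative, not from the thread):

```hcl
# sketch: rolling updates via the ASG's native instance refresh
resource "aws_autoscaling_group" "example" {
  name                 = "example-asg"
  min_size             = 2
  max_size             = 4
  launch_configuration = aws_launch_configuration.example.name
  vpc_zone_identifier  = var.subnet_ids

  instance_refresh {
    strategy = "Rolling"

    preferences {
      min_healthy_percentage = 90  # keep 90% of capacity in service
      instance_warmup        = 300 # seconds before a new instance counts as healthy
    }
  }
}
```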
Hi~ Anyone have experience/recommendations keeping your Terraform code DRY? Like how Terragrunt does it, but using Terraform Cloud, Scalr, or Spacelift? We have a few environments we “promote” infrastructure changes to (dev –> test –> prod) and would like to get away from “copying” the same terraform code/modules. I notice env0 has support for terragrunt, but want to know what others have done
TIA!
When I last looked at this, I ended up doing something similar to https://terraspace.cloud/ and the roles/profiles pattern from puppet. The general ideas are:
• app (your tf code for your application)
• stack (your environment)
• config (it has the per-environment setup)
• modules (shared modules across multiple projects)
The difference between app
and modules
for me is that the app defines the infrastructure specific to your application, whereas a module can be shared across multiple apps (for example a tagging module or a label module).
The stack
contains the instantiation code for the app per environment. This gets duplicated across multiple environments, depending on the parameters passed.
I haven’t had much time since then to review this pattern, but hopefully it can help a bit.
The Terraform Framework
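A rough sketch of how a stack instantiates the app per environment under this pattern (paths, variables, and file names are illustrative assumptions):

```hcl
# stacks/prod/main.tf -- per-environment instantiation of the shared app
module "app" {
  source = "../../app"

  environment = "prod"

  # per-environment setup pulled from the config/ directory
  settings = yamldecode(file("${path.module}/../../config/prod.yaml"))
}
```

The stack directory gets duplicated per environment, but the app and shared modules stay single-sourced.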
You still need to use modules, calling the same module multiple times with different values; IMHO that is still DRY code, it’s like calling the same function in your app code. I tried various ways to implement my IaC, all of them have pros and cons. See what fits your needs. I would start with the question: mono repo vs multiple repos for architecting my IaC?
Then you can choose the tools: Terraform vs Terragrunt vs [env0 - scalr - spacelift - TFC]; choosing the tool will affect how you lay out your repo/repos.
Personally I would go with Vanilla Terraform, and stacks approach like:
• App stack
• Data stack
• Network stack
I called it a micro stacks
approach - for different envs, sure, the code isn’t fully DRY, but that gives me more control over my envs and that’s fine with me
@Bob I have two recommendations A terraform book from the author of terragrunt https://www.amazon.com/Terraform-Running-Writing-Infrastructure-Code-ebook-dp-B07XKF[…]/dp/B07XKF258P/ref=mt_other?_encoding=UTF8&me=&qid=1612814961 and an article/video https://www.hashicorp.com/resources/terraform-workflow-best-practices-at-scale
What is the optimal HashiCorp Terraform workflow as you get more teams within your organization to adopt it?
@Mohammed Yahya - the approach you described reminds me of terraservices - https://www.hashicorp.com/resources/evolving-infrastructure-terraform-opencredo
Is it similar or am I misunderstanding your approach ?
Nicki Watt, OpenCredo’s CTO, explains how her company uses HashiCorp’s stack—and particularly Terraform—to support its customers in moving to the world of CI/CD and DevOps.
@Patrick Jahns the slides looks very helpful explaining how anyone starting with Terraform evolve into his own patterns, also read this book, it’s awesome I strongly recommend it https://infrastructure-as-code.com/book/
Exploring better ways to build and manage cloud infrastructure
Thanks for sharing - will add it into my reading list. Just thought your approach sounded similar to the terraservice approach
Thanks I’m glad you like it, actually my journey was similar to them
I suppose we all go through different (similar) stages of learning - by being part of communities like these here I try to skip some of the learnings - only to find myself tipping over some of the pain points eventually at a different stage
2021-02-08
if i could beg for a favor and get some folks to the linked issues and the pr, i would truly be grateful… https://github.com/hashicorp/terraform-provider-aws/issues/4426#issuecomment-775504542
Terraform Version +$ terraform -v Terraform v0.11.7 + provider.aws v1.16.0 Affected Resource(s) aws_iam_user_policy_attachment aws_iam_group_policy_attachment aws_iam_role_policy_attachment Expecte…
I may be misunderstanding, but isn’t this what aws_iam_policy_attachment
does?
Specifically, if you look at the documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment
This means that even any users/roles/groups that have the attached policy via any other mechanism (including other Terraform resources) will have that attached policy revoked by this resource.
aws_iam_policy_attachment manages all attachments of a policy, but does not manage all the policies attached to a role. it’s flipped
i care more about the role than where the policies are attached. and being able to detect drift if someone comes along and attaches a new policy to the role and thereby changes its permission-set
Hi everyone! I have a strange issue and wonder whether any of you have encountered it or managed to solve it.. I deploy an EKS cluster with fargate profiles using terraform, and this works perfectly the first time round. Then I issue a TF destroy and all resources are gone, so far so good. Now, when again applying the TF scripts, with the same cluster name, the creation gets stuck on creating fargate profiles.. as if something is hindering AWS from recreating the same fargate profile names (which have been correctly deleted by TF): module.eks.module.fargate.aws_eks_fargate_profile.this[“default”]: Still creating… [44m50s elapsed] Is this a bug or is there a workaround for this? Often I can see that the profile got created for the cluster, yet TF is somehow not “seeing” that the creation is complete…
you might need to run with trace logging on so you can see what API request/response data the AWS provider is sending. Perhaps there’s a bug and it’s not looking for the same resource you are seeing in the console
please don’t double post questions in different channels. You can link to a message instead to consolidate responses
2021-02-09
Hi Everyone! Is anyone using porter.sh in prod? Specifically as a bridge between terraform and helm?
is there a way in terraform-compliance to test outputs?
HI all, I am using this module : https://github.com/cloudposse/terraform-aws-tfstate-backend I would like to create this : terraform_state_file = "s3state/var.tier/terraform.tfstate"
where var.tier is a variable. The statefile is stored as such then: s3state/test/terraform.tfstate
. The variable is tier=test
Terraform module that provision an S3 bucket to store the terraform.tfstate
file and a DynamoDB table to lock the state file to prevent concurrent modifications and state corruption. - cloudposse…
If that variable is within a string you’ll need to use ${var.tier}
:
terraform_state_file = "s3state/${var.tier}/terraform.tfstate"
thanks Andy, I will check
wee that works: + “terraform_state_file” = “s3state/test/terraform.tfstate”
With the above module the s3 state backend is configured properly, thanks all for this excellent module
How do you manage multiple state files, do you generate the backend files by hand ?
Is there a simple ‘how to’ on using terraform-aws-cloudfront-s3-cdn? The .tf under examples/complete doesn’t seem to run for me when I change the relevant parameters to match a brand new AWS setup (AWS hosted domain with R53)? I haven’t been able to find one but then again I’m fried
Best bet is to share the output for the error you’re getting
All our modules are continually tested with integration tests to verify they work
I understand that. I’m bumping up against an issue of not knowing exactly what’s required to get a working setup. I set logging_enabled to “false” but it still complains of not being able to create the logging s3 bucket
Thank you for the reply BTW
Please share the literal example/error. E.g. https://sweetops.slack.com/archives/CUJPCP1K6/p1612902288011800
@Erik Osterman (Cloud Posse) Hi all,
I am getting the following from the ECS web app module using webhooks. I am guessing it’s coming from the webhooks module. It seems there are breaking changes with the GitHub provider.
Warning: Additional provider information from registry
The remote registry returned warnings for
registry.terraform.io/hashicorp/github:
- For users on Terraform 0.13 or greater, this provider has moved to
integrations/github. Please update your source in required_providers.
Error: Failed to query available provider packages
Could not retrieve the list of available versions for provider
hashicorp/github: no available releases match the given constraints ~> 2.8.0,
3.0.0
Errors are here: https://pastebin.com/7YNtjPXb
Pastebin.com is the number one paste tool since 2002. Pastebin is a website where you can store text online for a set period of time.
This is the code; https://github.com/gcw/foosa/tree/main/terraform-aws-cloudfront-s3-cdn-master/examples/complete
Contribute to gcw/foosa development by creating an account on GitHub.
So this is in no way a terraform problem. The bucket already exists… often this happens if you provision a root module without a state backend
the example above doesn’t include a state backend, increasing the odds you’ll accidentally reprovision the same resource
HCP provides free state backends for terraform. That will be the easiest way to get up and running.
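A minimal S3 state backend sketch, so re-runs share the same state instead of reprovisioning (bucket, key, and table names here are hypothetical):

```hcl
# sketch: an S3 state backend with DynamoDB locking
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"
    key            = "cloudfront-s3-cdn/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"
  }
}
```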
Ok. Thank you
hot off the press: https://github.com/cloudposse/terraform-aws-sso
cc: @Mohammed Yahya
Terraform module to configure AWS Single Sign-On (SSO) - cloudposse/terraform-aws-sso
Now just need to figure out how to take our non-Terraform-managed-AWS-SSO configuration and migrate it
@Erik Osterman (Cloud Posse) Awesome !! very clean and simple
@Yoni Leitersdorf (Indeni Cloudrail) create terraform templates and terraform import
them; I did it last week, here is a sample script I used:
terraform import module.permission_set_power_access.aws_ssoadmin_permission_set.this arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
terraform import module.permission_set_power_access.aws_ssoadmin_account_assignment.this\[0\] zzzzzzzzzzz-4edbad0a-1509-4f26-8876-aaaaaaaaaaaaa,GROUP,1111111111111111,AWS_ACCOUNT,arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
terraform import module.permission_set_power_access.aws_ssoadmin_account_assignment.this\[1\] zzzzzzzzzzz-4edbad0a-1509-4f26-8876-aaaaaaaaaaaaa,GROUP,222222222222,AWS_ACCOUNT,arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
terraform import module.permission_set_power_access.aws_ssoadmin_managed_policy_attachment.this\[0\] arn:aws:iam::aws:policy/PowerUserAccess,arn:aws:sso:::permissionSet/ssoins-XXXXXXXXXXXXXXX/ps-yyyyyyyyyyyyyyyy,arn:aws:sso:::instance/ssoins-XXXXXXXXXXXXXXX
I took a slightly different approach when creating my module to combine assignments and permission sets all in one. https://github.com/glg-public/terraform-aws-single-sign-on
Terraform module to provision AWS SSO permission sets, assignments, managed and inline policies. - glg-public/terraform-aws-single-sign-on
@matt would it make sense to do this maybe in the root of our terraform-aws-sso
module? … combining the submodules?
well that’s nifty, tailscale has a community terraform provider already… https://registry.terraform.io/providers/davidsbond/tailscale/latest/docs
Thank you for sharing, I have been looking into an openVPN replacement, I took a look at Hashicorp Boundary, but I chose AWS Client VPN; this one looks awesome though, I will give it a shot
@Mohammed Yahya out of interest what EC2 instance type do you use for OpenVPN? We’re using a t3.small
with around 50-80 users connecting throughout the day. We’ve had latency issues reported over lockdown (!) but looking at network monitoring from the server, nothing stands out as an obvious cause. I’m considering just trying to increase the instance type to a t3.medium
or t3.large
to get better baseline network performance.
@Andy I’m using the managed AWS VPN service. AWS Client VPN. way much better performance than ec2 openvpn and more secure
OK. Looks expensive though Or are there smart ways to manage that?
# 100 users
# $0.05 per user per hour
# 8 working hours in a day
# 253 working days in a year
100 * 0.05 * 8 * 253 = $10,120 per year
yes it is, but worth it for the simplicity, performance and integration with SSO like Okta or AWS SSO
or check Tailscale, I just learned about it today
follow @Tailscale on twitter also, the devs are quite active
Unfortunately for me, Tailscale got declined during a secuirty review by a client’s auditing team so I don’t get to use it with that client. But this will be sweet, because I was hoping to have a better way to manage those ACLs than just copy / pasta a json document around.
One day.
bummer on the security review did they at least say what they would have needed to accept it? i wonder if the tailscale team would be interested in that?
Tailscale failed the security review because one of the founders refused to fill out a 90-question security review questionnaire. I talked with him about it and he just said they didn’t have the time, and my client’s team just refused to try after that and considered them too small of a company. It was a sad way for them to get rejected.
Help!!
i got you, https://lmgtfy.app/
For all those people who find it more convenient to bother you with their question rather than search it for themselves.
i kid of course! put the problem out there, this community is awesome
AhhhA! Caution, I’ve spent the last 3 days between google and cloudposse git, I could bite
I really hope somebody can help me with the cloudposse EKS cluster.. I really don’t know why… first time, it creates the cluster.. second apply…
Error: the server is currently unable to handle the request (get configmaps aws-auth)
It’s SO annoying.. reading the TRACE, it seems to be trying to call localhost (?) which answers 503…
HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 299
Content-Type: text/html; charset=iso-8859-1
Date: Wed, 10 Feb 2021 00:25:35 GMT
Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.2k-fips PHP/5.4.16
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
</body></html>
All this seems related to the F. map_additional_iam_roles
and also map_additional_iam_users
(tried both)
That unauthorized thing seems to be related to module.eks_cluster.kubernetes_config_map.aws_auth[0]
This, if I set kubernetes_config_map_ignore_role_changes
to true
If I set it to false
, then the module is instead module.eks_cluster.kubernetes_config_map.aws_auth_ignore_changes
I’ve tried almost everything, but I really don’t understand why it runs perfectly, but if I run a refresh, an apply, or a destroy, it throws that error and byebye
Tried.
I’ve forgotten to say, I’m not using any other module than cloudposse eks cluster and node-groups
now i feel bad about speaking up so flippantly, because i know nothing about eks
Oh don’t worry, me too
but, i know one big difference between an initial apply and subsequent terraform actions, is that terraform will actually attempt to describe the running resources and compare them to the config. so i’d guess it is that part of the execution that is throwing the error. i have no idea how to use that to help you though
Yep, that’s correct but I didn’t change anything between the two applies (or the apply and the refresh). It’s clearly written in the readme to change kubernetes_config_map_ignore_role_changes
if I want to change the nodes, or the users, but I don’t want to; it’s throwing that error even if I run tf apply -auto-approve && tf refresh
I really don’t know what to do, I’ve tried reading the code of the module, but it looks fine to me
@Erik Osterman (Cloud Posse)… I know you know the answer…
i would recommend threading, at least, to give others a chance with their own questions…
Oops.. You’re right
it’s hard to pick where to start a thread when a lot of convo happens in the channel, but you can be explicit about starting a thread, just post start thread here, or something and sorry i can’t help more with the eks problem. there are quite a few eks users here though, so i do think someone will be able to help
there is also #kubernetes, and sometimes cross-posting can help, in moderation
Cool, thanks I’ll keep this thread
Yeah I really hope somebody can help, I really don’t know what else to try, the next step would be to fork the module git and try to fix it but I don’t want to end up using my repo and anyway I have the feeling the fix is simple
Let’s see into kubernetes channel
Why not bombard everyone with my problems after all
it is very easy to fork and use your own work with git:// sources, so that is very viable
For testing yes, to use it at work not so much
and also sets you up to pr the fix, if you figure out it is a bug upstream!
create a work org, and fork it there, then use that for work
Trust me, If I don’t find an easy fix, I will do exactly that
I need these modules to work, I’m not going to rewrite the whole thing to change them
hello - has anyone come up with a solution to use a list of instance ids in the target_id for the resource https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lb_target_group_attachment#target_id
I find it annoying to have to create multiple lb_target_group_attachment resources, one for every instance
2021-02-10
Hi… Is the aws-ssm-iam-role supposed to work with terraform 0.12?
Terraform module to provision an IAM role with configurable permissions to access SSM Parameter Store - cloudposse/terraform-aws-ssm-iam-role
modules without automated tests have not been upgraded
I’m getting this error:
warning: Quoted type constraints are deprecated
on .terraform/modules/ssm_role.label/variables.tf line 19, in variable "delimiter":
19: type = "string"
Terraform 0.11 and earlier required type constraints to be given in quotes,
but that form is now deprecated and will be removed in a future version of
Terraform. To silence this warning, remove the quotes around "string".
(and 13 more similar warnings elsewhere)
But, looks like this module isn’t updated..
Hello. I’m trying to use the CIS config rules module but getting an error that no source URL was returned when running terraform init with the latest version(0.14.6). I am using the URL defined in the example, https://github.com/cloudposse/terraform-aws-config.git//modules/cis-1-2-rules?ref=master. The module URL seems correct based on the terraform docs so I’m not sure if this is an issue with the repo or with terraform…
This module configures AWS Config, a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. - cloudposse/terraform-aws-config
@matt
I would suggest using the Terraform module syntax to simplify…
module "config_cis-1-2-rules" {
  source  = "cloudposse/config/aws//modules/cis-1-2-rules"
  version = "0.7.2"

  # insert the 11 required variables here
}
Awesome that works, thanks!
v0.15.0-alpha20210210 0.15.0 (Unreleased) BREAKING CHANGES:
The list and map functions, both of which were deprecated since Terraform v0.12, are now removed. You can replace uses of these functions with tolist([…]) and tomap({…}) respectively. (#26818)
Terraform now requires UTF-8 character encoding and virtual terminal support when running on…
Prior to Terraform 0.12 these two functions were the only way to construct literal lists and maps (respectively) in HIL expressions. Terraform 0.12, by switching to HCL 2, introduced first-class sy…
please anyone?
use for_each? still technically multiple resources, but you don’t have to spell them out individually
thx @loren i will check this out.
but wouldn’t it fail since target_id only accept a single value (string)?
with for_each, the entire resource is duplicated, and the attribute remains a string:
resource "aws_lb_target_group_attachment" "test" {
  for_each = toset([<your list of instance IDs>])

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = each.key
}
i see
thx for the knowledge
if you are creating the instances in the same state, do not use the instance ID in the for_each expression. instead use an identifier that maps back to each instance:
resource "aws_instance" "test" {
  for_each = toset(["foo", "bar"])
  ...
}

resource "aws_lb_target_group_attachment" "test" {
  for_each = toset(["foo", "bar"])

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = aws_instance.test[each.key].id
}
note how the for_each expression is using the same keys for both resources, and how in the attachment we index into the instance resource object
I originally attempted to use the data source below and feed that into the target_id. i guess I can still do this and feed that into the for_each ?
data "aws_instances" "test" {
  instance_tags = {
    env = "stage"
  }

  filter {
    name   = "tag:name"
    values = ["xyz*"]
  }
}
technically data sources are ok, as long as they do not themselves depend on resources created in the same tfstate
got it
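Wiring that data source into the earlier for_each could look like this (a sketch; aws_instances exposes the matched instance IDs via its ids attribute):

```hcl
# sketch: attach every instance matched by the data source
resource "aws_lb_target_group_attachment" "test" {
  for_each = toset(data.aws_instances.test.ids)

  target_group_arn = aws_lb_target_group.test.arn
  target_id        = each.key
}
```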
Our team runs terragrunt modules locally, what are the best solutions/practices to run modules in a more unified pattern? Note we have an S3 bucket + DynamoDB for locking state
run it in CICD
add security checks to the pipeline also
also try #terragrunt
@Erik Osterman (Cloud Posse) thanks
Is someone still using this Lambda https://registry.terraform.io/modules/blinkist/airship-ecs-instance-draining/aws/latest?
It looks it’s not working anymore :disappointed:
Lambda logs looks something like this: Event needs-retry.autoscaling.CompleteLifecycleAction: calling handler <botocore.retryhandler.RetryHandler object at 0x7fe662775b10>
and Event request-created.autoscaling.CompleteLifecycleAction: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fe6627a99d0>>
See #airship @maarten
This tool visualizes Terraform state files! Has anyone played with Pluralith? https://www.pluralith.com/
2021-02-11
AWS Fargate is a a serverless compute engine that supports several common container use cases, like running micro-services architecture applications, batch processing, machine learning applications, and migrating on premise applications to the cloud without having to manage servers or clusters of Amazon EC2 instances. AWS customers have a choice of fully managed container services, including […]
hello, can I ask about vault? Is there a way to autogenerate missing passwords and store them on vault? So I don’t need to provide from helm/helmfile.
Hi, I have the rolling update set with Terraform and CF for ECS clusters. This is how it works:
- I have ECS cluster behind ALB
- When there is an AMI change, Terraform applies it
- ASG, which was created with CloudFormation template on Terraform, adds a new instance (this was not possible with TF module)
Here it becomes funky:
- Target group sees status of the old instance as “initial draining” 30 seconds after I run
terraform apply
- Healthchecks are failing, because, of course, the container on my new EC2 instance is not started yet and the Target Group sees it as unhealthy, but doesn’t continue to serve traffic from the old instance.
- Then I get a bunch of 503s and then 502s until the container on the new instance is up
These parts are ok:
- I have a Lambda function that drains ECS containers
- After draining finishes, the instance is killed
This worked before, when I had EC2 checks on the ASG. Now I want to use TargetGroupArns to check HTTP and to see if I’ll get a 200 and if the application is really running.
Is there any workaround on this?
Like to set draining of instances with a delay of few minutes?
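One knob worth trying for the draining delay is the target group’s deregistration_delay, plus a more forgiving health check, so in-flight requests keep being served while the new instance warms up (a sketch with assumed names and values, not a tested fix):

```hcl
# sketch: keep draining targets in play longer and give new instances
# time to pass ALB health checks before being counted unhealthy
resource "aws_lb_target_group" "ecs" {
  name                 = "example-tg"
  port                 = 80
  protocol             = "HTTP"
  vpc_id               = var.vpc_id
  deregistration_delay = 300 # seconds to keep serving in-flight requests

  health_check {
    path                = "/health"
    healthy_threshold   = 2
    unhealthy_threshold = 5
    interval            = 15
  }
}
```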
Hi guys, I was looking through the terraform-aws-ses-lambda-forwarder code as I was intrigued to see a system close to one that we devised. I see that listed under the limitations is the use of a verified domain as the sender. We use SRS to stay compliant without breaking SPF. I’ve had success using senrews to do SRS0 and SRS1 rewrites.
Another thing to note is additional cleanup of the email. SES very loosely accepts emails, however is very strict with what it sends out. You will need to clean up duplicate headers, remove DKIM signatures and return-paths, etc when forwarding . The aws-lambda-ses-forwarder has some problems with sending bounce messages and a host of other minor bugs. Just a heads up.
This is a terraform module that creates an email forwarder using a combination of AWS SES and Lambda running the aws-lambda-ses-forwarder NPM module. - cloudposse/terraform-aws-ses-lambda-forwarder
For a mail transfer agent (MTA), the Sender Rewriting Scheme (SRS) is a scheme for rewriting the envelope sender address of an email message, in view of remailing it. In this context, remailing is a kind of email forwarding. SRS was devised in order to forward email without breaking the Sender Policy Framework (SPF), in 2003.
Sender Policy Framework (SPF) is an email authentication method designed to detect forging sender addresses during the delivery of the email. SPF alone, though, is limited to detecting a forged sender claim in the envelope of the email, which is used when the mail gets bounced. Only in combination with DMARC can it be used to detect the forging of the visible sender in emails (email spoofing), a technique often used in phishing and email spam. SPF allows the receiving mail server to check during mail delivery that a mail claiming to come from a specific domain is submitted by an IP address authorized by that domain’s administrators. The list of authorized sending hosts and IP addresses for a domain is published in the DNS records for that domain. Sender Policy Framework is defined in RFC 7208 dated April 2014 as a “proposed standard”.
Sender Rewriting Scheme module for emails
Serverless email forwarding using AWS Lambda and SES - arithmetric/aws-lambda-ses-forwarder
Can I get some guidance on the difference between terraform-aws-eks-workers and terraform-aws-eks-node-group ? They both seem very similar and both are actively being maintained. When should we use one over the other?
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
one is considered self-managed and uses ASGs; the other is for AWS managed node pools
Thanks, got it. Despite my comment, I prefer to use workers, but the nodes aren’t joining the cluster. I’ve tracked it down to this error:
Tag "KubernetesCluster" nor "kubernetes.io/cluster/..." not found; Kubernetes may behave unexpectedly
The "kubernetes.io/cluster/${var.cluster_name}" = "owned" tag is not propagating to my nodes.
Is this a bug or user error?
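For anyone hitting the same thing, a hypothetical sketch of the intent (the module source and input names here are assumptions, not the module's verified API):

```hcl
# Hypothetical sketch only -- the workers module's exact input names may differ.
# The idea: the cluster ownership tag must reach the worker instances via the
# module's tags input so the kubelet/cloud-provider can identify the cluster.
module "eks_workers" {
  source       = "cloudposse/eks-workers/aws"
  cluster_name = var.cluster_name

  tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "owned"
  }
}
```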
@Erik Osterman (Cloud Posse) QQ: what module you advise to add if I need autoscaling with node-group like the workers module does? https://github.com/cloudposse/terraform-aws-ec2-autoscale-group ? Apparently node-group module isn’t creating scaling policy, or I’m misunderstanding how that works?
autoscaling in kubernetes requires a controller. Doesn’t matter which node group flavor.
look up the aws kubernetes cluster autoscaler
Thanks for taking the time to answer :slightly_smiling_face: I will check that, anyway my issue was my blame eventually, I’ve deployed the autoscaler using the helm chart and I didn’t set the region
I’ve tried both. eks-node-group seems to work better for me, but wondering what the experience is like for others.
is this correct?
dynamic "custom_header" {
for_each = lookup(origin.value, "custom_header", [])
content {
name = custom_header.value.name
value = custom_header.value.value
}
}
the question is more whether custom_header.value.value might be using a reserved keyword?
I wonder if this has to be something like custom_header.value.custom_header_value or something like that
This is tf 0.12.24
I figured it out
the other items need to have all the values
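For context, a sketch of what "all the values" means here (list contents are illustrative): every object iterated by the dynamic block must carry every key the block body references.

```hcl
locals {
  # Illustrative only: the dynamic block reads custom_header.value.name and
  # custom_header.value.value, so every item must define both keys.
  custom_headers = [
    { name = "X-Env",   value = "dev" },
    { name = "X-Owner", value = "platform" }, # omitting "value" on any item fails
  ]
}
```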
congrats to @marcinw @Paweł Hytry - Spacelift https://techcrunch.com/2021/02/11/cloud-automation-startup-spacelift-raises-6m-series-a-led-by-blossom-capital/
Spacelift, a startup that automates the management of cloud infrastructure, has raised $6 million in a Series A funding round led by London’s Blossom Capital. Polish fund Inovo Venture Partners and Hoxton Ventures are also investing. The Polish and U.S.-based startup is taking advantage of the oppo…
it’s @marcinw
eek thanks!
Manatee alerts you the instant your infrastructure drifts from Terraform. It's free to use, supports all major clouds, and takes minutes to set up.
As much as I like such an idea, I think I could never trust a SaaS with RO credentials to the whole AWS account. This could quickly go wrong and be exploited. Are there any open-source solutions like this?
I agree, I tested it, it’s early alpha, check driftctl
Learn how to use driftctl in a real-life environment, with multiple Terraform states and output filtering.
2021-02-12
one way to keep secrets out of your state file https://secrethub.io/docs/guides/terraform/
A step-by-step guide to manage secrets in Terraform.
what’s the difference vs. using a data source with e.g. SSM Parameter Store?
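For comparison, the SSM Parameter Store approach looks roughly like this (the parameter name is made up). One difference worth noting: as I understand it, a value read through a data source is still recorded in the Terraform state file.

```hcl
# Sketch of the SSM Parameter Store alternative (parameter name is made up).
# Note: values read through a data source still end up in the state file.
data "aws_ssm_parameter" "db_password" {
  name            = "/myapp/db_password"
  with_decryption = true
}

# used elsewhere as: data.aws_ssm_parameter.db_password.value
```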
Are the Terraform modules https://github.com/cloudposse/terraform-null-label and https://github.com/cloudposse/terraform-terraform-label aimed at usage in any Terraform module or specifically made for Cloud Posse modules? I’m asking this because the docs mention context.tf files that are part of all Cloud Posse modules that use terraform-null-label
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-null-label
Terraform Module to define a consistent naming convention by (namespace, stage, name, [attributes]) - cloudposse/terraform-terraform-label
I use null-label and context.tf also in some non-cloudposse things where it makes sense to be able to pass just that (the full context) between various modules. It’s neat.
Ah ok. But you can also just use it without context.tf, right? Or does that not make any sense? Though in that case one would have to define stuff like the namespace and so on as variables manually
I add context.tf to every module I want to be able to accept incoming context
…
It does both the variable + output additions all-in-one, which is the neat thing about it.
Alright, thanks for the help
This probably doesn’t mean much to you but an example:
module db_context {
source = "../db-context"
kubernetes_cluster = var.kubernetes_cluster
cluster_identifier = var.cluster_identifier
}
module service_context {
source = "../service-context"
context = module.this.context
delimiter = "_"
kubernetes_cluster = var.kubernetes_cluster
}
resource random_password default {
length = 32
special = false
}
resource mysql_database default {
name = module.service_context.id
}
# ...
all of the involved modules have the context.tf copied in. Makes it really convenient.
I was asking because I’m setting up the infrastructure for a product where every env/stage has its own account, and I was wondering if I should adapt my TF code to use null-labels. It kind of becomes redundant to still have the environment in the id values when everything is already separated by account.
more details here: https://archive.sweetops.com/search?query=terraform-terraform-label
SweetOps is a collaborative DevOps community. We welcome engineers from around the world of all skill levels, backgrounds, and experience to join us! This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build sweet infrastructure.
the context.tf pattern is a game changer for working with a lot of modules.
We’re probably at or near the point we can sunset terraform-terraform-label. I believe 100% of our HCL2 modules use terraform-null-label now, so we can use the context.tf pattern. @Maxim Mironenko (Cloud Posse)?
terraform-provider-aws v3.28.0
- +7 NEW FEATURES including aws_securityhub_organization_admin_account
- +35 NEW ENHANCEMENTS
- +10 BUG FIXES
https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.28.0
FEATURES: New Data Source: aws_cloudfront_cache_policy (#17336) New Resource: aws_cloudfront_cache_policy (#17336) New Resource: aws_cloudfront_realtime_log_config (#14974) New Resource: aws_confi…
@matt heads up
2021-02-13
Hello guys, this might have been asked before, but what criteria do you guys use when evaluating if something needs to be created as a module?
I recently joined a company with 6 cloud engineers who have been discussing maturing their terraform deployment, and modules have been brought up. The -legacy- engineers wanted to create a module for everything, even simple ones, for example azure resource groups, and the arguments were:
- Takes me 10 mins to write it anyway
- I can make it accept a comma-delimited name, and it creates multiple resource groups for me
- If you want to create 1 resource group, the module can handle it anyway
- I can ask for required tags on the resource groups, and I’m sure we’re going to need something else on those resource groups in the future
Our goal is to eventually allow our app dev teams to create their terraform code to deploy their infrastructure for their apps. They originally managed the deployment by creating standalone deployments for each resources - like 1 deployment for resource group, 1 for SQL PaaS, 1 for storage account - all separate repositories and “pipeline”. We would like to move to more application-based repositories that contains all the terraform code/infrastructure needed for the said application (shared services infrastructure like AKS will be separately managed)
I feel this is a case of over-engineering/YAGNI, but being new, I may be biased. I don’t feel simple/standalone terraform resources should have another wrapper (module) on top. Is there a compelling reason why this pattern can bite us in the future (aka a very bad idea)?
• does it create multiple resources that all work together to create a final “thing”
• am I going to use this again
• does it need standardization of names, tags, etc
Similar logic to converting code to a library. The first time, write it in-place, hardcoded. The second time, copy and paste. The third time, consider common code.
If we need to change this later, what are the extension points and possible migration options (tfstate?)
We haven’t wrapped any Cloud Posse modules but considered it… until for_each for modules came along, so now we definitely won’t.
Takes me 10 mins to write it anyway
if it takes 10 mins, then there is no need for such a module
a good useful module takes a day to write + examples, tests, docs etc.
and since modules are supposed to be reusable in many projects by many people, there is no way around that ^
otherwise it will create more problems than it solves
Modules give you an opportunity to write tests and version smaller components of your infrastructure. I feel that is very valuable. But I would recommend reusing high quality community modules as much as possible. As @Andriy Knysh (Cloud Posse) mentions, maintaining good modules takes a fair amount of work, and specialization/experience in some obscure terraform details that may not add business value for you
yes, it’s similar to any other programming language. In your own code, you use functions to aggregate some common functionality and make the rest of the code DRY(er). But functions in a public library are completely diff things, they are for public consumption and everything else matters even more than the code itself (docs, examples, tests, tutorials, etc.)
If we need to change this later, what are the extension points and possible migration options (tfstate?)
Well, there are good manual escape hatches for converting a resource from manually managed to part of a repo, eg terraform state mv
. The workflow looks like this:
- Migrate your TF code from aws_rds_cluster resources to cloudposse-rds-cluster module
- Run
terraform plan
and see what your RDS resources get shifted from, eg fromaws_rds.cluster.main
tomodule.rds.aws_rds_cluster.primary[0]
- Run
terraform state mv $old $new
and then re-runterraform plan
to see how many changes the module still wants to make. You’ll often find modules want to change things that require a rebuild, like the name or other important stuff. If you own the module it’s easy to add lifecycle directives to ignore changes in name. If you don’t own the module, this brings me to my more important point:
It’s not always worth migrating existing pre-module uses to use the module, unless the fit is 100%. Nothing is worse than an in-house module with 100 options each used by a single consumer.
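The three migration steps above, as a rough shell transcript (the resource addresses are illustrative, not from a real plan):

```shell
# Rough sketch of the migration steps above; addresses are illustrative.
terraform plan        # note the old vs new addresses in the diff

terraform state mv \
  aws_rds_cluster.main \
  'module.rds.aws_rds_cluster.primary[0]'

terraform plan        # re-check what the module still wants to change
```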
2021-02-14
2021-02-15
Hey guys! I’ve got a VPC that some other team has made via terraform. Can I define a vpc module and pass the vpc id into it to add a few more subnets?
Use one of the subnet modules
A vpc module will make a new vpc
Oh I never thought of that
Let me try to use the subnet modules
I am looking at 2 cloudposse subnets
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.
as far as I can tell, they are identical to a large degree
which one should I use?
Use whichever fits your use case best. I usually use the dynamic one
ah got it thanks
unrelated: @roth.andy do you think we should create a terraform-aws-static-subnets module for manually defining subnets?
Is that not what the named subnets module does?
Idk I’ve never used it
2021-02-16
Maybe here is someone who encountered something like that?
Hey all! Having an issue with running terraform init
/ terraform plan
as service account on Google Cloud. It has necessary rights for backend bucket where state is stored. Have authenticated against GCP with SA key and account is set as default.
gcloud auth list
output:
Credentialed Accounts
ACTIVE ACCOUNT
* [email protected]
[[email protected]](mailto:[email protected])
gcloud config list
output:
[compute]
region = us-east1
zone = us-east1-d
[core]
account = [email protected]
disable_usage_reporting = True
project = project
Your active configuration is: [default]
When I run terraform init
/ terraform plan
then it’s run using [[email protected]](mailto:[email protected])
instead of the SA (that I see from the activity log in the GCP console about infra bucket access). Anyone had something similar and could advise what to do and where to proceed? Any help would be appreciated. Tried a couple of suggestions from what I found on the net already, but no luck.
Solved - using the GOOGLE_APPLICATION_CREDENTIALS env variable pointing to the SA key file.
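Concretely, the fix looks like this (the key path is illustrative):

```shell
# Point the Google provider at the service-account key file explicitly.
# (Key path is illustrative.)
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/terraform-sa.json"
terraform init
terraform plan
```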
Hi, I have an EC2 cluster, and there are multiple Name tags associated with the cluster instances. I want to fetch these Name tags and pass them to a module that accepts a list of EC2 instances. Any suggestions?
Congrats to the BridgeCrew folks? “Prisma Cloud Shifts Left With Proposed Acquisition of Bridgecrew” https://blog.paloaltonetworks.com/2021/02/prisma-cloud-bridgecrew/
The proposed acquisition of Bridgecrew will expand Prisma Cloud with leading Infrastructure as Code (IaC) security.
no freebies
@Mohammed Yahya No worries. Palo Alto Networks will continue to invest in Bridgecrew’s open-source initiatives as part of its ongoing commitment to the DevOps community. OSS tools are here to stay and grow, now at an even faster pace.
@barak Thanks, since I’m heavily using it.
Great to hear. I’ll continue to maintain and create those. Now under the PANW umbrella.
I’d like to use the EKS module to deploy EKS with workers in private subnet
what is the simplest method to accomplish?
I used terragrunt (terraform) to provision a VPC some time ago. But today, when I re-ran the script, I got: “Remote state S3 bucket blue-green-terraform-state does not exist or you don’t have permissions to access it.” I logged in to the AWS console, and the S3 bucket blue-green-terraform-state is there. I have no clue. Can someone help?
$ terragrunt init
[terragrunt] [/depot/infra/dev/Oregon/nsm/green/vpc] Running command: terraform --version
[terragrunt] Terraform version: 0.13.5
[terragrunt] Reading Terragrunt config file at /depot/infra/dev/Oregon/nsm/green/vpc/terragrunt.hcl
[terragrunt] Initializing remote state for the s3 backend
[terragrunt] Remote state S3 bucket blue-green-terraform-state does not exist or you don't have permissions to access it. Would you like Terragrunt to create it? (y/n)
$ cat terragrunt.hcl
remote_state {
backend = "s3"
config = {
encrypt = false
bucket = "blue-green-terraform-state"
key = "infra/Oregon/green/vpc/terraform.tfstate"
region = "us-west-2"
dynamodb_table = "green-vpc-lock-table"
}
}
$ env | grep AWS
AWS_SECRET_ACCESS_KEY=#####################
AWS_ACCESS_KEY_ID=############
doublecheck everything around the credential and bucket… for example, is the region correct? is the access key disabled/deleted? does aws sts get-caller-identity
return the expected account/user info?
$ aws sts get-caller-identity
Could not connect to the endpoint URL: “https://sts.amazonaws.com/”
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.
What does it mean?
sounds like your networking is broken
Thank you.
or your ca bundle is out of date
What is ca bundle?
And how to update if it is out dated?
ca is certificate authority. basically, your system may not “trust” the remote endpoint
Oh
how to update it depends on your system/platform
you can add the --debug flag to get more details: aws sts get-caller-identity --debug
The “aws sts get-caller-identity” returned a proper value. But it still complains the S3 bucket does not exist.
aws sts get-caller-identity
{
    "UserId": "#################",
    "Account": "############",
    "Arn": "arn:aws:iam:user/albert"
}
$ terragrunt init
[terragrunt] Terraform version: 0.13.5
[terragrunt] Reading Terragrunt config file at depot/infra/dev/Oregon/nsm/blue/vpc/terragrunt.hcl
[terragrunt] Initializing remote state for the s3 backend
[terragrunt] Remote state S3 bucket blue-green-terraform-state does not exist or you don’t have permissions to access it. Would you like Terragrunt to create it? (y/n)
Any clue?
I found out. The command, “aws sts get-caller-identity” returned to me the wrong ID. It is not the ID I use.
I have no clue how that happens.
because you have the wrong access/secret key exported into the environment
an access/secret key is tied incontrovertibly to a specific iam user and account. wrong key, wrong user, wrong account
I use Ubuntu. I exported the ACCESS_KEY_ID and SECRET_ACCESS_KEY:
AWS_SECRET_ACCESS_KEY=############################
AWS_ACCESS_KEY_ID=#################AFA
aws sts get-caller-identity
{
    "UserId": "###########BAP",
    "Account": "9999999934618",
    "Arn": "arn:aws:iam:user/albert"
}
But aws sts get-caller-identity gives the wrong ID.
then that key is not the one you think it is
2021-02-17
Hi there! Does anyone know if there would be a way to use this module but have more than one service and task definition? Having multiple services seems like a common architecture with AWS ECS - is there perhaps another module that is more suitable?
Terraform module which implements an ECS service which exposes a web service via ALB. - cloudposse/terraform-aws-ecs-alb-service-task
The container_definition_json var has this description
A string containing a JSON-encoded array of container definitions
(“[{ “name”: “container1”, … }, { “name”: “container2”, … }]”).
See AWS docs,
https://github.com/cloudposse/terraform-aws-ecs-container-definition, or
Terraform docs
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - cloudposse/terraform-aws-ecs-container-definition
So the answer is yes, you can use multiple json documents in this module
@RB Thanks for your response!
Although I am a bit confused - my understanding is that the container_definition_json var would allow multiple containers to be created for the task definition, which is not what I am asking.
A container definition is a subset of a task definition though. And then a service contains a task definition.
Container definitions are used in task definitions to describe the different containers that are launched as part of a task.
Details on a service within a cluster
The details of a task definition which describes the container and volume definitions of an Amazon Elastic Container Service task. You can specify which Docker images to use, the required resources, and other configurations related to launching the task definition through an Amazon ECS service or task.
Oh woops, my bad
For multiple services with different task definitions, wouldn’t you simply use another reference to that same module and then repeat for however many services you want?
@Thomas Windell ^
@RB You are right! I spoke to a colleague and he helped me improve my understanding of modules. Tbh I am quite new to terraform. Thanks for your help
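The "repeat the module per service" answer above, sketched out (inputs are abbreviated and partly assumed; the container-definition output name should be checked against that module's README):

```hcl
# Sketch: one module instance per ECS service, each with its own task definition.
# Inputs are abbreviated and partly assumed -- check the module's README.
module "api_service" {
  source                    = "cloudposse/ecs-alb-service-task/aws"
  name                      = "api"
  container_definition_json = module.api_container.json_map_encoded_list
  # ...cluster ARN, VPC/subnet, and ALB inputs...
}

module "worker_service" {
  source                    = "cloudposse/ecs-alb-service-task/aws"
  name                      = "worker"
  container_definition_json = module.worker_container.json_map_encoded_list
  # ...
}
```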
Hey all with the s3 user module: https://github.com/cloudposse/terraform-aws-iam-s3-user I want to do this:
Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket - cloudposse/terraform-aws-iam-…
module "s3_user" {
source = "cloudposse/iam-s3-user/aws"
label_order = ["namespace", "name", "environment", "stage", "attributes"]
namespace = "dspace"
name = var.name
environment = "s3"
stage = var.tier
s3_actions = ["s3:GetBucketAcl", "s3:GetBucketVersioning", "s3:ListBucket", "s3:GetBucketLocation"]
s3_resources = ["arn:aws:s3:::cloudposseisawesome"]
s3_actions = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
s3_resources = ["arn:aws:s3:::cloudposseisawesome/*"]
}
This does not work because the s3_actions is defined double
a bit puzzled how to do this
why dont you just wanna do this
module "s3_user" {
source = "cloudposse/iam-s3-user/aws"
label_order = ["namespace", "name", "environment", "stage", "attributes"]
namespace = "dspace"
name = var.name
environment = "s3"
stage = var.tier
s3_actions = ["s3:GetBucketAcl", "s3:GetBucketVersioning", "s3:ListBucket", "s3:GetBucketLocation", "s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
s3_resources = ["arn:aws:s3:::cloudposseisawesome", "arn:aws:s3:::cloudposseisawesome/*"]
}
aha that’s the correct syntax
now I need this for another bucket as well:
{
  "Effect": "Allow",
  "Action": ["s3:GetObject"],
  "Resource": "arn:aws:s3:::cloudposseisawesome-prod/*"
},
the user needs to have read and write permissions on its own bucket but only read permissions on the production bucket
the original policy looked like this:
ok, in that case the module doesn’t support that. Most likely you’ll have to take the user output from the module and attach a policy outside the module, probably using aws_iam_user_policy_attachment
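A sketch of that suggestion (the module’s user output name is an assumption; check the module’s outputs before using):

```hcl
# Sketch only: read-only access to the prod bucket, attached outside the module.
# module.s3_user.user_name is an assumed output name -- verify it.
data "aws_iam_policy_document" "prod_read" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["arn:aws:s3:::cloudposseisawesome-prod/*"]
  }
}

resource "aws_iam_policy" "prod_read" {
  name   = "s3-prod-read-only"
  policy = data.aws_iam_policy_document.prod_read.json
}

resource "aws_iam_user_policy_attachment" "prod_read" {
  user       = module.s3_user.user_name
  policy_arn = aws_iam_policy.prod_read.arn
}
```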
"Statement": [
  {
    "Effect": "Allow",
    "Action": [
      "s3:GetBucketAcl",
      "s3:GetBucketVersioning",
      "s3:ListBucket",
      "s3:GetBucketLocation"
    ],
    "Resource": "arn:aws:s3:::cloudposseisawesome-prod"
  },
  {
    "Effect": "Allow",
    "Action": [
      "s3:GetBucketAcl",
      "s3:GetBucketVersioning",
      "s3:ListBucket",
      "s3:GetBucketLocation"
    ],
    "Resource": "arn:aws:s3:::cloudposseisawesome-test"
  },
  {
    "Effect": "Allow",
    "Action": [
      "s3:GetObject"
    ],
    "Resource": "arn:aws:s3:::cloudposseisawesome-prod/*"
  },
  {
    "Effect": "Allow",
    "Action": [
      "s3:PutObject",
      "s3:GetObject",
      "s3:DeleteObject"
    ],
    "Resource": "arn:aws:s3:::cloudposseisawesome-test/*"
  }
]
I see
for your reference, this worked:
module "s3_user" {
  source = "cloudposse/iam-system-user/aws"
  # Cloud Posse recommends pinning every module to a specific version
  # version = "x.x.x"
  label_order = ["namespace", "name", "environment", "stage", "attributes"]
  namespace   = "dspace"
  name        = var.name
  environment = "s3"

  inline_policies_map = { s3 = data.aws_iam_policy_document.s3_policy.json }
}

data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions = [
      "s3:GetBucketAcl",
      "s3:GetBucketVersioning",
      "s3:ListBucket",
      "s3:GetBucketLocation"
    ]
    resources = ["arn:aws:s3:::dspace-${var.name}-s3-prod"]
  }
  statement {
    actions   = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::dspace-${var.name}-s3-prod/*"]
  }
  statement {
    actions   = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
    resources = ["arn:aws:s3:::dspace-${var.name}-s3-test/*"]
  }
}
Hello there! Please, I have a question: I’m using the terraform-aws-nlb module, and I’m trying to add 2 listeners to the same nlb. Is there a way to do that? Thanks!
Hey Bradai, you can create your 2nd listener and target group outside of the module. Something like this:
module "nlb" {
...
}
resource "aws_lb_listener" "your_listener_name" {
load_balancer_arn = module.nlb.load_balancer_arn
...
}
...
Thank you very much, I will try this :)
Hi, I was hoping to understand the background on the sensitive output change introduced in terraform-aws-ecs-container-definition#118.
The PR mentions an issue with the terraform-aws-ecs-alb-service-task module, but I cannot find any references or examples of the actual issue’s code or error. Are there any examples of the actual error and the use-case? While I understand 0.14’s sensitive flagging behavior, I’m confused as to what values were being used in the OP’s container definition that were flagged as sensitive and caused this issue. In my modules, all the secrets are dumped into SM/SSM Parameters and only their ARN references are exposed in the container definition. I’ve been using TF 0.14 without issue in this manner. To my knowledge, those are not sensitive values.
My concern is that sensitive outputs are infectious, for lack of a better word. Some outputs are indeed sensitive, but I don’t see how the container definitions are.
what Marks the outputs as sensitive Update workflows etc. missed by #119 why Otherwise TF 0.14 would give an Error: Output refers to sensitive values when using these outputs to feed into other …
Hi all, I am implementing replication with this module: https://github.com/cloudposse/terraform-aws-s3-bucket
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
in my original configuration I had:
replication_configuration {
role = "cloudposseisthebest-role"
rules {
id = "Replicate to DEEP_ARCHIVE on target"
priority = 0
status = "Enabled"
destination {
bucket = "arn:aws:s3:::cloudposseisthebest-role"
storage_class = "DEEP_ARCHIVE"
}
}
}
can you set the storage class with this module ?
Looks like it from the vars, and the implementation in main.tf too
it confuses me a bit
when I try:
replication_rules = "storage_class=DEEP_ARCHIVE"
it gives back:
Error: Invalid value for module argument
on main.tf line 67, in module "s3_bucket":
  67: replication_rules = "storage_class=DEEP_ARCHIVE"
Your input needs to be a list of maps, and destination is a nested map within that
you lost me here
Something like the example here for the module "subnets" section:
As part of the lead up to the release of Terraform 0.12, we are publishing a series of feature preview blog posts. The post this week is on the addition of rich value types in variables and outputs. Terraform variables and outputs today support basic primitives and simple lists and maps. Lists and maps in particular have surprising limitations that lead to unintuitive and frustrating errors. Terraform 0.12 allows the use of arbitrarily complex values for both input variables and outputs, and the types of these values can be exactly specified.
Or see the example in the repo where the grants are configured. It is a similar idea:
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
hmmm it still confuses me
when I do this:
replication_rules = [
  {
    id = "Replicate to DEEP_ARCHIVE on target"
  }
]
the plan picks it up:
+ replication_configuration {
+ role = (known after apply)
+ rules {
+ id = "Replicate to DEEP_ARCHIVE on target"
+ priority = 0
+ destination {
+ bucket        = "arn:aws:s3:::dspace-allegheny-s3-backup"
+ storage_class = "STANDARD"
}
}
}
when I try to set it:
replication_rules = [
  {
    id            = "Replicate to DEEP_ARCHIVE on target"
    storage_class = "DEEP_ARCHIVE"
  }
]
it’s not picked up
hah this does the trick:
replication_rules = [
{
id = "Replicate to DEEP_ARCHIVE on target"
destination = {
bucket = "arn:aws:s3:::cloudposse-${var.name}-is-awesome"
storage_class = "DEEP_ARCHIVE"
}
}
]
Extension for Visual Studio Code - Find and fix misconfigurations in infrastructure-as-code manifests like Terraform, Kubernetes, Cloudformation, Serverless framework, Arm templates using Checkov - static analysis for infrastructure as code .
v0.14.7 0.14.7 (February 17, 2021) ENHANCEMENTS: cli: Emit an “already installed” event when a provider is found already installed (#27722) provisioner/remote-exec: Can now run in a mode that expects the remote system to be running Windows and executing commands using the Windows command interpreter, rather than a Unix-style shell. Specify…
Emit the ProviderAlreadyInstalled event when we successfully verify that we've already installed this provider and are skipping installation. Before: $ terraform init Initializing the backend….
Question about the branches I see in several of the cloudposse TF modules:
ie. https://github.com/cloudposse/terraform-aws-rds/branches
I see master, 0.11/master, and 0.12/master branches.
is the intention to maintain separate branches for each major TF version?
update: nevermind, i see that support was dropped for 0.12 , so
master = tf0.13
0.12/master = tf0.12 , which i presume features are at standstill
0.11/master = tf0.11, which i presume was stopped back when you guys moved to 0.12
Not really - we had to do this for the HCL1 → HCL2 cut-over
but it’s too much overhead for us to manage multiple versions of our modules for backwards compatibility
Probably one for @Erik Osterman (Cloud Posse) and the rest of the posse: has there been consideration for the use of Semantic Versioning (https://semver.org/ for those readers who haven’t seen it before) for the various modules? With the recent moves around AWS provider updates and more recently the minimum Terraform versions changing, it’s been a little harder than I’d like to use pessimistic versioning to track releases without surprise breaking changes.
Hi everyone,
Does anyone know why this condition returns false, and what would be the right expression to compare with to get true?
variable "empty_list" {
type = list(string)
default = []
}
console
tf console
> var.empty_list
tolist([])
>
> var.empty_list == []
false
> var.empty_list == tolist([])
false
Looks like 0.14-specific behaviour, and not working as intended, because
> [] == []
true
> tolist([]) == tolist([])
true
however I’d say nobody has really noticed because a more intuitive test is length(var.empty_list) == 0
it might be because tolist([])
doesn’t generate an object which is the same type as an empty list(string)
don’t have this problem if I change the var to type = list() instead of list(string), but don’t wanna lose the extra data validation. there is something weird when specifying the data type in the list.
It looks like if there is at least a single element in the list, the comparison works as long as you include the ugly tolist
> var.empty_list
tolist([
"a",
])
> var.empty_list == tolist(["a"])
true
another probably-unfixable HCL wart
Yeah, it works with a non-empty list without issues. I don’t remember this problem in previous TF versions. I’m currently using v0.14.5. Thank you @Alex Jurkiewicz
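fwiw, a minimal sketch of the length() approach (reusing the variable from the example above), which sidesteps the type-equality quirk entirely:

```hcl
variable "empty_list" {
  type    = list(string)
  default = []
}

locals {
  # [] is a tuple while var.empty_list is list(string), so == can
  # return false even when both are empty; length() avoids the issue
  list_is_empty = length(var.empty_list) == 0
}

output "is_empty" {
  value = local.list_is_empty
}
```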
2021-02-18
Hi all, I am a bit confused with tagging my root volume:
resource "aws_instance" "bladibla" {
disable_api_termination = true
tags = {
"Tier" = "DEV"
"Application" = "DSpace"
"Name" = "UPGRADE-EXPRESS"
"Terraform" = "True"
"Patch Group" = "patch-dev"
}
root_block_device {
volume_type = "standard"
volume_size = 30
delete_on_termination = false
tags = {
"Application" = "DSpace"
"Data" = "HOME"
"Name" = "UPGRADE-EXPRESS-HOME"
"Tier" = "DEV"
}
}
}
resource "aws_instance" "bladibla" {
disable_api_termination = true
tags = var.tags
volume_tags = var.tags
root_block_device {
volume_type = "standard"
volume_size = 30
delete_on_termination = false
}
}
Yeah but the warning shows:
Do not use volume_tags
if you plan to manage block device tags outside the aws_instance
configuration, such as using tags
in an aws_ebs_volume
resource attached via aws_volume_attachment
. Doing so will result in resource cycling and inconsistent behavior.
which is the case:
resource "aws_ebs_volume" "UPGRADE-HOME" {
  availability_zone = aws_instance.DE-UPGRADE.availability_zone
  size              = 400
  type              = "standard"
  tags = {
    "Application" = "DSpace"
    "Data"        = "HOME"
    "Name"        = "UPGRADE-EXPRESS-HOME"
    "Tier"        = "DEV"
  }
}
is the volume already there ?
So I combine the two
ah I see
yes it’s the root volume
ok then use it in one place
well the docs show:
either in the aws_ebs_volume
or in aws_instance
yes true, so when I look here:
it shows:
• tags
- (Optional) A map of tags to assign to the device.
Yes, that’s obvious, but what you are trying to do is define the same attribute from two arguments in two different resources. that’s why a race condition will occur
but you cannot convert the root block device into a aws_ebs_volume config right ?
Currently, changes to the ebs_block_device configuration of existing resources cannot be automatically detected by Terraform. To manage changes and attachments of an EBS block to an instance, use the aws_ebs_volume and aws_volume_attachment resources instead. If you use ebs_block_device on an aws_instance, Terraform will assume management over the full set of non-root EBS block devices for the instance, treating additional block devices as drift. For this reason, ebs_block_device cannot be mixed with external aws_ebs_volume and aws_volume_attachment resources for a given instance.
so use the aws_ebs_volume and aws_volume_attachment resources and add the tags there, not in aws_instance
good
when I do this, the tool refuses with:
tags is not expected here
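for reference, a rough sketch of the split-out approach the docs describe (the resource labels and device_name here are made up; the tags are the ones from the example above):

```hcl
# tags live on the volume resource itself, not on the instance
resource "aws_ebs_volume" "home" {
  availability_zone = aws_instance.bladibla.availability_zone
  size              = 400
  type              = "standard"

  tags = {
    "Application" = "DSpace"
    "Data"        = "HOME"
    "Name"        = "UPGRADE-EXPRESS-HOME"
    "Tier"        = "DEV"
  }
}

resource "aws_volume_attachment" "home" {
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.home.id
  instance_id = aws_instance.bladibla.id
}
```

note this only works for additional volumes; the root volume stays inside aws_instance, and tagging it via root_block_device needs a recent AWS provider.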
If i use s3 backend for my terraform state, how should I fetch that information for use in my web application or some job in the pipeline?
Use these assumptions:
- terraform did provision my resources (let’s say - RDS) and saved the state remotely on s3
- my web application needs that resource information for the provisioned RDS (some values are secrets)
Here are what’s coming to my mind:
- write a shell script that uses terraform CLI to fetch these secrets from the state and write them to .env file so that the web app can load them
- use some secret management software, from AWS? Vault (Overkill?) Take note that I use Gitlab CI for the pipeline, and I know that there is a Terraform integration present there, but I want to know what is the correct way of managing this if I were to transition to Github pipelines some day or something else.
Output values are the return values of a Terraform module.
Learn how to provision, secure, connect, and run any infrastructure for any application.
$ terraform output
lb_url = "<http://lb-5YI-project-alpha-dev-2144336064.us-east-1.elb.amazonaws.com/>"
vpc_id = "vpc-004c2d1ba7394b3d6"
web_server_count = 4
there is also a json output flag
terraform output -json
regarding the secrets you should probably save them in aws secret manager, and access them using the right permissions from your application
Thanks, both of you, one more question regarding that:
“The sensitive
argument for outputs can help avoid inadvertent exposure of those values. However, you must still keep your Terraform state secure to avoid exposing these values.”
If i keep my S3 bucket private, and use this sensitive flag, is that enough protection so that i can use “terraform outputs” instead of messing with secrets manager?
My logic is this: if a shell can access remote state, it is privileged (my CI executor). If it’s privileged - why bother using secrets management, just don’t print secrets to console.
when you use “sensitive” on an output, you wouldn’t see it in plan/apply.
and your shell probably shouldn’t access your state and take secrets from it..
I mean, it’s probably possible - but definitely not recommended
What is your proposed approach? Let’s say that my provisioned resources are RDS database URL (not secret), and some secrets like username/password (secrets)
aws secret manager
and with that I would manage both non secret provisioning variables AND secret provisioning vars?
either, or use ssm for the non secret
and your shell probably shouldn’t access your state and take secrets from it..
also I’m interested in more explanation behind this
the reason to not use secret manager for everything would probably be the pricing of it
because at some point I’ll also automate “terraform apply” and give that power to Gitlab CI to do it automatically
So if runner shell can provision my infrastructure in an automated way that I set up, why wouldn’t it have access to those created secrets?
that’s ok, I just wouldn’t make the app go and fetch secrets from the state file
Alright, thanks for the help!
sure np
Yes, accessing remote state is a bit of an anti-pattern. It couples things together a little closer than is comfortable, and there’s no way to implement access control. You can use a dedicated secret sharing tool like AWS Secrets Manager (or AWS SSM Parameter Store, if cost is a concern) instead
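a rough sketch of that split (the aws_db_instance.this and var.db_password names are made up for illustration): terraform publishes the values at apply time, and the app reads Secrets Manager / SSM instead of the state file:

```hcl
# secrets go to Secrets Manager; the app fetches them with its own IAM role
resource "aws_secretsmanager_secret" "db" {
  name = "myapp/rds" # hypothetical name
}

resource "aws_secretsmanager_secret_version" "db" {
  secret_id = aws_secretsmanager_secret.db.id
  secret_string = jsonencode({
    username = aws_db_instance.this.username
    password = var.db_password
  })
}

# non-secret values can go to the cheaper SSM Parameter Store
resource "aws_ssm_parameter" "db_url" {
  name  = "/myapp/rds/url"
  type  = "String"
  value = aws_db_instance.this.address
}
```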
Hi all, I used the iam-system-user module to create a user with access key and secret. Could you hand over this data to ansible to store it on the machine ? I know this is not best practice but the legacy application cannot work without this
the output generates this:
We created a model for automatically delivering infrastructure changes with robust security practices, and used it to build a secure Terraform CI/CD solution for AWS at OVO.
Hi all, how do you guys manage the state backend on s3, when I try to do something like this:
terraform {
backend "s3" {
bucket = "bla-test-tfstate"
key = "s3/${var.name}/terraform.tfstate"
region = "eu-west-1"
}
}
I don’t think you can use a variable there
you can pass it in as arg on init though
I can use ansible to fix this
terraform init -backend-config="bucket=bla-test-tfstate" -backend-config="key=whatever"
this writes the config and you can use the same variable here
hmmm let me think
it fails
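fwiw, the partial-configuration pattern looks roughly like this (bucket/key values taken from the example above):

```hcl
# backend blocks can't interpolate variables, so leave the varying
# parts out of the config and supply them at init time instead
terraform {
  backend "s3" {
    region = "eu-west-1"
  }
}
```

then: terraform init -backend-config="bucket=bla-test-tfstate" -backend-config="key=s3/myname/terraform.tfstate" (the "myname" part is a placeholder; your wrapper script or ansible can fill it in).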
does anyone have a module (or know of one) that can easily configure the necessary subnet CIDRs for the upstream VPC module?
i know the VPC CIDR block I would like to use and will only ever go across 3 AZs so will need 12 CIDR blocks from the VPC CIDR provided
have you tried using terraform-aws-dynamic-subnets?
Terraform module for public and private subnets provisioning in existing VPC - cloudposse/terraform-aws-dynamic-subnets
That will only deploy one subnet per AZ (per public/private).. it sounds like you want multiple subnets per AZ?
nope never seen it before
i want one of each type of subnet per AZ
so one public, one private, one intra, one database per AZ
what is “intra” in this context? dynamic-subnets only understands “public” and “private”
intra is a type of subnet the upstream module leverages
is basically a private subnet that has no Internet routing
those 3 types of subnets you mention are more similar to the concept in this module. but this module creates the VPC and everything. Gonna take a while to go through all the input variables you can use to customize it. but you can start with the examples easily.
Terraform module which creates VPC resources on AWS - terraform-aws-modules/terraform-aws-vpc
dynamic-subnets can’t do that for you, sadly.
I think your best bet is to calculate the CIDR ranges yourself using the cidrsubnet
function, and then create the subnets “by hand”. Possibly using https://github.com/cloudposse/terraform-aws-named-subnets
Terraform module for named subnets provisioning. Contribute to cloudposse/terraform-aws-named-subnets development by creating an account on GitHub.
yeh right now i hard-code the subnet cidr blocks
but i think its too easy to mess up
> [for i in [0,1,2,3,4,5,6,7,8,9,10,11] : cidrsubnet("10.0.0.0/20", 4, i)]
[
"10.0.0.0/24",
"10.0.1.0/24",
"10.0.2.0/24",
"10.0.3.0/24",
"10.0.4.0/24",
"10.0.5.0/24",
"10.0.6.0/24",
"10.0.7.0/24",
"10.0.8.0/24",
"10.0.9.0/24",
"10.0.10.0/24",
"10.0.11.0/24",
]
yeh i was looking at cidrsubnet
before and it just confused me
I find it confusing too. I read the above example as “take 10.0.0.0/20 and divide it into /24s (20+4), and give me the i’th one of those”
so my VPC is 10.128.0.0/19
then i was planning on doing dividing that into /21
to get me 4 AZs (i only need 3)
then from there dividing each AZ block again but have it biased towards 2 of the 4 subnets i require
that would then give me the four subnets for AZ1 then do the same for AZ2 and AZ3
[public_cidr, private_cidr, intra_cidr] = [for i in [0,1,2] : cidrsubnet("10.128.0.0/19", 4, i) ]
[public_az1, public_az2, public_az3] = [for i in [0,1,2] : cidrsubnet(public_cidr, 4, i) ]
etc for private/intra
agreed but we need more IPs in the private subnets as that is where EKS runs
and EKS takes an IP per pod
then start with a bigger cidr than /19
i can’t
as we need to handle 16 regions
with 64 VPCs per region
having 5xx IPs for the public subnets is literally overkill as its only ever going to contain 2 load-balancers
It might be worth you writing out what cidr ranges you want for each sort of subnet. This has been a lot of me suggesting something and then you coming back with another requirement
you can make subnets of different sizes using cidrsubnets
instead. you can add more newbits to the function and create small subnets for public and bigger ones for private. you don’t need to make all subnets the same size
> cidrsubnets("10.1.0.0/16", 4, 4, 8, 4)
[
"10.1.0.0/20",
"10.1.16.0/20",
"10.1.32.0/24",
"10.1.48.0/20",
]
my issue is the way we chunk up the subnets
as is my logic correct?
VPC -> AZs -> subnets
my initial thinking was …
VPC 0 = 10.128.0.0/19
4 AZ blocks
10.128.0.0/21, 10.128.8.0/21, 10.128.16.0/21, 10.128.24.0/21
4 Subnets in AZ 1
10.128.0.0/23, 10.128.2.0/23, 10.128.4.0/23, 10.128.6.0/23
4 Subnets in AZ 2
10.128.8.0/23, 10.128.10.0/23, 10.128.12.0/23, 10.128.14.0/23
this makes every subnet an equal size
the AZs are not tied to any specific IP range; the subnets are, and they exist in one specific AZ regardless of their cidr blocks. the subnet breakdown looks more like this: VPC -> Subnets(AZ)
so what you’re saying is i just need to carve up the VPC CIDR in 12 subnets
and not worry about the AZ specifics
Might be easier to break things up the other way.
[private_cidr, intra_cidr, public_cidr] = cidrsubnets("10.128.0.0/19", 2,2,5) # /21, /21, /24
[private_az1, private_az2, private_az3] = cidrsubnets(private_cidr, 2,2,2) # /23 each
[public_az1, public_az2, public_az3] = cidrsubnets(public_cidr, 2,2,2) # /26 each
so i need four main cidr groups
public (tiny) private (biggest) database (medium) intra (small)
maybe this tool can help us to visualize the breakdown better. Something like this?
yeh i was using that earlier
and trying to make it to cidrsubnets
specifically the IP group example in the screenshot can be generated with cidrsubnets("10.128.0.0/19", 5, 5, 4, 3, 2, 5, 5, 4, 3, 2)
you could create lists of public, private and intra from that list based on the indexes, or something like that, creating sublists.
i am trying to automate this away as much as possible to make it super simple for people
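one possible carve-up as a sketch (the newbits values are assumptions based on the sizes discussed above: private biggest, database medium, intra small, public tiny; tune to taste):

```hcl
locals {
  # carve the /19 unevenly in one call, then slice into per-type lists
  blocks = cidrsubnets("10.128.0.0/19",
    2, 2, 2, # three /21s: private (EKS eats an IP per pod)
    5, 5, 5, # three /24s: database
    6, 6, 6, # three /25s: intra
    7, 7, 7, # three /26s: public (only ever 2 load-balancers)
  )

  private_subnets  = slice(local.blocks, 0, 3)
  database_subnets = slice(local.blocks, 3, 6)
  intra_subnets    = slice(local.blocks, 6, 9)
  public_subnets   = slice(local.blocks, 9, 12)
}
```

each list has one subnet per AZ; hand them to the VPC module in order and the AZ mapping takes care of itself.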
Hi everyone,
Has anyone encountered this issue before? I think it has to do with the way terraform processes list values.
If I make the first cidr_blocks
an empty list []
to match the second cidr_blocks
type it throws a different error "source_security_group_id": conflicts with cidr_blocks
since cidr_blocks
and source_security_group_id
cannot be present in the same rule.
module "sg" {
source = "github.com/cloudposse/terraform-aws-security-group?ref=0.1.3"
rules = [
{
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = null
self = null
source_security_group_id = "sg-0000aaaa1111bbb"
},
{
type = "egress"
from_port = 0
to_port = 65535
protocol = "all"
cidr_blocks = ["0.0.0.0/0"]
self = null
source_security_group_id = null
}
]
vpc_id = "vpc-0000aaaa1111bbb"
context = module.this.context
}
ERROR
@sweetops556 heads up
tf apply
panic: inconsistent list element types (cty.Object(map[string]cty.Type{"cidr_blocks":cty.DynamicPseudoType, "from_port":cty.Number, "protocol":cty.String, "self":cty.DynamicPseudoType, "source_security_group_id":cty.String, "to_port":cty.Number, "type":cty.String}) then cty.Object(map[string]cty.Type{"cidr_blocks":cty.Tuple([]cty.Type{cty.String}), "from_port":cty.Number, "protocol":cty.String, "self":cty.DynamicPseudoType, "source_security_group_id":cty.String, "to_port":cty.Number, "type":cty.String}))
goroutine 545 [running]:
github.com/zclconf/go-cty/cty.ListVal(0xc000e784c0, 0x2, 0x2, 0xc0005465e0, 0x1, 0x1, 0x1)
/go/pkg/mod/github.com/zclconf/[email protected]/cty/value_init.go:166 +0x5a8
github.com/zclconf/go-cty/cty/convert.conversionTupleToList.func2(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x0, 0x0, 0x0, 0x3860320, 0x2cebaef0, 0x10, ...)
/go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/conversion_collection.go:327 +0x794
github.com/zclconf/go-cty/cty/convert.getConversion.func1(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x0, 0x0, 0x0, 0xc001009c50, 0xc0005465d0, 0x3860360, ...)
/go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/conversion.go:46 +0x433
github.com/zclconf/go-cty/cty/convert.retConversion.func1(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0xc0005465d0, 0x0, 0x0, 0x0, 0xc00030c270, 0x10000c001c70000)
/go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/conversion.go:188 +0x6b
github.com/zclconf/go-cty/cty/convert.Convert(0x3860460, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x3860360, 0xc000877040, 0xc000bc5420, 0x2f350a0, 0xc000bc5440, 0x0, ...)
/go/pkg/mod/github.com/zclconf/[email protected]/cty/convert/public.go:51 +0x1b9
github.com/hashicorp/terraform/terraform.(*nodeModuleVariable).EvalModuleCallArgument(0xc000594900, 0x389bce0, 0xc001c441a0, 0xc0005ca301, 0x0, 0x0, 0x0)
/home/circleci/project/project/terraform/node_module_variable.go:238 +0x265
github.com/hashicorp/terraform/terraform.(*nodeModuleVariable).Execute(0xc000594900, 0x389bce0, 0xc001c441a0, 0xc00003a004, 0x30ada40, 0x3202b60)
/home/circleci/project/project/terraform/node_module_variable.go:157 +0x7f
github.com/hashicorp/terraform/terraform.(*ContextGraphWalker).Execute(0xc000ebc270, 0x389bce0, 0xc001c441a0, 0x2da00048, 0xc000594900, 0x0, 0x0, 0x0)
/home/circleci/project/project/terraform/graph_walk_context.go:127 +0xbc
github.com/hashicorp/terraform/terraform.(*Graph).walk.func1(0x3202b60, 0xc000594900, 0x0, 0x0, 0x0)
/home/circleci/project/project/terraform/graph.go:59 +0x962
github.com/hashicorp/terraform/dag.(*Walker).walkVertex(0xc000594960, 0x3202b60, 0xc000594900, 0xc000e78340)
/home/circleci/project/project/dag/walk.go:387 +0x375
created by github.com/hashicorp/terraform/dag.(*Walker).Update
/home/circleci/project/project/dag/walk.go:309 +0x1246
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.
When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.
SECURITY WARNING: the "crash.log" file that was created may contain
sensitive information that must be redacted before it is safe to share
on the issue tracker.
[1]: <https://github.com/hashicorp/terraform/issues>
!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!
2021-02-19
Hello! Is there a way to pass a lifecycle ignore_changes in the inputs section when you are trying to point to a source module? (Using Terragrunt.hcl)
can anyone tell me what i am doing wrong please?
public_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["public"] : var.subnet_cidrs["public"]
private_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
intra_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["intra"] : var.subnet_cidrs["intra"]
database_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["database"] : var.subnet_cidrs["database"]
var.subnet_cidrs is empty map of dynamic
Error: Invalid index
on .terraform/modules/base.vpc/modules/vpc/main.tf line 20, in module "vpc":
20: private_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
|----------------
| var.subnet_cidrs is empty map of dynamic
The given key does not identify an element in this collection value.
what is the error?
the error is the last block
Error: Invalid index
on .terraform/modules/base.vpc/modules/vpc/main.tf line 20, in module "vpc":
20: private_subnets = var.subnet_cidrs == {} ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
|----------------
| var.subnet_cidrs is empty map of dynamic
The given key does not identify an element in this collection value.
hmmm, version? someone was just posting that equality on []
wasn’t working. betting the bug is impacting {}
also?
try length(var.subnet_cidrs) > 0
0.13.4
empty map of dynamic
is interesting phrasing… what’s the type definition on the variable?
subnet_cidrs = {
database = ["10.60.25.0/24", "10.60.26.0/24", "10.60.27.0/24"]
intra = ["10.60.10.0/24", "10.60.11.0/24", "10.60.12.0/24"]
private = ["10.60.4.0/24", "10.60.5.0/24", "10.60.6.0/24"]
public = ["10.60.1.0/24", "10.60.2.0/24", "10.60.3.0/24"]
}
it could look like that
well that’s a value, what’s the type?
map(any)
any
must be what it means by “dynamic”
here’s the other thread on []
equality… https://sweetops.slack.com/archives/CB6GHNLG0/p1613610796013400
Hi everyone,
Does anyone know why this condition is returning false
? and what would be the right expression to compare with to get true
?
variable "empty_list" {
type = list(string)
default = []
}
console
tf console
> var.empty_list
tolist([])
>
> var.empty_list == []
false
> var.empty_list == tolist([])
false
interesting thanks for this
yeah, something is whack:
> tomap({}) == {}
false
i bet if you changed your condition to == tomap({})
that would work also, but i can’t see why it should be necessary
the open/closed issues i’m finding seem pretty user-hostile… instead of making the behavior work, they’re modifying the output to help show why it doesn’t work
• https://github.com/hashicorp/terraform/issues/23562
Terraform Version Terraform v0.12.17 Terraform Configuration Files Thanks @dpiddockcmp for a simpler example. #23562 (comment) variable "object" { type = list(object({ a = string })) defa…
TL;DR 0 == "0" ? "foo" : "bar" In Terraform 0.11, we get "foo" In Terraform 0.12, we get "bar" Terraform Version Terraform v0.12.3 + provider.googl…
Terraform Version Terraform v0.13.4 Terraform Configuration Files variable testlist { type = list(string) default = ["NOTSET"] } variable teststring { type = string default = "NOTSET…
they’re also based on lists, where []
is not actually a list! of course. but {}
is definitely a map, so what’s their excuse for that?
yeh its weird right
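fwiw, a sketch of the length()-based workaround for the map case (subnet_cidr_map is the local from the snippet above):

```hcl
variable "subnet_cidrs" {
  type    = map(any)
  default = {}
}

locals {
  # == {} hits the same type-equality quirk as == [], so the condition
  # comes back false and the lookup runs against the empty map and errors;
  # length() behaves consistently
  use_defaults = length(var.subnet_cidrs) == 0

  public_subnets  = local.use_defaults ? local.subnet_cidr_map["public"] : var.subnet_cidrs["public"]
  private_subnets = local.use_defaults ? local.subnet_cidr_map["private"] : var.subnet_cidrs["private"]
}
```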
hi all while using the module:
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
I would like to create a prefix: so the name of the bucket should be cloudposseisawesome-prod/var.name
You can’t have a / in a bucket name
correct
Oh so is this just for tags
it’s ok we can have a single bucket per customer in this case
no need to fiddle around
the module does not support migration to DEEP_ARCHIVE right ?
https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.29.0
• aws_securityhub_invite_accepter
is finally out
FEATURES: New Resource: aws_cloudwatch_event_archive (#17270) New Resource: aws_elasticache_global_replication_group (#15885) New Resource: aws_s3_object_copy (#15461) New Resource: aws_securityhu…
Dear all, the module: https://github.com/cloudposse/terraform-aws-s3-bucket
does not seem to support a transition to DEEP_ARCHIVE yet, how can I request this ?
@Bart Coddens You can put up an issue in that repo and see if anyone gets around to it. But if you want it done, the main way to do so is fork, update, and PR back. We gladly accept these types of contributions and you’ll get a quick feedback and turnaround if you post your PR in #pr-reviews.
Just curious how people have managed terraform version upgrades with modules? It seems that since the state is not backwards compatible, we have several workspaces all at some version of 0.12.x .
You mean, updating modules to work with a newer version of TF?
Well, having a module be backwards compatible with multiple ‘consumer’ TF configs which are often at weird versions.
# ./modules/rds/main.tf , Git Tag is at 1.0
terraform {
required_version = ">= 0.12"
}
# ./myapp/database.tf
# workspace is at TF 0.12.10
module "rds" {
source = "terraform.mycompany.com/mycompany/rds/aws"
version = "~>1.0"
.. but other workspaces might be at 12.20, or 12.24, or others want to try using 0.13.
huh. I’ve never really seen people use different versions of Terraform for different workspaces. Certainly not more than two versions
generally at my company we only roll forward. Modules have a single development branch, and it works with the current stable Terraform. If you are using an older version of the module and you want newer functionality, you have to update your Terraform version to stable
the maintenance burden of modules is so high I think this is the only realistic approach. Not even CloudPosse can support multiple versions of their modules
re: different versions, from what I see, one app may ‘upgrade’ terraform environment by environment (dev, qa, uat, stg then prod) so there would be some minor drift. but then there might be another app (with it’s dev/qa/uat/stg/prod terraform workspaces) which are pinned at an older TF version. Me as the module maintainer ..is that my concern?
@mikesew likely useful to check out Hashi’s suggestions on this: https://www.terraform.io/docs/language/expressions/version-constraints.html#best-practices
The gist is: reusable child modules should pin a version constraint using >=
.
Root modules should pin a specific version, or if you’re more willing for things to break, you can pin using the pessimistic constraint operator.
Terraform by HashiCorp
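to make that concrete, a sketch using the registry source from earlier in the thread:

```hcl
# inside the reusable (child) module: set only a floor
terraform {
  required_version = ">= 0.12"
}

# in a consuming root module: pin tightly
module "rds" {
  source  = "terraform.mycompany.com/mycompany/rds/aws"
  version = "~> 1.0" # allows any 1.x, blocks 2.0 breaking changes
}
```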
thanks. But I see with for example anton B’s terraform-aws-rds module (https://github.com/terraform-aws-modules/terraform-aws-rds) it’s got a:
master
branch at tf0.12 , tagged with a 2.x.x semver
terraform011
branch that’s released on a 1.x.x semver
.. and they’re both seeing updates.
and my particular modules are horribly written so I seem to always make MANY breaking changes (like renaming variable inputs, sorry), therefore I feel like I should be bumping my semver constantly. So I feel like that doesn’t really scale.
Are you talking about how you version your module itself or are you talking about how your modules version their providers / terraform?
Is this an internal module you are talking about? Or one you’ve published and want to become popular
this is an internal module (0.12) that I’m just trying to upgrade to .13 or .14 simply to stay with the times, and not break any workspaces that are using it.
What I’m going to try is install terraform 13,
• goto my module, checkout a new branch
• run terraform 0.13upgrade
• try it with a test terraform spin
• set required_version to at least 0.13
terraform {
required_version = ">= 0.13"
• Pull Request to back to master branch
• tag/release that commit with a new breaking version (2.0.0)
• create old branch named master/terraform012
for any legacy hotfixes
• put out announcement or release notes saying this is now tf0.13, no support for .12?
Sounds good, but why bother with a branch. Main/master can be evergreen branch. You can create a branch for older version if you have future need to hotfix
.. sorry you’re right. I just mod’d above. THANK YOU for the discussion/advice.
2021-02-20
QQ: is this null_data_source still required as a workaround for nodes to wait for EKS module and cm to be in place? Yesterday I’ve seen this message from a deployment using it:
Warning: Deprecated Resource
The null_data_source was historically used to construct intermediate values to
re-use elsewhere in configuration, the same can now be achieved using locals
What if I move the two values from the null_data_source
shown in the examples into a locals { cluster_name = module.eks_cluster.eks_cluster_id }
? Would that achieve the same (waiting for the aws-auth cm to exist)? On the same subject, what’s the second variable (kubernetes_config_map_id
) for? I cannot find it anywhere in the code, so how are the two tied together if set in locals
(provided it’s the right option if we want to make terraform happy and stop using null_data_source)?
I even tried to move the two vars into locals, and the deployment completed successfully… but I have the strong feeling I’m missing something here… but, if I’m not, and moving those into locals
is everything we need to get rid of that message, I’m happy to update and send a PR.
@Andriy Knysh (Cloud Posse)
@Jeremy G (Cloud Posse)
waiting for the cluster AND the config map to be created first is required https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/examples/complete/main.tf#L79
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
i did not know that we can use locals instead of null data source
if we can, that’s great
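an untested sketch of what the locals replacement might look like. caveat: a plain local only depends on what its expression references, so to keep the “wait for cluster AND config map” behavior the expression probably still needs to mention both outputs (names follow the example in the repo):

```hcl
locals {
  # the conditional exists only to reference the config map output,
  # preserving the implicit "config map must exist first" dependency
  # that null_data_source used to provide
  cluster_name = module.eks_cluster.kubernetes_config_map_id != "" ? module.eks_cluster.eks_cluster_id : module.eks_cluster.eks_cluster_id
}

module "eks_node_group" {
  # ...
  cluster_name = local.cluster_name
}
```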
2021-02-21
does anyone know of a tool that can mass terraform import
resources from AWS?
i need to get our route53 hosted zones and records under terraform management and away from people using ClickOps to update them all.
i swear there was something from Google that I have seen before but can’t for the life of me find it
i think its https://github.com/GoogleCloudPlatform/terraformer
Yep terraformer is nice
Going to need it to get this clickops nonsense under control
import their iam users and roles also, and make them all readonly
in the new platform i only really need this to do sub-domain delegation
➜ Desktop terraformer import aws --resources=aws_route53_zone
2021/02/21 16:23:04 aws importing default region
2021/02/21 16:23:04 open /Users/stevewade/.terraform.d/plugins/darwin_amd64: no such file or directory
am i missing something obvious here :point_up: i install terraform using tfenv
4. Run terraform init against a versions.tf file to install the plugins required for your platform
?
Or alternatively
Copy your Terraform provider’s plugin(s) to folder ~/.terraform.d/plugins/{darwin,linux}_amd64/
, as appropriate.
makes sense i was just trying to do this from an empty directory
just a quick update, the RDS wouldn’t be recreated if a snapshot is used, https://github.com/hashicorp/terraform-provider-aws/issues/17037
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
by the way, recently I’ve been wondering if there is a package manager for Cloudposse modules, so the modules can be upgraded in TF files, just bump the version, something like that
Thinking the same! do you know if any package managers for terraform exist?
Are you referring to root modules? or child modules?
we use renovatebot to manage upgrades
you can look into vendir
to do vendoring of root modules
@Erik Osterman (Cloud Posse) do you have some doc around how Cloudposse uses renovatebot? not sure if I understand root/child modules; the case I run into, for example: if I use the dynamodb module from Cloudposse, when a new version is released, what I do is go to the TF registry, find the new version, and update the version in the code
If there is a way to update the versions in an automatic way, that will save some time
Yep, that’s what renovate bot does. It then opens a PR with the update.
In all of our repositories, you’ll find an example: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/.github/renovate.json
that’s the configuration that we use.
amazing, thanks Erik
2021-02-22
Hi all, I made some changes to the s3 bucket module to support transition to deep archive storage class
where can I submit my code ?
as a pull request to the repository. I guess you aren’t familiar with those – Github’s help pages have good intros/explanations
I forked the main repo and pushed my changes to my own branch
General guidelines here: https://github.com/cloudposse/terraform-aws-s3-bucket#developing
After opening the PR, you can promote it in #pr-reviews for expedited review
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
Support transition to the deep archive storage class We need this for our business.
I cannot access the #pr-review I guess
Anyone else experienced this issue when updating the AWS provider from v3.28.0 -> v3.29.0 (with the terraform-aws-rds module)?
Error: ConflictsWith
on .terraform/modules/rds_postgres_db/main.tf line 44, in resource "aws_db_instance" "default":
44: snapshot_identifier = var.snapshot_identifier
"snapshot_identifier": conflicts with username
Releasing state lock. This may take a few moments...
Not sure what the issue is here. snapshot_identifier
is not set (so defaults to ""
) and database_username
is set to a custom value so I don’t see why it would conflict.
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
I am getting the same issue while trying to create simple AWS-RDS MySQL
$ ./run.sh plan
Error: ConflictsWith
on .terraform/modules/rds_instance/main.tf line 44, in resource "aws_db_instance" "default":
44: snapshot_identifier = var.snapshot_identifier
"snapshot_identifier": conflicts with username
I see a workaround is keeping it as null
@Frank
Terraform CLI and Terraform AWS Provider Version Terraform v0.13.5 + provider registry.terraform.io/hashicorp/aws v3.18.0 + provider registry.terraform.io/hashicorp/local v2.0.0 + provider registry…
@Ankit Rathi Ah wow that’s an “old” issue, weird that it suddenly surfaced with a new AWS Provider even though that version has no RDS changes mentioned in the changelog
yeah i agree, i think it’s because we use `aws_db_instance`
as a dependency, which depends on hashicorp/aws
I’ve successfully created a Gitlab CI pipeline v0.1 where I test, build and publish my docker image to ECR repository. Also in this codebase, there is Terraform fully set up (with remote s3 backend) but it’s not automated (connected with CI), but rather provisioning is version controled - but done manually.
I’m ready to step up and create v0.2 - the same thing as above, but where CI actually does provisioning if there are changes to infra. Can you give me some guidelines on where to start?
Are you referring to using the plan command and then applying that plan?
Yep
Here’s an example for how to do a plan and then scan it for vulnerabilities (all in GitLab): https://github.com/indeni/cloudrail-demo/blob/master/.gitlab-ci.yml
You’d add another “stage” after that does apply of the plan if it passed the Cloudrail step.
This repository contains the instructions for how to use Cloudrail, as well as specific scenarios to test Cloudrail with. - indeni/cloudrail-demo
Hi, any HashiCorp Vault experts here? I’m unable to unseal Vault using the 3 master keys. I had the backend storage as Consul. Is there a way I can kill the existing Vault and recreate it, attaching Consul as the backend storage?
what’s the error message you see? Just doing a google search, i came up with https://dev.to/v6/how-to-reset-a-hashicorp-vault-back-to-zero-state-using-consul-ae .
# so assume in your consul config file, you have:
"data_dir": "/opt/consul/data",
^^^ so delete whatever is in your data dir.
Hi all, I have a tag on the root volume that I want terraform to ignore
in my config I have:
lifecycle { ignore_changes = [tags, ami] }
the plan says:
~ root_block_device {
delete_on_termination = false
device_name = "/dev/xvda"
encrypted = false
iops = 0
~ tags = {
- "Name" = "IOWA-TEST-ROOT" -> null
}
throughput = 0
volume_id = "vol-04e6d26cb3fd7a43a"
volume_size = 8
volume_type = "standard"
}
}
try
ignore_changes = [root_block_device.tags, tags, ami]
ha ok, but then you cannot modify the size of the root volume, right?
i dont believe so
but that’s ok, changing the root volume size is rare
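putting the suggestion above into a full block (a sketch; the AMI and instance type are illustrative — note that depending on the Terraform version, you may need to ignore the whole `root_block_device` block rather than just its tags):

```hcl
resource "aws_instance" "example" {
  ami           = var.ami_id # hypothetical variable
  instance_type = "t3.micro"

  lifecycle {
    ignore_changes = [
      root_block_device, # blunt: ignores all drift on the root volume, incl. its tags
      tags,              # tags on the instance itself
      ami,
    ]
  }
}
```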
hi, I’m facing an issue with the latest version of terraform-aws-cloudfront-s3-cdn
I have set values for custom_origins and now it asks for custom_headers
after adding a blank object list, I get other errors related to path, domain, etc… so I pinned my version to 0.48.1
and it works fine. should I open a ticket in github?
yes, open the ticket please and add the output and error messages
@Leon Garcia
thanks, it’s done
link?
Describe the Bug After updating a value in a current custom_origins object, terraform throws error of missing custom_headers that currently is not being used. Expected Behavior Apply changes withou…
Thanks
i see some related changes to custom_headers recently.. but I can’t find why I get the errors for other stuff..
Hello team. I’m working with the eks-iam-role module. We have other modules that are responsible for, among other things, adding policies to existing IAM roles when resources (i.e. SQS) are created. Thus I do not have a policy to pass into this module, so `eks-iam-role` cannot plan, because `aws_iam_policy_document` is a required value, and I’d prefer our SQS module handle the IAM policy.
However, this line leads me to think that `aws_iam_policy_document` was intended to be optional. If I pass “{}” into the module, similar to this coalesce(), the plan works.
Should I file a bug to get `aws_iam_policy_document` made optional? Hopefully all those words I wrote make sense to someone.
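for reference, the “{}” workaround sketched out as a module call (the empty JSON document is the part being tested; other inputs are illustrative and depend on your cluster setup):

```hcl
module "eks_iam_role" {
  source = "cloudposse/eks-iam-role/aws"
  # version pin omitted for brevity

  # placeholder: an empty JSON policy document, since the input is
  # currently required; the real policy is attached later by another module
  aws_iam_policy_document = "{}"

  # ... other inputs (OIDC provider, service account name/namespace, etc.)
}
```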
Terraform module to provision an EKS IAM Role for Service Account - cloudposse/terraform-aws-eks-iam-role
Are folks having issues downloading providers right now?
could not query provider
registry for registry.terraform.io/hashicorp/aws: failed to retrieve
authentication checksums for provider: 404 Not Found
@Fred Torres Yes, same here and on multiple providers and sources
Error verifying checksum for provider "aws"
The checksum for provider distribution from the Terraform Registry
did not match the source. This may mean that the distributed files
were changed after this version was released to the Registry.
tf cloud is having issues, it appears.
Here we are: https://status.hashicorp.com/incidents/68jxtclzwn33
HashiCorp Services’s Status Page - Terraform Cloud Outage.
#terraform I think your S3 bucket policy is hosed, getting 403s trying to download any release from https://www.terraform.io/downloads.html
This outage is also affecting regular terraform runs
Error: Failed to install provider
Error while installing hashicorp/aws v3.29.0: unsuccessful request to
<https://releases.hashicorp.com/terraform-provider-aws/3.29.0/terraform-provider-aws_3.29.0_linux_amd64.zip>:
404 Not Found
that provider is missing on their releases site
Have you guys been getting “File is not a zip file” too?
Question on the module, cloudposse/elasticache-redis/aws. I use this module created redis cluster. See the output below.
- Why is the word “replicas” part of endpoint? Is the endpoint the redis primary endpoint or replica endpoint?
- Why is the output of cluster_host empty?
```
cluster_host           =
cluster_id             = redis-replicas-blue
cluster_port           = 6379
redis_cluster_endpoint = clustercfg.redis-replicas-blue.ujhy8y.usw2.cache.amazonaws.com
```
```hcl
module "redis" {
  source                               = "cloudposse/elasticache-redis/aws"
  availability_zones                   = data.terraform_remote_state.vpc.outputs.azs
  vpc_id                               = data.terraform_remote_state.vpc.outputs.vpc_id
  enabled                              = var.enabled
  name                                 = var.name
  tags                                 = var.tags
  allowed_security_groups              = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  subnets                              = data.terraform_remote_state.vpc.outputs.elasticache_subnets
  cluster_size                         = var.redis_cluster_size # number_cache_clusters
  instance_type                        = var.redis_instance_type
  apply_immediately                    = true
  automatic_failover_enabled           = true
  engine_version                       = var.redis_engine_version
  family                               = var.redis_family
  cluster_mode_enabled                 = true
  replication_group_id                 = var.replication_group_id
  replication_group_description        = var.replication_group_description
  at_rest_encryption_enabled           = var.at_rest_encryption_enabled
  transit_encryption_enabled           = var.transit_encryption_enabled
  cloudwatch_metric_alarms_enabled     = var.cloudwatch_metric_alarms_enabled
  cluster_mode_num_node_groups         = var.cluster_mode_num_node_groups
  snapshot_retention_limit             = var.snapshot_retention_limit
  snapshot_window                      = var.snapshot_window
  dns_subdomain                        = var.dns_subdomain
  cluster_mode_replicas_per_node_group = var.cluster_mode_replicas_per_node_group
}
```
for `cluster_host` to be populated, you need to provide a Zone ID https://github.com/cloudposse/terraform-aws-elasticache-redis/blob/master/main.tf#L168
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
it will create a record in the DNS zone pointing to the cluster endpoint (the endpoint is what AWS generates; `cluster_host` points to it via DNS)
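a sketch of what that looks like on the module call (the zone ID and subdomain values here are hypothetical):

```hcl
module "redis" {
  source = "cloudposse/elasticache-redis/aws"
  # ... existing inputs ...

  # with a Route53 hosted zone supplied, the module creates a stable
  # DNS record, and cluster_host is populated with it
  zone_id       = "Z2ABCDEFGHIJKL" # hypothetical hosted zone ID
  dns_subdomain = "redis"          # record becomes e.g. redis.<zone-domain>
}
```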
Thank you. What about the endpoint? Why does it have the word “replicas” in it?
if you use the endpoint in your apps and then update/recreate the cluster, you’ll have to change the URL in all the apps
by using the `cluster_host`, it will always be the same
I have not registered a domain, so I do not have a zone_id. I am a bit confused by the output. The word “replicas” is part of the endpoint. Is this endpoint the primary or the replica endpoint?
redis-replicas-blue
is something you provided in the variables
the module does not have that
variable "replication_group_id" {
type = string
description = "The replication group identifier. This parameter is stored as a lowercase string."
default = "redis-replicas-blue"
}
Let me destroy the redis, re-create it with redis-blue, and see how it will be.
that’s your variable
In a world where everything is Terraform, teams use Terraform Cloud API to manage their workloads. TECLI increases teams productivity by facilitating such interaction and by providing easy commands…
I don’t fully grok the need for the cli, especially as presented. The terraform provider for TFE exists for a reason
Terraform Enterprise/Cloud Infrastructure Automation - cloudposse/terraform-tfe-cloud-infrastructure-automation
Terraform can provision TFC/TFE
Yeah, thought tool looked interesting. Solves a bit of a chicken n egg maybe
So we run one command to provision a workspace that looks for new workspaces. So literally, terraform cloud terraforms terraform cloud for all new workspaces. Only the initial “command and control” workspace is done with terraform
.
terraform cloud terraforms terraform cloud. meta.
dang. I get the long-term economic incentives of AWS supporting their ecosystem with contributions like this. But it’s so rare to see, that it’s still a little
stupid question, but I’m using a cloudposse module for the first time (surprising) and I can’t seem to get it to provision resources; it just says No changes. Infrastructure is up-to-date.
when I try to run terraform apply.
I have the modules set up as so:
module "monitor_configs" {
source = "cloudposse/config/yaml"
version = "0.7.0"
enabled = true
map_config_paths = ["catalog/monitors/kube.yaml"]
context = module.this.context
}
module "synthetic_configs" {
source = "cloudposse/config/yaml"
version = "0.7.0"
enabled = true
map_config_paths = []
context = module.this.context
}
module "datadog_monitors" {
source = "git::<https://github.com/cloudposse/terraform-datadog-monitor.git?ref=master>"
enabled = true
datadog_monitors = module.monitor_configs.map_configs
datadog_synthetics = module.synthetic_configs.map_configs
# alert_tags = var.alert_tags
# alert_tags_separator = var.alert_tags_separator
context = module.this.context
}
and a context.tf file that is just copypasted this and set var.enabled to true: https://github.com/cloudposse/terraform-datadog-monitor/blob/master/examples/complete/context.tf
am I missing something obvious?
Terraform module to configure and provision Datadog monitors from a YAML configuration, complete with automated tests. - cloudposse/terraform-datadog-monitor
is `module.context.enabled` true?
module.this.context.enabled = true and module.this.enabled = true
Is it possible that your map_config_paths is misconfigured? Can you run `terraform console` and check that `module.monitor_config.*` includes anything?
You have a file called kube.yaml
?
I suspect because we have `try` here, that file-loading errors are squashed.
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
(not deliberate btw)
the config we ship is k8s.yaml
So I think it’s just a case of the file not existing in the default path.
i renamed it to kube.yaml
i can name it back to k8s. I also tried with wildcard initially
I suspect the path is relative to the module
you might try to use ./catalog/...
yeah terraform console doesn’t give me map config
> module.monitor_configs.*
[
{
"all_imports_list" = []
"all_imports_map" = {
"1" = []
"10" = []
"2" = []
"3" = []
"4" = []
"5" = []
"6" = []
"7" = []
"8" = []
"9" = []
}
"list_configs" = []
"map_configs" = {}
},
]
got it
@Erik Osterman (Cloud Posse) yah must be that
doh
so it’s not relative to the module path, but I incorrectly put `map_config_paths = ["catalog/monitors/k8s.yaml"]` when the actual file was `catalog/k8s.yaml`. thanks guys
i assumed i was having trouble w/ context enabled
so I suspect the error handling for this will get fixed :point_up: when we upgrade the module to use our terraform-provider-utils
provider (cc: @Andriy Knysh (Cloud Posse))
does anyone have a recommend module or starting place to implement https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/aws-multi-accounts-tutorial ?
Learn how to configure single sign-on between Azure AD and Amazon Web Services (legacy tutorial).
that article misuses the term “AWS SSO”… that’s a whole ‘nother service. the article is just putting an iam identity provider in every account you want Azure AD SSO to connect to
step 1 is to decide how you want users to auth into accounts. do you want a single identity account for principals? SSO or otherwise, this requires users to assume-role to authenticate to their target account
i want to put azure AD as the identity provider to our new users account
or do you want users to auth directly into the target account and role, using only their SSO identity
then allow them from there to assume roles in other accounts
that sounds like the former to me
yes it will be
the next step is to decide whether you want to use AWS SSO, or use the IAM identity provider. the latter is what is described in that doc
you can use Azure AD SSO -> AWS SSO, so you still maintain identities in a single place
this is where i am looking for peoples recommendations
i would like to do it “properly”
the doc you linked has a link to this one, for using actual AWS SSO… https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/amazon-web-service-tutorial
Learn how to configure single sign-on between Azure Active Directory and Amazon Web Services (AWS).
nope, that’s wrong too, that’s still the iam identity provider. dang azure docs
here’s the aws sso doc, using azure ad as the IdP, https://docs.aws.amazon.com/singlesignon/latest/userguide/azure-ad-idp.html
Learn how to set up SCIM provisioning between Azure AD and AWS SSO.
would this be your recommended approach
i honestly don’t have a recommendation on this topic. the point of mgmt shifts around between the two, and if you don’t try both then it’s just really hard to say what will best suit your environment and users
makes sense
i will trial both over the coming days and see
one thing i do not trust much is the cli integration of the first approach, azure ad sso -> iam identity provider. if you need cli credentials, and if azure ad is your IdP, then i might lean into the aws sso connection as an intermediary. then you can use awscliv2 to get credentials
if you were using okta, that would not be a concern, as okta has a very strong api and developer community maintaining all sorts of cli utilities for authenticating against the okta api
if you only/primarily need console access, or if the new “cloud shell” is a sufficient working environment for cli users, then that’s not a concern either
I will need console and CLI as I need to allow assuming roles locally to encrypt secrets using SOPs via KMS
i’m only finding this utility for that integration, when using azure ad -> iam identity provider… https://github.com/sportradar/aws-azure-login
Use Azure AD SSO to log into the AWS via CLI. Contribute to sportradar/aws-azure-login development by creating an account on GitHub.
many blog posts, but they all come back to that
but i call hot garbage on any sso tool for aws cli auth that doesn’t mention credential_process
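(for context, `credential_process` is the AWS CLI/SDK config hook that delegates credential generation to an external program; a minimal sketch of an `~/.aws/config` profile, where the helper binary path is hypothetical:)

```ini
# ~/.aws/config
[profile azure-sso]
# the CLI runs this program and expects a JSON credentials payload on stdout
credential_process = /usr/local/bin/my-sso-helper --profile azure-sso
region = us-east-1
```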
Yeh I saw that repo earlier today
my current place have done it before but I am really not convinced with the TF code as it seems pretty hacky
anyone know what happens in the terraform registry when you rename a terraform module git repo? does it keep the stats? does it pick up the redirect automatically? do we need to resubmit it, etc?
Not sure. Try it and let us know , I would think it would maintain the reference because a git clone does but yea who knows what else they are doing.
Oh my gosh, been begging for this for years, was just merged! I might actually shed tears of joy/relief… https://github.com/hashicorp/terraform-provider-aws/issues/17510
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
Of course now I need the same for users and groups, but roles first is good with me
hm, so exclusive management is now the only approach possible? Or is there some way to enable/disable exclusive management
oh, I get it now. It’s like security groups
The aws_iam_role resource can specify managed policies / inline policies as part of itself, for exclusive management.
Or you can specify them as separate resources, if you want non-exclusive management
Yes, same idea as how security groups work… You can manage attachments as separate resources (non-exclusive), or as part of the role resource itself (exclusive)
I always pursue architectures and operating models that work with exclusive management, so I can use it to enforce drift remediation
Ideal. We use attachments where we have dependency ordering. Can’t make exclusive bastion security group without creating application security groups for example…
i just found a use case for this just now! Damn… now I’m hanging out for 3.30
The wonderful thing about security groups, is that you can attach more than one to the things. If you sketch it out, you can use exclusive rules for every scenario. A group with rules for this set of things, a group with rules for that set of things, and a group for the relationship between those things…
not totally true. I’ve hit the 5 SG limit many times
I hear you on that! That limit is frustrating, and cause for much design reflection
We’re about to release an update to Cloudrail that includes this:
And it works for users, roles and groups @loren
(remediation steps will be updated )
thanks @Yoni Leitersdorf (Indeni Cloudrail)! i was actually thinking about you the other day. i was figuring the approach could be generalized a bit to most resources that work with “attachment” concepts… security groups and rules, routing tables and routes, etc…
Ah good point!
We hope to get this out in the coming days and would love your feedback once it’s available.
Is https://github.com/indeni/cloudrail-demo still the best intro to cloudrail?
Wait a few hours and I’ll give you a new URL - we’re updating the website, launching a web UI for the tool, etc.
ha ha. I added
managed_policy_arns = [
"arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
]
to a lambda execution role… and Terraform tried to detach this from every other Lambda in the account
WHATTTT?
You reported it?
that’s how it’s meant to work, right?
I might be misunderstanding you, but you attempted to set the policy on a specific role and it ended up trying to remove that role from all the lambdas?
The new functionality Loren linked in this thread lets you write:
resource aws_iam_role foo {
name = "foo"
managed_policy_arns = [ ... ]
}
And Terraform will remove any role attachments of the managed policies that aren’t to foo
Wait… I thought it was the other way around. I thought the goal was to ensure foo was only attached to the policies you want, and if someone else attached a policy you didn’t want to the role, it would be removed.
In other words, if your code has foo
with managed_policy_arns = [1, 2, 3]
and someone attached policy 4
to foo, Terraform will detach it.
I thought so too! The behaviour I saw seemed a little useless
Weird…
Other way around, or supposed to be. There is already a resource that does what you describe, I think, manage the roles a policy is attached to
Check what resource you were using
What you’re describing is this resource, “aws_iam_policy_attachment | Resources | hashicorp/aws | Terraform Registry” https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_policy_attachment |
dang, you’re right
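to summarize the three resources being conflated in this thread (a sketch; names and the assume-role policy are illustrative): `managed_policy_arns` on `aws_iam_role` exclusively manages which policies the *role* has, while `aws_iam_policy_attachment` exclusively manages everything a *policy* is attached to — which is why it detached the AWS-managed policy from every other role:

```hcl
data "aws_iam_policy_document" "assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

# exclusive from the role's side: this role has exactly these policies
resource "aws_iam_role" "foo" {
  name                = "foo"
  assume_role_policy  = data.aws_iam_policy_document.assume.json
  managed_policy_arns = ["arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"]
}

# exclusive from the policy's side: the policy is attached to exactly these
# roles/users/groups -- dangerous with shared AWS-managed policies, as seen above
resource "aws_iam_policy_attachment" "basic_exec" {
  name       = "basic-exec"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  roles      = [aws_iam_role.foo.name]
}

# non-exclusive: attach one policy to one role without claiming ownership of anything else
resource "aws_iam_role_policy_attachment" "basic_exec" {
  role       = aws_iam_role.foo.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```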
2021-02-23
Hi, can anyone please let me know how to pass a variables.tfvars file using the command
terraform plan -var-file=variables.tfvars
I am running the command terraform apply -var=variables.tfvars to pass the tfvars file
use
-var-file=
Thankyou it is working for me
-var-file, you can use this
ok sure
I am writing a terraform script to launch an MSK cluster in AWS. Does anyone have reference scripts to share?
Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.
Hey guys, I’m writing an ECR terraform module for use with my EKS clusters. I believe I need to add this policy to the worker NodeInstanceRole for the cluster to be able to pull images from the ECR repo: https://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_on_EKS.html
You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.
In the eks module vars, I cannot find a way to add this policy to the noderole https://github.com/cloudposse/terraform-aws-eks-cluster/blob/0.32.1/variables.tf
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
This module creates the cluster and the master nodes - you want to look at something like https://github.com/cloudposse/terraform-aws-eks-workers and supply aws_iam_instance_profile_name
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Is there any way to add an additional node role policy via the cloudposse/eks repo, or will I have to do this externally to the module?
Oh I can see the cloudposse/ecr tf module already caters for this, will try it out. Nice! https://github.com/cloudposse/terraform-aws-ecr/tree/0.31.1
Terraform Module to manage Docker Container Registries on AWS ECR - cloudposse/terraform-aws-ecr
Hi amazing people,
I have one question about <https://github.com/cloudposse/terraform-aws-rds>:
why do we need `subnet_ids` here? Is it just for making the database available in at least two or more availability zones? (does it fulfill any other requirement?)
I’m not with Cloudposse, but according to this code block, it takes a list of subnet IDs to create a unique subnet group. Normally when calling the `db_instance` resource, you’d need to provide a subnet group ID and not just subnet IDs; that’s just how RDS was designed.
ah okay, thanks a lot Mike, so it’s a requirement when creating an RDS instance
Yup. If you were creating an RDS instance through AWS console, you’d need to provide a subnet group ID, which you’d need to have already created.
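roughly what the module does under the hood (a sketch of the underlying resources, not the module’s exact code; names and sizes are illustrative):

```hcl
# the subnet IDs become a DB subnet group...
resource "aws_db_subnet_group" "default" {
  name       = "example-db"
  subnet_ids = var.subnet_ids # must span at least two AZs
}

# ...and the instance references the group, not the subnets directly
resource "aws_db_instance" "default" {
  identifier           = "example-db"
  engine               = "postgres"
  instance_class       = "db.t3.micro"
  allocated_storage    = 20
  username             = "app"
  password             = var.db_password
  db_subnet_group_name = aws_db_subnet_group.default.name
  skip_final_snapshot  = true
}
```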
cross posting from the hangops slack - Hashicorp has reversed and decided to allow the use of ‘undeclared vars’ in tfvars going forward. https://github.com/hashicorp/terraform/issues/22004
Current Terraform Version Terraform v0.12.3 Use-cases In our current project, we use a common.tfvars file to store shared configurations across multiple modules and plans. Up until recently, this h…
wow!
Anyone noticed anything like https://github.com/hashicorp/terraform/issues/27214#issuecomment-784229902? `terraform plan` vs `terraform show plan` in 0.14.X
Current Terraform Version 0.14.2 Use-cases Silence all of the module.abc: Refreshing state… [id=abc] output in plans and applies so that the output is more concise and easier to review. This is e…
what Revert sensitive = true outputs why Cannot see the difference in task definitions in terraform plan due to sensitive = true references Revert #118
Terraform will perform the following actions:
# module.ecs_alb_service_task.aws_ecs_task_definition.default[0] will be updated in-place
~ resource "aws_ecs_task_definition" "default" {
# Warning: this attribute value will be marked as sensitive and will
# not display in UI output after applying this change
~ container_definitions = (sensitive)
id = "userservices-global-build-info-service"
+ ipc_mode = ""
+ pid_mode = ""
tags = {
"Environment" = "global"
"Name" = "userservices-global-build-info-service"
"Namespace" = "userservices"
"bamazon:app" = "build-info-service"
"bamazon:env" = "global"
"bamazon:namespace" = "bamtech"
"bamazon:team" = "userservices"
}
# (9 unchanged attributes hidden)
}
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Awesome @jose.amengual
we’re reverting the sensitive output
the pr you pointed to reverts it
full discussion is in this pr https://github.com/cloudposse/terraform-aws-ecs-container-definition/pull/118
what Marks the outputs as sensitive Update workflows etc. missed by #119 why Otherwise TF 0.14 would give an Error: Output refers to sensitive values when using these outputs to feed into other …
Aye, seen that and commented
new to me: a colleague just pointed out this project, kind of a python-pytest equivalent of terratest? https://github.com/GoogleCloudPlatform/terraform-python-testing-helper
Simple Python test helper for Terraform. Contribute to GoogleCloudPlatform/terraform-python-testing-helper development by creating an account on GitHub.
I may have found a bug in service-control-policies/aws, unless I’m simply doing something wrong. Adding policies is working great, and extending a policy by adding additional policy files is also working. However, when I try to remove something from a policy, that is not working. For example: I currently have two policy files in use. I add a third, and I can see the additions in terraform plan. However, if I remove one of the policy files from list_config_paths, leaving only one policy file, then terraform plan says no changes are to be applied.
It sounds like this is similar to the aws_security_group
/ aws_security_group_ingress_rule
setup.
Are you using two sorts of resources, one to manage the base resource, and another to manage policy attachments to it?
I created my own module that contains this:
module "this" {
source = "cloudposse/label/null"
version = "0.22.1"
enabled = var.enabled
namespace = var.namespace
environment = var.environment
stage = var.stage
name = var.name
delimiter = var.delimiter
attributes = var.attributes
tags = var.tags
additional_tag_map = var.additional_tag_map
label_order = var.label_order
regex_replace_chars = var.regex_replace_chars
id_length_limit = var.id_length_limit
context = var.context
}
module "yaml_config" {
source = "cloudposse/config/yaml"
version = "0.1.0"
list_config_local_base_path = var.list_config_local_base_path != "" ? var.list_config_local_base_path : path.module
list_config_paths = var.list_config_paths
context = module.this.context
}
data "aws_caller_identity" "this" {}
module "service-control-policies" {
source = "cloudposse/service-control-policies/aws"
version = "0.4.0"
service_control_policy_statements = module.yaml_config.list_configs
service_control_policy_description = var.service_control_policy_description
target_id = var.target_id
context = module.this.context
}
I’m calling the module like this:
module "create_and_apply_scp" {
source = "git::<my bitbucket repo>"
enabled = true
environment = "sandbox"
stage = "ou"
name = "nonpci" # No underscores allowed
list_config_local_base_path = ""
list_config_paths = [
"scp_templates/deny_marketplace.yaml",
"scp_templates/default_scps.yaml",
"scp_templates/deny_eks.yaml"
]
service_control_policy_description = "Non-PCI OU SCPs"
target_id = module.create_ou.id
}
@Scott Cochran I think @jose.amengual has run into this problem
Did you figure it out?
Unfortunately, no.
Scott, you could add the outputs of the caller module to the service control policy module to see if the yaml is actually correct
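e.g. a throwaway output on your wrapper module (output name is illustrative) to inspect what the yaml module actually parsed:

```hcl
# temporary debug output: dump the parsed SCP statements from the yaml module
output "debug_list_configs" {
  value = module.yaml_config.list_configs
}
```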
I had a problem yesterday using the raw.github url that did not update the file right away, and my plan was empty
I realize after a few hours that the problem was the raw.url
(basically, seems to be an eventual consistency problem)
Discussion of a “test” command becoming native to terraform… https://twitter.com/mitchellh/status/1364273416178556928?s=19
This is an experiment (can’t stress this enough!) but I think folks will be really really happy to hear that the core Terraform team is researching and working on official integration test support. https://github.com/hashicorp/terraform/pull/27873 (stressing again: experimental, research phase)
Yeah, this needs to happen. apparentlymart has had this repo up for years and it provides a possible approach which seems like it would add something.
An experimental Terraform provider to assist in writing tests for Terraform modules - apparentlymart/terraform-provider-testing
Ah and now that I read that PR — the work they merged is actually just an extension of that provider. Awesome.
posted a question about one of my pain points with testing modules… https://discuss.hashicorp.com/t/question-about-the-prototype-test-command-workflows/21375
I posted this question in the PR adding the prototype for the new test command, but was directed here. One thing I’ve run into using terratest, is tests of modules that use count/for_each, and issues where the test config generates the resources passed to those expressions in a way where the index/label cannot be determined until apply. My workaround has been to support a “prereq” config for each test that is applied first, and to read the outputs via the data source terraform_remote_state . H…
2021-02-24
Today’s an exciting day for us as we officially launch Cloudrail - a second generation security analysis tool for Terraform: http://indeni.com/cloudrail/
Basically, we looked at the good work done by the guys at checkov (congrats btw), tfsec and others, and decided to take it one step further. Cloudrail takes the TF plan, merges it in memory with a snapshot of the cloud account, and then runs context-aware rules on it. A few things that allows us to do:
- When we look at an S3 bucket’s ACLs, we know if the account has public access block set or not. This allows us to ignore “public-acl” type settings if the account blocks it anyway.
- When we look at an IAM role/user/group, we can tell what policies are attached to it, even outside the TF code (in the cloud).
- When an RDS database is defined without specific VPC information, we can calculate what the default VPC looks like (if there is one), what its default security group and whether that will cause a problem. And a bunch more examples… Basically Cloudrail was built to be used in the CI pipeline from day one, so it’s meant to be very dev/devops friendly.
As a token of appreciation for this amazing forum, we will be giving access to Cloudrail for free until the end of June to any member of this Slack forum. Just DM me for access after you’ve signed up to Cloudrail. (after June, it will be 30-evaluations/month for free, though that is also expanded to unlimited if you’re part of an open source project)
Looking forward to checking this tool out when I have more time! Sounds awesome!
Hey does anyone here create DataDog dashboards using Terraform? As an engineer on a client team, I was just tasked with moving some of our dashboards to Terraform so we can create them for our dozen environments or so… and now I’m finding out that they don’t accept raw JSON and instead require that you write TF blocks for each widget. Seems excessive to me… and I’m wondering if any folks have a good workaround for that.
And I realize we can go the route of creating the dashboard via their API… but wondering if there is some middle ground / workaround that would make it nicer than a curl request via local-exec.
what about datadog provider https://registry.terraform.io/providers/DataDog/datadog/latest/docs
oh I see, you’re trying to pass down json instead of a block
see this loop, read json and create widget https://github.com/borgified/terraform-datadog-dashboard/blob/master/main.tf
autogenerate dashboards based on metric prefix. Contribute to borgified/terraform-datadog-dashboard development by creating an account on GitHub.
@Matt Gowie sounds like something that would fit nicely into the datadog module and catalog pattern using the looping
@Erik Osterman (Cloud Posse) Yeah — thinking the same. I will have my client’s team prove out the concept and then we can discuss open sourcing it.
But it would be awesome to provide catalogs for RDS, ElasticSearch, ALBs, EKS, etc. There is a ton of great reuse we could do there with catalogs to allow folks to pick and choose their own custom dashboards. I really like the idea.
it uses external data sources
data "external" "list_metrics" {
  program = ["bash", "${path.module}/list_metrics.sh"]
  query = {
    api_key = var.api_key
    app_key = var.app_key
    prefix  = var.prefix
  }
}
you could have a script for each catalog or something like that, I’m not sure. Can you paste a JSON sample of the dashboard?
Oh, I missed that script. Thought it was native HCL.
@Matt Gowie https://gist.github.com/jamengual/4c7dfd0c5ec957d4f33c6a34b28d8b81 I did to create a custom dashboards using the module shared by @Mohammed Yahya
it is still a script and it needs a bit of work
no native HCL yet
in the past I used terraformer after I created everything
potentially you could do that and templatize the dashboards and pass a yaml config to it to fill it up I guess
Huh… I didn’t see that external script either. That throws a wrench into things. We could still provide value from a local-exec based module / catalog, but it’s much less attractive.
yea, local-exec is something we really try to avoid in our public modules
just need to add jq
now to our utils provider.
I’ll dig into this further when I get some time… it should be possible to pull in the exported JSON from a dashboard and use dynamic to build the blocks that we’d want.
Yea, I think it’s best just to store the raw dashboards in VCS and parse them with HCL.
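A rough sketch of that pattern, assuming a simplified widget schema (file path is hypothetical, and the real datadog_dashboard widget blocks mix widget types and are more involved than this single-type example):

```hcl
locals {
  # dashboard exported from the Datadog UI and committed to VCS
  dash = jsondecode(file("${path.module}/dashboards/rds.json"))
}

resource "datadog_dashboard" "this" {
  title       = local.dash.title
  layout_type = local.dash.layout_type

  dynamic "widget" {
    for_each = local.dash.widgets

    content {
      # assumption: every widget is a timeseries; a real export would
      # need per-widget-type handling here
      timeseries_definition {
        title = widget.value.definition.title

        dynamic "request" {
          for_each = widget.value.definition.requests
          content {
            q            = request.value.q
            display_type = request.value.display_type
          }
        }
      }
    }
  }
}
```

The upside is the JSON stays editable via the UI export; the cost is keeping the dynamic blocks in sync with whichever widget types the dashboards actually use.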
For about a year, I was creating cloudwatch dashboards with terraform. They use pure JSON. I ripped this out about a month ago, in my opinion it was a huge mistake and you should author dashboard code in its native system.
The productivity loss and barrier to entry from writing JSON instead of using the native editor UI is so high it put people off changing the dashboard.
Now, the dashboard is managed by hand. A daily job backs it up to a GitHub repo, so we have some DR/revision control. For dynamic data in the dashboard (eg, the production ALB ID) which changes, we read the dashboard data and update a hardcoded list of locations ("update ALB ARN in the first metric of the graph called 'Live 5xx rate'"), and that’s it. In some ways it is uglier. But it’s much easier to add new graphs, etc.
so, authoring dashboards in JSON is not sustainable.
But for distributing shared dashboards / parameterizing them, I think the pattern of developing them in the UI, exporting to JSON, and distributing with terraform is reasonable
Yeah. I tried that. But whenever you want to edit the dashboard again, the process is still convoluted:
- Edit dashboard in UI
- Export it to JSON
- Paste over your current template in Terraform
- Convert all the hardcoded values to template variables again, using git diff
Step 4 always took ages
That’s a solid point, but if we treat the dashboards as catalogs that are not overly templatized then it would be possible to break them up into small enough chunks that they’d be reusable across organizations without updates. As in we can come up with an RDS dashboard that is complete enough that it is useful regardless of which organization you’re coming from and you won’t need to do updates to that dashboard’s configuration.
yes, pretty much every dashboard will be same most of the time so changing the formula in a configurable manner base on a set template should work fine
Using this module, I’d like to add another group to existing cluster https://github.com/cloudposse/terraform-aws-eks-node-group
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
anyone have example? I’m not sure how to get the existing role and add another group to it
module "eks_node_group_driver" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.3"

  subnet_ids                        = module.subnets.private_subnet_ids
  cluster_name                      = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
  existing_workers_role_policy_arns = ["module.eks_node_group.node_role_arn"]
  # cluster_name    = aws_eks_cluster.cluster.id
  # node_group_name = module.label.id
  # node_role_arn   =
  instance_types    = ["r5.4xlarge"]
  desired_size      = 1
  min_size          = 1
  max_size          = 1
  kubernetes_labels = var.kubernetes_labels
  disk_size         = 100
  resources_to_tag  = ["instance"]
  context           = module.this.context
}
@kgib are you referring to "module.eks_node_group.node_role_arn
?
That is just a role that would add supplemental permissions to the node group. It’s not required.
ok yea, it isn’t working the way I figured. It has no effect on the outcome. I’m just wondering how to get this node group associated with existing role
I’m a bit confused of your issue without more context. Are you passing a role to your existing module.eks_node_group
?
I’m trying to understand how to pass the role, it’s been unsuccessful thus far
Can you share your usage of module.eks_node_group
?
module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.3"

  subnet_ids        = module.subnets.private_subnet_ids
  cluster_name      = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
  instance_types    = ["c5d.4xlarge"]
  desired_size      = 8
  min_size          = 8
  max_size          = 8
  kubernetes_labels = var.kubernetes_labels
  disk_size         = 100
  resources_to_tag  = ["instance"]
  context           = module.this.context
}
You’re passing to eks_node_group_driver
the role from eks_node_group
which may be your issue. If you’re passing a specific role to eks_node_group
then you can likely use that same role.
Oh it looks like your issue is that you’re passing the same context without any changes.
Try adding the following argument to eks_node_group_driver
:
attributes = ["driver"]
ok…yea, I’m looking to add a node group, just to clarify
where eks_node_group
is exsting and eks_node_group_driver
is addition
Yeah, got that. What is likely happening is that you’re running into an issue with name collisions due to passing the same names (bundled together via module.this.context
) to both node group module usages. They have to be named differently within AWS to allow you to move forward.
I gotcha — Try out the above code re attributes
. I believe that will be enough to get you moving forward.
yea, that change seems to help
so then they each have their own policy ARN?
is that a desirable outcome?
Yeah that’ll be the result regardless. And you can customize each node group’s policies by passing any external roles to them via existing_workers_role_policy_arns
.
gives
Error: Error creating IAM Role existing-cluster-workers: EntityAlreadyExists: Role with name existing-cluster-workers already exists.
status code: 409, request id: c577f222-6e43-43e0-aa23-ae2848ecaa81
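For reference, a hedged sketch of the layout being suggested: two instantiations of the node-group module sharing one context, disambiguated with attributes so the generated IAM role and node group names don’t collide (only the relevant arguments shown; values illustrative):

```hcl
module "eks_node_group" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.3"

  cluster_name = var.cluster_name
  subnet_ids   = var.subnet_ids
  context      = module.this.context
}

module "eks_node_group_driver" {
  source  = "cloudposse/eks-node-group/aws"
  version = "0.18.3"

  cluster_name = var.cluster_name
  subnet_ids   = var.subnet_ids
  # distinct attributes append "driver" to the generated names,
  # avoiding the EntityAlreadyExists collision on the IAM role
  attributes = ["driver"]
  context    = module.this.context
}
```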
v0.15.0-beta1 Version 0.15.0-beta1
v0.15.0-beta1 0.15.0-beta1 (Unreleased) BREAKING CHANGES:
Empty provider configuration blocks should be removed from modules. If a configuration alias is required within the module, it can be defined using the configuration_aliases argument within required_providers. Existing module configurations which were accepted but could produce incorrect or undefined behavior may now return errors when loading the configuration. (https://github.com/hashicorp/terraform/issues/27739…
Here we have the initial implementation of configuration_aliases for providers within modules. The idea here is to replace the need for empty configuration blocks within modules as a "proxy…
Apologies if this isn’t the right forum for my question regarding the terraform-aws-efs module (https://github.com/cloudposse/terraform-aws-efs)
…it looks like when I upgrade module version from 0.27.0
to current 0.30.0
and then apply
my existing EFS filesystem gets destroyed/replaced and I get a new fs id.
Is this by design and/or unavoidable? Is there any way I can upgrade module version and not have my fs replaced?
Terraform Module to define an EFS Filesystem (aka NFS) - cloudposse/terraform-aws-efs
It’s not enough information - we need to see what is prompting the change.
For example, we updated all of our modules to use secure defaults. By default encrypted
is true now.
Perhaps this change is causing it. In that case, the fix is to set encrypted
to false, that way you’re being explicitly insecure
If you share the exact output from the terraform plan as a snippet, we can take a look
ah ok, so if I go back and look at the changes in apply, there will be something (in EFS or related)
yes, let me try that now
thanks for responding! I’m not really a devops guy by trade
ah, no worries, everyone is welcome here
yes I see…like you said:
~ encrypted = false -> true # forces replacement
so I would have to force this deployment to encrypted = false
to upgrade and keep the existing FS it sounds like?
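A minimal sketch of that pin, assuming the rest of the existing module arguments stay unchanged (elided here):

```hcl
module "efs" {
  source  = "cloudposse/efs/aws"
  version = "0.30.0"

  # newer module versions default `encrypted` to true, which forces
  # replacement of an existing unencrypted filesystem; pin the old
  # value explicitly to keep the current fs id
  encrypted = false
}
```

The tradeoff is being explicitly insecure, as noted above, in exchange for not replacing the filesystem.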
thx for help w this! I think I get it now
Greetings. I had posted a question on /r/terraform, based on a response from u/CrimeInBlink47, who mentioned that I should check in here and that cloudposse had published a module/provider that would allow for terragrunt-type yaml (2 levels max) merging. Which is the final reason (i think) i’m still using TG and not plain TF. BTW, love the videos i’ve seen so far, thanks for the content. And again Hello!
Hey @Jeff Dyke — I was the one responding to you on Reddit
Here is the provider I was talking about: https://github.com/cloudposse/terraform-provider-utils. Here is an example of usage for deep merging YAML: https://github.com/cloudposse/terraform-provider-utils#examples
Not sure if that is what you were asking for or if that’s what you need, but figured that might help.
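For the deep-merge use case, the provider’s README shows a data source along these lines (data source and attribute names taken from the repo examples; the yaml file paths here are hypothetical):

```hcl
terraform {
  required_providers {
    utils = {
      source = "cloudposse/utils"
    }
  }
}

# deep-merges a list of YAML documents, later entries winning
data "utils_deep_merge_yaml" "example" {
  input = [
    file("${path.module}/defaults.yaml"),
    file("${path.module}/prod.yaml"),
  ]
}

output "merged" {
  value = yamldecode(data.utils_deep_merge_yaml.example.output)
}
```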
The Cloud Posse Terraform Provider for various utilities (E.g. deep merging) - cloudposse/terraform-provider-utils
Thanks for confirming, i was looking at these today and it would seem to solve my problem. Another thing i do quite often is make use of TG path_relative_to_include
, is there something for that? I’ve only done limited Go programming, but if not, that could be a great addition.
Just to give you more of an idea, i’m not asking you for a solution. It allows me to have a single remote_state
per vpc, by using key = "prod/${path_relative_to_include()}/terraform.tfstate"
. I also marry the yaml to the directory structure, allowing me to load the correct server configs by using
inputs = {
  servers = local.servers[path_relative_to_include()]["servers"]
}
The merits of doing that could be argued. Thanks for the pointers.
Huh I don’t know if I know enough about TG to know what you’re referring to… but I don’t believe that has been needed so far. SweetOps has the idea of “Stacks” which utilize Yaml imports and that might solve your problem, but unfortunately it’s not well documented yet (though you can check out https://github.com/Cloudposse/atmos, https://github.com/cloudposse/terraform-yaml-config, and https://github.com/cloudposse/reference-architectures/tree/master/stacks for some idea of what I’m talking about).
Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - cloudposse/atmos
Terraform module to convert local and remote YAML configuration templates into Terraform lists and maps - cloudposse/terraform-yaml-config
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
Your question might be a good one to chat about during #office-hours too — Like “Hey I have X and Y problems in Terraform that TG solves… how are ya’ll doing that without TG?”
ok, cool thanks for the advice. Its nice to be politely introduced to the norms. Appreciate it.
Also, I’m sure a proposal and a PR would definitely get some attention on that utils provider — it’s new and it has been discussed that it will be built out as more use-cases come up.
2021-02-25
Q: Has anybody encountered problems running tfenv
on WSL? I’ve tried ubuntu and centos7, both getting similar errors. it works in windows10 git-bash, but not WSL(s).
msew@NOTEBOOK:~ $ which tfenv
/c/users/msew/.local/bin/tfenv
msew@NOTEBOOK:~ $ tfenv
/usr/bin/env: 'bash\r': No such file or directory
^^^ this seems like a windows/unix CRLF error.
siiiigh.. I had to recursively sed search/replace any \r’s out in all files. it has to do with the way my windows / git core.autocrlf is set (true).
cd ~/.tfenv
sed -i.bak 's/\r$//' ./bin/*
sed -i.bak 's/\r$//' ./lib/*
sed -i.bak 's/\r$//' ./libexec/*
Hi amazing folks I am trying to create a very simple RDS mysql - https://github.com/cloudposse/terraform-aws-rds
module "rds_instance" {
  source = "cloudposse/rds/aws"
  # Cloud Posse recommends pinning every module to a specific version
  version = "v0.33.0"

  namespace          = "backend"
  stage              = "dev"
  name               = "somename"
  dns_zone_id        = var.somezoneid
  host_name          = "somehostname"
  security_group_ids = [module.security-group-mysql-www.this_security_group_id]
  // ca_cert_identifier = "rds-ca-2021"
  allowed_cidr_blocks = var.dev-vpc-all-bsae-cidr-blocks

  database_name     = "mysqlwww1"
  database_user     = "goodone"
  database_password = "nicetry"
  database_port     = 3306

  multi_az             = false
  storage_type         = "gp2"
  allocated_storage    = 100
  storage_encrypted    = false
  engine               = "mysql"
  engine_version       = "8.0.20"
  major_engine_version = "8.0"
  instance_class       = "db.t3.medium"
  db_parameter_group   = "mysql8.0"
  // option_group_name = "mysql-options"
  publicly_accessible = false
  subnet_ids          = [var.dev-vpc-public-subnets[0], var.dev-vpc-public-subnets[1]]
  vpc_id              = var.dev-vpc-id
  snapshot_identifier = null

  auto_minor_version_upgrade  = true
  allow_major_version_upgrade = false
  apply_immediately           = false
  maintenance_window          = "Mon:03:00-Mon:04:00"
  skip_final_snapshot         = false
  copy_tags_to_snapshot       = false
  backup_retention_period     = 7
  backup_window               = "22:00-03:00"

  db_parameter = [
    { name = "myisam_sort_buffer_size" value = "1048576" },
    { name = "sort_buffer_size" value = "2097152" }
  ]
}
Strangely it’s giving an error for DB parameter groups
Error: Missing attribute separator
on 100-rds.tf line 65, in module "rds_instance":
65: { name = "myisam_sort_buffer_size" value = "1048576" },
Expected a newline or comma to mark the beginning of the next attribute.
I don’t think the syntax is wrong anywhere? Anything?
Terraform module to provision AWS RDS instances. Contribute to cloudposse/terraform-aws-rds development by creating an account on GitHub.
Think the key value pairs within db_parameter need to be on separate lines. See the full example: https://github.com/cloudposse/terraform-aws-rds/blob/master/examples/complete/main.tf#L48
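With each attribute on its own line (or comma-separated), the separator error goes away, e.g.:

```hcl
db_parameter = [
  {
    name  = "myisam_sort_buffer_size"
    value = "1048576"
  },
  {
    name  = "sort_buffer_size"
    value = "2097152"
  }
]
```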
yes it works now… the readme should also be updated accordingly i guess
Thanks a lot @Andy
Hello…I have upgraded one of my terraform modules from 0.12.29 to 0.14.5. Now I want to check if i can restore to the old state using an older statefile from s3 with the 0.12.29 version. Is this doable?
If you’ve got versioning enabled on the S3 bucket, shouldn’t be hard. If not, good luck.
Versioning is enabled on s3. Are there steps for a restore?
What ended up being the fix? delete current version of s3 object? copy from prior version? did you have to do anything with your dynamoDB lock for the rollback process?
I copied from the prior version. Nothing related to dynamodb.
:wave: I have two separate terraform projects (with their own terraform state file). Project A creates a lambda that Project B wants to reference. I was going to use the aws_lambda_function
data source. E.g.
data "aws_lambda_function" "existing" {
  function_name = var.function_name
}
How are you meant to handle the situation where Project A may have not yet created that lambda and so it won’t exist?
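One hedged pattern for that setup (bucket, key, and output names are all hypothetical): read Project A’s outputs via terraform_remote_state and fall back with try() when the output isn’t there yet. Note the caveat that try() only covers a missing output; if Project A’s state file itself doesn’t exist yet, the data source will still fail.

```hcl
data "terraform_remote_state" "project_a" {
  backend = "s3"
  config = {
    bucket = "example-tfstate"              # hypothetical
    key    = "project-a/terraform.tfstate"  # hypothetical
    region = "us-east-1"
  }
}

locals {
  # null until Project A has applied and exported the output
  lambda_arn = try(data.terraform_remote_state.project_a.outputs.lambda_arn, null)
}
```

Downstream resources can then be made conditional on local.lambda_arn being non-null.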
What are you using to manage tf? terraform cloud, scalr, env0, pulumi, or?
You mean where do we execute it? We use CircleCI and a bit of Jenkins
yes, as well as how you manage a large env with maybe 1500 workspaces
If you have that many workspaces, I would definitely not use Terraform Cloud. Their business tier pricing is complete robbery.
Check out spacelift.io — Great product company who is going to own the space very soon.
@Matt Gowie Thank you will look into spacelift.io. was looking at scalr, env0, pulumi, morpheus, fylamynt, and cloudify as options as well.
@Harold Reinstein I don’t know anything about Pulumi, Morpheus, Fylamynt, or Cloudify, but Scalr, Env0, TFC, and Spacelift did an #office-hours sessions a few weeks back: https://www.youtube.com/watch?v=4MLBpBqZmpM. If you’re evaluating then that is definitely worth looking at.
Those are the current mainstream TF automation tools out there today. My opinion on all of them —
- TFC is a bad offering right now — they’re not really providing much functionality and their pricing is BS.
- I’m still confused on what problem Scalr is solving.
- Env0 doesn’t have a Terraform Provider to automate their solution so they’re out.
- Spacelift looks great and checks all the boxes from what I can tell, so I’m looking towards steering my clients in that direction going forward.
Hi @Harold Reinstein, I am co-founder and CEO of env0. Feel free to reach out to me personally if you need any help or any questions.
Hi @Matt Gowie - we indeed do not yet have a Terraform provider, but we do have an API + CLI to trigger env0 without our GUI. Also, our custom flows https://docs.env0.com/docs/custom-flows allow lots of flexibility to automate actions before/after terraform init/plan/apply/destroy. That being said, a TF provider is definitely on our short term product roadmap.
You can create custom flows for a template that allow you to run whatever you want (bash, python, gcloud, ansible, cloudformation, etc.), whenever you want in the deployment process (before or after Terraform init/plan/apply, and even destroy/error). Create a file named env0.yml to define a custom …
2021-02-26
hi all, I am a bit confused by this module:
Terraform Module to Automatically Generate SSH Key Pairs (Public/Private Keys) - cloudposse/terraform-aws-key-pair
can it be used to create the keypair to provision/boot a machine ?
Sure can
currently I use something like this:
aws ec2 --profile "$customer" create-key-pair --key-name cloudposseisawesome-"$customer" --query 'KeyMaterial' --output text
and I store that on my local machine
the public key is auto stored in AWS though
If you set generate_ssh_key to true on the module, it will use the tls resource to generate a key for you. Otherwise it will use a local one you specify to create the aws key.
yeah true the private and public key gets generated on the workstation
how can I push this public key to the EC2 machine ?
Only on the initial provisioning: you specify the output key name on your instance
true
so if I use this, what does it do, does the key get stored on the machine ?
what we normally do here is: we generate a keypair with the command above, use that private key in our ansible framework, and our other admin keys get pushed via ansible
There’s an output for the private key on the module if you choose to have it generate one for you
hmmm, I am still a bit confused. we need the private key because that’s the one I will be using to connect to the machine that gets provisioned
but how does the public key get on the machine ?
if you specify it in the terraform config, it gets uploaded ?
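A minimal sketch of that wiring (AMI, names, and paths hypothetical; output names per the module’s README, worth double-checking): the module registers the public key with EC2, and pointing the instance’s key_name at it is what gets the public key into authorized_keys via cloud-init at boot.

```hcl
module "ssh_key_pair" {
  source = "cloudposse/key-pair/aws"
  # version pin assumed

  namespace           = "eg"
  stage               = "dev"
  name                = "app"
  ssh_public_key_path = "${path.module}/secrets"  # where the generated keys land locally
  generate_ssh_key    = true
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0" # hypothetical
  instance_type = "t3.micro"

  # cloud-init installs the registered public key into authorized_keys
  key_name = module.ssh_key_pair.key_name
}
```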
In the case of ansible, most of my ansible is deployed through a bastion and I like using this module, which stores the keypair in ssm parameter store. This way it’s not local and can be programmatically retrieved https://github.com/cloudposse/terraform-aws-ssm-tls-ssh-key-pair
Terraform module that provisions an SSH TLS Key pair and writes it to SSM Parameter Store - cloudposse/terraform-aws-ssm-tls-ssh-key-pair
The pub key is stored on aws yes as an ec2 key
ah ok, so you can discard it when the machine is setup ?
That’s what this resource does https://github.com/cloudposse/terraform-aws-key-pair/blob/9536c61866d0edd6c24a7feee75aa831f6581b12/main.tf#L16
excellent, can ansible query ssm to get the private key ?
There’s probably a way but I have a bash script that calls ansible; before ansible is run, it retrieves the key from ssm
that’s even more secure, thanks for the info Patrick !
I will lookup if I can fetch this via ansible, if so, I will let you know
Cool thanks
Hey all, I was checking out this module:
Terraform Module to provision a basic IAM user suitable for humans. - cloudposse/terraform-aws-iam-user
it does not support creation of groups, right?
Nope! It allows an IAM user to be part of a group through membership however. See:
• https://github.com/cloudposse/terraform-aws-iam-user/blob/master/main.tf#L21
• https://github.com/cloudposse/terraform-aws-iam-user/blob/master/variables.tf#L18
if you like terraform-docs and want to see anchors supported, upvotes plz
What problem are you facing? I cannot link to a specific variable in the markdown using a link How could terraform-docs help solve your problem? I'd like each variable to have an anchor associa…
talking of terraform-docs how do people handle using it with pre-commit hooks?
pre-commit git hooks to take care of Terraform configurations - antonbabenko/pre-commit-terraform
it’s painful when terraform-docs makes a change to its default sections or formats in a new version, and the team is suddenly using different versions
All of my pipelines run a validate stage that runs pre-commit install && pre-commit run -a
. So, the team can run whatever they want, but if it is different from what CI is running then they are responsible for fixing it so that the CI pipeline runs clean (which usually just requires that they sync the dependency version up with what CI is running)
We offer the ability to run the same docker image locally as the CI engine runs in the pipeline
.pre-commit-config.yaml can be used to pin the versions of the pre-commit plugins, which should avoid the “surprise update” problem
we also use a docker image with baked in tool versions. pinning the pre-commit versions also makes sense, though updating those pins could be annoying across a lot of repos
i am wondering now if having a docker image that people mount their local directories into would be ideal, that way the versions are the same as the versions used in CI
that’s how we do it, yep
it’s slightly annoying though, as it needs to handle git config, aws config, etc.
Here’s ours: https://github.com/saic-oss/anvil
We have a new docker-compose based pattern for local devs to use which is better than just running docker run
, just haven’t gotten it open sourced yet
DevSecOps tools container, for use in local development and as a builder/runner in CI/CD pipelines. Not to be used to run production workloads. - saic-oss/anvil
we have a toolkit image we use with CI
the issue is how we could use this locally to keep the versions consistent
as you’d really need to mount ~
to ~/local
or something like that
How do you guys handle the pre-commit hooks when there are devs using different OSes? Windows, MacOS, Linux?
I’d recommend the “pinning” happens from an automatically updated “build harness”. Only locally pin in individual repositories explicitly for next-gen testing or pinning due to “legacy” reasons.
@zeid.derhally that part… we recommend everyone uses our maintained docker “tools” container as their shell. It’s too painful to maintain all.
Once WSL works better under our corporate bastardized Windows machines maybe we can think about doing it that way but so far WSL doesn’t perform well enough
we have them enabled but when someone does a brew update
all hell breaks loose
I wrote up a proposal for the KMS module about supporting more flexible ways of customising key policy. I’m interested in feedback from maintainers and users who use non default policies: https://github.com/cloudposse/terraform-aws-kms-key/issues/25
This module currently creates KMS keys with a policy stating "any IAM user/role can do anything with this key". If you want a more restrictive policy, you have to write it yourself. I thi…
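For context, a sketch of the kind of customisation the proposal is after, assuming the module exposes a policy input (account ID hypothetical):

```hcl
# a restrictive policy authored by the caller instead of the
# module's permissive default
data "aws_iam_policy_document" "key" {
  statement {
    sid       = "AllowRootAccount"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::123456789012:root"] # hypothetical
    }
  }
}

module "kms_key" {
  source = "cloudposse/kms-key/aws"

  policy = data.aws_iam_policy_document.key.json
}
```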
@Andriy Knysh (Cloud Posse) seems reasonable to me
@jose.amengual
@Alex Jurkiewicz sounds good to me as well, thanks
2021-02-27
2021-02-28
anyone using https://github.com/localstack/localstack/releases/tag/v0.12.7 with Terraform for offline testing?
Change Log: LocalStack release 0.12.7 1. New Features initial support for Kinesis consumers and SubscribeToShard via HTTP2 push events add LS_LOG option to customize the default log level add Clou…