#terraform (2021-09)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-09-01
having a weird issue setting up sso with iam-primary-roles, after authenticating with google workspace, Leapp opens the aws console. i’m not sure where the misconfiguration is, but my user isn’t getting the arn:aws:iam::XXXXXXXXXXXX:role/xyz-gbl-identity-admin role assignment. i’m also not sure if i’m supposed to use the idp from the root account or from the identity account. any help is appreciated!
Hi are you using AWS Single Sign-on or a federated role with Google workspace?
a federated role w/ google
This is the doc about your use case:
https://docs.leapp.cloud/use-cases/aws_iam_role/#aws-iam-federated-role
required items are:
• session Alias: a fancy name
• roleArn: the role arn you need to federate access to
• Identity Provider arn: It’s in the IAM service under Identity Providers
• SAML Url: the url of the SAML app connected to google workspace
Leapp is a tool for developers to manage, secure, and gain access to any cloud. From setting up your access data to activating a session, Leapp can help manage the underlying assets to let you use your provider CLI or SDK seamlessly.
On the topic of version tracking of iac, such that only resources in the plan get the new tag, I found, amazingly, it should be possible to do with https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging#ignoring-changes-in-all-resources. I’m going to try this:
locals {
  iac_version = ...get git short hash...
}

provider "aws" {
  ...
  default_tags {
    tags = {
      IAC_Version = local.iac_version
    }
  }
  ignore_tags {
    keys = ["IAC_Version"]
  }
}
fascinating!
ok, please report back.
I’ve struggled to see a use-case for provider default tags b/c we use null-label and tag all of our resources explicitly.
but I would like to use this if it works in our root modules.
You can use a var for this, but not a data source or resource, because the provider is instantiated before any resources or data sources run
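To make that concrete, here is a minimal sketch of the variable-based approach (the variable name and the TF_VAR mechanism are illustrative, not from the thread; the hash would be supplied at plan/apply time, e.g. via TF_VAR_iac_version):

variable "iac_version" {
  type        = string
  description = "Short git hash of the IaC repo, supplied at plan/apply time (e.g. TF_VAR_iac_version)"
  default     = "unknown"
}

provider "aws" {
  default_tags {
    tags = {
      IAC_Version = var.iac_version
    }
  }

  # Without this, every resource would show a diff each time the hash changes.
  ignore_tags {
    keys = ["IAC_Version"]
  }
}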
It’s a nice idea though. I wanted to use Yor for this, but found it quite buggy. This approach would get you 80% of the way for 5% of the effort
provider default_tags are kinda nice as aws and the aws provider add support for tagging more types of resources… you can at least get the default tags on those resources without an update to the module, which can also serve as a notification that, hey, the module needs an update
but the current implementation of default_tags leaves a bit to be desired, between errors on duplicate tags and persistent diffs
Thanks for this idea Oliver. I replaced our complex WIP integration of Yor with something much simpler. The Terraform CD platform we use (Spacelift) provides a bunch of variables automatically, so we just have to take advantage of them:
provider "aws" {
default_tags {
tags = {
iac_repo = var.spacelift_repository
iac_path = var.spacelift_project_root
iac_commit = var.spacelift_commit_sha
iac_branch = var.spacelift_commit_branch
}
}
}
variable "spacelift_repository" {
type = string
description = "Auto-computed by Spacelift."
}
variable "spacelift_project_root" {
type = string
description = "Auto-computed by Spacelift."
}
variable "spacelift_commit_sha" {
type = string
description = "Auto-computed by Spacelift."
}
variable "spacelift_commit_branch" {
type = string
description = "Auto-computed by Spacelift."
}
Correction to the above. Having every update to any resource cause every resource to get modified in the plan was very annoying. We dropped iac_commit
@Alex Jurkiewicz @Erik Osterman (Cloud Posse) you forgot to use ignore_tags
so obviously you get everything modified, that’s what I explained during the office hours. Ignore-tags will configure the provider to ignore the tag when determining *which* resources to update. Only resources that need updating for some other reason will get the new value of the tag. Look at my original example. It has it.
i saw that, but it seemed a little magic for me
very clever idea tho
Ignore-tags will configure the provider to ignore the tag when determining *which* resources to update. Only resources that need updating for some other reason will get the new value of the tag.
now i get it. yes, clever indeed.
2021-09-02
Hello !
I am maintaining state in S3 and using DynamoDB for state locking. I had to make a manual change to the state file and successfully uploaded the updated state file, but running any tf command now errors out because the md5 digest of the newly uploaded file doesn’t match the entry in the DynamoDB table. It looks like the solution is to manually update the digest in the table entry corresponding to the backend. I just wanted to be sure there isn’t another way to have Terraform regenerate/repopulate DynamoDB with the updated md5
easy button is to just delete the item from the dynamodb and let terraform auto-generate it
I am using the tfstate-backend module and noticed some odd behavior. This only happens when using a single S3 bucket to hold multiple state files. For example, the bucket is named tf-state, the state file for VPC would be in tf-state/vpc, and the RDS state file would be in tf-state/rds. The issue is that the S3 bucket’s Name tag gets updated to whatever is set in the module’s name parameter. What ends up happening is that when VPC is created the Name tag is set to vpc, but when RDS is created the tag is updated to rds. This may be by design, but is there any way to override this and explicitly set the tag value to something other than what is set as name in the module?
Can you override it using tags input var?
@RB Yes, but it also updates the dynamoDB tag name. Is there any way to limit this to only the s3 bucket?
Ah no i don’t believe so. You’d have to submit a pr to tag resources differently
OK, thanks!
2021-09-03
I would like to use arn_suffix from the aws_lb data source, but receive the error below. I could see that option listed in the attributes of the aws_lb resource in the Terraform Registry:
Error: Value for unconfigurable attribute
on ../../modules/deployment/data_aws_lb.tf line 3, in data "aws_lb" "lb":
3: arn_suffix = var.arn_suffix
Can't configure a value for "arn_suffix": its value will be decided
automatically based on the result of applying this configuration.
Only values in Argument Reference can be supplied. Values in Attributes Reference are available to read only from the resource and can’t be set.
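In other words, arn_suffix can only be read back after looking the load balancer up by one of its arguments. A minimal sketch (the variable name is hypothetical):

data "aws_lb" "lb" {
  # look the LB up by a settable argument such as name or arn...
  name = var.lb_name
}

output "lb_arn_suffix" {
  # ...then read arn_suffix as an exported attribute
  value = data.aws_lb.lb.arn_suffix
}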
v1.0.6 1.0.6 (September 03, 2021) ENHANCEMENTS: backend/s3: Improve SSO handling and add new endpoints in the AWS SDK (#29017) BUG FIXES: cli: Suppress confirmation prompt when initializing with the -force-copy flag and migrating state between multiple workspaces. (#29438)…
AWS SSO is used in many organizations to authenticate users for access to their AWS accounts. It's the same scale organizations that would very likely also use Terraform to manage their infrast…
The -force-copy flag to init should automatically migrate state. Previously this was not applied to one case: when migrating from a backend with multiple workspaces to another backend supporting mu…
does anyone know a good module for AWS budgets before I create my own?
Hi guys recently I’ve been thinking of ways to make my terraform code DRY within a project, and avoid having to wire outputs from some modules to other modules. I came up with a pattern similar to “dependency injection” using terraform data blocks. Keen to hear your thoughts on this? And also curious how do folks organise their large terraform codebases? https://github.com/diggerhq/infragenie/
decompose your terraform with dependency injection - GitHub - diggerhq/infragenie: decompose your terraform with dependency injection
Nifty
2021-09-05
Hey guys, quick q: When using Terraform to manage your AWS account, how do you or your team deploy containers to ECS? Are you using Terraform to do it or some other process to create/update container definitions?
The answer is largely “it depends” based on a few factors. Is the service in question considered “part of the infrastructure”, such as a log aggregation system? In that case you might manage it entirely with terraform and specify upgrades to image tags and specs via module versioning and variables. If it’s part of your actual application layer you can do the same thing, but this could get in the way of your app teams managing their own deploys, and then you’re using terraform to deploy software. Or you can have terraform deploy an initial dummy container definition that uses a sort of ‘hello world’ service while ignoring any further changes to the Task Definition, and allow your CI/CD system to push new definitions directly to ECS.
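A rough sketch of that last pattern, with hypothetical resource names (the bootstrap task definition itself is omitted):

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.bootstrap.arn   # initial "hello world" definition
  desired_count   = 1

  lifecycle {
    # Terraform seeds the service once; CI/CD registers new task definition
    # revisions afterwards without Terraform trying to revert them.
    ignore_changes = [task_definition]
  }
}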
Yeah it’s application layer, using Terraform to apply updates by tagging images and passing the image tags to terraform as a var. I had no idea about https://www.terraform.io/docs/language/meta-arguments/lifecycle.html#ignore_changes if that’s what you are referring to? This seems like a really great solution, because with this small change to our ECS services I could hand over the container deploy to something like https://circleci.com/docs/2.0/ecs-ecr/ which seems like an attractive solution.
The meta-arguments in a lifecycle block allow you to customize resource behavior.
How to use CircleCI to deploy to AWS ECS from ECR
https://registry.terraform.io/modules/trussworks/ecs-service/aws/latest here’s an example module
Awesome! Thanks so much for your help
Good morning all!
I have a few quick questions - I think I am doing something wrong because I have not seen anyone else talk about this but here goes! -
I have been trying to use cloudposse/cloudfront-s3-cdn/aws in github actions to set up the infrastructure for my static site, and I have faced a few issues.
The first was when I was trying to create the cert for the site within main.tf, as per the examples in the README.md but I was getting an error about the zone_id being “”.
I solved that by supplying the cert arn manually.
Now I face the problem of after running terraform and applying the config via github actions, on the next run I get “Error creating S3 bucket: BucketAlreadyOwnedByYou” and it looks like it is trying to create everything again, even though it has been deployed and I can see all the pieces in the aws console. Here is a gist of my main.tf: https://gist.github.com/NeuroWinter/2e1877909ce06bd4ae2719b7d004f721
Sounds like you don’t have a backend set up to store your statefile
Terraform creates a JSON file after running apply that contains details of all infrastructure that was created. It uses this file on subsequent runs to know which infra it has already created.
Most commonly this is stored in S3 using the S3 backend. Read the docs for more info on how to configure this.
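A minimal backend configuration for this looks something like the following (bucket and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"            # pre-existing bucket
    key            = "static-site/terraform.tfstate" # path to this project's state
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-terraform-locks"            # optional, enables state locking
  }
}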
To repair your deployment it will take some tedious surgery, btw. The simplest approach would be to manually delete any resource that Terraform claims is in the way, so it can recreate them. (Once your state is set up)
Ahh that makes a lot of sense thank you @Alex Jurkiewicz ! I will read up on the docs on how to do that
Understanding what the statefile is and what terraform does with it (not too complicated) is important
2021-09-06
Hi folks - I appear to be having an issue with the following module: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
╷
│ Error: Invalid value for module argument
│
│ on main.tf line 40, in module "ecs_alb_service_task":
│ 40: volumes = var.volumes
│
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attributes "efs_volume_configuration" and "host_path" are required.
╵
The above is the error message I get when performing a Terraform plan
The section of code which it is complaining about looks like this:
dynamic "volume" {
for_each = var.volumes
content {
host_path = lookup(volume.value, "host_path", null)
name = volume.value.name
dynamic "docker_volume_configuration" {
for_each = lookup(volume.value, "docker_volume_configuration", [])
content {
autoprovision = lookup(docker_volume_configuration.value, "autoprovision", null)
driver = lookup(docker_volume_configuration.value, "driver", null)
driver_opts = lookup(docker_volume_configuration.value, "driver_opts", null)
labels = lookup(docker_volume_configuration.value, "labels", null)
scope = lookup(docker_volume_configuration.value, "scope", null)
}
}
dynamic "efs_volume_configuration" {
for_each = lookup(volume.value, "efs_volume_configuration", [])
content {
file_system_id = lookup(efs_volume_configuration.value, "file_system_id", null)
root_directory = lookup(efs_volume_configuration.value, "root_directory", null)
transit_encryption = lookup(efs_volume_configuration.value, "transit_encryption", null)
transit_encryption_port = lookup(efs_volume_configuration.value, "transit_encryption_port", null)
dynamic "authorization_config" {
for_each = lookup(efs_volume_configuration.value, "authorization_config", [])
content {
access_point_id = lookup(authorization_config.value, "access_point_id", null)
iam = lookup(authorization_config.value, "iam", null)
}
}
}
}
}
}
With vars for var.volumes declared like this:
variable "volumes" {
type = list(object({
host_path = string
name = string
docker_volume_configuration = list(object({
autoprovision = bool
driver = string
driver_opts = map(string)
labels = map(string)
scope = string
}))
efs_volume_configuration = list(object({
file_system_id = string
root_directory = string
transit_encryption = string
transit_encryption_port = string
authorization_config = list(object({
access_point_id = string
iam = string
}))
}))
}))
description = "Task volume definitions as list of configuration objects"
default = []
}
I am passing in the following:
volumes = [
{
name = "etc"
docker_volume_configuration = {
scope = "shared"
autoprovision = true
}
},
{
name = "log"
host_path = "/var/log/hello"
},
{
name = "opt"
docker_volume_configuration = {
scope = "shared"
autoprovision = true
}
},
]
If I update the module variables file in my .terraform folder to:
variable "volumes" {
type = list(object({
#host_path = string
#name = string
#docker_volume_configuration = list(object({
# autoprovision = bool
# driver = string
# driver_opts = map(string)
# labels = map(string)
# scope = string
#}))
#efs_volume_configuration = list(object({
# file_system_id = string
# root_directory = string
# transit_encryption = string
# transit_encryption_port = string
# authorization_config = list(object({
# access_point_id = string
# iam = string
# }))
#}))
}))
description = "Task volume definitions as list of configuration objects"
default = []
}
This applies with no problem. Any ideas, or should I submit a bug?
Terraform module which implements an ECS service which exposes a web service via ALB. - GitHub - cloudposse/terraform-aws-ecs-alb-service-task: Terraform module which implements an ECS service whic…
@David every key in the object has to be set or terraform will error out. this is a limitation in terraform itself.
@David see the optional experiment
Terraform module authors and provider developers can use detailed type constraints to validate the inputs of their modules and resources.
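At the time of writing this is still an experiment, so a module has to opt in explicitly. A rough sketch of what a volumes-style variable could look like with optional attributes (illustrative only, not the CloudPosse module's actual definition):

terraform {
  # required to use optional() in type constraints (Terraform 0.14+ experiment)
  experiments = [module_variable_optional_attrs]
}

variable "volumes" {
  type = list(object({
    name                        = string
    host_path                   = optional(string)       # unset attributes default to null
    docker_volume_configuration = optional(list(any))
    efs_volume_configuration    = optional(list(any))
  }))
  default = []
}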
i think i tried this, let me try again
yeah i tried setting the values to null
volumes = [
  {
    name      = "etc"
    host_path = null
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },
    efs_volume_configuration = {
      file_system_id          = null
      root_directory          = null
      transit_encryption      = null
      transit_encryption_port = null
      authorization_config = {
        access_point_id = null
        iam             = null
      }
    }
  },
  {
    name      = "log"
    host_path = "/var/log/hello"
    docker_volume_configuration = {
      scope         = null
      autoprovision = null
    },
    efs_volume_configuration = {
      file_system_id          = null
      root_directory          = null
      transit_encryption      = null
      transit_encryption_port = null
      authorization_config = {
        access_point_id = null
        iam             = null
      }
    }
  },
  {
    name      = "opt"
    host_path = null
    docker_volume_configuration = {
      scope         = "shared"
      autoprovision = true
    },
    efs_volume_configuration = {
      file_system_id          = null
      root_directory          = null
      transit_encryption      = null
      transit_encryption_port = null
      authorization_config = {
        access_point_id = null
        iam             = null
      }
    }
  },
]
but just moans about this:
│ Error: Invalid value for module argument
│
│ on main.tf line 40, in module "ecs_alb_service_task":
│ 40: volumes = var.volumes
│
│ The given value is not suitable for child module variable "volumes" defined at .terraform/modules/ecs_alb_service_task/variables.tf:226,1-19: element 0: attribute "docker_volume_configuration": list of object required.
╵
typically, a list of objects can be zeroed using [], and a singular object can be passed as null
you’re giving docker_volume_configuration a map instead of a list
this
docker_volume_configuration = {
scope = "shared"
autoprovision = true
},
should be
docker_volume_configuration = [{
scope = "shared"
autoprovision = true
}],
see
attribute "docker_volume_configuration": list of object required.
didn’t spot the [] and {}
volumes = [
  {
    name                     = "etc"
    host_path                = null
    efs_volume_configuration = []
    docker_volume_configuration = [{
      autoprovision = true
      driver        = null
      driver_opts   = null
      labels        = null
      scope         = "shared"
    }]
  },
  {
    name                        = "log"
    host_path                   = "/var/log/gitlab"
    efs_volume_configuration    = []
    docker_volume_configuration = []
  },
  {
    name      = "opt"
    host_path = null
    docker_volume_configuration = [{
      autoprovision = true
      scope         = "shared"
      driver        = null
      driver_opts   = null
      labels        = null
    }]
    efs_volume_configuration = []
  },
]
this works
Nice, glad you got it working!
me too, i really appreciate the help
Np!
I’m having a similar issue to this one, but I’m trying to use efs_volume_configuration instead of docker_volume_configuration. I am correctly passing the docker config as an empty list to avoid the problem of a required option, but then when I go to apply, I get the following error:
Error: ClientException: When the volume parameter is specified, only one volume configuration type should be used.
So the module requires me to pass both configurations, but even when one is an empty list, ECS complains that both are provided. Is there any way around this problem? @RB any ideas?
the volumes block:
volumes = [{
  name                        = "html"
  host_path                   = "/usr/share/nginx/html"
  docker_volume_configuration = []
  efs_volume_configuration = [{
    file_system_id          = dependency.efs.outputs.id
    root_directory          = "/home/user/www"
    transit_encryption      = "ENABLED"
    transit_encryption_port = 2999
    authorization_config    = []
  }]
}]
Try setting docker_volume_configuration to null instead
@RB no bueno:
Error: Invalid dynamic for_each value
on .terraform/modules/ecs-service/main.tf line 70, in resource "aws_ecs_task_definition" "default":
70: for_each = lookup(volume.value, "docker_volume_configuration", [])
|----------------
| volume.value is object with 4 attributes
Cannot use a null value in for_each.
could you create a ticket with a minimum viable reproducible example in the https://github.com/cloudposse/terraform-aws-ecs-container-definition repo ? doing this would be easier to debug locally.
if this is truly the case, then the issue may be with the terraform resource itself because it should respect passing in null as if the param is not passed in. if it’s not honoring that, then the terraform golang resource in the aws provider is to blame rather than the module itself
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - GitHub - cloudposse/terraform-aws-ecs-container-…
will do
@RB the volumes variable is in ecs-service, not aws-ecs-container-definition. are you sure you want me to submit the issue in the latter?
or maybe i’m not understanding the distinction between volumes_from in the container definition module and volumes in the service module
the ecs service module feeds it into the container definition module
ok so i can just use my volumes arg verbatim as the value for volumes_from in my reproducer?
appears not. can i give you a reproducer that uses ecs-service instead?
I’m using terraform-aws-ecs-alb-service-task
issue submitted: https://github.com/cloudposse/terraform-aws-ecs-container-definition/issues/147
thanks for your help, RB
Describe the Bug I'm trying to use an EFS volume in an ECS service definition. The volumes variable is defined such that one has to supply a value for both the efs_volume_configuration and dock…
2021-09-07
Hi All! How long approximately should it take to deploy AWS MSK? I use this module https://registry.terraform.io/modules/cloudposse/msk-apache-kafka-cluster/aws/latest and the deployment has been running for 20 min already and still nothing. Any feedback please?
module.kafka.aws_msk_cluster.default[0]: Still creating... [26m0s elapsed]
module.kafka.aws_msk_cluster.default[0]: Still creating... [26m10s elapsed]
It does take a while
Thank you!
Note that it’s not the module but the aws msk itself
I see, do we need to specify zone_id or is this an optional parameter?
Terraform module to provision AWS MSK. Contribute to cloudposse/terraform-aws-msk-apache-kafka-cluster development by creating an account on GitHub.
yup MSK takes ages to be ready
I see, do we need to specify zone_id or is this an optional parameter?
please suggest regarding this question
All the module arguments are shown in the readme. On the far right, it shows required yes or no
Hello, I am currently using this terraform module https://registry.terraform.io/modules/cloudposse/elastic-beanstalk-environment/aws/latest to create a worker environment, but I can’t find how to configure a custom endpoint for the worker daemon to post to the SQS queue.
Is there a terraform resource that can provide a custom endpoint? I don’t see one :(
Only one i can see is the environment resources endpoint url as an attribute but i don’t see a way to modify it like in the picture above
I am actually not too familiar with terraform. But after I looked here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elastic_beanstalk_environment , I don’t think so
There may be an open pull request in the aws provider? If not, they need all the contributions they can get :)
Have you seen https://www.theregister.com/2021/09/07/hashicorp_pause/ ? Thoughts on this?
A pause on community-submitted PRs
bummed, but glad they’re at least up front about it
Time to apply to Hashi
ya, so curious what the back story is here…
have they have some recent departures?
have they reached some tipping point?
have they had some incident reported and need to pause all contributions (E.g. like what happened to the linux kernel)?
I wonder where we can get more information about this? Any people you can get some commentary on this?
have they taken some time to pause and regroup on how to scale engineering of open source at this scale?
It’s really interesting to look at this in light of Docker’s issues in the open source world: https://www.infoworld.com/article/3632142/how-docker-broke-in-half.html
The game changing container company is a shell of its former self. What happened to one of the hottest enterprise technology businesses of the cloud era?
I doubt we can get anyone to comment publicly on it.
Not hugely forthcoming in the Reddit threads that I’ve been reading, but it seems that they are growing faster than they are hiring, compounded with some losses in the Terraform department and normal PTO/vacation overhead
Posted in r/Terraform by u/The-Sentinel • 60 points and 22 comments
I was reading a Tweet from Mitchell too, but I can’t find it now
@gooeyblob This is only for core which should not be noticeable to any end users since providers are the main source of external contribution and there is no change in policy there. This allows our core team to focus a bit more while we hire to fill the team more.
he was basically trying to downplay the situation
Basically it looks like Silicon Valley is hot af right now if you have Terraform skills; they literally cannot hire fast enough because everyone is hiring again after the pandemic and it’s a feeding frenzy
I wasn’t joking when I said it’s time to apply to Hashicorp, maybe it’s time to work for a big company…
I also think that a lot of companies haven’t really figured out working full remotely yet, it’s possible that they are having a people issue as well as a resourcing block which is slowing things down
I notice that their SF office isn’t listed on any job listings and they are all fully remote..
Looking at cash flow, Hashi is at a 5.2B valuation, 8 years old, with a Series E of 175m, so they have fuel in the tank to hire with, even if being at Series E and not revenue positive suggests they are having trouble monetizing their products
I think Hashi was mostly remote even pre-pandemic. I agree that the market is hot and it’s hard to find good people. There’s a lot of cash running around.
It’s the remote pool that is getting drained hardest now that so many tech companies have been pushed to go remote
could it be a cashflow issue?
The fresh one on the subject https://twitter.com/mitchellh/status/1435674131257651201?s=20
Sharing an update to the recent speculation around Terraform and community contributions. The gist is: we’re growing a ton, this temporary pause is localized to a single team (of many), and Terraform Providers are completely unchanged and unaffected. https://www.hashicorp.com/blog/terraform-community-contributions
Sharing a brief update on Terraform and community contributions, given some recent noise. TL;DR: Terraform is continuing to grow rapidly, we are scaling the team, and we welcome contributions. Also we are hiring! https://www.hashicorp.com/blog/terraform-community-contributions
Is there any existing solution for generating KMS policies that enable the interop with various AWS services?
Some services need actions others don’t, such as kms:CreateGrant. CloudTrail audits will flag that action being granted to services which don’t need it.
Seems like there ought to be a module for creating these policies which already knows the details of individual action requirements vs recreating policies from AWS docs on every project
dealing with exactly this right now, for cloudtrail, config, and guardduty. such a pain to figure out the kms policy and bucket policy!!
I started work on creating canned policies for every service in a PR for the cloudposse key module, but I am no longer actively working on it
If you wanted to improve everyone’s life a little bit, it might be a good launchpad
2021-09-08
• Terraform is not currently reviewing Community Pull Requests: HashiCorp has acknowledged that it is currently understaffed and is unable to review public PRs.
Terraform enables you to safely and predictably create, change, and improve infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned. - Be explicit that community PR review is currently paused · hashicorp/terraform@6562466
Only applies to terraform core
Not providers
I see.
Lets see how it plays out but I’m not particularly worried
For core I guess yes, maybe they don’t want specific features added by community - example terraform add command, but not sure why
We recently added a note to the HashiCorp Terraform contribution guidelines and this blog provides additional clarity and context for our community and commercial customers.
Hello,
We have an aws_directory_service_directory resource defined in a service, which creates a security group that allows ports 1024-65535 to be accessible from 0.0.0.0/0, and this is getting flagged by Security Hub because the AWS CIS standards do not recommend allowing ingress from 0.0.0.0/0 for TCP port 3389.
My question is how to restrict some of the rules in the resultant SG that gets created by the aws_directory_service_directory resource. How do you remediate this using terraform?
Anyone here using tfexec / tfinstall? https://github.com/hashicorp/terraform-exec
2021/09/08 13:15:58 error running Init: fork/exec /tmp/tfinstall354531296/terraform: not a directory
I feel like there are a few lies in this code here …
This one for example: https://github.com/hashicorp/terraform-exec/blob/v0.14.0/tfexec/terraform.go#L62-L74
As usual… nothing to see here. oh, funny :smile: … Yeah it was all a lie.
I had given a file instead of a directory as its workingDir.
And the error message was very confusing because it didn’t report THAT variable as “not a directory”
:wave: I have the following public subnet resource:
resource "aws_subnet" "public_subnet" {
for_each = {
"${var.aws_region}a" = "172.16.1.0"
"${var.aws_region}b" = "172.16.2.0"
"${var.aws_region}c" = "172.16.3.0"
}
vpc_id = aws_vpc.vpc.id
cidr_block = "${each.value}/24"
availability_zone = each.key
map_public_ip_on_launch = true
}
I want to reference the subnets in an ALB resource I’m creating. At the moment this looks like:
subnet_ids = [
aws_subnet.public_subnet["us-east-1a"].id,
aws_subnet.public_subnet["us-east-1b"].id,
aws_subnet.public_subnet["us-east-1c"].id
]
Is there a way to wildcard the above? I tried aws_subnet.public_subnet.*.id which doesn’t work because I think the for_each object is a map. What is the proper way to handle this?
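One way to express this (a sketch, not from the thread): since for_each produces a map of resources keyed by availability zone, values() turns it back into a list you can splat over.

locals {
  # all subnet IDs, regardless of how many AZs the map contains
  public_subnet_ids = values(aws_subnet.public_subnet)[*].id
  # equivalent: [for s in aws_subnet.public_subnet : s.id]
}

The ALB can then use subnet_ids = local.public_subnet_ids.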
v1.1.0-alpha20210908 1.1.0 (Unreleased) UPGRADE NOTES: Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph…
2021-09-09
does anyone know of an IAM policy that will let people view the SSM parameter names and that’s it? I don’t want them to be able to see the values.
“Secret” values would usually be encrypted using a KMS key, so controlling access to the KMS key could be enough if your intention is to hide only the encrypted values.
Otherwise, the only thing you can give would be ssm:DescribeParameters, I think.
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-access.html
Restrict access to Systems Manager parameters by using IAM policies.
just give them the ssm:DescribeParameters permission
they will be able to list and view individual parameters’ metadata but not the values
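A sketch of that policy in Terraform (the policy name is illustrative; DescribeParameters only returns metadata, while GetParameter/GetParameters are the calls that expose values):

data "aws_iam_policy_document" "ssm_list_only" {
  statement {
    sid       = "ListSsmParameterNames"
    effect    = "Allow"
    actions   = ["ssm:DescribeParameters"]
    resources = ["*"]   # DescribeParameters is a list call, typically granted on *
  }
}

resource "aws_iam_policy" "ssm_list_only" {
  name   = "ssm-describe-parameters-only"
  policy = data.aws_iam_policy_document.ssm_list_only.json
}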
I had a lot of tags to deploy, and not all resources support tagging. To be efficient, after trying many options to trigger a command on *.tf changes, I finally used watch terraform validate (inotifywait doesn’t seem to work on WSL + VS Code).
Hi People, I am creating an ECS service using tf 0.11.7. I have set the network_mode default to “bridge” for the ECS task definition, but the module can be reused with a different network_mode such as “awsvpc”. Since tf 0.11.* doesn’t support dynamic blocks, I need to find a way to achieve a dynamic block to set arguments such as network_configurations (based on the network_mode). Using locals I guess it can be achieved. Is there any other way to do it in tf 0.11.*?
You can use terraspace / terragrunt / other tools to do that, but I would advise updating your Terraform version a bit first…
has anyone managed to get terraform working when using federated SSO with AWS and leveraging an assume-role in the terraform configuration?
I think you can manage this situation with Leapp. Leapp also manages assuming a role from a federated role
Have you used https://github.com/99designs/aws-vault
A vault for securely storing and accessing AWS credentials in development environments - GitHub - 99designs/aws-vault: A vault for securely storing and accessing AWS credentials in development envi…
I started an open-source project to manage multi-account access in multi-cloud. It is a Desktop App that Manages IAM Users, IAM federated roles, IAM chained roles and automatically retrieving all the AWS SSO roles. Also, It secures credentials by managing the credentials file on your behalf and generates a profile with short-lived credentials only when needed. If you are interested in the idea, look at the guide made by Nuru:
https://docs.cloudposse.com/howto/geodesic/authenticate-with-leapp/
It’s an awesome tool. I am using it for interacting with dozens of AWS accounts, whether it’s IAM users + MFA or AWS SSO
ooof, I just corrupted my local state file and lost the state of a bunch of resources in my terraform (backup was corrupted too). I don’t actually care about the resources, is there a way I can force terraform to destroy the resources that map to my terraform code and reapply?
No. Run Terraform apply repeatedly and manually delete the resources it says are in the way. But this doesn’t work in all cases. If you had e.g. S3 buckets or IAM resources with a name prefix specified instead of a name, they will be missed
i was afraid of this
well first thing i’m doing is switching to versioned s3 backend
Good idea
Backup the bucket too :), learned that one after a coworker deleted said versioned bucket
ooof
2021-09-10
hey guys, has anyone ever implemented a description of what terraform is applying in the approval stage in CodePipeline? I can see what my terraform is planning in the terraform plan stage and I would like to pass these details to my approval stage, but approval does not support an artifact attribute. Anyone found a solution for this before?
We’re using Spacelift which does that. If you learn how to do it with codepipeline, lmk!
How do I access the ARN of a created resource in sibling modules belonging to the same main.tf file? I want to create an IAM user, and an ECR resource that needs that user’s ARN (check line 22). How do I reference it?
Check the outputs of the user module; then you would reference them prefixed with module and the module name, e.g. module.gitlab_user.user_arn
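A rough sketch of that wiring (the module source and its user_arn output come from the repo linked below; the ECR policy itself is only an illustration, not the asker's code):

module "gitlab_user" {
  source = "cloudposse/iam-system-user/aws"
  # pin a version appropriate for your setup

  name = "gitlab"
}

resource "aws_ecr_repository" "app" {
  name = "app"
}

resource "aws_ecr_repository_policy" "gitlab_push" {
  repository = aws_ecr_repository.app.name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowPushPull"
      Effect    = "Allow"
      Principal = { AWS = module.gitlab_user.user_arn } # output from the sibling module
      Action = [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ]
    }]
  })
}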
Thanks @pjaudiomv
Yes this explains modules and accessing their values https://www.terraform.io/docs/language/modules/syntax.html section Accessing Module Output Values
Modules allow multiple resources to be grouped together and encapsulated.
All of the cloudposse modules reference the inputs/outputs on the respective GitHub repo https://github.com/cloudposse/terraform-aws-iam-system-user#outputs
Terraform Module to Provision a Basic IAM System User Suitable for CI/CD Systems (E.g. TravisCI, CircleCI) - GitHub - cloudposse/terraform-aws-iam-system-user: Terraform Module to Provision a Basic…
Hello - First of all, thank you for having so many wonderful Terraform modules. I have a question about the aws-ecs-web-app module and task definitions. It seems like neither setting for ignore_changes_task_definition does quite what I need, so I sense I am ‘doing it wrong’, but I am struggling to find the happy path to doing the right thing.
When I update by pushing new code to Github and then run terraform apply, the module wants to switch the task definition back to the previous version. Setting ignore_changes_task_definition to True fixes that, but if I want to update the container size or environment variables, then those changes do not get picked up.
It seems like the underlying problem is my way of doing things (managing the Task Definition via Terraform) is coupling Terraform and the CI/CD process too tightly, and that either Terraform or CodeBuild should ‘own’ the Task Definition, but not both. I don’t see a clean way to create the Task Definition during the Build phase and set it during the deploy phase. The standard ECS deployment takes the currently-running task definition and updates the image uri. It looks like one needs to use CodeDeploy to do anything more advanced.
I don’t think I’m the first person to want Terraform not to change the revision unless I’ve made changes to the task definition on the Terraform side. How do others handle this? Or is my use-case outside of what the aws-ecs-web-app module is designed for?
If you made it here, thank you for reading!
I would use the web app module more as a reference for how to tie all the other modules together
you’ll quickly find yourself wanting to make changes
Thank you for the response - that was my sense. It is great to have a working end-to-end example, and it made it easy to set up a Github -> ECS pipeline.
Interestingly, after about a year, the only thing that we’re really missing for our use-case is the ability to generate task definitions after a successful container build. The web-app module got us almost 100% of the way there, and for that I’m grateful.
@Cameron Pope can you say a bit more about how you solved this problem? I’m running into the same conflict between CI/CD (Codefresh in my case) and Terraform. When ignore_changes_task_definition is on (which it is by default), I’m still getting Terraform wanting to update the task definition to a new revision with the sha256 of the new image as the tag, compared to the GitHub short rev for the CD. This breaks the web app deploy. :disappointed:
I think everything would be fine if it just honored the variable and actually ignored changes to the service’s task_definition. I don’t have a lot of changes to the instance count planned. I can’t figure out why it’s not honoring the setting.
2021-09-11
2021-09-12
2021-09-13
anyone hooked up the identity provider for EKS yet? any gotchas I should be aware of?
Hey guys, I’m writing the Terraform for a new AWS ECS Service. I want to deploy 6 (but effectively n) similar container definitions in my task definition. What’s the recommended way of looping over a data structure (a dict, or list of lists) and creating container_definitions?
- Is it supposed to be done with a JSON file and a data "template_file" block with some sort of comprehension?
- I’ve found https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ecs_container_definition but it doesn’t have any parameters for command, which is the part that needs to differ slightly between the container definitions
- I’ve also found https://github.com/cloudposse/terraform-aws-ecs-container-definition, not sure if anyone here has had any experience with it? I was going to experiment with for_each-ing it to create 6 container_defs I can then merge() in my resource "task_definition" - is this the right sort of approach?
I believe you want option 3
Just out of interest, can I just do this?
locals {
  celery_queues = {
    1 : ["queue1"],
    2 : ["queue2", "blah", "default"],
    ...
  }
}

resource "aws_ecs_task_definition" "celery" {
  for_each = local.celery_queues

  family                   = "celery"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "4096"
  memory                   = "8192"
  network_mode             = "awsvpc"
  execution_role_arn       = module.ecs_cluster.task_role_arn

  container_definitions = jsonencode([
    {
      name        = "celery_${each.key}",
      image       = blah,
      command     = ["celery", ${each.value}],
      environment = blah,
      essential   = true,
      logConfiguration = {
        logDriver = "awslogs",
        options = {
          awslogs-group         = log_group_name,
          awslogs-region        = log_group_region,
          awslogs-stream-prefix = log_group_prefix
        }
      },
      healthCheck = {
        command     = ["CMD-SHELL", "pipenv run celery -A my_proj inspect ping"],
        interval    = 10,
        timeout     = 60,
        retries     = 5,
        startPeriod = 60
      }
    }
  ])
}
Ya that would work too
awesome, thanks for the help. I’m a devops team of one, it’s so good to have somewhere to work through a solution!
Thanks in advance for any help
Hello everyone, I have a question: what is the best way to connect a TF module with an API?
AWS API Gateway?
Or something else
I was reading in TF doc HTTP API
2021-09-14
good afternoon guys, I think I’ve found a version issue with cloudposse/terraform-aws-ecs-web-app (version = “~> 0.65.2”). Is this a legit upper version limit, or is versions.tf perhaps a bit out of date? Thanks
tf -version
Terraform v1.0.2
on linux_amd64
Your version of Terraform is out of date! The latest version
is 1.0.6. You can update by downloading from <https://www.terraform.io/downloads.html>
- services_api_assembly.this in .terraform/modules/services_api_assembly.this
╷
│ Error: Unsupported Terraform Core version
│
│ on .terraform/modules/services_api_alb.alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
│ 2: required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.s3_bucket.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To
│ proceed, either choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
╷
│ Error: Unsupported Terraform Core version
│
│ on .terraform/modules/services_api_alb.alb.access_logs.this/versions.tf line 2, in terraform:
│ 2: required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.access_logs.module.this (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either choose
│ another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
╷
│ Error: Unsupported Terraform Core version
│
│ on .terraform/modules/services_api_alb.alb.default_target_group_label/versions.tf line 2, in terraform:
│ 2: required_version = ">= 0.12.0, < 0.14.0"
│
│ Module module.services_api_alb.module.alb.module.default_target_group_label (from git::<https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2>) does not support Terraform version 1.0.2. To proceed, either
│ choose another supported Terraform version or update this version constraint. Version constraints are normally set for good reason, so updating the constraint may lead to other errors or unexpected behavior.
╵
post here and we’ll get it promptly reviewed
The versions.tf for v0.65.2 https://github.com/cloudposse/terraform-aws-ecs-web-app/blob/master/versions.tf says
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.34"
    }
  }
}
Which all looks good. What is the source of the services_api_alb module?
it’s
source  = "cloudposse/alb/aws"
version = "0.23.0"
context = module.this.context
https://registry.terraform.io/modules/cloudposse/alb/aws/latest is 0.35.3, so you are quite a way behind.
For some reason, the EC2 instance does not have a public DNS assigned, even though it’s part of the public subnet? What could be the cause?
During the creation of the resource, did you specify to attach a public IP? Even if the subnet is public, if the default setting for the subnet is to NOT assign a public IP, instances won’t get one. (AFAIK)
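For reference, a sketch of the two places this can be set (resource names are hypothetical); a public DNS name additionally requires DNS hostnames to be enabled on the VPC:

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true   # subnet-level default
}

resource "aws_instance" "example" {
  ami                         = var.ami_id   # hypothetical
  instance_type               = "t3.micro"
  subnet_id                   = aws_subnet.public.id
  associate_public_ip_address = true         # per-instance override
}

# The VPC also needs:
#   enable_dns_support   = true
#   enable_dns_hostnames = true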
Yeah i was under the impression that on was the default. Thanks, i think that solved it
2021-09-15
v1.0.7 1.0.7 (September 15, 2021) BUG FIXES: core: Remove check for computed attributes which is no longer valid with optional structural attributes (#29563) core: Prevent object types with optional attributes from being instantiated as concrete values, which can lead to failures in type comparison…
Version 1.0.7
The config is already validated, and does not need to be checked again in AssertPlanValid, so we can just remove the check which conflicts with the new optional nested attribute types. Add some mor…
2021-09-16
Fellas, Is there a way to add a condition when adding S3 bucket/folder level permissions here at: https://github.com/cloudposse/terraform-aws-iam-s3-user
For example, I want to grant a statement like this:
{
"Sid": "AllowStatement3",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET"],
"Condition":{"StringLike":{"s3:prefix":["media/*"]}}
}
2021-09-17
spamming channels: https://tech.loveholidays.com/enforcing-best-practice-on-self-serve-infrastructure-with-terraform-atlantis-and-policy-as-code-911f4f8c3e00
Here at loveholidays we are heavily dependant on Terraform. All of our Google Cloud infrastructure is managed using Terraform, along with a…
i really wish it were easier to extend atlantis to additional source code hosts. would be fantastic if it worked with codecommit
as in one multiple atlantis one repo?
no, just as in developing the code to support new source code hosts. last time i looked, it was a bit of a spaghetti mess touching all sorts of core internal parts
2021-09-18
Hello Guys, I’m trying to create parameters in AWS SSM- any ideas/solution will be much appreciated.
data "aws_ssm_parameter" "rds_master_password" {
name = "/grafana/GF_RDS_MASTER_PASSWORD"
with_decryption = "true"
}
resource "aws_ssm_parameter" "rds_master_password" {
name = "/grafana/GF_RDS_MASTER_PASSWORD"
description = "The parameter description"
type = "SecureString"
value = data.aws_ssm_parameter.rds_master_password.value
}
resource "aws_ssm_parameter" "GF_SERVER_ROOT_URL" {
name = "/grafana/GF_SERVER_ROOT_URL"
type = "String"
value = "https://${var.dns_name}"
}
resource "aws_ssm_parameter" "GF_LOG_LEVEL" {
name = "/grafana/GF_LOG_LEVEL"
type = "String"
value = "INFO"
}
resource "aws_ssm_parameter" "GF_INSTALL_PLUGINS" {
name = "/grafana/GF_INSTALL_PLUGINS"
type = "String"
value = "grafana-worldmap-panel,grafana-clock-panel,jdbranham-diagram-panel,natel-plotly-panel"
}
resource "aws_ssm_parameter" "GF_DATABASE_USER" {
name = "/grafana/GF_DATABASE_USER"
type = "String"
value = "root"
}
resource "aws_ssm_parameter" "GF_DATABASE_TYPE" {
name = "/grafana/GF_DATABASE_TYPE"
type = "String"
value = "mysql"
}
resource "aws_ssm_parameter" "GF_DATABASE_HOST" {
name = "/grafana/GF_DATABASE_HOST"
type = "String"
value = "${aws_rds_cluster.grafana.endpoint}:3306"
}
Error: Error describing SSM parameter (/grafana/GF_RDS_MASTER_PASSWORD): ParameterNotFound:
│
│ with module.Grafana_terraform.data.aws_ssm_parameter.rds_master_password,
│ on Grafana_terraform/ssm.tf line 1, in data "aws_ssm_parameter" "rds_master_password":
│ 1: data "aws_ssm_parameter" "rds_master_password" {
│
Looks like you don’t have the parameter created and so your data source is failing to pull it
@RB thanks. Sorted now.
@Ozzy Aluyi you have a conflict with the data and resource for the parameter named rds_master_password
On line 1 you are trying to read it as data, and on line 5 you are trying to create it as a resource.
If it’s already created and you just want to read it, remove the resource "aws_ssm_parameter" "rds_master_password" {… section.
If you are trying to create it, remove the data "aws_ssm_parameter" "rds_master_password" {... section.
Of course, if you are reading it, you will need to find a way to get the value into place. In summary, you can’t have a data source that reads the resource it is meant to populate.
If you are trying to create and store a password, consider using the random_password resource and storing the result of that in the parameter.
https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/password
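A minimal sketch of that approach, replacing the self-referencing data source (lengths and flags are illustrative):

resource "random_password" "rds_master" {
  length  = 32
  special = false
}

resource "aws_ssm_parameter" "rds_master_password" {
  name        = "/grafana/GF_RDS_MASTER_PASSWORD"
  description = "Grafana RDS master password"
  type        = "SecureString"
  value       = random_password.rds_master.result
}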
hey guys, i am a little confused about what dns_gbl_delegated refers to in eks-iam
https://github.com/cloudposse/terraform-aws-components/blob/master/modules/eks-iam/tfstate.tf#L51
Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/tfstate.tf at master · cloudposse/terraform-aws-components
is delegated-dns supposed to be added to the global env as well as regional?
i modified the remote state for dns_gbl_delegated to point to primary-dns – not sure if that’s going to cause any issues later on
@managedkaos thanks for the solution. the random_password approach will make more sense.
2021-09-19
Would like some assistance with the following error with a Fargate task. It seems like the stuff inside container_definitions isn’t being registered at all… I’m getting all sorts of errors saying args are not found when they are clearly within the template. EDIT: terraform state show data.template_file.main shows all the right args in the JSON.
Fargate only supports network mode 'awsvpc'.
Fargate requires that 'cpu' be defined at the task level.
resource "aws_ecs_task_definition" "main" {
family = "${var.app_name}-app"
execution_role_arn = aws_iam_role.ecs_task_execution_role.arn
#network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
#cpu = var.fargate_cpu
#memory = var.fargate_memory
container_definitions = data.template_file.main.rendered
}
data "template_file" "main" {
template = file("./templates/ecs/main_app.json.tpl")
vars = {
app_name = var.app_name
app_image = var.app_image
container_port = var.container_port
app_port = var.app_port
fargate_cpu = var.fargate_cpu
fargate_memory = var.fargate_memory
aws_region = var.aws_region
}
}
# ./templates/ecs/main_app.json.tpl
[
{
"name": "${app_name}",
"image": "${app_image}",
"cpu": ${fargate_cpu},
"memory": ${fargate_memory},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/${app_name}",
"awslogs-region": "${aws_region}",
"awslogs-stream-prefix": "ecs"
}
},
"portMappings": [
{
"containerPort": ${container_port},
"hostPort": ${app_port}
}
]
}
]
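For what it’s worth, the two errors quoted above point at the commented-out arguments: Fargate needs network_mode = "awsvpc" and task-level cpu/memory. A sketch of the task definition with those lines restored:

resource "aws_ecs_task_definition" "main" {
  family                   = "${var.app_name}-app"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"          # Fargate only supports awsvpc
  cpu                      = var.fargate_cpu   # cpu/memory must be set at the task level for Fargate
  memory                   = var.fargate_memory
  container_definitions    = data.template_file.main.rendered
}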
Try using this module
https://github.com/cloudposse/terraform-aws-ecs-container-definition
Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource - GitHub - cloudposse/terraform-aws-ecs-container-…
Hello folks, I’m trying to use the AWS MQ Module (https://github.com/cloudposse/terraform-aws-mq-broker) but it looks to have an issue on the Infrastructure Security benchmark. However, I can’t see what kind of issue it is on the GitHub page. Can anyone explain this for me?
Terraform module for provisioning an AmazonMQ broker - GitHub - cloudposse/terraform-aws-mq-broker: Terraform module for provisioning an AmazonMQ broker
2021-09-20
Fellas, is there a way to create multiple users with the module https://github.com/cloudposse/terraform-aws-iam-s3-user? I tried to add a variable for creating multiple users, but it’s not picking them up as two users; instead it’s combining them into one: https://github.com/cloudposse/terraform-aws-iam-s3-user/blob/master/examples/complete/fixtures.us-west-1.tfvars#L9
It ended up doing like this:
~ user = "user1" -> "user1user2" # forces replacement
This is the tfvars entry
iam_user_name = "user1, user2"
Any clue here fellas @channel
can you reference that module more than once - once for each user ?
I basically pulled this module into our gitlab and referred it as a child module from my parent module.
Not sure how I can add one more reference within the same parent module, Ronak
If I add on multiple references in my parent module like this
# CloudPosse Module for creating AWS IAM User along with S3 Permissions
module "aws-iam-s3-user" {
  count  = var.aws-iam-s3-user_enabled ? 1 : 0
  source = "[email protected]:qomplx/engineering/infrastructure/terraform-modules/terraform-cloudposse-aws-iam-s3-user.git"

  name         = var.iam_user_name
  s3_actions   = var.s3_actions
  s3_resources = var.s3_resources
}
It will be complicated when the time comes for 50 - 100 users.
you would need to do a for_each
like this
# CloudPosse Module for creating AWS IAM User along with S3 Permissions
module "aws_iam_s3_user" {
  for_each = var.aws-iam-s3-user_enabled ? toset(var.users) : 0
  source   = "cloudposse/iam-s3-user/aws"
  version  = "0.15.3"

  name         = each.key
  s3_actions   = var.s3_actions
  s3_resources = var.s3_resources
}
then you can pass in var.users = ["user1", "user2"]
something like that would work
Sure, let me try this option…
note: for best practices
• i renamed the module name so it uses underscores instead of dashes
• i set the source and version so its pinned
Understood…
Testing this for_each method…
It ended up with this output Ronak
│ Error: Invalid value for input variable
209│
210│ on ./terraform.tfvars line 34:
211│ 34: users = ["user1", "user2"]
212│
213│ The given value is not valid for variable "users": string required.
You need to create a variable
Actually hang on
Or a local
I actually created a variable for users and passed the values after your change
And I ended up with The given value is not valid for variable "users": string required.
Change the variable type to list
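i.e. something like this sketch:

variable "users" {
  type    = list(string)
  default = []
}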
aah ok ok
one sec
It might help to do some terraform tutorials to pick up the basics
Yeah, I am not an expert in TF… Learning as I go. And in this case, using a set ([]) instead of 1 for the for_each condition and a few other changes worked
Thanks Ronak for the inputs here. Appreceate it
Hi Ronak, can I bother you for one more question I am having here while dealing with this module?
Sure then…
My question here is, since I got the creation of multiple users sorted out, I am trying to give permissions for an individual user for a specific S3 resource. But the problem here is when I give multiple S3 resources (under s3_resources), all users are getting the permissions applied for all S3 resources by default.
In my case, basically I want to target an individual user for an individual S3 resource.
I am missing the logic on how to get to this objective here Ronak using this module…
couldn’t you use something like module.aws_iam_s3_user.user-1 to reference a specific user?
or perhaps im misunderstanding
Basically, this is how my setup is: main.tf
iam_user_name = local.iam_user_name
s3_actions = var.s3_actions
s3_resources = local.s3_resources
aws-iam-s3-user_enabled = var.aws-iam-s3-user_enabled
locals {
s3_resources = ["S3 bucket 1", "S3 bucket 2"]
iam_user_name = ["IAM User 1", "IAM User 2"]
}
And the tfvars file has the S3:actions (get object)
So what’s happening here is, all IAM users are getting permissions on all S3 buckets. So, I am basically trying to map IAM user 1 to only S3 bucket 1 and IAM user 2 to S3 bucket 2, and so on….
In the above code, I need to link each iam_user_name with a specific s3_resources
i think you may want this zipmap function https://www.terraform.io/docs/language/functions/zipmap.html
The zipmap function constructs a map from a list of keys and a corresponding list of values.
zipmap(local.iam_user_name, local.s3_resources)
that will create a mapping of the user to the s3 resource
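A sketch of how that could look, pairing each user with exactly one bucket (this assumes both lists are the same length, and reuses the module source and version suggested earlier in the thread):

locals {
  # { "IAM User 1" = "S3 bucket 1", "IAM User 2" = "S3 bucket 2", ... }
  user_buckets = zipmap(local.iam_user_name, local.s3_resources)
}

module "aws_iam_s3_user" {
  for_each = local.user_buckets
  source   = "cloudposse/iam-s3-user/aws"
  version  = "0.15.3"

  name         = each.key      # the user
  s3_actions   = var.s3_actions
  s3_resources = [each.value]  # only that user's bucket
}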
Ok, I am gonna try to work with this zipmap function and will let you know if I find a solution
Thanks again Ronak
I used a key/value pair to match the iam user and s3 buckets Ronak….
Awesome!
Hi, all. I’m trying to use cloudposse/terraform-aws-cloudfront-s3-cdn in a module with an existing origin bucket managed in a higher level block using cloudposse/terraform-aws-s3-bucket. I’m getting a continual change cycle where the CDN module sets the origin bucket policy, but then the S3 module goes in and wants to re-write the policy. I’m not sure how to address this. Is there a way to get the S3 module to ignore_changes on the bucket policy, or pass in the CDN OAI policy bits so that they’re not stomped on by S3 module runs?
FYI, I addressed this via copying out the bucket policy and hard coding it into the s3-bucket module. This is exceptionally gross, but it lets my applys proceed.
:wave: Anyone know if it’s possible to ignore_changes on an attribute in a dynamic block? Doesn’t seem so.
2021-09-21
Anyone building self-hosted GitHub Action Runners using terraform? I found this module, which looks pretty reasonable… https://github.com/philips-labs/terraform-aws-github-runner
Terraform module for scalable GitHub action runners on AWS - GitHub - philips-labs/terraform-aws-github-runner: Terraform module for scalable GitHub action runners on AWS
Yes, I’ve come across this one. It’s very nice!
We use a similar but smaller one at cloudposse
https://github.com/cloudposse/terraform-aws-components/tree/master/modules/github-runners
Opinionated, self-contained Terraform root modules that each solve one, specific problem - terraform-aws-components/modules/github-runners at master · cloudposse/terraform-aws-components
oh nice! in case you didn’t see it, support for ephemeral (one-time) runners was just released, https://github.blog/changelog/2021-09-20-github-actions-ephemeral-self-hosted-runners-new-webhooks-for-auto-scaling/
GitHub Actions: Ephemeral self-hosted runners & new webhooks for auto-scaling
What is considered a “best practice” when dealing with many projects that are mostly similar in setup / configuration? A lot of our projects share ~90-95% of the same setup approach (e.g. VPC + ALB + ECS + RDS + Redis + SES + ACM + SSM) and only differ slightly (some have no Redis or no RDS, or additional parameters assigned to the ECS instance).
Each project currently has a separate Git repository, and when new infrastructure needs to be built, the current approach is to copy in all the Terraform code from one of the other projects and modify it accordingly (mostly replacing vars, adding some additional ECS Secrets / Parameters, etc.). This is fairly quick to do and is also flexible, as we can simply add or remove things we do (not) need.
But it doesn’t feel like the most optimal approach. It’s also somewhat of a PITA if a change has to be made across all projects.
A few idea’s that spring to mind to address this:
- Create a Terraform “app” module where we can toggle components using variables (e.g.
redis_enable = false
), use this as only module and add in optional custom extra’s (e.g. a project that needs a service not covered by theapp
module) - Use
Atmos
(but this appears to be pretty much the same way by copy/pasting) I’m eager to learn how others are doing this.
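A minimal sketch of the toggle idea from the list above, assuming a hypothetical redis_enabled flag gating a local child module; the endpoint output name is also an assumption:

variable "redis_enabled" {
  type        = bool
  description = "Whether this project gets the Redis component"
  default     = true
}

# the child module only exists when the toggle is on
module "redis" {
  source = "./modules/redis"
  count  = var.redis_enabled ? 1 : 0

  # ...component inputs...
}

# downstream references have to tolerate the component being absent;
# .endpoint is a hypothetical output of the child module
output "redis_endpoint" {
  value = try(module.redis[0].endpoint, null)
}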
+1 for Atmos
Problem with the single App module is that you’ll run into your root module being too large, which can be a huge pain due to large blast radius and a host of other annoying problems.
I’d suggest atmos and the SweetOps workflow as well. It is copying + pasting using vendir
, so it follows a defined pattern and ensures that you don’t end up drifting your components (root modules) from one another. You’ll need to make that a policy at your org, but that shouldn’t be too hard: “No one updates components locally — updates only go upstream and then they’re updated in the consuming project via vendir”.
You could also look into potentially consolidating all your git repos and then each of your environments / projects just becomes another Stack file.
Yeah, I’ve stayed away from a single app module but have a similar issue. Lots of same but slightly different modules to compose a service. One way could be to have a “template” terraform repo that creates the real service repo based on some vars. Not sure how I feel about this. Plenty of tools out there for templating same but different services
Thanks @Matt Gowie!
The root module being too large is definitely a problem.
Yesterday, before I asked this question, I was experimenting with building one. I wanted everything to be toggle-able (ECS on/off, Redis on/off, ACM cert on/off, RDS on/off, etc.), but even after tinkering on it for ~2 hours it already became quite complex, with a large number of enabled/count/try() conditionals.
Looking into Atmos
has been on my backlog ever since its demo in Office Hours a few months ago. Good excuse to spend some time on that now I guess :-)
I did find https://github.com/cloudposse/atmos/blob/master/example/vendir.yml and https://github.com/cloudposse/terraform-aws-components which seems like a good starting point.
Universal Tool for DevOps and Cloud Automation (works with terraform, helm, helmfile, istioctl, etc) - atmos/vendir.yml at master · cloudposse/atmos
Opinionated, self-contained Terraform root modules that each solve one, specific problem - GitHub - cloudposse/terraform-aws-components: Opinionated, self-contained Terraform root modules that each…
@Frank Start with https://docs.cloudposse.com/ — I wrote those up earlier this year and they cover a good intro of what you can do and how it all works out. Would be great to hear any feedback as well!
Keep in mind that with atmos you get import
functionality, so you can define the stack and then import it to rapidly deploy. However, there’s a lot of other architectural decisions we make in how we design our modules/components that ensures it works very well for us.
Excellent, thanks. It’s quite a shift from how we’re doing things right now, but it’s a better approach for maintaining many projects. And of course being able to onboard new customers/environments even faster.
Yes, agreed - it’s a shift that may require some juggling.
has anyone been able to get the terraform-aws-ecs-web-app to work with for_each
it seems to be cranky with the embedded provider configuration in the github-webhooks module. https://github.com/cloudposse/terraform-github-repository-webhooks/blob/master/main.tf
Terraform module to provision webhooks on a set of GitHub repositories - terraform-github-repository-webhooks/main.tf at master · cloudposse/terraform-github-repository-webhooks
I have been a contributor for that module
The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.
╷
│ Error: Module module.apps.module.web_app.module.ecs_codepipeline.module.github_webhooks contains provider configuration
│
│ Providers cannot be configured within modules using count, for_each or depends_on.
yeah i think im in there somewhere also
there was a conversation about moving the provider out of the module
would be bueno, can you link me to that ?
I mean internally
so the way I used it is that I added the provider in my module and that will take precedence over the cloudposse module
but will that get rid of the error… the provider is still there
the reason it was there was so you could use the anonymous API or credentials passed through the GITHUB_* environment variables, which the provider can read
right, would be nice if it just needed to be defined in the root
send a PR, I can approve it
yeah i think it fundamentally changes the codepipeline module
not sure anyone would be too happy with that change
it is a pretty bad practice to set the provider in a submodule
what I did was to use the ecs-web-app module but I set the github stuff outside of that module
the access for codepipeline can be done after the fact and it will still work
yeah, none of that will work with a for_each loop
this is what i need https://github.com/hashicorp/terraform/issues/24476
Use-cases I'd like to be able to provision the same set of resources in multiple regions a for_each on a module. However, looping over providers (which are tied to regions) is currently not sup…
yes
Following up on this question, I’m having the same issue, and wondering if anyone has a workaround. I’m using ecs-web-app module, that calls codepipeline child module, which in turn calls github webhooks child module. I get the following error
│ Error: Module module.ecs_web_app.module.ecs_codepipeline.module.github_webhooks contains provider configuration
│
│ Providers cannot be configured within modules using count, for_each or
│ depends_on.
I’m using codestar connections so would not need the webhooks module at all. Any way to disable the github webhooks module from ecs-web-app? My only idea right now is to have all these modules in a local source and modify them to get rid of the validation error.
that module is opinionated and uses github so you could disable the webhook and do it yourself
@András Sándor i forked it https://github.com/itsacloudlife/terraform-aws-ecs-web-app-no-pipeline
Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.
I have used that module with no codepipeline before, but if you want to support other products, PRs are welcome
im not sure if they updated it but the github provider in the codepipeline sub module is what busted it
even when i disable the sub-module it’s still cranky
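For reference, the shape Terraform expects before for_each can work here: the provider configured only in the root and handed down explicitly, which is why the embedded block has to move out of the submodule. Module path and variable names below are hypothetical:

# root module: providers may only be configured here
provider "github" {
  owner = var.github_owner # token is read from the GITHUB_TOKEN env var
}

# a fork of the module with the embedded provider block removed
module "web_app" {
  source   = "./modules/ecs-web-app"
  for_each = toset(var.app_names)

  # pass the root configuration down instead of configuring it inside the module
  providers = {
    github = github
  }

  name = each.key
}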
2021-09-22
any good resources to learn terraform for gcp?
Hi everyone :hand:,
I have weird behavior with s3 terraform resource. Specifically with this aws_s3_bucket_object
.
I have a local list of objects containing .csv values, and I need to create an S3 object for each element of the list.
This is my terraform code:
locals {
foo_values = [
{
"name" = "foo_a"
"content" = <<-EOT
var_1,var_2,var_3,var_4
value_1,value_2,value_3,value_4
EOT
},
{
"name" = "foo_b"
"content" = <<-EOT
var_1,var_2,var_3,var_4
value_1,value_2,value_3,value_4
EOT
}
]
}
aws_s3_bucket_object
resource "aws_s3_bucket_object" "ob" {
bucket = aws_s3_bucket.b.id
count = length(local.foo_values)
key = "${local.foo_values[count.index].name}.csv"
content = local.foo_values[count.index].content
content_type = "text/csv"
}
When I apply it locally, all works fine, and when I then run terraform plan it gives me a No changes. Infrastructure is up-to-date
message
My coworkers tried to make a terraform plan and they got the same
message.
But when I launch a terraform plan in a CodeBuild container, with the same Terraform version and with no code changes, the terraform plan gives me these changes to apply.
The attr content
of aws_s3_bucket_object
shows a diff against the Terraform state even though the code has not been modified. And this only appears when terraform plan runs in the CodeBuild context. If run locally all is OK.
Does anyone know what I’m doing wrong?
I am using terraform version = 0.12.29
Thanks!!
looking for some advice if possible … i have a go binary called rds-to-s3-exporter
it needs to run as a lambda in each account, I have two options here
- Add the binary as a zip file to a core S3 bucket
- Push a docker image to a core ECR registry
However, in both cases I need to make changes to the bucket policy or registry policy when we create a new account.
does anyone have a nice way to do this?
Are all accounts in the same organization?
run the lambda centrally, using assume-role to gain access to other accounts?
as part of the new account process, create an s3 bucket in the account, push the binary there, and create the lambda in the account?
my other option is using a gitlab release for the binary and then using a local provisioner in the module to get the zip file
this way i don’t need to worry about how many accounts we create as this will just work regardless
a gitlab release… that would be an interesting provider datasource… have the provider retrieve the binary instead of a local provisioner…
For speed I’m thinking of just using a local provisioner; trying to work out how to obtain it though, as the glab binary requires interaction
v1.1.0-alpha20210922 1.1.0 (Unreleased) UPGRADE NOTES: Terraform on macOS now requires macOS 10.13 High Sierra or later; Older macOS versions are no longer supported. The terraform graph command no longer supports -type=validate and -type=eval options. The validate graph is always the same as the plan graph anyway, and the “eval” graph was just an implementation detail of the terraform console command. The default behavior of creating a plan graph should be a reasonable replacement for both of the removed graph…
Question: Curious to know if someone has a solution to bootstrap RDS Postgres for IAM authentication, specifically creating and granting the IAM user in the database?
for more context: https://aws.amazon.com/premiumsupport/knowledge-center/rds-postgresql-connect-using-iam/
Can you explain what the gap is? Technically, you set iam_database_authentication_enabled
to true on the aws_db_instance
On the IAM role front, this is what we do:
# IAM Policy: allow DB auth
resource "aws_iam_role_policy" "db-auth" {
count = length(local.psql_users)
name = "db-auth"
role = element(local.roles, count.index)
policy = <<-EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DBpermissions",
"Effect": "Allow",
"Action": [
"rds-db:connect"
],
"Resource": [
"arn:aws:rds-db:${var.aws_region}:${var.aws_account}:dbuser:${module.rds.rds_resource_id}/${element(local.psql_users, count.index)}"
]
}
]
}
EOF
}
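For completeness, the instance-side switch mentioned above is a single attribute on aws_db_instance; a minimal sketch with placeholder values (var.master_password is assumed):

resource "aws_db_instance" "this" {
  identifier        = "example-postgres"
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 20
  name              = "app"
  username          = "app_admin"
  password          = var.master_password

  # the only Terraform-side switch; the CREATE USER / GRANT rds_iam
  # inside the database still has to happen separately
  iam_database_authentication_enabled = true

  skip_final_snapshot = true
}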
@Yoni Leitersdorf (Indeni Cloudrail) maybe I missed the ease of it but how are you populating the local user?
CREATE USER iamuser WITH LOGIN;
GRANT rds_iam TO iamuser;
currently teams are doing this manually. The local-exec provisioner requires connectivity and access which our GitLab runners do not have. Wondering what others do before I dive into this.
Ah yes, we use the local-exec to do it.
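A rough sketch of that local-exec approach, assuming the runner can reach the database and has psql installed; db_host, the master credentials, and local.psql_users are placeholders borrowed from the snippets above:

variable "db_host" {
  type        = string
  description = "Hostname of the RDS instance (placeholder)"
}

resource "null_resource" "db_iam_users" {
  # re-run whenever the list of IAM-authenticated users changes
  triggers = {
    users = join(",", local.psql_users)
  }

  provisioner "local-exec" {
    command = <<-EOT
      for u in ${join(" ", local.psql_users)}; do
        psql "host=${var.db_host} user=${var.master_username} dbname=postgres" \
          -c "CREATE USER $u WITH LOGIN;" \
          -c "GRANT rds_iam TO $u;"
      done
    EOT

    environment = {
      PGPASSWORD = var.master_password
    }
  }
}

Note that CREATE USER is not idempotent, so a re-run against existing users will error; in practice you would guard it or tolerate the failure.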
How are folks dealing with the braindeadedness that is TF 0.14+ .terraform.lock.hcl
files ? We have a pretty large set of Terraform roles/modules, and boy what a pain to manage & upgrade a zillion different .terraform.lock.hcl
files..
using terragrunt, i just delete it using hooks, but also add it to .gitignore…
before_hook "terraform_lock" {
commands = ["init"]
execute = ["rm", "-f", ".terraform.lock.hcl"]
}
after_hook "terraform_lock" {
commands = concat(get_terraform_commands_that_need_locking(), ["init"])
execute = ["rm", "-f", "${get_terragrunt_dir()}/.terraform.lock.hcl"]
}
Ha, yes, @Gabe is trying to get me to just git ignore them …
I wish they had some sort of hierarchical method like .gitconfig
so I could populate the list once per git repository…
for CI, we do also already zip up a pinned terraform binary and provider cache, and host the zip. then before execution, retrieve and extract the bundle. so not too much concern about the supply chain risks that the lock is trying to protect you from…
@mrwacky like **/.terraform.lock.hcl
?
How do you manage that Loren?
you mean the bundle @Alex Jurkiewicz? currently still using terraform-bundle. eventually we’ll switch to terraform providers mirror
. wrapped in a make target
@Erik Osterman (Cloud Posse) - yup
I have worked up a disgusting shell script to regenerate all of them as quickly as possible.
I mean – I wish Terraform would walk up the filesystem tree to find .terraform.lock.hcl
similar to how git
searches for .gitignore
files.. Then I could have as few as 1 lockfile per repo
Hi All, I’ve started using the following module in one of my customers as a quickstart. We are making some modifications to meet our requirements. We’ve added the LICENSE
file but I can’t find the NOTICE
file as stated in the README.md
file. By not having a NOTICE
file, I believe we need to add a header to our *.tf
files, correct?
https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
Not quite sure what you are thinking of, but the Apache Software Licence is permissive. If you fork the module, you can do whatever you want, except strip the CloudPosse copyright
I was under the impression that we must keep the LICENSE
file and add the CloudPosse copyright as header
for every file
do you want to relicense your fork? What you describe might be necessary in that case. But the simplest approach is to fork and change nothing about the license, commit your changes on top of the existing files
Your changes are too custom to send back as a PR to the cloudposse version?
2021-09-23
Hello, anybody hitting the issue with multiple MX records on https://github.com/cloudposse/terraform-cloudflare-zone, getting stopped by duplicate object errors?
Terraform module to provision a CloudFlare zone with DNS records, Argo, Firewall filters and rules - GitHub - cloudposse/terraform-cloudflare-zone: Terraform module to provision a CloudFlare zone w…
i think the object key may need to pull the priority into the key id to differentiate
i changed the local.records to pull it in … bit hacky. i got lost down the rabbit hole with the if logic and formatting when record.priority was present, so went with try() instead; seems to work, cloudflare must throw it away if it doesn’t make sense
I think use of try
here makes sense. You could also do something with lookup
and coalesce
, but try
seems like a good simple fit
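Roughly what that key change looks like, assuming the module builds its map of records from a var.records list; the exact local inside the module may differ:

locals {
  # including the priority (when present) in the key keeps two MX records
  # for the same name from colliding on the same map key
  records = {
    for record in var.records :
    format("%s-%s-%s", record.name, record.type, try(record.priority, "0")) => record
  }
}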
i think i’d prefer to have it check for record.priority and create the record only if it exists, rather than just blat in a default and send it to cloudflare and hope they don’t stop taking it if it’s not appropriate down the track. what do you reckon?
I don’t know this module, from what I can see of your change you only changed the key used by items in local.records
. But it seems you are now talking about changing the records this module creates also? I can’t comment on that, I don’t know enough
Some records have a priority and some don’t; the try() will throw in a default value and it will be sent to Cloudflare. Cloudflare takes it and probably just doesn’t do anything with it for that record type.
i’ll play with it and see how it goes
2021-09-24
Hi Folks, I’m using your s3-website module, but whenever I try to run terraform plan
the data source data "aws_iam_policy_document" "default"
gets refreshed with different output and it messes up my plan, which should produce “no changes”. I’m on latest terraform, the module version is 0.17.1
. In the thread I’m attaching what it produces.
data "aws_iam_policy_document" "default" {
~ id = "3597815271" -> (known after apply)
~ json = jsonencode(
{
- Statement = [
- {
- Action = "s3:GetObject"
- Effect = "Allow"
- Principal = {
- AWS = "*"
}
- Resource = "arn:aws:s3:::sandbox.example.com/*"
- Sid = ""
},
]
- Version = "2012-10-17"
}
) -> (known after apply)
- version = "2012-10-17" -> null
~ statement {
- effect = "Allow" -> null
- not_actions = [] -> null
- not_resources = [] -> null
# (2 unchanged attributes hidden)
# (1 unchanged block hidden)
}
}
and that’s how I invoke it:
module "this_s3_website" {
source = "cloudposse/s3-website/aws"
version = "0.17.1"
context = module.this.context
logs_enabled = true
encryption_enabled = false
hostname = var.hostname
parent_zone_id = var.parent_zone_id
}
I did some troubleshooting and the data "aws_iam_policy_document"
gets “rebuilt” on every terraform plan
only when I have
provider "aws" {
default_tags {
tags = ...
}
}
If I remove it, the plan is correct - No changes. Your infrastructure matches the configuration.
Is it something to raise a bug for?
that’s a bug with the provider’s default_tags parameter
Hi all, in our terraform, we got environments
and we differentiate between the different envs by using different variables.
So far so good, but what happens when we don’t want the terraform code to be exactly the same in all envs?
For example, in dev
i want to do waf filtering by ip’s, while in staging
i need to combine ip’s & urls, and this changes the terraform code, which of course then tries to apply everywhere and not only in one specific env.
Is there any way to make some programmatic intelligence behind the tf like
if env = dev then run code A
elseif env = stage run code B
elseif env = prod run code C
thanks.
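Terraform has no if/else around blocks of resources, but per-environment counts get you most of the way; a minimal sketch, assuming an environment variable and a hypothetical allowed_ips input:

variable "environment" {
  type = string
}

# "code A": IP filtering only, created in dev
resource "aws_wafv2_ip_set" "dev_only" {
  count              = var.environment == "dev" ? 1 : 0
  name               = "dev-ip-allowlist"
  scope              = "REGIONAL"
  ip_address_version = "IPV4"
  addresses          = var.allowed_ips
}

# "code B": a different shape of rule, created in staging
resource "aws_wafv2_regex_pattern_set" "staging_only" {
  count = var.environment == "staging" ? 1 : 0
  name  = "staging-url-patterns"
  scope = "REGIONAL"

  regular_expression {
    regex_string = "^/api/"
  }
}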
Could be a nifty tool … https://github.com/im2nguyen/rover
Interactive Terraform visualization. State and configuration explorer. - GitHub - im2nguyen/rover: Interactive Terraform visualization. State and configuration explorer.
nice, does it support multiple state files? a replacement for terraboard?
Hi, it looks like the desired_size variable from the eks-node-group module is not working.
Anyone else going through this?
terraform-aws-eks-node-group - Version 0.26.0 Terraform v0.14.11
the input var is passed to the local ng
map which is then passed in as scaling_config
in the aws_eks_node_group
resource
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Initial desired number of worker nodes (external changes ignored)
Does the desired_size variable only take effect when we create the nodes? After the nodes are created, this variable no longer has any effect. Is that right?
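That matches the variable description quoted above: desired_size only seeds the initial value, and later changes are ignored so an autoscaler can own it. Roughly the shape of the underlying resource (a sketch, not the module’s exact code):

resource "aws_eks_node_group" "example" {
  cluster_name    = var.cluster_name
  node_group_name = "example"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.subnet_ids

  scaling_config {
    desired_size = var.desired_size # applied at creation time only
    min_size     = var.min_size
    max_size     = var.max_size
  }

  lifecycle {
    # with this in place, later edits to desired_size (whether made in
    # Terraform or by the cluster autoscaler) never show up as a diff
    ignore_changes = [scaling_config[0].desired_size]
  }
}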
is there a way to fire a cloudwatch event ad-hoc?
change the cron to run every 10 min and check
you can do an aws ecs run-task
command i believe
i am getting this …
There was an error while saving rule cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1.
Details: 1 validation error detected: Value 'AWSEvents_cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1_terraform-20210924172025276000000001' at 'statementId' failed to satisfy constraint: Member must have length less than or equal to 100.
✗ echo 'AWSEvents_cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1_terraform-20210924172025276000000001' | wc -c
107
you need to reduce the number of chars of that name
my rule is called [cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1](https://eu-west-1.console.aws.amazon.com/cloudwatch/home?region=eu-west-1#rules:name=cron-re-dev-pe-sbx-lambda-monthly-snapshots-to-s3-eu-west-1)
it looks like it’s prefixing AWSEvents_
to it and suffixing it with _terraform-20210924172025276000000001
which increases your name which goes over the max chars
are you using a name_prefix
instead of a name
argument on the resource ?
name
resource "aws_cloudwatch_event_rule" "weekly" {
count = var.schedule == "weekly" ? 1 : 0
name = "cron-${var.database_name}-lambda-weekly-snapshots-to-s3-${data.aws_region.current.name}"
description = "Cron to start the lambda that exports ${var.database_name} snapshots to S3 every Monday at 10am."
schedule_expression = "cron(0 10 ? * MON *)"
}
make the name less descriptive and rely on the description field…?
you could put two rules, one that triggers one time now…
it’s weird that it was created fine
it’s when i tried to change it that it didn’t like it
i find it odd that it’s using that random terraform suffix without you using a name_prefix
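One way to stay under the limit while keeping the naming scheme: a sketch capping the generated name with substr. The 50-character cap is an assumption sized so the AWSEvents_ prefix and the terraform suffix shown in the error stay under 100 characters:

locals {
  # 50 chars keeps "AWSEvents_<name>_terraform-<serial>" (the generated
  # statement id from the error above) under its 100-character limit
  weekly_rule_name = substr(
    "cron-${var.database_name}-lambda-weekly-snapshots-to-s3-${data.aws_region.current.name}",
    0,
    50
  )
}

resource "aws_cloudwatch_event_rule" "weekly" {
  count               = var.schedule == "weekly" ? 1 : 0
  name                = local.weekly_rule_name
  description         = "Cron to start the lambda that exports ${var.database_name} snapshots to S3 every Monday at 10am."
  schedule_expression = "cron(0 10 ? * MON *)"
}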
is there a recommended way to alert on a failed lambda invocation?
2021-09-25
SweetOps is no longer using helmfile? Is terraform used instead for k8s/helm? Any issues w/ current API not supported w/ k8s provider, e.g. Ingress?
we’ve been using helm_release recently
https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release
we’ve converted a few of the helm files and haven’t noticed any glaring issues so far
yea, we’re mostly using terraform’s helm provider now natively
Create helm release and common aws resources like an eks iam role - GitHub - cloudposse/terraform-aws-helm-release: Create helm release and common aws resources like an eks iam role
or where we’re deep in with helmfile for backing-services, we’ve started using the helmfile provider for terraform.
for CD, we’re mostly investing in helm + argocd
there’s nothing wrong with helmfile, it’s just we were able to consolidate without giving up too much.
2021-09-26
Hi all, i am trying to use the and_statement
to combine different statements (we need to combine ip filtering with url).
The issue is that from the documentation it is not clear whether the and_statement
block should include inside it the statement
argument, or the opposite, the statement
block should include inside it the and_statement
argument:
I tried several ways of composing the code, can please someone tell me what i am doing wrong?
resource "aws_wafv2_web_acl" "alb_waf" {
name = "ALB-WAF"
description = "ALB"
scope = "REGIONAL"
default_action {
block {}
}
rule {
name = "allow-specific-ips"
priority = 1
action {
allow {}
}
statement {
and_statement {
ip_set_reference_statement {
arn = aws_wafv2_ip_set.ipset.arn
}
regex_pattern_set_reference_statement {
arn = aws_wafv2_regex_pattern_set.staging_regex.arn
}
} # and_statement
} # statement block
error code
Error: Unsupported block type
on main.tf line 56, in resource "aws_wafv2_web_acl" "alb_waf":
56: regex_pattern_set_reference_statement {
Blocks of type "regex_pattern_set_reference_statement" are not expected here.
cc: @Ben Smith (Cloud Posse)
Hi @Almondovar, I agree these docs can be terribly confusing. So it looks like the rule{}
must contain a statement{}
which itself can contain an and_statement
, then within the and_statement
it can contain multiple statements
to join by and.
something like:
resource "aws_wafv2_web_acl" "alb_waf" {
name = "ALB-WAF"
description = "ALB"
scope = "REGIONAL"
default_action {
block {}
}
rule {
name = "allow-specific-ips"
priority = 1
action {
allow {}
}
statement {
and_statement {
statement {
ip_set_reference_statement {
arn = "aws_wafv2_ip_set.ipset.arn"
}
}
statement {
regex_pattern_set_reference_statement {
arn = "aws_wafv2_regex_pattern_set.staging_regex.arn"
text_transformation {
priority = 0
type = ""
}
}
}
}
# and_statement
}
# statement block
visibility_config {
cloudwatch_metrics_enabled = false
metric_name = null
sampled_requests_enabled = false
}
}
visibility_config {
cloudwatch_metrics_enabled = false
metric_name = null
sampled_requests_enabled = false
}
}
Another option for WAF rules would be to create them through AWS Firewall manager under WAF / WAF_v2 Policies
Note the above won’t just work, as the visibility config has to be set properly, but it should at least help with the format of the rules
Try statement -> and_statement-> statement -> ip_set_reference_statement
thank you very much Fizz
2021-09-27
Hi all. I’m not sure if this is the right place but I’m looking for a review for a PR I made to one of the Cloudposse Terraform AWS modules: https://github.com/cloudposse/terraform-aws-cloudtrail-s3-bucket/pull/54
what Allows the policy variable to be used in a useful way to set a custom S3 bucket policy Conditionally the data resource for the unused default bucket policy why Issue #19 outlines why this i…
cc @Ben Smith (Cloud Posse)
Thanks Erik. I see an approval and tests passing. Now it needs merged.
I also have a PR needing review please https://github.com/cloudposse/terraform-aws-rds-cluster/pull/119
Fixes errors like: Error: error creating RDS cluster: InvalidParameterCombination: Cannot specify user name for instance cluster replication cluster
cc @Yonatan Koren
2021-09-28
I am getting a timeout when creating an eks cluster using module 0.43.2.
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
2021-09-29
v1.0.8 1.0.8 (September 29, 2021) BUG FIXES: cli: Check required_version as early as possibly during init so that version incompatibility can be reported before errors about new syntax (#29665) core: Don’t plan to remove orphaned resource instances in refresh-only plans (<a href=”https://github.com/hashicorp/terraform/issues/29640“…
Version 1.0.8
Our current check of required_version happens after parsing the configuration, which may not be possible if new configuration constructs have been added to the language since the declared required_…
When planning in refresh-only mode, we must not remove orphaned resources due to changed count or for_each values from the planned state. This was previously happening because we failed to pass thr…
2021-09-30
AWS just launched a new Cloud Control API ( 1 single CRUD API for AWS resources) and Terraform has a new provider for it (links still WIP I guess?): https://aws.amazon.com/blogs/aws/announcing-aws-cloud-control-api/
Today, I am happy to announce the availability of AWS Cloud Control API a set of common application programming interfaces (APIs) that are designed to make it easy for developers to manage their AWS and third-party services. AWS delivers the broadest and deepest portfolio of cloud services. Builders leverage these to build any type of […]
Link to the new provider: https://github.com/hashicorp/terraform-provider-awscc
Hashicorp blog yet to be posted
yeah that thing seems incredibly aspirational. we’ll see.
docs indicate it depends on cloudformation resource support. i guess it’s nice to have that exposed natively (best of both worlds!), but that support hasn’t always moved quickly…
i’m curious if the awscc provider accepts the same authentication mechanisms and configuration settings as the aws provider… can i pass a profile? a role_arn? credential_process? how do i override endpoints?
(Just saw the previous post about AWS Cloud Control) Being based on CloudFormation, I wonder how much of that bleeds through, esp since CF now supports stop-on-exception and resume-from-last-exception maybe TF interface to AWS Cloud Control API is ok.
i’m figuring we’ll see more multi-provider modules for a bit… things the aws provider does, things the awscc provider does… not loving that idea
registry docs went live recently, answering some of my questions on authentication… https://registry.terraform.io/providers/hashicorp/awscc/latest/docs#authentication
hello folks, i’m very new to devops culture. so i was wondering if docker and terraform do the same job, and why use terraform instead of docker, which has a bigger marketshare. sorry if that’s rough, but i’m just a beginner trying to figure out what is better to learn nowadays
they are orthogonal. learn both.
@lucaslu they are very different:
• with terraform you write code that describes infrastructure resources like load balancers, security groups, virtual private clouds, etc
• with docker you build and run docker “images” in “containers”; an image is like a snapshot of a mini linux environment, and the container is like the computer running that linux
Normally you need both: you will use terraform to set up the resources that will run your docker containers, such as AWS ECS or EKS (or Azure AKS or Google GKE), databases, message queues, etc.
thank u so much for the explanation OliverS
You’re welcome, good luck!
hey ya’ll, not sure if it’s possible, but here’s a tiny problem i’m hitting…
1> Someone deployed some tf stuff from local, state file is stored in s3
2> Presumably this someone got thrown under the bus and didn’t have a chance to push the iac, so assume the iac is lost
3> The actual resources went thru some manual hell… and i would like to restore/revert back to the original state based on the json
is this possible? something to do with tainting…?
terraform will do this automatically
it will make the cloud infra look like what your local code specifies
eg, you have a stack which creates an rds instance of type r5.4xlarge. Someone comes along and changes the instance’s type to t3.small. If you re-ran terraform, it would detect this change and propose changing the size back to r5.4xl
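A minimal sketch of that, with placeholder values: if the checked-in code says db.r5.4xlarge and someone downsizes it in the console, the next plan proposes reverting it.

resource "aws_db_instance" "reporting" {
  identifier          = "reporting"
  engine              = "postgres"
  instance_class      = "db.r5.4xlarge" # what the code says it should be
  allocated_storage   = 100
  username            = "app_admin"
  password            = var.master_password
  skip_final_snapshot = true
}

# after a manual downsize to db.t3.small, terraform plan reports:
#   ~ instance_class = "db.t3.small" -> "db.r5.4xlarge"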
hmm maybe u missed the point where i dont have the actual tf code
i only have the state file
still doable u think?
oh. that’s not really possible
you can read the statefile and attempt to write the configuration it describes, by hand
well, I guess there is one other approach.
If you try and apply a blank configuration with this statefile, it will propose deleting every resource. You could copy and paste the resource definitions it proposes deleting into your local configuration. That will speed things up. If there were no modules involved..
hmm i’ll give this magic a shot
it’s complaining already…
Error: Provider configuration not present
To work with aws_route_table_association.public[2] its original provider
configuration at provider["registry.terraform.io/-/aws"] is required, but it
has been removed. This occurs when a provider configuration is removed while
objects created by that provider still exist in the state. Re-add the provider
configuration to destroy aws_route_table_association.public[2], after which
you can remove the provider configuration again.
you’ll need to add the aws provider at least
yes i did
provider "aws" {
shared_credentials_file = "~/.aws/credentials"
profile = "iac_hello_world" # CHANGE ME
region = "us-east-1"
}
ah. the original code was using a much older terraform version
you have to update the provider address from
registry.terraform.io/-/aws
to
registry.terraform.io/hashicorp/aws
there is a command to do it in your statefile automatically, but I forget it. You might be able to find it. Or you can edit the statefile manually
as in cli from tf?
yes
The terraform state replace-provider
command replaces the provider for resources in the Terraform state.
something like terraform state replace-provider registry.terraform.io/-/aws registry.terraform.io/hashicorp/aws
yup!
magic…
thanks, at least i see it plans to destroy everything now…
ok it turns out this is a vpc stack, and it appears that some NAT got deleted already… so in this case i guess there’s no chance of bringing it back?
If the IAC is lost, you need to recreate it from scratch and bring the existing resources under its management.
You could loop over the items in the state file and auto create entries in a main.tf. Have a look at terraformer also, as it will generate a skeleton tf file, you will use the existing state file to tell it what to import.
Once all of the existing infra is back under tf management, you will have to create definitions for the resources that have been deleted; you will have to use terraform state show NAME
and guess the spec that will recreate the missing resources.
let me give it a whirl