#terraform (2021-01)
Discussions related to Terraform or Terraform Modules
Archive: https://archive.sweetops.com/terraform/
2021-01-01
2021-01-02
2021-01-03
2021-01-04
Question: Route53 in a multiple-AWS-account environment [with Organizations used] I’m using Route53 to create Route53 zones in the Master, dev and prod accounts:
• Master account: root level domain = example.com
• dev account: sub root level domain = dev.example.com
• prod account: ? Not sure here; it could be prod.example.com or aws.example.com or another domain, or even not needed (just use the master zone). I need this to create delegation in the master zone for the dev account and prod account (not decided yet)
Depends on your company and how you use domains and Route53 :slightly_smiling_face:
You could have example.com in the main account be an alias for prod.example.com, which is in the prod account, if you really want to keep that.
You could have a static-only Amplify Console for example.com in the main account, and then have your actual app at app.example.com or live.example.com in the prod account.
If you’re doing SaaS it gets even wilder: Global Accelerator, WAFs, multi-region with multi-domain, and so on.
Loop in Marketing/Sales because they’ll definitely have opinions on this, and they might make the decision for you
We’re doing something very similar; multiple AWS accounts based on environment (prod, staging, dev, etc) using AWS Organizations from the prod account to manage the others.
We have Terraform creating the top level Route 53 public hosted zone (eg. company.com) and we manually add the NS records to our registrar. We experimented with a Terraform provider to manage the records with the registrar, but it created too much risk with the limited API interface that the registrar provided.
Within our non-prod AWS accounts, we have Terraform creating public hosted zones like staging.company.com, dev.company.com, etc., and we use the outputs of those zones to create NS records in the prod public hosted zone for those subdomains.
As @Vlad Ionescu (he/him) mentioned, things start to get complicated when you get into multi-region, but we’re handling this with private hosted zones for each VPC. As an example, within us-east-1, hosts within the VPC and users connected via VPN are able to resolve things like hostname.use1.dev.company.net.
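For reference, a minimal sketch of that delegation pattern (the provider aliases, variable, and zone names here are assumptions for illustration, not Drew's actual code):

resource "aws_route53_zone" "dev" {
  provider = aws.dev # provider aliased to the dev sub-account
  name     = "dev.company.com"
}

# NS records in the parent zone (prod account) delegate the subdomain
resource "aws_route53_record" "dev_ns" {
  provider = aws.prod
  zone_id  = var.parent_zone_id # zone ID of company.com
  name     = "dev.company.com"
  type     = "NS"
  ttl      = 300
  records  = aws_route53_zone.dev.name_servers
}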
@Drew Davies I see, very similar. I added the top level domain to the master account, not the prod account. So for the master account >> example.com, and for the prod account prod.example.com. What I need to do is create a CNAME from example.com to prod.example.com, and then from prod.example.com to ALB/EC2/CloudFront…
I think a CNAME on the apex is not possible
If you need the apex redirect in AWS, you could utilize S3 (+ CloudFront) to redirect http(s)://example.com to www.example.com or prod.example.com
I want to send to ALB,
I ended up moving the top-level domain to the prod account, using dev.x in the dev account and delegating dev.x in the prod top-level zone
That is also an option
If for whatever reason this would not be possible, you could still use the apex redirect trick via s3 - then it redirects to the hosted zone in the sub-account where you can then alias the alb
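A rough sketch of that S3 apex-redirect trick (names are illustrative, and you'd still point the apex alias record at the bucket's website endpoint):

resource "aws_s3_bucket" "apex_redirect" {
  bucket = "example.com" # bucket name must match the apex domain

  website {
    # every request to the apex gets a 301 to the real site
    redirect_all_requests_to = "https://prod.example.com"
  }
}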
good to know, thanks. I will test it later
anyone using terratest with the terratest upstream packages?
package s3

import (
	"math/rand"
	"os"
	"testing"

	"github.com/gruntwork-io/terratest/modules/aws"
	"github.com/gruntwork-io/terratest/modules/terraform"
)

const letterBytes = "abcdefghijklmnopqrstuvwxyz"

// RandStringBytes returns a random lowercase string of length n,
// used to give each test run a unique environment name.
func RandStringBytes(n int) string {
	b := make([]byte, n)
	for i := range b {
		b[i] = letterBytes[rand.Intn(len(letterBytes))]
	}
	return string(b)
}

func TestAssertBucketPolicyExists(t *testing.T) {
	pwd, _ := os.Getwd()
	randomString := RandStringBytes(16)

	terraformOptions := &terraform.Options{
		EnvVars: map[string]string{
			"AWS_DEFAULT_REGION": "eu-west-1",
		},
		TerraformDir: pwd,
		Vars: map[string]interface{}{
			"environment": randomString,
			"region":      "eu-west-1",
		},
	}

	// Execute Terraform
	defer terraform.Destroy(t, terraformOptions)
	terraform.InitAndApply(t, terraformOptions)

	// Perform assertions
	aws.AssertS3BucketPolicyExists(t, "eu-west-1", "ume"+"-"+randomString+"-access-logs")
}
TestAssertBucketPolicyExists 2021-01-04T15:23:07Z logger.go:66: logging_bucket_name = ume-xvlbzgbaicmrajww-access-logs
s3.go:355:
Error Trace: s3.go:355
s3policy_test.go:45
Error: Received unexpected error:
Error finding AWS credentials. Did you set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables or configure an AWS profile? Underlying error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Test: TestAssertBucketPolicyExist
it creates the resources via terraform without an issue, but when trying to run the AssertS3BucketPolicyExists function it can’t create a session correctly
i’ve had to export AWS_REGION for terratest to work. i don’t know why, i just do it
my env:
AWS_PROFILE=<profile>
AWS_DEFAULT_REGION=us-east-1
AWS_SDK_LOAD_CONFIG=1
AWS_REGION=us-east-1
Do you have a default profile @loren ?
I’m looking at https://github.com/gruntwork-io/terragrunt/issues/671
Hi guys, Kindly asking for your advice. I see that that the issue was previously discussed several times from different angles. I can't make our TG code working with my creds in AWS multiaccoun…
Adding the additional env vars helped but now I am getting this …
TestAssertBucketPolicyExists 2021-01-04T1914Z logger.go:
TestAssertBucketPolicyExists 2021-01-04T1914Z logger.go: Destroy complete! Resources: 3 destroyed.
--- PASS: TestAssertBucketPolicyExists (22.53s)
FAIL
FAIL tf-modules/modules/s3-logs 49.090s
FAIL
make: *** [terratest-local] Error 1
that doesn’t include the error, so….
i don’t use a default profile, no
Yeh for some reason the error just gets swallowed
TestValidateS3BucketName 2021-01-04T20:03:53Z logger.go:66: Error: Error creating S3 bucket: Error creating S3 bucket ume-xvlbzgbaicmrajww-access-logs, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
TestValidateS3BucketName 2021-01-04T20:03:53Z logger.go:66: status code: 409, request id: A9C34D53FF312801, host id: mZDfV01Dh0VgijnLuxz7AUyd9iJt0PpiW9KPDv+94NqPUak67BVK7K4psny+srW5SyWcDX27bsQ=
TestValidateS3BucketName 2021-01-04T20:03:53Z logger.go:66:
TestValidateS3BucketName 2021-01-04T20:03:53Z logger.go:66:
TestValidateS3BucketName 2021-01-04T20:03:53Z retry.go:80: Returning due to fatal error: FatalError{Underlying: error while running command: exit status 1;
Error: Error creating S3 bucket: Error creating S3 bucket ume-xvlbzgbaicmrajww-access-logs, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
status code: 409, request id: A9C34D53FF312801, host id: mZDfV01Dh0VgijnLuxz7AUyd9iJt0PpiW9KPDv+94NqPUak67BVK7K4psny+srW5SyWcDX27bsQ=
}
apply.go:15:
Error Trace: apply.go:15
s3logs_test.go:43
Error: Received unexpected error:
FatalError{Underlying: error while running command: exit status 1;
Error: Error creating S3 bucket: Error creating S3 bucket ume-xvlbzgbaicmrajww-access-logs, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
status code: 409, request id: A9C34D53FF312801, host id: mZDfV01Dh0VgijnLuxz7AUyd9iJt0PpiW9KPDv+94NqPUak67BVK7K4psny+srW5SyWcDX27bsQ=
}
Test: TestValidateS3BucketName
TestValidateS3BucketName 2021-01-04T20:03:53Z retry.go:72: terraform [destroy -auto-approve -input=false -var environment=xvlbzgbaicmrajww -var region=eu-west-1 -lock=false]
TestValidateS3BucketName 2021-01-04T20:03:53Z logger.go:66: Running command terraform with args [destroy -auto-approve -input=false -var environment=xvlbzgbaicmrajww -var region=eu-west-1 -lock=false]
TestValidateS3BucketName 2021-01-04T20:03:56Z logger.go:66: data.aws_elb_service_account.current: Refreshing state... [id=156460612806]
TestValidateS3BucketName 2021-01-04T20:03:56Z logger.go:66: data.aws_caller_identity.current: Refreshing state... [id=647217122429]
TestValidateS3BucketName 2021-01-04T20:03:58Z logger.go:66:
TestValidateS3BucketName 2021-01-04T20:03:58Z logger.go:66: Destroy complete! Resources: 0 destroyed.
note this is for a different test
somehow you have more than one thing trying to operate on the bucket at a time. it’s a race condition in your tf code
for example, if you are using the bucket_policy resource and the public_access_block resource and the bucket_notification resource, only one can hit the api action at a time
you have to thread the bucket name/id/arn from resource to resource so it happens in order
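Concretely, threading looks something like this sketch, where each resource references the previous one so Terraform serializes the API calls (resource names and the policy body are illustrative):

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
}

resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id # implicit dependency on the bucket

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowSSLRequestsOnly"
      Effect    = "Deny"
      Principal = "*"
      Action    = "s3:*"
      Resource  = [aws_s3_bucket.this.arn, "${aws_s3_bucket.this.arn}/*"]
      Condition = { Bool = { "aws:SecureTransport" = "false" } }
    }]
  })
}

resource "aws_s3_bucket_public_access_block" "this" {
  bucket = aws_s3_bucket.this.id

  block_public_acls   = true
  block_public_policy = true

  # explicit dependency so this never races the bucket policy call
  depends_on = [aws_s3_bucket_policy.this]
}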
they are completely different buckets though
they are two completely different tests
terratest runs tests in parallel also, by default. the error seems pretty clear. it’s definitely a race condition on the bucket operations. just gotta trace through how you’re doing things to figure out what is causing the conflicting operations
the two tests create two separate buckets; it tries to get bucket X after terraform has successfully destroyed bucket Y
keep going
now i am getting this …
s3-logs/s3logs_test.go:4:2: use of vendored package not allowed
FAIL tf-modules/modules/s3-logs [setup failed]
google “golang use of vendored package not allowed”
that was my bad, it was a mismatch
i am wondering if it’s a race condition in the terraform that requires a depends_on
because if i comment out the other test in the module it works perfectly fine
resource "aws_s3_bucket_public_access_block" "this" {
  bucket = aws_s3_bucket.this.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  # the original paste had a self-reference here, which terraform rejects;
  # presumably this was meant to point at the racing resource, e.g. the bucket policy
  depends_on = [aws_s3_bucket_policy.this]
}
@loren adding that depends_on fixed it
depends_on itself?
appears to be what i was describing here: https://sweetops.slack.com/archives/CB6GHNLG0/p1609791301011500?thread_ts=1609782809.000200&cid=CB6GHNLG0
for example, if you are using the bucket_policy resource and the public_access_block resource and the bucket_notification resource, only one can hit the api action at a time
yeh adding that fixed concurrent test execution for the module
you’ve maybe found an odd way of fixing it. or maybe just the race condition not manifesting for a run or two
Hmm.. Within the terraform-aws-acm-request-certificate module, I keep getting:
27: for_each = {
28: for dvo in local.domain_validation_options_list : dvo.domain_name => {
29: name = dvo.resource_record_name
30: record = dvo.resource_record_value
31: type = dvo.resource_record_type
32: }
33: }
the new version?
Yeah, I think this is my issue though unfortunately for me. Looks like the state was saved locally instead of in S3 and now no longer exists
did you have the module pinned at a version
Yeah, v0.11.0
hmm let me take a look, i tested this last night and was fine
what aws version are you using, im guessing it must be 3.x
also just for reference what terraform version are you on
I’m 99% sure the problem is the deleted .tfstate file.
gotcha, is there a backup in your directory
In the .terraform directory?
no your apply directory
Ahh, no, doesn’t appear to be. I’m looking into importing the state from existing resources.
gotcha, yea you won’t be able to import aws_acm_certificate_validation as it’s not an actual aws resource; not sure if that will cause a problem or not
I know there are ways to import modules, terraform import module.aws_acm_certificate_validation["blahblah"] <resource-id>
Haven’t gotten to that step though, honestly, the Route53 Zones are the only thing I’m worried about losing, as then I’ll have to reset the DNS NS Records
Totally different issue, but having problems importing the zone when the zone has a period in the name. >_>
oh
crucial issue ugh, forgot to push a local commit
ok yea this fixes that oops
what: there is no need to convert to a list anymore as it’s a set. why: fixes bug introduced by me :) references: issue from slack thread https://sweetops.slack.com/archives/CB6GHNLG0/p1609784495001600
@pjaudiomv How would I use the pull request in my local modules? And what’s the general merge time like for these?
I recovered my state, but have the same issue as initially described (which would be fixed by your PR)
once Patric is done I will be merging right away
Thanks PePe
im almost done, I was trying to do too many things at once. I stepped away from the day job and am finishing testing now
ok @David Napier i believe you should be able to use 0.12.0 of the module now
version = "0.12.0"
Yesss, it worked, thanks!
You made my day.
thank you PePe for the patience and quick reviews
no problem
thanks for your contributions
Hello all and Happy New Year from the UK. Could somebody in the know please confirm if what I’m trying to do is possible. I’ve written a little module to create ACM certs in AWS. It’s working great, but I’ve tried to modify it so it still creates the cert in AWS as normal but creates the DNS verification records that the cert needs via a different DNS supplier, Cloudflare in my particular case.
More details within thread…
More details….
Terraform initially complained that I need to supply providers argument along with the module
Error: Module does not support for_each
on acm_us-east-1.tf line 14, in module "acm_us-east-1":
14: for_each = { for config_key, config in var.site_configs : config_key => config if config.ssl_config.aws.create_ssl_cert == true }
Module "acm_us-east-1" cannot be used with for_each because it contains a
nested provider configuration for "cloudflare", at acm\versions.tf:13,10-22.
This module can be made compatible with for_each by changing it to receive all
of its provider configurations from the calling module, by using the
"providers" argument in the calling module block.
So I updated the code to include
providers = {
  aws        = aws.us-east-1
  cloudflare = cloudflare
}
but now I’m stuck in an endless loop of terraform complaining that I need to re-initialise.
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provider "registry.terraform.io/cloudflare/cloudflare"
to obtain schema: Unrecognized remote plugin message:
This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.
Can anybody make a suggestion?
oh and if it helps my providers look like this currently:
.
├── provider[registry.terraform.io/cloudflare/cloudflare] ~> 2.0
├── provider[registry.terraform.io/hashicorp/aws]
├── provider[registry.terraform.io/hashicorp/random]
├── module.acm_us-east-1
│ ├── provider[registry.terraform.io/hashicorp/aws] >= 2.53.*
│ ├── provider[registry.terraform.io/hashicorp/tls]
│ └── provider[registry.terraform.io/cloudflare/cloudflare]
├── module.ses_user_enable_send
│ ├── provider[registry.terraform.io/hashicorp/aws]
│ └── provider[registry.terraform.io/hashicorp/null]
└── module.acm_eu-west-1
├── provider[registry.terraform.io/hashicorp/aws] >= 2.53.*
├── provider[registry.terraform.io/cloudflare/cloudflare]
└── provider[registry.terraform.io/hashicorp/tls]
Providers required by state:
provider[registry.terraform.io/hashicorp/aws]
provider[registry.terraform.io/hashicorp/null]
provider[registry.terraform.io/hashicorp/random]
did you try to delete the cache in the folder .terraform.d
Hello Andriy, yes; many times
also, since you are using just one provider of each type, you can use Implicit Provider Inheritance
Terraform by HashiCorp
w/o providing the providers to the module
in short, I assume you can’t have an AWS resource and a Cloudflare resource in the same .tf file?
you can have any resources in a file
I only specified them to the module as it fails to initialise unless I do so but I’ll rip it out again and retry. I might have missed something. Thanks for the advice
in your case, I guess TF just got stuck on something when you changed the provider config
did you try to destroy using the old code, and plan again?
I haven’t, as I didn’t really want to destroy anything in my test environment. If everything works as it should, it shouldn’t see any changes in state, other than the new AWS based certificate I’m trying to create with Cloudflare-validated DNS records.
try to use https://www.terraform.io/docs/modules/providers.html#implicit-provider-inheritance w/o destroying the SSL cert
I’ll try starting again with a bigger hammer. As you say, It might be confused. Thanks for the advice.
specify the AWS and CloudFlare providers in the top-level module
don’t send any provider to the low-level module
Okay. Would I still specify required providers within the lower module?
e.g.
terraform {
  required_version = ">= 0.13.5"

  required_providers {
    aws = ">= 2.53"
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}
or completely ditch those?
You can specify those
apologies, on second thoughts won’t I still need to specify the provider in the top level calling module?
module "acm_us-east-1" {
  source = "./acm"

  providers = {
    aws        = aws.us-east-1
    cloudflare = cloudflare
  }

  *** rest snipped ***
Otherwise, I won’t be able to target the us-east-1 region to create the certificate, as I normally run out of eu-west-1
providers = {
  aws = aws.us-east-1
}
if you run from a diff region, you need to specify the correct aws provider to the module
(i’m not sure if you have to specify cloudflare in this case, or if implicit inheritance will work for it. Try not to specify it)
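For completeness, a sketch of the aliased provider setup this assumes in the root module (region values are examples):

provider "aws" {
  region = "eu-west-1" # default provider
}

provider "aws" {
  alias  = "us-east-1"
  region = "us-east-1"
}

provider "cloudflare" {} # can be inherited implicitly by child modules

module "acm_us-east-1" {
  source = "./acm"

  providers = {
    aws = aws.us-east-1 # only the non-default alias needs passing
  }
}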
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Using previously-installed hashicorp/null v3.0.0
- Using previously-installed hashicorp/aws v3.22.0
- Using previously-installed hashicorp/random v3.0.0
- Using previously-installed cloudflare/cloudflare v2.15.0
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.
* hashicorp/null: version = "~> 3.0.0"
* hashicorp/random: version = "~> 3.0.0"
Terraform has been successfully initialized!
terraform_0.13.5 plan
Releasing state lock. This may take a few moments...
Error: Could not load plugin
Plugin reinitialization required. Please run "terraform init".
Plugins are external binaries that Terraform uses to access and manipulate
resources. The configuration provided requires plugins which can't be located,
don't satisfy the version constraints, or are otherwise incompatible.
Terraform automatically discovers provider requirements from your
configuration, including providers used in child modules. To see the
requirements and constraints, run "terraform providers".
Failed to instantiate provider "registry.terraform.io/cloudflare/cloudflare"
to obtain schema: Unrecognized remote plugin message:
This usually means that the plugin is either invalid or simply
needs to be recompiled to support the latest protocol.
only provider blocks are now at the top level
If this is something that should be possible then I’ll start again, with a basic config block and work backwards from there.
did you delete the cached files in ~/.terraform.d?
i don’t have a .terraform.d folder, only a .terraform folder, which I have deleted. Out of total paranoia I also blasted the code away, rebooted and then checked it out again, but no difference.
I’ll try a single high level simplified version
resource "aws_acm_certificate" "cert" {
  domain_name       = var.domain
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

data "cloudflare_zones" "find_zone_ids" {
  for_each = { for ssl_key, ssl_value in var.ssl_root_domain_names : ssl_key => trimsuffix(replace(ssl_value, "*.", ""), ".") if var.ssl_validation_method == "DNS_Cloudflare" && var.ssl_perform_validation == true }

  filter {
    name = each.value
  }
}

resource "cloudflare_record" "cloudflare_cert_validation_records" {
  for_each = {
    # the certificate resource above is named "cert", so the reference must
    # match (the original paste referenced aws_acm_certificate.this)
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name    = dvo.resource_record_name
      record  = dvo.resource_record_value
      type    = dvo.resource_record_type
      zone_id = data.cloudflare_zones.find_zone_ids[dvo.domain_name].zone_id
    } if var.ssl_validation_method == "DNS_Cloudflare" && var.ssl_perform_validation == true
  }

  allow_overwrite = var.ssl_allow_overwrite_dns
  zone_id         = each.value.zone_id
  name            = each.value.name
  value           = each.value.record # cloudflare_record's value is a string, not a list
  type            = each.value.type
  proxied         = true
}
I assume something as simple as that should work, and I can just keep the providers at the parent level without the need to send them on to a module.
I’ll need to clean the “var.” names up as I’ve only just quickly pulled that together. The main point is that the mixed resource types should be fine, based on your initial statement?
check for a .terraform.d folder in your $HOME directory (not in the project)
Okay, looks like I owe you a BIG thanks and an even bigger SORRY for not clarifying about the .terraform.d directory earlier!
I stupidly thought it was a difference between Windows or Unix terraform, or terragrunt, or whatever other people use. I unfortunately didn’t know it was a separate directory altogether; I’d mistakenly thought everything was under the .terraform directory.
Feel a right fool now.
While the code still isn’t working, it’s no longer complaining about needing to re-init.
Thank you, and sorry for wasting more of your time than was needed.
no problem at all
(that folder is always the culprit if you have issues with provider initialization)
Internal wiki updated and lesson truly learnt. So thanks! I’d be lost without all the support you and others have given me here. I still suggest Cloud Posse publish a charity pay-me type of thing, as I’d happily donate each time somebody helped me.
Hello hello is it possible to use some provider functionality that’s only on a branch of the terraform provider repo? I’m trying to utilize the functionality on this branch to test Amazon Managed Workflows for Apache Airflow
Community Note Please vote on this pull request by adding a reaction to the original pull request comment to help the community and maintainers prioritize this request Please do not leave "…
if you can compile the provider and place the binary in the ~/.terraform.d folder, you can test it
or, you can publish it in your own registry (never did it before) and then load it from there https://www.terraform.io/docs/internals/provider-registry-protocol.html#provider-addresses
The provider registry protocol is implemented by a host intending to be the origin host for one or more Terraform providers, specifying which providers are available and where to find their distribution packages.
2021-01-05
Hello, I push scripts to an Azure container with the code below, but a file change is not detected once the resource is already created. Is there a way to recreate the resource when the file has changed?
resource "azurerm_storage_blob" "linux_postinstall_scripts" {
  for_each               = local.files
  name                   = "${each.key}_${each.value["StartupScript"]}" # the original paste was missing the "$" before {each.key}
  storage_account_name   = var.storage_account_name
  storage_container_name = var.storage_container_name
  type                   = "Block"
  source                 = "${path.root}/files/${each.value["StartupScript"]}"
}
can you paste local.files
it’s a map of per-server options. If linux_init.sh is updated after resource creation, it is not uploaded again
{
  server1 = {
    "StartupScript" = "linux_init.sh"
    ...
  }
  server2 = {
    "StartupScript" = "linux_init.sh"
    ...
  }
}
repos = {
  master = {
    repo_name        = "xxx-terraform-master",
    repo_description = "xxx Terraform Master Account",
    branch_name      = "master"
  },
  bootstrap = {
    repo_name        = "xxx-terraform-bootstrap",
    repo_description = "xxx Terraform Bootstrap",
    branch_name      = "master"
  },
}

resource "aws_codecommit_repository" "repo" {
  for_each = var.repos

  repository_name = each.value.repo_name
  description     = each.value.repo_description
  default_branch  = each.value.branch_name
  tags            = var.tags
}
this worked for me on Terraform 0.14.3
Oh I see, you mean updating the file itself
yes
hmm, try to use template with path
Thanks! I’ll check tomorrow, but that looks like it’ll do the trick. Googling “template” I found someone who did it along the lines of your suggestion
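One common approach, as an assumption on my part rather than something confirmed in this thread, is to hash the file content so the blob is replaced whenever the script changes:

# inside the azurerm_storage_blob resource above:
# content_md5 changes when the file changes, forcing the blob to be recreated
content_md5 = filemd5("${path.root}/files/${each.value["StartupScript"]}")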
Has anybody got experience with the Cloudflare terraform provider? I’m creating certificates in AWS in two regions, but I use Cloudflare DNS for the validation records.
AWS supports overwriting DNS records, but Cloudflare appears to error if a record already exists. The validation records already exist because creating the cert in the first region creates them; when the second region then tries to create the validation records, it errors because they are the same.
Error: expected DNS record to not already be present but already exists
Has anybody got a solution other than moving all validation requests to a separate module and trying to filter them all, which I think would be a nightmare?
Are these certificates the same? Are they used on something like an ALB, one in each region?
The certificates are the same, but one is used on an ALB in eu-west-1 and the other is used on CloudFront, so it needs to be created in us-east-1. Because they are the same, the verification records are identical.
so you need 2 ACM certificates, one in eu-west-1 and another in us-east-1 (that one is for CloudFront, since it requires the certificate to be in that region). AWS ACM will then generate a different Name and Value for each, so you can add those DNS records to your Cloudflare
I have done this before and I have not gotten the same ACM certificate. Another thing: if you have one certificate for your ALB and another for your CloudFront, they should have different subdomains, unless you route your ALB through CloudFront, in which case you only need 1 certificate, the CloudFront one
cheers, Miguel. I’ll have a look at my design again.
Equally, if there isn’t an overwrite option (and looking at the resource_cloudflare_record.go file on GitHub, there isn’t), is there any way to tell terraform to ignore an error and continue?
you can maybe ignore it with ignore_changes: https://www.terraform.io/docs/configuration/meta-arguments/lifecycle.html
Terraform by HashiCorp
thank you
So I’m loving the API driven approach for adding comments into the PR for review.
However, on merge I want to run the plan, but still require terraform cloud approval. Running the API request on merge is synchronous at that point and causes timeout failure if you don’t approve immediately in terraform cloud.
So….
- Is there a way to just use VCS driven workflow + still allow the API driven plan and preview to PR?
- If I have to stick with API driven workflow, then upon merge to trunk, can I submit an asynchronous request so github actions proceeds without issue but the pending plan in Terraform Cloud remains “pending apply”?
Different Question: Has anyone got Terraform Cloud notifications working with Microsoft Teams or service account email?
• The email requires user email, can’t send to Microsoft teams
• The hooks don’t work with Teams. Any integrations or work arounds?
There are a few different ways to write maps in Terraform. What do you think is the canonical way?
I think it’s { key1 = "foo", key2 = "bar" }
You can use = or :, but the former seems strongly preferred. You can also quote the key or not; quoting is required if the key has a space, but unquoted keys are generally used in the docs. There are other things you can tweak too…
Also, RFC. I’ve written a short proposal for adding pre-truncated id outputs to terraform-null-label (and eventually, removing id?!).
https://github.com/cloudposse/terraform-null-label/issues/117
I am happy to implement this feature. I wanted to see if it would be supported first though Describe the Feature Add outputs id_16, id_32, id_64. Expected Behavior They would act like id output w…
@Alex Jurkiewicz that’s a very good proposal, thank you
we can add those diff length IDs (w/o changing the current functionality)
great! I’ll code it up… soon
Truncated forms of id_full which are always available. This is useful when you want to use the same label for several resources with different length restrictions. Closes #117.
2021-01-06
Anyone have experience using Packer and Terraform together? Currently I use null_resource(s) to push docker images to ECR for terraform. I was wondering if I could do this with Packer instead. My question: is there a way to have the ECR URL injected into terraform from Packer? Can they communicate similarly to how I do it currently?
AFAIU, you’re thinking of Packer wrong. Packer is for building AMIs. When you’re dealing with Docker, images, and ECR, then you’re dealing with a different toolset than what Packer is used for.
Your use-case looks like you could look into Hashi’s new OSS project Waypoint though: https://www.waypointproject.io/
Waypoint is an open source solution that provides a modern workflow for build, deploy, and release across platforms.
@Matt Gowie Packer can definitely build docker images https://www.packer.io/docs/builders/docker and it can also push them to ECR https://www.packer.io/docs/post-processors/docker-push. The missing piece for me is how to make terraform aware that it did this and what the ECR URL is
The docker Packer builder builds Docker images using Docker. The builder starts a Docker container, runs provisioners within this container, then exports the container for reuse or commits the image.
The Packer Docker push post-processor takes an artifact from the docker-import post-processor and pushes it to a Docker registry.
using packer to build docker seems odd since there’s already a ‘native’ build tool for that
at any rate, packer doesn’t really provide any outputs except a manifest.json if you enable it (I’m not actually sure if that’s universal to every builder), and then you can add k/v pairs to it with additional data.
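If you do go the manifest route, a hedged sketch of consuming it from Terraform (the file path is an assumption; the builds/artifact_id keys are the manifest post-processor's standard output):

locals {
  packer_manifest = jsondecode(file("${path.module}/manifest.json"))

  # for the docker builder, artifact_id is typically "<image>:<tag>"
  image_ref = local.packer_manifest.builds[0].artifact_id
}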
you could say AMIs also have a native tool too I guess https://docs.aws.amazon.com/cli/latest/reference/imagebuilder/index.html but it’d be cool to just work in HCL2 for both docker and AMI images. Thanks for the info
I try to keep these in separate repos and stick to conventions for where the AMIs and Docker Images end up.
v0.12.30 0.12.30 (January 06, 2021) UPGRADE NOTES: The builtin provider’s terraform_remote_state data source no longer enforces Terraform version checks on the remote state file. This allows Terraform 0.12.30 to access remote state from future Terraform versions, up until a future incompatible state file version upgrade is required. (#26692)
The builtin Terraform provider's remote state data source uses a configured backend to fetch a given state, in order to allow access to its root module outputs. Until this change, this was only…
@Jeremy G (Cloud Posse)
That is a big relief.
v0.13.6 0.13.6 (January 06, 2021) UPGRADE NOTES: The builtin provider’s terraform_remote_state data source no longer enforces Terraform version checks on the remote state file. This allows Terraform 0.13.6 to access remote state from future Terraform versions, up until a future incompatible state file version upgrade is required. (#26692)…
v0.14.4 0.14.4 (January 06, 2021) UPGRADE NOTES: This release disables the remote Terraform version check feature for plan and apply operations. This fixes an issue with using custom Terraform version bundles in Terraform Enterprise. (#27319) BUG FIXES: backend/remote: Disable remote Terraform workspace version check when the remote…
Terraform remote version conflicts are not a concern for operations. We are in one of three states: Running remotely, in which case the local version is irrelevant; Workspace configured for local …
I use the module https://github.com/cloudposse/terraform-aws-elasticsearch to provision Elasticsearch. I set kibana_hostname_enabled = false and domain_hostname_enabled = false. Per the documentation, dns_zone_id is not required. But it asks for the dns zone id when I run terraform plan.
terraform plan
var.dns_zone_id
Route53 DNS Zone ID to add hostname records for Elasticsearch domain and Kibana
Enter a value:
Terraform module to provision an Elasticsearch cluster with built-in integrations with Kibana and Logstash. - cloudposse/terraform-aws-elasticsearch
If dns_zone_id will not be used then you can just pass it as dns_zone_id = null
to the module and that should do the trick.
I prefer not to use Route53. How do I avoid dns_zone_id? Below is the code:
module "elasticsearch" {
  source = "git::https://github.com/cloudposse/terraform-aws-elasticsearch.git?ref=tags/0.24.1"

  security_groups                = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
  vpc_id                         = data.terraform_remote_state.vpc.outputs.vpc_id
  zone_awareness_enabled         = var.zone_awareness_enabled
  subnet_ids                     = slice(data.terraform_remote_state.vpc.outputs.private_subnets, 0, 2)
  elasticsearch_version          = var.elasticsearch_version
  instance_type                  = var.instance_type
  instance_count                 = var.instance_count
  encrypt_at_rest_enabled        = var.encrypt_at_rest_enabled
  dedicated_master_enabled       = var.dedicated_master_enabled
  create_iam_service_linked_role = var.create_iam_service_linked_role
  kibana_subdomain_name          = var.kibana_subdomain_name
  ebs_volume_size                = var.ebs_volume_size
  #dns_zone_id                   = var.dns_zone_id
  kibana_hostname_enabled        = false
  domain_hostname_enabled        = false
  iam_role_arns                  = ["*"]
  iam_actions                    = ["es:*"]
  enabled                        = var.enabled
  vpc_enabled                    = var.vpc_enabled
  name                           = var.name
  tags                           = var.tags

  advanced_options = {
    "rest.action.multi.allow_explicit_index" = "true"
  }
}
Hi All - first post so excuse if silly :)
Wondering what’s best practice for creating kafka topics post cluster creation using the CloudPosse MSK Module?
AWS doesn’t appear to support anything directly on MSK and even references the apache shell scripts (here points to here)
If it’s really CLI-only, is it possible to run a template file after the MSK cluster is created to run the shell scripts? e.g.
$ bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
--replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
Thanks for any help
This likely falls into the fun data tier updates you can do with providers like postgres, elasticsearch, rabbit, etc.
There is a Kafka provider that will do this for you (https://github.com/Mongey/terraform-provider-kafka) but you will need an accessible route to hit Kafka from outside your cluster.
Terraform provider for managing Apache Kafka Topics + ACLs - Mongey/terraform-provider-kafka
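With that provider, topic creation looks roughly like this (the broker address and settings are illustrative, mirroring the kafka-topics.sh example above):

provider "kafka" {
  bootstrap_servers = ["b-1.mycluster.kafka.eu-west-1.amazonaws.com:9092"]
}

resource "kafka_topic" "my_topic" {
  name               = "my-topic"
  partitions         = 1
  replication_factor = 1

  config = {
    "max.message.bytes" = "64000"
    "flush.messages"    = "1"
  }
}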
Hi all, newbie terraform question here: I’m trying to use the https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_account#connection_strings attribute in another module, and even though the documentation clearly says it produces a list of strings, I’m ending up with a string, and the error message “Invalid index; This value does not have any indices.” How can I test the type of the attribute so I can know whether it needs wrapping to ensure my consuming modules get a consistent type?
Terraform by HashiCorp
you can use [*]: if a splat expression is applied to a value that is not a list or tuple then the value is automatically wrapped in a single-element list before processing
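A quick illustration of that wrapping behaviour (values here are made up):

locals {
  single = "Server=example;Key=secret"

  # a single non-list value gets wrapped: ["Server=example;Key=secret"]
  wrapped = local.single[*]

  # an actual list passes through unchanged
}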
Interesting behaviour - I’ll give it a shot. Thanks!
And after all that, it was my mock data that was the problem rather than the source data.
2021-01-07
Anyone had this when releasing cert-manager via terraform/helmfile?
Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post <https://cert-manager-qa-webhook.cert-manager.svc:443/mutate?timeout=10s>: x509: certificate signed by unknown authority
Guessing it’s cause the webhook pod isn’t ready yet?
Can I add a delay?
Hi all, new member of the SweetOps community here. I have a question regarding terraform-aws-ec2-ami-backup, which I found to enable me to run regular backups of Windows EC2 instances. Since it is recommended that the Windows instance being backed up is in a shutdown state during the backup, is it possible for me to add an AWS SDK call to shut down the instance before the backup and then start it back up after the snapshot is completed?
Terraform module for automatic & scheduled AMI creation - cloudposse/terraform-aws-ec2-ami-backup
Hi all, We have a combination of ASG and launch config that spins up on-demand ec2 instances for us. However, due to recent changes in budget, we would like to use the combination of on-demand and spot instances. How and what changes I should make in my ASG and launch config to start spinning up Spot instances? Thanks!
You can just add the mixed instances policy on your ASG: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_group#instances_distribution
@Miguel Zablah thanks for sharing it. However, how can we add a conditional statement here to allow a few ec2 to use spot and a few to use on-demand within the same ASG?
You will add it like this:
mixed_instances_policy = {
  instances_distribution = {
    on_demand_base_capacity                  = 1
    on_demand_percentage_above_base_capacity = 50
  }
  ...
}
on_demand_base_capacity sets how many on-demand instances you want as a baseline (default).
on_demand_percentage_above_base_capacity sets what percentage of the instances above that baseline will be on-demand (the rest will be spot).
In this example you will always have 1 on-demand instance and 50% will be spot; this means that if you have set your ASG to have 4 instances running, 2 will be on-demand and 2 will be spot.
Hi all, has anybody got a clever idea on how to select only one node out of an ASG to include in an AWS alb_target_group? I normally supply a list of target group ARNs to the ASG, and in turn all the machines in the ASG are added to the target groups.
resource "aws_autoscaling_group" "..." {
  # *snipped to cut down on text*
  target_group_arns = compact(
    concat(
      module.cms_public_target_group.arn,
      module.maintenance_public_target_group.arn,
      module.authoring_public_target_group.arn,
    ),
  )
}
However, I could really do with doing this in reverse and attaching a single instance from the ASG to the target group. I’ve started to look at whether I could run a data lookup on EC2 instances filtered on tags that match the ASG, then take the first instance ID in the list and use the aws_lb_target_group_attachment resource to add it to the target group, but I’m guessing there’s probably a chicken-and-egg issue here. I could also maybe write something to do a similar lookup via userdata.
But before I go off and invent something crazy, has anybody got that clever idea I’m hopeful for?
Is anyone familiar with https://github.com/TelkomIndonesia/terraform-provider-linux provider ?
I used terraform to create a MySQL DB instance, and the first time everything went well. I made a few changes to other modules, not to the RDS module, but when I run terraform apply it says the DB instance already exists. Terraform is supposed to store the DB instance it created earlier in its state; why is that not happening, any idea?
Output:
Warning: Quoted references are deprecated
on modules/elasticsearch/main.tf line 7, in resource "aws_elasticsearch_domain" "es":
7: ignore_changes = ["access_policies"]
In this context, references are expected literally rather than in quotes.
Terraform 0.11 and earlier required quotes, but quoted references are now
deprecated and will be removed in a future version of Terraform. Remove the
quotes surrounding this reference to silence this warning.
(and one more similar warning elsewhere)
Error: Error creating DB Instance: DBInstanceAlreadyExists: DB Instance already exists
status code: 400, request id: a753d1ca-b0af-447c-85e6-d1b7bd672f34
v0.15.0-alpha20210107 0.15.0 (Unreleased) UPGRADE NOTES: config: The list and map functions, both of which were deprecated since Terraform v0.12, are now removed. You can replace uses of these functions with tolist([…]) and tomap({…}) respectively. (#26818) cli: Interrupting execution will now cause terraform to exit with a non-0 status. (<a…
0.15.0 (Unreleased) UPGRADE NOTES: config: The list and map functions, both of which were deprecated since Terraform v0.12, are now removed. You can replace uses of these functions with tolist([…..
Prior to Terraform 0.12 these two functions were the only way to construct literal lists and maps (respectively) in HIL expressions. Terraform 0.12, by switching to HCL 2, introduced first-class sy…
ffs
Release v3.23.0 · hashicorp/terraform-provider-aws
breathingdust released this 4 hours ago
• New Data Source: aws_ssoadmin_instances
(#15808)
• New Data Source: aws_ssoadmin_permission_set
(#15808)
• New Resource: aws_sagemaker_image
(#16082)
• New Resource: aws_ssoadmin_managed_policy_attachment
(#15808)
• New Resource: aws_ssoadmin_permission_set
(#15808)
Finally SSO is implemented, no more CFN templates
Really? SSO? - i’ve just written some CFN terraform wrapper stuff and was going mad
me too, I’m releasing the module ASAP
I created a CFN template and call it from Terraform; now what I need to do is create the module and import the resources there
I found that this is missing account assignments, so partial usage for now
@Jeremy G (Cloud Posse)
@Ayman
2021-01-08
Hi, it seems the RDS instance will be recreated each time, how can I avoid it? or ignore the RDS instance?
if you change the name it will be recreated each time
it will only be recreated if there is a change in an attribute that forces replacement
look at the plan, see what has the red - sign, and see which attribute it is
I use a var for the name attribute, it should be the same
will try a plan
+ name = "test" # forces replacement
any name change will force replacement, you can’t change it every time
it uses a fixed value from variables.tf file
for example, var.name
I’m using terraform 0.13.5
if it is a fixed value then this should not happen
so I think it is different
did you change var.attributes or var.stage?
no, the attribute is ['rds']
which is fixed
the database was created from snapshot
manually or using TF?
using TF
But using tf from the snapshot, correct?
yes
terraform state show module.rds.xxxxxx
use the resource name that shows in the plan
and see what the name is
and compare it
it doesn’t have name
from the outputs
address
allocated_storage
allow_major_version_upgrade
apply_immediately
arn
auto_minor_version_upgrade
availability_zone
backup_retention_period
backup_window
ca_cert_identifier
copy_tags_to_snapshot
db_subnet_group_name
delete_automated_backups
deletion_protection
enabled_cloudwatch_logs_exports
endpoint
engine
engine_version
final_snapshot_identifier
hosted_zone_id
iam_database_authentication_enabled
id
identifier
instance_class
iops
latest_restorable_time
license_model
maintenance_window
max_allocated_storage
monitoring_interval
multi_az
option_group_name
parameter_group_name
password
performance_insights_enabled
performance_insights_retention_period
port
publicly_accessible
replicas
resource_id
security_group_names
skip_final_snapshot
snapshot_identifier
status
storage_encrypted
storage_type
tags
"Attributes"
"Environment"
"Name"
"Namespace"
username
vpc_security_group_ids
list of the rds attributes from state command
This issue was originally opened by @hunt3ri as hashicorp/terraform#18563. It was migrated here as a result of the provider split. The original body of the issue is below. Terraform Version 0.11.7 …
what is terraform trying to replace? the rds instance or the rds cluster?
rds instance
look then at the id, identifier and cluster_identifier
the id is the name basically
yeah
is that being changed?
identifier = "test"
+ identifier_prefix = (known after apply)
identifier_prefix is added
but the only one it complains about, that forces replacement, is the name?
is this a cluster or a rds instance only?
are you using cloudposse modules for this?
-/+ resource "aws_db_instance" "default" {
~ address = "a-staging-test-rds.xxxx.us-east-1.rds.amazonaws.com" -> (known after apply)
allocated_storage = 20
allow_major_version_upgrade = false
apply_immediately = true
~ arn = "arn:aws:rds:us-east-1:xxx:db:a-staging-test-rds" -> (known after apply)
auto_minor_version_upgrade = true
~ availability_zone = "us-east-1b" -> (known after apply)
backup_retention_period = 0
backup_window = "22:00-03:00"
ca_cert_identifier = "rds-ca-2019"
+ character_set_name = (known after apply)
copy_tags_to_snapshot = true
db_subnet_group_name = "a-staging-test-rds"
delete_automated_backups = true
deletion_protection = false
- enabled_cloudwatch_logs_exports = [] -> null
~ endpoint = "a-staging-test-rds.xxx.us-east-1.rds.amazonaws.com:3306" -> (known after apply)
engine = "mysql"
engine_version = "8.0.20"
final_snapshot_identifier = "a-staging-test-rds-final-snapshot"
~ hosted_zone_id = "xxx" -> (known after apply)
iam_database_authentication_enabled = false
~ id = "a-staging-test-rds" -> (known after apply)
identifier = "a-staging-test-rds"
+ identifier_prefix = (known after apply)
instance_class = "db.t2.small"
iops = 0
+ kms_key_id = (known after apply)
~ latest_restorable_time = "0001-01-01T00:00:00Z" -> (known after apply)
~ license_model = "general-public-license" -> (known after apply)
maintenance_window = "mon:03:00-mon:04:00"
max_allocated_storage = 1000
monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
multi_az = false
+ name = "teststage" # forces replacement
option_group_name = "a-staging-test-rds"
parameter_group_name = "a-staging-test-rds"
password = (sensitive value)
performance_insights_enabled = false
+ performance_insights_kms_key_id = (known after apply)
~ performance_insights_retention_period = 0 -> (known after apply)
port = 3306
publicly_accessible = false
~ replicas = [] -> (known after apply)
~ resource_id = "db-xxx" -> (known after apply)
- security_group_names = [] -> null
skip_final_snapshot = true
snapshot_identifier = "orig-stage-database"
~ status = "available" -> (known after apply)
storage_encrypted = false
storage_type = "gp2"
tags = {
"Attributes" = "rds"
"Environment" = "staging"
"Name" = "a-staging-test-rds"
"Namespace" = "a"
}
+ timezone = (known after apply)
username = "admin"
vpc_security_group_ids = [
"sg-xxx",
]
}
yes, I am using cloudposse rds module
here are the outputs of plan command
but are you creating a cluster or just an rds instance?
just rds instance
ok
the "name" attribute value in the state is test
or teststage
teststage
it is database name passed into terraform rds resource
so you created the instance from a snapshot using the module
and now you are running tf again without changing anything, and it is trying to recreate it?
yes
if that is the case, remove the snapshot_identifier
do not pass it after it is created
I think it is trying to recreate it again from the snapshot
+ name = "test" # forces replacement
- snapshot_identifier = "orig-stage-database" -> null # forces replacement
there is one more replacement after removing snapshot from tf file
Ok, but it still complains about the name
Mmmmm
What happens if you do not pass a name?
Have you tried without using a snapshot, to see if it keeps trying to replace the instance?
the name variable should be required from database_name
I personally never had this problem
I think it is caused by snapshot
I have not used a snapshot for an instance, but I have done it for a cluster
But I didn’t have this problem
ok, thanks, I’ll give it a try tomorrow
Can’t change the snapshot identifier without rolling the database :) Also, the instance module doesn’t have apply_immediately false and lifecycle ignores for some things, so it can roll too.
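On a raw aws_db_instance (outside the module) the usual guard looks like this sketch; whether the module exposes an equivalent is a separate question:

resource "aws_db_instance" "default" {
  # ... other arguments ...

  lifecycle {
    # keep a later removal/change of the snapshot from rolling the instance
    ignore_changes = [snapshot_identifier]
  }
}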
I tried not creating the RDS instance from a snapshot and it worked
so I think this might be a terraform bug
created an issue to follow up, https://github.com/hashicorp/terraform-provider-aws/issues/17037
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
2021-01-09
Since any 0.14 version can apply a 0.14 module, has everyone moved to this new version? I’m still lagging behind with 0.12.x for most of our modules but considering a migration now that it’s been out for a while
Referring to this
Terraform will now support reading and writing all compatible state files, even from future versions of Terraform. This means that users of Terraform 0.14.0 will be able to share state files with future Terraform versions until a new state file format version is needed. We have no plans to change the state file format at this time. (#26752)
As i understand it, i can apply a module with tf 0.14.6 and then try to apply it with 0.14.3 without an error thrown
if you bump to 0.12.30, you’ll even keep you backwards compatibility, i think. so you can easily test 0.14, and revert to 0.12.30
v0.12.30 0.12.30 (January 06, 2021) UPGRADE NOTES: The builtin provider’s terraform_remote_state data source no longer enforces Terraform version checks on the remote state file. This allows Terraform 0.12.30 to access remote state from future Terraform versions, up until a future incompatible state file version upgrade is required. (#26692)
or maybe not, that might only be relevant for the remote state data source, my bad
Ya i thought the same at first until i realized it was just the data source
Hey all! I’m using Terraform 0.14.3 (+Terragrunt), but I’m facing some issues with the terraform-aws-alb:0.26.0 module. When running terragrunt plan (which runs terraform init first), I’m getting the following error
Error: Unsupported Terraform Core version
on .terraform/modules/alb.access_logs.s3_bucket.this/versions.tf line 2, in terraform:
2: required_version = ">= 0.12.0, < 0.14.0"
...
Error: Unsupported Terraform Core version
on .terraform/modules/alb.access_logs.this/versions.tf line 2, in terraform:
2: required_version = ">= 0.12.0, < 0.14.0"
I just learned about the context.tf file and the issue might be coming from there. In the terraform-aws-alb:0.26.0 module, the null-label module is called with a version that supports Terraform 0.14+, like so
module "this" {
  source  = "cloudposse/label/null"
  version = "0.22.1" // requires Terraform >= 0.12.26
  ...
But in some underlying modules, for example terraform-aws-lb-s3-bucket:0.9.0, it seems to be called with a version that does not support Terraform 0.14+, like so
module "this" {
  source = "git::https://github.com/cloudposse/terraform-null-label.git?ref=tags/0.19.2"
  ...
I’m not too sure if I might be doing something wrong or if there’s a way to override the version in the underlying modules?
I managed to make it work with what seem to be minor changes. Here are the modifications I made:
• terraform-aws-lb-s3-bucket
# https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/master/main.tf#L29
# terraform-aws-s3-log-storage 0.15.0+ supports TF 0.14.0+
source = "git::https://github.com/cloudposse/terraform-aws-s3-log-storage.git?ref=tags/0.15.0"
# https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/master/context.tf#L22
source = "cloudposse/label/null"
version = "0.22.1"
• terraform-aws-alb
# https://github.com/cloudposse/terraform-aws-alb/blob/master/main.tf#L43
# Update the version to the latest one after applying the changes listed above
I did not run any deep tests tho, but now I can successfully run a plan
These PRs are related
• https://github.com/cloudposse/terraform-aws-lb-s3-bucket/pull/21
• https://github.com/cloudposse/terraform-aws-lb-s3-bucket/pull/27
Updated module terraform-aws-s3-log-storage tag to 0.14.0. The tag 0.13.1 causes error with Terraform 0.13: Error: Failed to query available provider packages Could not retrieve the list of availab…
what: Upgrade to support Terraform 0.14 and bring up to current Cloud Posse standard. why: Support Terraform 0.14
2021-01-10
Hi, is there a way to set memory/cpu to null for the modules cloudposse/ecs-container-definition/aws and cloudposse/ecs-alb-service-task/aws?
Hey Hao, are you running your tasks on EC2 instances or Fargate?
If you’re on Fargate, there’s no choice but to define the CPU/RAM for your task.
If your tasks are backed by EC2 instances, keep in mind that you could starve your OS if your tasks use all the resources.
Hey @Coco, yeah, I know this
now the container can run without a memory setting, so I’d like to pass null to cpu/memory
I also tried setting cpu/ram but failed with a 137 error, which should be some issue with resource constraints
you can pass soft and hard memory limits to ECS+EC2
can I pass the hard memory only, since the soft one is optional? I tried both memories but still got a 137 error.
it worked if I only set the hard memory manually for a test container definition
task_cpu and task_memory are not required to be passed when ECS+EC2 is used, but in the container definition memory is required, and you can just set it to 0 if you want
got it, let me give it a try
got this error now:
Error: ClientException: Invalid 'memory' setting for container 'test'.
with:
container_memory             = 0
container_memory_reservation = var.container_memory_reservation
and without the task’s memory and cpu set
do not define any cpu or memory in the task def
sorry, that is what I meant to say
that is how we set up our instances, so tasks use the available memory and cpu of the host
yes, I commented them out:
# task_memory = var.task_memory
# task_cpu    = var.task_cpu
so the containers can use available resources
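For reference, the rough shape this ends up as; treat this as a sketch using the variable names from this thread, not the module's documented contract:

module "container_definition" {
  source = "cloudposse/ecs-container-definition/aws"

  container_name               = "test"
  container_image              = var.container_image
  container_memory             = null # no hard limit at the container level
  container_memory_reservation = var.container_memory_reservation
}

# task_memory and task_cpu on the ecs-alb-service-task module are
# likewise left unset, so containers use what the EC2 host has available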
2021-01-11
I’m having an issue getting terraform-aws-service-control-policies to create a policy. Here is the output of my terraform run:
# module.service_control_policies.aws_organizations_policy.this[0] will be created
+ resource "aws_organizations_policy" "this" {
    + arn         = (known after apply)
    + content     = jsonencode(
          {
            + Statement = []
            + Version   = null
          }
      )
    + description = "Policy Staging OU SCP"
    + id          = (known after apply)
    + name        = "namespacetest-envtest-stagetest-nametest"
    + tags        = {
        + "Environment" = "envtest"
        + "Name"        = "namespacetest-envtest-stagetest-nametest"
        + "Namespace"   = "namespacetest"
        + "Stage"       = "stagetest"
      }
    + type        = "SERVICE_CONTROL_POLICY"
  }

# module.service_control_policies.aws_organizations_policy_attachment.this[0] will be created
+ resource "aws_organizations_policy_attachment" "this" {
    + id        = (known after apply)
    + policy_id = (known after apply)
    + target_id = "ou-<redacted>"
  }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.service_control_policies.aws_organizations_policy.this[0]: Creating...

Error: error creating Organizations Policy (namespacetest-envtest-stagetest-nametest): MalformedPolicyDocumentException: The provided policy document does not meet the requirements of the specified policy type.
Here is my simple policy yaml:
- sid: "deny_eks"
  effect: "Deny"
  actions:
    - "eks:*"
  resources:
    - "*"
+ content = jsonencode(
      {
        + Statement = []
        + Version   = null
      }
  )
Looks like you’re having an issue passing the policy to the aws_organizations_policy
resource
Can you paste your TF code here?
Did you implement it based on the example and use our catalog for testing:
Terraform module to provision Service Control Policies (SCP) for AWS Organizations, Organizational Units, and AWS accounts - cloudposse/terraform-aws-service-control-policies
Here is all of the relevant code. I didn't try it specifically with one of the example configs, but tried creating my own with a very simple change from one of the examples. I appreciate the replies and any help you can offer.
Here is the actual yaml I’m using
I figured it out. While going through the code I just pasted, I noticed that I used “map_config_local_base_path” instead of “list_config_local_base_path”. I’d changed those while trying to get 0.3.0 to work and forgot to change them back. The actual error was due to it not being able to find the yaml file at all. Thanks for the replies!
The actual error was due to it not being able to find the yaml file at all.
@Andriy Knysh (Cloud Posse) looks like we need a check for this?
…in the yaml config module
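For anyone hitting the same thing, a hedged sketch of the distinction (input names are from this thread and the underlying cloudposse/config/yaml module; paths are illustrative) - the catalog policy files are YAML lists, so the list_* inputs must be used:
module "yaml_config" {
  source = "cloudposse/config/yaml"

  # a list-style YAML file (like the catalog policies) needs the list_* inputs;
  # map_config_local_base_path would silently find nothing here
  list_config_local_base_path = "${path.module}/catalog"
  list_config_paths           = ["deny-eks.yaml"]

  context = module.this.context
}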
Hey guys, is there some doc somewhere about the context.tf and attributes
parameter in the modules? I’m a bit confused as to how they behave
I’m using the aws-alb
modules, but it looks like the Name tags on the S3 access logs do not match the bucket ID. I was trying to figure out if I could work around that using the attributes
parameter, but not too sure how it should be used
Hiii amazing folks, good morning
We are using the following repos in our aws infrastructure
• https://github.com/cloudposse/terraform-aws-s3-bucket.git
• https://github.com/cloudposse/terraform-aws-iam-s3-user.git
We are facing issues while upgrading to Terraform version 0.14.2
I see PR’s have already been raised in these repositories for the upgrade
When are these planned to be merged?
Thanks
Terraform module that creates an S3 bucket with an optional IAM user for external CI/CD systems - cloudposse/terraform-aws-s3-bucket
Terraform module to provision a basic IAM user with permissions to access S3 resources, e.g. to give the user read/write/delete access to the objects in an S3 bucket - cloudposse/terraform-aws-iam-…
@Maxim Mironenko (Cloud Posse) can you help with this
@Ankit Rathi please try these new releases:
<https://github.com/cloudposse/terraform-aws-s3-bucket/releases/tag/0.28.0>
and
<https://github.com/cloudposse/terraform-aws-iam-s3-user/releases/tag/0.13.0>
@Maxim Mironenko (Cloud Posse) @Erik Osterman (Cloud Posse) yeah, it's working for us now … thanks a lot for the amazing work
2021-01-12
Hi! Quick question - how do you work as a team on terraform/other IaC? We're working on a single dev stack, but sometimes two developers are adding features to the stack in the same week. Having two different branches on github is something we obviously want, but we can't really let them work simultaneously without one taking the other developer's changes. One option would be to have two stacks, but we are trying to find a different solution due to internal reasons.
Hello @Ofir Rabanian, on my side I have divided the code into smaller projects, with one terraform state for each. Of course, each project can read the other projects' resources.
Don't make the projects too small, though, as you have to maintain and update the terraform version for each one
Yup, this is a problem.
#atlantis solves it by locking the workspaces (not the same as terraform locks)
will bring up in #office-hours
I made a video about it and presented it at the previous HashiConf in October 2020 as a 15-minute quick talk. I haven't cut/encoded the video yet, but if the topic interests some people I can move forward and do it.
@Pierre-Yves very much so! would love to get a link.
Here’s a link to the recording: https://sweetops.slack.com/archives/CHDR1EWNA/p1610574702013100
New Zoom Recording from our Office Hours session on 2021-01-13 is now available.
We’ll post a “Cloud Posse Explains” video clip once it’s available (@Andy Miguel)
you can also find the office hours where hashicorp, spacelift, scalr, and env0 demoed their products on this episode: https://www.youtube.com/watch?v=4MLBpBqZmpM
and those presentations are also available here: https://www.youtube.com/playlist?list=PLhRztDM6UvndoGRk0h1L_4M9xjtjbu0Jb
Regarding module terraform-aws-service-control-policies: I'm trying to figure out how to use a rule based on actions NOT being something. For instance, here is an AWS-supplied rule for restricting regions: { "Version": "2012-10-17", "Statement": [ { "Sid": "RestrictRegion", "Effect": "Deny", "NotAction": [ "a4b:*", "budgets:*", "ce:*", …
You can see there is a “NotAction” statement. I tried doing this in the module with “notactions:”, but that didn’t work. I couldn’t find any examples of how to do this. Is it possible?
not_actions
should work the same way as actions
Awesome. Thanks!
note that the YAML config in the catalog https://github.com/cloudposse/terraform-aws-service-control-policies/tree/master/catalog
is Terraform-style YAML, not CloudFormation-style YAML
also keep in mind that NotAction
can be used only with effect Deny
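So the earlier deny_eks example, inverted with not_actions, would look something like this (a sketch - only the key changes, and per the note above it must be paired with effect Deny):
- sid: "deny_all_except_eks"
  effect: "Deny"
  not_actions:
    - "eks:*"
  resources:
    - "*"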
might be a cool tool… https://driftctl.com/2020/12/22/announcing-driftctl
driftctl is a free and open-source CLI for infrastructure developers, DevOps, SRE, and cloud practitioners, that tracks, analyzes, prioritizes, and warns of infrastructure drift
looks neat
dumb question: if we have an RDS instance provisioned via a snapshot using snapshot_identifier
, will subsequent applys re-restore that snapshot?
there was someone asking a similar question the other day
the answer is no, but there seems to be a bug in the TF provider
in an RDS cluster you can set the snapshot_identifier to null after the creation
but now you can clone databases so maybe you can clone db instances?
If you provision with a snapshot id in the terraform then you need to keep that id in the terraform, or you'll trigger a rollback or wipe. At least for us, when I tried changing the id it was going to recreate the instance, which I think would result in the data changing.
it was me lol
I believe it is a bug, so created an issue to follow up, https://github.com/hashicorp/terraform-provider-aws/issues/17037
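For reference, a common way to express "use the snapshot at creation, but never re-restore" is a lifecycle block; a rough sketch (resource values are illustrative, and see the provider issue linked above for the bug being tracked):
resource "aws_db_instance" "restored" {
  identifier          = "restored-db"
  instance_class      = "db.t3.medium"
  snapshot_identifier = "rds:mydb-2021-01-12" # consulted only at creation

  lifecycle {
    # don't plan a destroy/recreate if the snapshot id later changes
    # or is removed from the config
    ignore_changes = [snapshot_identifier]
  }
}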
2021-01-13
Hi there. A quick question about terraform-aws-vpc-peering
module. Does it work across regions? I got a "VPC id not found", and there's no typo
no, it does not support cross-region peering (it was created before AWS had cross-region support and was not updated)
Terraform module to provision a VPC peering across multiple VPCs in different accounts by using multiple providers - cloudposse/terraform-aws-vpc-peering-multi-account
you can use the requester and accepter in the same account, but in different regions
hi, need insights, on using terraform on aws eks.
which is better used in creating worker nodes? cloudformation stacks
or node groups
?
If you’re already using Terraform then you should use https://github.com/cloudposse/terraform-aws-eks-node-group
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
Or https://github.com/cloudposse/terraform-aws-eks-workers (haven’t used myself but similar to node-group)
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
right. thanks
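For reference, a minimal sketch of the node-group module usage (inputs abbreviated; names and values here are illustrative - check the module's README for the full set):
module "eks_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name   = module.eks_cluster.eks_cluster_id
  subnet_ids     = var.private_subnet_ids
  instance_types = ["t3.medium"]

  desired_size = 2
  min_size     = 1
  max_size     = 3

  context = module.this.context
}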
will discuss in #office-hours
is this session done?
@Erik Osterman (Cloud Posse)
Does anyone here use github actions in combination with terraform? I am particularly struggling with modelling an approval process in github actions
How to create an approval-based workflow with GitHub Actions
Thanks for sharing - taking a deeper look. Another thing that strikes me is sharing the plan artifact between plan and apply when they aren't done in the same run. Using artifacts, I'm a bit concerned about sensitive data leaking - I was thinking of encryption
Will discuss in #office-hours
Thanks for the insightful remarks during yesterday's office hours. Will take another look at the TACOS recording
I use terraform to provision the S3 bucket. I would like to create multiple keys in one bucket. For instance, in the bucket, my_bucket, I would like to have keys like “data”, “url”, “aps/core” and “aps/app”. Below is the sample code. Do I need to duplicate this code for each key?
resource "aws_s3_bucket_object" "create_folder" {
bucket = "my_bucket"
acl = var.acl
key = "data"
}
use for_each
yes, Andriy pointed out the best way to accomplish this, e.g.:
resource "aws_s3_bucket_object" "create_folder" {
for_each = {
data = "data"
url = "url"
}
bucket = "my_bucket"
acl = var.acl
key = each.value
}
Thank you very much!
How to output values created by for_each? The code below creates two S3 buckets. I would like to output the names of the buckets. The code below does not work.
module "s3_bucket_for_emr_logs" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "1.17.0"
for_each = toset(["${var.prefix}-emr-logs", "${var.prefix}-emr-logparser-logs"])
bucket = each.key
acl = var.acl
}
output "bucketname_emrs" {
value = module.s3_bucket_for_emr_logs[*].this_s3_bucket_id
}
the type of resource with for_each
is a map
values(module.s3_bucket_for_emr_logs)
Either just output the whole module object, module.s3_bucket_for_emr_logs
, or use for
to construct an object (list/map) of the attribute/s you want
will return a list of resources, from which you can get the attributes you need
values(module.s3_bucket_for_emr_logs)[*].this_s3_bucket_id
outputting the whole module like module.s3_bucket_for_emr_logs
will output a map of objects
the map’s keys will be values from toset(["${var.prefix}-emr-logs", "${var.prefix}-emr-logparser-logs"])
which should be similar to values(module.s3_bucket_for_emr_logs)[*].this_s3_bucket_id
yeah, i hadn’t seen the pattern like values(module.s3_bucket_for_emr_logs)[*].this_s3_bucket_id
before… i’ve just gotten in the habit of outputting the module and letting the user pick what they want
Got it. Thank you very much for your help.
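To summarize the two working patterns from this thread (module name from the question above):
# a list of bucket ids, via values() + splat
output "bucketname_emrs" {
  value = values(module.s3_bucket_for_emr_logs)[*].this_s3_bucket_id
}

# or a map keyed by the for_each keys (the bucket names here)
output "bucketname_emrs_map" {
  value = { for name, m in module.s3_bucket_for_emr_logs : name => m.this_s3_bucket_id }
}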
2021-01-14
Hi all, would anyone know how to create a node of instance_type "Fargate" with TF? I know how to create the EKS cluster, how to create the Fargate profile, etc., but I do not know how to create nodes of type Fargate. When I create a cluster on AWS with eksctl and the command-line option --fargate, it does exactly that: it creates nodes of instance_type Fargate. But in TF I can only pass EC2 instance types like t1.micro etc.
I think it’s done by setting the launch type, eg:
resource "aws_ecs_service" "service" {
name = "myservice"
cluster = var.ecs_cluster_id
task_definition = aws_ecs_task_definition.my_task.arn
launch_type = "FARGATE"
platform_version = "1.4.0"
...
Hmm but the eksctl create-cluster command immediately creates these nodes, even before the first workload is deployed
oh sorry, didnt see you’re using eks.
i use ecs directly
I ran into the same issue. you need the namespace and other labels to match the fargate profile - let me pull example files
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "13.2.1"
tags = var.tags
cluster_name = var.cluster_name
cluster_version = var.cluster_version
map_roles = var.map_roles
map_users = var.map_users
map_accounts = var.map_accounts
subnets = data.aws_subnet_ids.private.ids
vpc_id = data.aws_vpc.this.id
fargate_profiles = {
default = {
namespace = var.namespace
},
kube-system = {
namespace = "kube-system"
},
kubernetes-dashboard = {
namespace = "kubernetes-dashboard"
}
}
}
I'm using the terraform-aws-eks module - notice how each profile points to a namespace. If you want more than one namespace in a profile, I think you need to use the console to do it; I haven't tested that use case yet
fargate_profiles = {
example = {
namespace = "default"
# Kubernetes labels for selection
# labels = {
# Environment = "test"
# GithubRepo = "terraform-aws-eks"
# GithubOrg = "terraform-aws-modules"
# }
tags = {
Owner = "test"
}
}
}
that is the first part, 2nd part in yaml files, let me pull something
example for k8s-dashboard.yml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
notice the k8s-app label matches the profile name in the .tf file - but where do I add this k8s-app? I can't remember
On the Configure pods selection page, enter the following information and choose Next.
For Namespace, enter a namespace to match for pods, such as kube-system or default.
(Optional) Add Kubernetes labels to the selector that pods in the specified namespace must have to match the selector. For example, you could add the label infrastructure: fargate to the selector so that only pods in the specified namespace that also have the infrastructure: fargate Kubernetes label match the selector.
so back to your question: you don't create Fargate nodes yourself, because it's a managed service; you only create Fargate pods inside these nodes.
This section describes some of the unique pod configuration details for running Kubernetes pods on AWS Fargate.
I hope this give you some light
Thanks Mohammed for your input.. I managed to run fargate pods on the cluster I created with eksctl --fargate, but the thing I do not get is: when I run the cluster creation with eksctl it DOES create nodes of type "Fargate" (see the screenshot I posted at the beginning of the thread) - but when I use the terraform eks module, it does not create nodes of this type. And if I create a cluster without node_groups and with a fargate profile, I have no nodes to run the pods on. So I am trying to figure out how I can get the same result with terraform as with eksctl: a running cluster with, let's say, 2 nodes of type Fargate…
I see, no nodes will be available unless you deploy some yaml files
I saw the same thing - when the pods start being created, fargate creates the underlying nodes for you.
think of it the same as ecs tasks: you can create an ecs service with zero tasks. the same applies here - you can create eks with a fargate compute backend and zero nodes, since no pods are running.
I see.. but if I just use TF to create the cluster and the profile, then I would see this after creation completes:
which I thought was kinda odd
Yes totally fine for the last two
the coredns needs to be patched to work with fargate profiles
do one thing: deploy a sample k8s yaml file there (matching the ^^ notes)
and you will see nodes
ok will do that - and how do I patch coredns as you mentioned?
sorry and what do you mean by match the ^^ notes
https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html check section (Optional) Update CoreDNS
This topic helps you to get started running pods on AWS Fargate with your Amazon EKS cluster.
^^ means upper messages
here a sample dashboard
Ok great.. thanks - so that would mean, with TF: I create a fargate profile for my namespace but also another one for coredns and then use kubectl to patch coredns. Then once I spin up the first pods I will see the actual fargate nodes appearing
exactly
cool thanks a lot - I will give that a go
I created the cluster like this with TF:
fargate_profiles = {
  default = {
    namespace = local.namespace
  },
  kube-system = {
    namespace = "kube-system"
  },
  python-web = {
    namespace = "python-web"
  },
  kubernetes-dashboard = {
    namespace = "kubernetes-dashboard"
  }
}
Then I ran the patch coredns command
and then I deployed a pod
it works - the node is now visible too of type Fargate
the only thing I see is that 0/2 pods is still mentioned for coredns .. guess that is wrong right?
@Mohammed Yahya
let me pull something
but awesome already to see the node appearing!
I feel you
### Patch coredns - DONE
kubectl patch deployment coredns -n kube-system --type json \
-p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
# OR
kubectl rollout restart deployment coredns -n kube-system
use export KUBECONFIG="${PWD}/kubeconfig_XXX"
hmm the first command I had already executed, didn’t work, second one is having no effect either
I see I remember something missing from my documentation, let me pull something else
can you screenshot the labels of the coredns pods
sure
also run kubectl logs coredns-xxxx
the last 2 ones are the ones which are up since I executed the restart command you proposed
oh
they are suddenly ready…
yes restart take time
fantastic
awesome, thanks so much for your help
anytime
remember to update your yaml files with labels and namespaces before deployment or use default namespace
that’s the trick
yes, the namespace must match a fargate_profile namespace
and is the label important too?
Yes - don't ask me why, but it did not work for me without the label
this is fairly new, so expect some odd behaviors
but much cheaper than ec2 slave nodes
and you can scale up and down on the slave-node level
Am I missing something, or is terraform-aws-ecs-cloudwatch-sns-alarms
indeed not adding any tags to the alarms it creates? See https://github.com/cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms/blob/ad8a6519b757bd497db8d0a0abf7403ebb2b9216/main.tf#L48
variables.tf does specify that tags can be added.
BTW, tags on alarms are not visible in the AWS console but they do appear on the cli.
Terraform module to create CloudWatch Alarms on ECS Service level metrics. - cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms
Looks like you’re correct to me. Feel free to put up a PR and mention it in #pr-reviews — simple things like that are quick to get merged if you post about your PR there.
Ya, we want to make sure we tag all the resources. Sometimes it slips through.
Adding tags should be a one line fix like this: https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/iam.tf#L19
Terraform module for provisioning an EKS cluster. Contribute to cloudposse/terraform-aws-eks-cluster development by creating an account on GitHub.
I’d create the PR if I understood the mechanism you’re intending with [context.tf](http://context.tf)
which exists in the eks-repo, but not in the one i was referring to
Hrmm, looks like terraform-aws-ecs-cloudwatch-sns-alarms
was updated to use [context.tf](http://context.tf)
pattern
Please upvote this + https://github.com/hashicorp/terraform-provider-aws/issues/16030
I don't understand why something as popular as Terraform is moving kind of slowly on these PRs
They do this with odd things and it's very frustrating. I'm approaching a year of waiting for Amplify support.
https://github.com/hashicorp/terraform-provider-aws/issues/6917 https://github.com/hashicorp/terraform-provider-aws/pull/15966
Discovered this week that AWS Macie support is also behind. They only have Macie Classic, and Classic isn't available anymore in new accounts. So they are waiting for more interest.
I can see how they struggle - I bet a team of 30 engineers wouldn't be enough to manage just the AWS provider, let alone any others.
i don’t see how they keep up without switching to code generation to create basic resources and data sources that map 1:1 with the golang sdk
Yea, I think that’s the only way. I think pulumi has started doing that
Hello,
I have a question regarding the terraform-aws-eks-node-group (v0.16.0) module. When I set launch_template_disk_encryption_enabled
to true
is it supposed to encrypt the default managed node group?
I’m using a m5.xlarge instance type.
I see that it creates 2 launch templates. The first launch template is attached to auto scaling group. The second template has encryption options but it isn’t in use.
Hey, is there a module for creating security_groups in AWS?
There is a terraform-aws-modules one — https://github.com/terraform-aws-modules/terraform-aws-security-group
Terraform module which creates EC2-VPC security groups on AWS - terraform-aws-modules/terraform-aws-security-group
Ahh, thanks for that! I’m not sure why that didn’t come up in my search.
Would love to see this merged soon. https://github.com/cloudposse/terraform-aws-alb/pull/68
Terraform 0.14 upgrade @maximmi (#68) what Upgrade to support Terraform 0.14 and bring up to current Cloud Posse standard why Support Terraform 0.14
Thank you!!
Question on the module https://github.com/cloudposse/terraform-aws-emr-cluster - I use this module to provision an EMR cluster. Below are the outputs:
cluster_master_host =
cluster_master_public_dns = ip-50-20-1-177.us-west-2.compute.internal
cluster_name = emr-test
ssh -i Dev-Keys.pem [email protected]
ssh: Could not resolve hostname ip-50-20-1-177.us-west-2.compute.internal: Name or service not known
Questions:
- Why is the output of cluster_master_host empty?
- I am not able to log in to ip-50-20-1-177.us-west-2.compute.internal. It complains: "Could not resolve hostname".
Below is the source code:
module "emr_cluster" {
source = "git::<https://github.com/cloudposse/terraform-aws-emr-cluster.git?ref=tags/0.16.0>"
master_allowed_security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
slave_allowed_security_groups = [data.terraform_remote_state.vpc.outputs.default_security_group_id]
region = var.region
vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id
subnet_id = data.terraform_remote_state.vpc.outputs.private_subnets[0]
route_table_id = data.terraform_remote_state.vpc.outputs.private_route_table_ids[0]
subnet_type = "private"
ebs_root_volume_size = var.ebs_root_volume_size
visible_to_all_users = var.visible_to_all_users
release_label = var.release_label
applications = var.applications
configurations_json = var.configurations_json
core_instance_group_instance_type = var.core_instance_group_instance_type
core_instance_group_instance_count = var.core_instance_group_instance_count
core_instance_group_ebs_size = var.core_instance_group_ebs_size
core_instance_group_ebs_type = var.core_instance_group_ebs_type
core_instance_group_ebs_volumes_per_instance = var.core_instance_group_ebs_volumes_per_instance
master_instance_group_instance_type = var.master_instance_group_instance_type
master_instance_group_instance_count = var.master_instance_group_instance_count
master_instance_group_ebs_size = var.master_instance_group_ebs_size
master_instance_group_ebs_type = var.master_instance_group_ebs_type
master_instance_group_ebs_volumes_per_instance = var.master_instance_group_ebs_volumes_per_instance
create_task_instance_group = var.create_task_instance_group
log_uri = format("<s3://%s/%s>", data.terraform_remote_state.s3.outputs.bucketname_emrs[1], "emr-logs/")
key_name = "Dev-Keys"
context = module.this.context
}
Terraform module to provision an Elastic MapReduce (EMR) cluster on AWS - cloudposse/terraform-aws-emr-cluster
cluster_master_host
is empty b/c it’s the output of this https://github.com/cloudposse/terraform-aws-emr-cluster/blob/master/main.tf#L499
you need to specify var.zone_id
in Route53 so the module would create a record in the DNS zone and point it to cluster_master_public_dns
- You created the cluster in a private subnet
subnet_id = data.terraform_remote_state.vpc.outputs.private_subnets[0]
you can’t access it from the outside of the VPC
you can use a bastion host to SSH to it, or a VPN
Thank you.
*New Big Release* >> More AWS SSO support, GitHub v2 support in CodePipeline, long-awaited API Gateway fixes. terraform-provider-aws 3.24.0 (January 14, 2021) FEATURES
• New Data Source: aws_api_gateway_domain_name
(#12489)
• New Data Source: aws_identitystore_group
(#15322)
• New Data Source: aws_identitystore_user
(#15322)
• New Resource: aws_cloudwatch_composite_alarm
(#15023)
• New Resource: aws_fms_policy
(#9594)
• New Resource: aws_route53_resolver_dnssec_config
(#17012)
• New Resource: aws_sagemaker_domain
(#16077)
• New Resource: … (#15322)
https://github.com/hashicorp/terraform-provider-aws/blob/v3.24.0/CHANGELOG.md#3240-january-14-2021
Terraform AWS provider. Contribute to hashicorp/terraform-provider-aws development by creating an account on GitHub.
Just released a simple terraform-aws-sso
module to create an SSO Permission Set, attach a managed policy, and attach an existing Group to a target account.
https://github.com/mhmdio/terraform-aws-sso
Contributions are most welcome; it's still at v0.1.1.
2021-01-15
Is aws_ssoadmin_permission_set_inline_policy
ready yet?
yes
resource "aws_ssoadmin_permission_set_inline_policy" "this" {
inline_policy = var.inline_policy
instance_arn = aws_ssoadmin_permission_set.this.instance_arn
permission_set_arn = aws_ssoadmin_permission_set.this.arn
}
Annoying that aws sso doesn’t allow reusing iam managed policies or allowing multiple policy attachments
True, no multiple attachments - it's a one-to-one mapping. I guess it makes sense to them
I hope they fix that this year
Hello, did someone succeed in using the terraform-aws-eks-cluster
in AWS China? I have tried and it fails when it tries to create the IAM role, with this error An error occurred (InvalidClientTokenId) when calling the CreateRole operation: The security token included in the request is invalid.
I get the same error if i try to do the same with the CLI:
❯ aws iam create-role \
--role-name myAmazonEKSClusterRole \
--assume-role-policy-document file://"cluster-role-trust-policy.json"
An error occurred (InvalidClientTokenId) when calling the CreateRole operation: The security token included in the request is invalid
From the web console I can create the role without problems. Any idea?
“The security token included in the request is invalid” prob means that you don’t have permissions to do that. When you do it from the console, you are logged in. When you do it from the command line, you use some AWS profile with some config (access keys or roles). Check the AWS profiles
i am using the same user in terraform, cli and web console
keeping it simple since it’s the first time i use the china partition
web console login uses user/password (w/o MFA), while cli/terraform use the key you created under the IAM user tab. put the key into your aws profile or an env variable.
i did that. I was able to create a vpc and other things with the same user/profile
it’s just the creation of this role that doesn’t succeed
are you able to create IAM role via web console using the same user ?
yes
try to attach AdministratorAccess
policy directly to your terraform user
https://github.com/cloudposse/terraform-aws-eks-cluster/blob/470a2237a6b5678fbfdbd5173e4f29db4d2396da/iam.tf#L15. this is failing
@Qing Jiang did you manage to have the eks module working in the chinese partition?
oh, I have no experience with the chinese partition of aws. my working region is mostly us-west-2
I solved it. I was using aws-vault, I needed to use the --no-session
option to make it work. In the “regular” AWS it was not needed so that created a bit of confusion
Hi, does anyone know how a deployed alb-ingress-controller's load balancer can be removed? When calling terraform destroy on an EKS cluster, the associated VPC cannot be destroyed because the load balancer created by deploying a helm chart (which included the alb ingress controller) is still there. This is not particularly great, as it defeats the benefit of cleaning up resources with the tf destroy command…
Ideally you’d delete the helm chart/app/yaml that defines the Ingress
. The ALB Ingress Controller would see that it was deleted and it would delete the ALB for you.
The workflow for an ideal destroy should look like this:
• delete all apps, except alb-ingress-controller
, external-dns
, and maybe cluster-autoscaler
• Wait a while for the ALBs, Route53 records, and some EC2s to be deleted
• Delete the remaining apps
• Run terraform destroy
Agree with @Vlad Ionescu (he/him) - that destroying the resources in the cluster before destroying the cluster is the best way.
does anyone know a decent tool to delete everything in an AWS account back to the initial account setup?
Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke
Exactly what I was about to suggest. I used it, works well. Has a few kinks to figure out.
cool app. saved for later thanks
@Yoni Leitersdorf (Indeni Cloudrail) anything specifically i should know?
i basically want to leave the account there as well as the default admin role and policy
cloud-nuke https://github.com/gruntwork-io/cloud-nuke
A tool for cleaning up your cloud accounts by nuking (deleting) all resources within it - gruntwork-io/cloud-nuke
Here is what I posted in our internal slack:
So notice that it's important to exclude certain roles. For example, if I had nuked the SSO roles, I would have locked myself out of the account.
Also, using aws-nuke with aws-vault is a bit odd, took some time to figure out. See my message screenshot above.
@Yoni Leitersdorf (Indeni Cloudrail) thank for sharing, looks like a perfect tool. about to use it to nuke every asset not created by terraform in our env ..
By default, aws-nuke will use dry-run. Watch its results carefully to see if you missed something in the filters.
If you’re looking to run this as part of a schedule / CI workflow:
- Cloud Posse way is via GH Actions: https://github.com/cloudposse/testing.cloudposse.co/blob/master/.github/aws-nuke.yaml
- My solution (probably wouldn’t recommend): https://github.com/masterpointio/terraform-aws-nuke-bomber
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
A Terraform module to create a bomber which nukes your cloud environment on a schedule - masterpointio/terraform-aws-nuke-bomber
cloud-nuke, as @Mohammed Yahya says, is very good
i don't really want to have to run terraform destroy
on X repos if i can help it
Has anyone ever dealt with a time when a tf PR was approved but has been sitting for months? Was it possible to nudge the maintainers to merge it? It’s not my PR, but it’s something I’m looking to implement for a client. Specifically https://github.com/hashicorp/terraform-provider-aws/pull/14974
Assuming you've plus-one'd it? Reviewed the code and then commented with how it would work for you? There are a lot of open PRs in that repo, including one about rabbit which we want to use. I guess it's either merge it locally into a fork, or maybe even try your TAM? I don't know if anyone else has any ideas.
Yeah, I did haha. Someone I work with mentioned pulling it in locally, but it would be pulling it into terraform managed by gitlab, which could be a bit dodgy. This resource is a very minor piece at least, and could even be replaced with an external data source that uses an SDK to deploy the resource.
Yeah, tough isn't it. Big job for the maintainers too. Good luck :)
This got merged finally . I also posted on the terraform community forum, perhaps a project manager saw it? Could also be coincidence
is a VPC name unique across regions in the same account?
e.g. can i have a dev
VPC in Ireland and another dev
VPC in Singapore within the same account?
name as Name tag?
no as in the name of the VPC itself
I have vpc with the same name in different regions
The name of a vpc is the Name tag
Is it not?
we are saying the same thing
yeh we are
can you have the same name for a VPC in different regions in the same account?
but yes in different region is fine
perfect thanks
2021-01-16
I created a couple of terraform issues today. I didn't see them written up before, and I'd like to get community feedback on them.
• Pass in all resource arguments with a single map
• Override a module resource from outside a module reference Thanks!
It would be nice to pass in a single argument to a resource using a map. This would allow modules to create generic inputs to add additional arguments to existing resources without having to update…
Instead of having to wait for a module owner to update a resource in their tf module and instead of having to fork the module itself to make those changes, it would be nice to override the module r…
for the second, have you tried override files? https://www.terraform.io/docs/configuration/override.html
Override files allow additional settings to be merged into existing configuration objects.
Would that work with a specific module's resource though?
i don’t understand the question
I see so similar to this
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
Maybe?
I copied that from the datadog module
Those are interesting propositions, but I wonder about managing a module's resources from outside it - I think that would be a core change in terraform
Ya it would be a core change. I spoke to one of the tf devs and they said those two feature requests would defeat the purpose of a module being an isolated item
It's too bad, because at times I find a module covers like 90% of my use case and just 10% is wrong, which renders the module unusable
yep
fwiw, updated a decent number of tf states from 0.13.5 to 0.14.4 over the last week… no significant issues, but a few things took a little while to understand:
• sensitive values may be marked by the provider, i.e. an iam access/secret key. you cannot for_each
over objects containing these values, but you can for_each
over non-sensitive keys and index into the object (see the sketch after this list). any outputs containing provider-marked sensitive values must also be marked sensitive
• some of the output handling is a little odd, particularly with conditional resources/modules and accordingly conditional outputs. in some places, outputting null
as the false condition caused a persistent diff. worked fine in tf 0.13.5, but not in tf 0.14.4. changing it to ""
fixed it :man-shrugging::skin-tone-2:
• the workflow around the new lock file, .terraform.lock.hcl
, is quite cumbersome. it really clutters up the repo when you have a lot of root modules, and means you have to init
each root somehow to generate the file, and commit it, anytime you want to update providers? no thanks! but, unfortunately, there is no way to disable it. the file is mandatory for a plan/apply. i’m using terraform-bundle already, setting up the plugin-cache in advance, restricting versions, and restricting network connectivity in CI. so i thought i could just remove the file after init
, but no dice. you can remove it after apply
, and don’t have to commit it (but that means CI will need to generate it)
• if you are updating from 0.12, you’ll likely want to (or need to) first update to tf 0.13 for the new provider/registry syntax, to get the old syntax out of your tf 0.12 tfstate
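e.g., for the sensitive-values bullet above, the keys trick looks roughly like this (the local map and the SSM resource are made-up illustrations, not from the original thread):
# local.users values contain provider-marked sensitive attributes, so
# for_each over the map itself fails in 0.14; iterating the keys is fine
resource "aws_ssm_parameter" "user_secret" {
  for_each = toset(keys(local.users))

  name  = "/users/${each.key}/secret"
  type  = "SecureString"
  value = local.users[each.key].secret_access_key
}

# any output exposing such values must itself be marked sensitive
output "user_secrets" {
  value     = { for k, u in local.users : k => u.secret_access_key }
  sensitive = true
}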
I also struggled quite a bit with the .terraform.lock.hcl
- somehow for us it also sometimes had changes after a colleague pushed (after init+apply) and I was trying to develop further - not sure what that was related to.
Also wondering how far along renovate / dependabot are in terms of re-creating the file on a module bump for a root module
good question on dependency update tooling. they are not far along at all, i expect… though it looks like renovate is at least talking about it… https://github.com/renovatebot/renovate/issues/7895
What would you like Renovate to be able to do? Since terraform will write a lockfile (.terraform.lock.hcl) since v0.14. so if dependencies are updated by renovate, we need to update the lockfile to…
dependabot still lacks hcl2 support, so, probably much further off for that one
Do you have an AWS account and want to deploy your static website in less than 5 minutes? I have published a terraform module to do that for you - fast and quick, and it doesn't require any complicated stuff from you. Check it out and let me know what you think: https://www.dailytask.co/task/deploy-you-static-website-in-s3-and-cloudfront-using-terraform-ahmed-zidan
Deploy you static website in S3 and cloudfront using terraform written by Ahmed Zidan
like this one : https://github.com/cloudposse/terraform-aws-cloudfront-s3-cdn
Terraform module to easily provision CloudFront CDN backed by an S3 origin - cloudposse/terraform-aws-cloudfront-s3-cdn
2021-01-17
Hi all, I am also facing the same issue. Have you found a solution for this scenario? https://github.com/cloudposse/terraform-aws-elastic-beanstalk-environment/issues/47
Hi Guys, Thank you for the great work you made available. Currently I am bumped to this particularly use case: Using Multi-Docker environments which are reachable through an Application Loadbalance…
does anyone know if you can work out in terraform whether a map is empty?
i have a map like this
yellowfin = {
instances = {}
schedule = {
down = {}
up = {}
}
}
}
if its empty like this I don’t want to execute a module
You can test == {}
count = var.worker_groups["yellowfin"] == {} : 0 ? 1
depends on how you define it… if it’s an object with required attributes, might need to be more pedantic:
yellowfin.instances == {} && yellowfin.schedule.down == {} && yellowfin.schedule.up == {}
you could also check against null
instead…
var.worker_groups.yellowfin == null ? 0 : 1
the null
check works even for objects with required attrs…
Error: Missing newline after argument
on eks-workers-yellowfin.tf line 4, in module "yellowfin_worker_node_group":
4: count = var.worker_groups["yellowfin"] == null : 0 ? 1
An argument definition must end with a newline.
i got the order wrong
that did not seem to work
@loren
Yellowfin is a var or a local?
Don’t you have to use var.Yellowfin or local.yellowfin?
i have a map of maps
so there's a yellowfin = {}
inside worker_groups
it specifies the nodes for each type of worker node, as we have different ones
like instance type, min and max numbers
but the yellowfin map can be empty as we only need these nodes in certain situations
Ahhhh ok
before i used to do this with an enabled
map where yellowfin = 1 or 0
And if you test for {} instead of null?
module "yellowfin_worker_node_group" {
source = "git::[email protected]:redacted/tf-modules.git//modules/eks-node-group?ref=bf46d92"
count = var.worker_groups["yellowfin"] == {} ? 0 : 1
this does not work
the module still gets executed based on the output of the plan
== null
does that same thing
Mmmm
yeh its weird
i am not sure if its because i am doing a map inside of map or something
any ideas as i am running on empty
i worked it out
yellowfin is not {}
but yellowfin.instances == {}
will work
I was dealing with something similar to this yesterday
is there a len function for a map?
Right, so like I first said… https://sweetops.slack.com/archives/CB6GHNLG0/p1610915155075300?thread_ts=1610914335.074800&cid=CB6GHNLG0
depends on how you define it… if it’s an object with required attributes, might need to be more pedantic:
yellowfin.instances == {} && yellowfin.schedule.down == {} && yellowfin.schedule.up == {}
You basically have to define what “empty” means for your object
You can null an object and test for that, but it means you need to set the var accordingly, e.g. worker_groups = { yellowfin = null }
Hello @Steve Wade (swade1987), here is the code I use to clean a config map of its empty values - I create a new map without the empty items:
cleaned_config_map = {
  for srv, cfg in local.config_map : srv => cfg
  if cfg != {}
}
then iterate over cleaned_config_map with for_each and you'll only have the map entries with non-empty values. I didn't find a way to do it without a temporary map.
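(And on the len question above: yes, length() works on maps, so "empty" can also be tested directly - a sketch reusing the names from this thread:)
module "yellowfin_worker_node_group" {
  source = "git::[email protected]:redacted/tf-modules.git//modules/eks-node-group?ref=bf46d92"

  # zero instances defined == module disabled
  count = length(var.worker_groups["yellowfin"].instances) == 0 ? 0 : 1

  # ...
}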
Checking out module: https://github.com/cloudposse/terraform-aws-elasticache-redis - I'm able to correctly create a cluster when not in clustering mode, but I get the following errors when I change to clustering mode:
Error: Invalid function argument
on .terraform/modules/redis/main.tf line 169, in module "dns":
169: records = var.cluster_mode_enabled ? [join("", aws_elasticache_replication_group.default.*.configuration_endpoint_address)] : [join("", aws_elasticache_replication_group.default.*.primary_endpoint_address)]
|----------------
| aws_elasticache_replication_group.default is tuple with 1 element
Invalid value for "lists" parameter: element 0 is null; cannot concatenate
null values.
Error: Invalid function argument
on .terraform/modules/redis/outputs.tf line 17, in output "endpoint":
17: value = var.cluster_mode_enabled ? join("", aws_elasticache_replication_group.default.*.configuration_endpoint_address) : join("", aws_elasticache_replication_group.default.*.primary_endpoint_address)
|----------------
| aws_elasticache_replication_group.default is tuple with 1 element
Invalid value for "lists" parameter: element 0 is null; cannot concatenate
null values.
with the following module configuration
module "redis" {
source = "cloudposse/elasticache-redis/aws"
version = "0.27.3"
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1a"] #1a, 1b, 1c
# namespace = "es.dev.spotlightnews.us"
stage = local.env
name = local.svc_name
# zone_id = var.zone_id
vpc_id = data.terraform_remote_state.network.outputs.vpc_id
subnets = data.terraform_remote_state.network.outputs.private_subnet_ids_list #slice?
allowed_cidr_blocks = ["10.0.0.0/16"] ## get from private subnets
cluster_mode_enabled = true
cluster_mode_num_node_groups = 1
cluster_mode_replicas_per_node_group = 1
instance_type = "cache.t3.small"
apply_immediately = true
automatic_failover_enabled = false
engine_version = "5.0.6"
family = "redis5.0"
at_rest_encryption_enabled = false
transit_encryption_enabled = true
auth_token = "1234567890asdfghjkl"
# parameter = [{}]
}
please let me know if you would prefer I open a bug report for this, or if it's just incorrect usage on my part. Thanks!
Terraform module to provision an ElastiCache Redis Cluster - cloudposse/terraform-aws-elasticache-redis
is there a way to have a before hook globally for terragrunt ?
use this https://terragrunt.gruntwork.io/docs/features/before-and-after-hooks/ - if it doesn't cover your use case, use a makefile and call terragrunt from within it
Learn how to execute custom code before or after running Terraform.
You can put the before_hook in your parent config, and all the terragrunt configs inheriting from that parent will pick it up
But if you want a before_hook across completely separate sets of terragrunt configs, i.e. no shared parent config, you might have to get creative with the function read_terragrunt_config()….
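A sketch of the parent-config approach (hook contents illustrative):
# parent terragrunt.hcl -- every child config that does
#   include { path = find_in_parent_folders() }
# inherits this hook
terraform {
  before_hook "fmt_check" {
    commands = ["plan", "apply"]
    execute  = ["terraform", "fmt", "-check"]
  }
}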
2021-01-18
Is there a way for Terraform not to create a new revision if the task already exists in the ECS cluster, but to use the latest existing revision - or to create it if it doesn't exist at all? Thanks!
use the lifecycle block to ignore changes to the task revision
Thanks, I’m already using those. I’ll try without creation of task-definition, so it will use latest revision for that family.
So:
task_definition = try(aws_ecs_task_definition.task.*.arn[count.index], var.family)
if the task definition is defined in terraform it will be used when creating the service; if not, the latest revision in that family will be used
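For completeness, the lifecycle approach suggested first in this thread would look roughly like this (resource names assumed):
resource "aws_ecs_service" "service" {
  name            = "my-service"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.task.arn
  desired_count   = 1

  lifecycle {
    # keep whatever revision is currently live instead of forcing a new one
    ignore_changes = [task_definition]
  }
}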
@Alex Jurkiewicz Some updates here: https://github.com/cloudposse/terraform-null-label/pull/118
Truncated forms of id_full which are always available. This is useful when you want to use the same label for several resources with different length restrictions. Closes #117.
@loren since you are the for_each/maps Guru do you have an idea why that does not work ? https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/pull/43
what Add Ability to customize alarms for autoscaling why sometimes EC2 instances need to scale base on ECS, ALB, or another type of cloudwatch alarms
the error
Error Trace: apply.go:15
examples_complete_test.go:26
Error: Received unexpected error:
FatalError{Underlying: error while running command: exit status 1;
Error: Invalid for_each argument
on ../../autoscaling.tf line 63, in resource "aws_cloudwatch_metric_alarm" "all_alarms":
63: for_each = module.this.enabled ? local.all_alarms : null
The "for_each" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the for_each depends on.
}
Test: TestExamplesComplete
This is a known issue with for_each: if the items you are iterating over don't yet exist at plan time, it won't work
that sucks
is there a workaround?
first thing i see is that the “false” condition is invalid… can’t for_each over null
. change that to {}
that error generally occurs only when the for_each keys are unknown. it is fine if the values of the maps are unknown, that’s generally the whole point after all
the keys i see are cpu_high
and cpu_low
, which are certainly known, so i think i need to look at the test config…
though the test config hasn’t been updated, so it’s just using the default value for custom_alarms… hmm…
it could be the way module.this.enabled
is used… need to trace through how that is set, to see if it relies exclusively on user input or static locals or data sources that themselves do not depend on resources…
i would say first try just:
for_each = local.all_alarms
and see if that works
If I try this in tf 0.13.5 on my local ( same code as the repo) it works
but on the tests it does not
easy then, bump the requirement to tf 0.13
HAHAHAH
there were tons of changes in for_each between 0.12 and 0.13, it got a lot more flexible
mmmmmm that could be then
personally i'd say the null label and context stuff is trying to be a bit too smart/magic, and it's overwhelming the ability of 0.12 to process the for_each
give this a try to confirm it is the enabled
part of the logic:
for_each = local.all_alarms
alternatively, another way to construct your for_each:
for_each = { for k, v in local.all_alarms : k => v if module.this.enabled }
i prefer your first approach as it’s less loopy, but this might give 0.12 enough of a clue about how to construct the map
testing
something else to think about , is what if the user doesn’t want the default_alarms at all?
welllllllllllllll they can……..
the module already had some ( I’m keeping backwards compatibility) but I could default that too
so that failed too… looks like you’re bumping into some limitation in tf 0.12 and how it evaluates for_each expressions, that is fixed in a later version :(
yep
2021-01-19
woohoo! @Matt Gowie added support for synthetics to our datadog module. https://github.com/cloudposse/terraform-datadog-monitor/pull/25
what Adds support for creating DataDog Synthetic Tests why Synthetic Tests are very similar to monitors and they end up creating a monitor under the hood, so it's useful to create them via t…
So this just happened to me: I was working on a module for a project using a dev environment, but a co-worker was working in another branch, and when she ran TF apply I suddenly got a Your query returned no results
and thought I broke something (we use atlantis for other projects so this doesn't happen there). Is there a way to check whether the state was changed (like doing a git pull)? (Keep in mind in this case it was a data. resource, so it's not going to be recreated)
What about the workspace locking in atlantis? Did that not kick in?
for this project we are not in atlantis yet
the problem is that a data resource tag filter was changed
so if I run plan
it just complained about the query
the data resource name did not change
this looks like a bug to me
did anyone share this before? https://github.com/nozaq/terraform-aws-secure-baseline - anyone using it?
Terraform module to set up your AWS account with the secure baseline configuration based on CIS Amazon Web Services Foundations and AWS Foundational Security Best Practices. - nozaq/terraform-aws-s…
ive pulled pieces out of it and used them
No, but I wonder how much that first configuration will cost if you run everything by default LOL
At a quick glance it’s very similar to what I’ve seen
yea im using the alarm, analyzer and ebs-baseline right now from it
interesting
I've used it multiple times and contributed to it as well. If you use this and integrate the new AWS SSO resources, you get a landing zone for a small-to-medium business
For a current project I also started to use bits and pieces from it. The VPC baseline and ebs parts
Mostly as I did not want to use aws-nuke or the like, and wanted it in terraform to get started from scratch
Question from my side: which limitations did you run into when using it, and how did you extend it to overcome them?
Is there a reason to implement Atlantis when using GitHub and GitHub Actions, now that you can create your own comment tool in Actions and simply apply there as well? Just curious.
here’s the recording: https://youtu.be/R8PkldStQKw
The main problem is that implementing workflows for terraform the right way (tm) is non-trivial, and most don't do it correctly
We also did a special edition office hours on TACOS: https://www.youtube.com/watch?v=4MLBpBqZmpM check it out
Will this affect Terraform ? https://aws.amazon.com/about-aws/whats-new/2021/01/aws-sdk-for-go-version-2-now-generally-available/
not immediately, i’d expect. aws tends to support multiple versions for quite a while.
looks like v2 simplifies the credential/config handling quite a bit, which will be nice. that’s always been hard for developers to implement well
2021-01-20
Hi, I was trying to use the cloudposse/terraform-aws-s3-bucket module but was getting this error:
Error: Error creating IAM User my-non-prod-deployment-artefacts: InvalidInput: Duplicate tag keys found. Please note that Tag keys are case insensitive.
I think it's because it tries to set two tags: Environment
(one of my company's default tags) and environment
as set by the module. Is there a workaround for this?
In the end I got this working by setting this to blank:
environment = ""
v0.14.5 (January 20, 2021) ENHANCEMENTS: backend/pg: The Postgres backend now supports the "scram-sha-256" authentication method. (#26886) BUG FIXES: cli: Fix formatting of long integers in outputs and console (#27479)
This is needed to make it possible to use the scram-sha-256 authentication method for the pg backend. It's not easy to write unit-tests for this since it requires a specific configuration of th…
Backport This PR is auto-generated from #27440 to be assessed for backporting due to the inclusion of the label 0.14-backport. The below text is copied from the body of the original PR. Recent cha…
Is there anything other than tfenv that provides a smooth experience for various terraform versions? Maybe a docker-driven approach that's not hideous to look at, with something like whalebrew or the like?
Also, when installing a new version I kinda wanted it to prompt me to set it as the default, instead of having to run 2 commands. So before I dive into exploring submitting a PR or something for that, I'd like to know if it's still the best tool for managing various versions of terraform
i was kinda asking down a parallel road recently, @Erik Osterman (Cloud Posse) brought up env-cli, which looks cool but i still haven’t played with… https://github.com/EnvCLI/EnvCLI
i’ve got a probably dumb question about using docker containers… is there a simple/automatic way to refer to local files from the host, within the container environment? i was just playing with the terraform container, which says to do this:
docker run -i -t hashicorp/terraform:light plan main.tf
but of course that fails because 1) it’s invalid syntax for terraform and 2) the container workdir does not have my main.tf. i do know about -v of course, and can mount $PWD to /, but what i’m more interested in is the idea of using a docker image to replace a binary installed to my system. if i have to mount $PWD to the workdir every time, that seems a little more annoying?
Terraform & Terragrunt Version Manager. Contribute to aaratn/terraenv development by creating an account on GitHub.
Hi guys, I am using terraform 0.12.24 and trying to run the cloudposse asg module, but I'm getting the below error even though I am using the correct version - not sure if I am missing anything else.
terraform init
Initializing modules...
Downloading git::https://github.com/cloudposse/terraform-aws-ec2-autoscale-group.git?ref=tags/0.10.0 for autoscale_group...
- autoscale_group in .terraform/modules/autoscale_group
Downloading cloudposse/label/null 0.22.1 for autoscale_group.this...
- autoscale_group.this in .terraform/modules/autoscale_group.this
Error: Unsupported Terraform Core version
Module autoscale_group (from
"git::<https://github.com/cloudposse/terraform-aws-ec2-autoscale-group.git?ref=tags/0.10.0>")
does not support Terraform version 0.12.24. To proceed, either choose another
supported Terraform version or update the module's version constraint. Version
constraints are normally set for good reason, so updating the constraint may
lead to other errors or unexpected behavior.
Hey, the version of the module you’re using needs Terraform 0.13 or greater as you can see here: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group/blob/b6c072f676b0b3ddcecb705dd9ffbbd6f92aba1b/versions.tf#L2
branch 0.12
Hmm in the output it says you’re trying to use tag v0.10.0:
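For context, the check that produces this error lives in the module's versions.tf: a required_version constraint that rejects older CLIs at init/plan time. A minimal sketch of the mechanism (the exact constraint is what the link above shows; this pin is illustrative):
# versions.tf inside the module
terraform {
  # any root configuration run with an older CLI than this fails with
  # "Unsupported Terraform Core version"
  required_version = ">= 0.13.0"
}
So the options are: upgrade the CLI to 0.13+, or pin the module ref to an older tag that still allows 0.12.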
2021-01-21
Getting an error while configuring eks_cluster_node
using terraform-aws-eks-cluster
?
yes @Michael Dizon
what is kubernetes_config_map_ignore_role_changes
set to?
kubernetes_config_map_id = module.eks_cluster.kubernetes_config_map_id
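For anyone landing here later: kubernetes_config_map_ignore_role_changes is an input on the cloudposse/terraform-aws-eks-cluster module. A hedged sketch of where it is set (other required inputs elided; default and exact behavior should be checked against the module's README):
module "eks_cluster" {
  source = "cloudposse/eks-cluster/aws"

  # when true, the module ignores drift in the aws-auth ConfigMap role
  # mappings; when false, Terraform keeps reconciling worker role entries
  kubernetes_config_map_ignore_role_changes = false

  # ... cluster name, VPC, and subnet inputs elided ...
}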
hey guys, if anyone can help me understand the https://registry.terraform.io/modules/cloudposse/eks-node-group/aws/latest module - I’d appreciate it. Right now it’s creating everything except the nodes themselves. here’s a snippet: https://gist.github.com/todd-dsm/7a8f96fe488917f3d7dd1fc3516e3c3c#file-main-tf-L36-L60
$ tf apply
...
module.apps_cluster.null_resource.wait_for_cluster[0]: Still creating... [3m40s elapsed]
Error: Error creating IAM Role smpl-stage-pipes-workers: EntityAlreadyExists: Role with name smpl-stage-pipes-workers already exists.
status code: 409, request id: 7bfdb776-988c-4f43-8ac0-4e596d1ee261
Error: Error creating IAM Role smpl-stage-pipes-workers: EntityAlreadyExists: Role with name smpl-stage-pipes-workers already exists.
status code: 409, request id: 3657c9be-1038-4d79-870e-3ae73ce83a64
Error: Error creating IAM Role smpl-stage-pipes-workers: EntityAlreadyExists: Role with name smpl-stage-pipes-workers already exists.
status code: 409, request id: 068bbe8a-b6da-47a7-a53c-b584873f84c9
Error: Error running command 'curl --silent --fail --retry 60 --retry-delay 5 --retry-connrefused --insecure --output /dev/null $ENDPOINT/healthz': exit status 7. Output:
Releasing state lock. This may take a few moments...
https://gist.github.com/todd-dsm/7a8f96fe488917f3d7dd1fc3516e3c3c#file-main-tf-L49 your variable for the instance family is in quotes
@Yonatan Koren good eye, removed the quotes; still the same issue. does your config look similar (or the same) as mine? anything else stand out to you?
did you start by using the example?
Terraform module to provision an EKS Node Group. Contribute to cloudposse/terraform-aws-eks-node-group development by creating an account on GitHub.
@voidSurfr looks like Terraform is trying to create IAM Roles which already exist. I think you may have exited out of Terraform ungracefully. I’d try cleaning up those roles and then trying again
@Erik Osterman (Cloud Posse), yes, started with exactly that.
@Yonatan Koren yeah, I've deleted the role, re-planned, re-applied, and get the same error, over and over. The next diagnostic step is to pore over every line of TF, which I really don't want to do.
Process wise: I started incrementally with a VPC, no issues. Subnets, no issues. Cluster, some issues but easily solvable. Nodes, just will not process due to this error.
as a continuous theme, the context = module.this.context
had to be removed from each module before any of the pieces would work.
here's an isolated example of the VPC via direct copy/paste:
$ make plan
terraform plan -no-color \
-out=/tmp/pipes-stage.plan 2>&1 | \
tee /tmp/tf-pipes-stage-plan.out
Releasing state lock. This may take a few moments...
Error: Reference to undeclared module
on actuate.tf line 11, in module "label":
11: context = module.this.context
No module call named "this" is declared in the root module.
Error: Reference to undeclared module
on actuate.tf line 35, in module "vpc":
35: context = module.this.context
No module call named "this" is declared in the root module.
Hrmm something’s fishy
all of our examples are tested every single time we open a PR. The context stuff definitely works.
we use terratest
for automated testing.
Can you link me to the example you copied?
as an additional wrinkle, when I started experimenting the modules were not yet updated for 0.14, so I stayed with v0.13.5; I'm still on the old version and I see commits with "Terraform 0.14 upgrade". So, that might be part of it.
@Erik Osterman (Cloud Posse) I’ve dumped it into a temp repo https://github.com/todd-dsm/cp-eks left 2 open issues in with it
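For readers hitting the same "No module call named \"this\"" error: the Cloud Posse examples ship a context.tf file in the root module that declares module "this", so copying a module block without that file breaks every context = module.this.context reference. A hedged sketch of the relevant part (the real file is longer; the version pin is illustrative):
# context.tf, copied into the root module from cloudposse/terraform-null-label
module "this" {
  source  = "cloudposse/label/null"
  version = "0.22.1" # illustrative pin

  # namespace, stage, name, tags, etc. are declared as variables
  # alongside this block in the full file
}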
https://github.com/hashicorp/terraform-provider-aws/releases/tag/v3.25.0
• New Resource: aws_backup_global_settings
(#16475)
• New Resource: aws_sagemaker_feature_group
(#16728)
• New Resource: aws_sagemaker_image_version
(#17141)
• New Resource: … (#17123)
Which task runner would you use for Terraform operations?
I have not used Task before! That looks like a nice tool, I will certainly be trying it! So far I have been using Makefiles and Ansible roles but makefiles can be cumbersome to use and more difficult to understand!
My team has been using go-task for over a year now. It is just so much cleaner and easier to understand than make. Highly recommend.
Aha! I missed this.
We started with https://github.com/mumoshu/variant and now use https://github.com/mumoshu/variant2
Wrap up your bash scripts into a modern CLI today. Graduate to a full-blown golang app tomorrow. - mumoshu/variant
Turn your bash scripts into a modern, single-executable CLI app today - mumoshu/variant2
variant looks almost identical to taskfile, but with a richer interface. variant2 is a complete rewrite that uses HCL. Both are a bit more complicated to use than taskfile (#variant)
2021-01-22
Hello, the code below triggers a null_resource for each key at creation, and also when the resource is removed. Can you help me figure out how to trigger it only at resource creation?
resource "null_resource" "connection_test" {
for_each = local.config_map
triggers = {
key = each.key
}
connection {
..
}
}
resource "null_resource" "project_mgmt" {
triggers = {
project_id = google_project.my_project.project_id
}
provisioner "local-exec" {
command = <<-EOC
gcloud pubsub topics publish projects/project_mgmt/topics/register --message '{"project":"${self.triggers.project_id}"}'
EOC
}
provisioner "local-exec" {
when = destroy
command = <<-EOD
gcloud pubsub topics publish projects/project_mgmt/topics/unregister --message '{"project":"${self.triggers.project_id}"}'
EOD
}
}
thanks a lot! so I use "when = create" in my case
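Tying the two answers together: a provisioner with no when argument runs only at creation (when = create is the default), so the creation-only variant of the original snippet looks roughly like this (the command is a placeholder):
resource "null_resource" "connection_test" {
  for_each = local.config_map

  triggers = {
    key = each.key
  }

  # no "when" argument, so this runs only when the resource is created,
  # never on destroy
  provisioner "local-exec" {
    command = "echo connection test for ${each.key}" # placeholder command
  }
}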
Has anyone in here created (CICD) deployment pipelines for ECS Fargate container based services? I’m currently falling back to AWS CLI after trying too hard to leverage aws_ecs_task_definition
outside the "resource provisioning" context. So I'm just curious if anybody else has been there; I found surprisingly little out there in the blogosphere.
Yup, what would you like to know?
Hi there - nothing specific - just wanted to chat about this So small shell scripts registering tasks and updating the service (still minted with terraform at “provisional” stage) during CICD steps as well?
One part I feel somewhat bad about is that I keep a “default boilerplate task” in terraform from which I get the pertaining services’ boilerplate task json through AWS CLI -> create new task -> update service.
It’s only because I leverage terraform and fill environment variable values - TF is just the right tool still for that IMHO.
So I guess I’m just curious how you went about this - what did you do differently in your case then?
Terraform creates the ECS service, initial task definition, container definition etc, but ECS service ignores changes to the task definition
I do a lifecycle ignore_changes on the service's task definition reference
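A minimal sketch of that pattern (resource names and counts are illustrative): Terraform registers the first task definition, then ignores the revisions that the deploy tooling registers out of band.
resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn # initial revision only
  desired_count   = 2

  lifecycle {
    # CI/CD (e.g. ecs-deploy) registers new task definition revisions, so
    # Terraform must not roll the service back to the revision it created
    ignore_changes = [task_definition]
  }
}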
Use ecs-deploy for CI/CD of the service from that point
Simple shell script for initiating blue-green deployments on Amazon EC2 Container Service (ECS) - silinternational/ecs-deploy
Just need to make sure your ECS service Terraform code can deal with making changes to things that require a new ECS service to be created, ensuring no downtime
e.g. if you want to change your load balancer configuration, that is a ForceNew on the aws_ecs_service
and if you spin up a new service, make sure you run the correct image version from CICD, not the one that Terraform initially created
Some nuances there.
I'm not sure I'm following with regards to a forced-new ecs_service, unless you mean the same thing here -> drift of the actually-current task definition (because the one on the Terraform service will retrigger a deployment of the original "boilerplate" task).
Nope
What happens if you want to change an attribute that is a ForceNew attribute?
e.g. I want to change the load balancer that fronts my ECS service.
Or the role? Or the service registries
If you change any of those attributes, Terraform is going to want to throw away your ECS service, and create a new one
I see - in my case that should be fine though as my shell script looks up the service / task and updates accordingly at the latest at (re)deployment-of-service stage? Maybe I’m still not following though.
Definitely a manual step though.
Think I missed the point
I will look into ecs-deploy
this is maybe a better idea than managing my own scripts.
If you change any of those attributes, Terraform is going to want to throw away your ECS service, and create a new one
^ I see this as manageable as long as I keep the task in sync (outside of Terraform)
If that’s not the point - again I’m missing it
This isn’t a scripts/deployment thing, specific to managing your ECS service with Terraform
Just need to make sure your ECS service Terraform code can deal with making changes to things that require a new ECS service to be created, ensuring no downtime
So this then? I guess I was assuming terraform taking care of that? So in your load balancer scenario: I was assuming as long as my stack configuration is sane this will be taken care of “automatically” by Terraform (“DAG”). Load balancer update -> registering new version of services. Maybe not 0 downtime though as in k8s service redeployment kind of sense. I might be good with that but good point though. How did you fortify for that?
It won’t be 0 downtime.
Terraform will want to destroy/create your ECS Service
It can't create the new service before the old is destroyed if the service names are not unique
Ok so do you append a git sha to the name for example - to mitigate that?
One could still look up the service by tag and take the latest for the deployment part?
The way we deal with it Terraform picks up a change to e.g. load balancer, creates a new ECS service and associated resources, suffixes the ecs service name with a random string, brings it up, tests that the new containers come up healthy at the same version your CICD thinks should be deployed (not what Terraform first thought to deploy as this will now be out of date), tests that the new ECS service is healthy in the ALB, and then destroys your old service
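A rough sketch of the naming half of that approach (not the full health-check orchestration; all names are illustrative): suffix the service name with something that changes whenever a ForceNew attribute changes, and let create_before_destroy bring the replacement up first.
resource "random_id" "service_suffix" {
  byte_length = 2
  # regenerate the suffix when an attribute that forces a new service changes
  keepers = {
    target_group = aws_lb_target_group.app.arn
  }
}

resource "aws_ecs_service" "app" {
  name            = "app-${random_id.service_suffix.hex}" # unique per replacement
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2

  load_balancer {
    target_group_arn = aws_lb_target_group.app.arn
    container_name   = "app"
    container_port   = 8080
  }

  lifecycle {
    create_before_destroy = true
    ignore_changes        = [task_definition]
  }
}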
Moving tags on deployments for environments
Terraform will always deploy $service_name:$env
CICD deployment logic tags the image it deploys to $env
Perfect sounds like something I also would have ended up doing - thanks for getting me on the 0 downtime vector there - I was disregarding this for now.
I will probably end up using the git sha of the latest commit at hand.
Most ECS modules out there don't deal with these edge cases and IMO aren't production safe
Yeah I quickly realized I had to make this up myself - again didn’t see ecs-deploy for that reason.
Been running this setup for a year at current big client
No issues blowing away production ECS services and doing blue/green deployments for new ones
Awesome, thanks a lot
As I’ve been working in a vacuum and while I have you - look at the insane healthcheck I’m sideloading with terraform task config:
/bin/bash -c "\
exec 3<>/dev/tcp/${HOSTNAME}/${PORT} \
&& printf '\
GET ${HEALTHCHECK_ENDPOINT} HTTP/1.1\r\n\
Host: ${HOSTNAME}:${PORT}\r\n\
Connection: close\r\n\
User-Agent: ${HOSTNAME} healthcheck\r\n\
Accept: */*\r\n\r\n\
' >&3 \
&& head -n 1 <&3 | grep -F 'HTTP/1.1 200 OK' >/dev/null \
|| exit 1"
I should have maybe just jumped through the hoops of installing curl in every microservice dockerfile.
I admit it
I feel ECS Fargate is great at face value but bringing it into prod was less straightforward than what we would have hoped for (we wanted to drop managed k8s for complexity reasons..).
In the end still less complex and nice to have one layer of abstraction less (for the most part). Terraform is just awesome.
So I guess if there is a question - did you end up just installing curl with every container image Josh?
You will want something for your docker healthchecks curl, wget, whatever
Yeah, as you can see above, since I didn't want to bake any of those in I fell back to shell builtins.
But agree a simple HTTP GET via curl would be more readable maybe.
Not maybe, definitely
Thanks for the feedback Josh, I really appreciate it. Cabin fever is real.
np, can be hard if working in isolation
This is a fairly robust module, may talk to client and see if we can publish to the registry
ECS module, batteries included
If you can do it I’m guessing we are not the only ones having to solve these problems.
Are you doing this as a consultant? I mean thinking cloudposse, right? Most likely some dev like myself reaches out for you to manage it for them?
Anyways open source yadayada keep the good fight
Whatever floats your boat - can just tell you I didn’t find a blog article on first naive duckducking
I’m a consultant for Disney+ at the mo, have worked with the good CP folks in the past
So maybe just that, probably good karma or more.
Very nice - yeah they rock.
Lots of Disney+ services using this module now
I don’t think I’ve ever ended up using one of their modules but I like to read their code and attend their wednesday office hours.
But for sure if need arises would for example get in contact.
CP modules are great; even if not quite for you, always an excellent starting point
So I guess it’s a good model? I hope so as open source is great.
I totally agree.
Very inspiring also because they try to do quite a lot.
IMHO sensing lots of Hashicorp “DNA” in that way?
Not sure I follow
Maybe too esoteric observation but I guess you know what I mean
Haha that’s allright.
Good work all the way.
Also very cool with regards to disney+
Still always amazed that even big co's are relying on cloud platforms - I know netflix paved the way.
I guess it just makes sense and in the end you can bargain at that $ize - now even more so with Terraform and theoretical “platform independence”.
I’m also exposed to some mixed on-prem / cloud infra and I hate to admit it but I prefer to work with all these nice terraform providers at this point.
Pre terraform I really hated most of those cloud platforms.
Anyways - getting back to my shiny ECS deploy - thanks again and have a great weekend
Using terraform-aws-alb: How do you add an instance to a target group?
Terraform module to provision a standard ALB for HTTP/HTTPS traffic - cloudposse/terraform-aws-alb
we usually use that module in conjunction with https://github.com/cloudposse/terraform-aws-alb-ingress
Terraform module to provision an HTTP style ingress rule based on hostname and path for an ALB using target groups - cloudposse/terraform-aws-alb-ingress
and the service behind (ECS, etc) will attach the container/instance
Ahh, okay, cool. I was just writing it into the terraform-aws-alb
module and was gonna do a pull request, this is a solid solution though.
tyvm
yes, that is not the job of that module; it's the "thing" (the service in AWS) that should do the "glue" to attach the instance
Makes sense. How about when there are multiple target groups to be used with the alb?
same deal
some instances will attach to one or the other
I mean how do I attach multiple target groups to the single alb?
ahhhh
https://github.com/cloudposse/terraform-aws-alb-ingress can create a TG per rule
attached to the same ALB
Okay cool, I’ll use that.
Thank you again so much.
np
Hmm.. I'm still struggling to make sense of the pattern here. The Host header rules are configured within wordpress_alb_ingress, correct? So I should have a wordpress_alb_ingress for each target group?
Reading the source now, this is making it more clear. unauthenticated_hosts and unauthenticated_listener_arns were unclear to me before.
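For anyone following along, a minimal sketch of the pattern being described: one ingress module (and hence one target group) per Host rule, all attached to the same ALB listener. Input and output names here follow my reading of the modules' READMEs at the time and should be double-checked; VPC/ALB wiring is elided.
module "wordpress_alb_ingress" {
  source = "cloudposse/alb-ingress/aws"

  name                          = "wordpress"
  vpc_id                        = module.vpc.vpc_id
  unauthenticated_listener_arns = [module.alb.http_listener_arn]
  unauthenticated_hosts         = ["blog.example.com"]
}

module "api_alb_ingress" {
  source = "cloudposse/alb-ingress/aws"

  name                          = "api"
  vpc_id                        = module.vpc.vpc_id
  unauthenticated_listener_arns = [module.alb.http_listener_arn]
  unauthenticated_hosts         = ["api.example.com"]
}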
2021-01-24
Hi amazing folks, We are using the following module in our codebase
module "module-name" {
source = "git::<https://github.com/cloudposse/terraform-aws-iam-s3-user.git?ref=0.14.1>"
namespace = "xxx"
stage = "xxx"
name = "xxx"
s3_actions = [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:DeleteObject",
"s3:DeleteObjectVersion"
]
s3_resources = [
"my-resources"
]
depends_on = [
"***"
]
}
now strangely
it is removing those s3_actions when i execute terraform plan
~ json = jsonencode(
{
- Statement = [
- {
- Action = [
- "s3:PutObjectAcl",
- "s3:PutObject",
- "s3:ListBucket",
- "s3:ListAllMyBuckets",
- "s3:GetObjectVersion",
- "s3:GetObjectAcl",
- "s3:GetBucketLocation",
- "s3:DeleteObjectVersion",
- "s3:DeleteObject",
]
- Effect = "Allow"
- Resource = [
- "***"
]
- Sid = ""
},
]
- Version = "2012-10-17"
}
If it is the same module version and you just changed the Terraform version and this is happening, then try comparing the installed provider versions; this could be a provider change. Unless you were on a very old module version and upgraded to the latest and there have been changes that do this, but that is unlikely
Thanks a lot @jose.amengual for taking the time to answer .. let me go through the changes … i feel i need to sandbox these things to get more understanding …. anyway thanks a lot
we are upgrading to version 0.14.4 and our expectation is that there should be no change in infrastructure when we execute terraform plan/apply
theoretically, it depends more on the version of the terraform-aws-provider
since sometimes behavior there can change (E.g. defaults)
ah okay … yeah i think its the default policy which is changing… we want to upgrade the terraform resource files without affecting the infrastructure … so i think i need to dig more
2021-01-25
Testing https://taskfile.dev/#/ by creating a TF taskfile for all TF ops instead of a Makefile. Sample of Taskfile.yml:
# https://taskfile.dev
version: '3'
vars:
GREETING: XXX >> Terraform Taskfile!
PROFILE: XXX-dev
tasks:
default:
desc: Hello MSG.
cmds:
- echo "{{.GREETING}}"
silent: true
main:
desc: Main workflow.
cmds:
- task: init
- task: validate
- task: plan
aws-vault:
desc: Login using aws-vault.
cmds:
- aws-vault --version
- aws-vault exec {{.PROFILE}} --duration=2h
silent: true
init:
desc: Terraform init.
cmds:
- terraform -chdir=$DIR init
upgrade:
desc: Terraform upgrade.
cmds:
- terraform -chdir=$DIR init -upgrade=true
validate:
desc: Terraform validate.
cmds:
- terraform -chdir=$DIR validate
plan:
desc: Terraform plam.
cmds:
- terraform -chdir=$DIR plan -compact-warnings
apply:
desc: Terraform apply.
cmds:
- terraform -chdir=$DIR apply -auto-approve
A task runner / simpler Make alternative written in Go
What are your impressions?
I noticed a small typo here:
plan:
desc: Terraform plaM. <-
Thanks, just testing for now; I'm thinking of replacing my Makefile with a Taskfile, and I want to see how others are using Taskfile
• Fast
• easy to understand and implement
• more features than makefile
• easy help output
• can be used to unify operations across development machines and CICD tools
and my fav one: it supports .env, so you can keep secrets and tokens in a .env file out of git
Thanks for sharing. We wanted to give this a try, but haven’t had time to play around. Will try to do some poc in the future
I think @roth.andy is using it
Yes he voted here https://sweetops.slack.com/archives/CB6GHNLG0/p1611296753008900
Which task runner would you use for Terraform operations?
:one: https://taskfile.dev 2
@Mohammed Yahya, @roth.andy
:two: Makefile 3
@aaratn, @loren, @mfridh
:three: Other tool! - let me tell you more about it! 1
@Pierre-Yves
Created by @Mohammed Yahya with /poll
go-task
Hi guys, when trying to use the terraform-aws-eks-workers module with Terraform 0.14.3 I'm unable to do so because of the hard-coded version of terraform-aws-ec2-autoscale-group in here. Is there a reason for that? It prevents me from using the module with our current Terraform version. Appreciate the help!
Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers - cloudposse/terraform-aws-eks-workers
Terraform module to provision Auto Scaling Group and Launch Template on AWS - cloudposse/terraform-aws-ec2-autoscale-group
Is there a way to tell terraform to ignore changes on “latest_version” while using the autoscale-group module?
is your module tagged
whats your current “source” argument for the module
also, ignore is not working as expected in the ecs-alb-service-task module; check this issue https://github.com/cloudposse/terraform-aws-ecs-alb-service-task/issues/93
Found a bug? Maybe our Slack Community can help. Describe the Bug task_definition ignore is not working as expected tf version 0.14.5 Expected Behavior task_definition should be ignored
@RB Module isn’t tagged and I’ve forked the module repo, so I’m using my GitHub account’s link.
@Mohammed Yahya Any ideas? Is ignore for latest version supposed to work out of the box for this module? It’s messing up my automation a bit, err.
not sure, try to use it locally - update the terraform-aws-provider to latest and test
Hi everyone, been using ecs-alb-service-task
for a long time, but now I have a problem: how do I make 2 services on the same cluster communicate with one another (or rather A -> B)?
I found an easy way to use Service Discovery that matches the modules well, BUT there is this problem:
Error: InvalidParameterException: Specify a value for either 'port' or the 'containerName' and 'containerPort' combination, but not both. Remove one and retry. Registry: arn:aws:servicediscovery:eu-west-1:790682551775:service/srv-6ixnxocmk32rt3on "demo-release-3tier-web"
The module requires all 4 properties, but the underlying resource in Terraform raises this error
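For reference, the constraint in that error comes from the aws_ecs_service service_registries block itself: port is for SRV records, container_name/container_port are for bridge/host networking (and with awsvpc A records you set neither), and the two shapes are mutually exclusive. A hedged sketch with illustrative names and ports:
resource "aws_ecs_service" "b" {
  name            = "service-b"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.b.arn

  service_registries {
    registry_arn = aws_service_discovery_service.b.arn

    # EITHER set port alone (SRV records) ...
    # port = 8080

    # ... OR the container_name/container_port pair, but never both:
    container_name = "app"
    container_port = 8080
  }
}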
Hello, how do you organize your code for multi-region ? do you set the region at provider level ? or providing location at each element ? or include the region in the directory structure ?
use providers with aliases
# The default "aws" configuration is used for AWS resources in the root
# module where no explicit provider instance is selected.
provider "aws" {
region = "us-west-1"
}
# An alternate configuration is also defined for a different
# region, using the alias "usw2".
provider "aws" {
alias = "usw2"
region = "us-west-2"
}
# An example child module is instantiated with the alternate configuration,
# so any AWS resources it defines will use the us-west-2 region.
module "example" {
source = "./example"
providers = {
aws = aws.usw2
}
}
yes, that's the documentation, but my point is how the code is organised to propagate 1+ providers to many modules to keep things clean
there are many ways to do it
my approach: a folder layout as follows:
tf-templates
|-123456789012
|--us-east-1
|--us-west-1
I would use account_id as the parent folder and regions as subfolders, but this is a highly personal choice
Thanks for your input
I have a flat base repo
I have two backend configs (one per region)
so I run -backend-config=east2.tfvars etc
but the repo is the same, no folder structure etc
I’m not a fan of subfolders
that should work also; I tried this before, but sometimes identical TF templates with different configs don't cover all use cases. With TF subfolders you can map easily to TF Cloud
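For readers unfamiliar with the flat-repo pattern above: a partial backend configuration leaves the per-target values out of the backend block and supplies them at init time. A minimal sketch with illustrative values:
# backend.tf
terraform {
  backend "s3" {
    # bucket/key/region intentionally left out; supplied per target
  }
}

# east2.tfvars, passed via: terraform init -backend-config=east2.tfvars
bucket = "my-tfstate-us-east-2" # illustrative
key    = "base/terraform.tfstate"
region = "us-east-2"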
thanks a lot @Mohammed Yahya and @Erik Osterman (Cloud Posse) for your time and the detailed information in the video
2021-01-26
Hi, just passing by to ask, from you guys' experience, what is the best approach to building my Terraform project structure. A little bit of context:
• Currently I'm not using Terraform for provisioning servers; I have a strict infrastructure requirement: be cloud-agnostic. We can use AWS or Azure or even on-premise;
• I already have all the necessary software configuration required to meet the previous requirement; everything is in "IaC"/automation using Ansible;
• Currently my platform is in one cloud provider and now I want to add a new one for platform redundancy; I believe now is the time to add a provisioning tool like Terraform.
• All the environments should have the same structure (same network settings, same databases, same backend services, etc). The difference might be that production has 5 servers and dev only 1, for example;
• I've seen several examples of code structure, like the modules approach: separate the environments by folders and reuse the modules. One question here: this can lead to code duplication, right? Any strategy on this, or is there a better approach?
• What is the best strategy for a collaborative approach to Terraform development? Starting from the beginning by saving the state in a centralized service like S3 (is that a must-have)? Would be happy to hear from someone with experience with this use case.
can anyone recommend some good documentation on configuring terraform compliance?
we would like to run it in CI for our terraform modules monorepo, as well as when we execute these modules from other repos as part of Atlantis
There’s some good docs here: https://www.checkov.io/1.Introduction/Getting%20Started.html
do you use this?
Yup, tons more coverage for both static/dynamic TF on pre-commit, build and run with githooks. Also covers Serverless, k8s manifests, cfn ..
but this is python based right ?
or do you use it in conjunction with tf compliance?
Slightly different vibes https://blog.christophetd.fr/shifting-cloud-security-left-scanning-infrastructure-as-code-for-security-issues/
Identify cloud security issues and misconfigurations even before they pose an actual security risk by performing static analysis of Terraform code.
hmmm interesting
the issue we have is our team aren’t python devs
There’s tons of checks for AWS now and the documentation for adding checks is on checkov.io too.. it’s not super hard (even I have done a few and I am an idiot).
it does not look too bad, but it will probably be a hard sell as it's python, which is unfortunate
Check #security Edit: didn’t mean to send just yet
Erik just posted a link that goes over aws security or something to that effect
Interesting! Try a couple manual runs on your monorepo and see how you get on - should give you an idea of coverage/gaps.
Here’s what Erik linked - https://summitroute.com/downloads/aws_security_maturity_roadmap-Summit_Route.pdf
I’m revamping our pipelines to run some type of security scan after deploying to an environment, or weekly/monthly/etc.. not exactly sure. I might introduce BridgeCrew as well.
sure i am looking to run terraform-compliance during the CI of our mono repo firstly
tools to start with:
• tfsec
• checkov
• terraform-compliance
more advanced use cases:
• OPA
I have tfsec setup already
Looking to get compliance setup
basically looking for recommendations on the best approach for such a setup
all of the tools to start with are static analysis (SAST) tools, so start with one tool at a time, reduce false positives, integrate it into your CICD, then you can add other tools to the pipeline
for a monorepo, you need a bit of bash scripting to cover all your folders
Yeh trying to work out the best way to do terraform compliance with our mono repo
Is the best way to have tfvars in each module? Do init, plan with the tfvars and then validate?
no tfvars should be in a module; a module should get its values from the caller, which passes them down. tfvars should live in a tf-templates folder or something like that.
Doesn’t it make sense to have the tfvars as close to the module as possible?
Because we have dedicated root repos where we execute terraform in accounts from.
What is a module in your use case?
It could be for another
But we have root repos that call modules at a specific hash
there is no single way to do this; terraform can be used in many different ways
The root repos are easy to run compliance against
cloud posse is updating all of our modules to pass bridgecrew compliance
@Mohammed Yahya: do init - fmt - validate - tfsec - plan -- …
Done and I have added validate and tfsec in the pre-commit hook ;)
2021-01-27
Is there any way to collect all resources of a certain resource type into a list, without explicitly knowing the names of all the resources?
Say I want to get all resources of type azuread_group that exist, but not really have to know the resource name of all of them.
[for azure_ad_group.*] or something along those lines?
I do not think that is possible
you will have to have a list or map of all the resources of that type and iterate over it
and it will be pretty slow and buggy
Seems like a better fit for Azure cli / PowerShell
yeah i thought so, i am creating a bunch of aad groups in tf, but want to output all of their ids. i was hoping there was a clean way to collect them all. but doesn’t look like it
you could drop to local-exec and do the API calls and such if the resources are not in the state
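One caveat worth adding: if the groups are created by a single resource block with for_each (rather than as separately named resources), there is a clean way to collect the ids with a for expression. A sketch with illustrative names (attribute names varied across azuread provider versions):
resource "azuread_group" "groups" {
  for_each = toset(var.group_names) # e.g. ["devs", "ops", "auditors"]
  name     = each.key
}

output "group_ids" {
  # collects the id of every group created by this block
  value = [for g in azuread_group.groups : g.id]
}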
v0.15.0-alpha20210127 0.15.0 (Unreleased) BREAKING CHANGES:
The list and map functions, both of which were deprecated since Terraform v0.12, are now removed. You can replace uses of these functions with tolist([…]) and tomap({…}) respectively. (#26818)
Terraform now requires UTF-8 character encoding and virtual terminal support when running on…
Prior to Terraform 0.12 these two functions were the only way to construct literal lists and maps (respectively) in HIL expressions. Terraform 0.12, by switching to HCL 2, introduced first-class sy…
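The mechanical upgrade is small; a before/after sketch:
# Terraform <= 0.11 style, removed in 0.15:
#   list("a", "b")
#   map("k1", "v1", "k2", "v2")
# First-class syntax, with explicit conversion where a type is needed:
locals {
  my_list = tolist(["a", "b"])
  my_map  = tomap({ k1 = "v1", k2 = "v2" })
}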
Must-watch GitHub repos:
• https://github.com/donnemartin/awesome-aws
2021-01-28
CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code - GoogleCloudPlatform/terraformer
Question for discussion here:
Yesterday I noticed that terraform-aws-rds doesn’t set encryption on the RDS by default. I opened an issue which @antonbabenko closed. I completely understand Anton’s reasoning here, but it raises a question for me: how opinionated should the Terraform modules be?
Users can use checkov/cloudrail/sentinel/terrascan/tfsec to catch things like this, but why not reduce the chances they trip on something?
I’m generally of the opinion of reducing the potential for someone to make a mistake, as many people using Terraform today are new to both TF and AWS. I’m curious what others think here.
i think you end up with pain either way. there are always new features being released, new ways to configure things the “best” way.
a maintainer can implement those features in their module as defaults, with likely backwards-breaking changes, and release a new major version. but then the user needs to be flexible and aware of how to update their config/state much more frequently.
or the maintainer can choose to prioritize backwards-compatibility, release minor versions with new features as much as possible, and users can update configs to consume those features as it becomes important to them.
True. And AWS, as an example, are very good at prioritizing backwards compatibility. But that also results in people making grave mistakes, ending up with exposed buckets, sub-optimal performance, etc.
So there’s a case to be made for nudging people towards better use of their cloud resources, even at the cost of them needing to do some upgrade work between versions.
i think if someone takes that approach, it needs to be quite explicit as a goal, so the user knows what they are opting into
this is a whole different module and maintainer, most likely, with different principles
Valid point. @Erik Osterman (Cloud Posse) what’s cloudposse’s approach on this matter?
It might add too much overhead, but CloudPosse could have a set of hardened modules (think fork/clone of the original) https://github.com/terraform-aws-modules/terraform-aws-rds & https://github.com/terraform-aws-modules/terraform-aws-rds-hardened or something to that effect. We ended up cloning some of the repos and then abandoning it ~6 months later... way too much maintenance for just a few of our teams to use.
Terraform module which creates RDS resources on AWS - terraform-aws-modules/terraform-aws-rds
Yep, we’re currently updating all over our modules to use secure defaults. This is a “New Years Resolution” - something we kicked off at the beginning of january.
We'll probably finish updating all modules this week (@Maxim Mironenko (Cloud Posse))
It might add too much overhead, but CloudPosse could have a set of hardened modules (think fork/clone of the original)
Cloud Posse has a pretty stark philosophical difference to developing modules that is incongruent with terraform-aws-modules
Also, terraform-aws-modules is frequently assumed to be "official", but it is just another organization not directly affiliated with hashicorp.
For example, take the VPC module:
for any module to have 400+ inputs, it’s doing way too much.
Here's an example of some of the updates we made to the S3 bucket module: https://github.com/cloudposse/terraform-aws-s3-bucket/pull/70
what BridgeCrew compliance checks fix readme updated default behaviour changed: S3 bucket MFA delete enabled by default default behaviour changed: S3 Bucket Versioning enabled by default default b…
Interesting approach @Erik Osterman (Cloud Posse). So essentially it’s down to the maintainer/developers’ philosophy, and then users can choose which one they prefer to go with. Makes sense.
Wow, here we have a live example from today: https://sweetops.slack.com/archives/CB6GHNLG0/p1611851584111900
Where enforcing stricter security is causing breaking changes.
Hi folks, I am not sure if this should be here, or the AWS channel. But I am having some difficulty with the new changes on the terraform-aws-codebuild
module. That recently enabled mfa_delete
by default. That requires manual intervention to change. When modifying on the CLI, I got the error:
An error occurred (InvalidBucketState) when calling the PutBucketVersioning operation: Mfa Authentication is not supported on a bucket with lifecycle configuration. Delete lifecycle configuration before enabling Mfa Authentication.
So, I deleted the rule temporarily. However, when running the apply I now get:
Error putting S3 lifecycle: InvalidBucketState: Cannot put lifecycle configuration on a bucket that has MFA enabled
Anyone run into this, and perhaps have a way to resolve? On this particular project, I can safely delete the stack and re-create, but I a have another where I likely cannot.
Thanks for pointing out! Looks like we had to roll this one back.
No good deed goes unpunished.
Just wanted to link to this thread for reference: https://sweetops.slack.com/archives/CB6GHNLG0/p1611842168105200
Sorry for the grief here - but yes, the gist of it is we are moving to secure defaults for CIS benchmark compliance
Everything can be overridden and disabled
It's just better to explicitly disable security features
Hi @Erik Osterman (Cloud Posse) understood - I guess it just wasn’t clear to me how to get this one to apply, at all.
I did open a ticket on the repo, maybe I am missing something
@Maxim Mironenko (Cloud Posse)
Please look into this
I fully believe I could be doing something wrong
@Joe Hosteny please, try the new release: https://github.com/cloudposse/terraform-aws-codebuild/releases/tag/0.30.0
I’ve removed mfa_delete
due to a Terraform issue related to it. Thanks for the feedback!
@Maxim Mironenko (Cloud Posse) thanks for looking into this so quickly, will do!
This worked for me now, with versioning being set, but mfa_delete staying set to false
I attempted to solve this by deleting the bucket, and letting TF re-apply from the start, but now I am stuck on:
Error putting S3 versioning: AccessDenied: Mfa Authentication must be used for this request
This may help: https://github.com/hashicorp/terraform-provider-aws/issues/629#issuecomment-371462069
This issue was originally opened by @Techbrunch as hashicorp/terraform#12973. It was migrated here as part of the provider split. The original body of the issue is below. How to enable mfa_delete o…
Right. When running this command using geodesic, though, it is using an assume role into the account. I think this setting requires you to use the root account credentials, so it's not even clear to me that this change is compatible with the workflow as it stands (but I could be wrong). FWIW, I opened this issue: https://github.com/cloudposse/terraform-aws-codebuild/issues/76
Found a bug? Maybe our Slack Community can help. Describe the Bug The module was updated to include a versioning configuration, with the default set to enable it, with mfa_delete also defaulted to …
The linked thread and module’s variable help point to it not being possible to configure mfa_delete
using Terraform, though? I’ve yet to try this, just reading docs.
Right, I think so as well. But it appears that you also cannot set it via the CLI and then update the bucket (at least the lifecycle rule).
I’m just wondering if I am missing something, since the change to add that recently seemed pretty intentional, but it looks like it can’t be enabled at all to me.
did you figure it out @Joe Hosteny?
Hi @jose.amengual, the flag has been removed with an update to the module
I saw that late yesterday, thanks
terraform-provider-aws/releases/tag/v3.26.0 is out
NOTES:
• data-source/aws_route53_zone: The Route 53 ListResourceRecordSets
API call has been implemented to support the name_servers
attribute for private Hosted Zones similar to the resource implementation. Environments using restrictive IAM permissions may require updates. (#17002)
FEATURES:
• New Data Source: aws_imagebuilder_image
(#16710)
• New Resource: aws_imagebuilder_image
(#16710)
• New Resource: aws_prometheus_workspace
(#16882)
• New Resource: aws_sagemaker_app_image_config
(#17221)
ENHANCEMENTS:
• data-source/aws_elasticache_replication_group: Add multi_az_enabled
argument (#17320)
• data-source/aws_vpc_peering_connection: Add cidr_block_set
and peer_cidr_block_set
attributes (#13420)
• provider: Support AWS Single-Sign On (SSO) cached credentials (#17340)
• resource/aws_codeartifact_domain: Make encryption_key
optional (#17262)
• resource/aws_elasticache_replication_group: Add multi_az_enabled
argument (#17320)
• resource/aws_elasticache_replication_group: Allow changing cluster_mode.replica_count
without re-creation (#17301)
BUG FIXES:
• data-source/aws_elb_hosted_zone_id: Correct values for cn-north-1
and cn-northwest-1
regions (#17226)
• data-source/aws_lb_listener: Prevent error when retrieving a listener whose default action contains weighted target groups (#17238)
• data-source/aws_route53_zone: Ensure name_servers
is populated for private Hosted Zones (#17002)
• resource/aws_ebs_volume: Allow both size
and snapshot_id
attributes to be specified (#17243)
• resource/aws_elasticache_replication_group: Correctly update computed member_clusters
values (#17201)
• resource/aws_sagemaker_code_repository: fix doc name (#17221)
For me the new 3.26 provider breaks the usage of credential_process
and sso :disappointed:
Prior to 3.26 one needed to use something like aws-vault
in order to get terraform to play along nicely with sso.
Issue in terraform provider repo https://github.com/hashicorp/terraform-provider-aws/issues/17353
Community Note Please vote on this issue by adding a reaction to the original issue to help the community and maintainers prioritize this request Please do not leave "+1" or other comme…
Upstream go-sdk issue https://github.com/aws/aws-sdk-go/issues/3763
I've gone though Developer Guide and API reference I've checked AWS Forums and StackOverflow for answers I've searched for previous similar issues and didn't find any solution Descr…
will test this week.
Let me know what you find out - for now rolling back to 3.25 as it does not seem to support sso yet
More discussion here https://github.com/hashicorp/terraform-provider-aws/issues/10851
Seems like the aws provider 3.26 can work with SSO - however for the state backend we need terraform 0.15 to work with sso
I see, so far I’m working with aws-vault without any issue with latest Terraform
I only wish for aws-vault to work with TouchID
:thinking_face: - strange - are you using credential_process?
No, I’m not.
are you then rather performing the wrapped call with aws-vault ... exec terraform?
I’m using subshell,
aws-vault:
aws-vault --version
aws-vault exec $(PROFILE)
then calling Terraform from subshell
:thinking_face: - interesting - now I need to find a way to switch to the correct subshell via .env / direnv
simple:
aws-vault exec $PROFILE_1
terraform init/plan/apply/*
exit
aws-vault exec $PROFILE_2
terraform init/plan/apply/*
exit
https://www.hashicorp.com/blog/terraform-mono-repo-vs-multi-repo-the-great-debate @Erik Osterman (Cloud Posse) @Matt Gowie @jose.amengual the debate from them, similar to the stacks approach; the only thing I would do is add stacks to env folders
Learn about the pros and cons of using mono repositories and multi repositories along with the most logical use case for each.
hmmm…
Multi-repo advantages:
* You can apply module versioning using release tagging and existing tag constructs.
Multi-repo disadvantages:
* If you have a configuration that references many remote modules, Terraform will take time to download them. Try using git submodules to clone the remote repository and store it locally based on commit.
When using git submodules, I presume the source
reference would be a local relative path… what is the git syntax for that, and do submodules fetch git tags?
Since I use git I have always thought using submodules was a bad idea; it's like svn external repos
i dont believe relative paths fetch git tags
i use submodules for packaging reasons sometimes… it’s an easy way to pull in any external project that can be updated by tools like dependabot/renovatebot and exercised by CI in the associated pull request
i dont believe relative paths fetch git tags
i was kinda imagining a syntax something like:
source = git::file://<path>?ref=<tag>
hm maybe that works. ive used this in the past.
source = "./test/module"
yeah a pure relative path definitely works. but it doesn’t support the git syntax for tags/refs. i’m just trying to reconcile the seemingly conflicting recommendation from the blog post
i.e. “version modules with tag constructs” and “try git submodules if you have lots of modules”
We're kicking the tires on vendir as an alternative to things like git subtrees and git submodules. https://github.com/vmware-tanzu/carvel-vendir
Easy way to vendor portions of git repos, github releases, helm charts, docker image contents, etc. declaratively - vmware-tanzu/carvel-vendir
Supports many more ways to vendor/version.
nice, now to get support for vendir.yml updates into dependabot/renovatebot
haha, yea… though I think it has its own concept for that.
…with a lock file
2021-01-29
Hi, I had a terraform resource (route53 dns record) created by provider-a (one aws account), but I have now manually moved this resource to provider-b (another aws account). I want to change the terraform state to reference this new manually created resource; what's the way to do that?
@Laurynas terraform import https://www.terraform.io/docs/cli/import/index.html
Terraform is able to import existing infrastructure. This allows you take resources you’ve created by some other means and bring it under Terraform management.
Thanks. I don't think it's possible to do imports from one provider to another? A bit more detail on what I have:
provider "aws" {
alias = "us-east-1"
region = "us-east-1"
profile = var.aws_profile
}
provider "aws" {
alias = "route53_aws_profile"
region = "us-east-1"
profile = var.route53_aws_profile
}
and resource:
resource "aws_route53_record" "cloudfront" {
provider = aws.route53_aws_profile
}
I need to change the provider of the route53 record to "us-east-1"
ok what worked was state rm
and then using import
it’s a bit dangerous but works
re-reading, i think the latter… remove it from the state first, then import it
yep, i call it tfstate gentle massage
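For anyone repeating this, the sequence amounts to pointing the resource at the other provider alias and re-seeding the state. A hedged sketch (zone id and record values are placeholders; the Route 53 import ID format is zone id, record name, and type joined by underscores):
resource "aws_route53_record" "cloudfront" {
  provider = aws.us-east-1 # was aws.route53_aws_profile

  # after changing the provider, re-seed the state:
  #   terraform state rm aws_route53_record.cloudfront
  #   terraform import aws_route53_record.cloudfront Z123456ABC_www.example.com_A
  zone_id = "Z123456ABC" # placeholder zone in the new account
  name    = "www.example.com"
  type    = "A"
  ttl     = 300
  records = ["192.0.2.1"]
}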
Anybody here using privately built, locally saved plugins in Terraform? (that is, not a binary that was automatically downloaded via terraform init)
We were using one to get kms encrypted secrets from our secrets s3 bucket but then i realized that a data source s3 object (with the object as type text) could replace the plugin. The plug-in is still in use due to old code but it’s used less frequently.
Can you share an example of what the providers file or required_providers block looks like?
we did it for 0.12.x and below so we were downloading it directly to ~/.terraform/plugins/ or some directory
To download that plugin, we hosted it in a separate s3 bucket. It would be better to put it in Artifactory I'd imagine, but it's old so we'll deprecate it eventually
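For Terraform 0.13+, the documented way to serve privately built providers from disk is a provider_installation block in the CLI config; a minimal sketch with a hypothetical namespace and path:
# ~/.terraformrc (CLI configuration, not a .tf file)
provider_installation {
  # serve in-house providers from a local directory ...
  filesystem_mirror {
    path    = "/usr/local/share/terraform/providers"
    include = ["example.internal/*/*"]
  }
  # ... and everything else from the public registry as usual
  direct {
    exclude = ["example.internal/*/*"]
  }
}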
Hi, I have a question about GCP Cloud Run: is it good practice to have it behind a Load Balancer? Is it possible? I'm trying to have a serverless google_compute_region_network_endpoint_group and then point that to a backend service, but it's failing because of the health check. Any ideas?
I'm not super familiar with GCP, but typically the job of a Load Balancer is to route traffic through to healthy instances. If an instance is unhealthy the LB should route traffic to a different instance. Typically you place your instances in an auto-scaling group (to compensate for demand), then you designate a load balancer to route traffic through to the service instances. Hope this helps; correct me if I'm wrong.
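On the Cloud Run specifics: the usual wiring is a serverless NEG behind a backend service, and the wrinkle is that serverless NEG backends do not take a health check at all, which may be exactly what the error is about. A hedged sketch with illustrative names (the Cloud Run service resource is assumed to exist elsewhere):
resource "google_compute_region_network_endpoint_group" "cloudrun" {
  name                  = "cloudrun-neg"
  region                = "us-central1"
  network_endpoint_type = "SERVERLESS"

  cloud_run {
    service = google_cloud_run_service.app.name
  }
}

resource "google_compute_backend_service" "cloudrun" {
  name = "cloudrun-backend"

  backend {
    group = google_compute_region_network_endpoint_group.cloudrun.id
  }

  # note: no health_checks argument here; serverless NEG backends reject them
}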
Question for AWS users. Has anyone figured out how to use cli MFA with terraform?
aws-vault works well
ive been playing with aws sso integrated with aws-vault which is superb so far
Maybe we could do that. We are trying to correct preexisting issues. Do you have a good example?
we use okta as our sso
so maybe look into this https://docs.aws.amazon.com/singlesignon/latest/userguide/okta-idp.html
Learn how to set up SCIM provisioning between Okta and AWS SSO.
then play with the aws sso login
command
that command won’t work unless you’re authenticated with your sso provider (which requires mfa)
nice! https://github.com/cloudposse/terraform-aws-cloudtrail-cloudwatch-alarms/pull/26/files#r565039646
what BridgeCrew compliance checks fix readme updated default behaviour changed: Encrypt SNS Topic Data enabled by default why To be able to position our modules as standards compliant Providing…
awesome !
We've been using Checkov for one year now; it's part of the shift-left movement
Module dependencies will finally get documented soon in terraform docs
Prerequisites Put an x into the box that applies: This issue describes a bug. This issue describes a feature request. For more information, see the Contributing Guidelines. Description The goal is …
long awaited, thanks for sharing
2021-01-30
I need help in Terraform; I'm getting this error on terraform apply
i am using terraform *v0.12.0* and the AWS eks module *v5.0.0*
Error: Incorrect attribute value type
on .terraform/modules/eks/workers_launch_template.tf line 40, in resource "aws_autoscaling_group" "workers_launch_template":
40: vpc_zone_identifier = lookup(
Inappropriate value for attribute "vpc_zone_identifier": set of string
required
Paste your terraform template.
How do I pass the variable *vpc_zone_identifier* in Terraform?
Terraform Template in the sense of *main.tf*
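That attribute is set inside the module from its subnet input, so the fix is usually passing a list of subnet ID strings rather than a single string. A hedged sketch against the community EKS module of that era (input names are from memory of the v5 README and may differ by version):
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "5.0.0"

  cluster_name = "example"
  vpc_id       = module.vpc.vpc_id

  # must be a list of subnet ID strings; a bare string here produces
  # "Inappropriate value ... set of string required"
  subnets = module.vpc.private_subnets
}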
2021-01-31
love when tf cloud is down
Anyone played with Terraform CDK? Thoughts? When i think about the truly awful code I’ve written in HCL to express quite basic logic, it is appealing…
My general feeling with CDKish things is that infrastructure code is generally a “known desired end state” process which I think is why declarative syntax is a much better fit for it.
That said, I feel your pain on obtuse HCL logic; I would rather this be solved in HCL itself than in another abstraction layer.