#atlantis (2020-04)
Discuss Atlantis (https://runatlantis.io)
**Archive:** https://archive.sweetops.com/atlantis/
2020-04-03
For folks using atlantis.yaml per-repo settings files to define many projects, I figured I'd see if I could reduce a lot of repetition in my config file using YAML anchors and references. I came up with the following anchor system. Atlantis will process the two anchor templates as projects, but since the directory doesn't exist it never detects any changes and so never runs them. I couldn't define the anchors in a separate section, as atlantis got mad about that.
projects:
# Project Definitions
- &terraform_project
  name: "template-terraform-project"
  dir: '.empty'
  workflow: "make"
  workspace: "default"
  terraform_version: "v0.11.14"
  autoplan:
    when_modified:
    - "Makefile*"
    - "*.tf"
    - "*.tfvars"
    - "*.envrc"
    enabled: true
  apply_requirements:
  - "approved"
## Shared Infrastructure
- <<: *terraform_project
  name: "account-dns"
  dir: "conf/account-dns"
- <<: *terraform_project
  name: "aws-metrics-role"
  dir: "conf/us-east-1/aws-metrics-role"
- <<: *terraform_project
  name: "chamber"
  dir: "conf/us-east-1/chamber"
- <<: *terraform_project
  name: "cloudtrail"
  dir: "conf/cloudtrail"
It’s worked in my limited testing thus far.
Could you have used e.g. account-dns as the template project, or does it get invoked every time, so it needed to be a non-existent thing? Not tested yet
2020-04-09
trying to use this module to set up Fargate but having issues. anyone using the same module and have it working as expected?
https://github.com/terraform-aws-modules/terraform-aws-atlantis
Terraform configurations for running Atlantis on AWS Fargate. Github, Gitlab and BitBucket are supported - terraform-aws-modules/terraform-aws-atlantis
I have not used it, I used the cloudposse one
my terraform
module "atlantis" {
source = "terraform-aws-modules/atlantis/aws"
version = "~> 2.0"
name = local.name
# VPC
vpc_id = data.aws_vpc.selected.id
public_subnet_ids = local.public_subnet_ids
# TODO: use private subnets instead of public
# fargate in private. need to be in same set of AZs as public
private_subnet_ids = local.public_subnet_ids
ecs_service_assign_public_ip = true
route53_zone_name = "internal.snip.com"
certificate_arn = data.aws_acm_certificate.internal.arn
# create_github_repository_webhook = true
# Atlantis
atlantis_github_user = "atlantis-bot"
# TODO: use s3keyring or ssm
atlantis_github_user_token = "snip"
atlantis_repo_whitelist = [
"github.com/terraform-aws-modules/*",
"github.com/snip/*",
]
allow_repo_config = "true"
# Atlantis
atlantis_allowed_repo_names = [
"snip/terraform-scripts"
]
}
hmmm, perhaps I should switch to the cloudposse one or at least try it side by side
the cloudposse one is recommended for use with their fork
you could use the atlantis from runatlantis but you will have to feed some different config to make it work
ah perhaps that's what I'm missing. I was having trouble figuring out how to apply the server configuration too
the cloudposse module uses Parameter Store to store the configs and values, and I used chamber to populate them in the Dockerfile
interesting! ok ill focus more on the fork then
do you still require repo-specific atlantis.yml configs?
I was going to say : to make it easier for you
here are the files I used
buildspec for codebuild:
atlantis.yaml
atlantis-repo-config
there
that is all you need
I put this all in the same repo but you do not have to
awesome! you the man pepe
ill give it a go
I tried it with one repo
but you could have the Dockerfile + buildspec + atlantis repo config in one repo, use that as an atlantis ECS cluster for your AWS account, and run multiple repos against the same atlantis
OR
you could have all these files in all the repos and build atlantis from scratch every time (codebuild + codepipeline) for every repo that has the webhooks configured
ah so this is a local installation of atlantis then
this is a relevant thread that explains the why
SweetOps Slack archive of #atlantis for October, 2019.
no this is for running on ECS+fargate
I’m using https://github.com/terraform-aws-modules/terraform-aws-atlantis , whats the problem with it?
Terraform configurations for running Atlantis on AWS Fargate. Github, Gitlab and BitBucket are supported - terraform-aws-modules/terraform-aws-atlantis
I've listed my config further up in the thread and I can't seem to get the github account to post on my PR when I write atlantis plan
first time using fargate so just starting to debug
ah I’ve gotten it to work. I forgot to add the webhook
now I'm hitting an issue where we use private terraform modules and the user doesn't have access to those modules, so it fails
Hello, I'm trying to use a module sourced from a private github repo: module "module-name" { source = "git://github.com/<org>/<repo>.git?ref=0.0.2" } but …
ATLANTIS_WRITE_GIT_CREDS may be useful to you
yep that did it, thank you
I found this nice blog post on it too https://3h4x.github.io/tech/2020/02/26/terraform-atlantis.html
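For reference, a minimal sketch of how ATLANTIS_WRITE_GIT_CREDS could be passed to the Atlantis container with the terraform-aws-modules Fargate module, assuming the module exposes a custom_environment_variables input for extra container env vars (check the docs for your module version):

module "atlantis" {
  source  = "terraform-aws-modules/atlantis/aws"
  version = "~> 2.0"

  # ... rest of the config from earlier in the thread ...

  # Assumed input: custom_environment_variables passes extra env vars to the
  # Atlantis container. ATLANTIS_WRITE_GIT_CREDS makes Atlantis write a
  # ~/.git-credentials file so terraform init can fetch private module repos.
  custom_environment_variables = [
    {
      name  = "ATLANTIS_WRITE_GIT_CREDS"
      value = "true"
    },
  ]
}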
2020-04-10
2020-04-11
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
I find the doc and this a bit confusing
the docs here : https://github.com/cloudposse/terraform-aws-ecs-atlantis/blob/master/README.md#github-repo-scopes
talk about github_oauth_token and github_webhooks_token
which need to be created before the module is initialized, but then it says that if they are not provided they will be looked up in Parameter Store
and the instruction is that you can write the values with chamber:
chamber write atlantis atlantis_gh_token "..."
chamber write atlantis github_webhooks_token "..."
so github_oauth_token and atlantis_gh_token are the same thing, but called differently
what I mean is that what you are asked to create on the github side does not match the names in Parameter Store
I don’t know I just find it a bit confusing
working with the cloudposse atlantis module I’m having a problem with the github webhooks
the error I get after creating the webhooks and then changing the team access to read
Error: POST https://api.github.com/repos/xxxxx/terraform-xx-xx-ecs-cluster/hooks: 404 Not Found []
  on .terraform/modules/repo_webhooks/main.tf line 6, in resource "github_repository_webhook" "default":
   6: resource "github_repository_webhook" "default" {
it tries to create the webhooks again
well, it is trying to look up the webhooks
but read permission at the repository level does not allow you to see webhook configs
so I don’t know how to fix this
I vaguely recall there’s no way to update github webhooks using the provider
so I would need to taint them and reapply
but if you taint, can you then re-apply with read permissions?
oh no, i never tried changing permissions
but i did try changing the webhook settings and that’s when this happened
the readme says to change to read after the webhook is created
but that breaks things
so maybe having another repo for the webhooks may be a better idea
@Andriy Knysh (Cloud Posse) might recall what to do here… but it’s been a while
I'm cleaning up my atlantis setup and I think I've got everything pretty much figured out
using one atlantis for multiple repos in the same environment
was anyone able to recall what can be done?
taint?
2020-04-12
2020-04-13
i have atlantis working and have
atlantis_repo_whitelist = [
  "github.com/myorg/*",
]

# Atlantis
# Github repositories where webhook should be created
atlantis_allowed_repo_names = [
  "myorg/terraform_scripts"
]
but almost every PR in my org is receiving atlantis plan comments, when I only want it to show plans for myorg/terraform_scripts
if I set my atlantis_repo_whitelist to only myorg/terraform_scripts, then I see Error: repo not in whitelist or similar as a comment
format is:
atlantis_repo_whitelist = ["github.com/org/terraform-xx-xx-ecs-cluster","github.com/org/terraform-xx-xx-rds"]
wildcards seem to be used for all repos in an org
I had the same problem
if I don't set the wildcard, I receive the Error message commented on every PR. did you also see this? if so, how did you prevent that?
no, I did not see that
I allowed just two repos to use atlantis so far
and I'm using the cloudposse fork, but I don't think that matters much
plus I set up my workflows as repo config on the atlantis server
ah interesting. our tf is a mixed bag so I have to do an atlantis.yml per repo to configure it correctly
if I cannot configure the official tf module, I'll prob have to go the cloudposse route, which I'm ok with
so you kinda want all the repos that match myorg/terraform_scripts to use atlantis, and nothing else?
exactly!
just to test this out, configure it correctly, and then manually start adding new repos to it
we have a number of terraform modules that don't even use a backend, so I wouldn't want atlantis to even touch those repos until they are migrated
(their tfstates unfortunately have to be committed)
I would recommend disabling autoplan in all repos, it can get overwhelming; I would do that on the repo side
when I said repo side, I mean the repo config side, so the atlantis config side
so users are forced to comment on the PR or branch to get a plan
that's a good idea. let me figure out how to do this
is there an env variable for this on the server level ?
ah it looks like it’s only possible on the repo level https://www.runatlantis.io/docs/repo-level-atlantis-yaml.html#use-cases
Atlantis: Terraform Pull Request Automation
bash-4.4# atlantis version atlantis 0.4.13 Issue: I'd like to be able to disable automatic planning as a server flag. I see this option if using atlantis.yaml files. https://www.runatlantis.io/…
yuck… so that means I'd have to add an atlantis.yml file for hundreds of repos in order to prevent auto planning
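For reference, the repo-level file needed to turn off autoplanning is tiny; a minimal sketch, with placeholder project name and dir:

version: 3
projects:
- name: example        # placeholder project name
  dir: .               # placeholder directory
  autoplan:
    enabled: false     # plans only run when someone comments "atlantis plan"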
yes repo level
mmmm no
you can do a match for all repos with /.*/
and disable those global configs
then you do the ones for myorg/terraform_scripts
I'm not sure if I follow you
what do you mean by "and disable those global configs"?
I got around this by creating a new user exclusively for atlantis and only adding the user to the private repo
you can have something like this:
repos:
# id can either be an exact repo ID or a regex.
# If using a regex, it must start and end with a slash.
# Repo IDs are of the form {VCS hostname}/{org}/{repo name}
- id: /.*/
  # apply_requirements sets the Apply Requirements for all repos that match
  apply_requirements: [approved, mergeable]
  # allow_custom_workflows defines whether this repo can define its own
  # workflows. If false (default), the repo can only use server-side defined workflows
  allow_custom_workflows: false
  # allowed_overrides specifies which keys can be overridden by this repo in
  # its atlantis.yaml file (e.g. to choose one of the workflows created in this file)
  allowed_overrides: [apply_requirements, workflow]
or one for each repo, etc.
so for the global id: /.*/
you could disable auto plan
ah ok but there still has to be one for each repo
I think I understand what you’re saying now. Thank you for sending me the config!
I think now that I can test in my org's terraform_scripts repo, we can configure our repo-level atlantis.yml file without it commenting on every PR in the org
going forward, I'll create a new atlantis.yml file in the new repo, then add our bot user to that new repo
the rule - id: /.*/ will match all repos
then you can do each repo if you need to
oh! I see. so I could have this user added to all repos and configure it right from the server atlantis yaml
correct, that is what I think it will do
I only configure the repos I want with the webhooks
not all the repos
and I add the Team where the atlantis user is
mmm autoplan is not a server level config
Atlantis: Terraform Pull Request Automation
is repo level
that sucks
yea I saw that. unfortunate. see the ticket I mentioned above. it seems like the maintainer doesn't think it's a good idea to do, but I don't fully understand the reasoning
you could do that
although I can see the reason why not to disable autoplan
I guess the idea is to run plan against the PR to master
and you enable the webhooks per repo, not globally in your VCS
looking at my fargate logs, even though I only have a single repo whitelisted with a single webhook, the app keeps trying to reach out to every new PR but is getting a 404 (because I haven't added the bot to their private repos)
weird that it would still be trying to reach out to each PR. I figured it would know to only look at PRs coming from the whitelisted repos
mmm I do not see that
the event comes from the repo
atlantis receives the event
is there a way to configure it for a single repository in an org without it commenting "error" on all other repos' PRs?
thanks!
management doesn't seem to be completely sold on atlantis. besides:
• just like we have cicd of code, we can also have cicd of terraform
• auditability of terraform plans
• a number of top companies using it (lyft, shopify, pagerduty, hootsuite, cloudposse) and hashicorp owns it
• locking of modules within prs
• only apply changes that are approved
notable posts:
• https://medium.com/runatlantis/terraform-and-the-dangers-of-applying-locally-543563782a73
• https://medium.com/runatlantis/introducing-atlantis-6570d6de7281
• https://docs.google.com/presentation/d/1X4VGx-R8UZWE_2s7I8IxcbWsav1kR-QosmHI8kKaIZc/htmlpresent
• https://www.reddit.com/r/devops/comments/cakyfp/psa_love_terraform_love_cicd_you_want_to_run/
are there more reasons to use it?
In my experience management will almost never be “sold” on something like this. This has to come from you, as the subject matter expert, as a requirement. Asking for their permission is shifting the risk over to them, and their number one priority is to avoid risk.
Devs do not need to set up terraform on their machines, and TF is run by atlantis using instance profiles that are fully auditable
the infra changes are in a VCS
2020-04-16
2020-04-17
Is there a way to limit which environment atlantis runs against, without using workspaces?
in my case I have one atlantis per aws account = environment
only selected repos can trigger webhooks
and only a few people can do applies
using terraform atlantis cloudposse module
but my terraform project structure is :
./
  main.tf
  variables.tf
  staging.tfvars
  staging-backend.tfvars
  production.tfvars
  production-backend.tfvars
so when we run tf we basically run terraform init -backend-config=staging-backend.tfvars, then plan, then apply
but the account or role used is only for the staging account
and atlantis is deployed to each specific account
so I want to limit the scope of what they can run for each atlantis deployment
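One way to scope a per-account Atlantis to a single environment with this layout is a server-side custom workflow that hard-codes the backend and var files; a sketch for the staging account, using the file names from the structure above (the workflow name is arbitrary):

workflows:
  staging:
    plan:
      steps:
      - init:
          extra_args: ["-backend-config=staging-backend.tfvars"]
      - plan:
          extra_args: ["-var-file=staging.tfvars"]
    apply:
      steps:
      - apply

Repos (or their atlantis.yaml projects) would then reference workflow: staging, and the production account's Atlantis would carry the production equivalent.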
We have the same setup where we have an Atlantis in each account. For accounts that handle multiple environments we went with separate directories and try to follow a strict process of only making changes in one environment in a given PR.
ok I c, same thing we are going to do
cool
2020-04-25
2020-04-28
@Alex Siegman How are your Atlantis anchors working out? May need that real soon!
Works perfectly. It's a little janky to have to specify some "fake projects" to use as templates, but it has made the file a lot more readable.
Yup, my atlantis.yml just hit 1K lines, gonna give this a whirl, thanks @Alex Siegman!
Mine went from 774 lines to 331 lines, just checked the PR
1047 -> 357, still to test though
random disclaimer just to chime in. Anchors are awesome, but merging of values is often not recommended for terraform, as handling is library-specific. Just something to mention in case things get more complex: an update to an underlying library, combined with a dependency on merging of values, can cause problems. I imagine anchors without overrides are no issue whatsoever.
Oh, and I'm talking about yaml specifically, not merging of values in terraform necessarily. It's just something I've come across in various sources as I experimented with yaml anchors and merging and then using that in terraform. I decided against merging of values itself, and use anchors only as global constants, because that potential problem could be a silent, difficult-to-debug problem in the future.
Not sure what you mean, nope, deep merge isn’t a thing.
I'm saying yaml anchors are cool, I just wanted to mention that if using them to provide "default values" that then get overridden, I decided against that behavior. Said this for future reader benefit, not that you are doing this.
Merging of even basic yaml anchors is not standardized and has risk of breaking/unpredictable behavior. That's all. Not correcting, just mentioning it in case someone thinks of trying yaml defaults/overrides
Good note; the library that atlantis seems to use works as expected from my testing. At least for the example I put above
2020-04-29
does anybody here have experience with atlantis multi account setup?
multi account as in?
one atlantis for many accounts, or one atlantis per account?
one atlantis many accounts
and you need to be specific about environments too
multiple environments per account ?
one account per environment
one environment per account
and what are your concerns ?
I'm running atlantis in my root account, but I cannot assume the role in my child account
it fails during initialization because it can't find the s3 backend
I'm using the ecs fargate module
and the task role has an administrator policy attached
I can't figure out if I have to use a credentials file, env variables or just the role
are you specifying the region of the remote backend ?
instance roles are sufficient
although I do not recommend one atlantis for all accounts
it’s fargate, so only task role
blast radius is too big
fargate task is the same
so you recommend one atlantis per account?
yes, or per team if each team has multiple accounts
your backend config should look like this:
region         = "us-east-2"
bucket         = "mybucket-state"
key            = "terraform.tfstate"
dynamodb_table = "mybucket-state-lock"
encrypt        = true
it looks like this
does it have the region ?
yes
and are you getting 403 or bucket not found ?
if I run it manually from my laptop it works
bucket not found
it always works on your local
but what is the error code?
does the fargate role have Allow s3:* ?
Remote state S3 bucket xxxxx-terraform-state does not exist or you don't have permissions to access it. Would you like Terragrunt to create it? (y/n)
ok, so it could be a 403
fargate role has admin policy attached
the s3 bucket is in the child account
child ?
another account, right, not the same one?
yes
you still have to specify the bucket in the policy for the role, and on the bucket you need to allow the atlantis account to have access
this is a problem of cross-account access to s3 buckets
no matter if they are in the same org
The second Amazon S3 walkthrough example: How a bucket owner can grant users cross-account bucket permissions.
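A sketch of what that cross-account bucket policy could look like in terraform, applied in the account that owns the state bucket; the account ID and bucket name are placeholders:

# Assumption: 111111111111 is the account where Atlantis runs; the bucket
# lives in the child account that owns the state.
resource "aws_s3_bucket_policy" "state_cross_account" {
  bucket = "mybucket-state"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowAtlantisAccountStateAccess"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::111111111111:root" }
        Action    = ["s3:ListBucket", "s3:GetObject", "s3:PutObject"]
        Resource = [
          "arn:aws:s3:::mybucket-state",
          "arn:aws:s3:::mybucket-state/*",
        ]
      },
    ]
  })
}

The DynamoDB lock table can't be shared with a resource policy the same way, so assuming a role in the child account (as discussed below) is often the cleaner route.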
but I assume the role in the backend definition
or at least I thought so
is the bucket encrypted ?
KMS ?
well anyhow it should work
if you are assuming the role it should work, so maybe the trust policies are not right
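A sketch of the assume-role variant being discussed, with placeholder role ARNs in the child account (both the backend and the provider can assume a role there):

terraform {
  backend "s3" {
    region         = "us-east-2"
    bucket         = "mybucket-state"
    key            = "terraform.tfstate"
    dynamodb_table = "mybucket-state-lock"
    encrypt        = true
    # the backend assumes a role in the child account that owns the state
    role_arn = "arn:aws:iam::222222222222:role/terraform-state"  # placeholder
  }
}

provider "aws" {
  region = "us-east-2"
  # the provider assumes a role in the child account for the resources themselves
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform"  # placeholder
  }
}

For this to work, the trust policy on those roles has to allow the Fargate task role to assume them, which is what the comment above about trust policies is pointing at.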
you can start a t2.micro using the same role as fargate and use the cli
i’ll try
configure the ~/.aws/config
using the instance profile
and see if it works
ok, thanks
is the ~/.aws/config in the container necessary?
for the aws cli to work yes
Use IAM instance profiles to pass a role to an Amazon EC2 instance when the instance starts.
that one
credential_source = Ec2InstanceMetadata
the thing is, according to this https://www.runatlantis.io/docs/provider-credentials.html#aws-specific-info, it says that the AWS fargate module has its own way to provide credentials
Atlantis: Terraform Pull Request Automation
but i can’t figure out which one it is
but it should be using the fargate role permissions
I run it on fargate and I do not have this problem
but my buckets have bucket policies for cross account access
do you specify the role arn in the provider?
and for the state it has a policy allowing the account to have access
no I do not, I do not assume a role
you can run a script through atlantis, so you could try running commands to list the bucket and see what you get
yeah i guess i’ll have to do some more testing
but if you try it from an instance and it works, then it should work
then at least you know it is a config issue in atlantis
yes, i’ll try that
thanks
Have you looked at the free version of Terraform Cloud?