#office-hours (2020-09)
Public “Office Hours” are held every Wednesday at 11:30 PST via Zoom. They’re open to everyone. Ask questions related to DevOps & Cloud and get answers! https://cloudposse.com/office-hours
https://cpco.io/slack-office-hours
Meeting password: sweetops
2020-09-02
@here office hours is starting in 30 minutes! Remember to post your questions here.
I’m curious to know what the overall strategy is for handling the new version of the AWS provider.
Erik Osterman (Cloud Posse) has joined Public “Office Hours”
Jeff Wozniak has joined Public “Office Hours”
Anton Shakh has joined Public “Office Hours”
Vlad Ionescu has joined Public “Office Hours”
Soham Jadiya has joined Public “Office Hours”
Sheldon Hull has joined Public “Office Hours”
17133029948 has joined Public “Office Hours”
Ian Bartholomew has joined Public “Office Hours”
@here our devops #office-hours starting now! join us to talk shop https://cloudposse.zoom.us/j/508587304
Andrey Nazarov has joined Public “Office Hours”
Michael Holt has joined Public “Office Hours”
Kareem Shahin has joined Public “Office Hours”
Eric Berg has joined Public “Office Hours”
Neil Gealy has joined Public “Office Hours”
nat lie has joined Public “Office Hours”
Adam Crown has joined Public “Office Hours”
Hugo Samayoa has joined Public “Office Hours”
Matt Gowie has joined Public “Office Hours”
Jawwad Yunus has joined Public “Office Hours”
Isa Aguilar has joined Public “Office Hours”
James Connolly has joined Public “Office Hours”
Babajide Hassan has joined Public “Office Hours”
Sean Conley has joined Public “Office Hours”
Marc Tamsky has joined Public “Office Hours”
Nick James has joined Public “Office Hours”
Eric Berg has joined Public “Office Hours”
John D has joined Public “Office Hours”
For versioning this is nice.
I have this running right now in a similar manner. I use GitVersion, which calculates the semver based on branching. If you make a breaking change you manually set the tag to bump it; otherwise the patch versions generate pre-release draft releases on the branch and normal minor/patch bumps.
Sheldon Hull has joined Public “Office Hours”
Andrew Roth has joined Public “Office Hours”
Olivier Chaine has joined Public “Office Hours”
Zadkiel AHARONIAN has joined Public “Office Hours”
I adopted the null label stuff and love it. All my resources have randomized pet names with a standard prefix. I’ve been wanting to figure out the null label internals, so I’m excited to try this. The submodules having their own null labels has confused me, but this looks like it will help with that problem.
Nothing like provisioning a bunch of servers and my coworkers seeing “snarky-puppy-rds-foobar”
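Roughly what that pattern can look like, as a minimal sketch (the namespace/stage values and the module version pin are just placeholders):
terraform {
  required_providers {
    random = { source = "hashicorp/random" }
  }
}

resource "random_pet" "name" {
  length = 2 # e.g. "snarky-puppy"
}

# Cloud Posse's null label module builds consistent IDs/tags from context inputs.
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # placeholder; pin to whichever release you use

  namespace = "acme" # placeholder prefix
  stage     = "prod"
  name      = random_pet.name.id
}

output "resource_id" {
  value = module.label.id # => something like "acme-prod-snarky-puppy"
}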
Sheldon Hull has joined Public “Office Hours”
Rube goldberg lol. Totally.
I really like this solution for DB operations too: https://aws.amazon.com/blogs/database/building-a-cross-account-continuous-delivery-pipeline-for-database-migrations/
Interesting. You are saying, for IAM service accounts, that you wouldn’t manage this user provisioning through a master terraform security repo, for example? How do you set up the user provisioning to be IaC at that point?
Can we get a picture of this diagram? And would you mind addressing sometime why terraform is only on the foundational tier, when I’d guess that it has an impact on all of the tiers?
AWS Controllers for Kubernetes (ACK) is a project enabling you to manage AWS services from Kubernetes - aws/aws-controllers-k8s
Sheldon Hull has joined Public “Office Hours”
If we have time at the end, I want to know what others are doing to provision their IAM user and defined role/groups across accounts via code. Are you using terraform pull request driven workflow, lambda with json in s3 buckets, etc?
https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs :
To use these credentials with the Kubernetes provider, they can be interpolated into the respective attributes of the Kubernetes provider configuration block.
IMPORTANT WARNING: When using interpolation to pass credentials to the Kubernetes provider from other resources, these resources SHOULD NOT be created in the same apply operation where Kubernetes provider resources are also used. This will lead to intermittent and unpredictable errors which are hard to debug and diagnose. The root issue lies with the order in which Terraform itself evaluates the provider blocks vs. actual resources. Please refer to this section of Terraform docs for further explanation.
The best practice in this case is to ensure that the cluster itself and the Kubernetes provider resources are managed with separate apply operations. Data sources can be used to convey values between the two stages as needed.
Providers are responsible in Terraform for managing the lifecycle of a resource: create, read, update, delete.
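A minimal sketch of that two-stage pattern for EKS (the cluster name is a placeholder; the cluster itself would be created by a separate apply first):
# Stage 2: a root module separate from the one that creates the cluster.
data "aws_eks_cluster" "this" {
  name = "example-cluster" # placeholder; created in stage 1
}

data "aws_eks_cluster_auth" "this" {
  name = data.aws_eks_cluster.this.name
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

resource "kubernetes_namespace" "apps" {
  metadata {
    name = "apps"
  }
}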
GKE module also has some workarounds: https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/blob/4d33759bb6e913586f9d0e2705d6eb2fb6c43a23/cluster.tf#L252
A Terraform module for configuring GKE clusters. Contribute to terraform-google-modules/terraform-google-kubernetes-engine development by creating an account on GitHub.
Deploy Helmfile releases from Terraform. Contribute to mumoshu/terraform-provider-helmfile development by creating an account on GitHub.
Today we’re announcing availability of the new Business tier offering for Terraform Cloud which includes enterprise features for advanced security, compliance and governance, the ability to execute multiple runs concurrently, and flexible support options.
Managing Terraform workspaces with the Terraform Enterprise (tfe) provider (importing from YAML, perhaps) is the only scalable way to do this
You have to manage terraform workspaces via code at that point
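A rough sketch of that idea with the tfe provider and a YAML catalog (the file name, keys, and organization are made up):
provider "tfe" {
  # the token usually comes from the TFE_TOKEN environment variable
}

locals {
  # workspaces.yaml (hypothetical) maps workspace names to settings, e.g.:
  #   networking: { terraform_version: "0.13.3", auto_apply: false }
  workspaces = yamldecode(file("${path.module}/workspaces.yaml"))
}

resource "tfe_workspace" "this" {
  for_each = local.workspaces

  name              = each.key
  organization      = "acme" # placeholder TFC organization
  terraform_version = each.value.terraform_version
  auto_apply        = each.value.auto_apply
}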
The challenge with managing Terraform with Terraform is pretty much that on the free tier there are no additional levels of permission for folks. You can’t have readers, just admins. So you have to bump up to a paid tier and then ensure that workspaces are NOT allowed to be created by any method other than code, or I feel it’s a lost cause to ensure this is managed consistently.
Kinda frustrating, but I don’t see how you can effectively manage manual + automated workspaces in a solid way if you don’t just have it all managed by a service account instead.
^ this sounds like an awesome topic for next #office-hours . What changes come from managing 2-3 workspaces, 10s of workspaces, 100s, 1000s
New Zoom Recording from our Office Hours session on 2020-09-02 is now available.
Following up this multi-level or multi-tier structure you showed. Having this stuff decoupled means that you define different pipelines for them. Is this like a pipeline per level? Separate repo for each? Or it might be several pipelines within the same level? By the pipeline I essentially mean terraform apply command which applies a set of modules. What is the CloudPosse approach?
How do you deal with different chicken-and-egg scenarios? Like you deploy Gitlab and its runners as level 3, but you need runners to run terraform commands on level 1 or even to deploy this Gitlab:)
Will answer next Wednesday
2020-09-03
2020-09-04
2020-09-09
@here office hours is starting in 30 minutes! Remember to post your questions here.
waiting to get in
Hi, I am trying to create a security group with
module "app_db_sg" {
source = "terraform-aws-modules/security-group/aws//modules/postgresql"
name = "${local.environment}-db-sg"
vpc_id = module.vpc.vpc_id
description = "Security group that controls access to DB"
use_name_prefix = false
computed_ingress_with_source_security_group_id = [
{
rule = "postgresql-tcp"
source_security_group_id = module.app_beanstalk_environment[0].security_group_id
}
]
number_of_computed_ingress_with_source_security_group_id = 1
}
but I am getting “One of ['cidr_blocks', 'ipv6_cidr_blocks', 'self', 'source_security_group_id', 'prefix_list_ids'] must be set to create an AWS Security Group Rule”. I just want to know how I can check that the value returned by module.app_beanstalk_environment[0].security_group_id is right. I am using tfctl, so terraform console does not work for me (or I am not sure how to use it).
not sure of an easy way outside of querying the output from state using terraform output
Sorry, I am new to terraform, but I am trying $ terraform output module.app_beanstalk_environment[0].security_group_id and getting: Warning: No outputs found
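terraform output only prints root-module outputs, so the nested value has to be re-exported first; a minimal sketch (the output name is arbitrary):
output "beanstalk_security_group_id" {
  value = module.app_beanstalk_environment[0].security_group_id
}
After another apply, terraform output beanstalk_security_group_id should print it; alternatively, terraform state show on the security group resource inside that module shows the ID without adding an output.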
@here our devops #office-hours starting now! join us to talk shop https://cloudposse.zoom.us/j/508587304
Gitpod did this recently: a full setup of their EKS environment, an export of the terraform plan, and more with a single docker run. I was pretty impressed; especially having never used Helm, it was amazing to see it all pretty much just work
David Karlsen has joined Public “Office Hours”
raphael francis has joined Public “Office Hours”
Andrew Roth has joined Public “Office Hours”
Erik Osterman (Cloud Posse) has joined Public “Office Hours”
Vlad Ionescu has joined Public “Office Hours”
Sheldon Hull has joined Public “Office Hours”
Question:
• GitHub Actions —> Any easy way to trigger an action on demand?
• GitHub Actions –> Any update on any dashboard/centralized reporting for actions that have been run in an organization?
Anton Shakh has joined Public “Office Hours”
Isa Aguilar has joined Public “Office Hours”
Adam Crown has joined Public “Office Hours”
Ian Bartholomew has joined Public “Office Hours”
PePe Amengual has joined Public “Office Hours”
Kareem Shahin has joined Public “Office Hours”
GitHub Actions — Any easy way to trigger an action on demand?
Yup, you can trigger them manually now. They have a button! https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/
GitHub Actions: Manual triggers with workflow_dispatch
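For reference, a minimal workflow sketch using that trigger (the workflow name and input are made up):
# .github/workflows/manual.yml
name: manual-run
on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Environment to target"
        required: true
        default: "staging"

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Running against ${{ github.event.inputs.environment }}"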
Victor Ma has joined Public “Office Hours”
Babajide Hassan has joined Public “Office Hours”
Taras Dyshkant has joined Public “Office Hours”
Andrew Elkins has joined Public “Office Hours”
Robert Horrox has joined Public “Office Hours”
Christopher Picht has joined Public “Office Hours”
Chef got acquired. I think that’s a big change
eeic berg has joined Public “Office Hours”
sri has joined Public “Office Hours”
Zadkiel AHARONIAN has joined Public “Office Hours”
Pedro Torres has joined Public “Office Hours”
K8S operator for scheduling github actions runner pods - evryfs/github-actions-runner-operator
@mumoshu heads up
@David J. M. Karlsen nice to meet you! FYI, I’m co-maintaining a similar operator https://github.com/summerwind/actions-runner-controller#runnerdeployments. I’m looking forward to any form of collaboration with you :smiley:
At a glance, yours seems to support podTemplate for customizing the runner pod flexibly? That sounds great. Mine has only limited support for customizing pod specs currently, although there hasn’t been much complaint about that yet.
Kubernetes controller for GitHub Actions self-hosted runnners - summerwind/actions-runner-controller
hi! I think we crossed paths in some github repo earlier!
I actually had a look at your operator in the beginning, but had a need for org-wide runners and was in contact with GH when they beta’ed it
to be fair, I was on the hunt for a project which required Go (and k8s), so that’s how it ended up there
it’s a bit tricky to run it containerized due to Docker-in-Docker (and the runners not really being designed for that to begin with), but for most cases it works fine
the next thing I’m looking into is improved security (and api quotas) by solving https://github.com/evryfs/github-actions-runner-operator/issues/75
Add support for several auth mechs (to avoid simple static tokens), which can be handled by https://github.com/palantir/go-githubapp
also investigating the version span of k8s and compatibility in https://github.com/evryfs/github-actions-runner-operator/actions?query=branch%3Amatrixbuild+workflow%3Abuild
Terraform module for scalable GitHub action runners on AWS - philips-labs/terraform-aws-github-runner
Michael Martin has joined Public “Office Hours”
Sri has joined Public “Office Hours”
Eric Berg has joined Public “Office Hours”
Maged Abdelmoeti has joined Public “Office Hours”
AWS SaaS Factory provides partners with direct access to technical and business content, best practices, and architects that can guide and accelerate their delivery of SaaS solutions on AWS.
@Erik Osterman (Cloud Posse) can you share the parsing logic of the yaml? I’ve not found many good “flatten” examples. That part would be useful in my own work if possible
here’s an example for opsgenie
https://github.com/cloudposse/terraform-opsgenie-incident-management/tree/master/examples/config
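The general shape of that parsing logic, as a rough sketch (the file name and YAML structure here are hypothetical, not the actual opsgenie config):
locals {
  # teams.yaml (hypothetical):
  #   teams:
  #     sre:      { members: ["alice", "bob"] }
  #     platform: { members: ["carol"] }
  config = yamldecode(file("${path.module}/teams.yaml"))

  # Flatten the nested map into a flat list of { team, member } objects.
  memberships = flatten([
    for team, attrs in local.config.teams : [
      for member in attrs.members : {
        team   = team
        member = member
      }
    ]
  ])
}

# for_each wants a map, so key each element with a unique string.
output "memberships" {
  value = { for m in local.memberships : "${m.team}/${m.member}" => m }
}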
^AWS SaaS Factory presentation on programmatic Control Planes.
Screenshot from the above video
Haven’t got a chance to participate today. Looking forward to watching the recorded version
Full disclosure: I’ll miss office-hours next week as I have a conflict
New Zoom Recording from our Office Hours session on 2020-09-09 is now available.
2020-09-10
I’m watching the latest episode. Regarding version-checker: Lens had the same functionality; there were some bugs in it, but it was more or less usable. Don’t know its current state though.
Ya, lens has some nice stuff for that too.
As for fat modules vs decomposition, I would join @Vlad Ionescu (he/him)’s camp. In the past we struggled a lot managing everything via just one tf apply. It looked cool at first that you could theoretically fire up all the things from the ground up. But then came the pain. It came, firstly, as Vlad pointed out, from fundamental changes in modules and, secondly, from unstable third-party or home-grown tf providers. And we encountered corrupted state quite often until we decomposed things in a way similar to CloudPosse’s 4-layered approach.
But, yes, it’s a matter of your use cases. For some, fat modules might work perfectly.
Just my two cents on this.
Hey, do you have a reference on CloudPosse’s 4-layered approach?
It was just a screen shared by @Erik Osterman (Cloud Posse) during one of the office-hours sessions. That’s all I know. Probably Erik could shed some light on it. I made a screenshot; I hope I didn’t violate anything, and sorry for the quality :)
Thanks @Andrew Nazarov for sharing the screenshot
Haven’t published it anywhere yet, but it’s definitely something we need to do because it makes it a lot easier to explain things
2020-09-11
2020-09-16
@here office hours is starting in 30 minutes! Remember to post your questions here.
Erik Osterman (Cloud Posse) has joined Public “Office Hours”
Taras Dyshkant has joined Public “Office Hours”
Vicken Simonian has joined Public “Office Hours”
Giles Billenness has joined Public “Office Hours”
Adam Crown has joined Public “Office Hours”
Neil Gealy has joined Public “Office Hours”
Andrew Roth has joined Public “Office Hours”
When referencing multiple instances of a resource with create_before_destroy, reducing the number of instances will not be correctly updated on the first apply. For example: locals { things = { fir…
Paul Obalonye has joined Public “Office Hours”
@Jeremy G (Cloud Posse) what’s the link to the cycle issue you reported?
@here starting now
@Erik Osterman (Cloud Posse) https://github.com/hashicorp/terraform/issues/26226
Terraform fails to apply a plan, citing a dependency cycle, but I think that is wrong. I am not positive, because I do not quite understand how to parse the error message I am getting; maybe if I c…
Matt Gowie has joined Public “Office Hours”
Alex Siegman has joined Public “Office Hours”
Anyone use https://github.com/jckuester/awsweeper ? Is there a better tool out there for blanking out an AWS account? When I am trying to make sure that my code creates all of the infrastructure I have, I find destroying to be nearly as important as creating.
A tool for cleaning your AWS account. Contribute to jckuester/awsweeper development by creating an account on GitHub.
No idea which is better but some people have been using https://github.com/rebuy-de/aws-nuke
Nuke a whole AWS account and delete all its resources. - rebuy-de/aws-nuke
Brian Tai has joined Public “Office Hours”
Christopher Picht has joined Public “Office Hours”
Jeremy CloudPosse has joined Public “Office Hours”
Ian Bartholomew has joined Public “Office Hours”
Paul Obalonye has joined Public “Office Hours”
David Lundgren has joined Public “Office Hours”
Sri has joined Public “Office Hours”
Kareem Shahin has joined Public “Office Hours”
Oludahun Bade-Ajidahun has joined Public “Office Hours”
Robert Horrox has joined Public “Office Hours”
Andrew Elkins has joined Public “Office Hours”
Jim Park has joined Public “Office Hours”
Sri has joined Public “Office Hours”
azam has joined Public “Office Hours”
Anton Shakh has joined Public “Office Hours”
Is anyone relying on the undefined behaviour of Helmfile where multiple negated conditions in a single selector like helmfile -l foo!=foo,bar!=bar are unexpectedly treated as an OR sometimes?
I’m redefining it to always be AND, so that the behavior is consistent:
https://github.com/roboll/helmfile/pull/1478
This might be just a bug, but I wanted to inform you all for clarity because this seems like a long-standing bug anyway. Thanks!
My question here might be worth discussing if we need a topic: https://github.com/cloudposse/terraform-aws-acm-request-certificate/pull/25#issuecomment-693419593
In my current project I need to request certificates for a zone which lives in a different account. To let this module do the validation with this zone, I needed to use an alternative AWS provider …
Sri has joined Public “Office Hours”
Anybody use a bot to merge code? I’m wondering what that looks like under the hood
the mergify config uses a series of rules with conditions and actions. when the condition matches, it applies the action
dependabot has its own config. it monitors the various package ecosystems and CVEs, and opens pull requests to update dependencies that match the conditions in its config
dependabot is a github service now, so enabling it with permissions is managed in the repo settings
mergify is a external service that has a github integration, and it needs to be approved for write permissions to the repo
and if you have branch protection enabled with the setting “Restrict who can push to matching branches”, then you need to add the mergify bot user there
We’re likely circling back to mergify after many failed attempts doing it with GitHub actions
Marc Tamsky has joined Public “Office Hours”
what: Drop codefresh pipeline for building docker image
why: Use github action instead for easier open source adoption
Omer Sen has joined Public “Office Hours”
alejandro chacon has joined Public “Office Hours”
Zadkiel AHARONIAN has joined Public “Office Hours”
pepe amengual has joined Public “Office Hours”
Isa Aguilar has joined Public “Office Hours”
Adam Blackwell has joined Public “Office Hours”
ivan pedro has joined Public “Office Hours”
[WIP] Get up and running quickly with one of our reference architecture using our fully automated cold-start process. - cloudposse/reference-architectures
Eric Berg has joined Public “Office Hours”
Juan Soto has joined Public “Office Hours”
Blaise pabon has joined Public “Office Hours”
sri has joined Public “Office Hours”
Ian Bartholomew has joined Public “Office Hours”
Side question about IAM access: we no longer use cross-account role assumption for console access and instead use OneLogin. I’m curious if this diverges from the CloudPosse reference architecture and if others do something similar with Okta or OneLogin. If there are downsides that I’m not aware of, I’d love to know about them.
(in these sample screenshots I, as an SRE, only have admin and readonly for each account, but developers often have various other roles)
@Jeremy G (Cloud Posse)
Yes, it diverges from the Cloud Posse reference architecture, which uses cross-account assume role. This provides a logistical advantage in that a single set of AWS credentials will support working on any environment. We used to generate a separate Geodesic shell and Git repo for each account, but we found that it created far too much work to keep accounts (dev/staging/prod) in sync. When we consolidated the configuration for all accounts into a single repo, the advantage of being able to assume a role in any account became much more pronounced.
This also includes having CI/CD tools that get a single set of credentials and operate on multiple accounts.
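For context, that single-credential pattern typically looks like one AWS provider block per target account that assumes a role there (the account ID and role name below are placeholders):
provider "aws" {
  region = "us-west-2"
  # base credentials for the identity account come from the environment
}

provider "aws" {
  alias  = "prod"
  region = "us-west-2"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/admin" # placeholder prod-account role
  }
}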
Second side question: aws-nuke was mentioned at the beginning of office hours, which I know is used in this reference architecture:
https://github.com/cloudposse/testing.cloudposse.co/blob/master/.github/workflows/aws-nuke.yml https://github.com/cloudposse/testing.cloudposse.co/blob/4d02425da9a97bb8e7cbe61987d511f0ed6d1e4c/.github/workflows/aws-nuke.yml
I’m curious if others use this, but chose to run the workflow on private runners and use an IAM role to avoid needing to give AWS credentials to Github and if there are cons to the second approach.
Example Terraform Reference Architecture that implements a Geodesic Module for an Automated Testing Organization in AWS - cloudposse/testing.cloudposse.co
@Adam Blackwell I created a TF module to spin this up as a scheduled task in ECS: https://github.com/masterpointio/terraform-aws-nuke-bomber
It supports what you’re targeting. I’m using it in my own testing account.
A Terraform module to create a bomber which nukes your cloud environment on a schedule - masterpointio/terraform-aws-nuke-bomber
A lot, lot more code than @Erik Osterman (Cloud Posse)’s aws-nuke GH action config, but does have some advantages.
Cool, which advantages did you have in mind when writing it?
@Adam Blackwell I think I was probably just looking for another Terraform / ECS project to open source. It’s a bit heavy weight for what it does honestly, but for your 2 mentioned requirements it does fit well:
- Can use IAM role via ECS metadata endpoint
- Private workers
It’s self-contained to the account too, so just closing the entire account would be all the cleanup you’d need to do.
Ha, that’s reasonable motivation :-).
Ya, I empathize with the “heavy weight” part. We just wanted to deploy a single container for atlantis with ECS Fargate and we ended up with https://github.com/cloudposse/terraform-aws-ecs-atlantis (a massive module)
Terraform module for deploying Atlantis as an ECS Task - cloudposse/terraform-aws-ecs-atlantis
New Zoom Recording from our Office Hours session on 2020-09-16 is now available.
Can someone chime in on the pros and cons of using terraform “workspaces”? I’m trying to see how to structure TF for multiple environments, and most of the “advanced” gurus prefer to avoid them. This is the one I’m following, and I’m so confused as a beginner newb:
https://www.oreilly.com/library/view/terraform-up-and/9781491977071/ch04.html
2020-09-17
2020-09-18
:wave: Hi guys, this is Nitin here and I have just come across this Slack channel. If this is not the right channel then please do let me know.
As part of provisioning an EKS cluster on AWS, we are exploring terraform-aws-eks-cluster:
https://github.com/cloudposse/terraform-aws-eks-cluster
What is the advantage of using the Cloud Posse Terraform module over the community-published Terraform module to provision an EKS cluster on AWS?
Thanks a lot
Do you pin the version of TF and/or your providers/plugins?
:one: No, I always use the latest Terraform and latest version of all plugins/providers
:two: I pin my Terraform (like 0.12.28) but don’t pin the providers (always use latest version of “aws” etc) (2 votes: @DJ, @pjaudiomv)
:three: I pin Terraform AND the providers (like aws 3.5.0) (3 votes: @roth.andy, @Roach, @jose.amengual)
Created by @Yoni Leitersdorf (Indeni Cloudrail) with /poll
@roth.andy what a bummer
yeah
if you are running shell script I guess you could script it
ya
2020-09-21
is there some way I can get tf to load a directory of variable files?
2020-09-22
2020-09-23
hey, apparently the docs aren’t letting me do any of this due to the provider, if I’m reading it correctly
action {
  name            = "${var.application_name}-ecs-worker"
  category        = "Deploy"
  owner           = "AWS"
  provider        = "ECS"
  input_artifacts = ["task"]
  version         = "1"

  configuration = {
    ClusterName = aws_ecs_cluster.ecs_cluster.name
    ServiceName = "${var.application_name}-worker"
    # ActionMode                     = "REPLACE_ON_FAILURE"
    # OutputFileName                 = "CreateStackOutput.json"
    # StackName                      = "MyStack"
    # ImageDefinitionsFile           = "worker-imagedefinitions.json"
    # TaskDefinitionTemplateArtifact = "task"
    # TaskDefinitionTemplatePath     = "worker-imagedefinitions.json"
  }
}
Found a bug? Maybe our Slack Community can help. Describe the Bug The version of the AWS Provider is pinned to 2.x in versions.tf. Since an installed version of AWS provider must satisfy the require…
id love to talk about this if you are open to it today
@here office hours is starting in 30 minutes! Remember to post your questions here.
anybody have any experience with or recommendations for AWS WAF alternatives like Signal Sciences or anything?
Simplest static site hosting in AWS that I can use security groups with to keep it internal?
Thinking a Fargate task that CI/CD builds with the static site and hosts with something like “ran”, and done. S3 buckets don’t seem to have anything with security groups, and EC2, while OK, wouldn’t allow me to set target tasks at 1 so it can auto-heal itself.
Any better way?
Erik Osterman (Cloud Posse) has joined Public “Office Hours”
Adam Crown has joined Public “Office Hours”
Vlad Ionescu has joined Public “Office Hours”
Jeremy CloudPosse has joined Public “Office Hours”
pepe amengual has joined Public “Office Hours”
Fernando Castillo has joined Public “Office Hours”
Anere Faithful has joined Public “Office Hours”
Marcin Brański has joined Public “Office Hours”
Patrick Joyce has joined Public “Office Hours”
Michael Londeen has joined Public “Office Hours”
Vitali Bystritski has joined Public “Office Hours”
Christian Roy has joined Public “Office Hours”
Justin Ober has joined Public “Office Hours”
Matt Gowie has joined Public “Office Hours”
Sri has joined Public “Office Hours”
Kareem Shahin has joined Public “Office Hours”
vicken has joined Public “Office Hours”
Brian Tai has joined Public “Office Hours”
Oliver Schoenborn has joined Public “Office Hours”
Andrey Nazarov has joined Public “Office Hours”
Nigel Kirby has joined Public “Office Hours”
David Lundgren has joined Public “Office Hours”
Topic I’m interested in if we have time: Grafana users, have you found any useful community dashboards that you would recommend / what is the general opinion about community dashboards. Alternatively, how do you manage your Grafana dashboards? Should it only be codified + read only in the UI
Geodesic is a cloud automation shell. It's the fastest way to get up and running with a rock solid, production grade cloud platform built on top of strictly Open Source tools. ★ this repo! h…
Anere Faithful has joined Public “Office Hours”
Eric Berg has joined Public “Office Hours”
Vlad Ionescu has joined Public “Office Hours”
Michael Holt has joined Public “Office Hours”
Zadkiel AHARONIAN has joined Public “Office Hours”
Nick James has joined Public “Office Hours”
Christopher Picht has joined Public “Office Hours”
Marc Tamsky has joined Public “Office Hours”
For downscaling k8s deployments on a schedule: https://github.com/hjacobs/kube-downscaler
Scale down Kubernetes deployments after work hours - hjacobs/kube-downscaler
Neat! will check this out
Scale down Kubernetes deployments after work hours - hjacobs/kube-downscaler
Neil Gealy has joined Public “Office Hours”
Isa Aguilar has joined Public “Office Hours”
Jim Park has joined Public “Office Hours”
Juan Soto has joined Public “Office Hours”
Laurence Giglio has joined Public “Office Hours”
Andrew Roth has joined Public “Office Hours”
sri has joined Public “Office Hours”
We’re excited to announce that custom variable validation is being released as a production-ready feature in Terraform 0.13. Custom Variable Validation was introduced as a language experiment in Terraform 0.12.20 and builds upon the type system introduced in Terraform 0.12 by allowing configurations to contain validation conditions for a given variable.
waf?
ahhhh!! sorry, we ran out of time today @pjaudiomv
Question for today: what is the proper way of ensuring that a kubectl command called from terraform (via local_exec) will succeed? I often (not all the time) find the command runs before the EKS cluster API server is ready, so terraform aborts. If I re-run it, that 10-20 seconds is sufficient for the server to be ready, so terraform then completes the apply. I tried a few things, without success. Any docs on this would be awesome.
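One pattern that can help, as a sketch (the cluster resource name is an assumption; the idea is to gate the local-exec on the cluster and retry until the API server answers):
resource "null_resource" "kubectl_apply" {
  # re-run when the cluster changes
  triggers = {
    cluster_endpoint = aws_eks_cluster.this.endpoint # hypothetical cluster resource
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws eks wait cluster-active --name ${aws_eks_cluster.this.name}
      aws eks update-kubeconfig --name ${aws_eks_cluster.this.name}
      # keep retrying until the API server actually responds
      for i in $(seq 1 30); do
        kubectl get --raw /healthz && break
        sleep 10
      done
      kubectl apply -f manifests/
    EOT
  }

  depends_on = [aws_eks_cluster.this]
}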
New Zoom Recording from our Office Hours session on 2020-09-23 is now available.
2020-09-24
I’m looking for an easy pattern for deploying lambdas with terraform, when the lambda code lives in the terraform module repo. This is for small lambdas that provide maintenance or config services. The problem is always updating the lambda when the code changes: a combination of a null_resource to build the lambda and an archive_file to package it into a zip works, but we end up having a build_number as a trigger on the null_resource that we have to bump to get it to update the code.
Is there some other pattern to make this easier?
I’ve thought about packaging the lambda in GitLab/GitHub CI, but terraform cannot fetch a URL to deploy the lambda source
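One way to avoid the build_number trigger is to let the zip’s hash drive updates, roughly like this (the paths and names are illustrative; a null_resource build step can still run before archiving if the code needs compiling):
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda-src" # placeholder source directory
  output_path = "${path.module}/lambda.zip"
}

resource "aws_iam_role" "lambda" {
  name = "maintenance-lambda" # placeholder
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "maintenance" {
  function_name = "maintenance-task" # placeholder
  role          = aws_iam_role.lambda.arn
  handler       = "index.handler"
  runtime       = "python3.8"

  filename = data.archive_file.lambda.output_path
  # Any change to the source changes the hash, which forces Terraform to update the function.
  source_code_hash = data.archive_file.lambda.output_base64sha256
}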
Spent like 6 hours figuring this out. Thought it might be useful for someone else…
Idempotently create a Personal Access Token for a user in GitLab running in Kubernetes
https://gist.github.com/RothAndrew/e1c8d3e183293d3fadb6cdbf64a3475d
interesting pattern using terraform’s local_exec with kubectl exec
I’m definitely open to suggestions. What took the longest was the ruby code.
I mean interesting in a way
I like that you can leverage a container, rather than depend on a bunch of stuff installed locally,
Oh. That container comes with the gitlab deployment
previously, I’ve seen docker used locally, but this is better I think
ya, let me reframe the way I said it to say I like that you leverage that container
Ah. Thanks :)
I hate that this is even possible though. Shelling into that container gives you gitlab god mode
Hello I’m looking for a solution which allows users to use a self-service catalog to deploy (using Helm Charts) a web app.
2020-09-25
Guys? I have a question: what are you using for logging in k8s (forwarders)? I am using Loki & Open Distro (Elasticsearch). The problem is that I want to use a Fluent Bit + Fluentd combo (TLS forward, exposed on a separate load balancer), and there is no complete, mature solution for it, which is weird:
• fluentbit -> no support for hot-reloading, API endpoints, or a signaling option in the application
• fluentd -> no good helm chart with elasticsearch & loki outputs and a sidecar for reloading after config changes (not only config, but mainly secret (TLS) changes)
• logging-operator by banzaicloud -> systemd and host logging are behind a paywall via logging-operator-extensions, which is a no-go for me
• kubesphere/fluentbit-operator -> seems unfinished (no helm chart), but promising
• vmware/kube-fluentd-operator -> helm chart available, it’s promising
Any other alternatives? I can probably use Beats & Logstash, but the whole community is using the fluentbit/fluentd combo, and this ecosystem is not mature yet… Ideas? Thanks
Don’t have an answer for you sadly, but I was looking to add Fluentbit / Fluentd into the mix for one of my clients in the coming month or so. Commenting to follow along
fluentbit - there isn’t support for hot-reloading
Is this really a requirement? Aren’t you deploying this as a kubernetes deployment/statefulset? You’re not going to be sending signals to reload the configuration. You’ll be redeploying the pod.
fluentd - there is no good helm chart with elasticsearch & loki output and a sidecar for reloading after config change (not only config, but mainly secret (TLS) change)
Yes, we’ve struggled with this too, and ended up forking.
We currently maintain https://github.com/cloudposse/charts/tree/master/incubator/fluentd-kubernetes but can’t promise that will be forever
The “Cloud Posse” Distribution of Kubernetes Applications - cloudposse/charts
One consideration would be slightly changing your architecture:
fluentd or fluentbit → kinesis firehose → { S3, Elasticsearch, … et al }
This way you have long-term retention automatically on S3. Don’t need to worry about losing logs (can always reingest if necessary). Can buffer output to Elasticsearch to avoid log spikes taking out the cluster, and can add any number of other destinations.
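A rough sketch of what that output stage can look like in fluentd, assuming the awslabs fluent-plugin-kinesis gem is installed (the match tag, stream name, and region are placeholders):
# fluent.conf (output stage only)
<match kubernetes.**>
  @type kinesis_firehose
  # placeholder Firehose delivery stream that fans out to S3 and Elasticsearch
  delivery_stream_name logs-delivery
  region us-east-1

  <buffer>
    flush_interval 10s
  </buffer>
</match>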
Consider this post on the comparison of fluentd vs fluentbit (note: they are by the same company). Basically, IMO fluentd is fine unless you’ve already experienced problems with it due to scale or performance. With fluentd written in Ruby, it’s hard to argue that it’s performant, but it’s good enough for most. We make use of the nice rate-limiting extension for it. With fluentbit they rewrote it in C and made some important decisions for performance at scale.
Fluentd and Fluent Bit are two popular log aggregators. Find out the similarities and differences between Fluentd vs. Fluent Bit and when to use each.
@Erik Osterman (Cloud Posse) You mentioned the fluentbit => firehose process — Is that what you folks do nowadays? Or are you still using your fluentd setup?
We’re doing fluentd → kinesis firehose, and firehose → S3 and Elasticsearch.
If you look at this picture https://camo.githubusercontent.com/f3eddff90ffe34784cab72e344b0e6f8a7fe1b17/68747470733a2f2f62616e7a6169636c6f75642e636f6d2f646f63732f6f6e652d6579652f6c6f6767696e672d6f70657261746f722f696d672f6c6f6767696e675f6f70657261746f725f666c6f772e706e67 , is fluentd as an aggregator in front of the actual Elasticsearch, S3, or Loki a good solution? I’m not sure if it’s OK to directly send logs from cluster1 to cluster2, where Elasticsearch and Loki live, via the same ingress (shared with other services) and the HTTP layer…
I believe that a separate load balancer with a TCP forward input for fluentd, plus HPA, would be a better option, wouldn’t it?
2020-09-30
The client project I am on at the moment had a pattern in place when I joined:
- Raw env variables in values.yaml
- A values.yaml map of env var names to a single cluster-wide ConfigMap
- A values.yaml map of env var names to a single cluster-wide Secret
The ConfigMap + Secret mentioned are created by Terraform when the cluster is initially spun up, and are populated with various config from tf remote state and similar. The above ends up looking like the following in each Chart’s values.yaml:
secretMapping:
  RABBIT_PASSWORD: rabbit_pass # rabbit_pass key in shared Secret
  # ...
configMapping:
  SOME_ENV_VAR_NAME: some_configmap_name # same as above but in shared ConfigMap
  # ...
env:
  RAW_ENV_VAR: "Value"
  # ...
Then when supplying environment to any container in the Charts, we use a shared helper to mash the 3 together with valueFrom.configMapKeyRef, valueFrom.secretKeyRef, and plain name/value pairs from env. This works of course, but it’s a lot of mapping this to that, and there is no single source of truth for values (split between the Terraform-driven Secret / ConfigMap and the values.yaml files in each Chart, of which there are 20 right now).
I’m considering throwing most of this away and creating a ConfigMap + Secret per Chart/Service via Terraform. Then a shared helper could just iterate over the service in question’s ConfigMap and Secret without any raw values in the Chart. Thus creating a single source of truth and hopefully saving microservice configuration headaches.
Wondering if that sounds like a decent pattern or if there are other, more mainstream approaches to this.
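The per-service direction described above could end up looking roughly like this in each chart’s pod spec, with Terraform creating one ConfigMap and Secret per service (the names here are assumptions):
# deployment.yaml snippet, per service
containers:
  - name: app
    image: example/app:1.0.0 # placeholder image
    envFrom:
      - configMapRef:
          name: "{{ .Release.Name }}-config"  # per-service ConfigMap created by Terraform
      - secretRef:
          name: "{{ .Release.Name }}-secrets" # per-service Secret created by Terraform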
@here office hours is starting in 30 minutes! Remember to post your questions here.
I am interested to know what some people’s experiences are with AWS WAF alternatives. Bonus if on-prem, but not required. These are some of the more popular ones I’ve found: Imperva, Fortinet, Signal Sciences, Barracuda, Sophos, F5, and obviously Cloudflare.
Robert Horrox has joined Public “Office Hours”
@here our devops #office-hours starting now! join us to talk shop https://cloudposse.zoom.us/j/508587304
Erik Osterman (Cloud Posse) has joined Public “Office Hours”
vicken has joined Public “Office Hours”
Alex Siegman has joined Public “Office Hours”
Nigel Kirby has joined Public “Office Hours”
Andrew Roth has joined Public “Office Hours”
Vlad Ionescu has joined Public “Office Hours”
Raja Tejas Yerramalli has joined Public “Office Hours”
David Scott has joined Public “Office Hours”
Zadkiel AHARONIAN has joined Public “Office Hours”
Fernando Castillo has joined Public “Office Hours”
sri has joined Public “Office Hours”
vicken has joined Public “Office Hours”
We’re on track to ship the “new thing” I’ve been working on for HashiConf! We have an internal beta out, logo designed, and most importantly… the product color picked, which I’ll share today! Sign up for the announcement at HashiConf: https://hashiconf.com/digital-october/ https://pbs.twimg.com/media/Eh-oI9BUYAAGsJI.png
Patrick Joyce has joined Public “Office Hours”
Alex Pereyra has joined Public “Office Hours”
15139103984 has joined Public “Office Hours”
rhenusonerosalia has joined Public “Office Hours”
Zachary Loeber has joined Public “Office Hours”
Fernando Castillo has joined Public “Office Hours”
Tim Gourley has joined Public “Office Hours”
Neil Gealy has joined Public “Office Hours”
Kareem Shahin has joined Public “Office Hours”
Jeremy (Cloud Posse) has joined Public “Office Hours”
pepe amengual has joined Public “Office Hours”
Rohit G has joined Public “Office Hours”
Ω has joined Public “Office Hours”
Marc Tamsky has joined Public “Office Hours”
Michael Londeen has joined Public “Office Hours”
Eric Berg has joined Public “Office Hours”
Related to the current conversation: https://github.com/cloudflare/cf-terraforming
That’s interesting!
cf-terraforming is a command line utility to facilitate terraforming your existing Cloudflare resources. It does this by using your account credentials to retrieve your configurations from the Cloudflare API and converting them to Terraform configurations that can be used with the Terraform Cloudflare provider.
Very appealing
David Lundgren has joined Public “Office Hours”
charles pogi has joined Public “Office Hours”
Nicolás de la Torre has joined Public “Office Hours”
VJ D has joined Public “Office Hours”
pepe amengual has joined Public “Office Hours”
Durgesh Manohar has joined Public “Office Hours”
New Zoom Recording from our Office Hours session on 2020-09-30 is now available.