#random (2024-02)

Non-work banter and water cooler conversation

A place for non-work-related flimflam, faffing, hodge-podge or jibber-jabber you’d prefer to keep out of more focused work-related channels.

Archive: https://archive.sweetops.com/random/

2024-02-01

Rajat Verma avatar
Rajat Verma

anyone online who can help me with a Renovate issue?

Rajat Verma avatar
Rajat Verma

This is my Makefile:

install:
	@go mod vendor
	@go install github.com/golang/mock/mockgen@latest

and this is my renovate.json file

{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": [
    "config:best-practices"
  ],
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": [
        "^Makefile$"
      ],
      "matchStrings": [
        "@go install (?<depName>[^@]+)@(?<currentValue>[^\s]+)"
      ],
      "datasourceTemplate": "go"
    }
  ]
}

I am getting an error like this:

"regex": [
      {
        "deps": [
          {
            "depName": "github.com/golang/mock/mockgen",
            "currentValue": "latest",
            "datasource": "go",
            "replaceString": "@go install github.com/golang/mock/mockgen@latest",
            "updates": [],
            "packageName": "github.com/golang/mock/mockgen",
            "versioning": "semver",
            "warnings": [],
            "skipReason": "invalid-value"
          }
        ],
        "matchStrings": [
          "@go install (?<depName>[^@]+)@(?<currentValue>[^\s]+)"
        ],
        "datasourceTemplate": "go",
        "packageFile": "Makefile"
      }
    ]
  }
}
Joshua Sizer avatar
Joshua Sizer

I’m speculating a bit here but might be worth looking into. It’s parsing current value correctly (“latest”) but since your versioning is set to “semver” I don’t think it can understand “latest” in terms of semver

Joshua Sizer avatar
Joshua Sizer

Might have to change github.com/golang/mock/mockgen@latest to pin a specific version, e.g. github.com/golang/mock/mockgen@v1.6.0

Joshua Sizer avatar
Joshua Sizer

FWIW, a quick search shows that that project is archived and no longer maintained, so renovate will never update that dependency, anyways

May want to consider using the maintained fork https://github.com/uber-go/mock if you want security updates in the future

uber-go/mock

GoMock is a mocking framework for the Go programming language.
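
Putting Joshua's two suggestions together, the config might end up like this (a sketch only: note that regex backslashes must be doubled inside JSON strings, per the Renovate custom-manager docs):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:best-practices"],
  "customManagers": [
    {
      "customType": "regex",
      "fileMatch": ["^Makefile$"],
      "matchStrings": [
        "@go install (?<depName>[^@\\s]+)@(?<currentValue>\\S+)"
      ],
      "datasourceTemplate": "go"
    }
  ]
}
```

The Makefile line would then pin a real semver version instead of latest, e.g. @go install go.uber.org/mock/mockgen@v0.4.0 if migrating to the maintained fork (version illustrative), so Renovate has a concrete value to bump.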

2024-02-09

Michael avatar
Michael

Which Terraform pre-commit hooks and pipeline checks do you like to run? Curious what your setups are! I’ve always been a big fan of checkov, tflint, and terraform-docs

James Humphries avatar
James Humphries

tfsec (now under trivy) is used quite often too

Serdar Dalgic avatar
Serdar Dalgic

https://github.com/antonbabenko/pre-commit-terraform is a good place to start, IMHO. (I don’t know if there are any other commonly used terraform pre-commit hooks elsewhere.) I personally like tfupdate for updating provider versions automatically; I think it’s a bit underrated, but it’s quite practical for maintenance

antonbabenko/pre-commit-terraform

pre-commit git hooks to take care of Terraform configurations
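
As a concrete starting point, a minimal .pre-commit-config.yaml wiring up the tools mentioned in this thread might look like this (the rev is illustrative; pin whatever release is current):

```yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.86.0  # illustrative; check the repo for the latest tag
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_tflint
      - id: terraform_docs
      - id: terraform_checkov
```

Run pre-commit install once per clone, and the hooks fire on every commit; pre-commit run --all-files is handy in CI.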

sheldonh avatar
sheldonh

I’m using trunk.io now. It has most of these tools and requires very little wrangling. It’s my go-to right now, especially on open-source or private projects.

Michael avatar
Michael

I might have to give this a try! Looks awesome!

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

@Hans D

Hans D avatar

thanks @Erik Osterman (Cloud Posse). I prefer to be able to run everything outside a CI/CD pipeline too, so I’ve set up one workflow which picks up the things it needs to do. Phase 1: do any kind of generation/reformatting, so the PR can be updated (single GHA step). Phase 2: do all the validation/test checks (GHA matrix).

Using go-task for the feeding/execution (but you can use what ever you want), which we also use for locally running various things. terraform specific:

• tflint

• terraform fmt (both format and linting)

• checkov / trivy are in the works (some poc is done, but other prios need to be done first)

• terraform plan (via spacelift)

• composing terraform modules (vendir for vendoring and composing + some bash and jq)

• terraform docs
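
A sketch of that two-phase setup as a single GitHub Actions workflow (all job, step, and task names here are hypothetical, and it assumes go-task is already available on the runner):

```yaml
name: ci
on: pull_request

jobs:
  # Phase 1: generation/reformatting; the PR branch is updated in place.
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: task generate   # hypothetical task wrapping terraform fmt, terraform-docs, etc.
      - name: push regenerated files back to the PR
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          if ! git diff --quiet; then
            git commit -am "chore: regenerate"
            git push
          fi

  # Phase 2: validation/test checks fan out as a matrix.
  validate:
    needs: generate
    runs-on: ubuntu-latest
    strategy:
      matrix:
        check: [tflint, fmt-check, checkov]
    steps:
      - uses: actions/checkout@v4
      - run: task ${{ matrix.check }}   # hypothetical tasks wrapping each linter
```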


2024-02-16

Alex Atkinson avatar
Alex Atkinson

I still just don’t get ORM. Is expecting folks to know the technologies they use too much? My besties are always the data-sci crew… I think it comes down to my experience with big data, vs. talent pools without discrete database-engineering talent, of which there are many. Definitely an advantage for shops that build this capability. https://stackoverflow.com/questions/1279613/what-is-an-orm-how-does-it-work-and-how-should-i-use-one https://www.reddit.com/r/golang/s/QMDfWHVaV1

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

ORM is a good toy for small projects where database performance is not critical, and nobody looks at the database at all (load, queries, slow queries, etc.). Once the project is big enough, and the database gets high load, ORM becomes a disaster (there are many many examples of that in real companies).

After the disaster starts, nobody knows how to check the queries or how to optimize them: indexes, subquery optimization (creating indexes on the relevant columns, rewriting a subquery to use a different approach, or reducing its complexity), normalization. All of that is unknown and, even worse, hidden from view, because the ORM generates something that nobody knows about.

So ORM is good for some website to get data for the UI, but it’s an absolute disaster for high-speed transactional databases with many relations and foreign keys.

The best solution is to use stored procedures (not to hardcode SQL queries in the app). DBAs can see them and optimize them anytime w/o affecting the apps (keeping the same interface). The devs just create a DAL that calls the stored procedures. Every layer does just what it’s supposed to do. Easy to optimize all layers (even in the future with diff loads or number of concurrent users).
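
A minimal Go sketch of the layering Andriy describes (all names here are hypothetical, and the production stored-procedure call is shown only as a comment):

```go
package main

import "fmt"

// UserDAL is the data-access layer boundary: application code depends
// only on this interface, never on SQL or on an ORM.
type UserDAL interface {
	UserEmail(id int64) (string, error)
}

// A production implementation would wrap each method around a single
// stored-procedure call, e.g. (sketch, requires a live *sql.DB):
//
//	row := db.QueryRowContext(ctx, "CALL get_user_email(?)", id)
//
// DBAs can then rewrite the procedure body (add indexes, restructure
// subqueries, and so on) without any application change, as long as the
// procedure keeps the same interface.

// memDAL is an in-memory stand-in, useful for tests and local runs.
type memDAL struct{ emails map[int64]string }

func (m memDAL) UserEmail(id int64) (string, error) {
	e, ok := m.emails[id]
	if !ok {
		return "", fmt.Errorf("user %d not found", id)
	}
	return e, nil
}

func main() {
	var dal UserDAL = memDAL{emails: map[int64]string{42: "grace@example.com"}}
	e, err := dal.UserEmail(42)
	fmt.Println(e, err) // grace@example.com <nil>
}
```

The app sees only UserDAL, so swapping the in-memory fake for the stored-procedure implementation touches no business logic.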

Alex Atkinson avatar
Alex Atkinson

That’s what I’ve been realizing, but so many just keep gaslighting on about ORM value. These folks will learn with enough exposure to scale. Until then I’ll agree to disagree. At least now I’m confident that there’s no “magic”… And even if there were, we already hate magic numbers, so why would any magic be acceptable. :)

Gabriel avatar
Gabriel

I don’t see why you couldn’t do indexes, query optimization, normalization, looking at slow queries, etc. while using an ORM. In fact, that’s what I’ve always seen being done. But probably I’ve just been gaslit. And stored procedures? Well, better not to open that can of worms.

2024-02-18

nazar.mraka.ri.2022 avatar
nazar.mraka.ri.2022

Hi there, I’m new to terraform. If it’s not much trouble, could someone share, here in the comments or in a DM, a simple example of using terraform-null-label? The example on GitHub didn’t give me a full understanding. Thanks to everyone, have a great day

Gabriel avatar
Gabriel

If you’ve ever created more than a few resources on AWS with multiple people/teams involved, you’ll have seen that resource IDs and tags start getting inconsistent very quickly.

The people from cloudposse can correct me but as far as I understand it, null-label is all about consistent naming and consistent tags for resources.

null-label helps prevent those inconsistencies: we pass it contextual values like name, environment, tenant, etc., and it outputs an id and tags that we can use for our resources in a consistent and reproducible manner.

If you look at e.g. cloudposse’s s3 bucket module, they call null-label here and use it, among other places, here for the bucket name if no explicit bucket name is provided.
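
A concrete sketch of what that looks like (resource names, label values, and the module version are all illustrative):

```hcl
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0" # illustrative; pin the version you want

  namespace   = "acme"
  environment = "ue1"
  stage       = "prod"
  name        = "app"

  tags = {
    Team = "platform"
  }
}

# module.label.id renders the inputs joined by the delimiter,
# here "acme-ue1-prod-app", and module.label.tags merges the
# contextual values into a consistent tag map.
resource "aws_s3_bucket" "this" {
  bucket = module.label.id
  tags   = module.label.tags
}
```

Reusing the same label module (or passing its context output into child modules) is what keeps names and tags consistent across teams.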


2024-02-20

Alex Atkinson avatar
Alex Atkinson

How long has README.md been auto-hyperlinked in JIRA? It takes you to some questionable third-party website for “people who learn in public”… It does open with an article on writing a good README, though. :/

dnsTraceRedirects readme.md
374 ms : readme.md [ nginx/1.20.2:HTTP/1.1(301) ] >> https://readme.md/
986 ms : https://readme.md/ [ nginx/1.20.2:HTTP/1.1(302) ] >> https://tiloid.com/
1825 ms : https://tiloid.com/ [ nginx/1.20.2:HTTP/1.1(200) ] (Terminated)
Total Time (initial asset): 3189 ms

Yes, there is a JIRA issue for this. Please go give it an upvote. https://jira.atlassian.com/browse/JRACLOUD-82508

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)
Save time linking resources with autolink references

Now you can automatically link references to external systems with GitHub Pro, Team, and Enterprise plans.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

You mean this?

Alex Atkinson avatar
Alex Atkinson

Autolinking ticket numbers is great. The annoying thing here is that typing ‘README.md’, etc., results in a link to some external resource.

Alex Atkinson avatar
Alex Atkinson

Basically, the logic: IF the string matches a local asset, then maaaaybe link to that asset, but NEVER link to some 3rd party website on a .md gTLD.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Hah, true

Alex Atkinson avatar
Alex Atkinson

README.md 301 >> goats who see… ?

Alex Atkinson avatar
Alex Atkinson

who knows

2024-02-21

aj_baller23 avatar
aj_baller23

Not sure where to post this question; I just wanted to get your feedback on documentation. How are you handling documentation in your companies? Where in the development pipeline should documentation happen: during development, as you’re working on the project, or at the end? Who should be responsible for making sure documentation happens (developer, project manager)? My organization has poor documentation, and I’m trying to establish a best-practice process for the dev team. Any feedback would be greatly appreciated. Thanks!

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

documentation is hard and always lacking. Let me give you an example of how we do it with Atmos https://atmos.tools

Automated Terraform Management & Orchestration Software (ATMOS) | atmos

Atmos is the Ultimate Terraform Environment Configuration and Orchestration Tool for DevOps to manage complex configurations with ease. It’s compatible with Terraform and many other tools.

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

before, we had the CLI and almost no docs, and we noticed that we were spending more and more time trying to explain different features to different people

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we spent 2 months on the docs, trying to get them to a state that describes all the CLI features

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

then we decided: if we add/improve/update anything in Atmos, we update/improve the docs at the same time. This is “annoying” (who wants to write docs?), but it pays off big time

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

now we update the CLI and the docs at the same time in the same PR, so everything is up to date and in sync - takes more time for the PR, but pays in the long run

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

now almost all PRs have “Update docs” item, e.g. https://github.com/cloudposse/atmos/pull/525

#525 Update workflows UX. Update `atmos`, `atmos --help` and `atmos version` commands. Update demo tape. Update docs

what

• Update workflows UX
• Update atmos, atmos --help and atmos version commands
• Update demo tape
• Update docs

why

• Update workflows UX. Allow selecting a workflow step in the UI to start executing the workflow from

Just execute atmos workflow to start an interactive UI to view, search and execute the configured Atmos
workflows:

atmos workflow


• Use the right/left arrow keys to navigate between the “Workflow Manifests”, “Workflows” and the selected workflow views
• Use the up/down arrow keys (or the mouse wheel) to select a workflow manifest and a workflow to execute
• Use the / key to filter/search for the workflow manifests and workflows in the corresponding views
• Press Enter to execute the selected workflow from the selected workflow manifest, starting with the selected step


Use the Tab key to flip the 3rd column view between the selected workflow steps and full workflow definition.
For example:


• Update atmos, atmos --help and atmos version commands: print Atmos styled logo to the terminal (if the terminal supports colors)


• Update demo tape: showcase Atmos features like atmos vendor pull
• Update docs: improve the doc links in the Atmos docs and make them simpler

aj_baller23 avatar
aj_baller23

That’s what I’m thinking of introducing as part of the acceptance criteria for any ticket a developer works on. Documentation should be part of the requirements. Yeah, similar to the “what” in your example

aj_baller23 avatar
aj_baller23

does it mean that you guys do documentation after you’ve checked off the other bullet points?

aj_baller23 avatar
aj_baller23

or as you finish each bullet point as an example

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

for Atmos, yes. When features are implemented and tested, we write docs for them

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

I mean that all new features in a PR (regardless of whether it’s just one or a few) should be covered by changes to the docs

aj_baller23 avatar
aj_baller23

that makes sense to me. I was being asked why we wouldn’t document as we work on the feature, instead of waiting until the end and then writing the documentation

Andriy Knysh (Cloud Posse) avatar
Andriy Knysh (Cloud Posse)

it’s better b/c when you are working on something, you are in the context and can better describe what you are doing

Manish Khadka avatar
Manish Khadka

Hi

I have a query regarding setting up infrastructure.

I have an ML service hosted in an AWS US region, and my client is in Nepal. Now I, as the client, want to use the application, but due to data-regulation rules my data cannot leave the on-premise network. How do you handle that?

How do I set up the infrastructure so that the data stays within the premises or network, while I can still use the application hosted on AWS to process it?

j.smith.sics avatar
j.smith.sics

You cannot, since AWS is under a different geographical sovereignty. You need to go back to your business and discuss the options with your regulatory team.

Contact AWS about a localised region: AWS Outposts, perhaps, or an AWS partner in your region.

https://www.infoq.com/news/2023/09/aws-dedicated-local-zones/

AWS Introduces Dedicated Local Zones for Sovereignty Requirementsattachment image

AWS has recently introduced Dedicated Local Zones, enabling customers to isolate sensitive workloads to meet their digital sovereignty requirements. This new option is designed for public sector and regulated industry customers who need dedicated infrastructure.

2024-02-22

pv avatar

How do you apply all stacks in a pipeline? If I want my pipeline to run atmos terraform apply, is there an “all” flag I can use instead of listing the stack and component?

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Best for atmos

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

So it depends on what your pipeline is.

Erik Osterman (Cloud Posse) avatar
Erik Osterman (Cloud Posse)

Are you using Spacelift, GitHub Actions, or something else?

pv avatar

GitHub actions

pv avatar

I’ll move to atmos, thanks!

2024-02-28

Vytautas Klova avatar
Vytautas Klova

Hey! We have just released the 2024 Kubernetes Cost Benchmark Report. Maybe you will find it interesting:

• The report’s findings are based on our analysis of 4,000 clusters running on AWS, GCP, and Azure.

• In clusters with 50 CPUs or more, only 13% of the CPUs that were provisioned were utilized, on average. Memory utilization was slightly higher at 20%, on average.

• In larger clusters, CPU utilization was only marginally better. In clusters with 1,000 CPUs or more, 17% of provisioned CPUs were utilized.

• CPU utilization varies little between AWS and Azure; they both share nearly identical utilization rates of 11%.

• You can find/download the report at https://cast.ai/k8s-cost-benchmark/

Kubernetes Cost Benchmarkattachment image

Uncover Kubernetes cost-optimization trends, bridge the gap between provisioning and actual CPU/memory utilization in 4,000 clusters, and get actionable tips to avoid overspending.

Chris Wahl avatar
Chris Wahl

Sounds cool, I’ll give it a download and a read
