[thread] :thread: Branching & PR Methodology for Improving Velocity and Reducing PR Size
I normally use `main` with short-lived feature branches, having used this successfully in a DevOps-oriented role….. see thread for details. Could use some ideas to set up a remote team for better success…..
In my new role working with a backend development team, I’m helping design a process to improve the velocity of getting changes in while also reducing the size of PRs. Policy dictates that I and another dev have to review anything from the offshore team for security.
Current PRs can span thousands of lines and files, so I want to move this toward trunk-based development (especially while the product is pre-release). I need to configure the merge policy as well.
I have 2 initial thoughts:
- My preferred method, and the one I know best, is simple trunk-based development with short-lived feature branches. To reduce complexity in reviews, require testing and linting hook checks to pass prior to review. Pros: faster integration toward production; smaller PRs make for a better, more manageable review experience. Cons: far more PRs to review from the other team.
- Due to the noise this would generate in PR count, I’m not sure that’s viable, so the second option would be to use `develop`. But I’m used to a rebase/squash workflow to keep history and PRs very targeted and simple, and I’m unclear whether I’d end up dealing with a lot more noise trying to merge sets of changes from `main` at that point. I’m not sure you can use a rebase workflow with a long-lived branch like this, so `main` will end up diverging from `develop`, with merge commits and a less atomic history, I feel. Pros: fewer PRs generated; most devs know a workflow similar to this. Cons: extra complexity with two main branches, larger PRs more likely, slower to integrate into `main`.
Feels like I’m kinda choosing between more PR noise (smaller PRs, but a ton more to review) or fewer PRs with more complexity per review… agree? I tend to think more, smaller PRs means less context switching and lower cost overall.
We started with #2 for some projects, and #1 for others, and eventually switched to #1 for everything. We just didn’t see any benefits from #2, and it complicated releases/deployments unnecessarily. #1 with a solid CI/CD pipeline is so easy and refreshing, definitely an accelerator and confidence booster
Agree @loren. I think #1 is perfect as well. I guess I need to get better clarity on what “compliance” truly means, to stop guessing about what really has to be reviewed and when, as maybe that’s the core driver.
Adding more layers to the review process for code that isn’t literally shipped concerns me. Instead, couldn’t it deploy to staging from `main`, with a final pending approval of all these changes as a deployment task in Azure Pipelines? At that point the included changes could be reviewed in aggregate, rather than reviewing each individual merge into `main` before that. Maybe I can shift the “approval” concept to the release itself.
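Roughly what I mean, in Azure Pipelines terms (a sketch; stage and environment names are made up, and the approval itself is configured on the environment under Approvals and checks in the UI, not in the YAML):

```yaml
# Sketch only: merges to main auto-deploy to staging; prod waits on an
# approval check attached to the "prod-signoff" environment.
trigger:
  branches:
    include: [main]

stages:
  - stage: deploy_staging
    jobs:
      - deployment: staging
        environment: staging          # deploys on every merge to main
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to staging"

  - stage: release
    dependsOn: deploy_staging
    jobs:
      - deployment: production
        environment: prod-signoff     # approval check pauses the run here
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to prod after signoff"
```

The approver sees everything accumulated on `main` since the last release, which is exactly the “review the release, not every merge” idea.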
We used to use #2 for everything, now we use #1 for everything and it’s much better. Less confusing and easier to manage.
It won’t magically shrink your PRs, though. I still occasionally hear our devs talking about a monster PR, so you’ll need to couple this with something else to motivate/dictate smaller PRs.
Yep, another vote for #1. I think the key is that you need a way to encourage small PRs and discourage large PRs.
That means making it easy and stress-free to get small PRs merged:
- fast PR process – fast CI loop & review response time
- good protection against merging broken code – CI is complete. No need for manual QA / regression tests.
- developers have confidence their changes will not disrupt others – feature flags (or a similar system) with a culture of using them. Developers can ship totally empty skeletons of functionality to prod and feel confident they can enable it purely for themselves, or only in a test environment, even if the code ships to prod
Larger PRs get pushback, and review comments like “can you break out these changes to a new small PR?”
+1 to feature flags to allow for easier incremental release of functionality
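You don’t need a flag service to start; a minimal helper is enough (a sketch; the `FLAG_` env-var convention and flag names are just illustrative):

```python
"""Minimal feature-flag sketch: env-var driven, no library needed.
The FLAG_ prefix and flag names are invented conventions, not a standard."""
import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Look up FLAG_<NAME> in the environment, falling back to a default.

    Lets you ship dark code to prod and flip it per-environment
    (or per-developer shell) without a redeploy."""
    raw = os.environ.get(f"FLAG_{name.upper()}")
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# Usage in the backend, e.g.:
# if flag_enabled("new_checkout"):
#     return new_checkout_flow(request)
# return old_checkout_flow(request)
```

Once the habit sticks, you can graduate to a proper flag system with per-user targeting; the call sites stay the same shape.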
Any articles on simplifying this? I don’t need to be sold; I’ve done this before. However, with a remote team requiring review of all changes, I’m going to need to properly plan this out and sell it, to show that it actually improves the overall PR review process and complexity even though it requires more reviews.
A couple of presentations:
Why your software should be auto-deployed within 15 minutes after you merge it, with no manual gates. This is the key to high performing teams and high-quality software.
Any articles on how this might apply when needing to “audit” the incoming changes so the production release is marked as fully code reviewed? I was thinking that decoupling that “audit” into a final signoff during the release pipeline and review would be a way to control and confirm all changes since the last release. That would stop the PR being the bottleneck for every review; the production release becomes the thing the signoff gates.
We’re using #2 at the moment and I’m feeling the pain. I definitely want to get to #1, but we don’t have enough test coverage to deploy changes without a manual regression test. As soon as I can get a few more pieces in place I’m going to move us to #1. I’ll also echo what someone already mentioned: feature switches are key to getting smaller PRs. I’ve not quite got to the point where anything over 1,000 lines is automatically closed (ignoring autogenerated files), but I’m tempted every time I see a large PR take over a week to ship.
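That 1,000-line gate could be a tiny CI step; here’s the shape of it (a sketch; the limit and the “generated file” patterns are arbitrary, and you’d feed it the output of `git diff --numstat origin/main...HEAD`):

```python
"""Sketch of a CI-side PR size gate over `git diff --numstat` output.
The 1000-line limit and GENERATED patterns are placeholders to tune."""

GENERATED = ("package-lock.json", ".min.js", "_pb2.py")  # example patterns

def changed_lines(numstat: str, ignore=GENERATED) -> int:
    """Sum added+deleted lines, skipping binary ('-') and generated files."""
    total = 0
    for line in numstat.strip().splitlines():
        added, deleted, path = line.split("\t", 2)
        if added == "-" or any(p in path for p in ignore):
            continue  # binary or autogenerated file, don't count it
        total += int(added) + int(deleted)
    return total

def pr_too_big(numstat: str, limit: int = 1000) -> bool:
    """True means the CI step should fail (or comment) asking for a split."""
    return changed_lines(numstat) > limit
```

Whether it hard-fails or just posts a “please split this” comment is a culture call; starting with the comment is gentler.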
Any articles or tips on introducing feature switches to a team that hasn’t done this and needs an easy button? Most of the products are still in a pre-release phase too, so it might be less complex right now.
Just want to get some feedback from everyone.
- we have a Python backend and a React front end; currently everything is in AWS. My architect recommended this setup. What do you think? Is this an easy task to do in AWS?
- Does anyone have experience with Trend Micro Cloud Conformity? Do you recommend it or not?
- For SCA and SAST tools, what would be good libraries or tools for a Python and React code base?
Can’t recommend a particular SAST tool for either but take a look at OWASP SAST list https://owasp.org/www-community/Source_Code_Analysis_Tools
Yeah, I also stumbled on this on Google the other day. Very helpful.
I know for sure I’ve seen a good tool for this before:
Tracking checksums (or content) of remote URLs/files? (In most cases it’s probably git repos.) For example, refusing to build until you’ve accepted the new checksum of the upstream, or something like that… thoughts?
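The DIY version I’m picturing is tiny (a sketch; the `upstream.lock` name and JSON layout are invented):

```python
"""Sketch of DIY upstream checksum pinning: record a sha256 per URL in a
lockfile and refuse the build when content changes until re-accepted.
The lockfile name and format here are made up for illustration."""
import hashlib
import json
import pathlib

LOCKFILE = pathlib.Path("upstream.lock")  # {"<url>": "<sha256 hex>"}

def digest(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def verify(url: str, content: bytes) -> bool:
    """True if content matches the pinned checksum; False means a human
    must inspect the change and re-pin before the build may proceed."""
    pins = json.loads(LOCKFILE.read_text()) if LOCKFILE.exists() else {}
    return pins.get(url) == digest(content)

def pin(url: str, content: bytes) -> None:
    """Explicitly accept the current upstream content."""
    pins = json.loads(LOCKFILE.read_text()) if LOCKFILE.exists() else {}
    pins[url] = digest(content)
    LOCKFILE.write_text(json.dumps(pins, indent=2))
```

Commit the lockfile, and re-pinning becomes a reviewable diff, which is exactly the “accept the new checksum” step.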
Oh, god, does this bring back memories…. I had to build something like this a while back because we weren’t mapping the source code commit IDs to the MD5 IDs of the build ISO files…. @mfridh, ideally this would be built into your artifact repository (e.g. Artifactory, Nexus, etc.). If you don’t have one, it’s a lot easier to get one first and then decide whether to DIY later….