[$] Testing AI-enhanced reviews for Linux patches
Code review is in high demand, and short supply, for most open-source projects.
Reviewer time is precious, so any tool that can lighten the load is worth exploring.
That is why Jesse Brandeburg and Kamel Ayari decided to test whether
tools like ChatGPT could review patches to provide quick feedback to
contributors about common problems. In <a href="https://netdevconf.info/0x18/sessions/talk/ai-enhanced-reviews-for-linux-networking.html" rel="nofollow">a
talk</a> at the <a href="https://netdevconf.info/0x18/" rel="nofollow">Netdev
0x18</a> conference this July, Brandeburg provided an overview of an
experiment using machine learning to review emails containing patches
sent to the <a href="https://www.kernel.org/doc/html/v5.6/networking/netdev-FAQ.html" rel="nofollow">netdev</a>
mailing list. Large language models (LLMs) will not be replacing human reviewers anytime
soon, but they may be a useful addition to help humans focus on deeper
reviews instead of simple rule violations.