ZDNET Highlights
- Torvalds and the Linux maintainers are taking a pragmatic approach to using AI in the kernel.
- AI or no AI, it is people, not LLMs, who are responsible for the code of Linux.
- If you try to mess with Linux code using AI, bad things will happen.
After months of heated debate, Linus Torvalds and the Linux kernel maintainers have officially reached a compromise, codifying the project's first formal policy on AI-assisted code contributions. The new policy reflects Torvalds' pragmatic approach, balancing the adoption of modern AI development tools with the kernel's rigorous quality standards.
The new guidelines establish three main principles:
- AI agents can't add sign-off tags: Only humans can legally sign off on the Linux kernel's Developer Certificate of Origin (DCO), the legal mechanism that ensures code licensing compliance. In other words, even if you submit a patch that was written entirely by an AI, you, not the AI or its creator, are solely responsible for the contribution.
- Mandatory assisted-by attribution: Any contribution made with an AI tool must include an assisted-by tag identifying the model, agent, or supporting tool used, for example: "Assisted-by: Claude (Claude-3-Opus)," or tools such as Coccinelle and Sparse.
- Absolute human responsibility: Put it all together, and you, the human submitter, bear full responsibility and accountability for reviewing the AI-generated code, ensuring license compliance, and addressing any bugs or security flaws that arise. Don't try to sneak bad code into the kernel, as some University of Minnesota students did in 2021, or you can kiss goodbye your chances of becoming a Linux kernel developer, or a programmer in any other respectable open-source project.
The assisted-by tag serves as both a transparency mechanism and a review flag. It enables maintainers to give AI-assisted patches the additional scrutiny they may need without stigmatizing the practice.
Also: Linux after Linus? The kernel community drafts a plan to eventually replace Torvalds
Assisted-by attribution became a flashpoint when Nvidia engineer and senior Linux kernel developer Sasha Levin submitted a patch for Linux 6.15 that was completely generated by AI, including its changelog and tests. Levin reviewed and tested the code before submission, but he did not tell the reviewers that it had been written by an AI.
This did not go over well with other kernel developers.
AI’s role is as a tool rather than a co-author
The result of the ensuing fiasco? At the Open Source Summit North America 2025, Levin himself began advocating for formal AI transparency rules. In July 2025, he proposed the first draft of the kernel's AI policy, initially suggesting a co-developed-by tag for AI-assisted patches.
In the initial discussions, both in person and on the Linux Kernel Mailing List (LKML), there was debate over whether to introduce a new generated-by tag or reuse the existing co-developed-by tag. The maintainers eventually settled on assisted-by to better reflect the AI's role as a tool rather than a co-author.
The decision comes as AI coding assistants have suddenly become really useful for kernel development. As Greg Kroah-Hartman, maintainer of the Linux stable kernel, recently told me, “something happened a month ago, and the world changed.” AI tools are now producing real, valuable security reports instead of hallucinatory nonsense.
Also: Linux seeks new way to authenticate developers and their code – how it works
The final choice of assisted-by rather than generated-by was deliberate, influenced by three factors. First, it is more accurate: most AI use in kernel development is assistive (code completion, refactoring suggestions, test generation) rather than full code generation. Second, the tag format mirrors existing metadata tags such as reviewed-by, tested-by, and co-developed-by. Finally, assisted-by describes the tool's role without implying that the code is questionable or second-rate.
This practical approach got a kickstart when Torvalds said in an LKML conversation, “I *don’t* want any kernel development document to be some AI statement. We have enough people on both sides of ‘the sky is falling’ and ‘this is going to revolutionize software engineering’. I don’t want some kernel development docs to take any stance. That’s why I strongly want this to be a ‘just a tool’ statement.”
The real challenge is making patches look reliable
Despite the Linux kernel’s new AI disclosure policy, maintainers are not relying on AI-detection software to catch unknown AI-generated patches. Instead, they’re using the same tools they’ve always used: deep technical expertise, pattern recognition, and good, old-fashioned code review. As Torvalds said in 2023, “You have to have a certain amount of good taste to judge other people’s code.”
Also: This is my all-time favorite Linux distro – and I’ve tried them all
Why? As Torvalds pointed out, "There's no point talking about AI slop. Because AI slop people wouldn't document their patches like that." The hard problem isn't the obvious junk, which is easy to reject regardless of origin. The real challenge is plausible-looking patches that meet the immediate specification, match the local style, compile cleanly, and still hide a subtle bug or tax long-term maintenance.
Enforcement of the new policy doesn't depend on catching every violation. It relies on making the consequences of getting caught severe enough to discourage dishonesty. Ask anyone who has ever been the target of Torvalds' ire. Even though he's far more mild-mannered than he used to be, you still don't want to get on his bad side.
