Linux Kernel Allows AI-Generated Code But Maintains Human Responsibility
- The Linux kernel project has established a formal policy that permits the use of AI coding assistants for contributions, provided that human developers maintain total accountability for the submitted work.
- Under the new guidelines, AI-generated code is treated as the contributor's own work.
- A central pillar of the new policy is the strict prohibition of AI agents adding Signed-off-by tags to contributions.
The Linux kernel project has established a formal policy that permits the use of AI coding assistants for contributions, provided that human developers maintain total accountability for the submitted work. This policy integrates AI tools into the kernel’s development workflow while insulating the project from legal and technical liabilities by placing the entire burden of verification on the human submitter.
Under the new guidelines, AI-generated code is treated as the contributor’s own work. This means the human developer is responsible for reviewing all AI-generated code, ensuring it meets the project’s technical standards and verifying that it complies with all licensing requirements.
Human Certification and the DCO
A central pillar of the new policy is the strict prohibition on AI agents adding Signed-off-by tags to contributions. Only humans are permitted to certify the Developer Certificate of Origin (DCO), the legal mechanism by which a contributor attests that they have the right to submit the code under the project's license.
The human submitter is required to personally add their own Signed-off-by tag to any AI-assisted contribution. By doing so, the developer takes full legal and technical responsibility for the code. The project treats AI coding assistants as tools similar to text editors or compilers, rather than as independent contributors.
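In practice, the sign-off is a trailer line in the commit message, which `git commit -s` appends automatically from the submitter's configured identity. A minimal sketch of the workflow (the repository, file, and developer identity here are hypothetical):

```shell
# Create a throwaway repository to demonstrate the sign-off mechanism.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

# The identity must be the human submitter's real name and address --
# under the kernel policy, an AI tool must never appear here.
git config user.name "Jane Developer"
git config user.email "jane@example.com"

echo 'example change' > file.txt
git add file.txt

# -s appends "Signed-off-by: Jane Developer <jane@example.com>",
# certifying the DCO for this patch.
git commit -q -s -m "example: add file"

# The trailer is now part of the commit message.
git log -1 --format=%B
```

The trailer is plain text in the commit message, so nothing technically stops a tool from emitting one; the policy's point is that only the human's own trailer carries legal weight.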
Licensing and Technical Compliance
All contributions, regardless of whether they were created with AI assistance, must adhere to the kernel’s strict licensing rules. Specifically, all code must be compatible with GPL-2.0-only and must include the appropriate SPDX license identifiers.
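Concretely, the kernel's license rules require the SPDX identifier to be the very first line of each source file, using `//` comment style for `.c` files. A minimal illustrative sketch (the `file_license` helper is hypothetical, added only so the declared identifier is visible to a caller):

```c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Illustrative only: in kernel source, the SPDX line above must be the
 * first line of the file, and the identifier must be GPL-2.0-only or a
 * GPL-2.0-compatible license. AI-assisted code is held to the same rule.
 */
#include <string.h>

/* Hypothetical helper returning the license this file declares. */
const char *file_license(void)
{
	return "GPL-2.0-only";
}
```

Machine-readable identifiers like this let tooling verify license compliance across the tree without parsing free-form boilerplate, which is exactly the kind of check a human submitter is expected to pass regardless of how the code was produced.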
Developers using AI tools are expected to follow the standard kernel development process, including adhering to the established Linux kernel coding style and following the essential guides for submitting patches. If AI-generated code introduces bugs or breaks the kernel, the responsibility lies solely with the human who submitted the patch.
Community Debate and Legal Uncertainty
The introduction of this policy has sparked significant debate within the developer community, particularly regarding copyright and licensing. Discussions on platforms such as Hacker News highlight concerns that AI models trained on copyrighted material may produce code that cannot be legitimately relicensed under the GPL.
Critics of the policy argue that there is no way to use AI tools without the risk of introducing infringing code. While some developers view the human-accountability model as a pragmatic solution, others question whether a contributor’s signature on the DCO actually shields the Linux project from liability in a court of law.
The legal landscape remains complex, with references to indemnification clauses in enterprise agreements from AI providers like Anthropic and OpenAI. These clauses suggest that the providers themselves recognize the inherent legal risks associated with AI-generated code.
Industry Divergence
The Linux kernel’s approach contrasts with other major open-source projects. While the kernel project has opted for a model of human accountability, GNOME has reportedly taken a stricter stance, cracking down on AI-generated code following updates to its extensions guidelines.
This divergence illustrates a broader struggle within the software industry to define where AI-generated content fits into traditional contribution models. While some projects embrace AI as a productivity tool, others view the potential for "AI slop" and legal ambiguity as too great a risk to the integrity of their codebases.
For Linux contributors, the mandate is clear: AI tools may be used to assist in writing code, but the human developer remains the final authority and the only party liable for the contribution’s impact and legality.
