
AI Generated Code Isn’t Cheating: OSS Needs to Talk About It


Remember early 2025? “Vibe coding” was a meme and seemed mostly a tool for casual builders or those new to coding. It was often used disparagingly, or to imply a lack of deep technical expertise. Some very cool basic applications were being built, but AI coding assistants couldn’t reliably function in complex codebases. But what a difference a year has made!

It’s now 2026, and we find ourselves living in a new reality. Some of the most influential voices in software engineering, like DHH (Ruby on Rails), Andrej Karpathy (prev. OpenAI, Tesla), Tobi Lutke (Shopify), Salvatore Sanfilippo (Redis), and Mitchell Hashimoto (Ghostty, prev. HashiCorp), are publicly embracing a new paradigm: completely AI-generated code controlled by human-in-the-loop prompting. It was also recently publicized that Linus Torvalds (creator of Linux and Git) is leveraging AI vibe-coding in his side projects.

AI is everywhere: if you’re a software developer, you’ve almost certainly tried at least one AI-assisted coding solution over the past year. It’s a safe assumption that a large portion of developers are using AI to help them, yet we still know shockingly little about how that code was produced. This secrecy is outdated, especially now that the practice is being normalized by industry leaders.

The open source community is built on foundations of transparency and collaboration, with knowledge sharing as a key component. At Mozilla.ai, we believe we must embrace and encourage the disclosure of AI usage as quickly as possible. We need to move away from “Should we AI?” and towards a structure that clearly defines our expectations for where we encourage AI usage and how we document it.

In our project any-llm, we’ve started to iterate on this philosophy by creating a pull request template that requests a few pieces of information whenever a PR is submitted.

Here’s a snippet of the relevant part of our pull request template:
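(The snippet itself was published as an image and is not reproduced here. As a rough illustration only, a template section covering the three points discussed below might look something like this; the field names and wording are hypothetical, not the actual any-llm template.)

```markdown
## AI Usage Disclosure

<!-- All levels of AI usage are acceptable; this helps reviewers calibrate feedback. -->

- [ ] No AI was used in this contribution
- [ ] AI assisted with drafting and edits (human-written core)
- [ ] Fully AI-generated, directed via natural-language prompts

**Model(s) used (if any):**
**IDE/CLI tool(s) used (if any):**

<!-- Please respond to review comments yourself rather than via your AI tool. -->
```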

Why This Metadata Matters

Contextual Reviewing

First, we ask contributors to specify their level of AI usage: was AI used to draft and make edits, or was the contribution completely AI-generated, with the contributor only directing it via plain-language prompts? Both are acceptable, but knowing which helps a reviewer decide how to approach the review. If we know the code is completely AI-generated, we can be candid with our feedback and direct the contributor towards improving their prompting or AI coding configuration to improve quality. Without this transparency, it can be difficult to give feedback, since a reviewer doesn’t want to offend the contributor by insinuating that their work came from a bot.

Toolchain Discovery

Second, we request information about the contributors' AI setup: what model(s) and IDE/CLI tools were used? This is valuable metadata for crowdsourcing best practices. Maybe there is one model or tool that works amazingly well with a certain codebase or language! Openly sharing this information allows all of us to learn from each other.

Keeping Review Human

Lastly, we request that any responses to comments come from the contributor themselves and not their AI tool. It is frustrating to write comments without knowing if a human is on the other side reading and responding to the feedback. The open source community is a wonderful place to learn from each other, and that learning happens best when humans talk to humans. Of course, AI can be used to help the contributor brainstorm or improve their grammar, but we think the core discussion should still happen between two humans.

We welcome community opinions and hope to see similar approaches adopted across the open source community. Let's keep learning and developing together!

