
Not hot on bots, project names and shames AI-created open source software


'OpenSlopware' briefly flowers, fades, falls – but fortunately was forked, fast


The splendidly-named "OpenSlopware" was, for a short time, a list of open source projects using LLM bots. Due to harassment, it's gone, but forks of it live on.

"OpenSlopware" was a repository on the European Codeberg git forge containing a list of free software and open source projects which use LLM-bot generated code, or integrate LLMs, or which show signs of "coding assistants" being used on the codebase, such as pull requests created or modified by automated coding tools.

However, its creator – who we are intentionally not naming or tagging here – received so much harassment from LLM boosters that they removed the repository, and indeed their Bluesky account, stating that they would withdraw from social media for a while. Now, if you try to visit the original URL, you will receive only a 404 message.

All is not lost, though. Although it contained human-readable text, it was a Git repository, and so it could be forked – its contents cloned into a separate repository. Several people did so before the original creator deleted it, among them the maintainer of this Small-Hack version, also on Codeberg. The Register has contacted the maintainer of this fork to ask if they'd talk to us about it, but so far, they say they're still thinking about it. Others were planning to maintain copies but have decided to join forces with this one.

Notably, this is despite some people involved in the original apologizing for their involvement and saying it should not be revived.

This is one of a growing number of sites, groups, and communities that exist to criticize the spread and promotion of LLM bots and their output, for which the word "slop" is becoming the standard term. Some merely spell out their criticism, such as this open letter to those who fired or didn't hire tech writers because of AI. Others go further and name and shame those responsible. For instance, we recently saw the blog post "Authors" using AI slop in their books: a small list.

One example is the AntiAI subreddit, but there is also a Lemmy instance devoted to it, called Awful.systems. (For those unfamiliar with it, Lemmy is a tool for creating news aggregator and discussion sites – think Reddit or the recently revived, but LLM-infested, Digg – based around the same ActivityPub protocols used by Mastodon and the rest of the Fediverse.)

One of the Awful.systems site admins is Unix sysadmin and former Wikipedia press officer David Gerard, who formerly took an ultra-skeptical view of the cryptocurrency world on Attack of the 50 Foot Blockchain (which also inspired a book and a sequel). He now publishes an equally skeptical blog on the subject of the LLM bot industry, Pivot to AI. In a post on the Lemmy instance, as well as on his Mastodon feed, he says that Awful.systems also plans to curate and maintain a list along the lines of OpenSlopware – but they're looking for a comparably catchy name.

Those still on the fence about the merits of LLM bots and their output may be surprised by the levels of vitriol this inspires, but it is one of the most contentious aspects of the entire computing world today.

In its Why not LLMs? section, the OpenSlopware continuation mentions the copyright and licensing implications, and goes on to cite a Wikipedia article on the environmental impact of artificial intelligence.

These are legitimate concerns, but there are many more. As The Reg reported back in July, in the only test of its kind that we know of, the LLM-promoting outfit Model Evaluation & Threat Research found that although coding assistants made programmers feel they were working faster, debugging the bots' code actually slowed them down by about as much as they believed it had sped them up. The implications of this for code quality are obvious. What long-term use does to programmers' analytical faculties is as yet unmeasured, but the effects on social media look frankly terrifying. Its effects on hiring looked dire early last year, and even as companies rehire those they laid off, the rehires are paid less than before. Claimed productivity gains are nowhere to be seen.

Along with objective, verifiable measurements, such as performance testing of people and code alike, open criticism is needed – no matter how much it upsets some of those being criticized. ®


The Register – Biting the hand that feeds IT
