AI is quietly poisoning itself and pushing models toward collapse - but there's a cure

Hacker News

Unverified AI-generated content is flooding the web, creating a classic "garbage in, garbage out" problem for AI systems and driving "model collapse," in which models drift away from reality. Gartner predicts that "zero-trust" data governance will spread as organizations respond.


ZDNET's key takeaways

According to tech analyst Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI‑generated content that cannot be trusted.

Model collapse

You know this better as AI slop. While merely annoying to you and me, it's deadly to AI because it poisons LLMs with fake data. The result is what AI circles call "model collapse." AI company Aquant describes the trend this way: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality."
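Model collapse is easy to demonstrate with a toy statistical "model." The sketch below is an illustrative assumption of my own, not Aquant's or Gartner's method: it fits only a mean and standard deviation, then retrains each generation on samples drawn from its own previous fit instead of from real data, so the fitted parameters random-walk away from the ground truth.

```python
import random
import statistics

# Toy illustration of model collapse: a "model" that learns only a mean and
# standard deviation, retrained each generation on samples drawn from the
# previous generation's model rather than from the real data.
random.seed(0)

real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # ground truth
mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)

for generation in range(20):
    # Train on the previous model's own outputs (synthetic data only).
    synthetic = [random.gauss(mu, sigma) for _ in range(200)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)

# After repeated self-training, (mu, sigma) has drifted from (0, 1);
# over enough generations the variance tends to collapse toward zero.
print(mu, sigma)
```

Each generation only ever sees the previous generation's sampling error, so errors compound instead of averaging out, which is the mechanism behind the drift Aquant describes.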

Also: 4 new roles will lead the agentic AI revolution - here's what they require

However, I think that definition is much too kind. It's not a case of "can" -- with bad data, AI results "will" drift away from reality.

Zero trust

This issue is already apparent. Gartner predicted that 50% of organizations will have a zero‑trust posture for data governance by 2028. These enterprises will have no choice, because unverified AI‑generated data is proliferating across corporate systems and public sources.

The analyst argued that enterprises can no longer assume data is human‑generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes.

Ever try to authenticate and verify data from AI? It's not easy. It can be done, but AI literacy isn't a common skill.

Also: Got AI skills? You can earn 43% more in your next job - and not just for tech work

As IBM distinguished engineer Phaedra Boinodiris told me recently: "Just having the data is not enough. Understanding the context and the relationships of the data is key. This is why you need to have an interdisciplinary approach to who gets to decide what data is correct. Does it represent all the different communities that we need to serve? Do we understand the relationships of how this data was gathered?"

Making matters worse, GIGO now operates at AI scale: flawed inputs cascade through automated workflows and decision systems, and the errors compound with each pass. Yes, that's right: if you think AI bias, hallucinations, and simple factual errors are bad today, wait until tomorrow.

To counter this concern, Gartner said businesses should adopt zero‑trust thinking. Originally developed for networks, zero-trust is now being applied to data governance in response to AI risks.

Also: Deploying AI agents is not your typical software launch - 7 lessons from the trenches

Stronger mechanisms

Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI‑generated content, and continuously manage metadata so they know what their systems are actually consuming.
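The mechanisms Gartner describes (authenticating sources, tagging AI-generated content, tracking lineage) can be sketched as a zero-trust intake policy. Everything below, including the field names, the allow-list, and the admit rule, is a hypothetical illustration, not a Gartner recommendation or any product's actual API.

```python
from dataclasses import dataclass

# Hypothetical zero-trust data intake: every record carries provenance
# metadata, and nothing is trusted by default.

TRUSTED_AI_SOURCES = {"internal-llm"}  # illustrative allow-list

@dataclass
class Record:
    payload: str
    source: str             # where the record came from
    ai_generated: bool      # tagged at ingestion time
    lineage_verified: bool  # has the chain of custody been checked?

def admit(record: Record) -> bool:
    """Admit a record only if its lineage is verified; AI-generated
    content additionally requires an allow-listed source."""
    if not record.lineage_verified:
        return False
    if record.ai_generated and record.source not in TRUSTED_AI_SOURCES:
        return False
    return True

records = [
    Record("q3 summary", "crm-export", ai_generated=False, lineage_verified=True),
    Record("draft blurb", "web-scrape", ai_generated=True, lineage_verified=True),
    Record("notes", "email", ai_generated=False, lineage_verified=False),
]
admitted = [r.payload for r in records if admit(r)]
print(admitted)  # -> ['q3 summary']
```

The design choice mirrors zero-trust networking: the default answer is "deny," and a record earns admission only by presenting verifiable provenance, rather than being trusted because it happens to be inside the corporate boundary.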

So, will AI still be useful in 2028? Sure, but ensuring it's useful and not heading down a primrose path to a bad answer will require a lot of good, old-fashioned people work. However, this role will at least be a new job generated by the so-called AI revolution.

Related

I asked six popular AIs the same trick questions, and every one of them hallucinated

4 new roles will lead the agentic AI revolution - here's what they require

6 ways to stop cleaning up after AI - and keep your productivity gains


Related articles

  1. The web is devouring itself: model collapse and the urgent crisis of AI data

    4 months ago

  2. If large language models are a bubble, where does AI alignment go from here

    Lesswrong · 4 months ago

  3. Everything in the future is a lie: how AI's nuisances and missing accountability will shape our lives

    12 days ago

  4. AI industry insiders launch a "poison the data" campaign to impede AI development

    3 months ago

  5. AI is doing fine: large language models are not your friends

    4 months ago