When Time Hardens AI Risk: Synthetic Stability and the Failure of Governance-by-Design

Hacker News

This article challenges the widespread assumption that time corrects AI risk, arguing that in governance-relevant decision contexts time instead amplifies risk, making AI-generated content more persuasive and more dangerous to institutions.


ABSTRACT

A common assumption underlies much contemporary thinking about AI risk: that time is corrective.

Models improve. Guardrails tighten. Feedback loops reduce error. Early failures are expected to fade as systems mature.

In many technical domains, this assumption is reasonable. In governance-relevant decision contexts, it is not.

Here, time often functions not as a corrective force, but as a risk amplifier. Certain classes of AI-generated outputs become more persuasive, more stable, and more institutionally dangerous the longer they persist.

This article examines that failure mode and names the mechanism behind it.


Files

When Time Hardens AI Risk- Synthetic Stability and the Failure of Governance-by-Design.pdf (195.9 kB)

Powered by CERN Data Centre & InvenioRDM


Related articles

  1. Real-Time Governance Risks of Decision-Shaping AI Reasoning in Regulated Healthcare Settings

    4 months ago

  2. AI Regulation: Fact and Fiction

    3 months ago

  3. When AI Fails: Reasoning Visibility and Governance in Regulated Systems: A 2026 Case Study in Financial Services and Healthcare

    4 months ago

  4. Most AI Incidents Are Evidence Failures, Not Model Failures

    3 months ago

  5. When AI Speaks, Evidence Becomes the Control Interface

    3 months ago