When Time Hardens AI Risk: Synthetic Stability and the Failure of Governance-by-Design

This article challenges the common assumption that time corrects AI risk, arguing that in governance-relevant decision contexts time instead amplifies it: AI-generated content becomes more persuasive and poses a greater threat to institutions the longer it persists.
ABSTRACT
A common assumption underlies much contemporary thinking about AI risk: that time is corrective.
Models improve. Guardrails tighten. Feedback loops reduce error. Early failures are expected to fade as systems mature.
In many technical domains, this assumption is reasonable. In governance-relevant decision contexts, it is not.
Here, time often functions not as a corrective force, but as a risk amplifier. Certain classes of AI-generated outputs become more persuasive, more stable, and more institutionally dangerous the longer they persist.
This article examines that failure mode and names the mechanism behind it.