Most AI Incidents Are Evidence Failures, Not Model Failures

Hacker News

This article argues that most AI incidents stem from institutions' failure to reconstruct the outputs of AI systems and their surrounding context, rather than from flaws in the models themselves. It reframes AI incident management as an evidence-control problem centered on preserving inspectable records.



Public discourse on AI risk continues to frame incidents primarily as technical failures: model bias, hallucination, or misconfiguration. This article advances a different interpretation grounded in governance practice. Drawing on patterns observable in the OECD AI Incidents Monitor, it argues that many AI incidents escalate not because models fail, but because institutions cannot reconstruct what AI systems said, when they said it, and how those representations were framed at the moment of reliance.

The article does not assess model accuracy, internal system design, or causality. Instead, it examines AI incidents as post-event accountability failures driven by missing or non-inspectable evidence. Through sector-agnostic walkthroughs spanning finance, healthcare, and public administration, it demonstrates a recurring governance failure mode: once scrutiny occurs, the absence of contemporaneous, interaction-specific records converts uncertainty into institutional exposure regardless of technical intent or system quality.

The paper reframes AI incident management as an evidentiary control problem rather than a model optimization problem. It concludes that, in non-deterministic systems deployed as external representation channels, accountability depends less on improving prediction accuracy than on preserving inspectable records of AI-mediated representations at the point of human reliance.
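The evidentiary control the paper calls for can be illustrated concretely. The sketch below is a hypothetical (not from the paper) minimal "reliance log": an append-only, hash-chained record of what an AI system said, when, and how it was framed at the point of human reliance, so the record itself is inspectable and tamper-evident after the fact. All class and field names here are illustrative assumptions.

```python
import hashlib
import json
import time

class RelianceLog:
    """Append-only log of AI-mediated representations.

    Each entry captures what the system said, when, and under what
    framing, and is hash-chained to its predecessor so that later
    alteration of any entry is detectable on inspection.
    """

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self):
        self._entries = []
        self._prev_hash = self.GENESIS

    def record(self, prompt, output, framing, timestamp=None):
        """Record one AI-mediated representation at the point of reliance."""
        entry = {
            "timestamp": timestamp if timestamp is not None else time.time(),
            "prompt": prompt,    # context presented to the model
            "output": output,    # the representation the human relied upon
            "framing": framing,  # e.g. disclaimers or labels shown alongside
            "prev_hash": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode("utf-8")
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self._entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Recompute the chain; True iff no recorded entry was altered."""
        prev = self.GENESIS
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

The design choice matches the paper's framing: `verify()` says nothing about whether the model's output was accurate, only whether the contemporaneous record of the interaction survives inspection intact.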




Related Articles

  1. When AI Becomes the System of Record: Why Evidence, Not Accuracy, Will Define Accountability

    4 months ago

  2. When AI Speaks, Evidence Becomes the Control Interface

    3 months ago

  3. When AI Procurement Fails, What Evidence Exists?

    3 months ago

  4. The AI Reliance Log: A Control Definition for Evidence Governance in AI-Assisted Decision-Making

    3 months ago

  5. AI Regulation: Fact and Fiction

    3 months ago