Why External AI Reasoning Breaks Articles 12 and 61 of the EU AI Act by Default
Why probability becomes a governance diagnostic, not a prediction
For many enterprises, the EU AI Act still feels like a future problem. The debate is framed around internal AI systems, model development, and hypothetical harms that will materialize once enforcement begins in earnest.
That framing misses a more immediate exposure.
A compliance gap is already emerging that does not depend on whether an organization builds, deploys, or even authorizes AI systems at all. It arises when external AI reasoning about the organization enters regulated decision pathways without an evidentiary control.
When that happens, compliance does not fail because the AI is inaccurate. It fails because the organization cannot reconstruct what was relied upon.
This article introduces a probability-based diagnostic framework designed to surface that exposure early, before Articles 12 and 61 are tested in enforcement, supervision, or litigation.
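The article does not specify the mechanics of its framework here, but the core failure it names is concrete: an organization cannot reconstruct what external AI output it relied upon. As one hypothetical illustration of the kind of evidentiary control this implies (all names, fields, and identifiers below are assumptions, not taken from the article), a minimal sketch might capture and hash each external AI output before it enters a regulated decision pathway, so that reliance can later be reproduced exactly:

```python
import hashlib
from datetime import datetime, timezone

def record_reliance(ai_output: str, source: str, decision_id: str) -> dict:
    """Create an evidence record for an external AI output before it
    enters a decision pathway. Schema and names are illustrative."""
    return {
        "decision_id": decision_id,
        "source": source,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        # Hash of the exact content relied upon, for later verification.
        "content_sha256": hashlib.sha256(ai_output.encode("utf-8")).hexdigest(),
        # Verbatim copy, so the basis of the decision can be reconstructed.
        "content": ai_output,
    }

def can_reconstruct(record: dict) -> bool:
    """True if the stored content still matches its recorded hash,
    i.e. the reliance basis is reproducible and untampered."""
    digest = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return digest == record["content_sha256"]

record = record_reliance(
    "Vendor risk score: 0.82 (external model)",
    source="third-party-llm",
    decision_id="loan-2024-0042",
)
assert can_reconstruct(record)
```

The point of the sketch is only the shape of the control: without some record created *before* reliance occurs, no amount of after-the-fact accuracy auditing restores the reconstructability that Article 12-style record-keeping obligations presuppose.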