

Irony alert: Hallucinated citations found in papers from NeurIPS, the prestigious AI conference
AI detection startup GPTZero scanned all 4,841 papers accepted by the prestigious Conference on Neural Information Processing Systems (NeurIPS), which took place last month in San Diego. It found and confirmed as fake 100 hallucinated citations across 51 papers, the company tells TechCrunch.
Having a paper accepted by NeurIPS is a resume-worthy achievement in the world of AI. Given that these are the leading minds of AI research, one might assume they would use LLMs for the catastrophically boring task of writing citations.
So caveats abound with this finding: 100 confirmed hallucinated citations across 51 papers is a statistically negligible share of the total. Each paper has dozens of citations, so out of tens of thousands of citations across the conference, this rounds to zero.
It’s also important to note that an inaccurate citation doesn’t negate the paper’s research. As NeurIPS told Fortune, which was first to report on GPTZero’s research, “Even if 1.1% of the papers have one or more incorrect references due to the use of LLMs, the content of the papers themselves [is] not necessarily invalidated.”
But having said all that, a faked citation is not nothing, either. NeurIPS says it prides itself on “rigorous scholarly publishing in machine learning and artificial intelligence.” And each paper is peer-reviewed by multiple people who are instructed to flag hallucinations.
Citations are also a sort of currency for researchers. They are used as a career metric to show how influential a researcher’s work is among their peers. When AI makes them up, it waters down their value.
No one can fault the peer reviewers for not catching a few AI-fabricated citations given the sheer volume involved. GPTZero is also quick to point this out. The goal of the exercise was to offer specific data on how AI slop sneaks in via “a submission tsunami” that has “strained these conferences’ review pipelines to the breaking point,” the startup says in its report. GPTZero even points to a May 2025 paper called “The AI Conference Peer Review Crisis” that discussed the problem at premiere conferences including NeurIPS.
Still, why couldn’t the researchers themselves fact-check the LLM’s work for accuracy? Surely, they must know the actual list of papers they used for their work.
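As an illustration of how little effort such a check requires, here is a minimal sketch (the reading list, citation titles, and the 0.9 matching threshold are all hypothetical, not drawn from GPTZero's methodology) that flags cited titles absent from an author's own reading list, using fuzzy matching to tolerate minor formatting differences:

```python
from difflib import SequenceMatcher

def is_known(cited_title, known_titles, threshold=0.9):
    """Return True if cited_title closely matches any title the author actually read."""
    cited = cited_title.lower().strip()
    return any(
        SequenceMatcher(None, cited, known.lower().strip()).ratio() >= threshold
        for known in known_titles
    )

# Hypothetical reading list and citation list, for illustration only.
known = ["Attention Is All You Need", "Deep Residual Learning for Image Recognition"]
citations = [
    "Attention is all you need",          # real paper, minor case difference
    "Emergent Reasoning in Sparse Nets",  # invented title standing in for an LLM hallucination
]
flagged = [c for c in citations if not is_known(c, known)]
print(flagged)  # → ['Emergent Reasoning in Sparse Nets']
```

A real pipeline would more likely query a bibliographic database than a local list, but even this crude pass would catch a citation to a paper that does not exist anywhere in the author's materials.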
What the whole thing really points to is one big, ironic takeaway: If the world’s leading AI experts, with their reputations at stake, can’t ensure their LLM usage is accurate in the details, what does that mean for the rest of us?