AI Bias: Large Language Models Prefer AI-Generated Content
Hacker News
A study published in PNAS shows that large language models (LLMs) exhibit a bias: they favor content generated by other LLMs over text written by humans. The finding raises concerns about the authenticity of information and the potential for manipulation in the AI era.