The quiet way AI normalizes foreign influence


Hacker News

The article argues that AI tools, by citing whatever content is easiest to access, unintentionally normalize foreign propaganda. Authoritarian states are optimizing their content for AI consumption, while credible news sources often sit behind paywalls or block AI access, so AI ends up steering users toward state propaganda.


The quiet way AI normalizes foreign influence

By Leah Siskind

January 15, 2026


Americans are being taught to trust propaganda. Often, it’s not intentional. A classic bit of advice for separating propaganda from real research is “Check the citations.” If the sources support the analysis, the material can be trusted. But AI is changing the rules of the game.

In December, the White House announced new guidance to ensure that AI tools procured for government use are “truthful” and “ideologically neutral,” including transparency around citation practices. But even with this new oversight, there is a structural issue that the memo can’t fix: authoritarian states are optimizing their propaganda for AI consumption while America’s most credible news sources are actively blocking AI tools. This means that even an ideologically neutral AI directs users toward state-aligned propaganda — simply because that is what is freely available.

Those who trust AI citations wind up trusting propaganda while believing they are doing responsible research.

Most large language models (LLMs) provide sources along with their analysis. But these models do not choose what sources to cite based on credibility. Rather, they choose based on availability. Many of the best sources, like top U.S. news outlets, are behind paywalls or are blocking the automated systems that AI uses to scan and collect information. These legacy media companies are slowly litigating and negotiating individual licensing deals with AI unicorns.
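The blocking described above is typically done through a site’s robots.txt file. The fragment below is an illustrative sketch, not any particular outlet’s actual policy; the crawler names are the publicly documented user-agent tokens used by OpenAI, Anthropic, Google’s AI-training opt-out, and Common Crawl:

```text
# robots.txt — example of a news site opting out of AI crawlers
User-agent: GPTBot            # OpenAI's web crawler
Disallow: /

User-agent: ClaudeBot         # Anthropic's web crawler
Disallow: /

User-agent: Google-Extended   # opt-out token for Google AI training
Disallow: /

User-agent: CCBot             # Common Crawl, a common AI training source
Disallow: /

# Ordinary search crawlers remain allowed
User-agent: *
Allow: /
```

Note the asymmetry this creates: a publisher can opt out of AI crawling with a few lines, but a state outlet that wants to be crawled simply does nothing.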

Authoritarian states, on the other hand, have optimized their content for accessibility. State-run media — Qatar’s Al Jazeera, or Russian and Chinese outlets published in English — are free. As a result, students, academics, and federal analysts seeking to understand Gaza, Ukraine, or Taiwan are more likely to encounter state-backed propaganda than independent journalism.

Research from the Foundation for Defense of Democracies analyzing three major LLMs (ChatGPT, Claude, and Gemini) found that 57 percent of responses to questions about current international conflicts cited state-aligned propaganda sources.

When AI tools answer questions about contested conflicts — including Gaza, Ukraine, and Taiwan — they draw on enormous training data. While not perfect, the responses are often more nuanced than any one commentator or media outlet. But LLMs then funnel their hundreds of millions of users to a narrow subset of sources that they serve up as citations. FDD research found that 70 percent of neutral questions about the Israel-Gaza conflict yielded Al Jazeera citations.
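Audits like the one described above boil down to a simple tally. The sketch below is purely illustrative — the outlet list and function are hypothetical, not FDD’s actual methodology — but it shows the shape of the measurement: what share of an answer’s citations point at state-controlled outlets.

```python
# Illustrative sketch (hypothetical, not FDD's methodology): given the
# domains an LLM cites in its answers, measure what fraction are
# state-controlled outlets. The example set below is deliberately tiny.
STATE_CONTROLLED = {"aljazeera.com", "rt.com", "globaltimes.cn"}

def state_media_share(cited_domains):
    """Fraction of citations pointing at state-controlled outlets."""
    if not cited_domains:
        return 0.0
    hits = sum(1 for d in cited_domains if d in STATE_CONTROLLED)
    return hits / len(cited_domains)

citations = ["aljazeera.com", "reuters.com", "rt.com", "aljazeera.com"]
print(state_media_share(citations))  # 3 of 4 citations → 0.75
```

Run across many prompts, a tally like this is what turns anecdotes about citation bias into a percentage.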

This isn’t a minor technical flaw — citations are the attribution architecture shaping what Americans learn to trust.

While Western legacy media certainly carries its own biases, there is a crucial difference between editorial bias and state-controlled narratives. In 2024 alone, Russia-backed propaganda aggregator Pravda flooded the internet with more than 3.6 million articles from pro-Kremlin influencers and government spokespeople, in order to saturate the space with pro-Russian narratives.

AI sometimes fabricates information, or “hallucinates,” and that presents real risks. But urging people to “check the linked sources” can end up steering them straight to state-controlled media. Those links aren’t citations in the traditional sense — they are traffic directions. And the traffic they generate turns into revenue, which ultimately determines which news outlets survive. AI platforms are becoming the internet’s traffic arbiters, and right now they’re systematically directing traffic away from independent journalism and toward state-controlled propaganda.

AI companies must bring credible journalism into their systems. There is no question that quality journalism requires resources and revenue to survive. Unfortunately, the licensing deals that are being negotiated now between LLM companies and media outlets are moving slowly. Every delay allows citation patterns to harden while we are increasingly vulnerable to foreign influence.

There’s no silver bullet, but a patchwork of solutions can help. The White House has already taken a strong stance by requiring agency heads to restrict AI procurement to LLMs that are “ideologically neutral” and not “in favor of ideological dogmas.” Vendors selling to the U.S. government should present data on citation influence.

An LLM literacy campaign is needed so users understand citation bias. But awareness alone isn’t enough — AI companies should give lower priority to state-controlled media in their outputs and label them as such. And as LLMs evolve from being a consumer technology into a common infrastructure like the internet itself, citation patterns should be considered in AI safety frameworks — because a healthy democratic society needs a broad array of media sources, and that means independent journalism will always need support.

Leah Siskind is director of impact and an AI research fellow at the Foundation for Defense of Democracies.

