Are AI agents ready for the workplace? A new benchmark raises doubts.


TechCrunch

Apex-Agents, a new benchmark from Mercor, shows that leading AI models cannot yet handle real white-collar work tasks, struggling to answer even a quarter of the questions correctly.




It’s been nearly two years since Microsoft CEO Satya Nadella predicted AI would replace knowledge work — the white-collar jobs held by lawyers, investment bankers, librarians, accountants, IT workers, and others.

But despite the huge progress made by foundation models, the change in knowledge work has been slow to arrive. Models have mastered in-depth research and agentic planning, but for whatever reason, most white-collar work has been relatively unaffected.

It’s one of the biggest mysteries in AI — and thanks to new research from the training-data giant Mercor, we’re finally getting some answers.

The new research looks at how leading AI models hold up doing actual white-collar work tasks, drawn from consulting, investment banking, and law. The result is a new benchmark called Apex-Agents — and so far, every AI lab is getting a failing grade. Faced with queries from real professionals, even the best models struggled to get more than a quarter of the questions right. The vast majority of the time, the model came back with a wrong answer or no answer at all.

According to researcher Brendan Foody, who worked on the paper, the models’ biggest stumbling point was tracking down information across multiple domains — something that’s integral to most of the knowledge work performed by humans.

“One of the big changes in this benchmark is that we built out the entire environment, modeled after how real professional services [work],” Foody told TechCrunch. “The way we do our jobs isn’t with one individual giving us all the context in one place. In real life, you’re operating across Slack and Google Drive and all these other tools.” For many agentic AI models, that kind of multi-domain reasoning is still hit or miss.


The scenarios were all drawn from actual professionals on Mercor’s expert marketplace, who both laid out the queries and set the standard for a successful response. Looking through the questions, which are posted publicly on Hugging Face, gives a sense of how complex the tasks can get.

One question in the “Law” section reads:

During the first 48 minutes of the EU production outage, Northstar’s engineering team exported one or two bundled sets of EU production event logs containing personal data to the U.S. analytics vendor…. Under Northstar’s own policies, can it reasonably treat the one or two log exports as consistent with Article 49?

The correct answer is yes, but getting there requires an in-depth assessment of the company’s own policies as well as the relevant EU privacy laws.

That might stump even a well-informed human, but the researchers were trying to model the work done by professionals in the field. If an LLM can reliably answer these questions, it could effectively replace many of the lawyers working today. “I think this is probably the most important topic in the economy,” Foody told TechCrunch. “The benchmark is very reflective of the real work that these people do.”

OpenAI also attempted to measure professional skills with its GDPval benchmark — but the Apex-Agents test differs in important ways. Where GDPval tests general knowledge across a wide range of professions, the Apex-Agents benchmark measures a system’s ability to perform sustained tasks in a narrow set of high-value professions. The result is more difficult for models, but also more closely tied to whether these jobs can be automated.

While none of the models proved ready to take over as investment bankers, some were clearly closer to the mark. Gemini 3 Flash performed the best of the group with 24% one-shot accuracy, followed closely by GPT-5.2 with 23%. Below that, Opus 4.5, Gemini 3 Pro and GPT-5 all scored roughly 18%.

While the initial results fall short, the AI field has a history of blowing through challenging benchmarks. Now that the Apex test is public, it’s an open challenge for AI labs who believe they can do better — something Foody fully expects in the months to come.

“It’s improving really quickly,” he told TechCrunch. “Right now it’s fair to say it’s like an intern that gets it right a quarter of the time, but last year it was the intern that gets it right five or ten percent of the time. That kind of improvement year after year can have an impact so quickly.”


