Wired - AI

OpenAI announced it has rehired Barret Zoph and Luke Metz, cofounders of Mira Murati's AI startup Thinking Machines Lab, and the two sides offer conflicting accounts of why they left. One side alleges Zoph was fired over serious misconduct, while OpenAI says the hires had been in the works for weeks and disputes the concerns about Zoph's conduct.

Inside OpenAI’s Raid on Thinking Machines Lab


If someone ever makes an HBO Max series about the AI industry, the events of this week will make quite the episode.

On Wednesday, OpenAI’s CEO of applications, Fidji Simo, announced the company had rehired Barret Zoph and Luke Metz, cofounders of Mira Murati’s AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI in late 2024.

We reported last night on two narratives forming around what led to the departures, and have since learned new information.

A source with direct knowledge says that Thinking Machines leadership believed Zoph engaged in an incident of serious misconduct while at the company last year. That incident broke Murati’s trust, the source says, and disrupted the pair’s working relationship. The source also alleged Murati fired Zoph on Wednesday—before knowing he was going to OpenAI—due to what the company claimed were issues that arose after the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph has not responded to several requests for comment from WIRED.)

Meanwhile, in a Wednesday memo to employees, Simo claimed the hires had been in the works for weeks and that Zoph told Murati he was considering leaving Thinking Machines on Monday—prior to the date he was fired. Simo also told employees that OpenAI doesn’t share Thinking Machines' concerns about Zoph’s ethics.

Alongside Zoph and Metz, another former OpenAI researcher who was working at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT maker, per Simo's announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was first to report the additional hires.

A separate source familiar with the matter pushed back on the perception that the recent personnel changes were wholly related to Zoph. "This has been part of a long discussion at Thinking Machines. There were discussions and misalignment on what the company wanted to build—it was about the product, the technology, and the future."

Thinking Machines Lab and OpenAI declined to comment.

In the aftermath of these events, we've been hearing from several researchers at leading AI labs who say they are exhausted by the constant drama in their industry. This specific incident is reminiscent of OpenAI's brief ouster of Sam Altman in 2023, known inside OpenAI as "the blip." Murati played a key role in that event as the company's then chief technology officer, according to reporting from The Wall Street Journal.

In the years since Altman’s ouster, the drama in the AI industry has continued, with departures of cofounders at several major AI labs, including xAI’s Igor Babuschkin, Safe Superintelligence’s Daniel Gross, and Meta’s Yann LeCun (he did cofound Facebook’s longstanding AI lab, FAIR, after all).

Some might argue the drama is justified for a nascent industry whose expenditures are contributing to America’s GDP growth. Also, if you buy into the idea that one of these researchers might crack a few breakthroughs on the path to AGI, it’s probably worth tracking where they’re going.

That said, many researchers started working before ChatGPT’s breakout success and appear surprised that their industry is now the source of nearly constant scrutiny.

As long as researchers can keep raising billion-dollar seed rounds on a whim, we’re guessing the AI industry’s power shake-ups will continue apace. HBO Max writers, lock in.

How AI Labs Are Training Agents to Do Your Job

People in Silicon Valley have been musing about AI displacing jobs for decades. In the past few months, however, the efforts to actually get AI to do economically valuable work have become far more sophisticated.

AI labs are getting smarter about the data they use to create AI agents. Last week, WIRED reported that OpenAI has been asking third-party contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI's agents. The companies ask employees to scrub these documents of any confidential data and personally identifying information. While it's possible some corporate secrets or names slip by, that's likely not what OpenAI is after (though the company could get in serious trouble if that happens, experts say).

AI labs are more interested in getting realistic examples of work created by a McKinsey consultant, Goldman Sachs investment banker, or Harvard doctor. That's why data suppliers such as Mercor specifically seek out professionals who have worked at these companies in their job postings.

Handshake, Mercor, Surge, and Turing are some of the major data suppliers that AI labs rely on to get this data. In the past year, data firms have started paying upwards of $100 an hour to contract top talent for AI labs.

One way they’re using this data is to create “environments,” which are essentially boring video games that teach AI agents how to use enterprise software applications. The idea is that AI agents can be trained and tested in these environments, learning how to use the real-world software that professionals rely on to do their jobs.
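To make the idea concrete, here is a minimal sketch of what such an environment could look like, loosely following the reset/step interface popularized by reinforcement-learning toolkits like Gym. The `FormFillEnv` task, its fields, and the scripted rollout are illustrative assumptions for this newsletter, not any lab's actual training setup.

```python
# Hypothetical sketch: a toy "environment" where an agent practices an
# enterprise-software task (filling out a CRM form). Not a real lab's setup.

class FormFillEnv:
    """Toy environment: the agent must fill every field of a fake CRM form."""

    FIELDS = ("customer_name", "deal_size", "close_date")

    def reset(self):
        """Start a new episode with an empty form; return the observation."""
        self.form = {field: None for field in self.FIELDS}
        return dict(self.form)

    def step(self, action):
        """Apply an action like ("type", "deal_size", "50000").

        Returns (observation, reward, done), mirroring the classic
        reinforcement-learning step interface.
        """
        verb, field, value = action
        reward = 0.0
        if verb == "type" and field in self.form and self.form[field] is None:
            self.form[field] = value
            reward = 1.0  # small reward for each correctly filled field
        done = all(v is not None for v in self.form.values())
        return dict(self.form), reward, done


# A scripted rollout stands in for a model choosing actions:
env = FormFillEnv()
obs = env.reset()
total_reward = 0.0
done = False
for field, value in [("customer_name", "Acme"), ("deal_size", "50000"),
                     ("close_date", "2026-01-31")]:
    obs, reward, done = env.step(("type", field, value))
    total_reward += reward
```

In a real setup, the scripted actions would be replaced by a model's outputs, the form by an actual software interface, and the reward by the kind of graded rubrics the data firms are paid to write.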

“Over the past year, labs have increasingly recognized that they need to train and fine-tune models for a whole bunch of areas of knowledge work, including legal, health care, consulting, and banking,” says Aaron Levie, the CEO of the enterprise company Box, which offers enterprise agents powered by models from OpenAI, Anthropic, and Google. “These firms have been hiring contractors to generate datasets and rubrics, which offer ways that they can train and evaluate the model so it can get better at particular skills.”

Whether this is enough to train AI agents to execute office tasks accurately and consistently remains to be seen. AI labs have significantly improved their agents in the past year, as shown by viral products like Claude Code, which people are increasingly using for tasks outside of coding. If that’s any indication of what’s to come for other industries, it’s worth watching these enterprise agents.

This is an edition of the Model Behavior newsletter. Read previous newsletters here.


© 2026 Condé Nast. All rights reserved. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices


Related Articles

  1. Two Thinking Machines Lab cofounders are returning to OpenAI

    3 months ago

  2. Thinking Machines cofounder's office romance predated his departure

    3 months ago

  3. Departures at Thinking Machines, the $12 billion AI startup founded by OpenAI's former CTO, highlight the fierce competition for AI talent

    Hacker News · 3 months ago

  4. Two cofounders of Mira Murati's startup Thinking Machines Lab are heading to OpenAI

    Techcrunch · 3 months ago

  5. Loyalty in Silicon Valley is dead

    3 months ago