OpenAI seeks a new "Head of Preparedness" to handle AI-related risks

TechCrunch

OpenAI is actively recruiting a new "Head of Preparedness" responsible for studying and mitigating emerging AI-related risks, including in areas such as cybersecurity and mental health, a move confirmed by CEO Sam Altman.


OpenAI is looking for a new Head of Preparedness

OpenAI is looking to hire a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.

In a post on X, CEO Sam Altman acknowledged that AI models are “starting to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security they are beginning to find critical vulnerabilities.”

“If you want to help the world figure out how to enable cybersecurity defenders with cutting edge capabilities while ensuring attackers can’t use them for harm, ideally by making all systems more secure, and similarly for how we release biological capabilities and even gain confidence in the safety of running systems that can self-improve, please consider applying,” Altman wrote.

OpenAI’s listing for the Head of Preparedness role describes the job as one that’s responsible for executing the company’s preparedness framework, “our framework explaining OpenAI’s approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”

The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, like phishing attacks, or more speculative, such as nuclear threats.

Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning. Other safety executives at OpenAI have also left the company or taken on new roles outside of preparedness and safety.

The company also recently updated its Preparedness Framework, stating that it might “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.

As Altman alluded to in his post, generative AI chatbots have faced growing scrutiny around their impact on mental health. Recent lawsuits allege that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues working to improve ChatGPT’s ability to recognize signs of emotional distress and to connect users to real-world support.)


Anthony Ha is TechCrunch’s weekend editor. Previously, he worked as a tech reporter at Adweek, a senior editor at VentureBeat, a local government reporter at the Hollister Free Lance, and vice president of content at a VC firm. He lives in New York City.

You can contact or verify outreach from Anthony by emailing [email protected].


Related articles

  1. Sam Altman is hiring someone to oversee AI's potential dangers

     Hacker News · 4 months ago

  2. "This is going to be a stressful job": Sam Altman offers a $555K salary for AI's toughest role

     Hacker News · 4 months ago

  3. OpenAI hiring a "Head of Preparedness" at $555K a year plus equity

     Hacker News · 4 months ago

  4. OpenAI disbands mission-alignment team focused on safe and trustworthy AI development

     2 months ago

  5. OpenAI recruits a preparedness lead to address AI risks

     Sam Altman · 4 months ago