
Sam Altman is hiring someone to worry about the dangers of AI
The Head of Preparedness will be responsible for issues around mental health, cybersecurity, and runaway AI.

Terrence O'Brien
OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses “some real challenges.” The post goes on to specifically call out the potential impact on people’s mental health and the dangers of AI-powered cybersecurity weapons.
The job listing says the person in the role would be responsible for:
“Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for building and coordinating capability evaluations, threat models, and mitigations that form a coherent, rigorous, and operationally scalable safety pipeline.”
Altman also says that, looking forward, this person would be responsible for executing the company’s “preparedness framework,” securing AI models for the release of “biological capabilities,” and even setting guardrails for self-improving systems. He also states that it will be a “stressful job,” which seems like an understatement.
In the wake of several high-profile cases where chatbots were implicated in the suicide of teens, it seems a little late in the game to just now be having someone focus on the potential mental health dangers posed by these models. AI psychosis is a growing concern, as chatbots feed people’s delusions, encourage conspiracy theories, and help people hide their eating disorders.