
January 20, 2026
Our approach to age prediction
Building on our work to strengthen teen safety.
We're rolling out age prediction on ChatGPT consumer plans to help determine whether an account likely belongs to someone under 18, so the right experience and safeguards can be applied to teens. As we’ve outlined in our Teen Safety Blueprint and Under-18 Principles for Model Behavior, young people deserve technology that both expands opportunity and protects their well-being.
Age prediction builds on protections already in place. Teens who tell us they are under 18 when they sign up automatically receive additional safeguards to reduce exposure to sensitive or potentially harmful content. This also enables us to treat adults like adults, letting them use our tools the way they want, within the bounds of safety.
We previously shared our early plans for age prediction, and today we’re sharing more detail as the rollout is underway.
How age prediction works
ChatGPT uses an age prediction model to help estimate whether an account likely belongs to someone under 18. The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age. Deploying age prediction helps us learn which signals improve accuracy, and we use those learnings to continuously refine the model over time.
Users who are incorrectly placed in the under-18 experience will always have a fast, simple way to confirm their age and restore full access by taking a selfie through Persona, a secure identity-verification service. Users can check whether safeguards have been added to their account and start this process at any time by going to Settings > Account.
When the age prediction model estimates that an account may belong to someone under 18, ChatGPT automatically applies additional protections designed to reduce exposure to sensitive content.
This approach is guided by expert input, rooted in the academic literature on child development, and accounts for known teen differences in risk perception, impulse control, peer influence, and emotional regulation. While these content restrictions help reduce teens' exposure to sensitive material, we are focused on continually improving these protections, especially to address attempts to bypass our safeguards. When we are not confident about someone's age or have incomplete information, we default to a safer experience.
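The safer-by-default policy described above can be sketched as a simple decision rule. The thresholds and the notion of a separate confidence value are assumptions for illustration, not OpenAI's published logic:

```python
def apply_teen_safeguards(score: float, confidence: float,
                          score_threshold: float = 0.5,
                          confidence_threshold: float = 0.8) -> bool:
    """Return True if teen safeguards should be applied.

    Safeguards apply when the account looks likely to be under 18, OR
    when the prediction is too uncertain to trust: with incomplete
    information, the system defaults to the safer experience.
    Thresholds here are illustrative placeholders.
    """
    if confidence < confidence_threshold:
        return True   # not confident -> default to the safer experience
    return score >= score_threshold
```

Note the asymmetry: uncertainty never removes safeguards, it only adds them. A misclassified adult can then restore full access through age verification.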
In addition to these safeguards, parents can choose to further customize their teen's experience through parental controls, including setting quiet hours when ChatGPT cannot be used, controlling features such as memory or model training, and receiving notifications if signs of acute distress are detected.
What’s next
We're learning from the initial rollout and will continue to improve the accuracy of age prediction over time, closely tracking results and using those signals to guide ongoing improvements.
In the EU, age prediction will roll out in the coming weeks to account for regional requirements. For more detail, visit our help page.
While this is an important milestone, our work to support teen safety is ongoing. We’ll continue to share updates on our progress and what we’re learning, in dialogue with experts including the American Psychological Association, ConnectSafely, and Global Physicians Network.
— OpenAI