Ask HN: How do you authorize AI agent actions in production?
Hacker News
A Hacker News user is asking how to safely authorize the actions of AI agents in production, citing concerns about unintended actions and the lack of an audit trail. They are weighing options such as full trust, manual review, or a permission layer, and asking what has worked for others.
My concern: the agent sometimes attempts actions it shouldn't, and
there's no clear audit trail of what it did or why.
Current options I see:
- Trust the agent fully (scary)
- Manual review of every action (defeats automation)
- Some kind of permission/approval layer (does this exist?)
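One shape the permission/approval layer mentioned above could take is a default-deny policy table that maps each tool to a risk level and gates high-risk tools on explicit human approval. This is a minimal sketch, not a reference to any existing product; the tool names, the `Risk` enum, and the `authorize` function are all hypothetical:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"

# Hypothetical policy table: tool name -> risk level.
# Any tool not listed here is denied by default.
POLICY = {
    "search_docs": Risk.LOW,
    "send_email": Risk.HIGH,
    "delete_record": Risk.HIGH,
}

def authorize(tool: str, approved_by_human: bool = False) -> bool:
    """Return True if the agent may run `tool`.

    LOW-risk tools run automatically; HIGH-risk tools require an
    explicit human-approval flag; unknown tools are always denied.
    """
    risk = POLICY.get(tool)
    if risk is None:
        return False              # default-deny unknown actions
    if risk is Risk.HIGH:
        return approved_by_human  # gate on a human in the loop
    return True
```

The default-deny stance matters: an agent that invents a tool name, or calls one you forgot to classify, is blocked rather than silently trusted. For example, `authorize("send_email")` returns `False` until a human sets `approved_by_human=True`.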
For those running AI agents in production:
- How do you limit what the agent CAN do?
- Do you require approval for high-risk operations?
- How do you audit what happened after the fact?
Curious what patterns have worked.
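On the after-the-fact audit question, one common pattern is an append-only log of every attempted action: who, what, when, the authorization decision, and why. A minimal sketch, assuming JSON-lines records (the field names and `audit_entry` helper are illustrative, not from any particular library):

```python
import json
import time
import uuid

def audit_entry(agent_id: str, tool: str, args: dict,
                decision: str, reason: str) -> str:
    """Build one audit record as a JSON line.

    Recording who, what, when, the decision, and the reason makes it
    possible to reconstruct the agent's behavior after the fact.
    """
    record = {
        "id": str(uuid.uuid4()),   # unique record id
        "ts": time.time(),         # unix timestamp
        "agent": agent_id,
        "tool": tool,
        "args": args,
        "decision": decision,      # e.g. "allowed" | "denied" | "pending_approval"
        "reason": reason,
    }
    return json.dumps(record, sort_keys=True)

# In production each line would be appended to durable storage
# (a file, log pipeline, or database), not just held in memory.
line = audit_entry("agent-7", "send_email",
                   {"to": "ops@example.com"}, "denied",
                   "high-risk tool called without human approval")
```

Logging every *attempt*, including denied ones, is what answers "what did it try to do and why was it stopped?", which is exactly the trail the post says is missing.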
