Big Questions: Artificial Intelligence, Automation, and Governance
Who should control powerful AI systems, and what must be regulated to protect society?
Source Articles
RAND Corporation (United States) | Mar 25, 2026
RAND Corporation (United States) | Mar 26, 2026
The Lancet (United Kingdom) | Apr 11, 2026
How to Interpret These Statements
This context may be machine-translated, and quality may vary.
Please vote on your current view first. The related articles above are optional current-events context, and the references in this box are further optional background; this is not a test. After you participate, we will show additional perspectives and analysis (a consensus map and a journey summary).
The references aim to provide institutional diversity (e.g., official data, legislative bodies, international organizations, and independent research). Inclusion does not imply endorsement; external sites follow their own editorial standards.
Your vote records what you think today; you do not need to read the optional references below first. They explain how we framed the statements. After voting, use the consensus analysis (once unlocked) and your journey summary for follow-up reading.
Topic complete!
You have voted on every statement in this topic (7).
AI developers should bear strict legal liability for foreseeable harms caused by their deployed systems, as manufacturers do for physical products.
Generative AI systems capable of producing realistic synthetic media should be required to embed detectable watermarks in their outputs.
Global AI safety governance requires a binding multilateral treaty process, not voluntary national commitments.
Frontier AI systems capable of causing large-scale harm should be required to pass independent safety evaluations before public deployment.
AI-driven automation will displace significantly more jobs than it creates this decade, requiring fundamental redesign of social insurance systems.
Open-source release of the most powerful AI model weights creates security risks that outweigh the benefits of public access.
Training AI systems on personal data without explicit opt-in consent should be prohibited under data protection law.