Big questions: Artificial intelligence, automation, and governance
Who should control powerful AI systems, and what must be regulated to protect society?
Source Articles
RAND Corporation (United States) | Mar 25, 2026
RAND Corporation (United States) | Mar 26, 2026
The Lancet (United Kingdom) | Apr 11, 2026
How to read these statements
Vote on your current views first: your vote records what you think today, and you are not expected to read anything beforehand. The linked articles above offer optional timely context, and the references in this box are further optional background that explains how we frame statements; this is not a test. References aim for institutional variety (for example, official data, legislatures, international bodies, and independent research); inclusion is not endorsement, and external sites set their own editorial standards. After you participate, we surface more perspectives and analysis: use the Consensus analysis (when it unlocks) and your journey recap for follow-up reading.
Focus on governance and trade-offs: safety, innovation, jobs, and democratic oversight.
Optional references: EU AI Act (official text via EUR-Lex) · UK AI Safety Institute · OECD.AI Policy Observatory · UN AI Advisory Body final report · Acemoglu, "The Simple Macroeconomics of AI" (NBER working paper)