AI benchmarks are broken. Here's what we need instead.
For decades, artificial intelligence has been evaluated through the question of whether machines outperform humans. From chess to advanced math, from coding to essay writing, the performance of AI models and applications is tested against that of individual humans completing tasks. This framing is seductive: An AI vs. human comparison on isolated problems with clear…
Source Articles
MIT Technology Review (United States) | Mar 31, 2026
Maintaining human-centric benchmarks is essential for ensuring AI systems remain accountable and aligned with human values.
Current benchmarks, despite their flaws, provide a familiar framework for understanding AI advancements and should not be discarded entirely.
Shifting focus from human comparison to task efficiency could drive innovation and prioritize AI's unique strengths.
Redefining AI performance metrics could lead to misinterpretation of capabilities, potentially causing public mistrust in AI technologies.
AI benchmarks should evolve beyond human comparisons to better reflect real-world applications and collaborative potential.
💡 How This Works
- Add Statements: Post claims or questions (10-500 characters)
- Vote: Agree, Disagree, or Unsure on each statement
- Respond: Add detailed pro/con responses with evidence
- Consensus: After enough participation, analysis reveals opinion groups and areas of agreement