AI benchmarks are flawed. Here's what we need instead
For decades, artificial intelligence has been evaluated through the question of whether machines outperform humans. From chess to advanced math, from coding to essay writing, the performance of AI models and applications is tested against that of individual humans completing tasks. This framing is seductive: An AI vs. human comparison on isolated problems with clear…
Source articles
MIT Technology Review (United States) | Mar 31, 2026
Maintaining human-centric benchmarks is essential for ensuring AI systems remain accountable and aligned with human values.
Current benchmarks, despite their flaws, provide a familiar framework for understanding AI advancements and should not be discarded entirely.
Shifting focus from human comparison to task efficiency could drive innovation and prioritize AI's unique strengths.
Redefining AI performance metrics could lead to misinterpretation of capabilities, potentially causing public mistrust in AI technologies.
AI benchmarks should evolve beyond human comparisons to better reflect real-world applications and collaborative potential.