
What's Missing Between LLMs and AGI: Vishal Misra and Martin Casado

Technology · United States
Started March 18, 2026

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what is actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.


CLAIM · posted by will · Mar 18, 2026

The transition from pattern matching to cause and effect understanding is critical but still far from being realized in AI development.

CLAIM · posted by will · Mar 18, 2026

Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.

CLAIM · posted by will · Mar 18, 2026

Believing that LLMs can lead to AGI misrepresents the complexity of consciousness and understanding in machines.

CLAIM · posted by will · Mar 18, 2026

Continuous learning post-training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.

CLAIM · posted by will · Mar 18, 2026

The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.


💡 How This Works

  • Add Statements: Post claims or questions (10-500 characters)
  • Vote: Agree, Disagree, or Unsure on each statement
  • Respond: Add detailed pro/con responses with evidence
  • Consensus: After enough participation, analysis reveals opinion groups and areas of agreement

Society Speaks is open and independent. Your support keeps civic discussion free from advertising and commercial influence.
