What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.
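The episode's central technical claim, that a transformer's next-token predictions shift in a precise, predictable way as context accumulates, echoes a Bayesian view of in-context learning. The Python sketch below is a minimal illustration of that general idea under that assumption, not Misra's actual experiments or method; the hypotheses and likelihood values are entirely hypothetical.

```python
# Minimal sketch (not Misra's method): Bayesian updating as a stand-in for
# the "precise, predictable prediction updates" described in the episode.
# All hypotheses and likelihood values below are hypothetical.

def bayes_update(prior, likelihoods):
    """Return the posterior P(h | evidence) from a prior P(h) and
    per-hypothesis likelihoods P(evidence | h)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two toy hypotheses about what process is generating the context.
posterior = {"repeating_pattern": 0.5, "random_tokens": 0.5}

# Each new observation nudges the posterior; the size and direction of
# the update are fully determined by the likelihoods, i.e. predictable.
evidence_stream = [
    {"repeating_pattern": 0.9, "random_tokens": 0.5},  # token fits the pattern
    {"repeating_pattern": 0.9, "random_tokens": 0.5},  # another fit
    {"repeating_pattern": 0.1, "random_tokens": 0.5},  # token breaks the pattern
]

for likelihoods in evidence_stream:
    posterior = bayes_update(posterior, likelihoods)
    print(posterior)
```

Running this shows the posterior climbing toward the pattern hypothesis on confirming tokens and falling back on a disconfirming one, the same qualitative behavior the episode attributes to transformers processing new information.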
Source article
Acquired Podcast (United States) | Mar 17, 2026
The transition from pattern matching to cause-and-effect understanding is critical but still far from being realized in AI development.
Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.
Believing that LLMs can lead to AGI misrepresents the complexity of consciousness and understanding in machines.
Continuous learning post-training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.
The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.
💡 How This Works
- Add Statements: Post claims or questions (10-500 characters)
- Vote: Agree, Disagree, or Unsure on each statement
- Respond: Add detailed pro/con responses with evidence
- Consensus: After enough participation, analysis reveals opinion groups and areas of agreement