
What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado

Technology
United States
Started March 18, 2026

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what he sees as actually required for AGI: the ability to keep learning after training, and the move from pattern matching to an understanding of cause and effect.
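The claim that transformers update predictions in a precise, mathematically predictable way echoes classical Bayesian belief updating. As a loose analogy only (this is not Misra's actual experimental setup), the sketch below shows a Beta-Bernoulli model whose prediction for the next observation updates deterministically with each new data point, and whose expected future prediction equals its current prediction (a martingale property):

```python
# Hedged sketch: Bayesian updating as an analogy for "precise,
# mathematically predictable" prediction updates. Assumes a simple
# Beta-Bernoulli model, NOT the transformer experiments from the episode.

def predictive(alpha, beta):
    """Probability that the next 0/1 observation is a 1 under Beta(alpha, beta)."""
    return alpha / (alpha + beta)

def update(alpha, beta, obs):
    """Exact posterior update after observing one 0/1 outcome."""
    return (alpha + obs, beta + (1 - obs))

alpha, beta = 1.0, 1.0              # uniform prior
for obs in [1, 1, 0, 1]:            # stream of new information
    alpha, beta = update(alpha, beta, obs)

p = predictive(alpha, beta)         # prediction after 3 ones, 1 zero
print(round(p, 3))                  # -> 0.667

# Martingale check: the expected prediction after one more observation
# equals the current prediction, i.e. updates are predictable on average.
expected_next = p * predictive(alpha + 1, beta) + (1 - p) * predictive(alpha, beta + 1)
print(abs(expected_next - p) < 1e-12)  # -> True
```

The martingale check is the key point: given the model's current belief, the average of its possible future predictions is exactly its present prediction, so each update is fully determined by the evidence received.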

🗳️ Join the conversation
5 statements to vote on • Your perspective shapes the analysis
📊 Progress to Consensus Analysis. Needed: 7+ participants, 20+ votes, 3+ votes per statement
  • Participants: 0/7
  • Statements (7+ recommended): 5/7
  • Total Votes: 0/20
💡 Progress updates live here. Final readiness is confirmed when all three requirements are met.

Your votes count

No account needed — your votes are saved and included in the consensus analysis. Create an account to track your voting history and add statements.

CLAIM Posted by will, Mar 18, 2026

The transition from pattern matching to cause and effect understanding is critical but still far from being realized in AI development.

Vote options for this statement: agree, disagree, or unsure
Vote to see results
CLAIM Posted by will, Mar 18, 2026

Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.

CLAIM Posted by will, Mar 18, 2026

Believing LLMs can lead to AGI misrepresents the complexity of consciousness and understanding in machines.

CLAIM Posted by will, Mar 18, 2026

Continuous learning after training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.

CLAIM Posted by will, Mar 18, 2026

The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.


💡 How This Works

  • Add Statements: Post claims or questions (10-500 characters)
  • Vote: Agree, Disagree, or Unsure on each statement
  • Respond: Add detailed pro/con responses with evidence
  • Consensus: After enough participation, analysis reveals opinion groups and areas of agreement

Society Speaks is open and independent. Your support keeps civic discussion free from advertising and commercial influence.

Support us