
What's Missing Between LLMs and AGI - Vishal Misra and Martin Casado

Technology
United States
Started March 18, 2026

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.
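The "precise, mathematically predictable" updating described above can be illustrated with a toy Bayesian predictor. This sketch is an assumption for illustration only, not Misra's actual experimental setup: it shows how a probabilistic predictor's estimate shifts by a mathematically determined amount with each new observation, which is the flavor of behavior the episode attributes to transformers processing new context.

```python
# Toy illustration (not Misra's actual method): a Bayesian estimator of a
# biased coin's P(heads), updated as each observation arrives. Each update
# is exact and predictable, analogous to the claim that transformers revise
# their predictions in a mathematically precise way given new context.

def posterior_mean(heads: int, tails: int,
                   alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of P(heads) under a Beta(alpha, beta) prior."""
    return (heads + alpha) / (heads + tails + alpha + beta)

estimates = []
heads = tails = 0
for outcome in [1, 1, 0, 1, 1, 1, 0, 1]:  # hypothetical observed sequence
    if outcome:
        heads += 1
    else:
        tails += 1
    estimates.append(posterior_mean(heads, tails))

# After 6 heads and 2 tails: (6 + 1) / (8 + 2) = 0.7
print(estimates[-1])  # → 0.7
```

Each step's output is fully determined by the evidence seen so far; the open question the episode raises is how far this kind of in-context updating can go without post-training learning or causal understanding.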

🗳️ Join the conversation
5 claims to vote on • Your perspective shapes the analysis

Your votes count

No account needed — your votes are saved and included in the consensus analysis. Create an account to track your voting history and add statements.

CLAIM Posted by: will Mar 18, 2026

The transition from pattern matching to cause-and-effect understanding is critical but still far from being realized in AI development.
CLAIM Posted by: will Mar 18, 2026

Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.
CLAIM Posted by: will Mar 18, 2026

Believing LLMs can lead to AGI underestimates the complexity of consciousness and understanding in machines.
CLAIM Posted by: will Mar 18, 2026

Continuous learning post-training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.
CLAIM Posted by: will Mar 18, 2026

The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.

💡 How This Works

  • Add Statements: Post claims or questions (10-500 characters)
  • Vote: Agree, Disagree, or Unsure on each statement
  • Respond: Add detailed pro/con responses with evidence
  • Consensus: After enough participation, analysis reveals opinion groups and areas of agreement

Society Speaks is open and independent. Your support keeps civic discussion free from advertising and commercial influence.

Support us