
Was zwischen LLMs und AGI fehlt - Vishal Misra & Martin Casado

Technology
United States
Started March 18, 2026

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training, and the shift from pattern matching to understanding cause and effect.

🗳️ Join the conversation
5 statements to vote on • Your perspective shapes the analysis
📊 Progress to Consensus Analysis Need: 7+ participants, 20+ votes, 3+ votes per statement
Participants 0/7
Statements (7+ recommended) 5/7
Total Votes 0/20
💡 Progress updates live here. Final readiness is confirmed when all three requirements are met.
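The readiness criteria above are simple threshold checks. A minimal sketch of how such a check could work, assuming the thresholds stated on this page (7+ participants, 20+ total votes, 3+ votes per statement); the function name and signature are illustrative, not the site's actual API:

```python
def ready_for_consensus(participants, total_votes, votes_per_statement,
                        min_participants=7, min_total_votes=20,
                        min_votes_each=3):
    """Return True once every stated threshold is met.

    votes_per_statement is a list with one vote count per statement.
    Threshold values mirror the requirements shown on the page.
    """
    return (participants >= min_participants
            and total_votes >= min_total_votes
            and all(v >= min_votes_each for v in votes_per_statement))

# Current page state: 0 participants, 0 votes across 5 statements.
print(ready_for_consensus(0, 0, [0, 0, 0, 0, 0]))  # False
```

All three conditions must hold simultaneously, which is why the page tracks them as separate progress bars.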

Your votes count

No account needed — your votes are saved and included in the consensus analysis. Create an account to track your voting history and add statements.

CLAIM Published by will Mar 18, 2026

The transition from pattern matching to cause and effect understanding is critical but still far from being realized in AI development.

CLAIM Published by will Mar 18, 2026

Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.

CLAIM Published by will Mar 18, 2026

Believing LLMs can lead to AGI misrepresents the complexity of consciousness and understanding in machines.

CLAIM Published by will Mar 18, 2026

Continuous learning post-training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.

CLAIM Published by will Mar 18, 2026

The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.


💡 How This Works

  • Add Statements: Post claims or questions (10-500 characters)
  • Vote: Agree, Disagree, or Unsure on each statement
  • Respond: Add detailed pro/con responses with evidence
  • Consensus: After enough participation, analysis reveals opinion groups and areas of agreement

Society Speaks is open and independent. Your support keeps civic discussion free from advertising and commercial influence.

Support us