What's Missing Between LLMs and AGI - Vishal Misra and Martin Casado
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.
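One way to make "precise, mathematically predictable" concrete, assuming the Bayesian framing Misra has used in his earlier work on in-context learning (the summary itself does not name the mechanism), is that each new token shifts the model's implicit posterior over candidate explanations h by Bayes' rule:

P(h | x_1, ..., x_t) ∝ P(x_t | h) · P(h | x_1, ..., x_{t-1})

On this reading, the experiments would be measuring how closely the transformer's next-token predictions track this update as context accumulates.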
Source
Acquired Podcast (United States) | Mar 17, 2026
Your votes count
No account needed: your votes are saved and included in the consensus analysis.
The transition from pattern matching to cause and effect understanding is critical but still far from being realized in AI development.
Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.
Believing that LLMs can lead to AGI misrepresents the complexity of consciousness and understanding in machines.
Continuous learning post-training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.
The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.
💡 How This Works
- Add Statements: Post claims or questions (10-500 characters)
- Vote: Agree, Disagree, or Unsure on each statement
- Respond: Add detailed pro/con responses with evidence
- Consensus: After enough participation, analysis reveals opinion groups and areas of agreement