
What's Missing Between LLMs and AGI - Vishal Misra and Martin Casado

Technology
United States
Started March 18, 2026

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training, and the move from pattern matching to understanding cause and effect.
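The episode summary doesn't spell out the math, but the "precise, predictable update" described is the kind of behavior a Bayesian model exhibits. As a hedged illustration only (a toy Bernoulli model, not the transformer's actual mechanism), here is how a prediction shifts by an exactly computable amount with each new observation:

```python
# Illustrative sketch: Bayesian posterior updating as an example of a
# mathematically predictable prediction update. This is NOT the transformer
# internals discussed in the episode, just a minimal analogue.

def beta_posterior_mean(heads: int, tails: int, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Posterior mean of a Bernoulli parameter under a Beta(alpha, beta) prior."""
    return (heads + alpha) / (heads + tails + alpha + beta)

# Each new binary "token" moves the prediction by a precise, closed-form amount.
obs = [1, 1, 0, 1]
heads = tails = 0
preds = []
for o in obs:
    heads += o
    tails += 1 - o
    preds.append(beta_posterior_mean(heads, tails))
# preds now holds the prediction after each observation:
# [2/3, 3/4, 3/5, 2/3]
```

The point of the analogy is only that each update is deterministic given the data seen so far, which is the sense in which the episode calls the behavior "mathematically predictable."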

🗳️ Join the conversation
5 statements to vote on • Your perspective shapes the analysis
📊 Progress to Consensus Analysis Need: 7+ participants, 20+ votes, 3+ votes per statement
Participants 0/7
Statements (7+ recommended) 5/7
Total Votes 0/20
💡 Progress updates live here. Final readiness is confirmed when all three requirements are met.
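The readiness thresholds above (7+ participants, 20+ total votes, 3+ votes per statement) can be expressed as a simple check. The function name and structure are illustrative, not the site's actual code:

```python
# Illustrative check of the page's stated consensus-analysis thresholds.

def consensus_ready(participants: int, total_votes: int, votes_per_statement: list[int]) -> bool:
    """True once all three stated requirements are met:
    7+ participants, 20+ total votes, and 3+ votes on every statement."""
    return (
        participants >= 7
        and total_votes >= 20
        and all(v >= 3 for v in votes_per_statement)
    )

# Current state shown on the page: 0 participants, 0 votes, 5 statements.
ready = consensus_ready(0, 0, [0, 0, 0, 0, 0])  # → False
```

Note that all three conditions must hold at once; many votes on a few statements won't trigger the analysis if any single statement is under 3 votes.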

Your votes count

No account needed — your votes are saved and included in the consensus analysis. Create an account to track your voting history and add statements.

CLAIM • Posted by will • Mar 18, 2026

The transition from pattern matching to cause and effect understanding is critical but still far from being realized in AI development.

Vote options for this statement: agree, disagree, or unsure
Vote to see results
CLAIM • Posted by will • Mar 18, 2026

Advancements in LLMs should not overshadow the ethical implications of creating systems that mimic human understanding.

CLAIM • Posted by will • Mar 18, 2026

Believing LLMs can lead to AGI misrepresents the complexity of consciousness and understanding in machines.

CLAIM • Posted by will • Mar 18, 2026

Continuous learning post-training is essential for AGI; LLMs must evolve beyond static models to truly replicate human intelligence.

CLAIM • Posted by will • Mar 18, 2026

The precise, predictable nature of LLMs showcases their potential as foundational tools for achieving AGI.


💡 How This Works

  • Add Statements: Post claims or questions (10-500 characters)
  • Vote: Agree, Disagree, or Unsure on each statement
  • Respond: Add detailed pro/con responses with evidence
  • Consensus: After enough participation, analysis reveals opinion groups and areas of agreement

Society Speaks is open and independent. Your support keeps civic discussion free from advertising and commercial influence.

Support us