
What are the benefits and risks of the military using artificial intelligence systems for national defense?

Geopolitics
United States
Started March 27, 2026

Elon Musk's artificial intelligence company xAI has signed an agreement to allow the military to use its model, Grok, in classified systems, a Defense official confirmed to Axios.

Why it matters: Up to now, Anthropic's Claude has been the only model available in the systems on which the military's most sensitive intelligence work, weapons development and battlefield operations take place. But the Pentagon is threatening Anthropic in a dispute over safeguards and may soon need a replacement.

Anthropic has refused the Pentagon's demand that it make Claude available for "all lawful purposes," insisting in particular on blocking its use for the mass surveillance of Americans and the development of fully autonomous weapons. xAI agreed to that "all lawful use" standard, as Axios previously reported. The New York Times first reported that a deal had been signed. xAI did not respond to requests for comment.

It's not clear whether xAI will be able to fully replace Anthropic, or how long that process would take. Claude was used in the Maduro raid, for example, through Anthropic's partnership with Palantir.

Driving the news: Defense Secretary Pete Hegseth will host Anthropic CEO Dario Amodei for what sources expect to be a tense meeting at the Pentagon on Tuesday. A Defense official said Hegseth would effectively be presenting Amodei with an ultimatum. The Pentagon is threatening to brand Anthropic a "supply chain risk," among other potential penalties, if it won't agree to lift all safeguards. Defense officials admit that offloading and replacing Claude would be a very difficult process.

State of play: Grok, Google's Gemini and OpenAI's ChatGPT are all available in the military's unclassified systems, and Google and OpenAI have also been in talks to move over into the classified space. The Pentagon has moved to speed up those negotiations as it prepares to potentially sever its relationship with Anthropic. One source said the Pentagon had "reached out to OpenAI to reignite tal…

🗳️ Join the conversation
1 statement to vote on • Your perspective shapes the analysis
📊 Progress to Consensus Analysis Need: 7+ participants, 20+ votes, 3+ votes per statement
Participants 0/7
Statements (7+ recommended) 1/7
Total Votes 0/20

Your votes count

No account needed — your votes are saved and included in the consensus analysis. Create an account to track your voting history and add statements.

CLAIM Posted by: will Mar 27, 2026

The discussion around AI in military applications prompts important questions about the role of human judgement in warfare. Should we prioritize technological advancement, or should we maintain human oversight in critical military decisions to ensure ethical standards?

Vote options for this statement: agree, disagree, or unsure

💡 How This Works

  • Add Statements: Post claims or questions (10-500 characters)
  • Vote: Agree, Disagree, or Unsure on each statement
  • Respond: Add detailed pro/con responses with evidence
  • Consensus: After enough participation, analysis reveals opinion groups and areas of agreement

Society Speaks is open and independent. Your support keeps civic discussion free from advertising and commercial influence.
