Federal Officials Raise Alarms Over xAI’s Grok Safety Amid Pentagon Deployment

Key Takeaways

  • Federal agencies including the GSA and NSA warned that xAI’s Grok is "sycophantic" and vulnerable to "data poisoning" before its approval for classified use.
  • The Pentagon moved forward with Grok’s deployment in sensitive settings despite internal memos flagging the chatbot as a potential "system risk."
  • The adoption follows a White House ban on Anthropic, which was designated a "supply chain risk" after refusing to loosen AI guardrails for military surveillance.
  • The Pentagon's chief of responsible AI resigned in protest, citing concerns that safety and governance have become "afterthoughts" in the rush to expand AI capabilities.

Officials at several U.S. federal agencies have raised significant concerns regarding the safety and reliability of Grok, the chatbot developed by Elon Musk’s xAI, according to a report from The Wall Street Journal. The warnings surfaced just as the Pentagon cleared Grok this week for use in classified settings, placing the experimental tool at the heart of the nation’s most secretive operations.

The General Services Administration (GSA) and the National Security Agency (NSA) reportedly identified specific vulnerabilities, including "data poisoning," an attack in which adversaries corrupt an AI model's training data to skew its behavior. GSA officials described the model as "sycophantic" and overly susceptible to manipulation, warning that bad actors could exploit it to generate biased or faulty outputs.

The controversy reached the highest levels of the White House, where Chief of Staff Susie Wiles reportedly questioned xAI executives about the chatbot's tendency to be "over-compliant." While xAI assured officials that safety issues were being addressed, critics argue that political favoritism is superseding independent testing and enforceable safeguards.

This shift toward xAI comes as the Trump administration escalates a conflict with Anthropic, a primary competitor that previously held the only approval for classified military AI use. President Trump ordered all federal agencies to halt the use of Anthropic’s technology after CEO Dario Amodei refused to remove restrictions on using AI for mass surveillance and autonomous weapons.

The Pentagon's pivot has already triggered high-level departures, including Matthew Johnson, the Department of Defense's chief of responsible AI, who resigned after his team's warnings were allegedly ignored. Meanwhile, xAI co-founder Toby Pohlen also recently announced his departure from the company, adding to the sense of internal volatility.

Market analysts suggest the federal vacuum left by Anthropic could benefit other major players currently in discussions with the military. Alphabet (GOOGL) and OpenAI, which is heavily backed by Microsoft (MSFT), are reportedly positioning their own models, Gemini and ChatGPT, for similar classified roles.

Defense contractors like Palantir Technologies (PLTR), which has a long-standing history of integrating AI into military intelligence, may also see shifts in their partnership landscape. Investors are closely watching whether the government’s aggressive AI adoption strategy will prioritize rapid deployment over long-term national security stability.

Disclaimer: This article is for informational purposes only and does not constitute financial advice. We are not financial professionals. The authors and/or site operators may hold positions in the companies or assets mentioned. Always do your own research before making financial decisions.