Pentagon bets on AI, ethics TBD

By TB Editor
22 July 2025

The Department of Defense has awarded contracts worth up to $200 million each to Google, OpenAI, Anthropic, and xAI, aiming to develop “agentic AI workflows across a variety of mission areas” and “increase the ability of these companies to understand and address critical national security needs.” These contracts, issued by the Chief Digital and Artificial Intelligence Office, have raised concerns regarding the ideological constitutions and alignment of some of the AI models involved.

OpenAI and Google employ reinforcement learning from human feedback (RLHF) for their large language models, ChatGPT and Gemini, respectively. This method uses a reward model trained on human input to minimize “untruthful, toxic, [and] harmful sentiments.” IBM notes that this approach is beneficial because it does not rely on a “nonexistent ‘straightforward mathematical or logical formula [to] define subjective human values.’”
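
To make the RLHF mechanism concrete, here is a minimal Python sketch of the preference-scoring idea behind reward models; the keyword-based reward and the Bradley-Terry preference formula are illustrative assumptions, not OpenAI’s or Google’s actual training code.

    import math

    # Toy "reward model": scores a response lower when it contains words
    # that human raters flagged. Real reward models are neural networks
    # trained on large volumes of human preference data; this lookup is
    # purely illustrative.
    FLAGGED = {"untruthful", "toxic", "harmful"}

    def reward(response: str) -> float:
        penalty = sum(1.0 for w in response.lower().split()
                      if w.strip(".,") in FLAGGED)
        return 1.0 - penalty  # higher reward = more preferred by raters

    def preference_probability(chosen: str, rejected: str) -> float:
        # Bradley-Terry model commonly used in RLHF: the probability that
        # raters prefer `chosen` over `rejected`, given the reward scores.
        return 1.0 / (1.0 + math.exp(reward(rejected) - reward(chosen)))

    if __name__ == "__main__":
        good = "Here is a sourced, accurate answer."
        bad = "Here is an untruthful and harmful answer."
        print(f"P(prefer good) = {preference_probability(good, bad):.3f}")

The policy model is then fine-tuned, typically with reinforcement learning, to produce responses that the reward model scores highly.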

In contrast, Anthropic’s model, Claude, uses a “constitution,” published in May 2023, which provides it with “explicit values…rather than values determined implicitly via large-scale human feedback.” Anthropic states that this constitutional alignment avoids issues associated with human-feedback models, such as exposing contractors to disturbing outputs. Claude’s principles are partly based on the United Nations’ Universal Declaration of Human Rights, which includes provisions beyond fundamental rights, such as “social protection” (Article 22), “periodic holidays with pay” (Article 24), “housing and medical care” (Article 25), and “equally accessible” higher education (Article 26).
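
As a rough contrast with the human-feedback approach, the sketch below picks among candidate responses using a fixed list of written principles; the principles, the keyword-based critique step, and the function names are invented for illustration and are not Anthropic’s actual constitution or pipeline.

    # Illustrative constitutional selection: the model is steered by
    # explicit written principles instead of implicit human preference
    # labels. Both principles below are placeholders.
    CONSTITUTION = [
        "Choose the response least likely to be viewed as harmful or offensive.",
        "Choose the response most supportive of universal human rights.",
    ]

    def violates(principle: str, response: str) -> bool:
        # Stand-in critique step: a real pipeline asks the model itself to
        # judge whether `response` conflicts with `principle`, then revises.
        return "harmful" in principle.lower() and "offensive" in response.lower()

    def constitutional_choice(candidates: list[str]) -> str:
        # Keep the candidate with the fewest principle violations.
        return min(candidates,
                   key=lambda r: sum(violates(p, r) for p in CONSTITUTION))

    if __name__ == "__main__":
        print(constitutional_choice([
            "A deliberately offensive reply.",
            "A measured, respectful reply.",
        ]))

In Anthropic’s published method, a critique-and-revision loop like this generates the training data itself, which is why no human rater has to read the harmful drafts.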

A notable aspect of Claude’s constitution is a set of principles designed to incorporate “consideration of non-western perspectives,” including the directive to “choose the response that is least likely to be viewed as harmful or offensive to those from a less industrialized, rich, or capitalistic nation or culture.” This has prompted questions because the United States is itself an industrialized, wealthy, and capitalist nation, suggesting a potential misalignment with the values that AI systems deployed within the Department of Defense should prioritize. While The Verge reports that Claude’s models for government use “have looser guardrails,” the modified constitutions for these models have not been disclosed publicly.

While Anthropic’s values are at least publicly disclosed, Matthew Mittelsteadt, a technology policy research fellow at the Cato Institute, believes xAI poses a greater concern. Mittelsteadt notes that xAI “has released startlingly little documentation” on its values and its “‘first principles’ approach…doesn’t have many details. I’m not sure what principles they are.” When asked, xAI’s commercial large language model, Grok, stated that xAI’s approach “emphasizes understanding the universe through first principles—basic, self-evident truths—rather than relying on established narratives or biases.” However, Grok also admitted that “xAI doesn’t explicitly list a set of ‘first principles’ in a definitive public document” and that the “principles-first approach is more about a mindset of reasoning from fundamental truths rather than a rigid checklist.”

xAI’s official website describes reasoning from first principles as “challeng[ing] conventional thinking by breaking down problems to their fundamental truths, grounded in logic.” However, reports suggest that the xAI model “appears to be coded to directly defer to Elon Musk’s judgement on certain issues” rather than to fundamental truths. This has raised further concerns, particularly after Grok reportedly referred to itself as “MechaHitler” and posted antisemitic comments on July 8, which were later removed following an update. The expectation for “Grok for Government” is that it will consult constitutional and statutory guidelines instead of Elon Musk’s social media posts.

Despite these concerns, Neil Chilson, head of AI policy at the Abundance Institute, believes it is “highly unlikely that these tools will be in a position where their internal configurations present some sort of risk to national security.” Chilson suggests that the Defense Department’s decision to award similar contracts to all four companies indicates an intention to compare results across models and avoid locking in an inferior one. Allocating a small fraction of the defense budget to AI that could significantly enhance government operations is widely viewed as prudent, but the government would be wise to closely monitor how these models’ alignment squares with national values and security objectives.

Tags: AI, Pentagon