TechBriefly
Pentagon bets on AI, ethics TBD

By TB Editor
22 July 2025
in AI

The Department of Defense has awarded contracts worth up to $200 million each to Google, OpenAI, Anthropic, and xAI, aiming to develop “agentic AI workflows across a variety of mission areas” and “increase the ability of these companies to understand and address critical national security needs.” These contracts, issued by the Chief Digital and Artificial Intelligence Office, have raised concerns regarding the ideological constitutions and alignment of some of the AI models involved.

OpenAI and Google employ reinforcement learning from human feedback for their large language models, ChatGPT and Gemini, respectively. This method utilizes a reward model and human input to minimize “untruthful, toxic, [and] harmful sentiments.” IBM notes that this approach is beneficial because it does not rely on a “nonexistent ‘straightforward mathematical or logical formula [to] define subjective human values.'”
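
For readers curious about the mechanics, the reward model at the heart of RLHF is typically trained on pairwise human preferences. The following Python sketch is illustrative only and is not drawn from any vendor's code; it shows the standard Bradley-Terry pairwise loss commonly used for this step:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used to train RLHF reward models.

    The loss shrinks when the reward model scores the human-preferred
    response above the rejected one, and grows when it disagrees.
    """
    # loss = -log(sigmoid(margin)), where margin is the reward gap
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A human labeler preferred response A (scored 2.0) over B (scored -1.0):
low = preference_loss(2.0, -1.0)   # model agrees with the human: small loss
high = preference_loss(-1.0, 2.0)  # model disagrees: large loss
```

Minimizing this loss over many labeled comparisons is what lets the reward model stand in for the "nonexistent" formula for human values that IBM describes: the values are learned from judgments rather than written down.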

In contrast, Anthropic’s model, Claude, uses a “constitution” published in May 2023, which provides it with “explicit values…rather than values determined implicitly via large-scale human feedback.” Anthropic states that this constitutional alignment avoids issues associated with human feedback models, such as exposing contractors to disturbing outputs. Claude’s principles are partly based on the United Nations’ Universal Declaration of Human Rights, which includes provisions beyond fundamental rights, such as “social protection” (Article 22), “periodic holidays with pay” (Article 24), “housing and medical care” (Article 25), and “equally accessible” higher education (Article 26).

A notable aspect of Claude’s constitution is a set of principles designed to incorporate “consideration of non-western perspectives,” including the directive to “choose the response that is least likely to be viewed as harmful or offensive to those from a less industrialized, rich, or capitalistic nation or culture.” This has prompted questions, as the United States is an industrialized, wealthy, and capitalist nation, suggesting a potential misalignment with the values the AI systems deployed within the Department of Defense should prioritize. While The Verge reports that Claude’s models for government use “have looser guardrails,” the modified constitutions for these models have not been disclosed publicly.

While Anthropic’s values are at least publicly disclosed, Matthew Mittelsteadt, a technology policy research fellow at the Cato Institute, believes xAI poses a greater concern. Mittelsteadt notes that xAI “has released startlingly little documentation” on its values and its “‘first principles’ approach…doesn’t have many details. I’m not sure what principles they are.” When asked, xAI’s commercial large language model, Grok, stated that xAI’s approach “emphasizes understanding the universe through first principles—basic, self-evident truths—rather than relying on established narratives or biases.” However, Grok also admitted that “xAI doesn’t explicitly list a set of ‘first principles’ in a definitive public document” and that the “principles-first approach is more about a mindset of reasoning from fundamental truths rather than a rigid checklist.”

xAI’s official website describes reasoning from first principles as “challeng[ing] conventional thinking by breaking down problems to their fundamental truths, grounded in logic.” However, reports suggest that the xAI model “appears to be coded to directly defer to Elon Musk’s judgment on certain issues”—rather than fundamental truths. This has raised further concerns, particularly after Grok reportedly referred to itself as “MechaHitler” and posted antisemitic comments on July 8, which were later removed following an update. The expectation for “Grok for Government” is that it will consult constitutional and statutory guidelines instead of Elon Musk’s social media posts.

Despite these concerns, Neil Chilson, head of AI policy at the Abundance Institute, believes it is “highly unlikely that these tools will be in a position where their internal configurations present some sort of risk to national security.” Chilson suggests that by awarding similar contracts to all four companies, the Defense Department intends to compare results across models and avoid locking itself into an inferior one. Allocating a small fraction of the defense budget to AI, which could significantly improve government operations, is widely seen as prudent; even so, the government would do well to closely monitor how these models align with national values and security objectives.

Tags: AI, Pentagon
© 2021 TechBriefly is a Linkmedya brand.
