TechBriefly
OpenAI’s Sam Altman threatens EU exit amid AI regulation talks

OpenAI CEO Sam Altman supports the EU’s AI regulation, on the condition that he can continue to sell his models.

by Cenk Atlı
25 May 2023
in AI, news

OpenAI CEO Sam Altman is continuing his international outreach, building on the momentum of his recent appearances before the U.S. Congress. His approach overseas differs, however: in contrast with the comparatively AI-friendly climate in the United States, Altman has hinted at the possibility of relocating his company's operations if compliance with the EU’s rules proves unworkable.

Altman’s globetrotting journey has taken him from Lagos, Nigeria, to various European destinations, culminating in London, UK. Despite facing some protests, he has actively engaged with prominent figures from the tech industry, business, and policymaking, emphasizing the capabilities of OpenAI’s ChatGPT language model. Altman is seeking to rally support for pro-AI regulation while expressing discontent with the European Union’s definition of “high-risk” systems, as he discussed during a panel at University College London.

The European Union’s proposed AI Act introduces a three-tiered classification for AI systems based on risk levels. Instances of AI that present an “unacceptable risk” by violating fundamental rights include social scoring systems and manipulative social engineering AI. On the other hand, “high-risk AI systems” must adhere to comprehensive standards of transparency and oversight, tailored to their intended use.

OpenAI CEO, Sam Altman, testifies on AI’s risks and opportunities before Senate Subcommittee

Sam Altman expressed concerns about the current draft of the law, stating that both ChatGPT and GPT-4 could potentially fall under the high-risk category, which would subject them to specific compliance requirements. Altman emphasized OpenAI’s intention to strive for compliance but acknowledged technical limitations that may affect its ability to do so. As reported by Time, he stated, “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”

EU’s AI Act gets updated as AI keeps improving

Originally aimed at addressing concerns related to China’s social credit system and facial recognition, the AI Act has encountered new challenges with the rise of OpenAI and other startups. The EU subsequently introduced provisions in December targeting “foundation models,” such as the large language models (LLMs) powering AI chatbots like ChatGPT. Recently, a European Parliament committee approved these updated regulations, which enforce safety checks and risk management.

In contrast to the United States, the EU has shown a greater inclination to scrutinize OpenAI. The European Data Protection Board has been actively monitoring ChatGPT’s compliance with privacy laws. However, it’s important to note that the AI Act is still subject to potential revisions, which likely explains Altman’s global tour, seeking to navigate the evolving landscape of AI regulations.

Altman reiterated familiar points from his recent Congressional testimony, expressing both concerns about AI risks and recognition of its potential benefits. He advocated for regulation, including safety requirements and a governing agency for compliance testing. Altman called for a regulatory approach that strikes a balance between European and American traditions.

As ChatGPT’s user base keeps growing, the need to reconcile with government regulations increases

However, Altman cautioned against regulations that could limit user access, harm smaller companies, or impede the open-source AI movement. OpenAI’s increasingly closed stance, which it attributes to competition, contrasts with its earlier commitment to openness. It is worth noting that new regulations could also work in OpenAI’s favor: beyond providing a framework for accountability, compliance checks would raise the cost of developing new AI models, giving the company an advantage in the competitive AI landscape.

Several countries have imposed bans on ChatGPT, Italy among them. Italy’s data protection authority lifted its ban after OpenAI enhanced users’ privacy controls. OpenAI may need to continue addressing concerns and making concessions to maintain favorable relationships with governments worldwide, especially given its base of over 100 million active ChatGPT users.

Why do governments act cautiously over AI chatbots?

Governments may impose bans on ChatGPT or similar AI models for several reasons:

  1. Misinformation and Fake News: AI models like ChatGPT can generate misleading or false information, contributing to the spread of misinformation. Governments may impose bans to prevent the dissemination of inaccurate or harmful content.
  2. Inappropriate or Offensive Content: AI models have the potential to generate content that is inappropriate, offensive, or violates cultural norms and values. Governments may ban ChatGPT to protect citizens from encountering objectionable material.
  3. Ethical Concerns: AI models raise ethical questions related to privacy, consent, and bias. Governments may impose bans to address concerns about data privacy, the potential misuse of personal information, or the perpetuation of biases in AI-generated content.
  4. Regulatory Compliance: AI models must adhere to existing laws and regulations. Governments may impose bans if they find that ChatGPT fails to meet regulatory requirements, such as data protection or content standards.
  5. National Security and Social Stability: Governments may perceive AI models as a potential threat to national security or social stability. They may impose bans to prevent the misuse of AI technology for malicious purposes or to maintain control over information flow.

It is worth noting that the specific reasons for government-imposed bans on ChatGPT can vary across jurisdictions, and decisions are influenced by a combination of legal, ethical, societal, and political factors. For some current risks and undesired consequences of AI chatbots, check these latest articles:

  • ChatGPT risks exposed: Understanding the potential threats
  • The Dark Side of ChatGPT: The human cost of AI’s success
Tags: ChatGPT, featured, OpenAI

© 2021 TechBriefly is a Linkmedya brand.