TechBriefly
Meta AI chatbot exposes user data publicly

By TB Editor
17 June 2025
in AI, Security

Meta’s latest AI chatbot has sparked a significant privacy controversy after it was discovered that the tool’s default settings were broadcasting user interactions publicly. This revelation, reported by major news outlets this week, has exposed a wide range of sensitive information, from medical queries to legal concerns, without users’ explicit consent.

The AI chatbot, launched earlier in 2025, automatically sets all user interactions to “public” unless individuals actively adjust their privacy settings. This design choice has led to numerous users, including elderly individuals and children, unknowingly sharing highly personal information with a wide audience. Examples include users asking about genital injuries, young people seeking guidance on gender transitions, and individuals requesting help with legal matters, such as cooperating with authorities in exchange for reduced criminal sentences.

These public posts often included usernames and profile pictures, directly linking the queries to users’ social media accounts. This transforms private medical anxieties and legal troubles into permanent, publicly accessible records.

Meta included a pop-up warning stating, “Prompts you post are public and visible to everyone… Avoid sharing personal or sensitive information.” Critics argue the warning was insufficient: many users did not realize they were publishing to a public feed, not least because few expect AI chatbot conversations to surface on a social-media-style stream. Meta’s press release announcing the feature described “a Discover feed, a place to share and explore how others are using AI,” framing the public sharing of private conversations as a feature rather than a flaw.

The incident has ignited a broader debate about AI privacy and the potential for these tools to expose sensitive user data. The Electronic Frontier Foundation (EFF) warns that AI chatbots can inadvertently reveal personal information through “model leakage”: the risk that an AI model, through its training and operation, discloses information about the data it was trained on or the individuals who interacted with it.

Beyond the Meta incident, concerns about AI privacy are widespread. A 2024 National Cybersecurity Alliance survey revealed that 38% of employees share sensitive work information with AI tools without their employer’s permission. This highlights the potential for corporate secrets and confidential data to be compromised through the use of AI chatbots.

Data protection authorities are also grappling with the implications of AI privacy. The Dutch Data Protection Authority, for example, has received multiple breach notifications from companies whose employees inputted patient medical data and customer addresses into AI chatbots. These incidents underscore the risk of violating privacy regulations and exposing sensitive personal information.

Even AI services that claim to offer better privacy protections are not immune to concerns. Anthropic’s Claude claims stronger default protections, while ChatGPT requires paid subscriptions to guarantee data isn’t used for training. However, these policies can be changed at any time, potentially allowing companies to retroactively access years of stored conversations. This leaves users reliant on the trustworthiness of profit-driven corporations to resist the temptation of monetizing vast troves of intimate user data.

Recent data breaches have further highlighted the vulnerability of AI systems. OpenAI experienced a data breach that exposed internal discussions, while over one million DeepSeek chat records were left exposed in an unsecured database. These incidents demonstrate the potential for sensitive information stored within AI systems to be compromised through security vulnerabilities.

The MIT Technology Review has warned that we are heading toward a security and privacy “disaster” as AI tools become increasingly integrated into daily life. Millions of users are sharing medical anxieties, work secrets, and personal challenges with AI chatbots, creating permanent records that could be exposed, sold, or subpoenaed. The Meta incident has merely brought to light what many believe is a common practice among AI companies: harvesting intimate conversations for profit while users bear all the risk.

While GDPR violations can draw fines of up to €20 million or 4% of global annual revenue, whichever is higher, enforcement against AI companies remains limited. Moreover, neither the GDPR nor the CCPA adequately addresses how personal information is handled in AI training data or model outputs.
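The “whichever is higher” fine cap from GDPR Article 83(5) can be sketched as a one-line computation (an illustrative sketch only, not legal advice; the function name is ours):

```python
def max_gdpr_fine(annual_global_revenue_eur: float) -> float:
    """Upper bound of a GDPR Art. 83(5) administrative fine in euros.

    The cap is EUR 20 million or 4% of total worldwide annual
    turnover of the preceding financial year, whichever is HIGHER.
    """
    return max(20_000_000.0, 0.04 * annual_global_revenue_eur)

# For a company with EUR 100 billion in annual revenue, the 4% branch
# dominates, so the theoretical cap is EUR 4 billion.
print(max_gdpr_fine(100e9))
```

For smaller firms the flat €20 million floor applies instead, which is why the 4% figure mostly matters for companies of Meta’s scale.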

The Meta chatbot incident serves as a stark reminder that any information shared with an AI chatbot today is potentially vulnerable to future exposure. This could occur through corporate policy changes, security breaches, or legal demands. While Meta’s public feed disaster is a highly visible example, it underscores a broader issue of AI companies collecting and potentially misusing sensitive user data. At least Meta users can see their embarrassing questions posted publicly and try to delete them. The rest of us have no idea what’s happening to our conversations.

Tags: chatbot, data breach, Meta AI
© 2021 TechBriefly is a Linkmedya brand.