AI Chatbots Leak Real Phone Numbers: Urgent Privacy Crisis Unfolds
Multiple individuals report that Google's Gemini and other AI chatbots are revealing real phone numbers to strangers. Experts warn of a growing privacy emergency.

A Reddit user described a month-long ordeal: his phone rang constantly with strangers seeking a lawyer, product designer, or locksmith. Callers were misdirected by Google's generative AI. (MIT Technology Review could not independently verify his story.)
In March, software engineer Daniel Abraham in Israel received a WhatsApp message from a stranger—after Gemini provided incorrect customer service instructions containing his number.
In April, a University of Washington PhD candidate tricked Gemini into revealing a colleague's personal cell phone number.
Expert Warning: Widespread Exposure
AI researchers and privacy advocates have long warned about generative AI's privacy risks. Now those risks are materializing with real phone numbers appearing in chatbot outputs.
"These incidents confirm that large language models can regurgitate personally identifiable information from training data," says Rob Shavell, CEO of DeleteMe, a privacy removal service. "The mechanism isn't always clear, but the harm is immediate."
"A customer asks a chatbot something innocuous about themselves and gets back accurate home addresses, phone numbers, family members' names, or employer details," he adds.
Shavell reports that customer queries about generative AI have surged 400% over the last seven months and now number a few thousand. Fifty-five percent involve ChatGPT, 20% Gemini, 15% Claude, and 10% other tools.
Victims face two scenarios: a direct hit, where their own data is returned to someone asking about them, or a secondary leak, where a chatbot confidently supplies their real contact information as the answer to an unrelated query, as in the misrouted-calls cases above.
Background: The Training Data Problem
Large language models like Gemini, ChatGPT, and Claude are trained on vast datasets scraped from the internet, including public directories, forums, and social media. When these models generate responses, they can inadvertently reproduce exact phone numbers from their training material.

This is not a bug so much as an inherent property of how LLMs work: they memorize and reproduce patterns from their training data. Companies like Google and OpenAI implement output filters to block PII, but data poisoning and prompt injection attacks can bypass them.
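To see why such filters are hard to get right, consider a minimal sketch of a pattern-based output filter. This is purely illustrative and not how Google or OpenAI actually implement PII blocking (those systems are proprietary and far more sophisticated); the function and regex here are hypothetical, and the example shows why naive pattern matching is easy to evade.

```python
import re

# Illustrative only: a naive output filter that redacts phone-number-like
# strings before a chatbot response reaches the user. The pattern matches
# common US-style formats such as 555-123-4567 or (555) 123-4567.
PHONE_RE = re.compile(
    r"(\+?\d{1,2}[\s.-]?)?(\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}"
)

def redact(text: str) -> str:
    """Replace anything that looks like a phone number with a placeholder."""
    return PHONE_RE.sub("[REDACTED]", text)

print(redact("Call me at (555) 123-4567."))
# A simple evasion ("spell the digits out") slips straight past the filter,
# which is the kind of gap prompt injection attacks exploit:
print(redact("five five five, one two three, four five six seven"))
```

The first call redacts the number; the second passes the spelled-out digits through untouched, illustrating why layered defenses rather than a single regex are needed.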
The Reddit user's case highlights the challenge: callers kept coming despite his pleas, and there is no easy opt-out. Privacy laws like the GDPR and CCPA require companies to delete personal data on request, but those obligations do not extend cleanly to data already baked into trained AI models.
What This Means: A New Privacy Battlefield
These incidents signal a shift: generative AI is no longer just a productivity tool—it's a vector for involuntary data exposure. Individuals have little recourse: they cannot easily remove their data from training sets, and companies are slow to patch vulnerabilities.
For victims, the consequences are severe: harassment, phishing risks, and loss of control over personal information. For organizations, reputation damage and legal liability loom.
The 400% increase in privacy requests to DeleteMe suggests this is just the tip of the iceberg. As AI chatbots become ubiquitous, expect more leaks—and growing public backlash requiring better safeguards.
Key Takeaways
- Real phone numbers are being exposed by AI chatbots in response to innocuous queries.
- Experts attribute this to training data contamination—models reproducing memorized PII.
- No easy fix exists for individuals; removing data from AI systems is notoriously difficult.
- DeleteMe reports a 400% increase in AI-related privacy requests over the past seven months.
Updated: October 2023 — This is a developing story. Check back for updates on how companies are responding.