What started as an innocent trend of turning selfies into adorable “Studio Ghibli-style” AI images has taken a sinister turn. AI-powered tools, once celebrated for artistic creativity, are now being exploited to craft fake identities, forge documents, and plan digital scams. This isn’t science fiction. It’s happening right now, and India is already feeling the ripple effects. AI tools like ChatGPT and image generators have captured the public imagination.
But while most users explore them for productivity and entertainment, cybercriminals have repurposed them for deception. By combining text-based AI prompts with image manipulation, fraudsters are generating shockingly realistic fake IDs, especially Aadhaar and PAN cards.
The Rise of AI-Fueled Scams
Using minimal details such as name, date of birth, and address, attackers have been able to produce near-perfect replicas of official identity documents. Social media platforms like X (formerly Twitter) have been flooded with examples. One user, Yaswanth Sai Palaghat, raised the alarm:
“ChatGPT is generating fake Aadhaar and PAN cards instantly, which is a serious security risk. This is why AI should be regulated to some extent.”

Another user, Piku, shared a chilling revelation:
“I asked AI to generate an Aadhaar card with just a name, date of birth, and address… and it created a nearly perfect copy. Now anyone can make a fake version… We often discuss data privacy, but who’s selling these Aadhaar and PAN card datasets to AI companies to develop such models?”
While AI tools don’t draw on any individual’s actual personal information, the accuracy with which they mimic formats, fonts, and layout styles suggests that they have been exposed to real-world documents, possibly through public leaks or open-source training materials. The Airoli Aadhaar incident is one leak that could have supplied a template for such operations.
Hackers are also coupling these digital forgeries with real data scavenged from discarded papers, old printers, or e-waste dumps. The result? Entire fake identities that can pass basic verification, enabling SIM card fraud, fake bank accounts, rental scams, and more.
Let that sink in: the same tools that generate anime-style selfies are now being weaponized to commit identity theft.
The Viral Shreya Ghoshal “Leak” That Wasn’t
While document fraud is worrying, misinformation and phishing campaigns are evolving with similar sophistication. Just last week, the Indian internet was abuzz with a supposed “leak” involving popular playback singer Shreya Ghoshal. Fans were stunned by headlines hinting at courtroom controversies and a career-ending scandal. But it was all fake.
According to cyber intelligence analyst Anmol Sharma, the leak was never real—it was a link. Sharma tracked the viral content to newly created scam websites posing as news outlets, such as replaceyourselfupset.run and faragonballz.com.
“These websites were set up to look like credible news sources but were actually redirecting people to phishing pages and shady investment scams,” he explained.

These sites mimicked trusted media layouts and used AI-generated images of Ghoshal behind bars or in tears to evoke emotional responses. The goal? To drive traffic to malicious domains that steal personal data or push crypto scams under fake brands like Lovarionix Liquidity.
Fake Doctors, Real Deaths
In an even more harrowing case, a man impersonating renowned UK-based cardiologist Dr. N John Camm performed over 15 heart surgeries at a respected hospital in Madhya Pradesh. Identified as Narendra Yadav, the impersonator fooled staff and patients alike at Mission Hospital in Damoh, leading to multiple patient deaths between December 2024 and February 2025.
According to official records, at least two fatalities have been linked to Yadav’s actions. Victims’ families, including Nabi Qureshi and Jitendra Singh, have recounted heartbreaking experiences of aggressive surgeries and a doctor who vanished soon afterward.
While the case is still under investigation, it highlights the terrifying extent to which digital impersonation—possibly aided by fake credentials or manipulated documents—can be taken offline, resulting in real-world harm.
A Need for Privacy-Conscious AI Use
The growing misuse of AI has sparked concern among cybersecurity experts. Ronghui Gu, founder of CertiK, warns:
“Users should approach AI-based image generators with a healthy level of caution, particularly when it comes to sharing biometric information like facial images. Many of these platforms are storing user data to train their models, and without transparent policies, there’s no way to know whether images are being repurposed or shared with third parties.”
The warning extends beyond image data. As AI tools become more integrated into daily applications—from onboarding processes to document verification—the risk of misuse rises, especially in jurisdictions with weak data governance.
Ronghui Gu advises users to:
- Thoroughly review privacy policies before uploading data.
- Avoid sharing high-resolution or identifiable images.
- Use pseudonyms or secondary email addresses.
- Ensure the platform complies with data protection laws like GDPR or CCPA.
“Privacy-conscious usage requires a proactive approach and an understanding that convenience should never come at the cost of control over personal data,” Gu added.
A HiddenLayer report reinforces this, finding that 77% of companies using AI have already identified breaches of their AI systems, potentially exposing sensitive customer data. The takeaway? Even legitimate use of AI tools carries hidden risks, especially if the backend systems aren’t secure.
A New Age of Cybercrime — Where a Selfie Starts the Scam
What began as playful AI-generated art is now being hijacked for fraud, identity theft, and misinformation. The same tools that power creativity are now powering chaos—and cybercriminals are getting smarter by the day.
India’s digital ecosystem is becoming ground zero for these AI-driven scams. And the scariest part? This is just the beginning.
We can’t afford to marvel at the tech while ignoring its darker edge. Regulators must move beyond lip service. Tech companies must be held accountable. And cybersecurity professionals need to treat generative AI not as a novelty, but as a real threat vector.
Because in this era, even something as harmless as a selfie could be weaponized.
And if we’re not paying attention now, we’ll be outrun by those who are.