Threat actors are hijacking Google search results for popular AI platforms like ChatGPT and Luma AI to deliver malware, in a sprawling black hat SEO campaign uncovered by Zscaler’s ThreatLabz.
The attack campaign is equal parts clever and insidious: attackers spin up AI-themed websites optimized for search engine ranking, then redirect unsuspecting visitors into a web of fingerprinting scripts, cloaked download pages, and payloads containing some of today’s most active infostealers—Vidar, Lumma, and Legion Loader.
The strategy? Ride the hype wave of AI search traffic to quietly drop malware onto the systems of curious users.
From Google Search to Malware in Three Clicks
The campaign kicks in when a user searches for terms like “Luma AI blog” or “Download ChatGPT 5” and lands on a well-ranked but fake AI website. These malicious sites are built using WordPress and are SEO-optimized to game search algorithms—classic black hat SEO in action.

Once loaded, the page deploys JavaScript that fingerprints the browser, collecting details like user agent, screen resolution, cookies, and click behavior, then sends this data (obfuscated with a simple XOR scheme) to a remote server at gettrunkhomuto[.]info.
From there, the server analyzes the visitor’s data and determines which final destination they should be sent to. It might be a ZIP archive packed with malware or a less-threatening PUA or adware site for fallback monetization.
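The fingerprint-then-route flow can be sketched roughly as follows. This is a hypothetical Python reconstruction, not the campaign's actual code: the real collection happens in in-browser JavaScript, and the XOR key, field names, and routing rules below are invented for illustration.

```python
import json

XOR_KEY = b"k3y"  # hypothetical key; the campaign's real key is not public

def xor_encode(data: bytes, key: bytes) -> bytes:
    """XOR each byte against a repeating key -- the lightweight
    obfuscation applied to fingerprint data before exfiltration.
    Applying it twice with the same key recovers the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Fingerprint fields of the kind the page's script collects
fingerprint = {
    "user_agent": "Mozilla/5.0 ...",
    "resolution": "1920x1080",
    "cookies_enabled": True,
    "clicks": 3,
}

payload = xor_encode(json.dumps(fingerprint).encode(), XOR_KEY)

def route_visitor(fp: dict) -> str:
    """Server-side decision (invented logic): visitors that look like
    real users get the malware archive; everything else falls back to
    adware/PUA monetization."""
    if fp.get("cookies_enabled") and fp.get("clicks", 0) > 0:
        return "malware_zip"
    return "adware_fallback"
```

The point of the XOR step is not secrecy but cheap evasion: the exfiltrated blob no longer looks like readable JSON to naive inspection, yet decodes trivially server-side.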
According to Zscaler, this redirection hub—gettrunkhomuto[.]info—has already handled over 4.4 million hits since January 2025.
Weaponized SEO + AWS + Fingerprinting = Obscurity
What makes this campaign particularly evasive is its use of legitimate infrastructure. The redirect logic is hosted on AWS CloudFront, lending credibility to what otherwise might raise red flags in security scanners. Add in advanced techniques like browser fingerprinting, anti-adblocker scripts, and conditional redirect logic based on IP geolocation, and you’ve got a sophisticated traffic laundering operation.
These deceptive scripts will even back off if ad blockers like uBlock or DNS filtering tools are detected. If not? Users get redirected to password-protected malware loaders disguised as software installers.
The Payloads: Vidar, Lumma, and Legion Loader
Once the user is redirected and interacts with the final download page, they’re handed malware tucked inside oversized (800MB+) installer packages. The bloated size is intentional—it helps evade sandbox environments and AV engines that skip file analysis past certain size thresholds.
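The size-inflation trick itself is simple: pad the installer with junk bytes until it crosses the ceiling past which scanners skip analysis. A minimal conceptual sketch—the 800 MB figure comes from the report, but the padding method and toy sizes here are assumptions:

```python
def inflate(payload: bytes, target_size: int) -> bytes:
    """Append null-byte padding so the file exceeds the size
    threshold past which many sandboxes and AV engines decline
    to analyze it. The executable content is unchanged."""
    if len(payload) >= target_size:
        return payload
    return payload + b"\x00" * (target_size - len(payload))

# Toy scale: pad a tiny stub to 1 KB (real samples exceed 800 MB)
bloated = inflate(b"MZ...installer stub...", 1024)
```

This is why size-based scan exemptions are a weak control: the padding is free for the attacker and costs the defender coverage.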
Vidar and Lumma Stealer, both well-known infostealers, arrive in NSIS installers containing a mix of fake .docm files, AutoIT scripts, and obfuscated loaders. Once executed, these loaders scan for antivirus processes like Avast, ESET, Sophos, or Webroot—detecting them with simple built-in Windows tools (tasklist and findstr) and terminating them before installing the final payload.
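The AV check boils down to enumerating running processes and matching vendor names—the same effect as piping tasklist through findstr on Windows. A hedged Python equivalent of that logic (the vendor names come from the article; the function and sample process list are illustrative):

```python
# Vendors named in the report; matching is case-insensitive,
# mirroring `tasklist | findstr /i ...` behavior on Windows
AV_PROCESS_HINTS = ("avast", "eset", "sophos", "webroot")

def find_av_processes(process_list: list) -> list:
    """Return process names containing any known AV vendor hint.
    The loader described in the article terminates such processes
    before dropping its final payload."""
    return [p for p in process_list
            if any(hint in p.lower() for hint in AV_PROCESS_HINTS)]

# e.g. a snapshot of running processes on the victim machine
procs = ["explorer.exe", "AvastSvc.exe", "chrome.exe"]
hits = find_av_processes(procs)
```

Defenders can invert the same idea: a process that enumerates the task list and immediately greps for security-vendor strings is itself a strong behavioral signal.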
The attack chain ends with browser credential theft, clipboard hijacking, and cryptocurrency wallet scraping—standard fare for Lumma and Vidar, but now with a far more sophisticated delivery mechanism.
Then there’s Legion Loader, which arrives nested in multiple layers of ZIP archives (yes, really). The final MSI installer masquerades as a utility suite with names like “Frankwo Utilities” or “Kraew Loop Sols.” In the background, the malware sideloads DLLs, hollows out legitimate processes like explorer.exe, and drops malicious browser extensions capable of siphoning off crypto.
It even includes a component named DataUploader.dll that phones home to the C2 server with system info and requests passwords for encrypted RAR payloads—again, designed to evade detection by avoiding hardcoded indicators.
SEO, AI, and the Future of Malware Distribution
This campaign’s novelty isn’t the malware—Vidar, Lumma, and Legion Loader have been around for years. What’s new is the delivery: threat actors are leaning hard into AI’s meteoric rise in popularity, weaponizing curiosity about generative models into a malware vector.
AI-related keywords now drive search traffic at a scale attackers can’t ignore, according to Deepen Desai, CISO at Zscaler. If they can get fake sites ranked for popular queries, that’s a guaranteed funnel to distribute malware at scale.
And he’s right. This is black hat SEO at its most strategic, using legitimate infrastructure like CloudFront and obfuscated payloads wrapped in trusted formats. With the rapid adoption of AI tools—and the general lack of scrutiny around unofficial downloads—this vector is likely to grow in the coming months.
So What Can You Do?
If you’re casually Googling “download ChatGPT desktop” or “Luma AI tools,” be wary of where those links lead. As always, avoid downloading tools from third-party sites, check URLs carefully, and watch for shady ZIP archives with passwords.
And for defenders: start flagging unusual traffic to gettrunkhomuto[.]info, monitor DNS queries related to AI-themed domain clusters, and consider integrating browser fingerprinting heuristics into sandbox evaluation.
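As a starting point for the DNS-monitoring suggestion, scanning resolver logs for the redirection hub can be as simple as the sketch below. Only the gettrunkhomuto[.]info indicator comes from the report; the log format (client IP, queried domain, record type) and the watchlist structure are assumptions to adapt to your own telemetry.

```python
# IOC from the Zscaler report (defanged in prose as gettrunkhomuto[.]info)
WATCHLIST = {"gettrunkhomuto.info"}

def flag_dns_queries(log_lines: list) -> list:
    """Return log lines whose queried domain (assumed to be the second
    whitespace-separated field) matches the watchlist, tolerating a
    trailing dot and mixed case."""
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1].lower().rstrip(".") in WATCHLIST:
            flagged.append(line)
    return flagged

logs = [
    "10.0.0.5 gettrunkhomuto.info A",
    "10.0.0.7 example.com A",
]
alerts = flag_dns_queries(logs)
```

Extending WATCHLIST with newly registered AI-themed lookalike domains from threat feeds turns this from a single-IOC check into the domain-cluster monitoring the article recommends.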
In the age of AI, malware doesn’t need to be smarter. It just needs to rank higher than you expected.