
    Hallucinated code, real threat: How slopsquatting targets AI-assisted development

    July 1, 2025

AI coding assistants like ChatGPT and GitHub Copilot have become a staple in the developer’s toolkit. They help dev teams move faster, automate boilerplate, and troubleshoot issues on the fly. But there’s a catch. These tools don’t always know what they’re talking about. Like other LLM applications, coding assistants sometimes hallucinate – confidently recommending software packages that don’t actually exist.

This isn’t just an annoying quirk – it’s a serious security risk that opens the door to malicious attacks. The technique that exploits it is known as “slopsquatting”, a twist on supply chain attacks in which bad actors register hallucinated package names suggested by AI tools and fill them with malicious code. The underlying phenomenon, also known as “AI package hallucination,” makes stronger security guardrails urgent and means developers and engineers should not over-rely on LLMs without properly validating coding instructions and recommendations.

The GenAI coding tool recommends the package, the developer installs it… and software vendors find themselves with purpose-built malicious code unwittingly integrated into their products.

    This article breaks down what AI package hallucinations are, how slopsquatting works, and how developers can protect themselves.

    What is an AI Package Hallucination?

    An AI package hallucination occurs when a large language model invents the name of a software package that looks legitimate, but doesn’t exist. For example, when one security researcher asked ChatGPT for NPM packages to help integrate with ArangoDB, it confidently recommended orango-db. 

    The answer sounded entirely plausible. But it was entirely fictional, until the researcher registered it himself as part of a proof-of-concept attack.
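A practical first step this example suggests: before trusting a suggested name, check whether it is registered at all. Below is a minimal sketch in Python (standard library only), assuming the public npm registry’s metadata endpoint at registry.npmjs.org, which returns a 404 for names that don’t exist:

```python
import urllib.error
import urllib.request

def npm_package_exists(name: str) -> bool:
    """Return True if `name` is registered on the public npm registry."""
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # unregistered name: a hallucination, or a squatting target
        raise             # other errors (rate limiting, outages) should surface

# The hallucinated name from the ArangoDB anecdote; it may return True today
# precisely because the researcher registered it as a proof of concept.
print(npm_package_exists("orango-db"))
```

Keep in mind that an existing entry only proves the name is registered, not that it is trustworthy; the vetting advice later in this article still applies.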

    These hallucinations happen because LLMs are trained to predict what “sounds right” based on patterns in their training data – not to fact-check. If a package name fits the syntax and context, the model may offer it up, even if it never existed.

    Because GenAI coding assistant responses are fluent and authoritative, developers tend to assume that they’re accurate. If they don’t independently verify the package, a developer might unknowingly install a package the LLM made up. And these hallucinations don’t just disappear – attackers are turning them into entry points.

    What is Slopsquatting?

Slopsquatting is a term coined by security researcher Seth Larson to describe a tactic that emerged with the first wave of AI-assisted coding: attackers exploit AI hallucinations by registering the non-existent package names that AI tools invent and filling them with malicious code. Awareness of slopsquatting has since grown, and countermeasures have become more common in package ecosystems, but the underlying risk remains.

Unlike its better-known counterpart typosquatting, which counts on users misreading or mistyping slight variations on legitimate package names and URLs, slopsquatting doesn’t rely on human error. It exploits machine error. When an LLM recommends a non-existent package like the above-mentioned orango-db, an attacker can then register that name on a public repository like npm or PyPI. The next developer who asks a similar question might get the same hallucinated package. Only now, it exists. And it’s dangerous.

    As Lasso’s research on AI package hallucination has shown, LLMs often repeat the same hallucinations across different queries, users, and sessions. This makes it possible for attackers to weaponize these suggestions at scale – and slip past even vigilant developers.

    Why This Threat Is Real – and Why It Matters

AI hallucinations aren’t just rare glitches; they’re surprisingly common. In a recent study of 16 code-generating AI models, nearly 1 in 5 package suggestions (19.7%) pointed to software that didn’t exist.

    This high frequency matters because every hallucinated package is a potential target for slopsquatting. And with tens of thousands of developers using AI coding tools daily, even a small number of hallucinated names can slip into circulation and become attack vectors at scale.

What makes slopsquatted packages especially dangerous is where they show up: in trusted parts of the development workflow – AI-assisted pair programming, CI pipelines, even automated security tools that suggest fixes. This means that what starts as an AI hallucination can silently propagate into production systems if it isn’t caught early.

    How to Stay Safe 

    You can’t prevent AI models from hallucinating – but you can protect your pipeline from what they invent. Whether you’re writing code or securing it, here’s my advice to stay ahead of slopsquatting:

    For Developers:

    Don’t assume AI suggestions are vetted. If a package looks unfamiliar, check the registry. Look at the publish date, maintainers, and download history. If it popped up recently and isn’t backed by a known organization, proceed with caution.
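As a concrete illustration of that kind of vetting, here is a short, hedged sketch (Python, standard library only) that pulls an npm package’s creation date, maintainers, and last-week download count. It assumes the public npm metadata endpoint (registry.npmjs.org) and download-stats endpoint (api.npmjs.org); the same idea carries over to PyPI and other registries:

```python
import json
import urllib.request
from datetime import datetime, timezone

def vet_npm_package(name: str) -> dict:
    """Gather basic trust signals for an npm package before installing it."""
    meta_url = f"https://registry.npmjs.org/{name}"
    dl_url = f"https://api.npmjs.org/downloads/point/last-week/{name}"

    with urllib.request.urlopen(meta_url, timeout=10) as resp:
        meta = json.load(resp)
    with urllib.request.urlopen(dl_url, timeout=10) as resp:
        downloads = json.load(resp).get("downloads", 0)

    created = datetime.fromisoformat(meta["time"]["created"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days

    return {
        "name": name,
        "age_days": age_days,                                   # brand-new packages deserve extra scrutiny
        "maintainers": [m["name"] for m in meta.get("maintainers", [])],
        "weekly_downloads": downloads,                          # little or no usage history is a red flag
    }

print(vet_npm_package("lodash"))  # a well-known package, shown for comparison
```

A package that appeared days ago, has a single unknown maintainer, and shows negligible downloads is exactly the profile a slopsquatted name tends to have.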

    For Security Teams:

    Treat hallucinated packages as a new class of supply chain risk. Monitor installs in CI/CD, add automated checks for newly published or low-reputation packages, and audit metadata before anything hits production.
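What such an automated check might look like in CI, as a rough sketch: read the dependencies from package.json and fail the build when any of them is missing from the registry or was first published more recently than a configurable threshold. The 30-day threshold and the hard-fail behavior are assumptions to tune per team:

```python
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone
from typing import Optional

MIN_AGE_DAYS = 30  # assumed policy: treat anything younger than 30 days as suspicious

def first_published_days_ago(name: str) -> Optional[int]:
    """Days since the package's first publish on npm, or None if it isn't registered."""
    try:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{name}", timeout=10) as resp:
            created = json.load(resp)["time"]["created"]
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise
    created_at = datetime.fromisoformat(created.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - created_at).days

def check_manifest(path: str = "package.json") -> int:
    with open(path) as f:
        manifest = json.load(f)
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}

    problems = []
    for name in deps:
        age = first_published_days_ago(name)
        if age is None:
            problems.append(f"{name}: not found on the registry (possible hallucination)")
        elif age < MIN_AGE_DAYS:
            problems.append(f"{name}: first published only {age} days ago")

    for line in problems:
        print(f"SUSPICIOUS DEPENDENCY: {line}")
    return 1 if problems else 0  # non-zero exit code fails the CI job

if __name__ == "__main__":
    sys.exit(check_manifest())
```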

    For AI Tool Builders:

    Consider integrating real-time validation to flag hallucinated packages. If a suggested dependency doesn’t exist or has no usage history, prompt the user before proceeding.
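For illustration only, a minimal validation hook of the kind described above. The function name and thresholds are hypothetical; the point is simply that an assistant can check the registry before surfacing a dependency and attach a warning when the package is missing or has essentially no usage history:

```python
import json
import urllib.error
import urllib.request

def validate_suggested_dependency(name: str, min_weekly_downloads: int = 100) -> str:
    """Return a short verdict an assistant could show alongside a suggested npm package.

    The threshold and wording are illustrative, not a standard.
    """
    try:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{name}", timeout=5):
            pass
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return f"'{name}' is not on the npm registry - possible hallucination, do not install."
        raise

    with urllib.request.urlopen(
        f"https://api.npmjs.org/downloads/point/last-week/{name}", timeout=5
    ) as resp:
        downloads = json.load(resp).get("downloads", 0)

    if downloads < min_weekly_downloads:
        return f"'{name}' exists but has almost no usage history - verify before installing."
    return f"'{name}' exists and is widely used."

print(validate_suggested_dependency("express"))
```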

    The Bottom Line

    AI coding tools and GenAI chatbots are reshaping how we write and deploy software – but they’re also introducing risks that traditional defenses aren’t designed to catch. Slopsquatting exploits the trust developers place in these tools – the assumption that if a coding assistant suggests a package, it must be safe and real.

    But the solution isn’t to stop using AI to code. It’s to use it wisely. Developers need to verify what they install. Security teams should monitor what gets deployed. And toolmakers should build in safeguards from the get-go. Because if we’re going to rely on GenAI, we need protections built for the scale and speed it brings.

    The post Hallucinated code, real threat: How slopsquatting targets AI-assisted development appeared first on SD Times.
