
    Hallucinated code, real threat: How slopsquatting targets AI-assisted development

    July 1, 2025

    AI coding assistants like ChatGPT and GitHub Copilot have become a staple in the developer’s toolkit. They help dev teams move faster, automate boilerplate, and troubleshoot issues on the fly. But there’s a catch. These tools don’t always know what they’re talking about. Like other LLM applications, coding assistants sometimes hallucinate – confidently recommending software packages that don’t actually exist.

    This isn’t just an annoying quirk; it’s a serious security risk, because attackers can exploit those made-up recommendations. The technique is known as “slopsquatting”, a twist on supply chain attacks in which bad actors register hallucinated package names suggested by AI tools and fill them with malicious code. The underlying phenomenon, also called “AI package hallucination,” creates an urgent need for stronger security guardrails – and for developers and engineers not to over-rely on LLMs without properly validating their coding instructions and recommendations.

    The GenAI coding tool recommends the package, the developer installs it… and software vendors find themselves with purpose-built malicious code unwittingly integrated into their products.

    This article breaks down what AI package hallucinations are, how slopsquatting works, and how developers can protect themselves.

    What is an AI Package Hallucination?

    An AI package hallucination occurs when a large language model invents the name of a software package that looks legitimate, but doesn’t exist. For example, when one security researcher asked ChatGPT for NPM packages to help integrate with ArangoDB, it confidently recommended orango-db. 

    The answer sounded entirely plausible. But the package was pure fiction – until the researcher registered it himself as part of a proof-of-concept attack.

    These hallucinations happen because LLMs are trained to predict what “sounds right” based on patterns in their training data – not to fact-check. If a package name fits the syntax and context, the model may offer it up, even if it never existed.

    Because GenAI coding assistant responses are fluent and authoritative, developers tend to assume that they’re accurate. If they don’t independently verify the package, a developer might unknowingly install a package the LLM made up. And these hallucinations don’t just disappear – attackers are turning them into entry points.

    What is Slopsquatting?

    Slopsquatting is a term coined by security researcher Seth Larson to describe a tactic that has emerged alongside AI-assisted coding: attackers exploit AI hallucinations by registering the non-existent package names that AI tools invent and filling them with malicious code. Awareness of slopsquatting is growing and package ecosystems are adding countermeasures, but the underlying risk remains.

    Unlike its better-known counterpart typosquatting, which counts on users mistyping or misreading slight variations on legitimate package names, slopsquatting doesn’t rely on human error. It exploits machine error. When an LLM recommends a non-existent package like the above-mentioned orango-db, an attacker can register that name on a public repository like npm or PyPI. The next developer who asks a similar question might get the same hallucinated package. Only now, it exists. And it’s dangerous.

    As Lasso’s research on AI package hallucination has shown, LLMs often repeat the same hallucinations across different queries, users, and sessions. This makes it possible for attackers to weaponize these suggestions at scale – and slip past even vigilant developers.

    Why This Threat Is Real – and Why It Matters

    AI hallucinations aren’t just rare glitches; they’re surprisingly common. In a recent study of 16 code-generating AI models, nearly 1 in 5 package suggestions (19.7%) pointed to software that didn’t exist.

    This high frequency matters because every hallucinated package is a potential target for slopsquatting. And with tens of thousands of developers using AI coding tools daily, even a small number of hallucinated names can slip into circulation and become attack vectors at scale.

    What makes slopsquatted packages especially dangerous is where they show up: in trusted parts of the development workflow – AI-assisted pair programming, CI pipelines, even automated security tools that suggest fixes. This means that what starts as an AI hallucination can silently propagate into production systems if it isn’t caught early.

    How to Stay Safe 

    You can’t prevent AI models from hallucinating – but you can protect your pipeline from what they invent. Whether you’re writing code or securing it, here’s my advice to stay ahead of slopsquatting:

    For Developers:

    Don’t assume AI suggestions are vetted. If a package looks unfamiliar, check the registry. Look at the publish date, maintainers, and download history. If it popped up recently and isn’t backed by a known organization, proceed with caution.
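    For Python dependencies, much of that vetting can be scripted against the public PyPI JSON API. The sketch below is illustrative only: the 90-day freshness threshold is an arbitrary assumption, and download statistics are not part of this API (they would need a separate service such as pypistats.org). It pulls a package’s first upload date and author/maintainer fields so that a recently created name with no known backing stands out.

```python
import json
import urllib.request
from datetime import datetime, timezone

def pypi_summary(name: str) -> dict:
    """Fetch basic vetting signals for a PyPI package: first upload date and author/maintainer fields."""
    with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10) as resp:
        data = json.load(resp)
    upload_times = sorted(
        f["upload_time_iso_8601"]
        for files in data["releases"].values()
        for f in files
    )
    return {
        "name": data["info"]["name"],
        "first_upload": upload_times[0] if upload_times else None,
        "author": data["info"].get("author"),
        "maintainer": data["info"].get("maintainer"),
        "home_page": data["info"].get("home_page"),
    }

def looks_too_new(summary: dict, max_age_days: int = 90) -> bool:
    """Flag packages whose first release is newer than `max_age_days` (arbitrary threshold)."""
    if summary["first_upload"] is None:
        return True
    first = datetime.fromisoformat(summary["first_upload"].replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - first).days < max_age_days

# Example: inspect an unfamiliar name before running `pip install`.
info = pypi_summary("requests")
print(info, looks_too_new(info))
```

    Note that a 404 from this endpoint means the name isn’t on PyPI at all – which, for a package an assistant just recommended, is itself a red flag.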

    For Security Teams:

    Treat hallucinated packages as a new class of supply chain risk. Monitor installs in CI/CD, add automated checks for newly published or low-reputation packages, and audit metadata before anything hits production.
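    One possible shape for such a CI check – a sketch under assumptions, not a drop-in gate: the 90-day threshold, reading package.json directly, and the fail-the-build policy are all illustrative choices – is to look every declared npm dependency up in the public registry and flag anything that is unregistered or was first published only days ago.

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

MAX_AGE_DAYS = 90  # arbitrary policy threshold; tune for your pipeline

def registry_created(package: str):
    """First-publish timestamp from the npm registry, or None if the name is unregistered."""
    try:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{package}", timeout=10) as resp:
            meta = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise
    return datetime.fromisoformat(meta["time"]["created"].replace("Z", "+00:00"))

def audit_package_json(path: str = "package.json") -> int:
    """Count dependencies that are unregistered or younger than MAX_AGE_DAYS."""
    with open(path) as fh:
        manifest = json.load(fh)
    deps = {**manifest.get("dependencies", {}), **manifest.get("devDependencies", {})}
    failures = 0
    for name in deps:
        created = registry_created(name)
        if created is None:
            print(f"FAIL {name}: not on the npm registry (possible hallucinated name)")
            failures += 1
        elif (datetime.now(timezone.utc) - created).days < MAX_AGE_DAYS:
            print(f"WARN {name}: first published {created:%Y-%m-%d}, treat as low-reputation")
            failures += 1
    return failures

if __name__ == "__main__":
    raise SystemExit(1 if audit_package_json() else 0)
```

    The same idea extends to lockfiles and to other ecosystems; the point is that reputation signals (first publish date, maintainers, provenance) are checked automatically rather than left to whoever happens to review the pull request.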

    For AI Tool Builders:

    Consider integrating real-time validation to flag hallucinated packages. If a suggested dependency doesn’t exist or has no usage history, prompt the user before proceeding.
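    One way a tool builder might wire that in – purely a sketch: the suggestion list and the invented package name below are hypothetical, and PyPI’s JSON endpoint stands in for whichever registry the tool targets – is to intercept the dependency names an assistant produces and require explicit user confirmation for any name the registry has never seen.

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """True if PyPI knows the package, False on a 404 from the JSON endpoint."""
    try:
        urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10).close()
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

def review_suggestions(suggestions: list) -> list:
    """Filter assistant-suggested package names, prompting the user for anything unknown to the index."""
    approved = []
    for name in suggestions:
        if not exists_on_pypi(name):
            answer = input(f"'{name}' does not exist on PyPI (possibly hallucinated). Keep it anyway? [y/N] ")
            if answer.strip().lower() != "y":
                continue
        approved.append(name)
    return approved

# Hypothetical assistant output; the second name is invented purely for illustration.
print(review_suggestions(["requests", "example-made-up-orm"]))
```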

    The Bottom Line

    AI coding tools and GenAI chatbots are reshaping how we write and deploy software – but they’re also introducing risks that traditional defenses aren’t designed to catch. Slopsquatting exploits the trust developers place in these tools – the assumption that if a coding assistant suggests a package, it must be safe and real.

    But the solution isn’t to stop using AI to code. It’s to use it wisely. Developers need to verify what they install. Security teams should monitor what gets deployed. And toolmakers should build in safeguards from the get-go. Because if we’re going to rely on GenAI, we need protections built for the scale and speed it brings.

    The post Hallucinated code, real threat: How slopsquatting targets AI-assisted development appeared first on SD Times.
