
    Hallucinated code, real threat: How slopsquatting targets AI-assisted development

    July 1, 2025

AI coding assistants like ChatGPT and GitHub Copilot have become a staple in the developer’s toolkit. They help dev teams move faster, automate boilerplate, and troubleshoot issues on the fly. But there’s a catch: these tools don’t always know what they’re talking about. Like other LLM applications, coding assistants sometimes hallucinate – confidently recommending software packages that don’t actually exist.

This isn’t just an annoying quirk – it’s a serious security risk that opens the door to malicious attacks. The technique that exploits it is known as “slopsquatting,” a twist on supply chain attacks in which bad actors register hallucinated package names suggested by AI tools and fill them with malicious code. The underlying behavior, also known as “AI package hallucination,” creates an urgent need for stronger security guardrails – and for developers and engineers not to over-rely on LLMs without properly validating coding instructions and recommendations.

The GenAI coding tool recommends the package, the developer installs it… and software vendors find themselves with purpose-built malicious code integrated – willingly, if unwittingly – into their products.

    This article breaks down what AI package hallucinations are, how slopsquatting works, and how developers can protect themselves.

    What is an AI Package Hallucination?

    An AI package hallucination occurs when a large language model invents the name of a software package that looks legitimate, but doesn’t exist. For example, when one security researcher asked ChatGPT for NPM packages to help integrate with ArangoDB, it confidently recommended orango-db. 

    The answer sounded entirely plausible. But it was entirely fictional, until the researcher registered it himself as part of a proof-of-concept attack.

    These hallucinations happen because LLMs are trained to predict what “sounds right” based on patterns in their training data – not to fact-check. If a package name fits the syntax and context, the model may offer it up, even if it never existed.

    Because GenAI coding assistant responses are fluent and authoritative, developers tend to assume that they’re accurate. If they don’t independently verify the package, a developer might unknowingly install a package the LLM made up. And these hallucinations don’t just disappear – attackers are turning them into entry points.
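Independently verifying a suggestion is cheap. As a minimal sketch of that check – assuming the suggested dependency is a Python package and using PyPI’s public JSON API; the second package name queried below is hypothetical and used only for illustration – a single request is enough to tell whether a name has ever been published:

```python
# Minimal sketch: ask PyPI whether a suggested package name has ever been
# published. A 404 means the name doesn't exist -- a red flag that the
# suggestion may be hallucinated (or, worse, available for an attacker to claim).
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json"):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # never published on PyPI
        raise

print(exists_on_pypi("requests"))              # True: long-established package
print(exists_on_pypi("arangodb-fast-client"))  # hypothetical name for illustration
```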

    What is Slopsquatting?

Slopsquatting is a term coined by security researcher Seth Larson to describe a tactic that emerged with the first wave of AI-assisted coding: attackers exploit AI hallucinations – specifically, non-existent package names invented by AI tools – by registering those names and filling the resulting packages with malicious code. Awareness of slopsquatting has grown and countermeasures are becoming more common in package ecosystems, but the tactic remains a live concern.

Unlike its better-known counterpart typosquatting, which counts on users mistyping or misreading slight variations of legitimate package or domain names, slopsquatting doesn’t rely on human error. It exploits machine error. When an LLM recommends a non-existent package like the above-mentioned orango-db, an attacker can then register that name on a public repository like npm or PyPI. The next developer who asks a similar question might get the same hallucinated package. Only now, it exists. And it’s dangerous.

    As Lasso’s research on AI package hallucination has shown, LLMs often repeat the same hallucinations across different queries, users, and sessions. This makes it possible for attackers to weaponize these suggestions at scale – and slip past even vigilant developers.

    Why This Threat Is Real – and Why It Matters

AI hallucinations aren’t just rare glitches – they’re surprisingly common. In a recent study of 16 code-generating AI models, nearly 1 in 5 package suggestions (19.7%) pointed to software that didn’t exist.

    This high frequency matters because every hallucinated package is a potential target for slopsquatting. And with tens of thousands of developers using AI coding tools daily, even a small number of hallucinated names can slip into circulation and become attack vectors at scale.

What makes slopsquatted packages especially dangerous is where they show up: in trusted parts of the development workflow – AI-assisted pair programming, CI pipelines, even automated security tools that suggest fixes. This means that what starts as an AI hallucination can silently propagate into production systems if it isn’t caught early.

    How to Stay Safe 

    You can’t prevent AI models from hallucinating – but you can protect your pipeline from what they invent. Whether you’re writing code or securing it, here’s my advice to stay ahead of slopsquatting:

    For Developers:

    Don’t assume AI suggestions are vetted. If a package looks unfamiliar, check the registry. Look at the publish date, maintainers, and download history. If it popped up recently and isn’t backed by a known organization, proceed with caution.
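For npm specifically, the registry’s public metadata endpoint exposes exactly these signals – when the package was first published and who maintains it. Here’s a minimal sketch of that lookup (network access assumed; substitute the package your assistant suggested):

```python
# Minimal sketch: pull basic trust signals for an npm package before installing it.
# A missing package, or a very recent "created" date, warrants a closer look.
import json
import urllib.error
import urllib.request

def npm_metadata(name: str) -> dict | None:
    """Return the registry document for a package, or None if it isn't published."""
    try:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise

meta = npm_metadata("express")  # replace with the suggested package name
if meta is None:
    print("Not on npm at all -- treat the suggestion as suspect.")
else:
    print("First published:", meta["time"]["created"])
    print("Maintainers:", [m["name"] for m in meta.get("maintainers", [])])
```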

    For Security Teams:

    Treat hallucinated packages as a new class of supply chain risk. Monitor installs in CI/CD, add automated checks for newly published or low-reputation packages, and audit metadata before anything hits production.
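One way to operationalize this is a small gate in the pipeline that fails the build when a dependency was published only recently. The sketch below makes two illustrative assumptions – dependencies are read from a package.json in the repository root, and “newly published” means younger than 90 days – both of which you would tune to your environment:

```python
# Minimal sketch of a CI gate: flag npm dependencies whose registry entry was
# created recently. File location and age threshold are illustrative assumptions.
import json
import sys
import urllib.request
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # anything younger than this needs human review

def created_at(name: str) -> datetime:
    with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
        created = json.load(resp)["time"]["created"]
    return datetime.fromisoformat(created.replace("Z", "+00:00"))

with open("package.json") as f:
    dependencies = json.load(f).get("dependencies", {})

now = datetime.now(timezone.utc)
suspicious = [name for name in dependencies if now - created_at(name) < MAX_AGE]

if suspicious:
    print("Newly published dependencies, review before merging:", ", ".join(suspicious))
    sys.exit(1)  # fail the pipeline until the packages have been vetted
```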

    For AI Tool Builders:

    Consider integrating real-time validation to flag hallucinated packages. If a suggested dependency doesn’t exist or has no usage history, prompt the user before proceeding.
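As a sketch of what that guardrail could look like – a hypothetical flow, not any vendor’s actual API – the tool could check the npm downloads endpoint and ask for confirmation whenever a suggested dependency is unknown or has negligible usage history:

```python
# Hypothetical guardrail for an AI coding assistant: before surfacing a suggested
# npm dependency, verify it exists and has some download history; otherwise ask
# the user to confirm. Function names and the threshold are illustrative.
import json
import urllib.error
import urllib.request

def weekly_downloads(package: str) -> int | None:
    """Last week's npm downloads, or None if the registry doesn't know the package."""
    url = f"https://api.npmjs.org/downloads/point/last-week/{package}"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("downloads")
    except urllib.error.HTTPError:
        return None

def needs_user_confirmation(package: str, min_weekly_downloads: int = 1000) -> bool:
    downloads = weekly_downloads(package)
    return downloads is None or downloads < min_weekly_downloads

# Example: an established package passes; an unknown name triggers a prompt.
for suggestion in ("express", "orango-db-client"):  # second name is hypothetical
    if needs_user_confirmation(suggestion):
        print(f"'{suggestion}' is unknown or rarely used -- confirm before installing.")
    else:
        print(f"'{suggestion}' looks established.")
```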

    The Bottom Line

    AI coding tools and GenAI chatbots are reshaping how we write and deploy software – but they’re also introducing risks that traditional defenses aren’t designed to catch. Slopsquatting exploits the trust developers place in these tools – the assumption that if a coding assistant suggests a package, it must be safe and real.

    But the solution isn’t to stop using AI to code. It’s to use it wisely. Developers need to verify what they install. Security teams should monitor what gets deployed. And toolmakers should build in safeguards from the get-go. Because if we’re going to rely on GenAI, we need protections built for the scale and speed it brings.

    The post Hallucinated code, real threat: How slopsquatting targets AI-assisted development appeared first on SD Times.
