
AI Hallucinations & Slopsquatting: A Caution for Blockchain Devs

By Lorena Nessi

Key Takeaways

  • Autocomplete tools may confidently suggest non-existent or insecure code.
  • Attackers can hijack hallucinated package names by registering them with malicious code. 
  • A 2025 study found more than 200,000 hallucinated package names, with open-source models hallucinating at 4× the rate of commercial models.
  • The future of secure AI coding demands human review and better model training.

One of the first topics to enter the mainstream artificial intelligence (AI) debate was AI hallucinations: outputs that sound plausible and follow the expected form of an answer, yet are factually or logically incorrect.

Despite sounding fluent and convincing, AI models can hallucinate when generating text by producing made-up statistics, misquotes, or fake sources. 

In autocomplete coding, hallucinations occur when the AI confidently suggests faulty code. At best, these suggestions are simply wrong; in the worst cases, they open the door to scams.

AI autocomplete hallucinations may compile without errors and appear technically sound, but they introduce AI-generated bugs and vulnerabilities, especially in security-critical systems like smart contracts.

This article discusses the growing concern behind faulty AI-generated code and why developer vigilance matters more than ever.

What Are AI Hallucinations in Autocomplete Coding?

AI hallucinations in autocomplete coding are outputs created by large language models (LLMs) when they are used for coding assistance. These outputs follow patterns learned from training data, so they often look convincing but are wrong, for example suggesting non-existent or incorrect package names. A recent case involved fake packages mimicking bitcoinlib that targeted crypto wallets through malicious Python libraries.

AI hallucinations happen because the model does not understand facts. It does not think. It follows statistical patterns from its training data to predict what comes next. As a result, it can generate hallucinations that read convincingly despite being wrong.

A hallucinated code snippet may resemble something users expect to see. It might refer to a function that does not exist, misuse an API, or create a logical contradiction. And because it looks polished, it can slip through reviews without anyone noticing.
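
Because a hallucinated snippet looks polished, one cheap sanity check is to confirm that a suggested dependency even resolves before trusting it. The short sketch below is a minimal example using only Python's standard library; the package name btc_walletlib is invented here purely as a stand-in for whatever an autocomplete tool might propose.

    import importlib.util

    # "btc_walletlib" is a made-up name standing in for an AI-suggested dependency;
    # replace it with whatever package your autocomplete tool just proposed.
    suggested = "btc_walletlib"

    if importlib.util.find_spec(suggested) is None:
        print(f"'{suggested}' does not resolve locally - verify it actually exists "
              "and is trustworthy before running 'pip install'.")
    else:
        print(f"'{suggested}' resolves to an installed module - still review its source.")

A check like this only confirms local availability; it still pays to verify the package on the registry itself, as discussed in the sections below.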

Slopsquatting Explained: A New AI-Generated Threat

Slopsquatting, a close relative of typosquatting, is a deliberate attack strategy that exploits the AI hallucinations generated by code completion tools: attackers register the hallucinated package names and publish malicious code under them.

Exploiting package hallucination | Source: arXiv

Here is how this attack works: 

  • Intentional exploitation of errors: Malicious actors deliberately target the AI’s hallucinations, knowing that its output can contain flawed or insecure logic that appears trustworthy.
  • Leveraging AI’s training data: Since LLMs are trained on massive, often unfiltered datasets, attackers exploit the fact that the model can learn and repeat insecure patterns or toxic code.
  • Mimicking legitimate packages: Slight modifications in naming, such as a missing letter or underscore, can lead blockchain developers to accept malicious packages that appear identical to official ones, especially when recommended by autocomplete tools, similar to typosquatting.
  • Social engineering via AI: Slopsquatting does not start with direct hacking. It manipulates developer trust by using the AI as the delivery vector, making the attack feel like part of the normal workflow.
  • Stealth and delayed impact: The malicious code blends in seamlessly. It may lie dormant until a specific trigger or deployment, making detection difficult and cleanup expensive.
  • Dependency chain compromise: Slopsquatted packages can pull in other hidden malicious dependencies, deepening the vulnerability without obvious signs in the surface-level code.
  • Zero-click execution risk: Some hallucinated imports may include post-install scripts, small bits of code that run automatically when a package is installed. These can execute malicious actions as soon as the package is fetched—no user interaction is needed.
  • Exploitation of developer habits: Attackers understand common shortcuts—ignoring warnings or failing to verify package integrity—and align their tactics accordingly. A minimal registry check that closes this gap is sketched below.
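
One practical defence against this pattern is to treat every AI-suggested dependency as unverified until the registry says otherwise. The sketch below is a minimal example that queries PyPI's public JSON API (https://pypi.org/pypi/<name>/json) using only the standard library; a name that returns 404 is a likely hallucination, and an existing package with very few releases deserves extra scrutiny before installation.

    import json
    import urllib.error
    import urllib.request

    def check_pypi_package(name: str) -> None:
        """Look up an AI-suggested package name on PyPI before installing it."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 404:
                print(f"'{name}' is not on PyPI - a likely hallucination and a "
                      "prime slopsquatting target if someone registers it later.")
                return
            raise

        releases = data.get("releases", {})
        print(f"'{name}' exists on PyPI with {len(releases)} release(s). "
              "A very small or very new package still warrants manual review.")

    # Example: check a name before adding it to requirements.txt
    check_pypi_package("bitcoinlib")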

Real-World Examples of AI-Caused Coding Bugs

When hallucinations make it into production code, they do not just cause errors—they open the door to full-blown security failures. These are not theoretical risks—they have already happened. 

A 2025 study found that code LLMs suggested over 200,000 fake packages, with open-source models hallucinating at rates four times higher than commercial ones. 

Some of the real examples highlighted in the study include: 

  • Mass hallucination of fictitious packages: The models generated 205,000+ unique fake package names, none of which existed in PyPI or npm. These hallucinations appeared realistic enough to be exploited by attackers publishing malicious code under those names.
  • Persistent repetition: In over 43% of cases, the same hallucinated package was generated every time across 10 test runs, turning one-off glitches into consistent, exploitable patterns.
  • Cross-language confusion: During Python code generation, models hallucinated JavaScript packages 6,705 times, introducing compatibility errors and increasing the attack surface across ecosystems.
  • Verbosity-risk tradeoff: More “creative” models hallucinated more often. Those that generated a wider variety of package names showed the highest hallucination rates, making verbosity a measurable risk. Interestingly, this mirrors how humans can become less accurate when sounding more expressive or impressive!
  • Hallucinations mimicking legitimate names: While most hallucinated names were not typo-based, 13.4% were only one or two characters away from real packages—perfect for slopsquatting or package confusion attacks. A simple near-miss check is sketched after this list.
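
That last finding points to a cheap screen: compare any AI-suggested name against the dependencies you already trust and flag anything that is only a character or two away. Here is a minimal sketch using Python's standard difflib module; the allowlist is hypothetical and should be replaced with your project's vetted dependencies.

    from difflib import get_close_matches

    # Hypothetical allowlist: the packages this project has already vetted.
    KNOWN_GOOD = ["bitcoinlib", "web3", "requests", "cryptography", "eth-account"]

    def flag_near_miss(suggested: str) -> None:
        """Warn if an AI-suggested package name is suspiciously close to a trusted one."""
        if suggested in KNOWN_GOOD:
            print(f"'{suggested}' matches a trusted dependency.")
            return
        near = get_close_matches(suggested, KNOWN_GOOD, n=1, cutoff=0.8)
        if near:
            print(f"'{suggested}' is suspiciously close to '{near[0]}' - "
                  "possible slopsquatting or package confusion.")
        else:
            print(f"'{suggested}' is unfamiliar - verify it on the registry first.")

    flag_near_miss("bitcoinlib2")   # one character away from bitcoinlib: flagged
    flag_near_miss("bitcoinIib")    # capital I in place of l: flagged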

Why Vibe Coding Poses Risks To Blockchain Security

Vibe coding is an emerging approach to software development that leverages AI to generate code from natural language inputs, enabling developers to focus on high-level design and intent rather than low-level implementation details.

Vibe coding rewards confidence over correctness. Blockchain developers under pressure may accept AI-suggested code that feels familiar, even when it lacks context, accuracy, or safety, making them easy victims of this threat.

The devil is in the details when it comes to AI hallucinations and slopsquatting.  

Some autocomplete coding risks for blockchain developers include:

  • Hidden vulnerabilities: Code may look clean, but fail under edge cases or attacks (e.g., reentrancy issues in smart contracts).
  • Unverified dependencies: You may import packages without checking authenticity.
  • Outdated practices: AI may surface deprecated Solidity patterns (like tx.origin for authentication) that compromise contract security.
  • Broken logic: Suggested code can conflict with your project’s architecture or intent—especially in contracts with multiple inheritance layers or delegate calls.
  • Blind trust loops: One AI-generated error, once accepted, may propagate across multiple files or modules—introducing systematic vulnerabilities.
  • Code bloat: Unneeded logic or overcomplicated patterns can slip in silently, leading to higher gas costs and lower contract efficiency.

Best Practices To Prevent AI-Generated Coding Vulnerabilities

Some of the best practices to avoid damage from AI hallucinations and slopsquatting attacks include:

  • Always verify suggested packages
  • Keep prompts specific and scoped
  • Use low temperature settings when possible
  • Review every line of code
  • Stick to widely used, trusted libraries
  • Run static analysis tools (a minimal import-scanning sketch follows this list)
  • Fine-tune models with validated outputs
  • Prompt models to check their responses
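
Several of these practices can be partly automated. The sketch below is a minimal example using only Python's standard ast module and a hypothetical allowlist: it scans a source file for imported packages that are neither in the standard library nor on the approved list, which is one inexpensive way to catch an AI-suggested dependency before it reaches requirements.txt.

    import ast
    import sys

    # Hypothetical allowlist of dependencies this project has already vetted.
    APPROVED = {"web3", "requests", "bitcoinlib"}

    def unapproved_imports(path: str) -> set:
        """Return top-level imported names not in the standard library or allowlist."""
        with open(path, encoding="utf-8") as handle:
            tree = ast.parse(handle.read(), filename=path)

        found = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
                found.add(node.module.split(".")[0])

        stdlib = getattr(sys, "stdlib_module_names", frozenset())  # Python 3.10+
        return {name for name in found if name not in stdlib and name not in APPROVED}

    if __name__ == "__main__":
        for name in sorted(unapproved_imports(sys.argv[1])):
            print(f"Unvetted dependency: {name} - verify before installing.")

Run it against any file an assistant helped write (for example, python check_imports.py path/to/module.py, where check_imports.py is the hypothetical script name); anything it reports should be looked up on the registry and reviewed before it lands in your build.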

Future of AI in Secure Coding Environments

AI cannot replace developers, but it can support them. That support must come with better training data, stricter safeguards, and tools designed to detect hallucinations before they become threats. As models evolve, security must scale with them.

The future of secure coding lies in human oversight, smarter AI tuning, regulation, and shared responsibility across development teams, model providers, and open-source communities.

Conclusion

AI-generated code can significantly accelerate blockchain development, but it also introduces serious security risks. Hallucinated imports, slopsquatted packages, and flawed logic aren't theoretical; they're appearing in real-world smart contract projects.

Recent research shows that open-source language models hallucinate at alarmingly high rates, producing thousands of fake packages that closely mimic trusted libraries. In the context of blockchain, where immutability and on-chain execution leave little room for error, these risks are amplified.

Autocomplete coding may feel like a time-saver, but it’s quickly becoming a security blind spot. To build securely with AI tools, developers must enforce strict validations, write precise prompts, and depend only on verified, audited libraries. AI can assist—but secure smart contracts still require vigilant human oversight.

FAQs

What are AI hallucinations in the context of code generation?

AI hallucinations refer to instances where AI models generate code suggestions that include non-existent or incorrect package names, leading developers to integrate faulty dependencies into their projects.

What is slopsquatting, and how does it exploit AI hallucinations?

Slopsquatting is a cyberattack method where malicious actors register fake packages with names similar to those hallucinated by AI models. Developers might unknowingly install these malicious packages, compromising their software’s security.

How prevalent is the issue of hallucinated packages in AI-generated code?

Research indicates that a significant percentage of AI-generated code suggestions include hallucinated packages, with some models hallucinating in over a third of outputs.

What measures can developers take to mitigate risks associated with AI-generated code?

Developers should verify the existence and authenticity of suggested packages, avoid unquestioningly trusting AI-generated code, and implement security tools to detect and prevent the inclusion of malicious dependencies.

Lorena Nessi is an award-winning journalist and media and technology expert. She is based in Oxfordshire, UK, and holds a PhD in Communication, Sociology, and Digital Cultures, as well as a Master’s degree in Globalization, Identity, and Technology. Lorena has lectured at prestigious institutions, including Fairleigh Dickinson University, Nottingham Trent University, and the University of Oxford. Her journalism career includes working for the BBC in London and producing television content in Mexico and Japan. She has published extensively on digital cultures, social media, technology, and capitalism. Lorena is interested in exploring how digital innovation impacts cultural and social dynamics and has a keen interest in blockchain technology. In her free time, Lorena enjoys science fiction books and films, board games, and thrilling adventures that get her heart racing. A perfect day for her includes a spa session and a good family meal.