Technology for Identifying Large Language Models

By News Room · December 23, 2024 · 4 min read

Unmasking AI Authorship: A New Tool Detects and Identifies Large Language Models

The rapid advancement of artificial intelligence, particularly in natural language processing, has made it increasingly difficult to distinguish human-written text from AI-generated content. That blurred boundary carries real risks, facilitating academic dishonesty and the spread of misinformation. Researchers at Johns Hopkins University have developed a tool that not only detects AI-generated text but also identifies the specific large language model (LLM) that produced it, offering a powerful means of combating plagiarism and curbing the misuse of AI systems.

The tool rests on the principle that human and AI writing alike exhibit unique stylistic characteristics, akin to fingerprints. Nicholas Andrews, a senior research scientist at Johns Hopkins, and his team were the first to demonstrate that AI-generated text carries identifiable stylistic features mirroring those found in human writing. These "fingerprints" allow researchers both to detect AI involvement and to pinpoint the specific LLM used, a significant advance in the fight against AI-generated misinformation and plagiarism.

Andrews’ research began in 2016 with a focus on identifying online misinformation and foreign influence operations on social media, well before popular LLMs like ChatGPT emerged. Surprisingly, the system he built to identify individual human writing styles proved highly effective at detecting AI-generated text, even though it was never designed for that purpose. The discovery was unexpected but welcome, showing the method could serve applications far beyond its original scope.

The Johns Hopkins tool distinguishes itself from existing AI detection tools, such as Turnitin and GPTZero, through its superior accuracy and flexibility. It’s trained on anonymous writing samples from Reddit and functions across multiple languages. The free, downloadable tool empowers individuals with basic Python knowledge to analyze any text sample and extract identifying "fingerprints." This accessibility makes it a valuable resource for researchers, educators, and anyone concerned about the authenticity of written material.
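To make that workflow concrete, here is a minimal, purely illustrative Python sketch. The hashed character-trigram featurizer below is a hypothetical stand-in for the actual Johns Hopkins model, whose architecture and API the article does not describe; only the overall shape of the workflow (text in, fixed-size fingerprint vector out) reflects the tool.

import math
import zlib

def fingerprint(text: str, dims: int = 512) -> list[float]:
    """Toy stand-in for a learned style encoder: map a text sample to an
    L2-normalized, fixed-size vector by bucketing character trigrams."""
    vec = [0.0] * dims
    for i in range(len(text) - 2):
        bucket = zlib.crc32(text[i:i + 3].encode("utf-8")) % dims
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

sample = "The line between human and machine writing keeps getting blurrier."
fp = fingerprint(sample)
print(len(fp))  # 512, matching the dimensionality reported for the real tool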

Under the hood, the tool extracts a 512-dimensional vector representing the unique characteristics of each writing sample, effectively creating a “fingerprint.” This deep-learning approach captures complex stylistic nuances that would be difficult to isolate through manual analysis. While the resulting fingerprints aren’t readily interpretable, follow-up research has shown they are largely unaffected by content words (such as nouns), indicating that they encode writing style rather than topic. That emphasis on style lets the tool flag AI-generated text even when its subject matter closely resembles that of human-written text.
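Continuing the toy sketch above, one plausible way to compare two such style vectors is cosine similarity (an assumption made for illustration; the article does not specify how the researchers compare fingerprints). Because fingerprint() already L2-normalizes its output, cosine similarity reduces to a dot product.

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors from fingerprint() are L2-normalized, so the dot product
    # equals their cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Same topic, different registers: a style-focused fingerprint should be
# driven by how these are written, not what they are about.
casual = fingerprint("I reckon the meeting ran long because nobody read the agenda.")
formal = fingerprint("The meeting exceeded its scheduled duration owing to inadequate preparation.")
print(f"style similarity: {cosine(casual, formal):.3f}")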

To assess the robustness of their tool, the researchers investigated two potential "attacks": prompting AI to mimic human writing styles and paraphrasing AI-generated text. The first approach, stylistic imitation, proved largely unsuccessful in deceiving the detector, suggesting that LLMs can only superficially mimic human writing. Paraphrasing, however, posed a greater challenge, prompting the team to refine their tool by creating a "fingerprint" for paraphrasing models. While manual paraphrasing by humans remains a potential evasion tactic, the team is continually working to improve the tool’s resilience against such manipulations. In educational settings, training the tool on students’ past writing could further enhance its ability to detect AI-generated submissions.
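As a final piece of the sketch, attribution might work as nearest-reference classification: keep a stored fingerprint per candidate source, including a dedicated one for paraphrasing models as the article describes, and assign a new sample to the closest match. The single placeholder sentences below are stand-ins; a real system would average fingerprints over many known samples from each source.

# Placeholder reference fingerprints; real ones would be averaged over many
# known samples per source. The "paraphraser" entry mirrors the dedicated
# fingerprint the team built for paraphrasing models.
references = {
    "human":       fingerprint("Honestly, I just think the ending felt a bit rushed."),
    "model_A":     fingerprint("The narrative conclusion exhibits insufficient pacing development."),
    "paraphraser": fingerprint("In essence, the finale appears to have been completed hastily."),
}

def attribute(text: str) -> str:
    # Nearest-reference classification: the highest cosine similarity wins.
    fp = fingerprint(text)
    return max(references, key=lambda name: cosine(references[name], fp))

print(attribute("The story's final act seems hurried and underdeveloped."))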

The potential applications extend beyond academic integrity. Because the tool can identify the specific LLM behind a piece of text, it can help trace the spread of misinformation and potentially identify malicious actors exploiting AI. As AI-generated content proliferates, such tools offer a crucial mechanism for transparency and accountability, helping ensure that AI remains an instrument of progress rather than a source of deception.

A recent presentation of the research at the International Conference on Learning Representations yielded a striking real-world example. Lead author Rafael Rivera Soto, a PhD student at Johns Hopkins, demonstrated the tool by analyzing the conference’s own peer reviews and found that roughly 10% of them were likely AI-generated. The finding highlights how common AI already is in academic writing, and underscores the need for effective detection methods to safeguard the quality and integrity of scholarly work.
