
Technology for Identifying Large Language Models

By News Room | December 23, 2024 | 4 min read

Unmasking AI Authorship: A New Tool Detects and Identifies Large Language Models

The rapid advancement of Artificial Intelligence (AI), particularly in natural language processing, has ushered in an era where distinguishing human-written text from AI-generated content is increasingly challenging. This blurring of lines poses significant threats, facilitating academic dishonesty and the proliferation of misinformation. Researchers at Johns Hopkins University have developed a groundbreaking tool that not only detects AI-generated text but also identifies the specific Large Language Model (LLM) responsible for its creation. This innovation offers a powerful means to combat plagiarism and curb the misuse of AI programs.

The tool operates on the principle that both human and AI writing exhibit unique stylistic characteristics, akin to fingerprints. Nicholas Andrews, a senior research scientist at Johns Hopkins, and his team were the first to demonstrate that AI-generated text possesses identifiable features mirroring those found in human writing. These "fingerprints" allow researchers to detect AI involvement and pinpoint the specific LLM used. This capability marks a significant advancement in the fight against AI-generated misinformation and plagiarism, offering a much-needed tool for educators and researchers alike.

Andrews’ research, which began in 2016, initially focused on identifying online misinformation and foreign influence on social media, predating the rise of popular LLMs like ChatGPT. Surprisingly, the system he developed to identify individual human writing styles proved highly effective at detecting AI-generated text, even though it was never designed for that purpose. This unexpected but welcome discovery demonstrated the tool’s potential for applications well beyond its original scope.

The Johns Hopkins tool distinguishes itself from existing AI detection tools, such as Turnitin and GPTZero, through its superior accuracy and flexibility. It’s trained on anonymous writing samples from Reddit and functions across multiple languages. The free, downloadable tool empowers individuals with basic Python knowledge to analyze any text sample and extract identifying "fingerprints." This accessibility makes it a valuable resource for researchers, educators, and anyone concerned about the authenticity of written material.
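
The article does not name the released package or its interface, but the workflow it describes (load a pretrained model, pass it a text sample, receive a fixed-length fingerprint vector) can be sketched with a general-purpose embedding library. A minimal illustration, with the sentence-transformers library and a placeholder model standing in for the actual tool:

```python
# Minimal sketch of the fingerprint-extraction workflow described above.
# NOTE: "all-MiniLM-L6-v2" is a generic sentence-embedding checkpoint used as
# a stand-in; it is NOT the Johns Hopkins style model, which the article does
# not name.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint

sample = "The measure passed committee on a narrow vote late Tuesday evening."
fingerprint = model.encode(sample)  # a fixed-length vector for this sample

print(fingerprint.shape)  # (384,) here; the article cites 512 dimensions
```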

The tool’s underlying mechanism involves extracting 512-dimensional vectors that represent the unique characteristics of each writing sample, effectively creating a “fingerprint.” This deep learning approach allows the model to capture complex stylistic nuances that would be difficult to detect through manual analysis. While the resulting fingerprints aren’t readily interpretable, further research has revealed that they are largely unaffected by content words (like nouns), indicating that they encode writing style rather than topic. This emphasis on style allows the tool to differentiate AI-generated text even when the subject matter is similar to human-written text.
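
To make the attribution step concrete: once every known LLM has a reference fingerprint, identifying the source of a new text reduces to a nearest-neighbor lookup in the vector space. The following sketch uses random stand-in vectors; only the 512-dimensional size comes from the article, and every name is illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprint vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in reference fingerprints: in a real deployment each would be the
# average style vector computed from many samples of that source.
rng = np.random.default_rng(seed=42)
references = {
    "llm_a": rng.normal(size=512),
    "llm_b": rng.normal(size=512),
    "human": rng.normal(size=512),
}

# Fingerprint of the text under investigation (random here, for illustration).
query = rng.normal(size=512)

best_match = max(references, key=lambda name: cosine_similarity(query, references[name]))
print(f"Closest stylistic match: {best_match}")
```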

To assess the robustness of their tool, the researchers investigated two potential "attacks": prompting AI to mimic human writing styles and paraphrasing AI-generated text. The first approach, stylistic imitation, proved largely unsuccessful in deceiving the detector, suggesting that LLMs can only superficially mimic human writing. Paraphrasing, however, posed a greater challenge, prompting the team to refine their tool by creating a "fingerprint" for paraphrasing models. While manual paraphrasing by humans remains a potential evasion tactic, the team is continually working to improve the tool’s resilience against such manipulations. In educational settings, training the tool on students’ past writing could further enhance its ability to detect AI-generated submissions.
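
The paraphrasing attack can be framed as a simple measurement: embed both the original text and its paraphrase, then check how far apart their fingerprints land. A small sketch of that check, again with a generic embedding model as a stand-in for the actual style model:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder embedding model standing in for the actual style model; the
# robustness question is whether a paraphrase keeps a similar fingerprint.
model = SentenceTransformer("all-MiniLM-L6-v2")

original = "The council approved the new zoning plan after months of debate."
paraphrase = "After months of back-and-forth, the board signed off on the rezoning proposal."

embeddings = model.encode([original, paraphrase])
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# A sharp drop in similarity would mean the paraphraser disguised the
# fingerprint; a stable score suggests the detector holds up.
print(f"Fingerprint similarity after paraphrasing: {similarity:.3f}")
```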

The potential applications of this technology extend beyond academic integrity. Its ability to identify the specific LLM used to generate text can help track the spread of misinformation and potentially identify malicious actors exploiting AI for nefarious purposes. In the ever-evolving landscape of AI-generated content, this tool offers a crucial mechanism for maintaining transparency and accountability. As AI technology continues to advance, tools like this will become increasingly vital in navigating the complex interplay between human and machine-generated communication. The development of this tool represents a significant step towards ensuring that AI remains a tool for progress rather than a source of deception.

A recent presentation of their research at the International Conference on Learning Representations yielded a compelling real-world example. Lead author Rafael Rivera Soto, a PhD student at Johns Hopkins, demonstrated the tool’s capabilities by analyzing the conference’s peer reviews. The results revealed that approximately 10% of the reviews were likely AI-generated, highlighting the growing prevalence of AI in academic submissions and the need for effective detection methods. This discovery underscores the importance of this new tool in maintaining academic integrity and ensuring the quality of scholarly work. As AI becomes increasingly integrated into various aspects of life, tools like this will play a critical role in navigating the ethical and practical challenges that arise.
