
Indiana University Receives $7.5 Million Grant to Combat AI-Generated Misinformation

By News Room | December 18, 2024 | 4 min read

AI’s Potential to Amplify Online Misinformation and Radicalization: A $7.5 Million DoD-Funded Study

The digital age has ushered in an era of unprecedented information access, but this access comes with a dark side: the proliferation of misinformation and radicalizing messages online. These deceptive and manipulative narratives pose a significant threat to societal stability, and their impact is only expected to intensify with the rapid advancement of artificial intelligence. Recognizing this growing danger, the U.S. Department of Defense has awarded a $7.5 million grant to a multi-institutional team led by Indiana University researchers to investigate the role AI plays in amplifying the influence of online communications, including the spread of misinformation and radicalization.

This five-year project, part of the DoD’s Multidisciplinary University Research Initiative, brings together a diverse group of experts from fields such as informatics, psychology, communications, folklore, AI, natural language processing, network science, and neurophysiology. Led by Professor Yong-Yeol Ahn of the IU Luddy School of Informatics, Computing and Engineering, the team aims to dissect the complex interplay between AI, social media, and online misinformation, providing insights that could empower the government to counter foreign influence campaigns and online radicalization efforts.

Central to the investigation is the concept of "resonance," a sociological phenomenon in which messages that align with an individual’s existing beliefs, emotional biases, or cultural norms become more persuasive. Resonance is a double-edged sword, capable of bridging divides or fueling polarization. The researchers hypothesize that AI’s ability to rapidly generate tailored text, images, and videos could supercharge the impact of resonant messages, for good or ill, by customizing content down to the individual level.

The research will leverage cutting-edge AI technology, including the creation of "model agents" – virtual individuals interacting within a simulated environment – to study how information flows between groups and how it affects individuals. This approach will allow the team to model the spread of information and beliefs more accurately than previous methods by accounting for the interplay of individual beliefs, social dynamics, and network effects. The project also involves studying real-life human responses to online information, using physiological measures such as heart-rate monitoring to gauge the impact of "resonance."
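
To make the "model agents" idea concrete, here is a minimal, illustrative sketch of an agent-based information-spread simulation in which a message's chance of being adopted scales with its resonance, i.e., its alignment with an agent's prior belief. The class names, network structure, and numbers here are hypothetical stand-ins for exposition, not the study's actual model.

```python
# Illustrative sketch only: a toy agent-based simulation where adoption
# probability scales with resonance (alignment between a message's stance
# and an agent's prior belief). All parameters are made up.
import random

class Agent:
    def __init__(self, belief):
        self.belief = belief    # stance on some topic, in [-1, 1]
        self.exposed = False    # has the agent adopted the message?

def resonance(agent_belief, message_stance):
    """Alignment between message and prior belief, mapped to [0, 1]."""
    return 1.0 - abs(agent_belief - message_stance) / 2.0

def simulate(n_agents=200, n_steps=50, message_stance=0.8, seed=42):
    rng = random.Random(seed)
    agents = [Agent(rng.uniform(-1, 1)) for _ in range(n_agents)]
    # Random contact network: each agent knows five others (not itself).
    neighbors = {
        i: rng.sample([j for j in range(n_agents) if j != i], 5)
        for i in range(n_agents)
    }
    agents[0].exposed = True  # seed the message with a single agent

    for _ in range(n_steps):
        for i, agent in enumerate(agents):
            if agent.exposed:
                continue
            # Each already-exposed neighbor gives one chance of transmission;
            # resonant messages are more persuasive.
            for j in neighbors[i]:
                if agents[j].exposed and rng.random() < 0.3 * resonance(
                    agent.belief, message_stance
                ):
                    agent.exposed = True
                    break
    return sum(a.exposed for a in agents)

if __name__ == "__main__":
    print(f"Agents reached: {simulate()} / 200")
```

Even this toy version shows the qualitative effect the researchers describe: raising the resonance of a message with the population's prior beliefs increases how far it spreads through the network.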

This innovative research represents a significant departure from traditional models of belief systems. Instead of relying on simplistic categorizations like political affiliation, the team will employ a "complex network of interacting beliefs and concepts" integrated with social contagion theory. This holistic, dynamic model aims to capture the nuanced reality of human belief formation, where factors such as social group affiliation or attitudes towards specific industries may be more predictive of certain beliefs than political ideology. For example, an individual’s stance on vaccine safety might be better predicted by their social circle or views on the medical industry than their political leanings.
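
As a hypothetical illustration of that idea (not the team's actual model), one can represent beliefs as nodes in a weighted network and predict a person's stance on one belief from the stances of coupled beliefs, rather than from a single left-right axis. The belief names and coupling weights below are invented for the example.

```python
# Illustrative sketch: beliefs as a weighted network, where a target belief's
# predicted stance is a weighted average of coupled beliefs' stances
# (each in [-1, 1]). All couplings are invented for illustration.
COUPLINGS = {
    # (belief A, belief B): how strongly a stance on A pulls the stance on B
    ("trusts_medical_industry", "vaccines_are_safe"): 0.7,
    ("social_circle_vaccinates", "vaccines_are_safe"): 0.6,
    ("political_conservatism", "vaccines_are_safe"): 0.1,  # weak predictor
}

def predict_stance(target, known_stances):
    """Weighted average of coupled beliefs' stances."""
    num, den = 0.0, 0.0
    for (src, dst), weight in COUPLINGS.items():
        if dst == target and src in known_stances:
            num += weight * known_stances[src]
            den += weight
    return num / den if den else 0.0

# A politically conservative individual whose social circle vaccinates and
# who trusts the medical industry is still predicted to lean pro-vaccine
# (prints 0.75), because ideology carries little weight in this network.
print(predict_stance("vaccines_are_safe", {
    "trusts_medical_industry": 0.8,
    "social_circle_vaccinates": 0.9,
    "political_conservatism": -0.5,
}))
```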

The implications of this research extend far beyond countering misinformation and radicalization. Understanding how AI interacts with fundamental psychological principles can inform strategies for building trust in AI systems, crucial for applications ranging from autonomous navigation to healthcare. The project’s open-access approach ensures that its findings will be publicly available, benefiting not only government agencies concerned with national security but also the wider scientific community and the public at large. This transparency reflects the project’s commitment to advancing fundamental scientific understanding of AI’s societal impact and fostering responsible development and deployment of this powerful technology.

The research team comprises six IU researchers from the Luddy School, all affiliated with IU’s Observatory on Social Media. Collaborators from other institutions include a media expert from Boston University, a psychologist from Stanford University, and a computational folklorist from the University of California, Berkeley. The project also provides research opportunities for numerous Ph.D. and undergraduate students at IU. The team’s breadth reflects a deliberately interdisciplinary approach to the challenges posed by the intersection of AI, online communication, and human psychology, and its findings promise to be crucial in navigating an increasingly complex digital landscape and mitigating the risks of AI-powered misinformation and radicalization campaigns.
