AIPasta: A New Way to Spread Disinformation

Generative AI can be used to create persuasive disinformation at scale. A new strategy dubbed “AIPasta” combines controlled manipulation of a message’s wording with the repetition-driven “illusory truth” effect, whereby people rate claims they have encountered many times as more likely to be true. Researchers demonstrated that this method, inspired by the “CopyPasta” phenomenon of reposting identical blocks of text, can amplify disinformation efforts.

In a study published in *PNAS Nexus*, researchers compared AIPasta with CopyPasta using messaging built around conspiracy theories. The results showed that AIPasta, with its iterative paraphrasing of the same disinformation, outperformed CopyPasta at convincing participants that certain claims were true. Specifically, AIPasta increased belief in false narratives more than CopyPasta did, particularly among Republican participants.

AIPasta stands out because it uses AI to generate multiple, slightly different versions of the same message, giving readers an impression of broad agreement and so increasing their confidence in the claim. This makes it more effective than CopyPasta, whose verbatim repetition can build familiarity but offers a weaker signal of independent consensus, since every copy is visibly the same text.
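To see the mechanism concretely, here is a minimal sketch of the paraphrase-generation step, assuming access to the OpenAI Python SDK and an API key; the model name, prompt, and temperature are illustrative assumptions, not the study’s actual setup.

```python
# Minimal sketch of AIPasta-style paraphrase generation.
# The model, prompt, and temperature below are illustrative assumptions,
# not the configuration used in the PNAS Nexus study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_variants(message: str, n: int = 5) -> list[str]:
    """Ask the model for n independently reworded versions of one message."""
    variants = []
    for _ in range(n):
        response = client.chat.completions.create(
            model="gpt-4o-mini",   # assumed model choice
            temperature=1.0,       # higher temperature -> more varied wording
            messages=[
                {
                    "role": "system",
                    "content": "Rewrite the user's message in different words "
                               "while preserving its meaning exactly.",
                },
                {"role": "user", "content": message},
            ],
        )
        variants.append(response.choices[0].message.content.strip())
    return variants


if __name__ == "__main__":
    for v in generate_variants("Example claim to be reworded.", n=3):
        print(v)
```

Each call yields a differently worded post; spreading those variants across many accounts is what produces the illusion of independent agreement.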

However, AIPasta has a troubling property: because its paraphrased variants are not easily flagged by AI-text detectors, the content is resistant to removal from social media platforms. The study also suggests that a two-fold improvement in participant confidence may be necessary before the technique sees widespread adoption.
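Why verbatim copies are easier to catch can be shown with a toy similarity filter. The sketch below uses only Python’s standard library: difflib.SequenceMatcher flags a lightly edited copy of a post but scores an independently worded paraphrase far below a plausible threshold. The 0.8 cutoff and the sample texts are assumptions for illustration, not any platform’s real moderation pipeline.

```python
# Toy near-duplicate filter: verbatim CopyPasta is easy to flag, while an
# AI-paraphrased variant slips under a simple similarity threshold.
# The 0.8 cutoff and the sample texts are illustrative assumptions.
from difflib import SequenceMatcher


def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


original = "The committee hid the report from the public for months."
copypasta = "The committee hid the report from the public for months!!"
aipasta = "For months, officials on the panel kept that document away from voters."

THRESHOLD = 0.8  # assumed cutoff for flagging a post as a coordinated copy

for label, text in [("CopyPasta", copypasta), ("AIPasta", aipasta)]:
    score = similarity(original, text)
    verdict = "FLAGGED" if score >= THRESHOLD else "not flagged"
    print(f"{label}: similarity={score:.2f} -> {verdict}")
```

This is the detection gap the study points to: filters tuned to catch repeated text lose their signal once every copy is reworded.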

Overall, AIPasta represents a potent new vehicle for spreading and amplifying disinformation. Further research is needed to probe its limitations and to explore countermeasures that are both effective and proportionate.

Conclusion

The study characterizes AIPasta as an innovative approach to disseminating disinformation through the combination of repetition and paraphrase. While AIPasta can increase participants’ confidence in certain narratives, its resilience on social media channels, where AI-text detectors struggle to flag paraphrased content, makes moderation difficult. Ongoing studies are essential to understand the method’s real-world effectiveness and to guard against the wider dissemination of false information.
