In a recent incident that has raised alarms about the integrity of journalism in the era of artificial intelligence, a reporter at the Cody Enterprise in Wyoming, Aaron Pelczar, reportedly used generative AI to help write his articles. Powell Tribune reporter CJ Baker noticed several discrepancies in Pelczar’s reporting, including fabricated quotes and an oddly robotic style. Notably, a June 26 article quoted a comedian and ended with an explanation of the ‘inverted pyramid’ news-writing format, details that pointed to a reliance on AI rather than diligent journalism. Following an internal review prompted by Baker’s discovery, the Enterprise’s editors acknowledged that Pelczar’s work contained quotes attributed to people who said they had never spoken to him.
Both the publisher and the editor of the Cody Enterprise have publicly addressed the situation and apologized for the lapse in editorial oversight. Editor Chris Bacon admitted that he had failed to catch the AI-generated content and acknowledged the seriousness of allowing fabricated quotes into published stories. Publisher Megan Barton emphasized the newspaper’s commitment to community trust and announced that preventive measures would be put in place to ensure the authenticity of its reporting. The incident has raised questions about how AI tools are managed and integrated into newsroom practices.
While AI technologies are being adopted to streamline various journalistic tasks, such as producing earnings reports or translating articles, concerns persist about the accuracy and authenticity of AI-generated content. The Associated Press, for example, uses AI for specific tasks while maintaining a policy that bars generative AI from producing publishable content. This cautious approach aims to preserve journalistic credibility, particularly as cases of improper AI use continue to surface.
CJ Baker’s investigation revealed that several people had been misquoted in Pelczar’s stories, including fabricated quotes attributed to Wyoming Governor Mark Gordon. State officials learned of the inaccuracies only when Baker reached out, underscoring the harm AI-generated content can cause by misrepresenting public officials and crucial information. The episode illustrates how difficult it is to verify the authenticity of AI-generated quotes and raises critical questions about the accountability of news outlets in a rapidly evolving media landscape.
As the implications of AI continue to unfold, particularly with regard to job displacement, experts warn of the risks generative AI poses to the journalism sector. Baker’s investigation has prompted calls for stricter regulations and clearer guidelines on the use of AI in reporting to prevent further ethical breaches. The Poynter Institute’s Alex Mahadevan pointed out how easily users can generate deceptive content with AI, stressing the need to make discussion of AI policy part of newsroom culture and pre-employment procedures.
In light of growing concerns about AI’s role in journalism, many news outlets are now re-evaluating their policies. Editor Chris Bacon said he intended to have a comprehensive AI policy in place at the Cody Enterprise by the end of the week. As AI systems grow increasingly capable of generating articles that seem credible at first glance, the call for transparency, accountability, and clear standards is stronger than ever if journalism is to maintain public trust in its foundational principles.