Web Stat
Using AI in journalism – Media Helping Media

By News Room · March 12, 2026 (Updated: April 29, 2026) · 8 Mins Read

Navigating the AI Frontier in Journalism: A Human-Centric Approach

In the ever-evolving landscape of media, the integration of Artificial Intelligence presents both exciting opportunities and significant challenges. For media organizations, the fundamental principle must remain unwavering: safeguarding journalistic integrity and nurturing audience trust. These aren't just abstract ideals; they are the bedrock upon which reliable news and information are built. Our readers, viewers, and listeners, who turn to us for clarity and understanding in a complex world, rightfully expect the highest standards at every stage of our work, from the initial gathering of facts to their final dissemination.

At Media Helping Media (MHM), we embarked on this transformative journey with generative AI in February 2025, building upon two decades of dedicated service. The site was already a valuable repository of 150 articles on journalistic best practices, crafted and contributed by professional journalists within the MHM network. This rich human-generated foundation has been, and continues to be, the indispensable starting point for all our AI endeavors. We firmly believe that AI should serve as a powerful assistant, amplifying and enriching human expertise, never replacing the critical thinking, ethical judgment, and creative spark that define true journalism.

The sheer potential of AI to enhance the accessibility and reach of our educational resources became clear almost immediately. Since that initial foray, our use of AI has been a journey of continuous refinement and expansion. We currently use a diverse suite of generative AI tools, including Google Gemini, OpenAI's ChatGPT, Perplexity AI, and Anthropic's Claude. These tools aren't used in isolation; they are thoughtfully integrated to build upon and complement our original, human-authored content. The results have been remarkable: our free resources, which serve a broad audience of journalists, educators, managers, and anyone seeking to understand best practices, have more than doubled. As of March 2025 we offer 405 items, all readily available for download, adaptation to local contexts, and practical application.

What's crucial to understand is that every piece of this expanded content originates from foundational material created by MHM contributors. AI's role is one of augmentation, woven into our workflow to add value in several key areas: structuring new learning materials, ranging from concise guides on journalistic basics to comprehensive day-long lessons and course modules; acting as a brainstorming partner for deeper exploration of existing content; proofreading new, human-created content to catch typos and grammatical errors that might otherwise slip through; and optimizing headings for better search-engine discoverability, ensuring our resources reach a wider audience. Perhaps most engagingly, AI helps us produce Q&As related to existing content and powers a "Further Thoughts" feature in which, under human supervision, it expands on selected MHM-created pieces, offering new perspectives and angles. The fundamental principle remains: no resource on MHM is solely the product of AI-generated content. AI's contribution is specifically tailored to provide structural support for learning materials and to add carefully selected enriching elements to content crafted by our network of professional journalists.
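The supervised Q&A workflow described above can be sketched in code. Everything here is illustrative, not MHM's actual tooling: the function name, the prompt wording, and the constraint that the model must stay inside the supplied article are assumptions modelled on the article's stated rules.

```python
# Hypothetical sketch: assembling a Q&A-generation prompt from an existing
# human-written article. A human reviewer would still vet the model's output.

def build_qa_prompt(article_title: str, article_text: str,
                    num_questions: int = 5) -> str:
    """Return a neutral prompt asking a generative model to draft Q&A
    pairs based strictly on the supplied human-authored text."""
    return (
        f"Using ONLY the article below, draft {num_questions} "
        "question-and-answer pairs suitable for journalism trainees. "
        "Do not introduce facts that are not in the article.\n\n"
        f"Title: {article_title}\n\n{article_text}"
    )

prompt = build_qa_prompt("Accuracy in journalism",
                         "Accuracy means getting the facts right ...")
print(prompt.splitlines()[0])
```

The "ONLY the article below" constraint mirrors the article's rule that AI augments human-created content rather than generating material of its own.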

One of the most exciting developments in our AI integration has been the creation of our own bespoke MHM AI tools, which we affectionately call "Gems." We recognized that many of the tasks AI excels at – like structuring content, proofreading, or generating Q&As – are often repetitive and time-consuming for humans. What might take a person several hours or even a full day, AI can now accomplish in mere seconds. The beauty of these Gems lies in their accessibility and simplicity. Anyone with a free Google Gemini account can create one, designing it to act as an expert assistant for specific, recurring tasks. We offer a straightforward walk-through that enables users to create their own Gems in minutes. Currently, MHM boasts a growing list of these tailored Gems, each designed to streamline a particular aspect of our content creation and dissemination process.

The ease of building a Gem is truly remarkable. It involves three key steps: first, giving the Gem a clear name and defining its role; second, writing precise instructions detailing what you expect the Gem to produce; and finally, uploading relevant examples, such as style guides, rules, and regulations, to its knowledge base. This ensures that the Gem understands the specific context and parameters of your request.
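The three Gem-building steps can be modelled as a plain data structure. This is a minimal sketch, not Google's API or MHM's setup: the `GemConfig` class and its fields are hypothetical stand-ins for the name/role, instructions, and knowledge-base slots you fill in when creating a Gem.

```python
# Illustrative model of the three Gem-building steps described above.
from dataclasses import dataclass, field

@dataclass
class GemConfig:
    name: str                 # Step 1: a clear name...
    role: str                 # ...and a defined role.
    instructions: str         # Step 2: precise output expectations.
    knowledge_files: list = field(default_factory=list)  # Step 3: style guides etc.

    def system_prompt(self) -> str:
        """Render the configuration as a single system-style instruction."""
        files = ", ".join(self.knowledge_files) or "none"
        return (f"You are {self.name}, acting as {self.role}.\n"
                f"Instructions: {self.instructions}\n"
                f"Reference material: {files}")

gem = GemConfig(
    name="Lesson Outliner",
    role="an expert trainer who structures journalism lessons",
    instructions="Outline lessons using ONLY the material supplied.",
    knowledge_files=["mhm-style-guide.txt"],
)
print(gem.system_prompt())
```

Keeping the configuration explicit like this makes it easy to review, share with colleagues, or amend when the Gem makes a mistake, which matches the iterative-refinement loop the article describes.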

The core of a successful Gem lies in the quality and clarity of its instructions. You, as the human operator, are the architect of its behavior. It’s crucial to imbue the Gem with explicit rules about what it should and should not produce. For instance, if your objective is for the Gem to generate a lesson outline solely based on material you have personally written, you must unequivocally instruct it not to deviate from that specific content. If you envision that lesson starting at 9 AM, comprising two morning and two afternoon sessions, complete with a presentation, interactive activities, and a discussion, then every one of those details must be meticulously set out during the Gem’s creation. Once a Gem is built, you have the flexibility to keep it private for your own use or to share it with colleagues, enabling them to leverage its capabilities. The level of access is entirely within your control. Furthermore, the iterative nature of using Gems allows for continuous refinement. If a Gem makes an error, you simply provide feedback on what went wrong and instruct it to incorporate that correction into its future guidelines, ensuring it doesn’t repeat the mistake. This ongoing dialogue between human and AI allows the tool to learn and improve, becoming an increasingly sophisticated and reliable assistant over time.
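The lesson-outline scenario above shows how explicit Gem instructions need to be. As a concrete example, the schedule constraints from that scenario might be written out like this (the exact wording is illustrative; the constraints themselves come from the article):

```python
# Example instruction block for the lesson-outlining Gem described above.
# Every constraint is stated explicitly so the model cannot improvise.
LESSON_GEM_INSTRUCTIONS = """\
Produce a one-day lesson outline based ONLY on the material I upload.
- Start time: 09:00.
- Two morning sessions and two afternoon sessions.
- Include one presentation, interactive activities, and a discussion.
- Do not add facts, examples, or sources from outside the uploaded material.
"""
```

Anything left unstated is a gap the model will fill on its own, which is exactly what the article warns against.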

Beyond the creation of Gems, a critical lesson we've learned is the profound impact of prompts. The quality of the input directly correlates with the quality of the output a Gem, or any AI model, produces. Thoughtfully constructed prompts will invariably yield higher-quality, more relevant responses. AI, in its eagerness to please, is highly responsive to the wording of a prompt: if a prompt is poorly phrased or subtly biased towards a particular outcome, the AI will likely oblige. To maintain objectivity and factual accuracy, it is imperative that prompts are neutral in tone, devoid of leading questions or preconceived notions. This human attentiveness to prompt construction is a vital safeguard against unintended bias and inaccurate information.

Even with carefully crafted prompts, vigilance remains paramount. AI, despite its impressive capabilities, is not infallible. It is prone to what are colloquially known as "hallucinations": responses containing false or misleading information presented as factual. We emphasize that every piece of content generated by AI, without exception, must undergo rigorous quality control, including checks for factual accuracy, relevance, and adherence to our editorial standards. This human oversight is not merely a formality; it is a non-negotiable step in ensuring the integrity of our published material.
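The difference between a leading and a neutral prompt is easiest to see side by side. The example prompts and the crude `is_leading` heuristic below are illustrative assumptions, not a real detection tool; in practice the check is human editorial judgment.

```python
# A leading prompt presupposes its conclusion; an eager-to-please model
# will tend to confirm it. A neutral prompt leaves the answer open.
leading_prompt = "Explain why social media is the main cause of misinformation."
neutral_prompt = ("What factors do researchers identify as contributing "
                  "to the spread of misinformation?")

def is_leading(prompt: str) -> bool:
    """Crude, illustrative heuristic: flag phrasings that presuppose
    a conclusion. Real prompt review requires human judgment."""
    cues = ("explain why", "prove that", "confirm that")
    return any(cue in prompt.lower() for cue in cues)
```

The first prompt smuggles in the claim it asks the model to support; the second asks the same question without deciding the answer in advance.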

Ultimately, transparency with our audience is not just good practice; it is an ethical imperative. If a media organization opts to integrate AI into its production processes, whether for designing training materials or compiling straightforward background information, it has a moral obligation to inform its audience about how AI is being utilized. Major media organizations worldwide have already established dedicated pages on their websites outlining their AI policies. This practice fosters understanding and trust, allowing the audience to comprehend the methods behind the content they consume.

These policies typically coalesce around three fundamental, high-level rules, which MHM wholeheartedly endorses. First, human responsibility remains paramount: AI serves as a powerful assistive tool for journalists, but the ultimate editorial judgment, the rigorous verification of facts, and final accountability for all published content must always rest with human editors and reporters. Second, the golden rule of "check everything" is non-negotiable: AI output should never be accepted as fact without independent verification, and all information generated with AI must be meticulously cross-referenced and validated using standard journalistic procedures before publication. Third, and perhaps most crucially, media organizations must be open and honest with their audiences: transparency about when and how AI is employed is essential, and AI must never be used in a manner that could mislead or deceive the public.

These three foundational rules often serve as the bedrock, with more detailed newsroom policies on ethics, copyright, data protection, and comprehensive editorial oversight built upon them. By adhering to these principles, media organizations can responsibly harness the immense potential of AI while preserving the core tenets of truthful and trustworthy journalism.

Copyright © 2026 Web Stat. All Rights Reserved.