On Wednesday evening, Christian Nitsche, editor-in-chief of Bayerischer Rundfunk (BR), hosted “Münchner Runde,” a prime-time talk show devoted to the issues making headlines. The topic was deepfakes: hyper-realistic AI fabrications that can make it appear someone said or did something they never did. A recent incident involving actress Collien Fernandes, a case of sexualized violence through AI manipulation, underscored the urgency. “We have to protect ourselves, we know that, and that’s what we’re talking about now,” Nitsche said, before introducing a segment meant to show just how easily AI can produce convincing fakes. An “AI expert” from Munich, he explained, was demonstrating how videos could be faked, using special sensors on his head to transfer his facial expressions, and even his voice, onto other faces. “Look how lifelike it looks!” Nitsche exclaimed, genuinely impressed by what he believed was cutting-edge technology. The videos then played: AI avatars warning about AI fakes, their expressions eerily mirroring those of the “expert,” Florian Hübner, also known as “Mr. Tech,” who appeared in a small window wearing a strange headset, black dots marking his face.
Here is where the story takes an ironic turn, a plot twist that illustrates the very media-literacy problem the show set out to address. The “AI expert,” the sensor-studded “helmet,” the black dots on his face: all of it was an illusion. Hübner had openly stated in the comments under his own videos that the entire setup was AI-generated. When a user asked where to buy “the helmet and the points,” his answer was unambiguous: “Hold on (…). The video below is of course … also AI. The helmet doesn’t exist.” That detail escaped everyone at Bayerischer Rundfunk. The broadcaster presented his AI creation as a demonstration of how easily real people with real hardware could produce deepfakes, missing the fact that the demonstration itself was an AI fake. It was like watching a magic show and believing the magician genuinely made the rabbit disappear.
Hübner, understandably, responded on Instagram with a mix of amusement and astonishment. Raising awareness about deepfakes was “actually a good thing,” he acknowledged, with one glaring caveat: “If they hadn’t fallen for my AI themselves!” He recalled how earnestly Nitsche had described his “special ‘helmet’ with sensors and tracking points” for transmitting facial expressions, then spelled out the twist: “The plot twist: this helmet doesn’t even exist. The dots, the sensors … it’s ALL AI!” He posed a question that should give anyone in media, or consuming it, pause: “If even the public media can no longer tell the difference between real hardware and AI tools, how are YOU supposed to be able to?” His good-natured but pointed advice to BR and “all of you” was simple: “it never hurts to do a tiny little fact check.” In an increasingly AI-saturated world, verification is not just good practice; it is a survival skill.
BR’s embarrassing oversight echoes a similar misstep by German public broadcaster ZDF’s “heute journal” just weeks earlier. In February, that news program used AI-generated images to illustrate the methods of the US migration authority ICE, unknowingly presenting fabricated visuals as factual evidence. Worse, it also included an authentic scene pulled out of its original 2022 context, further blurring the line between truth and misrepresentation. The consequences for ZDF were more severe: a US correspondent was dismissed, and a plan of action was announced to restore credibility. “Credibility is our greatest asset,” ZDF editor-in-chief Bettina Schausten stated. “With the measures we have adopted, we are showing that we are very serious about coming to terms with the situation.” Both incidents underscore a growing challenge for newsrooms: the pace of AI development demands an equally rapid adaptation of journalistic verification.
The BR incident, while not leading to dismissals, is a wake-up call: even institutions dedicated to informing the public can fall for sophisticated deceptions. AI can now mimic reality convincingly enough to fool the very people trying to expose its dangers. “Münchner Runde” set out to educate its audience about deepfakes and media literacy, and inadvertently delivered a live example of exactly what it was warning against. Uncomfortable as such a self-inflicted wound is, it can be instructive: it forces a re-evaluation of verification protocols and shows how much sharper journalists must become at identifying AI-generated content, especially when the line between real and synthetic is so artfully blurred.
In essence, the BR episode is a modern fable about a digital world where appearances deceive and even those who preach caution can be caught off guard. It is a story about trust, vigilance, and the fundamental importance of “a tiny little fact check.” As AI continues to blur the boundaries of reality, media organizations bear an even greater responsibility for accuracy, not just in the stories they report but in the very examples they choose to illustrate them. For audiences, the lesson is plain: in the age of AI, critical thinking is no longer a luxury but a necessity for telling what is real from what is merely a very convincing illusion.

