Steven Rosenbaum, a passionate advocate for a healthier media landscape, has spent five years grappling with a sobering truth: our shared understanding of reality is crumbling, and AI is accelerating its demise. As co-founder of the Sustainable Media Center, he’s seen firsthand how Gen Z is campaigning to reform a media ecosystem that increasingly struggles to serve democracy and societal well-being. In his upcoming book, “The Future of Truth: How AI Reshapes Reality,” Rosenbaum dives deep into how our truth-seeking institutions are struggling to keep up, painting a stark picture of the dangers of letting technology dictate what’s real and what’s not.
Rosenbaum laments that the term “misinformation” barely scratches the surface of our current predicament. Five years ago, the battle was against clear falsehoods. Today, the enemy is far more insidious: apathy. He observes a widespread “fuck it” response among people who feel constantly lied to and have simply thrown in the towel. This isn’t just a personal choice; it has profound civic implications. We’ve moved beyond the era of “alternative facts”; now it’s about who can shout their version of “truth” the loudest and fastest. The very name “Truth Social” isn’t an accident; it’s a deliberate attempt to own the narrative. While a singular truth may be a relic of the past, Rosenbaum believes we still have the right to demand transparency from those who deliver information. Just as we expect food labels to be accurate, we should be able to know the underlying sources of our information and the role AI plays in shaping it. His book serves as a wake-up call, urging consumers to demand truth transparency or risk becoming victims of a manipulated reality.
The thought of top-down gatekeeping makes Rosenbaum shudder, recalling past attempts to control information that only sparked backlash. He believes the idea that technology alone can smooth over cultural, linguistic, and belief differences is “insane.” However, he finds hope in recent legal developments. Two lawsuits have successfully argued for product liability against social media companies for their impact on young people. Rosenbaum sees this as a crucial precedent, asking: “Does that same question apply to AI?” If so, AI developers must seriously consider their societal responsibilities. He believes holding tool-makers accountable for their creations’ behavior is a positive step. Yet, he acknowledges the limitations of legislation when it comes to constantly evolving technology. Banning specific apps often leads to a cat-and-mouse game where developers simply create new iterations, outmaneuvering laws designed to catch old ones. Venture capital, with its relentless pursuit of 10x returns, often fuels this cycle, prioritizing profit over potential dangers.
Despite the bleak outlook, Rosenbaum expresses a surprising optimism about Gen Z. They are digital natives, born into a world where everything, whether truth, facts, comedy, memes, or satire, arrives without clear labels. This has made them inherently skeptical, inclined to take everything with a grain of salt. He reassures his older friends that young people aren’t checked out; in fact, they’re remarkably adept at distinguishing between “silly AI dog videos and serious things.” A significant concern remains, however: the algorithms. Rosenbaum worries about the point at which algorithms bombard Gen Z with a deluge of content, 70% of which might be untrue. He sees this as a critical juncture, a moment when we must collectively demand clarity and truth from these platforms. If we fail to do so, he warns, we risk ceding control to machines that dictate our lives and our understanding of the world.
Rosenbaum’s observations about the media landscape are underscored by real-world events. U.S. District Judge Amit Mehta recently grappled with enforcing remedies against Google’s search monopoly, particularly regarding sharing search data with competitors. Google’s lawyers dramatically called this data their “crown jewels,” arguing irreparable harm if forced to comply before appeals are exhausted. The Department of Justice, however, dismissed these concerns as “purely theoretical.” Mehta, visibly frustrated with the notion of “kicking the can down the road,” acknowledged the rapid changes AI is bringing to the search market, further complicating the judicial process. This highlights the inherent difficulty in regulating fast-moving technology.
Meanwhile, a significant legal precedent was set with the first-ever conviction under the “Take It Down Act” of 2025. James Strahler II pleaded guilty to cybercrimes, including posting 700 images, both real and AI-generated, to a child sexual abuse website, and distributing explicit AI-generated videos of a victim to her co-workers. U.S. Attorney Dominick S. Gerace II emphasized the commitment to using “every tool at our disposal” to hold such offenders accountable. The act, championed by First Lady Melania Trump, targets new forms of digital harm, especially those fueled by AI’s generative capabilities, and this first conviction marks a crucial step in enforcing it. It’s a small victory in a much larger war, but it shows that accountability, even in the ever-evolving digital landscape, is possible.

