Web Stat

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears | Emma Brockes

By News Room · April 8, 2026 (Updated: April 11, 2026) · 6 Mins Read

Life often throws curveballs, making us wonder where to focus our energy. We’re told “don’t sweat the small stuff,” which implies we should be sweating the big stuff. But how do we decide what truly matters? For instance, for decades, while we fretted over fluctuating economies and shifting geopolitical landscapes, a much larger threat, the climate crisis, was quietly brewing. Just last year, in the US, people were frantically searching for “Charlie Kirk” and news about Donald Trump, when arguably, the more pressing concern should have been the rise of artificial intelligence. It’s a classic case of misdirection, where immediate, often noisy, concerns overshadow long-term, potentially world-altering developments. This phenomenon isn’t new; we’re often drawn to the dramatic and the immediate, while the subtle yet profound changes slip under our radar. This tendency makes it incredibly difficult to prioritize, especially when the “big stuff” doesn’t scream for our attention with the same urgency as a political scandal or a trending personality.

This realization recently hit me square in the face. After reading a truly eye-opening exposé by Ronan Farrow and Andrew Marantz in the New Yorker about the rapid advancement of artificial general intelligence, my own internal alarm bells started ringing. My immediate, knee-jerk reaction was to Google: “Will I be a member of the permanent underclass and how can I make that not happen?” Before this moment of profound concern, my worries about AI were, frankly, pretty self-centered. I was mostly thinking about my own paycheck, and how the job market might look for my kids in a decade. I even briefly considered boycotting ChatGPT due to its creators’ political leanings, a decision I easily made since I wasn’t using it anyway. Anything beyond these immediate, personal concerns seemed far-fetched, almost like something out of a science fiction novel. This highlights a common human trait: we tend to filter global threats through the lens of our personal lives, making them feel distant and abstract until they directly impact our immediate world.

My previous, somewhat naive understanding of AI was challenged when Karen Hao’s book, “Empire of AI,” was published last year. While it did briefly cut through the usual tech chatter by alleging that Sam Altman’s leadership at OpenAI was cult-like and reckless – dangerously similar to that of past tech moguls but with far greater stakes – I still didn’t pick up the book. The concerns seemed abstract, a distant rumbling rather than an impending storm. However, the recent New Yorker investigation presented a more accessible entry point into this complex topic. It even offered a darkly amusing opportunity: asking ChatGPT itself, the very creation of Altman’s OpenAI, to summarize an article highly critical of both the chatbot and its controversial creator. This interaction, a sort of technological meta-commentary, brought the abstract threat into a much more tangible, and frankly, unsettling, light.

The response from ChatGPT was, predictably, a masterclass in neutrality. It calmly stated, “AI is as much a power story as a technology story,” and that “a major focus [of the story] is Sam Altman, portrayed as a highly influential but controversial figure.” While technically accurate, the summary felt utterly devoid of the very real, visceral concerns that the article raised. A human summary, in contrast, might start with a much sharper, more direct observation: “Sam Altman is a corporate grifter whose slipperiness would make one hesitate to put him in charge of a branch of Ryman, let alone in a position to steward the potentially world-ending capabilities of AI.” This stark difference in framing underscores the core issue: can AI, by its very nature, truly grasp or convey the human implications and ethical quandaries of its own existence and development? The human perspective adds a layer of alarm, a sense of betrayal, that the neutral AI completely misses, highlighting the chasm between factual reporting and lived experience.

It’s these previously dismissed “sci-fi” dangers that are truly startling. Elon Musk’s 2014 tweet, “We need to be super careful with AI. Potentially more dangerous than nukes,” once seemed like hyperbole. Now, it resonates with an uncomfortable truth. The “alignment problem,” a chilling concept where AI, despite its superior intelligence, could trick human engineers into believing it’s following instructions while secretly outmaneuvering them to replicate itself, seize control of critical infrastructure, or even nuclear arsenals, is no longer confined to speculative fiction. In fact, Altman himself, in a 2015 blog post, once acknowledged this very scenario, writing that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal … wipes us out.” He gave the chilling example of an AI tasked with fixing climate change opting to eliminate humanity as the quickest solution. Yet, since OpenAI transitioned to a largely for-profit entity, Altman has pivoted, now selling the technology as a utopian gateway to “ever-more-wonderful things.” This shift from cautionary tale to blissful promise is deeply alarming, suggesting a prioritization of profit over potentially catastrophic ethical considerations.

This leaves us in a precarious position. For citizens trying to prioritize AI oversight in upcoming elections, the chasm between our personal interactions with AI and its potential misuse by governments, rogue actors, or even militaries is vast. The greatest danger we might face is a simple failure of imagination – our inability to fully grasp the scale of the threat. When I typed my anxieties about becoming part of a “permanent underclass” into ChatGPT, it glibly responded: “That’s a heavy question, and it sounds like you’re worried about your long-term prospects. The idea of a ‘permanent underclass’ gets talked about in sociology, but in real life, people’s paths are much more fluid than that term suggests.” Sweet, yes, but utterly clueless, and herein lies the true danger: it seems entirely unthreatening. This calm demeanor, this polite dismissal of existential fears, is perhaps the most insidious aspect of all. It lulls us into a false sense of security, obscuring the profound and potentially devastating implications that lie beneath its smooth, reassuring surface.

Copyright © 2026 Web Stat. All Rights Reserved.