Why does everyone hate AI?

Author: Rex Woodbury


Translation: SpecialistXBT, BlockBeats

Editor’s Note: Domestic enthusiasm for Openclaw has brought AI agents into everyday life. In the venture capital world, new breakthroughs, funding legends, and grand narratives about AI reshaping the world surface every few weeks. Yet in stark contrast to the excitement in tech and investment circles, the general public is far less optimistic, and a distinct anti-AI sentiment is spreading. Why does a technology hailed as “the next industrial revolution” provoke such strong resentment and hostility? This article examines the paradox of public sentiment in the AI era along three dimensions: the history of technology, economic anxiety, and cultural psychology.

If you want to get a sense of the current era’s mood, there’s one place especially worth checking: TikTok’s comment section. Start reading TikTok comments and you’ll notice one emotion again and again: a sharp, intense, almost visceral hatred of AI.

Here are some comments I captured from a video last night:

The atmosphere… isn’t very good.

I’ve been thinking about this a lot lately. My column, “Digital Native,” focuses on the intersection of humans and technology. And right now, people seem to truly despise the most important technology of our time. That tension creates an obvious problem: mass adoption is hard when many people flatly refuse to use AI.

One comment along these lines: “Someone asked me the other day how many times a day I use ChatGPT. They were shocked when I said never. I plan to keep it that way.”

I don’t think Silicon Valley has fully grasped how deep the aversion to AI runs among most Americans, and it needs to think seriously about how to respond to the backlash.

This article is divided into three parts:

  1. A Brief History of Technological Skepticism

  2. Why Is AI So Hated?

  3. How to Address AI’s Public Relations Challenges

Without further ado, let’s begin.

A Brief History of Technological Skepticism

Technology skeptics have always existed. Even writing itself was once attacked: in Plato’s “Phaedrus,” Socrates argues that the invention of writing would “create forgetfulness in the learners’ souls” and weaken memory. He wasn’t entirely wrong, but he was certainly alarmist. Once humans moved from oral memory to writing, they could develop more complex ideas and build more sophisticated societies. (Writing also prevents forgetfulness, as anyone with a shopping list knows.) And we only know Socrates’ view because Plato wrote it down, which is a nice irony.

When the printing press spread in the 1500s, the Swiss scholar Conrad Gessner warned that the resulting flood of information would be “confusing and harmful” to the mind. Two hundred years later, as newspapers emerged, a French statesman argued they would isolate readers and destroy the communal ritual of hearing the news from the church pulpit. I’ve never gotten my news from a pulpit, but I can confidently say I prefer reading The New York Times over coffee.

By the early 1900s, the automobile was a target too. The New York Times once ran the headline “Nation Roused Against Motor Killings” (you can still look it up). A widely circulated statistic at the time claimed that more Americans died in car accidents in the four years after World War I than had died on the battlefields of France.

1924 headline: “Nation Roused Against Motor Killings.”

This is one case where I think the critics had a point: future generations may find it hard to believe that we strapped ourselves into 4,000-pound death machines and hurtled down roads. But the anxiety was moot even then: the genie was out of the bottle, and it wasn’t going back in.

There are plenty of similar stories. The phonograph was accused of draining the vitality from live, human performance; critics insisted recorded music would kill off amateur musicianship and ruin musical taste (imagine what they’d say about suno.ai). Television, meanwhile, was one of the most famously controversial technologies, nicknamed the “idiot box.” Critics argued TV would destroy community bonds, shorten attention spans, and encourage violence. It probably did all three.

In 1948, a boy’s reaction upon first seeing television.

Into the 21st century, the internet and social media drew backlash of their own, some of it justified, some not. Technological progress keeps arriving, and human reactions to innovation follow the same script each time. It’s a long-standing tradition: fearing what we create.

Frankenstein’s monster is perhaps the best metaphor for human fear of our own creations.

Of course, every new technology brings benefits and drawbacks; technology itself is a mirror of society. As Marshall McLuhan said: “We shape our tools, and then our tools shape us.”

All of this brings us to AI — arguably the most hated technology I’ve encountered in my lifetime.

Why Is AI So Hated?

The backlash against AI somewhat follows the historical pattern described above, but I believe the sentiment has shifted from mere skepticism to outright hostility. I see several reasons:

AI appears at a time when the public image of the tech industry is extremely poor.

In the 2010s, tech was cool. Everyone wanted to work at Google or Facebook, playing ping-pong after free lunch. In 2013, “The Internship” put Vince Vaughn and Owen Wilson inside Google as interns. That same year, Sheryl Sandberg published “Lean In.” Marissa Mayer was reviving Yahoo, Apple’s spaceship headquarters was going up, and WeWork was a fast-growing real estate tech company. The mood was optimistic.

Ten years later, when ChatGPT emerged, public attitudes had shifted. Facebook had gone through the Cambridge Analytica scandal, new studies revealed Instagram’s impact on mental health, and many had lost money on meme coins and expensive JPEGs. The mood had turned sour.

Some studies show that perceptions of AI correlate strongly with attitudes toward social media. Countries that viewed social media positively at the time of ChatGPT’s release were more receptive to AI; conversely, countries that see social media as a threat to democracy tend to be more hostile to AI.

Simply put: the timing for AI is poor. Trust in tech companies has eroded.

Job fears are real, and they emerge during a period of economic unease.

AI also appeared amid a tough economic environment. ChatGPT was launched in November 2022, when most Americans felt pessimistic about the economy.

People aren’t eager for disruptive technologies that might take their jobs. When they hear words like “copilot” and “augmentation,” they think of layoffs. Again, the timing for AI isn’t ideal.

Creative industries shape culture, and AI poses a unique threat to creative work.

Some of the sharpest criticism of AI comes from creative sectors. You can see this on TikTok.

Last year, Adrien Brody won an Oscar for “The Brutalist,” and the filmmakers later revealed they had used AI to refine Brody’s Hungarian accent, something TikTok users still bring up angrily. Taylor Swift’s AI-generated promotional videos for “The Life of a Showgirl” drew backlash too. And in the TV series “The Studio” (an excellent show), an angry moviegoer screams at Seth Rogen’s producer character for using AI in the Kool-Aid movie, and Ice Cube flat-out shouts: “F*ck AI!”

Of course, after the 2023 SAG-AFTRA strike (the longest in Hollywood history), we even saw “AI actors” like Tilly Norwood, who recently made headlines in The Hollywood Reporter.

Creative workers are the shapers of culture and public opinion. If AI is seen as a threat to their livelihoods, its influence will ripple through the entire cultural sphere.

AI is inauthentic, and the culture right now celebrates authenticity. AI is online, and the trend right now favors offline.

Vinyl sales are at a 30-year high, Gen Z is buying film cameras, and flip phones (so-called “dumb phones”) are making a comeback. A cultural shift toward the analog, the human, and the tactile is underway, and AI is the opposite: synthetic. Part of this nostalgia is a reaction to AI fever, but the trend predates the rise of transformer models. Offline is cool right now, and AI is the most “online” thing there is. When people crave authenticity, a technology that is inherently synthetic starts at a disadvantage.

AI is perceived as an attack on identity.

The fifth reason is fuzzier but perhaps the most important. AI makes people feel outmatched by machines at the very things they are proudest of. What do I mean? Think of Maslow’s hierarchy of needs: AI is attacking the top of the pyramid.

Historically, waves of automation targeted the lower levels of the pyramid: steam engines and assembly lines replaced physical labor (the physiological work of survival), and early software automated clerical and administrative tasks. Some people were displaced, but automation didn’t reach the places where people locate their highest worth: creativity, art, music. Many people take pride in their skills, whether programming, law, or customer service, and AI is invading these identity domains at alarming speed. If a graphic designer’s self-worth rests on making beautiful animations, and Midjourney can generate a “better” image in seconds… that’s hard to accept.

A TikTok comment captures this well:

I want AI to do the boring stuff I don’t want to do, not my hobbies I love.

The angriest anti-AI comments on TikTok often come from knowledge workers, educators, and people near the top of the economic pyramid, the very people who assumed technology would never replace them. AI threatens the most privileged first, which nearly inverts the historical pattern of technological progress.

How to Address AI’s Public Relations Challenges

Most technological backlash stems from an innate fear of the new. But the backlash against AI is a compound: broken trust, economic anxiety, and a cultural mood primed to reject new technology, all layered on top of the fact that AI touches deeply human territory. Still, the genie is out of the bottle, AI has many astonishing applications, and I’m a firm supporter of it myself. So how do we fix the PR problem?

Start from the bottom of the pyramid

The most convincing AI applications are those that save lives. For example: AI can detect cancer earlier than any radiologist. These applications directly address fundamental human needs (survival) and should be emphasized more.

Tell stories around “pain points” rather than “capabilities”

Some of our Daybreak portfolio companies have quietly switched their .ai domains back to .com. Founders need to be careful how they communicate AI to customers, leading with the problems they solve. Nurses don’t care whether a product runs on Opus or Sonnet; they care whether it helps them finish paperwork faster. Most tech launches emphasize what the AI can do (model capabilities) rather than what it solves for ordinary people. The narrative should shift from “this model has a trillion parameters” to “this product eliminates four hours of repetitive work.”

Change who delivers the message — stop letting VCs talk

Maybe this is my cue to wrap up: nobody wants to hear VCs talk. The loudest voices championing AI are tech CEOs and venture capitalists, precisely the groups the American public trusts least. If I ran AI marketing, I’d put real users in the ads: farmers, accountants, home caregivers. Even for OpenAI or Anthropic, real users in a Super Bowl ad would be more convincing than vague inspirational montages (OpenAI) or subtle digs at competitors (Anthropic).

Acknowledge labor market shifts, then emphasize retraining and new job opportunities

Many founders and VCs cite data claiming AI will create more jobs than it displaces. For the person who loses their job, that’s no comfort. The term “Luddite” comes from the British textile workers who organized to smash weaving machines in the 1810s.

These workers may well have understood that the new machines would eventually make society better; they also knew the machines would make their own lives worse in the short term. The right response to that kind of upheaval is to acknowledge it and invest seriously in retraining programs.

Make humans more visible in AI products

If I were Pixar, I’d hold a contest: see who can make the best animated short using AI tools. This kind of exercise levels the playing field: anyone with a good story can create beautiful work from their living room. Artists remain central. With more projects like this, people would better understand how AI amplifies human creativity and acts as an equalizer. Just a thought.

Conclusion

Last month, Trump’s State of the Union address became the longest in history, 20 minutes longer than Clinton’s in 2000. Yet, in nearly two hours, Trump mentioned AI only three times.

Clearly, many things are happening in the world; we are in a very fragile geopolitical moment (I highly recommend Ray Dalio’s article on the disintegration of the world order). But at the same time, we are also in the early stages of what could be the biggest technological revolution of this generation, or even history. The fact that AI was only mentioned three times in a two-hour speech shows we are still very early.

Billions of people worldwide have never used AI. In the US, many even take pride in never having used it.

This is clearly unsustainable. AI adoption is coming fast, and it is colliding with the strongest anti-technology sentiment in a century (possibly ever).

Silicon Valley is confident AI will win in the end; of course it will. Technology always wins. But that confidence can come across as arrogance in the face of public skepticism, leaving a trail of resentment that could ultimately backfire on Silicon Valley. The most impressive thing about Silicon Valley is its long history of building technology for billions of people. If billions of people see you as the villain, that gets a lot harder.
