In the not-so-distant past, artificial intelligence was being hailed as the savior of modern business. Grand predictions floated around boardrooms and tech expos alike: AI would automate the mundane, streamline operations, and cut labor costs.
According to OpenAI’s Sam Altman, 2025 was supposed to be the golden year for “AI agents”—digital workers capable of performing complex, human-level tasks entirely on their own.
But now, halfway through the much-anticipated year, that bright, gleaming future is looking a bit… buggy.
The “AI Agent” Dream Falls Short
An AI agent is supposed to act like a virtual employee—one that doesn’t eat, sleep, or ask for a raise. These agents are designed to handle end-to-end tasks on their own, ideally without needing humans to double-check their work.
In theory, that sounds like a dream come true for companies. In practice, though? Not so much.
As of April 2025, even the most advanced AI agents could only complete about 24% of the tasks given to them. That’s not just underwhelming—it’s barely a passing grade. Yet despite the unimpressive track record, companies raced ahead, slashing entire departments and swapping out seasoned professionals for AI-driven solutions.
Some saw it as bold innovation. Others now see it as corporate self-sabotage.
Gartner’s Survey: Executives Are Second-Guessing Themselves
A recent survey by the consulting firm Gartner revealed some telling numbers: of 163 business executives polled, half said they would abandon their once-ambitious plans to significantly downsize their customer service teams by 2027.
The reason? Simple: AI couldn’t handle the job. Whether it’s misinterpreting customer inquiries, failing to escalate problems properly, or simply missing nuance, AI tools, particularly in customer-facing roles, are proving less reliable than promised.
This has pushed companies to shift their messaging. The phrase “AI-powered transformation” is being slowly retired. In its place: “hybrid workforce models,” “transitional challenges,” and the ever-popular “human-in-the-loop” approach. In plain English, that means companies still need humans… lots of them.
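In practice, a “human-in-the-loop” setup is often little more than a routing rule: let the model act on its own only when it is confident, and hand everything else to a person. Here is a minimal sketch of that pattern; the names, the confidence threshold, and the keyword heuristic (standing in for a real model call) are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical cutoff; real deployments tune this per task.
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Ticket:
    text: str

def classify_intent(ticket: Ticket) -> tuple[str, float]:
    """Stand-in for a real model call; returns (intent, confidence)."""
    # Toy keyword heuristic, for illustration only.
    if "refund" in ticket.text.lower():
        return ("refund_request", 0.92)
    return ("unknown", 0.35)

def route(ticket: Ticket) -> str:
    intent, confidence = classify_intent(ticket)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto:{intent}"       # the AI handles it end to end
    return "escalate:human_agent"     # a person reviews the case

print(route(Ticket("I want a refund for my order")))  # auto:refund_request
print(route(Ticket("Something weird happened?!")))    # escalate:human_agent
```

The point of the sketch is where the work actually lands: every low-confidence case still reaches a human, which is why “human-in-the-loop” rarely translates into fewer people.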
As Kathy Ross, a senior director at Gartner, put it: “The human touch remains irreplaceable in many interactions.”
Turns out, customer empathy can’t be automated.
AI Fatigue Is Spreading in the Workforce
Interestingly, many workers saw this coming long before the boardroom did. A joint study by IT firm GoTo and research company Workplace Intelligence found that 62% of employees believe AI is being significantly overhyped.
That’s not just workplace grumbling. It’s a reality check.
Many employees have already experienced AI tools in action—and not always in a good way. From chatbots that loop you in circles to auto-responses that completely miss the point, workers and customers alike are becoming disenchanted with the so-called magic of machine learning.
Even more telling: only 45% of corporate IT leaders say their company has an actual AI policy. That means more than half of organizations have been throwing AI into their operations without a clear strategy, framework, or safety net.
Among the top concerns cited:
- Security risks: AI can unintentionally leak sensitive data.
- Integration issues: New systems don’t always play nice with legacy software.
- Training demands: AI tools need lots of data—and lots of oversight.
In many cases, AI has added complexity rather than reducing it.
Case Study: Klarna’s Big Reversal
Few stories illustrate the AI reversal better than Klarna, a Swedish fintech company.
In 2024, Klarna downsized its workforce by a striking 22%. The goal? Make way for AI and automation. The narrative was clear: out with the old (human workers), in with the new (machine efficiency).
But by May 2025, Klarna hit the brakes—and hit them hard.
The company launched a “recruitment drive” to rehire human employees. It was a quiet admission that the AI experiment hadn’t quite gone to plan. And Klarna isn’t alone. Several major brands, particularly in customer service and content moderation, have quietly reintroduced human teams after failed AI rollouts.
This isn’t just about productivity—it’s also about reputation. In some cases, AI-driven tools have made embarrassing mistakes, from sharing private information to offering completely nonsensical advice. The cost of cleaning up those errors? Often higher than the savings gained from automation in the first place.
Smoke, Mirrors, and “Agentic” Hype
So why did so many companies fall for the AI pitch?
According to tech critic Ed Zitron, the problem isn’t just the tech itself—it’s the way it was sold.
“These ‘agents’ are branded to sound like intelligent lifeforms,” Zitron says, “but are really just trumped-up automations.” Instead of being sentient or intuitive, many AI tools still require extensive programming, regular corrections, and strict limitations.
Think of them less as brilliant co-workers, and more like over-eager interns who need supervision 24/7.
In fact, many AI products have been found to simply repackage older automation systems with a new AI label. The result? Confused customers, frustrated staff, and CFOs wondering where all that “cost-saving efficiency” went.
Where Do We Go From Here?
So, is AI dead in the water? Not quite.
Despite the missteps, AI still has plenty of useful applications—just not the all-powerful ones marketers claimed. When paired with skilled human workers, AI can:
- Help organize data faster
- Automate repetitive tasks (like scheduling or basic reports)
- Provide insights from large datasets
- Offer draft responses or suggestions in communications
But it can’t replace emotional intelligence, ethical judgment, or common sense. Not yet, anyway.
What’s becoming clearer by the day is this: companies can’t shortcut their way into the future by simply swapping people for code. True innovation requires thoughtful planning, ethical foresight, and a better understanding of what AI really can and can’t do.
Final Thought: The Human Edge
Despite all the flashy demos, futuristic headlines, and optimistic tech forecasts, one truth is echoing louder than ever across boardrooms and break rooms alike: humans aren’t obsolete—far from it.
In a world increasingly shaped by algorithms and automated decision-making, the value of human qualities has become more—not less—apparent. Machines might process data at lightning speed, but they don’t feel the room. They don’t navigate messy interpersonal situations, notice the hesitation in a customer’s voice, or recognize that awkward pause in a Zoom call as a sign of unspoken concerns. That’s the realm of human instinct, empathy, and nuance.
These uniquely human capabilities are not just “nice to have” anymore—they’re essential.
Empathy Still Can’t Be Coded
No matter how sophisticated AI becomes, it doesn’t truly understand what it’s like to be human. It doesn’t know what it means to comfort a frustrated client, motivate a tired coworker, or mediate between two team members who don’t see eye to eye. In high-stakes environments—whether it’s healthcare, crisis management, education, or customer support—empathy often makes the difference between success and failure.
AI can mimic polite language. It can generate apologies. But it doesn’t mean them. That gap in authenticity is something people feel instinctively. And once that trust is lost, it’s incredibly hard to regain.
Creativity Doesn’t Follow Scripts
Then there’s the creative side of the workforce. Innovation, storytelling, strategic thinking—these aren’t checklist tasks you can feed into an algorithm. They require lateral thinking, cultural context, imagination, and yes, the occasional happy accident.
AI might help brainstorm ideas or speed up parts of the process, but it rarely originates the big idea that changes the game. That spark? It’s still human.
When Things Go Wrong, Humans Fix It
Let’s not forget: when AI messes up (and it does), humans are the ones who clean up the mess. Whether it’s a chatbot giving dangerously wrong medical advice, an algorithm misjudging a hiring decision, or an AI-generated response that sparks a PR nightmare, it’s human workers who step in, assess the damage, apologize, and repair the relationship.
It’s also humans who have to train these systems, supervise them, and improve them continuously. That’s not just support work—it’s essential labor that keeps AI functioning.