The debate about AI chatbots often feels polarized. Some believe they will eventually replace all human interaction in customer service. Others dismiss them as cold, impersonal, and more frustrating than helpful: a trend not worth following.
More than two thousand years ago, Aristotle wrote in his Nicomachean Ethics that virtue lies in the middle ground, not in extremes.¹ It is striking, and perhaps ironic, that the words of an ancient philosopher can still guide today’s discussion about artificial intelligence.
There is no denying that chatbots bring clear advantages to call centers. They are fast, consistent, and can manage large volumes of routine requests. For organizations under pressure, that efficiency matters.
However, research shows they cannot replace what makes human communication effective. In the banking sector, chatbots performed well on efficiency, but human agents outscored them on empathy and reassurance — qualities that strongly influence satisfaction.² Another study found that while people may prefer bots for sensitive or embarrassing topics, when angry or distressed they overwhelmingly wanted to speak to a human.³
The risks are not just emotional, and this is exactly where things can get tricky. Poorly designed chatbots can actually create more work. Studies show that unresolved chatbot conversations often escalate to humans, sometimes with greater complexity than if a person had been involved from the start.⁴ Other research highlights added cognitive strain for staff,⁵ while broad industry reviews have found only modest productivity gains, often no more than three percent.⁶
Even regulators have taken notice. In the Netherlands, the Autoriteit Persoonsgegevens (AP) and the Autoriteit Consument & Markt (ACM) recently stated that chatbots may not fully replace humans in customer service. Companies must always provide access to a person, be transparent when bots are used, and prevent misleading or evasive answers.⁷
The reality is that chatbots are also fragile. Columbia University research showed that they can mistake nonsense for meaningful input.⁸ Other studies reveal that they oversimplify complex information, missing important details.⁹ In real-world customer conversations, which rarely follow a script, this fragility can quickly erode trust.
When mistakes happen, the difference is clear. Customers are far less forgiving of chatbot errors than of human ones. A person can apologize, explain, and rebuild trust. A chatbot cannot.¹⁰ Moreover, a poorly handled chatbot failure can directly harm brand perception, leaving customers with the impression that the company is impersonal, indifferent, or hiding behind automation. This negative spillover doesn’t just frustrate in the moment. It damages loyalty and long-term trust in the brand.¹¹ ¹² ¹³
Beyond performance lies something deeper. Scholars warn of “emotional outsourcing,” where too much reliance on AI weakens real human connection.¹⁴ Others argue that treating bots as if they could reciprocate respect risks undermining dignity itself.¹⁵ Efficiency should never come at the cost of humanity.
So, what’s the answer? It isn’t all bot or all human. Chatbots are valuable for simple, routine tasks and triage. But when conversations grow complex, sensitive, or emotional, people must remain at the center. The strongest systems allow seamless handover, with clear transparency so customers always know who they are speaking to.
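In code, that hybrid principle reduces to a simple routing rule. The sketch below is purely illustrative, not a real Pridis implementation: the topic list, sentiment threshold, and function names are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical routine topics a bot can safely handle (assumption for this sketch).
ROUTINE_TOPICS = {"opening_hours", "order_status", "password_reset"}

@dataclass
class Message:
    topic: str
    sentiment: float          # -1.0 (angry/distressed) .. 1.0 (positive)
    asked_for_human: bool = False

def route(msg: Message) -> str:
    """Return 'bot' or 'human': people stay at the center for
    complex, sensitive, or emotional conversations."""
    if msg.asked_for_human:            # a person must always remain reachable
        return "human"
    if msg.sentiment < -0.3:           # distressed customers go straight to a person
        return "human"
    if msg.topic in ROUTINE_TOPICS:    # simple, routine tasks suit a bot
        return "bot"
    return "human"                     # default to a person when unsure

def greet(channel: str) -> str:
    """Transparency: customers always know who they are speaking to."""
    if channel == "bot":
        return "You are chatting with our virtual assistant."
    return "You are now connected to one of our colleagues."
```

The design choice worth noting is the final fallback: when the system cannot confidently classify a conversation as routine, it hands over to a human rather than letting the bot guess, which is exactly the failure mode the research above warns against.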
At Pridis, AI is a tool to amplify what people do best, not take their place. The promise of chatbots is real, but their role must be defined with care. If designed to complement human strengths, they can deliver both efficiency and authenticity. And perhaps Aristotle was right all along: wisdom rarely lives at the extremes, but in the balance between them.