Publications

Why AI Chatbots Should Complement, Not Replace, Humans

Written by Jenny Sitareniou | Oct 6, 2025 12:58:35 PM

 

The debate about AI chatbots often feels polarized. Some believe they will eventually replace all human interaction in customer service. Others dismiss them as cold, impersonal, and more frustrating than helpful, a trend not worth following.

Thousands of years ago, Aristotle wrote in his Nicomachean Ethics that virtue lies in the middle ground, not in extremes.¹ It is striking, and perhaps ironic, that the words of an ancient philosopher can still guide today’s discussion about artificial intelligence.

 

The Promise and the Problem

There is no denying that chatbots bring clear advantages to call centers. They are fast, consistent, and can manage large volumes of routine requests. For organizations under pressure, that efficiency matters.

However, research shows they cannot replace what makes human communication effective. In the banking sector, chatbots performed well on efficiency, but human agents outscored them on empathy and reassurance — qualities that strongly influence satisfaction.² Another study found that while people may prefer bots for sensitive or embarrassing topics, when angry or distressed they overwhelmingly wanted to speak to a human.³

 

The Practical Gap

The risks are not only emotional; they are also operational. Poorly designed chatbots can actually create more work. Studies show that unresolved chatbot conversations often escalate to humans, sometimes with greater complexity than if a person had been involved from the start.⁴ Other research highlights added cognitive strain for staff,⁵ while broad industry reviews have found only modest productivity gains, often no more than three percent.⁶

Even regulators have taken notice. In the Netherlands, the Autoriteit Persoonsgegevens (AP) and the Autoriteit Consument & Markt (ACM) recently stated that chatbots may not fully replace humans in customer service. Companies must always provide access to a person, be transparent when bots are used, and prevent misleading or evasive answers.⁷

 

Fragility & Failure

The reality is that chatbots are also fragile. Columbia University research showed that they can mistake nonsense for meaningful input.⁸ Other studies reveal that they oversimplify complex information, missing important details.⁹ In real-world customer conversations, which rarely follow a script, this fragility can quickly erode trust.

When mistakes happen, the difference is clear. Customers are far less forgiving of chatbot errors than of human ones. A person can apologize, explain, and rebuild trust. A chatbot cannot.¹⁰ Moreover, a poorly handled chatbot failure can directly harm brand perception, leaving customers with the impression that the company is impersonal, indifferent, or hiding behind automation. This negative spillover doesn’t just frustrate in the moment. It damages loyalty and long-term trust in the brand.¹¹ ¹² ¹³

 

The Human Dimension

Beyond performance lies something deeper. Scholars warn of “emotional outsourcing,” where too much reliance on AI weakens real human connection.¹⁴ Others argue that treating bots as if they could reciprocate respect risks undermining dignity itself.¹⁵ Efficiency should never come at the cost of humanity.

 

A Balanced Path Forward

So, what’s the answer? It isn’t all bot or all human. Chatbots are valuable for simple, routine tasks and triage. But when conversations grow complex, sensitive, or emotional, people must remain at the center. The strongest systems allow seamless handover, with clear transparency so customers always know who they are speaking to.
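As a rough illustration of this handover principle, the sketch below shows one way a routing layer might decide when a conversation should move from bot to human. The intent labels, sentiment threshold, turn limit, and the route_conversation helper are illustrative assumptions for the sake of the example, not a description of Pridis's or any vendor's actual system.

```python
from dataclasses import dataclass

# Illustrative values only; a real deployment would tune these against
# its own intent model and escalation policy.
ROUTINE_INTENTS = {"opening_hours", "order_status", "password_reset"}
ESCALATION_SENTIMENT = -0.4   # below this, assume the customer is upset
MAX_BOT_TURNS = 4             # hand over if the exchange drags on unresolved

@dataclass
class Turn:
    intent: str        # e.g. the label from an intent classifier
    sentiment: float   # -1.0 (angry) .. 1.0 (happy)

def route_conversation(turns: list[Turn]) -> str:
    """Decide whether the bot may keep answering or a human should take over."""
    last = turns[-1]

    # Complex or sensitive topics go straight to a person.
    if last.intent not in ROUTINE_INTENTS:
        return "human"

    # Distressed customers should not be kept with the bot.
    if last.sentiment <= ESCALATION_SENTIMENT:
        return "human"

    # Long unresolved exchanges suggest the bot is not helping.
    if len(turns) > MAX_BOT_TURNS:
        return "human"

    return "bot"

# A routine question from a calm customer stays with the bot,
# while a frustrated follow-up is handed over to an agent.
print(route_conversation([Turn("order_status", 0.2)]))    # -> "bot"
print(route_conversation([Turn("order_status", -0.7)]))   # -> "human"
```

Whatever form such logic takes, the handover itself should carry the full transcript to the agent and tell the customer plainly that a person is now on the line.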

At Pridis, AI is a tool to amplify what people do best, not take their place. The promise of chatbots is real, but their role must be defined with care. If designed to complement human strengths, they can deliver both efficiency and authenticity. And perhaps Aristotle was right all along: wisdom rarely lives at the extremes, but in the balance between them.

 

 

References
  1. Aristotle. Nicomachean Ethics. (4th century BCE).
  2. Wijesundara, C., & Perera, R. (2023). AI Chatbots vs Human Agents: A Study in Banking Sector, Sri Lanka. IJISR.
  3. Chan, M., et al. (2023). Consumers’ Preferences for Human vs AI Chatbots in Sensitive Health Contexts. University of Kansas.
  4. Heimbach, I., et al. (2025). Deploying Chatbots in Customer Service: Adoption Hurdles and Simple Remedies. arXiv.
  5. Xu, A., et al. (2021). Cognitive Load and Productivity Implications in Human-Chatbot Interaction. arXiv.
  6. Computerworld (2024). AI chatbots deliver minimal productivity gains, study finds.
  7. Autoriteit Persoonsgegevens & Autoriteit Consument & Markt (2025). Chatbot mag mens niet volledig vervangen bij klantenservice [Chatbot may not fully replace humans in customer service]. AP.nl.
  8. National Science Foundation / Columbia University (2023). Verbal Nonsense Reveals Limitations of AI Chatbots. NSF.
  9. Livescience (2024). AI Chatbots Oversimplify Scientific Studies.
  10. Belanche, D., et al. (2025). Customer Reactions to Service Failures by Chatbots vs Human Agents. ScienceDirect.
  11. Cai, N., et al. (2025). Understanding Consumer Reactions to Chatbot Service Failures. ScienceDirect.
  12. Chattaraman, V., et al. (2023). Exploring the Relationship Between Chatbots, Service Failure Recovery and Customer Loyalty. Wiley Online Library.
  13. Park, E., et al. (2024). How the Communication Style of Chatbots Influences Consumer Satisfaction, Trust, and Engagement. Nature.
  14. Brookings Institution (2023). What Happens When AI Chatbots Replace Real Human Connection?
  15. Roeser, S. (2025). Chatbots, Respect, and Human Dignity. arXiv.