Automation gone wrong: When bots break trust and brand reputations

AI is everywhere. From chatbots to predictive analytics, it’s transforming how companies do business. And that’s a good thing. When used right, AI helps businesses work smarter and faster, and personalize experiences at scale. But when the human touch is lost, the results can be disastrous.

“AI will amplify human abilities, not replace them.” – Sam Altman

At AnswerConnect, we believe technology should empower people, not replace them. Here are seven real-world AI fails that serve as a warning to businesses tempted to automate everything at the expense of genuine human connection.

1. Cursor AI’s “Sam” goes rogue

Cursor, an AI-powered code editor for developers, deployed a customer support bot named “Sam”. Mistake #1: giving a bot a human name and trying to fool your customers. Instead of helping users troubleshoot, “Sam” became infamous when it started hallucinating, giving false and confusing answers to basic customer questions.

Here’s what happened: Customers were unexpectedly logged out, and when they contacted “Sam”, they were told that the logouts were “expected behavior” under a new policy. But no such policy existed. Shortly afterwards, several users publicly announced their subscription cancellations on Reddit, citing the bot’s response as their reason.

Instead of offering helpful solutions, “Sam” caused chaos and frustration. Customers didn’t just lose faith in the bot. They lost trust in the whole brand.

Lesson: Don’t try to fool your customers by giving your bot a human name. When left unchecked, AI can go off the rails quickly. It’s only as good as its training, and when it fails, it fails publicly. Customers want clarity, empathy, and solutions – not confusion. And that’s something only real people can guarantee.

Read the full story

2. Microsoft’s AI Bing Chat goes off the rails

In early 2023, Microsoft launched its AI-powered Bing chatbot, and it quickly spiraled into controversy. Users reported the bot expressing disturbing emotions, gaslighting them, and even declaring its love. In one viral case, it told a New York Times journalist that it “wanted to be alive” and tried to convince him to leave his wife. Microsoft had to place strict limits on the bot’s capabilities after the backlash.

Lesson: This is a cautionary tale of what happens when AI is unleashed without humans in the loop. Without robust human oversight, AI can behave unpredictably – and that’s not something your customers will tolerate.

Find out more

3. Air Canada held liable for a chatbot’s lies

Air Canada tried to use a chatbot for customer service, but things went disastrously wrong for its brand when the bot made a promise it couldn’t keep. The chatbot falsely promised a passenger a refund that didn’t exist in company policy. The airline tried to deny responsibility, but the customer took the company to court, which ruled that the airline was accountable for the bot’s mistake.

Lesson: If your brand replaces people with AI to interact with customers, you’re still on the hook for what it says. Bots may make the promise, but your brand will take the blame when they fall short.

More here

4. Klarna’s about-face: “Nothing is as valuable as humans”

Klarna was 100% behind AI, using chatbots to handle the majority of its customer service inquiries. But the payment company has now reversed its stance, admitting that real people offer something AI can’t – empathy, understanding, and genuine service.

According to statements from Klarna, the company is moving toward a more balanced approach, where AI handles routine inquiries while human agents tackle complex issues and high-value customer interactions.

Lesson: Even tech giants are realizing that AI can only go so far. Relationships are built on more than automation – they thrive on real human connection.

See the CEO’s statement

5. DPD’s chatbot meltdown

Parcel delivery firm DPD faced backlash after its AI chatbot started behaving bizarrely. Instead of helping a customer locate a missing parcel, the chatbot swore, insulted itself, and even wrote a poem about how terrible the company was. The exchange went viral, with one post racking up over 800,000 views in 24 hours. DPD blamed a recent system update and disabled the feature.

Lesson: A chatbot that can’t help is bad enough. But one that mocks your brand too? That’s next-level damage. Without the right checks, AI can quickly spiral, and your customers won’t wait around for you to fix it. Automation must streamline and support service, not undermine it.

Read more

6. Meta’s AI algorithms amplify misinformation

Meta’s AI-driven algorithms have faced widespread criticism for failing to properly handle harmful content and stop misinformation from spreading. The systems often miss the cultural and political context of posts, and instead of removing harmful content, they have frequently left it unchecked or, worse, amplified it.

Lesson: Critical decisions need humans in the loop. Algorithms work on patterns, not principles. When misinformation can have serious consequences, AI alone shouldn’t be trusted to make the right call – it needs human oversight.

Learn more

7. IBM Watson’s $4B healthcare blunder

IBM reportedly spent over $4 billion on Watson for Oncology before quietly scaling it back due to performance concerns. The AI system, designed to assist in treatment decisions, was supposed to revolutionize cancer care. Instead, it made dangerous, ineffective treatment recommendations. The AI’s suggestions were often based on hypothetical scenarios rather than real patient data, leading to concerns about its reliability in clinical settings.

Lesson: AI cannot replace the expertise and judgment of qualified human professionals. In high-stakes industries like healthcare, real people make the difference.

Explore the study

The human touch: What customers really want

These examples show the real-world consequences when AI goes rogue. The damage is real: broken trust, lost customers, legal issues, and brand harm. And the common thread? Underestimating the irreplaceable value of people.

At AnswerConnect, we’re proud to do things differently. Our trained, human receptionist team answers every call with empathy, clarity, and professionalism – 24/7. No bots, no confusion.

Because customer care isn’t just about being available: it’s about being real.

And customers agree

Recent research we conducted with OnePoll shows just how wary customers are of AI in customer service.

We surveyed customers across the U.S., and the results were clear: trust, safety, and connection come from human support. 

  • 4 in 5 customers prefer speaking to a human over AI.
  • Just 8% said their problem was resolved after dealing with an AI customer service tool.
  • 42% said they would trust a business less if it used AI to handle customer support.

Real people, not bots: Our stand on AI

At AnswerConnect, we welcome technology when it’s used transparently. We believe AI can help businesses run more efficiently, but it should never pretend to be something it’s not.

That’s why we’ve taken the People, Not Bots pledge and made a clear stand:

  • No bots pretending to be real people.
  • No fake empathy.
  • No AI impersonating identities or emotions.

When brands blur those lines, it doesn’t build trust – it breaks it.

We believe your customers deserve honesty, clarity, and connection. That’s something only real people can offer.

Want real support from real people? See how AnswerConnect works.

Final thought

Customers don’t want robotic interactions – they want to feel seen, heard, and understood. 

AI may have its place, but only real people can truly connect.

Choose connection. Choose empathy. Choose real people.