How to Avoid Chatbots That Do More Harm Than Good

Posted by Alyssa Verzino on Jun 4, 2019 2:32:00 PM

 


"You never get a second chance to make a first impression" isn't just a truism; it's a warning about deploying customer support chatbots before they are ready. If you prematurely hand off customer interaction to an artificially intelligent chatbot before that chatbot has been properly trained, you can do far more harm than good.

Where artificial intelligence solutions like chatbots differ from conventional software is that AI chatbots need ramp-up time, just like a human employee. You wouldn't throw a brand new support agent into a call queue unsupervised at 9 a.m. on their first day. You shouldn't do that to your support chatbots, either.

Where support chatbots in particular are different from other AI agents is that they are customer-facing, so their errors are not only public, but particularly costly. Most customers who contact your support team are already unhappy, so dealing with an unprepared or unoptimized chatbot can irritate them more. Worse, if a chatbot isn't properly trained, it may actually cause support cases to increase, rather than decrease, as customers file tickets not only for their original issues, but also for follow-on complaints about your broken chatbot.

Ideally, all AI chatbots go through three training phases:

  1. Learn from well-annotated historical data
  2. Observe human "co-workers" in real time
  3. Train under a human "supervisor"

Support chatbots can't afford to cut corners on any of these phases.

Many organizations skip or screw up Step #1 because they don't have well-annotated training data. Dumping a log of your past customer conversations into a machine-learning algorithm won't do much good unless that data has been marked up to identify both the support rep and the customer, as well as to tie the conversation back to a formal support ticket and a customer satisfaction rating of the experience. Otherwise, the AI won't know how to distinguish good support calls from bad ones.
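As a rough sketch of what "well-annotated" might look like in practice (the field names and satisfaction scale here are illustrative assumptions, not any specific product's schema):

```python
from dataclasses import dataclass

@dataclass
class AnnotatedConversation:
    """One historical support conversation, marked up for training."""
    turns: list        # (speaker, text) pairs; speaker is "rep" or "customer"
    ticket_id: str     # ties the chat back to the formal support ticket
    csat_score: int    # customer satisfaction rating, e.g. 1 (bad) to 5 (good)

def good_examples(conversations, min_csat=4):
    """Keep only the conversations the AI should learn to imitate."""
    return [c for c in conversations if c.csat_score >= min_csat]

history = [
    AnnotatedConversation(
        turns=[("customer", "The app won't load"), ("rep", "Try clearing the cache.")],
        ticket_id="T-1001", csat_score=5),
    AnnotatedConversation(
        turns=[("customer", "Billing question"), ("rep", "Not my problem.")],
        ticket_id="T-1002", csat_score=1),
]
print(len(good_examples(history)))  # 1 -- only the well-rated conversation survives
```

Without the speaker labels, ticket link, and satisfaction score, that filtering step is impossible, and the model trains on good and bad calls indiscriminately.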

While famous game-playing AI agents like AlphaGo can jump right into human interaction after Step #1, customer service isn't a game you can afford to let your chatbot repeatedly "lose" in order to learn. That's why deploying a chatbot that assists your support reps, rather than talking directly to customers, is a critical second step. An "assistant" chatbot can make time-saving suggestions to your support team, which they can accept or reject. This natively builds annotation into your workflow and lets the chatbot learn to tell good, helpful answers from bad. If you don't have the annotated data to undertake Step #1, Step #2 becomes even more critical.
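One way to picture the "assistant" phase is a loop where every accept/reject decision by the rep becomes a labeled training example for free. A minimal sketch (all function names here are hypothetical, not a real API):

```python
def assistant_loop(chatbot_suggest, rep_review, training_log):
    """Chatbot drafts a reply; the human rep accepts or rejects it.
    Each decision is appended to the log as a labeled example."""
    def handle(customer_message):
        suggestion = chatbot_suggest(customer_message)
        accepted = rep_review(customer_message, suggestion)
        # The rep's accept/reject decision *is* the annotation.
        training_log.append((customer_message, suggestion, accepted))
        return suggestion if accepted else None
    return handle

# Toy usage: a rep who only accepts suggestions mentioning "cache"
log = []
handle = assistant_loop(
    chatbot_suggest=lambda msg: "Try clearing the cache.",
    rep_review=lambda msg, s: "cache" in s,
    training_log=log,
)
handle("The app won't load")
print(log[0])  # ('The app won't load', 'Try clearing the cache.', True)
```

The point of the design is that the rep never has to do extra annotation work: their normal accept/reject workflow produces exactly the good-answer/bad-answer labels the model was missing in Step #1.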

Once your support chatbot becomes consistently helpful to support agents, you can let it loose -- in a supervised fashion -- on customers. Simply assign a support rep to oversee the chatbot for a pilot period, and allow that rep to step in (or "escalate to a person") if the chatbot ever gets stuck or veers off track. This ensures the chatbot can handle direct interaction before it's left to handle customers on its own.
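The supervised pilot can be thought of as a simple escalation rule: the chatbot answers on its own only when it's confident, and hands off to the overseeing rep otherwise. A sketch under that assumption (the confidence model and threshold are illustrative):

```python
def supervised_reply(chatbot_answer, confidence, human_rep, threshold=0.8):
    """Chatbot answers directly, but escalates to the overseeing rep
    whenever its confidence in the reply drops below the threshold."""
    def handle(customer_message):
        answer = chatbot_answer(customer_message)
        score = confidence(customer_message)
        if score < threshold:
            return human_rep(customer_message)  # "escalate to a person"
        return answer
    return handle

handle = supervised_reply(
    chatbot_answer=lambda m: "Try clearing the cache.",
    confidence=lambda m: 0.9 if "load" in m else 0.3,  # toy confidence model
    human_rep=lambda m: "[rep takes over]",
)
print(handle("The app won't load"))   # confident -> chatbot answers itself
print(handle("Where is my refund?"))  # low confidence -> escalates to the rep
```

During the pilot, you'd watch how often the escalation branch fires: a chatbot that escalates nearly everything isn't ready to fly solo yet.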

Skipping any of these steps will at best lead to a chatbot that simply escalates every customer interaction to a live support agent, adding pointless friction to your support process and wasting your deployment efforts. In the worst case, an untrained support chatbot will frustrate and mislead your customers with bad advice -- effectively becoming a "bad support hire" at software speed and scale.

Talla is building support chatbots designed explicitly to avoid these pitfalls. If you'd like to learn which kind of support AI to employ -- and the right way to deploy it -- contact Talla today.


Topics: Customer Support, chatbot