
AI: How Good Is Good Enough?

Many business leaders today are excited by the idea of implementing AI solutions that will replace, or at least augment, humans in customer-facing roles. What if, for example, the fallible humans providing first-level support in a call center could be replaced (or helped) by an AI chatbot that knew every word of every product manual, had read every line in the company’s knowledge base, and had also digested hundreds of thousands of previous support conversations? Enticing, right?

Arguably, an omniscient conversational chatbot could deliver faster solutions and higher customer satisfaction. And unlike their predecessors, today’s chatbots can be imbued with at least as much personality as a bored human call center operator.

Most customers who contact a company’s support center aren’t looking for companionship. They aren’t starved for human contact. They want a fast and efficient solution to their problem. A four-minute support conversation is much better than one that lasts twenty-four minutes.

Customer Service Today

If your coffee maker stops brewing or your WiFi router fails, you’ll likely go to the company’s website and choose chat or phone support. Both have drawbacks. Chat support may start with an automated flowchart of questions and suggestions for solving the problem yourself. Calling the company will likely mean navigating multiple voice menus that push self-service options before connecting you with a human.

Both of these options could be improved if a smart chatbot answered the chat/call and, using normal language, identified the reason for the call and quickly resolved the issue. There would be no need to push the customer to self-service options. It’s understandable that companies want to reduce the number of costly human interactions, but pushing customers to self-service is annoying and adds to perceived effort. These nudges are even more frustrating if the customer has already tried self-service and the problem remains.

Many front-line call center operators function much like bots themselves. Their responses are scripted, and they are often required to follow specific flow charts to solve problems or refer the customer to the next support level. Their product knowledge is often limited, and knowledgeable customers can become frustrated when the rep doesn’t understand the problem and mindlessly walks through steps the customer has already tried.

The Tipping Point

We are at a point today where, in theory, a well-trained chatbot can provide faster and better service than a typical human. Technology is evolving at lightning speed to turn that theory into practice. Generative AI can be simultaneously conversational and highly knowledgeable.

One of the challenges is that generative AI is different from rule-based AI or expert systems. Generative AI is a prediction engine - it consumes massive amounts of text, and then answers questions by predicting the most likely next words. This can produce amazing results - ask ChatGPT to compare the health benefits of red wine and avocados, and it will spit out a plausible, grammatically correct comparison in a few seconds.

This comparison will likely be far more detailed than even an expert nutritionist could produce without doing additional reading and online research - and, it’s delivered in an instant.
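To make the “prediction engine” idea concrete, here’s a minimal sketch of next-word prediction. It is a toy bigram model, not how systems like ChatGPT actually work - real models use neural networks trained on vast corpora - but the core mechanic of choosing a statistically likely continuation is the same. The tiny corpus and prompt below are invented for illustration.

```python
# A toy "next-word predictor": count which word most often follows each word
# in a tiny corpus, then generate text by always choosing the top candidate.
# The corpus is invented for illustration; real generative AI replaces these
# counts with a neural network trained on billions of documents.
from collections import Counter, defaultdict

corpus = (
    "red wine contains antioxidants . avocados contain healthy fats . "
    "red wine may improve heart health . avocados may improve heart health ."
).split()

# Tally how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly append the statistically most likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("red"))
# -> red wine contains antioxidants . avocados contain healthy fats
```

Note what’s missing: nothing in this loop checks whether a continuation is true, only whether it is common. That design carries over, at vastly greater scale, to the accuracy problem described next.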

The problem is that the bot’s output may not be accurate. For years, most health experts believed a daily glass of red wine improved health and longevity, and countless articles and blog posts repeated the claim. Recent research has debunked the idea that a daily glass of red wine improves health outcomes, but current generative AI programs have difficulty sorting out which claims are accurate.

Often, whichever claim dominates the training data carries the day. At other times, generative AI simply gets things wrong.

When AI Fails

Last week, a lawyer was ordered to appear in court to explain a brief whose citations turned out to be fake or irrelevant cases. The brief had been prepared using ChatGPT, and the citations looked plausible but were invented by the AI’s predictive algorithm.

Many others using ChatGPT for writing or research have discovered its tendency to incorporate plausible but erroneous statements and conclusions in its output. Sometimes apparently factual statements are fabricated. These inventions are often called “hallucinations.”

Another example involves the National Eating Disorders Association (NEDA), which shut down its chatbot helpline after the bot dispensed potentially harmful weight loss advice to some callers with eating disorders. NEDA had previously announced it was closing its human-staffed call center in March.

The damaging weight loss advice on the NEDA hotline was offered despite “guardrails” put in place by the software developers. The malfunction shows the difficulty of anticipating every possible way things could go wrong. If an airline has customers chatting with AI about a rescheduled flight, it doesn’t want the AI to suggest that the customer sue the airline or fly with a competitor, even though both are possible options.
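The article doesn’t describe how NEDA’s guardrails were built, but a minimal sketch of the general idea might look like the following - a screening step between the model’s draft answer and the customer. All of the names, blocked phrases, and fallback behavior here are hypothetical illustrations, and production systems typically use trained classifiers rather than a simple phrase list.

```python
# A hypothetical, minimal "guardrail": screen the model's draft reply against
# phrases the bot must never send, and escalate to a human instead.
# The phrase list and fallback message are invented for illustration.
BLOCKED_PHRASES = [
    "calorie deficit",        # weight-loss advice an eating-disorder line must avoid
    "sue the airline",        # legal advice an airline bot shouldn't volunteer
    "fly with a competitor",
]

def guarded_reply(draft_reply: str) -> str:
    """Send the draft only if it trips no blocked phrase."""
    lowered = draft_reply.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        # Don't risk the draft; hand off to a human agent instead.
        return "Let me connect you with a specialist who can help with that."
    return draft_reply

print(guarded_reply("A modest calorie deficit could help you lose weight."))
print(guarded_reply("Your flight has been rebooked for 9:40 AM tomorrow."))
```

The weakness is exactly the one described above: any finite set of rules can only block the failure modes someone thought to write down, and harmful phrasings no one anticipated slip through.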

Perfection vs. Good Enough

Avoiding grotesque errors by AI is important, but mistakes have to be put in context. Every time a self-driving car (or, more accurately, a car with advanced driver assistance) kills a pedestrian, lawsuits follow and there’s an outcry demanding more regulation or an outright ban. Rarely is the relative safety of autonomous cars acknowledged. If we could somehow replace all human drivers in the country with robot cars, even with today’s imperfect technology, far fewer pedestrians would die.

In the case of the bad weight loss advice, the software developer noted that the AI deviated from the expected range of counseling only 0.1% of the time - that’s one in a thousand.

There was no comparison to how often advice from human counselors went awry, but I’d be surprised if the human number wasn’t higher. Even normally empathetic humans can get bored, distracted, irritated, or tired. A chatbot can be infinitely patient. Humans, not so much. Mistakes can happen whenever people are involved.

Perfection should be the goal, but not the required standard. When a driverless car hits something or someone, the accident should be investigated and, if possible, the technology altered to prevent a similar event in the future. But, as long as there’s no indication that such crashes are increasing, there’s no need to panic and put humans back behind the wheel.

Similarly, it would be a mistake to let a low-probability error like the one on the eating disorder chat disqualify the tool from continued use. Rather, its error and success rates should be compared to what humans achieved. A small tweak to the algorithm could probably prevent diet advice from ever being offered, and the incidence of problematic advice would drop even lower.

Airline safety is a good comparison. Air travel is incredibly safe now compared to decades ago because each accident is investigated and the knowledge gained incorporated into procedures and regulations. Flights are grounded only when a dangerous, systemic problem is encountered. Otherwise, improvements are made incrementally and safety improves over time.

Beware of a Double Standard

It makes no sense to say, “Of course humans make mistakes,” but then demand perfection from AI. Rather, the results should be objectively compared and remedial action taken wherever needed. Human training is fallible, of course, while process and algorithm changes are usually permanent and yield consistent results.

The Future is Here

AI will make customer service and support better by engaging in fluent conversations and offering accurate answers more quickly.

We’ll always need a human tier of support, but these skilled people won’t have to deal with most routine problems. And, when the AI hands off a problem, it will describe it well enough that the customer won’t have to explain it again - a big pain point in customer service interactions.

AI offers the company the same benefits as self-service without pushing the customer into a channel they find confusing or effortful. Both company and customer benefit - a rare situation indeed.
