We’re moving away from first-generation chatbots and toward complex conversational AI strategies for interacting with customers and delivering on multi-faceted requests. As a result, the volume of conversations across channels is scaling, and quickly.
The need to disambiguate confusing messages is common in human-to-human interactions, so it’s no surprise that it shows up in human-to-bot conversations as well, especially at scale.
Importance of Disambiguation
Businesses competing on CX need to tap into the long tail of their data if self-service NLU capabilities are part of their plan. The ability to easily disambiguate intents into sub-intents is crucial to achieving truly good NLU: as more intents are added to an ontology, it gets noisier, the chances of overlap increase, and the end-user experience suffers.
When a chatbot can match a query to an intent (in other words, it understands the user's message), a standard response is triggered based on the conversation's design flow. Disambiguation flows, on the other hand, are generally used when the bot recognizes the customer's message but finds multiple matching intents. Disambiguation is the process of clarifying the user's intent.
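The branching logic above can be sketched in a few lines. This is a minimal, hypothetical example (the function and intent names are illustrative, not any platform's API): when several intents score within a small margin of the top match, the bot asks the user to clarify instead of guessing.

```python
def handle_message(scores, margin=0.15):
    """Decide whether to respond directly or disambiguate.

    scores: dict mapping intent name -> model confidence in [0, 1].
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    top_intent, top_score = ranked[0]
    # Runners-up whose confidence is within `margin` of the top match
    # are plausible alternatives the user should choose between.
    candidates = [name for name, s in ranked if top_score - s <= margin]
    if len(candidates) == 1:
        return {"action": "respond", "intent": top_intent}
    # Multiple plausible intents: trigger a disambiguation prompt.
    return {"action": "disambiguate", "options": candidates}

# Two intents score closely, so the bot asks for clarification.
print(handle_message({"cancel_order": 0.62, "cancel_subscription": 0.58, "refund": 0.20}))
# → {'action': 'disambiguate', 'options': ['cancel_order', 'cancel_subscription']}
```

In practice the margin (and the scoring itself) comes from the NLU engine, but the shape of the flow is the same: one confident match means a direct answer, several close matches mean a clarifying question.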
Conversational AI platforms do offer ways to disambiguate. For example, conversation designers using IBM Watson can set up disambiguation dialogues, whereby the chatbot presents the top matching intents to the user, so they can choose the correct one. Other platforms, like LivePerson or Rasa, work in similar ways. However, this tends to be a long and daunting process for designers; it requires manually creating disambiguation dialogues for each node, without the help of data-driven or automated approaches.
As users pick their true intent from the provided list of options, the NLU improves and learns to propose better intents up front. But this improvement process is painful to go through without tools like HumanFirst.
Disambiguation With HumanFirst
As projects evolve, chatbots need to distinguish between similar queries by eliminating confusion, noise, and overlapping intents, and by adding granularity to their ontology. Hence the importance of a scalable disambiguation workflow.
HumanFirst allows you to run tests on your data to help uncover problems and optimize your NLU data so that you can easily identify which training examples belong to another intent in your corpus.
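One simple way to think about this kind of test (a rough sketch, not HumanFirst's actual algorithm) is to flag training phrases that look more similar to another intent's phrases than to their own. The example below uses plain token-overlap (Jaccard) similarity to keep it self-contained; a real NLU engine would use embeddings.

```python
def jaccard(a, b):
    """Token-overlap similarity between two phrases."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def flag_confused(corpus):
    """Flag phrases closer to another intent than to their own label.

    corpus: dict mapping intent name -> list of training phrases.
    Returns (phrase, own_intent, nearest_other_intent, score) tuples.
    """
    items = [(intent, p) for intent, phrases in corpus.items() for p in phrases]
    flagged = []
    for intent, phrase in items:
        # Best match among phrases labeled with a *different* intent.
        other_intent, other_score = max(
            ((i2, jaccard(phrase, q)) for i2, q in items if i2 != intent),
            key=lambda t: t[1],
        )
        # Best match among this phrase's own intent (excluding itself).
        own_score = max(
            (jaccard(phrase, q) for i2, q in items if i2 == intent and q != phrase),
            default=0.0,
        )
        if other_score > own_score:
            flagged.append((phrase, intent, other_intent, round(other_score, 2)))
    return flagged

# A phrase mislabeled under cancel_order gets flagged for relabeling.
corpus = {
    "cancel_order": ["cancel my order", "stop my order", "cancel my subscription please"],
    "cancel_subscription": ["cancel my subscription", "end my subscription"],
}
for phrase, own, other, score in flag_confused(corpus):
    print(f"{phrase!r}: labeled {own}, closer to {other} (score {score})")
```

The flagged phrases are exactly the ones worth surfacing in a relabeling view: the score tells you how strongly each example pulls toward the other intent.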
Once you've selected a problematic intent for re-labeling, you'll be shown a disambiguation tool. In this view, you can move training phrases between the two intents to disambiguate them:
As you can see, it's easy to move utterances between intents and create sub-intents when needed.
It’s also data-driven. You're given the option to toggle a minimum confusion bar to use as a threshold. Once you’ve selected an utterance to relabel, you're provided the intent that is confused with the one you’ve selected, with an accompanying match score. The task of disambiguating confused intents is painless, quick, and scientific.
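The threshold idea can be illustrated with a short sketch (the function name and scores are hypothetical, not HumanFirst's API): keep only the utterance/intent pairs whose match score clears the minimum-confusion bar, and review the most confused examples first.

```python
def confusion_queue(matches, min_confusion=0.6):
    """Build a review queue of confused utterances.

    matches: list of (utterance, confused_intent, score) tuples,
    where score is how strongly the utterance matches the *other* intent.
    """
    above = [m for m in matches if m[2] >= min_confusion]
    # Highest-scoring confusions first: these are the most urgent to relabel.
    return sorted(above, key=lambda m: m[2], reverse=True)

queue = confusion_queue([
    ("cancel my plan", "cancel_subscription", 0.82),
    ("where is my parcel", "track_order", 0.31),
    ("stop my payments", "cancel_subscription", 0.67),
])
# The low-scoring match falls below the bar; the rest are ranked by score.
```

Raising or lowering `min_confusion` is the same trade-off as the toggle bar: a lower threshold surfaces more candidates at the cost of more manual review.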
When your model contains 150+ intents with 1,000+ training phrases, manually creating disambiguation dialogs is no longer feasible without consuming valuable resources.
To learn more about HumanFirst's disambiguation, reach out to our growth team here!
HumanFirst is like Excel, for Natural Language Data.
A complete productivity suite to transform natural language into business insights and AI training data.