All articles in the Ultimate Guide to Cognigy NLU Training:
- Cognigy NLU Training: Intro
- Cognigy NLU Training: Improving Intent recognition
- Cognigy NLU Training: Resolving Intent conflicts
- Cognigy NLU Training: Reducing false positives
The Example Sentences you feed into your Intents are the key to a successful NLU model and must be carefully curated. In this article, we will look at the different ways of achieving Intent recognition and how each method influences model performance.
Improving Intent Recognition
Intent recognition can be achieved by different means:
- Machine Learning – Example Sentences
  - without Slot annotations
  - with Slot annotations
  - combined approach
- Cognigy Script – Intent Rules
Each of these methods impacts the model performance in a different way:
Machine Learning – Example Sentences
Your Example Sentences are the key to a successful NLU model and must be carefully curated. Their content should encompass the broad scope of the respective Intent; otherwise, you will see unexpected results. Take, for example, the “Order Food” and “Rate Food” Intents below:
Even though the Intents, as well as the model evaluation, are green, the “Order Food” Intent is practically an “About Burger” Intent and the “Rate Food” Intent an “About Pizza” one, which is not what we intended:
The solution here is to diversify the foods used in the Example Sentences and to use the same foods consistently in both Intents; a set along the lines of the sketch below is what we are aiming for. This also shows us that our current Example Sentences are not enough to properly teach the model in this case:
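For illustration, a diversified and consistent set could look like this (these sentences are our own examples; each Intent covers the same foods):

```
Order Food:
  I would like to order a pizza.
  Can I get two cheeseburgers, please?
  I want a salad delivered to my place.
  One sandwich to go, please.

Rate Food:
  The pizza was delicious!
  How do I rate the burger I had yesterday?
  Unfortunately, the salad tasted stale.
  I want to leave a review for my sandwich.
```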
You can get more detailed feedback by hovering over the feedback circles; let’s have a look at some of the red ones:
After some retraining and adding several Example Sentences, we were able to get the model back to green:
Even though we now have some overlap between the Intents, the confidence scores for our previous inputs are higher than before, and all of them are mapped correctly:
You can see that the Example Sentences in the working Intent model are varied in both expression and length, which usually provides the best results. This is true for Example Sentences both with and without Slot annotations, even though the two have different effects on the NLU model. The differences between them are covered in the next sections.
Adding Example Sentences without Slot annotations
This is the simplest method of adding phrases to an Intent and lets the underlying language models extract the most information from them. You can see an example of the “Order Food” Intent using this method below:
As you can see, the model also understands foods that we have not provided in our Example Sentences, albeit with a lower confidence score.
While we are now able to correctly identify that the user wants to order food, we do not yet recognize what type of food should be delivered. One possibility would be to create child Intents for every type of food that we offer. For this, we need to modify our “Order Food” Intent as well; always make sure that the “Inherit Example Sentences from Child Intents” setting is turned on:
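Schematically, such an Intent hierarchy could look like this (the child Intent names are our examples):

```
Order Food        ← “Inherit Example Sentences from Child Intents” enabled
├── Order Pizza
├── Order Burger
├── Order Salad
└── Order Sandwich
```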
Now, this does not really scale well and can be a huge amount of effort, especially when we want to add subcategories of burgers, pizzas, sandwiches, etc., not to mention the difficulty of recognizing user inputs like “one salami pizza and two cheeseburgers”.
Adding Example Sentences with Slot Annotations
This brings us to utilizing Slots within the Example Sentences of the “Order Food” Intent, so that we can distinguish the general “Order Food” Intent from the specific food Slots. First, let us create a new Food Lexicon as follows:
As you can see, we have also added synonyms that include both alternative words and the relevant plural forms. Lexicon matching is exact, ignoring only upper- and lower-case differences. This means you will also need to add common misspellings here, should you encounter them!
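As a sketch of the data involved (in practice you maintain these entries in the Lexicon editor; the structure below is purely illustrative), the keyphrases and their synonyms could look like this:

```typescript
// Illustrative sketch only: each keyphrase is tagged with the "Food"
// Slot and lists its synonyms, including plural forms.
const foodLexicon: Record<string, string[]> = {
  pizza:    ["pizzas", "margherita"],
  burger:   ["burgers", "cheeseburger", "cheeseburgers", "hamburger"],
  salad:    ["salads", "caesar salad"],
  sandwich: ["sandwiches", "sub", "subs"],
};
```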
We can now use the Lexicon to annotate our “Order Food” Intent, which looks like this afterwards:
As you can see, we are now able to recognize both the Intent and the desired food Slot. However, any phrases not included in the Lexicon will not be recognized, which also means that the Intent is not recognized, as you can see here:
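Once a Slot is found, it is exposed to your Flow via CognigyScript. A minimal sketch, assuming the Slot is named “Food” (inspect input.slots in the Interaction Panel for the exact result shape):

```typescript
// CognigyScript sketch: check the NLU result for the matched Intent
// and the recognized Food Slot (names and shape are our assumptions).
if (input.intent === "OrderFood" && input.slots.Food) {
  // e.g. in a Say Node: "One {{input.slots.Food[0].keyphrase}}, coming up!"
}
```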
Combined Approach
Note: For optimal Intent matching, annotate only about 50% of your Example Sentences with Slots. This balanced approach tells the Cognigy NLU that the Slot is relevant for the Intent, while also indicating that finding the Slot is not mandatory for a successful Intent match.
If you want to reintroduce some of the previous Intent recognition fuzziness into the model, duplicate the annotated sentences without the annotation:
As you can see, the Intent is now found even without a food Slot being recognized.
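Schematically, such a duplicated pair looks like this (the bracket notation is ours; in the Intent editor you annotate by selecting the word):

```
Annotated:      I would like to order a [pizza → Food Slot]
Not annotated:  I would like to order a pizza
```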
Please keep in mind that the “Implicit Slot Parsing” setting must be disabled for this combined model of annotated and non-annotated Example Sentences to work:
You can also add a Fuzzy Search Node as a backup for your Slot recognition, for the case where the Intent was found but no Slot was recognized:
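A minimal sketch of such a fallback condition in an If Node, assuming the Intent is named “OrderFood” and the Slot “Food”:

```typescript
// Route to the Fuzzy Search Node when the Intent matched
// but no Food Slot was found ("OrderFood" and "Food" are example names).
input.intent === "OrderFood" && !input.slots.Food
```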
Cognigy Script – Intent Rules
The machine learning approach with Example Sentences should always be the preferred choice for Intent recognition wherever possible. Intent Rules are processed before the machine learning model and always take precedence over it.
Intent Rules also do not provide Intent Analyzer feedback, so there is no easy way to spot Intent conflicts! Avoid using Intent Rules as much as possible!
However, there are certain applications where only Intent Rules can achieve the desired result. One of them is the recognition of technical data payloads, e.g., for triggering actions from the frontend/webchat integration:
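For example, assuming the frontend sends a data-only message carrying an action property (the property and value names below are our choice), an Intent Rule condition could look like this:

```typescript
// Intent Rule condition sketch: match this Intent whenever the webchat
// sends a technical payload such as { "action": "trackOrder" }.
!!input.data && input.data.action === "trackOrder"
```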
Now that we know how to enhance NLU performance by improving Intent recognition, let’s dive into the different methods for resolving Intent conflicts.