In this episode of Cognigy Sessions, we’ll do an in-depth exploration of applied artificial intelligence and delve deep into Cognigy.AI’s NLU capabilities. We’ll show you how exactly Cognigy NLU processes Natural Language Input and handles Slots, Lexicons, and Keyphrases. We’ll explain how to manage Intents at scale and share best practices for creating exceptional end-user experiences with the minimum development effort. Learn how to tackle the toughest use cases with advanced NLU features.
Join the Cognigy.AI User Community
Welcome to Cognigy Sessions, our Techinar series from Conversational AI experts for experts. In every episode, we do a deep dive into another Cognigy topic, and today's episode about Cognigy NLU will definitely live up to that expectation. Almost every Cognigy.AI user has worked with the NLU, as it's a core piece of our platform, but only a few conversation designers have ever had a chance to see the full scope of its features and capabilities. So be prepared for an in-depth exploration of applied artificial intelligence. In this session, we will show you how exactly the Cognigy NLU processes natural language input, how to tackle the toughest use cases with the advanced NLU features, and finally, best practices for creating superior end-user experiences with minimal development effort. This recording is also available in our Help Center. Please follow the link below to access additional resources, and go to our community for questions and discussions. Also, please feel invited to start a free trial and see Cognigy NLU in action if you're not already a Cognigy.AI user. And with that, let's start today's session.
Hi everyone, my name is Thijs and today I'll be talking about Cognigy NLU.
What is Cognigy NLU?
Cognigy NLU is a state-of-the-art Natural Language Understanding engine with industry-leading performance. It's part of the Cognigy.AI platform. It features custom-built models for over 20 languages with advanced Slot detection and Intent mapping capabilities. Furthermore, it supports over 100 languages using a generic model that is also built in. For an overview of which specific languages are supported, please refer to the corresponding page in our documentation. The documentation also has a dedicated section on this exact topic, covering Natural Language Understanding with all of its sub-components, and our Help Center offers Getting Started articles.
Please also note that this is not an introductory course; we will be looking at some more advanced concepts today. So feel welcome to go to our training website, where you'll find material on the basics of Cognigy. We have an Essentials Training, and we also have a Flow Developer Training where you learn how to use the NLU in conjunction with other concepts like conversational Flows.
If you want to follow along today, you're very welcome to do so. Please go to signup.cognigy.ai and sign up. Let's get started.
Cognigy exposes what is called a Natural Language Understanding pipeline, and I want to start by looking at this pipeline and explaining its components. We will assume that a user calls in using the Cognigy Voice Gateway, because this means that a Speech-to-Text transcription needs to take place. The user calls in and produces a voice signal. The Speech-to-Text transcription, also called ASR, is the first step in this pipeline. This is part of the Voice Gateway, and it integrates with a wide range of technologies that can be used for this. The result of the Speech-to-Text transcription and all of its metadata, so the different interpretations, is then sent to an optional pre-NLU Transformer. The pre-NLU Transformer is a hook, a script that you can use to, for example, clean up data. This is then sent to the actual NLU Connector. We call it the NLU Connector because in Cognigy you can use Cognigy's NLU, the topic of this webinar, but you're free to use any other NLU technology that you like. We have pre-built Connectors for NLU engines like Dialogflow, IBM Watson Assistant, and Microsoft LUIS, and you can also build your own NLU Connector. After the NLU pass has taken place, we have a post-NLU Transformer, which is similar to the pre-NLU Transformer, except that in this instance you already have the NLU results that come out of the NLU Connector's processing. And after that, finally, the result of this entire pipeline is sent to the Cognigy Flow, which in itself can also be seen as an extension of the NLU pipeline. It is, for example, possible to re-trigger Cognigy's NLU from within a Cognigy Flow, or to reach out to third-party applications or third-party NLU engines.
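As a rough illustration of what a pre-NLU Transformer hook could do, here is a plain-JavaScript sketch of an ASR cleanup step. Note that the function name and payload shape are made up for this example and are not Cognigy's actual Transformer API; this only mirrors the idea of "clean up the transcription before it reaches the NLU Connector".

```javascript
// Hypothetical sketch of a pre-NLU cleanup step (not the real
// Cognigy Transformer API). It removes spoken filler words from
// the ASR output and normalizes whitespace before the text is
// handed to the NLU Connector.
function cleanTranscript(asrResult) {
  const text = asrResult.text
    .replace(/\b(uh|uhm|erm)\b/gi, "") // drop common spoken fillers
    .replace(/\s+/g, " ")              // collapse repeated whitespace
    .trim();
  return { ...asrResult, text };
}
```

A step like this runs once per user input, so keeping it small and side-effect-free is the safe design choice.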
So for this webinar today, we will be focusing on Cognigy's NLU Connector, and we will have a very detailed look at how it interacts with the Flow, because this is a very powerful feature in Cognigy. It doesn't stop at the NLU, and it doesn't end with the Intent mapping. That's actually where the fun starts, if you will! Because we can use the Flow to create custom logic, to refine our results, to ask questions, and to arrive at a result that feels very natural.
The first step in Cognigy's Natural Language Understanding is a process called Slot mapping. I will assume that you've already had a look at it, as this is a concept covered in the Essentials Training, so I'm not going to go into a lot of detail on how it works and where to find it in Cognigy. I do want to point out that for those 20 languages I mentioned before, the custom models, we have a lot of built-in Slots that are detected automatically. So take the following sentence: "The day before yesterday, I went on a 30 kilometer trip with 4 friends and it took almost 2 hours since it was minus 2 degrees outside". Cognigy is able to automatically map the following Slots, and I want to show you how this works in Cognigy. So we switch over to the editor, and I copy-paste the exact sentence into the Interaction Panel. Please note, I don't have any Intents defined for this. If I execute this, run the Flow, and go to the Info tab, you'll see that Cognigy was able to automatically extract a lot of information already. We see that it found Slots for a date; it realizes that the day before yesterday is Friday because I'm recording this on a Sunday. It also detected the distance, a number, and a duration, and note that it can even handle units, so something like "one and a half hours" would also work. It even found a temperature, in this case a negative number.
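To make the idea of Slot mapping concrete, here is a toy JavaScript extractor for just the temperature Slot from that sentence. This is purely illustrative: Cognigy's built-in Slot models cover dates, distances, durations, numbers with units, and much more, across the 20+ custom-model languages, and they are far more robust than a single regular expression.

```javascript
// Toy illustration of what a temperature Slot extractor conceptually
// does. Cognigy's built-in Slot detection is far more sophisticated.
function extractTemperature(text) {
  const m = text.match(/(minus\s+)?(\d+(?:\.\d+)?)\s*degrees?/i);
  if (!m) return null;
  const value = parseFloat(m[2]) * (m[1] ? -1 : 1); // "minus" flips the sign
  return { slot: "temperature", value, unit: "degrees" };
}
```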
After Slot mapping has taken place, the next step is so-called Intent mapping. Intent mapping is where machine learning really kicks in, and Cognigy has a unique capability called Hierarchical Intent Transformers that allows the NLU engine to do the Intent mapping in different layers, or different steps, if you will. This is an optional feature, it is not required, but you can definitely make use of it, and we make it very easy to do so in Cognigy. It's all visual.
To understand this concept, imagine that you have a number of high-level topics, a number of high-level Intents, and each of them can have so-called Child Intents. You can go up to three layers per Flow, and this means you can create incredibly accurate Intent mapping. Cognigy will, in one pass, first compare against the Parent Intents, then the Child Intents, and then the Child Intents' own Child Intents. So if we switch over to the Cognigy editor and upload a set of Intents that I prepared, we'll have a chance to see this in action.
So I'm now uploading a set of Intents in CSV format. It is not necessary to import everything like this; you can, of course, also create new Intents directly in the Editor. I'm waiting for the task to complete, and I now see that we have a very nice hierarchy of Intents available. The next step is to train the model so that we can play with it. So I'll go to the top right here and hit Build Model. This will take a little while because it builds an extensive model based on all the training data available in this Intent hierarchy. In the meantime, we can already have a look at some of the training data. If I start with one of the Parent Intents, in this case "order", and click on it, you see that the order Intent itself has its own training data, pointing to very generic order questions. It then has Child Intents that are a bit more specific: someone could ask for an Order Status, someone could ask to cancel an order, maybe someone wants to change an order or place a new order, and maybe someone even wants to change the order delivery address.
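The resulting Parent/Child structure can be pictured as a small tree. As a sketch, assuming a simplified flat row format (this is not Cognigy's actual CSV schema, just an illustration of the hierarchy idea), building that tree might look like:

```javascript
// Build an Intent hierarchy (Parent -> Children) from flat rows.
// The { name, parent } row shape is illustrative only.
function buildHierarchy(rows) {
  const byName = new Map();
  const roots = [];
  for (const { name } of rows) byName.set(name, { name, children: [] });
  for (const { name, parent } of rows) {
    const node = byName.get(name);
    if (parent && byName.has(parent)) {
      byName.get(parent).children.push(node); // attach Child to its Parent
    } else {
      roots.push(node); // top-level (Parent) Intent
    }
  }
  return roots;
}
```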
Please also note that Cognigy gives very nice feedback in the form of these traffic lights here; this is what we call the Intent Analyzer. What Cognigy essentially does in the background is a cross-validation, and this gives us feedback on how well these particular sentences work within this model. If I hover over one of these sentences, you see that we get feedback on what the actual score for this particular sentence would be. Cognigy also lets us know to what extent there's overlap with other Intents. The traffic light system, the Intent Analyzer, works on different levels: it makes a comparison between the individual training sentences, and it also makes a comparison between the Intents in the hierarchy. And then finally, here at the top right, it gives us an overall model score. It says this entire model has a good accuracy score of 0.88. That does not mean there's nothing we can improve. For example, you see that there is a red light here, and this sentence has a pretty low F-score; that's the first problem, 0.46 is relatively low. But even more importantly, Cognigy is letting us know that this Intent overlaps with product comparison, product price, and product find, so this is definitely something we would want to fix.
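For background, an F-score like the 0.46 we just saw is conventionally the harmonic mean of precision and recall. Here is the standard F1 formula as a small sketch. How exactly the Intent Analyzer computes its scores internally is not specified here, so treat this as general background rather than as Cognigy's exact metric.

```javascript
// Standard F1 score: harmonic mean of precision and recall,
// computed from cross-validation counts.
function f1Score(truePos, falsePos, falseNeg) {
  const precision = truePos / (truePos + falsePos);
  const recall = truePos / (truePos + falseNeg);
  if (precision + recall === 0) return 0; // avoid division by zero
  return (2 * precision * recall) / (precision + recall);
}
```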
For now, let's scroll through this list one more time so that we have an understanding of the actual topics. You see that this is all related to service and ordering, so new business and providing service. We have order, products, discounts, account, and customer data, and then finally, we have payment available here.
Let's try this out. I will turn on the expert mode here in my Interaction Panel, and I'll enter some sentences so you get a sense of the scoring in this model. Let's start by typing something like "what is the status of mz order?"; you see, I have a German keyboard and I mistyped the Y and Z, but Cognigy NLU is robust enough to recognize that this is about the Order Status regardless. What's interesting here is that we're at 1.1, so it managed to find not only the Parent Intent but also a Child Intent. And the cool thing is that we have access to this hierarchy in the Intent scoring. If we go to the Info object and look under NLU "Intent mapper results", we see a so-called Intent path, where we see that it first found Order and then Order Status. We even get the Intent IDs along the path, and the Flow reference IDs in case it switches Flow.
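As a sketch, the hierarchical result could be represented like this in JavaScript. The field names below only approximate what we saw in the Info tab and the IDs are invented; this is not an exact API contract:

```javascript
// Illustrative shape of hierarchical Intent mapper results.
// Field names approximate the Info tab; IDs are hypothetical.
const intentMapperResults = {
  finalIntentScore: 1.1,
  intentPath: ["Order", "Order Status"],          // Parent first, then Child
  intentIdPath: ["id-order", "id-order-status"],  // hypothetical Intent IDs
};

// The deepest matched Intent is the last entry on the path.
function deepestIntent(results) {
  return results.intentPath[results.intentPath.length - 1];
}
```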
Now I'm going to ask a more generic question, so I say "I have a question about my order". What we see now is that the Order Intent is triggered, and that is because I gave my Order Intent its own example sentences. This is a more generic question, and I deliberately gave my Order Intent its own example sentences because I want to use it as a fallback. I don't necessarily need to do this, though. I can also create Child Intents and then, on the Parent level, say "inherit the example sentences from the Child Intents". So this is definitely a possibility, something you should be aware of: you don't necessarily need to provide all of the Parent Intents with their own training data, but you can in case you want to use them as a more generic fallback mechanism.
Let's now have a look at some more advanced Intent settings. The first concept that I want to talk about is a type of Intent called Rule Intents. It does what its name suggests: it triggers an Intent based on a rule, not on machine learning matching. The second concept is so-called Conditions. Sometimes you want an Intent that's been found using machine learning to be triggered, but only if a certain condition has been met. This can, of course, also apply to Rule Intents. You can see a Condition as a bit of a gatekeeper: it will do the NLU, but it will not trigger the Intent if the condition is not met. And then thirdly, I want to talk briefly about States. These allow you to enable or disable Intents based on what we call the conversation state; this is a concept that's fully integrated in Cognigy, and it allows you to white- or blacklist certain Intents.
Let's switch over to the Editor so we can have a look for ourselves. This is the Intent set that we were just working with. What I will do now is create a new Intent, which I will name RI, for Rule Intent, followed by Swearing. Now, instead of providing good example sentences like we did for all of these machine-learning-based Intents, I want to use this field here to provide a simple rule. The rule that I want to provide is the following: I just want to see if a Slot of the type swearword was detected. If that's the case, I want this Intent to trigger. So I'll save this. This is an absolute rule: it either triggers or it doesn't, and when it triggers, the Intent is expected to have a 100% score. Before I build the model, I also want to add a Condition, which, as mentioned, can be applied to machine learning Intents as well. So in the case of the Order Status Intent that we've been working with so far, let's go to Advanced and add a Condition here. We're basically saying: OK, you're free to trigger this Intent, but only if the word "order" is included. Let's assume that we want to use this as some sort of gatekeeper: we want to use the NLU, the machine learning, but we do want to manually check whether the word "order" has been used in the sentence, and only then is this Intent allowed to be triggered. So I'll save this and build the model, this time using the quick build functionality.
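The logic of these two mechanisms can be mirrored in plain JavaScript. Note that the actual rules and Conditions in Cognigy are expressions you enter in the UI; this sketch only reproduces their behavior, not their syntax:

```javascript
// Sketch of a Rule Intent: it fires purely on a rule, here
// "a Slot of type swearword was detected", with a fixed score of 1.
function ruleIntentFires(slots) {
  return Boolean(slots.swearword);
}

// Sketch of a Condition acting as a gatekeeper on a machine
// learning Intent: the Intent may only trigger if it is met.
function conditionAllows(text) {
  return text.toLowerCase().includes("order");
}
```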
So what we added is a Condition on the Order Status Intent, and we added the brand new Intent called RI Swearing. I'll open my Interaction Panel while I wait for the model to be trained, make sure that my expert mode is still on, and go back to the Chat. The first thing that I want to do is use a swear word to see if the Rule Intent is triggered. So let's say I type something like "bollocks", in order to limit the profanity here in this Techinar. You see that indeed, because the Slot swearword was found based on the Keyphrase "bollocks", the RI Swearing Intent is triggered as expected, and you can see that the Intent score is 1, because this is a rule. It's an absolute value.
Let's now also have a look at the Order Status Intent, where we added a Condition that the utterance the user entered should include the word "order". To test this, we'll start by including the word "order", so we say something like "what is the status of my order?". You see that the Order Status Intent is triggered as expected. Let's now add a variation, something like "why is it taking so incredibly long?", which also refers to an order; we're indirectly asking for an Order Status. If I enter this, you see that we would expect the Order Status Intent to be triggered, and it would be triggered if it weren't for the Condition. What happens instead is that it reverts to the Parent Intent. The Condition is not fulfilled, so instead of finding this Intent, it goes for the next best option, in this case the Parent Intent.
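The fallback behavior we just observed can be sketched as follows, assuming an illustrative list of candidate Intents ordered from most to least specific (the candidate shape and the `resolveIntent` helper are made up for this example):

```javascript
// Sketch of the gatekeeper fallback: walk the candidates best-first
// and return the first one whose Condition (if any) is satisfied.
function resolveIntent(text, candidates) {
  for (const intent of candidates) {
    if (!intent.condition || intent.condition(text)) return intent.name;
  }
  return null;
}
```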
Let's now have a look at States. States are very similar to the concept of a Condition, but they have a separate tab here. If we go to States, we can create a new one, so let's create one that we call ordering. In the ordering State we now have the option to either black- or whitelist Intents. We can also do both, but it's common to either whitelist particular Intents or blacklist particular Intents. If we blacklist particular Intents, we block those Intents from being triggered. This can come in quite handy: for example, if an order Intent was found and we moved the bot into an order process, we might want to blacklist the order Intent itself, because we're already handling it and don't want to re-trigger it. We can control this on a very granular level: we can do it for a broad set of Intents, and we can do it for specific Child Intents as well. If we now save this and go to the Chart... there is no Chart yet, so I'll very quickly create one just to show you how this works, and we'll go a bit deeper into how the Intents integrate with the Chart in just a minute. I'll create a case here for the Order Intent, and I'll use the exact same sentence that I just used, so that I'm sure the Order Intent is triggered. Now, if the Order Intent is triggered, I will use a Node that we call Set State, and the Set State Node should set the Ordering State. And save. After that, I'll output "I set the state to ordering", like so. As long as this State is active, the Order Intent is not allowed to be found anymore, because we blacklisted the Intent.
So I'll now enter the sentence here. The Intent is found, and the order case is triggered in the Lookup Node. We get very nice feedback in our Interaction Panel where it says "new state: ordering". Please note that under Info we have another tab, called State, where this can be found as well; you see that ordering is now highlighted, so we have two locations where this information is shown. And if I now enter the exact same sentence again, you see that the sentence that just triggered this Intent with a fairly high score is not triggered anymore, simply because the State is set to ordering and we told Cognigy's NLU, under the States tab, that when the ordering State is set, it should blacklist the Order Intent.
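Conceptually, a State acts as a filter over the Intent model before mapping takes place. A minimal sketch, with a made-up State definition format (not Cognigy's internal representation):

```javascript
// Illustrative State definitions: each State can blacklist and/or
// whitelist Intents. The data format here is invented for the sketch.
const states = {
  ordering: { blacklist: ["Order"], whitelist: [] },
};

// Filter the available Intents according to the active State.
function allowedIntents(allIntents, stateName) {
  const state = states[stateName];
  if (!state) return allIntents; // no active State: everything allowed
  let result = allIntents.filter((i) => !state.blacklist.includes(i));
  if (state.whitelist.length) {
    result = result.filter((i) => state.whitelist.includes(i));
  }
  return result;
}
```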
There's one thing to keep in mind: what effectively happens is that if you create a State, and thereby white- or blacklist Intents, you essentially create another version of this entire model. This is fine, you can perfectly well use this, but it has implications for the size of the model. So if you plan to create an incredibly big model with many, many different States, you should expect your Snapshot size, for example, to increase by a lot.
Before we continue, I want to mention two more topics that you'll find under the advanced Intent settings: the Confirmation Sentence and Intent Disambiguation. What do they do? Well, the Confirmation Sentence automatically asks the user for confirmation. This is patented technology in Cognigy NLU, and it allows bots to effectively learn by themselves: they can automatically, independently ask users for confirmation, and if enough users confirm that a certain utterance points to a certain Intent, the bot has a way of learning this by itself.
The second concept is Intent Disambiguation, which essentially allows us to add some metadata to the Intent. Let's have a quick look in the tool to get a better understanding of how this works. I have a much simpler Intent model here with a Customer Service Intent and an Order Food Intent. The Order Food Intent has an Intent Disambiguation Sentence where I say "order some food", and it also has a Confirmation Sentence where I say "Did you mean to order food?". This is the question that the bot will ask in case it is not sure. So what does it mean when we say the bot is not sure? Well, this refers to the settings and certain thresholds. On a Flow level, we can configure Intent thresholds: every Intent mapping score that falls in between these two thresholds means the bot is not sure, and it will trigger the reconfirmation, the Confirmation Sentence, asking the user for confirmation. Let's, therefore, put the slider up a little bit, because we want to trigger the reconfirmation.
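The two-threshold logic can be sketched like this. The numeric threshold values below are made up for the example; in Cognigy you set them with the sliders in the Flow settings:

```javascript
// Sketch of the two-threshold decision: scores below the lower
// threshold find no Intent, scores in between trigger the
// Confirmation Sentence, scores above fire the Intent directly.
// Threshold values are illustrative, not Cognigy defaults.
function mappingDecision(score, lower = 0.4, upper = 0.7) {
  if (score < lower) return "no-intent";
  if (score < upper) return "ask-confirmation";
  return "trigger-intent";
}
```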
So let's say something like "I could really go for some food now". We enter the sentence, and you see that there's an Intent reconfirmation in progress. The bot thinks this has to do with ordering food, but it checks with us; it says, hey, did you mean to order food? If I now confirm and say "sure" or "yes", the Order Food Intent is triggered, and it now triggers with 100 percent, because we as the end user actually confirmed that this is the Order Food Intent, so it doesn't need to guess anymore. You see that it also gives an answer, and the answer is actually dynamic. It says "I understand that you would like to order some food", but this is not entirely hardcoded: the "I understand that you would like to" part is hardcoded, but the "order some food" part comes from the Intent Disambiguation. That's the Disambiguation Sentence that we configured for this particular Intent. This allows us to react to users in a slightly more organic, conversational way. And we can, for example, also use Intent Disambiguation Sentences to automatically generate Quick Replies or other graphical outputs.
Before we look at Flow orchestration, it's important to understand how Flow execution works. We learned that there is the NLU, and it is important to know that the NLU has the option to generate Default Replies, so default output. Every Intent can have a standard answer, which is what you learned in the Essentials Training. If there's a Default Reply available, Cognigy will output it: the NLU provides the answer, and there's no reason to trigger the Chart. If there's no Default Reply available, Cognigy will execute the Chart, so the rest of the Flow, and see if there's any output there. You can also combine this: it is possible to give out the Default Reply, for example as part of a large FAQ, and after that trigger the rest of the Chart, the rest of the Flow, nonetheless.
There's also a concept that we call Entry Points. By default, every new utterance starts at the top of the Flow and executes the rest of the Flow Chart. If a Question Node asks a question, the Entry Point moves to this Question Node, because the bot expects the user to answer the question. If no question is asked, the Entry Point moves back to the start of the Flow Chart; that's where the green triangle in the editor can be found. This system is layered, and it's important to understand this. It means that the bot can ask multiple questions, and each of these questions has a "Forgot Question" threshold that you can tweak. This matters because even when the user does not directly answer a question, the bot can keep in mind that the question was asked, for example for a specific Slot or a number, so the user can still answer it on a second or third input.
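A simplified sketch of this layered behavior, modeling the "Forgot Question" threshold as a counter of unrelated inputs. The real mechanism is configurable per Question Node; this is only a conceptual model with invented names:

```javascript
// Conceptual model of Entry Point movement. An open Question keeps
// the Entry Point at the Question Node until it is answered, or is
// "forgotten" after too many unrelated inputs.
function entryPointAfterInput(openQuestion, answered) {
  if (!openQuestion) return "flow-start"; // nothing pending
  if (answered) return "flow-start";      // question resolved, back to top
  openQuestion.missed += 1;               // another unrelated input
  return openQuestion.missed > openQuestion.forgetAfter
    ? "flow-start"                        // bot "forgot" the question
    : "question-node";                    // still waiting at the Question Node
}
```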
If we now look at Flow execution, so switching between different Flows, we see that there are three ways of doing this. First, there is Execute Flow, where you run through a Flow once and then the Entry Point, the green triangle, goes back to the original Flow. Then there is Go To Flow, which permanently switches to another Flow: the Entry Point moves to the start of that new Flow and will not go back to the original Flow automatically unless we explicitly tell the bot to do so. And then there's the Attached Flow, which means that you merge the NLU of another Flow with the NLU of this particular Flow. By doing so, the Intents of the attached Flow can be triggered, and if such an Intent is triggered, it will switch to the Attached Flow and either give out a Default Reply, which is what this is commonly used for, or actually execute the Flow Chart of that attached Flow.
So let's now have a look at how it all comes together. We've learned about Intent hierarchies, and we've learned about Flow orchestration: executing Flows, going to Flows, different Entry Points. We also learned that it's possible to attach NLU models to each other using Attached Flows. How does this come together? Well, take the scenario that we looked at before, where a user asks for their Order Status. We want to map the Intent in the original Flow because we have a very nice, detailed, hierarchical Intent model, but we don't want to do the fulfillment of that Intent, so the actual answering, or the questions that the bot has to ask, in that same main Flow, because it would become too much for all of the Intents that we have. What we want to do instead is do the NLU in an Orchestrator main Flow, find the Intent, so find that it has to do with Order, but fulfill it in a dedicated Order Flow. We essentially hand over the Intent mapper results to, in this case, the Order Flow. What's important, though, is that the next input starts all the way at the top again: because we execute the Order Flow, the Entry Point goes all the way back to the Start Node of the Orchestrator Flow. Let's look at another example. In this case, we ask "did I already pay my invoice?". What will happen is: the NLU will find that this has to do with payment, hand it off to a dedicated Payment Flow, find a Payment Status Intent there, fulfill it, and then go all the way back to the Start Node of the Orchestrator Flow. Sometimes we have something like Small Talk, where we want the Intents to be available wherever we are in the conversation; in that case we can attach the Small Talk model to the Orchestrator Flow. What will then happen is that it implicitly switches to the Small Talk Flow because it triggers an Intent there.
In the same way, though, the Entry Point will afterwards be back at the start of the original Orchestrator Flow.
Let's have a quick look at this in Cognigy itself. We have a number of Flows here; I re-created this example in Cognigy. We have the Orchestrator Flow, which has the Order, Product, and Customer Data cases; I did not include all of them. What happens if I enter the sentence "What is the status of my order?" is that we expect it to do the Intent mapping and therefore find the Order Intent. It will probably find more detailed information as well, like the Order Status Intent, but it will not act on that here; it will just hand things off to the Order Flow. So we enter this sentence, and indeed it hands off to the Order Flow. This is where the fulfillment comes from. Let's try that again with the same input, but in this case we switch to the Order Flow first. If I now enter the same sentence here, it will not find the Intent, because the Order Flow does not have the NLU for this; it does not have the Intents with all the training data. What I can do instead is start in the Orchestrator Flow and enter the sentence again. Then the system works: we get the Intent mapper results, it's all available, we have the hierarchy, Order, Order Status, and this is now handed over to the Order Flow.
To make this even more exciting, we can attach very specific FAQs to very specific Flows. In order to make this work and to have a more interesting scenario, I'm going to do a very quick recap of Slot Fillers. This is part of the Essentials Training; I'm just going to summarize it for you. Sometimes, in a fulfillment Flow, a transactional Flow, you need to ask the user for some information. For example: what's the product that you want? To which the user can say: "I would like some cupcakes". Then the next question is: how many? "I think that five is enough". And then finally, you might want to ask for a color: "In blue, please". A user can, of course, over-answer. A user can come in and say: hey, I want to buy three cupcakes, please. In that case, we don't need to ask for the product and amount anymore, because these are answered already. The user over-answered, and we only need to ask for the remaining piece of information, namely the color: "In blue, please".
This process of handling these types of user inputs is what we call Slot Fillers. This is part of the NLU. I am not going to go too deep into how it works; it is part of the Essentials Training, and I just wanted to quickly refresh your memory because we're going to use it in this final example. So if we go back to our project, we see that we're now in the dedicated Product Flow instead of the Orchestrator Flow. From the Orchestrator Flow we jump to the Product Flow whenever a product-related Intent is found. In the Product Flow, for the Product Find Intent, you see that we have a number of questions, and we indeed use Slot Fillers, so the user can under- and over-answer. We have Slot Fillers under the NLU for the amount, product, and color.
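The over-answering behavior can be sketched as a simple merge of the Slots found in each input against the list of required Slots. This is illustrative only; Cognigy's Slot Fillers also handle the question prompts, validation, and the Entry Point logic around them:

```javascript
// Sketch of Slot Filler merging: each user input may fill several of
// the required Slots at once (over-answering); only the still-missing
// Slots need to be asked for next.
function fillSlots(required, filled, newSlots) {
  const updated = { ...filled, ...newSlots };
  const missing = required.filter((s) => updated[s] === undefined);
  return { updated, missing };
}
```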
So what I will do is start in the Orchestrator Flow and trigger an Intent there that should land here in the Product Flow and then trigger the Product Find Intent. I say "I'm looking for two macarons" and enter this in the Orchestrator Flow. The Product Find Intent is found, and the Parent Product Intent is found as well; that's why we're now in the Product Flow, and it asks us: OK, two macarons, all good, what's the color? Before I answer this, I am going to ask something else. I want to ask: does it have a lot of calories? This is not something that's handled here. But you see that I attached another Flow, called Product FAQ, to this Flow, and it is able to handle FAQs that are related to this order process. It can now automatically answer: a macaron has 275 calories. Which is really cool, because this is like a contextual FAQ. What we also see is the concept of an Entry Point: we see the green triangle, so Cognigy is aware that the color question was asked and is still waiting for an answer. If I now say "Ok, well, I prefer blue", the question is answered and the process continues, so we have a blue macaron that we can buy. This is an incredibly powerful concept. What we just did is build a very robust, very interesting hierarchical Intent model that we execute on an Orchestrator level. We delegate to a dedicated Product Flow where we handle the fulfillment. This Flow then has a contextual FAQ attached using the Attached Flow mechanism. The Product FAQ is only available in this Flow, because this is where it makes sense, and the Flow is therefore able to use an advanced concept like Slot Fillers while still being able to break out and answer related questions. And the fact that we still have a very straight, linear question Flow makes it really easy to work with. It's a contextual FAQ that's available, but it does not pollute our Flow with unnecessary information.
Thank you very much for watching our session on Cognigy NLU. Please don't forget to visit our Cognigy.AI Help Center under support.cognigy.com for additional resources and information and a link to our community where you can ask questions and start a discussion with your peers. Thank you very much for watching. See you next time.