In this episode of Cognigy Sessions we will dive into the analytics capabilities of Cognigy.AI and explore its OData interface. You will learn how to build an OData query and how to use it to stream analytics records from a Cognigy.AI Virtual Agent. Moreover, you’ll learn how to level up a Flow with custom analytics and how to build a Power BI Dashboard for real-time performance monitoring.
Join the Cognigy.AI User Community
Welcome to Cognigy Sessions, our Techinar series from Conversational AI experts for experts. In every episode, we do a deep dive into another Cognigy.AI topic. We go much deeper than any marketing collateral, and we will empower you to build your own applications based on the demo, code examples, and best practices in the video. This session is about the external analytics capabilities of Cognigy.AI via the OData Endpoint. We will show you: how to build an OData query and use it to extract data from a Virtual Agent, how to feed data into a data analysis tool such as Excel or Power BI, and finally, how to build a Power BI analytics dashboard for real-time performance monitoring. This recording is also available in our Help Center. Please follow the link below to access additional resources, go to our community for questions and discussions, or start a free trial if you're not already a Cognigy.AI user. And with that, let's start today's session.
Hi, my name is Matt from Cognigy, and in this video, we're going to be taking a look at Cognigy Analytics and OData. We're going to start by having a look at what analytics options are available with Cognigy. And we're going to move through to building an OData query, to query data out of the Cognigy database. And finally, we're going to use that data to build a small dashboard in Power BI to visualize the analytics. So let's get started.
Now in Cognigy, we have multiple ways of viewing analytics. Firstly, we have a simple in-tool analytics dashboard. This is really good for viewing some key statistics about the performance of our bots, such as the number of sessions, the number of users, and the number of messages that have occurred over the last short period of time. Secondly, we have a native connection with the analytics suppliers Dashbot and Chatbase, and this enables connections from a Cognigy channel Endpoint directly to one of these services. This is a nice native integration where you can just add your credentials for these services and it's ready to go. Thirdly, and finally, we have our OData analytics Endpoint, and this is going to be the focus of this video.
So the OData analytics Endpoint is used to extract raw data from Cognigy, which you can use to build your own customized dashboards to view the performance of your bot and its interactions with users. As supplementary information for this video, we also provide our documentation, which is linked in the video description, and we have our Help Center where you can reach out to us and ask for advice and tips on building your own analytics dashboards.
So firstly, what is OData? OData is a protocol for retrieving information via API. It provides a uniform way to access datasets and is also ISO/IEC approved. This is the way that Cognigy has decided to make OData analytics available to users and developers to monitor bot performance. The reason it's been chosen is that it's a widely accepted protocol, which means we can integrate our analytics records with a lot of very standard and popular data analytics software packages. If you need more information about OData, you can check out odata.org.
Now, Cognigy has an OData architecture that looks a bit like this diagram. We start off with our analytics software on the left-hand side, and that could be anything such as Microsoft Excel or maybe Microsoft Power BI. Inside those analytics software packages, we provide a query that reaches out to our OData Endpoint. Now, with the Cognigy trial environment, that Endpoint is odata-trial.cognigy.ai. For your own installation of Cognigy, it could be something different, as it depends on the installation that you've set up. So just make sure you have the correct domain when you are reaching out to the particular instance of Cognigy that you're using. Once a query is made to the OData Endpoint, the Cognigy domain is able to access the Cognigy Database of all the transaction records, retrieve that data, and send it back to the analytics software.
So, what analytics are available from Cognigy? We have two types of analytics records. Firstly, we have Conversations, and this is where we can retrieve information from both the bot's and the users' messages. You can see a basic conversation here: all of these messages are available inside the Conversations record, and the use cases for accessing this record are to populate transcripts and for bot testing. The second record we have is called the Records record, or by its full name, the Analytics Record, and this contains only the user inputs. So the user messages that are sent to the bot are the only records that are available here.
However, it includes more metadata for each record. It includes things like the Intent that was found, if there was an Intent, the Intent score, whether the sentence was understood (a true/false boolean), and many other features, such as whether any Slots were found and whether any Goals were met. We can also provide custom analytics that are included in the Records data, and the use cases for this are to perform functions such as bot monitoring and overall general improvement of our bot.
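To make this concrete, here is a sketch of the kind of fields a single analytics record might carry. The field names below are illustrative only, not the exact schema, so check the Cognigy OData documentation for the fields your installation actually returns.

```python
# Illustrative shape of a single analytics record. The field names
# below are examples only, not the exact schema -- check the Cognigy
# OData documentation for the fields your installation returns.
sample_record = {
    "inputText": "How do I reset my password?",  # the user's message
    "intent": "ResetPassword",                   # Intent found, if any
    "intentScore": 0.87,                         # NLU confidence
    "understood": True,                          # true/false boolean
    "completedGoals": "success",                 # set via the Complete Goal Node
    "customValue1": "correct intent found",      # set via Overwrite Analytics
}

# For example, counting how many records were understood:
records = [sample_record]
understood_count = sum(1 for r in records if r["understood"])
```

Fields like `customValue1` are exactly the ones we will later fill from the Flow with the Overwrite Analytics Node.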
So a little bit about the concept of Cognigy Analytics. You can see on the left here, we have our Conversations record, which stores the user messages and the Virtual Agent outputs. As soon as those messages are available, when the user sends their input or when the Flow is executed to produce the Virtual Agent response, they are immediately written to the Conversations record. On the other side, we have our Records, or the Analytics Record, and this is a little bit different. When the input arrives from our user, the record is initialized, then our Flow is executed, and within the Flow, this enables us to perform multiple updates to the record. This means we can actually overwrite the analytics and add any of our own custom analytics that we want to use to track the performance of our bot. Finally, when the Flow execution is finished, some final updates are made to the record, the commit is made, and that record is then made available.
So finally, how do we retrieve this data, and how do we build a query to send to our OData Endpoint to retrieve that information? Well, there are three things that you need. Firstly, as we've talked about, you need a Domain URL, which in this case is odata-trial. Secondly, you need to specify the Collection that you're going to access, and we've just explored those two collections: we have Records and we have Conversations. Thirdly, and finally, is authentication. You do need access to be able to retrieve this information, and Cognigy approaches this using API Key authentication. I'll walk through how we can get that API Key in just a moment. Additionally, there's an optional parameter: Filters. If you do want to pre-filter your data before sending it into your analytics package, you can do this by adding Filters in the query string. Now, we're not going to add any Filters today, but if you need more information about this, please check out our documentation.
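Putting those parts together, here is a sketch of how such a query string can be assembled. The `/v2.2/` version segment is an assumption for illustration; check the documentation for the exact path your installation expects.

```python
# Sketch: assembling an OData query from its parts -- domain, collection,
# API Key, and an optional filter. The "/v2.2/" version segment is an
# assumption for illustration; check the documentation for the exact
# path your installation expects.
from urllib.parse import urlencode

def build_odata_query(domain, collection, api_key, odata_filter=None):
    params = {"apikey": api_key}
    if odata_filter:
        # Standard OData system query option, e.g. "understood eq true"
        params["$filter"] = odata_filter
    return f"https://{domain}/v2.2/{collection}?{urlencode(params)}"

url = build_odata_query("odata-trial.cognigy.ai", "Records", "MY-API-KEY")
print(url)  # https://odata-trial.cognigy.ai/v2.2/Records?apikey=MY-API-KEY
```

In the trial environment the domain is odata-trial.cognigy.ai; for your own installation, substitute your own OData domain.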
Ok, so it's time to jump into our query builder and start building a query to grab some OData analytics. This is our small query builder, and we're going to use this information to build a query. We start with the required URL, which is odata-trial.cognigy.ai. We're going to be accessing the Records collection, so we add that into our string. Filters we're going to skip in this particular example, and we jump straight across to API Key authentication. To finish off the query, we do need to jump into Cognigy and grab an API Key. So let's do that now.
And this is the Cognigy Flow that we are going to be working with in this example. To get an API Key for our account, we need to visit the little personal icon in the top right-hand corner and go to My Profile. Now, we just need to scroll down to API Keys, and you can see there's a list of existing API Keys that I've been using on this account. I can create a new API Key with this plus button, which adds it to the top of the list, and then I can click on it to copy it. I'm just going to go back to our text editor here, and I can paste that API Key into our builder.
Now, I can copy this, and that is our query complete. So let's test it out. The easiest way to test this query is actually to use the browser itself, so let's copy that string, open a new browser tab, and paste it in. And you can see we have a whole lot of data provided in JSON format, which our browser can't render very readably. But what this is great for is just testing that, yes, we have a valid connection, we're authenticated, and our API Key is working and retrieving that data for us. Excellent.
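The same sanity check can also be scripted. Here is a sketch that fetches the feed and counts the returned records, assuming the standard OData JSON envelope, which wraps results in a top-level `value` array; the exact record fields depend on your installation.

```python
# Sketch: scripting the same sanity check we just did in the browser.
# Per the OData spec, the JSON response wraps results in a top-level
# "value" array; the exact record fields depend on your installation.
import json
from urllib.request import urlopen

def fetch_record_count(query_url):
    """GET the OData feed and count the entries in its 'value' array."""
    with urlopen(query_url) as resp:
        payload = json.load(resp)
    return len(payload.get("value", []))

# Offline illustration of the envelope parsing:
sample_response = '{"value": [{"inputText": "Hi"}, {"inputText": "Bye"}]}'
record_count = len(json.loads(sample_response)["value"])
print(record_count)  # 2
```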
So the next step, and what is a really important concept to understand, is that API Keys in Cognigy actually enable a whole lot of functions. They're not just tied to analytics records: as an administrator in Cognigy, you can perform Flow editing commands, you can configure users, you can retrieve conversation profile data. There's a lot more to API Keys than just analytics. So a really important practice is to use a dedicated user for retrieving analytics records. In this Cognigy tenant, if I go to the access control, obviously I am an admin user at this stage, and you can see that I've got a whole lot of users inside my tenant. What I have configured is a dedicated API User. This user has been given base roles but has also been given API Key and OData access. This role will allow the user to create their own API Keys, and it will also allow access to OData. But because they don't have the admin role, it means that they're not going to be able to perform all the other management-related and building-related calls that are available and open in the Cognigy OpenAPI.
The other thing we need to do for this user is give them access to the Agent that we want to extract the analytics for. With this combination of roles, we're going to be able to access the OData records for this Agent using an API Key associated with this user. So if I were distributing a Power BI template or a Power BI dashboard throughout my organization and I didn't want to provide a query string with an API Key that was associated with an admin user, I can use this API User to generate a dedicated API Key that's only going to have access to those analytics records. And that's what I've done inside this query: this is an API Key from the dedicated user that we're going to be using as part of the demo today.
So the next thing to look at is an analytics-enabled Flow. I'm going to go back into the sample Agent that we're going to be using throughout this video, and let's take a look at the Flow that we've built around this. In this Flow, we are basically doing very simple routing. We start off with a question to ask the user what they are going to ask about. We have a sample list of Intents, and depending on whether an Intent is found or not found, we perform some conversation logic. Then, at the end of the conversation, we just thank the user and forward them to an Agent.
At this stage, a real-life scenario would mean forwarding to a live Agent interface or to a voice Agent. So depending on whether we find an Intent, we are performing different things, and what Cognigy has is a dedicated analytics Node. This Node can be added by opening the Node search module and visiting the profile tab, where you'll find the Overwrite Analytics Node at the bottom of the menu. Obviously, you can also just search for it in the search bar. With the Overwrite Analytics Node, we can now customize the analytics record that's created when this Flow is activated in a conversation.
So in this example, when an Intent is found, we're overwriting the analytics under Custom Value 1 to say that the correct Intent was found. We're also adding the current Flow name, just as an indicator to show that we can use Tokens within these custom fields, and we can also write CognigyScript to add dynamic variables. So it's really possible now to create any type of analytics record. Maybe you have some kind of backend integration that's retrieving data and you want to post that to your analytics as well; that's also possible using this Overwrite Analytics Node. Now, these are the custom fields that we can change, but we can also overwrite any of the default fields. So if we do want to overwrite the Intent name for some reason, we can now do that through this Node, and there are various options for the fields that we can overwrite there. In the case that an Intent is not found, we're doing a very similar thing; we're just saying that no Intent was found.
Now I have two other Nodes in each of these paths. I have a Complete Goal Node, which just posts either a success or a failure, indicating whether an Intent was found or not. This is just another way of achieving a very similar use case to what we've done above with the analytics, but it demonstrates the multiple ways that you can go about using these analytics tools. Just to look at the NLU Intents that we've got available here: it's a simple FAQ structure about Cognigy. Ok.
So, let's take a look at Microsoft Excel, and let's do a simple request to the OData Endpoint in order to request data. Inside Microsoft Excel, in order to get to the OData request menu, I need to go to the Data tab at the top of the screen and then select "Get Data", followed by "From Other Sources" and "From OData Feed". I'm then going to grab the query that we built earlier, the sample query with our API Key user's credentials, and I can paste that URL straight into Excel and run the query. Initially, we can see a little sample of the data on screen, and yes, that's exactly the data I need, so I am going to load that into Excel. Perfect.
So, now what you can see on screen is the raw data that's been extracted from the OData record, and we can see that there's quite a lot. We're retrieving the Records collection as opposed to the Conversations collection, and that means the data includes a whole lot of extra information about the Intent that was mapped, the Intent Flow, the Intent scores, and we can also see the custom analytics that we've created. We've got our Goals, and at the end of the data, we've got our custom fields. So we have correct Intent, no Intent, and we also have the Flow name that is being published there as well. And that's all being created by the Overwrite Analytics Nodes that we created in the Flow itself.
What you can also notice, and I'll just zoom in a little bit so you can see it better, is that we have some data points in here that are masked with asterisks. In Cognigy, we have multiple ways of masking data. This particular record here uses an Endpoint data masking technique, and this particular record down here uses a Flow-based data masking technique. What we mean by data masking is this: if we're collecting sensitive information from a user, maybe a credit card number or a password, we don't want that to be posted in our analytics records or in any of our infrastructure logs. We just want to mask it, and we can do that using Cognigy. So I'll quickly show that now in the Cognigy Flow. Firstly, from the Flow itself, if I do want to mask a particular input at any stage, I can open the Blind Mode Node and select where I want to mask the information. By enabling both the logging and the analytics options, it masks the data not only in the analytics record that we've just looked at but also in the infrastructure logs as well.
After saving this, a conversation generated from this setup would look something like what we see here. We have our first initial messages, which are not masked. Then, when we do trigger an Intent, obviously we can see the Intent that's been triggered, but we cannot see the user message that triggered it, and the masking also covers any data that was associated with that message. So that is masking for a specific message. Now, if we want a setup where every single interaction is masked, we can use the Endpoint itself. I'm just going to remove this Blind Mode Node, and we're going to go across to our Cognigy Endpoints. We have a Webchat Endpoint set up already, which has been enabled, and we've had a few sample conversations going on. Under Data Management, this is where you'll find the settings to enable masking sensitive analytics and also masking sensitive logging, exactly the same as what we saw in the Blind Mode Node, except that when this is enabled, every interaction, every message sent from our user, and all the data payloads will be completely masked with asterisks. Also in this Data Management dropdown, we can enable or disable contact profiles and even completely disable the collection of analytics.
Furthermore, we can integrate with Chatbase and Dashbot, as I mentioned earlier on, just by enabling them and providing the authentication parameters that are required for each of those services. So there are some great features for masking data within our conversations and ensuring it doesn't get transmitted across to our analytics records.
So next, we're going to jump into Power BI and demonstrate how we can retrieve this data and build a dashboard. I'm going to open up a blank Power BI dashboard project, and let's retrieve our analytics data. In a very similar way to Excel, we use the Get Data dropdown and select OData Feed. Again, I just need to go back, copy my query, and paste it into the URL field. That'll take just a couple of seconds to load, and then we can see, the same as we did in Excel, a sample of the data confirming that the data is actually available, and we can load that into the Power BI dashboard. That will take just a couple of seconds; there's not too much data. Of course, with larger datasets it takes a little bit longer to load that information.
Now we can see that when we click on the little table in the top left-hand corner, the data we saw in Excel is available inside Power BI as well, and that includes all our masked datasets, perfect. So the next step is to build a dashboard around this data. We're starting off with a blank canvas, and to start with, I'm going to create just a basic count of the number of messages that are being sent to the bot. To do this, I can go to the top right and select an option for populating a graph in the dashboard. Next, I'm going to add the axis around which the graph is going to be built. I'm going to do this based on time, so I drag the timestamp variable from our data up into the axis. You can see on the right-hand side, in this toolbar, that all of the data parameters we've retrieved from our query are now available, and they can be added to our graph once we've added the graph to our dashboard itself. Finally, to add the count of messages, I'm just going to drag across the ID, and we're going to do a count on the number of messages. Now, at the moment, the axis just shows the year 2021, so I'm going to take that out and reduce it down to, let's say, just the day, and you can see we have only been testing the bot over two days, so there's not too much data. But this is just an example of how we can build one of these dashboards, and over time, as our Virtual Agent starts to see more and more volume, these datasets will grow and these visualizations will become more and more valuable. So that is the count of IDs. I can also add a count by channel: in the legend field, I can add a legend variable.
I drag in the channel data value, and that creates a legend to differentiate between messages that have been sent on the admin console, which is the Interaction Panel inside Cognigy that we were testing with, and the admin Webchat, which is also available through the Cognigy platform but accesses the bot via the Endpoint. This would also be where we could differentiate between, say, a voice channel using the Voice Gateway, Facebook, Sunshine Conversations, or any other channel that we have integrated.
Ok, next I'm going to build a pie chart to understand whether the bot is understanding our user requests. I don't want to overwrite the existing graph, so I click inside the dashboard to create a new one. I've added this pie chart into the dashboard, and I need to access the understood data variable, which I can place under values, so we can see whether the input is being understood. Actually, I'm going to place that under legend to differentiate between false and true, and then I'm just going to do a count of each message. So again, I'm going to drag the ID, which was created individually for every message sent to Cognigy, and drop it into the values box, and that will populate the pie chart with all of our understood and misunderstood messages. Perfect. So we're starting to build up this dashboard.
Next, I think it makes good sense to use the Goals that we created inside Cognigy. To do this, let's use a clustered column chart. I'm going to put this one in the bottom right-hand corner and make it a little bit bigger. Inside the data, we also have our completed Goals list, which I'm going to use for the legend, and again, we're going to drag the ID into the values. Here we can see that we have three values: a failure, a success, and a blank value. Now, the blank value is not so valuable to us, so I'm going to filter it out using the dropdown here, where I can select which values I want to show and just untick it. Of course, I can also right-click on the data value inside the graph and select exclude, which performs the same thing. So again, that is a count of the Goals that have succeeded or failed, and that was something that we configured inside the Flow using the Complete Goal Node.
So next, I'm going to add a couple more graphs to assess the volume that's being channeled through our bot. For this, let's just use another bar graph. We're going to drop this above our existing line graph and line it up nicely. In this one, we're going to add a count of the session IDs. So, again, we're going to use the timestamp for the axis.
We're going to reduce that down to just the day, and we're also going to add the Session ID to the values field. Here we can see again that there's not too much data; it's only available over two days. But if we change the count, which at the moment is counting every single value, to a distinct count, it reduces that to every unique value, which gives us a count of the unique sessions that are occurring throughout the day. So, again, we can see that's been performed, and it's tallying up the number of unique customer sessions that are happening on every given day there.
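Outside Power BI, the difference between a plain count and a distinct count can be sketched in plain Python. The `sessionId` values and days below are made-up sample data for illustration:

```python
# Sketch: the difference between a plain count and a distinct count,
# as applied to sessions per day. The sessionId values and days below
# are made-up sample data for illustration.
from collections import defaultdict

rows = [  # (day, sessionId) -- one row per message
    ("2021-02-03", "session-1"),
    ("2021-02-03", "session-1"),  # same session, second message
    ("2021-02-03", "session-2"),
    ("2021-02-04", "session-3"),
]

plain_count = len(rows)  # counts every message: 4

sessions_per_day = defaultdict(set)
for day, session_id in rows:
    sessions_per_day[day].add(session_id)  # sets keep only unique IDs

distinct_counts = {day: len(ids) for day, ids in sessions_per_day.items()}
print(distinct_counts)  # {'2021-02-03': 2, '2021-02-04': 1}
```

This is exactly the switch we just made in the values field: a plain count tallies messages, while a distinct count tallies unique sessions.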
Let's do one more graph here, and I'll place it at the top of this column: a count of the number of unique contacts. So, again, take the timestamp, place it into the axis field, and we're going to bring in the contact ID, there it is, and drop it into values. Again, we've obviously got the year, so we need to reduce that down. Of course, you can customize this to however you want to view your data, but we've just been testing this in the month of February. And we can see, again, that we need to filter by unique values, so we're going to change the count to a distinct count of contact ID, and then we have it. We can see that on this particular day, we had just one contact ID; on this day, we had seven. Very good.
Now, a couple of other things we can do here is use some of the more extensive cards and visualizations that are available within Power BI. Let's add a multi-row card to this dashboard. I'm just going to position it in the bottom left here. What I'm going to add to this one is, again, a summary by month of the number of messages that are going through our bot. To do that, I again drag the timestamp into the fields. Time is something that we rely on quite a lot in Cognigy analytics; it's a very powerful tool for viewing the performance of the bot over time and for checking whether the changes and performance improvements we've implemented are working or not. You can see that all that time data is now available. For this example, I'm just going to track by month, so let's take out the year, quarter, and day. Perfect. The next thing I need to do is add in the count of messages, which we can do by just dropping in the ID. And of course, I need to change that to a count, and perfect.
There you can see that for the month of February, we have 44 interactions. Another thing we might want to do is add an average Intent score. We can do that by grabbing the Intent score field and dropping it into the fields, and then we actually need to change this to an average; at the moment, it's a sum. Perfect. So we can see that the average Intent score for the month of February, across 44 interactions, was 0.7, and as our bot improves and changes over time, we can see how that value changes.
Now, the other thing we might want to add here is a table of common user messages, which we can do using the table feature. I'll expand it to fill the space we have; we might need a bit more room for this one because it will contain a lot of data. We can simply add the input text, which is where our user messages are available, into the values field. Perfect. So there we can see all the messages that have been sent to our bot, and we can also include a count of these messages: we bring in the ID field again and change that to a distinct count. I'll just change the sizing of this so we can see what's going on, and we can sort by the highest volume. We can see that, from when I've been testing the bot, the message Hi has been something that I've commonly used to trigger the start of the conversation, and there are 16 occurrences of it. And throughout the conversations, we can see the popularity of particular requests that have been sent.
Now, as you saw just there, when I clicked on that message, it changed the visualization across the rest of the dashboard, so we can see exactly which days this message was sent, which sessions it belongs to, and what percentage of the time those messages have been understood or misunderstood. So, again, for a request like something about company history, we can see whereabouts those messages are being sent, maybe the time of day or the time of year, depending on the size of the bot and who's been talking to it.
So the final thing that I want to add to this dashboard is a slicer. A slicer is used inside Power BI to filter the data; it's essentially the way that Power BI refers to filters, an interactive filter that populates on the dashboard itself. We can add one from the visualizations menu, here it is.
Now, for this slicer, let's add the timestamp and filter just on time itself. We can do that just there, and straight away we have the ability to change the time range for which these data results are shown. We also have some ways of configuring the slicer itself, using the slicer configuration toolbar here. Now, it doesn't make too much sense to do this because we only have two days of data, but on my other screen here, let me just bring across a sample dataset where we have a little bit more data. You can see that when we have a large dataset over a longer period of time, the data becomes a lot more usable, and we can change our slicer to show the relevant data. As we change the slicer, each of those graphs updates together with it, which is really nice and provides a lot of interaction between the user and the dashboard itself. So that's a little sample Power BI Dashboard, and that concludes the walkthrough for today.
We're going to make this dashboard available for you to download and try out yourself. Essentially, what you can do is change the OData request value in the OData Feed menu, update it to your own bot, and populate your data straight into the dashboard. And if you need any more information about building those OData requests, please check out our documentation (we posted the link in the description as well) or check out our community, where you can post messages and ask us questions.
Thanks very much for joining and I wish you all the best of luck with analytics in Cognigy.AI.
Thank you very much for watching our session on Cognigy.AI and external analytics. Please don't forget to visit our Cognigy Help Center under support.cognigy.com for additional resources and information and the link to our community where you can ask questions or leave us feedback for this episode. Thank you very much for watching and see you next time.