Cognigy's Agent Copilot is an AI-powered agent assist solution for enterprise contact centers. It is embeddable into any agent workspace or available as a standalone interface, offering a variety of advanced features that empower agents to provide fast and effective customer support.
These include contextual handover, real-time knowledge lookup, sentiment analysis, next best action suggestions, live transcription, and automated wrap-ups, among others.
The Agent Copilot interface adopts a fully customizable, dynamic grid layout with plug-and-play widgets that adapt to enterprise processes and requirements. In this video, we'll show you how to set up Agent Copilot and seamlessly integrate it into your contact center solution.
00:00 Introduction
01:44 Basic configuration
Grid configuration, Agent Copilot Nodes, Plug-and-Play Widgets
08:45 Deployment options
Digital messaging and contact center voice
15:25 Integration into contact centers
Standalone vs fully embedded into the agent workspace
21:48 AI Copilot architecture
25:00 Outro
Join the Cognigy.AI User Community
Welcome to Cognigy Sessions, our Techinar series from Conversational AI experts for experts. In every episode, we do a deep dive into another Cognigy.AI topic, going much deeper than any marketing collateral to help you get the most out of your favorite Conversational AI platform.
Hi, I'm Dilek, and in this session we will cover AI Copilot, our next-gen Agent Assist for enterprise contact centers. We will show you the main components of AI Copilot, an in-depth how-to, as well as the next steps to start your own AI Copilot experience. This recording is also available in our Help Center. Please follow the link below for additional resources, go to our community for questions and discussions, or start a free trial if you're not a Cognigy.AI user yet. And with that, let's start today's session.
So, let's start with a more detailed overview of what we're going to cover in this video. We begin with the different components you will need for your AI Copilot experience: the grid layout, the Copilot Nodes, and the Plug-and-Play Widgets. Then we continue with the different types of use cases, such as digital and voice, as well as Cognigy being in front or the contact center being in front. We will have a quick look at the difference between using AI Copilot standalone and embedding it into a contact center. And lastly, there will be a rough overview of the architecture to give you a deeper understanding of specific components and workflows.
Let's start with the most important part of the AI Copilot configuration: the grid. The AI Copilot Workspace is based on a simple grid layout. This grid is fully customizable and lets you configure the number of columns and rows, the placement of the widgets, and the gaps between them. There are 2 methods for configuring the grid layout.
Before diving into those details, let me provide a brief introduction to the underlying logic so it's quick to grasp. Let's create a quick example. We will create 3 columns and 8 rows as the base of our grid. The axis running from left to right is our x-axis; the axis from top to bottom is our y-axis. Within this grid, we can now place our widgets; within the product, we refer to them as Tiles. Our 1st Tile will start in the upper left corner and will have a width of 1 column and a height of 4 rows. We will call it Identity Assist, as it displays our customer's profile. It will start at x1 and y1.
It will span 1 column and 4 rows. Now, let's continue creating more Tiles until we have the desired base for our Workspace. To configure your 1st grid layout, you can go directly to our OpenAPI documentation. If you do not have the link, do not worry: simply go to your Endpoint, scroll down to the Copilot section, and hover over the tooltip of the Copilot config to find the link.
In the API documentation, you can see the AI Copilot section. Here you have multiple options: you can get all of your created grid layouts, create new layouts, and modify or delete them. This is where you can configure your grid layout directly. For this you will need your API key, which you can find in Cognigy.AI under the user menu > My Profile > API Keys. If we now click on POST, you will see a familiar structure.
As in our exercise before, we first define the basic layout of the grid with columns and rows, then the gap between the individual Tiles. Below that we add all the necessary information about the Tiles. We give each Tile a Tile ID. Remember these IDs, as they will be needed in a later step. We then define where each Tile starts, as well as its width and height in columns and rows.
Now we need to add this for all of the Tiles we want to place in our grid. At the end, you need to define your project ID to ensure the layout is associated with the right project. Once you send the request, you will find your grid layout in the drop-down of your Endpoint settings. It will be used as the default layout in case you do not configure one within the Flow.
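To make that structure concrete, here is a minimal sketch of such a request, assuming the example grid from above. The endpoint path, header name, and JSON field names are illustrative placeholders only; take the exact ones from the OpenAPI documentation linked in your Endpoint settings.

```python
import requests

API_BASE = "https://api-trial.cognigy.ai"  # placeholder: use your own Cognigy API base URL
API_KEY = "YOUR_API_KEY"                   # from user menu > My Profile > API Keys
PROJECT_ID = "YOUR_PROJECT_ID"

# Grid from the exercise above: 3 columns, 8 rows, and one Tile called
# "identityAssist" starting at x1/y1, spanning 1 column and 4 rows.
grid_config = {
    "projectId": PROJECT_ID,
    "columns": 3,
    "rows": 8,
    "gap": 8,  # gap between Tiles (illustrative value)
    "tiles": [
        {
            "tileId": "identityAssist",  # must match the Tile ID used in the Copilot Nodes
            "x": 1,
            "y": 1,
            "width": 1,   # 1 column wide
            "height": 4,  # 4 rows tall
        },
        # ...one entry per additional Tile you want to place in the grid
    ],
}

response = requests.post(
    f"{API_BASE}/PATH_FROM_OPENAPI_DOCS",  # placeholder path; copy it from the OpenAPI docs
    headers={"X-API-Key": API_KEY},        # header name is an assumption; check the docs
    json=grid_config,
)
print(response.status_code, response.json())
```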
The configuration within the Flow is a bit easier and more flexible. You can use the Copilot Set Grid Node and do exactly the same configuration there by adding the rows and columns as well as the Tiles. The layout preview adjusts in real time, giving you a clear understanding of what you are configuring. This Node makes using the grid more flexible, as you can add it in different places of your Flow to adjust the grid to your needs.
Now we can start setting up our AI Copilot experience. The conversation with the customer will usually start with the main Flow, which is the virtual agent interacting with the customer. After the handover, we then continue with our AI Copilot Flow, which is triggered by the customer input. For this, we need to create an AI Copilot Flow. As we are using a Flow, we also have access to every Cognigy feature you would use with your virtual agent.
The important thing to note here is that we have specific Nodes for our AI Copilot experience that update the AI Copilot Workspace. Regular Say or Question Nodes will not update the AI Copilot Workspace. If you search for Copilot, you will find a long list of Nodes. Let's start with the 3 basic Nodes: an HTML Node, an Adaptive Card Node, and an iFrame Node. These can be fully modified according to your use case.
Once we open one of these Nodes, you can see something familiar: the Tile ID. This ID defines which Tile in your grid layout the Node should fill with content. Therefore, please ensure that the naming matches exactly between your grid layout and the Nodes. As mentioned, these 3 Nodes are fully configurable, and this also includes the styling.
Therefore, we have added more Nodes which we call Plug-and-Play Widgets. These are preconfigured Nodes that have the same styling as the AI Copilot Workspace, and you can of course still modify their content. Let's quickly go through them. The Identity Tile displays the contact information of your customer. Within the Tile, you can configure the layout of the widget, the image shape, and the source from which you want to get the information.
If you are using our built-in Contact Profiles, you can use the tokens to get all necessary information, or you can get the data from your CRM and place it within the Node. The customer data part consists of key-value pairs, making it configurable to your needs. Let's continue with our Next Action Tile. This Tile is designed to give your agents suggested replies. You can either manually add the best answer for specific intents, create a fitting response through LLMs, or display the results of Knowledge AI. The Transcript Tile, as the name suggests, displays the last user input. This is especially recommended for voice use cases, giving the agent the ability to see what the customer is saying. Here you can also define whether you want to display the customer's sentiment based on the last input. Please make sure to go to the project settings and configure the LLM resource you would like to use for sentiment analysis.
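To illustrate the key-value pairs mentioned above, the customer data section of the Identity Tile could be filled with something like the following; the keys and values are examples only.

```python
# Purely illustrative key-value pairs for the Identity Tile's customer data
# section. Replace the keys and values with whatever your agents need, e.g.
# Contact Profile tokens or data fetched from your CRM.
customer_data = {
    "Name": "Jane Doe",
    "Customer ID": "C-12345",
    "Contract": "Premium",
    "Open tickets": "2",
}
```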
Of course, we also offer a Tile that shows the sentiment only. Here you can define whether you want to display the current sentiment or an average over the last 10 customer inputs. Again, please configure your LLM before using this Tile. As we have released our Knowledge AI product, we have also added it as a Plug-and-Play Widget, so the agent has a knowledge search option within the Workspace. Please ensure that you have configured your Knowledge Store before using this Node, as it needs to be defined to generate the right output.
Now that we know all of the basics, let's have a deeper look into the different use cases. Let's start with the digital use case with Cognigy being in front, meaning that the conversation starts through a channel supported by Cognigy, such as our own Webchat or WhatsApp Endpoint.
Within your virtual agent's Endpoint, you first need to define your handover provider. Then move on to the Copilot section to define your Copilot config, which is the grid layout we created before, as well as the Copilot Flow, which consists of the Copilot-specific Nodes.
We do have further settings you can configure. In standalone use cases where you would like to chat from the Workspace, you can enable a native transcript Tile as well as the ability to chat within that Tile. Please be aware that we send all of these messages back to the contact center, but they might be displayed in the contact center transcript differently than when an agent uses the actual contact center UI. The last setting you can enable here is the redaction of messages within the transcript Tile to deal with sensitive information. Once you are done with your configuration in the Endpoint and have configured your Copilot Flow, you are ready to go.
Please note that these Endpoint configurations are only valid for use cases where Cognigy is in front and you are using the Handover to Agent Node within your main Flow for digital use cases.
Let's have a quick look at how this works in our Live Agent. For demo purposes, I created a main Flow which directly does a handover to the agent. Once the agent communicates with the customer and the 1st customer input is sent, the Copilot Flow runs and the Workspace is populated with the right data. The same configuration can be used for every natively supported handover provider, which you can find in the handover provider list in the Endpoint settings.
Let's continue with our 2nd use case, which is still digital, but with the contact center in front. As a reference contact center, we will use Genesys Cloud CX. A short overview of the use case: the customer chats through the Genesys web chat widget, and Genesys forwards the messages to Cognigy to connect to a virtual agent. This can be done through the Genesys Bot Connector Endpoint within Cognigy. For the handover back to the contact center, we have the option within the Bot Connector Endpoint to subscribe to the Genesys Notification API.
To subscribe to the Notification API, add your Genesys Cloud credentials, which consist of the OAuth URL, client ID, client secret, and OAuth scope. You can find all the necessary information within your Genesys Cloud CX instance. The subscription to the Notification API enables us to receive all of the Genesys events within Cognigy, as well as the transcription of the customer input. Now it's time to switch to the voice use cases.
Let's start with our Voice Gateway being in front. This means that the customer calls a specific number, is connected to the virtual voice agent, and is then handed over to the human agent. This works through a transfer with a SIP INVITE, which basically means that Voice Gateway stays on the call and listens to and transcribes the defined audio stream.
To configure this, add a Transfer Node and choose the dial option. Now enable the Copilot toggle. This creates a specific UUI value that is sent over to the contact center; we will go into more detail on why this is needed in a later step. After enabling the Copilot toggle, we will see a Copilot header key. As we are sending information through SIP headers, we need to define the specific header key for your contact center.
In our example, you can see the Genesys-specific header key. In the next section we configure the transcription. You can define your preferred speech-to-text vendor and language. In the transcription webhook, you need to add your Voice Copilot Endpoint URL. This is a specific Endpoint that can handle the voice transcription and separates the audio streams into user and agent by channel tags. Please ensure to add the User and Session ID at the end of your Endpoint URL, as this is how we ensure that we're updating the right AI Copilot Workspace with the output of the transcription.
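As a rough illustration of that last point, the transcription webhook value could be assembled like this; the query-parameter names are assumptions, and the actual Endpoint URL comes from your own Cognigy.AI Endpoint settings.

```python
# Illustrative only: appending the User and Session ID to the Voice Copilot
# Endpoint URL used as the transcription webhook. The parameter names are
# assumptions; copy the real Endpoint URL from your Endpoint settings.
voice_copilot_endpoint = "https://YOUR_COGNIGY_HOST/voicecopilot/YOUR_URL_TOKEN"
user_id = "USER_ID"        # placeholder, resolved from the current session
session_id = "SESSION_ID"  # placeholder, resolved from the current session

transcription_webhook = (
    f"{voice_copilot_endpoint}?userId={user_id}&sessionId={session_id}"
)
```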
The last configuration you need to make within the Transfer Node is to define the audio stream you would like to transcribe. This can be the called party, the caller, or both. To finish off our Voice Gateway configuration, we go to our Voice Gateway Endpoint and select the right Copilot config so that the right default grid layout is used.
For now, I will skip the demo of the voice Copilot experience, as I will first continue with the contact-center-in-front use cases and then demo how we can embed AI Copilot into the contact center and how this all connects to the configuration we have done within our Transfer Node.
There are multiple ways the contact center can be in front in voice use cases. Either it is in front the entire time and streams the audio to Cognigy to connect to a virtual voice agent, or it uses Voice Gateway for the virtual agent and then refers the call to the contact center so it can take over full control. In both use cases, Voice Gateway does not receive the audio stream by default. Therefore, we have added support for SIPREC. This means that the contact center starts a SIPREC call; Voice Gateway receives the audio, does the transcription, and forwards it to the Voice Copilot Endpoint to execute the AI Copilot Flow.
To configure this, we need to add a default SIPREC application within the account in our Voice Gateway portal. This default application is configured with the Endpoint URL of your Voice Copilot Endpoint from within Cognigy. Once we receive the audio from the SIPREC call, we transcribe it and send the transcription to the defined default application. The contact center sends us a User and Session ID through SIP headers, which we add to the transcription information to ensure we can update the right AI Copilot Workspace.
The next part of our video is the actual integration into contact centers. We have 2 different options for integrating with contact centers. The first one is the standalone option. This means that we only send the AI Copilot link to the agent in a private note, so it can be opened in a browser window. The entire AI Copilot experience is still the same, except that the agent needs to work with their default contact center Workspace as well as the browser window with the AI Copilot data.
Here are a few examples of how the agent would receive the link within the contact center. As a reference, we use 8x8 as well as Genesys. This is the easiest way to start off with AI Copilot, as it does not require any configuration within the contact center other than clicking on the link.
The more interesting part is the embedding into the contact center. Please remember that this process is different for every contact center. As a deep dive, I will show you in detail how we are doing the embedding into Genesys Cloud CX.
Let's start by getting into our Genesys Cloud CX platform. To embed the AI Copilot Workspace next to your agent's current Workspace within Genesys, you need to add a script. In Genesys Cloud, you can configure scripts within the contact center settings. Add a web page, sized full screen, to your script. The parameter needed for the web page source is the UUI data. And this is where our cycle connects again, as this is the data we send in the SIP headers when doing our voice transfer.
The next step is to create a Flow for incoming calls. As every conversation can have a different use case, it is necessary to add a Flow within the Genesys architect to ensure the right conversation routing is enabled.
The first thing we will do is get our UUI value from the SIP headers. You can do this by adding a simple node to get the UUI data. To ensure that AI Copilot is enabled for the conversation, the screen pop needs to be configured. For this, simply create a set screen pop node and choose the right script that you created before. Afterwards, you can transfer the conversation to a queue to be picked up by an agent. Let's now call our virtual voice agent and see what happens within Genesys.
Bot: Hello. You are now connected to the virtual agent. I will transfer you to my human colleague. Please hold the line.
Dilek: Hello. I am calling regarding the cancellation of my flight.
Okay. And now let's do a really quick recap of what happened. We called our virtual voice agent. The virtual voice agent introduced itself and then transferred us to Genesys. When doing the transfer, as we configured before, the UUI value was sent in the SIP headers; it was received in the Architect Flow, and then the screen pop was triggered. That's why we were able to see the AI Copilot Workspace within Genesys.
Everything you can see right now are the Plug-and-Play Widgets that we placed into our Flow at the beginning of this video, which did not take us long. And throughout the entire video, we have used the same Workspace as an example, no matter whether it was chat or voice. Of course, there are also a lot of different ways to make this a little prettier, maybe changing the colors or the design a little to better fit your corporate identity. But in general, no matter what you build with AI Copilot, you are always able to use it for voice as well as digital use cases.
As we have now learned about screen pops and scripts within Genesys, let me quickly show you how easy it is to do the same for digital use cases. We again start off by creating a script within Genesys. We add a web page, sized full screen, to the script. The parameter needed for the web page source is the AI Copilot embedding URL, which you can find in Cognigy.AI within your Endpoint settings in the Copilot section. Just copy and paste this URL as the script source. Every conversation triggered through this Endpoint will now be able to use AI Copilot within Genesys.
Then we continue within Genesys: we again need to create a Flow for incoming conversations, add the screen pop, and select the script we have created. After this, we transfer the conversation to a queue to be picked up by a human agent. In the digital use cases, there is 1 more item we need to create, and that is a widget.
For the AI Copilot Workspace to be displayed on the right side of your Agent Workspace, a widget needs to be created to ensure we can use the previously created script. The configuration page for widgets is within the contact center section of Genesys Cloud. Once a widget has been created, the Flow that has been configured in the previous step needs to be chosen. And that's it. You are done with your integration.
Of course, we cannot stop talking about contact center integrations without having a look at our own Live Agent. For Live Agent, we have created a native integration which only consists of enabling AI Copilot within the Live Agent account settings.
As with every UI setting, we have also adjusted this so that agents can choose themselves whether they want to try out AI Copilot or not, or the admin can set a flag to override the agent UI settings and enable AI Copilot by default.
Now that we have learned about all the basic configurations and integrations into contact centers, let's take a deeper dive into the architecture to actually understand what is happening when we use AI Copilot, and to enable everyone to integrate AI Copilot into their contact center even if it is not natively supported by Cognigy.AI.
Let's use a voice use case as an example. The customer calls a specific number and is connected to the virtual agent. During the conversation, Voice Gateway takes care of speech-to-text and text-to-speech. Once we get to the Transfer Node within the Flow of the virtual agent, we create our UUI value. This UUI value basically forms our AI Copilot URL. It includes the following parameters: the AI Copilot base URL, User ID, Session ID, and URL token.
As we want to map the right transcription to the right AI Copilot Workspace, it is necessary that the User and Session IDs match. Therefore, when adding the Endpoint URL to the transcription webhook field, we also add the Session and User ID to it, so the values match and we update the right AI Copilot Workspace.
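As a rough sketch of what that UUI value contains, assuming illustrative parameter names (in practice, the Transfer Node generates this value for you when the Copilot toggle is enabled):

```python
# Conceptual sketch of the UUI value / AI Copilot URL described above.
# The parameter names are illustrative assumptions; the Transfer Node
# generates the actual value when the Copilot toggle is enabled.
copilot_base_url = "https://YOUR_COPILOT_BASE_URL"  # AI Copilot base URL
user_id = "USER_ID"
session_id = "SESSION_ID"
url_token = "YOUR_ENDPOINT_URL_TOKEN"

uui_value = (
    f"{copilot_base_url}/?userId={user_id}"
    f"&sessionId={session_id}&URLToken={url_token}"
)
# This value travels in the SIP header (e.g. the Genesys UUI header) and is
# later used as the web page source of the script embedded in the contact center.
```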
Once we execute the AI Copilot Flow, we send the updates to the embedded iFrame to display the new suggested replies or the transcription. To clarify how this works for the SIPREC integration as well, let's have a quick look at that.
After transferring the call from Voice Gateway to the contact center, we no longer have access to the audio, as we are using a SIP REFER. Through the SIPREC call, we still receive the audio in our Voice Gateway. While receiving this audio, we also receive the agent ID and conversation ID through SIP headers. These IDs are important because, when sending the transcription to the Voice Copilot Endpoint, we need to send a User ID and Session ID. Therefore, instead of using the Cognigy User ID and Session ID, we use the contact center's agent ID and conversation ID, but rename them to User ID and Session ID.
This enables us to match the User ID and Session ID with the IDs from the AI Copilot URL embedded within the contact center. As these IDs are the only ones that both the contact center and Cognigy have access to, they are the values we use as the User and Session ID. When sending the transcription to the Voice Copilot Endpoint, we execute the Copilot Flow. This results in the updates being sent to the right AI Copilot Workspace within the contact center, as the agent ID and conversation ID, referred to as User ID and Session ID, match. Basically, this means that no matter which contact center we are integrating AI Copilot into, as long as we have matching IDs and the IDs can be filled dynamically, we can embed AI Copilot into any contact center.
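To summarize that mapping in a minimal, purely conceptual sketch (the payload shape is an assumption for illustration, not the actual interface of the Voice Copilot Endpoint):

```python
# Conceptual sketch of the ID mapping described above, not the actual payload
# format of the Voice Copilot Endpoint. The contact center's agent and
# conversation IDs (received via SIP headers on the SIPREC call) are reused as
# User ID and Session ID so the transcription reaches the right Workspace.
def build_transcription_update(agent_id: str, conversation_id: str, text: str) -> dict:
    return {
        "userId": agent_id,            # contact center agent ID, renamed to User ID
        "sessionId": conversation_id,  # contact center conversation ID, renamed to Session ID
        "transcript": text,            # the transcribed utterance
    }

# The same pair of IDs is embedded in the AI Copilot URL shown inside the
# contact center, so both sides resolve to the same Workspace.
```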
Thank you for watching our session on AI Copilot. Please don't forget to visit the Cognigy Help Center at support.cognigy.com for additional resources and information, and for a link to our community where you can ask questions or leave us feedback on this episode. Thank you very much for watching, and see you next time.