In this episode of Cognigy Sessions we will dive into Endpoint Transformers, a powerful feature that enables customization of any Cognigy.AI Endpoint. You will get to know typical Endpoint Transformer applications, learn how to use them as part of your Conversational AI stack, and see how to deploy Transformers from our GitHub repository to your projects. Furthermore, we'll demonstrate how to build your own Endpoint Transformers for individual use cases.
Join the Cognigy.AI User Community
Welcome to Cognigy Sessions, our Techinar series from Conversational AI experts for experts. Every episode will give you deep insights into a new Cognigy.AI topic. Today's episode is about Endpoint Transformers. If you're new to Cognigy.AI, there's a good chance you've never come across that feature. But for many advanced users, Endpoint Transformers are a mission-critical capability that makes Cognigy.AI stand out as an enterprise-grade platform. This episode will show you: the typical applications for Endpoint Transformers, how to use Endpoint Transformers as part of your Conversational AI stack, and finally, how to build your own Endpoint Transformers for your individual use cases. This recording is also available in our Help Center. Please follow the link below to access additional resources and to visit our community for questions and discussions. You're also invited to start your own free trial if you're not already a Cognigy.AI user. And with that, let's start today's session.
Hi, everyone, this is Chris from Cognigy, and in this Techinar, I'm going to show you how to use Endpoint Transformers. Now, what exactly are they? Endpoint Transformers represent a powerful developer tool that enables you to customize any of our existing Cognigy Endpoints. They do so by hooking into the user inputs and the Cognigy output. Transformers use TypeScript with two extra functions: a custom HTTP Request function and the Session Storage. More on these later on. The Endpoint Transformers also have an execution timeout of five seconds, which is customizable on all dedicated environments.
Now, if we take a look at our default Endpoint behavior, you can see that the user input goes straight from the channel the user is currently using to the platform and the platform output goes straight back to the user channel.
Once we bring the Endpoint Transformers into play, you can see that the user input first goes through the Input Transformer and only then into the platform and the platform output first goes through the Output Transformer and only afterwards to the user channel.
And what exactly is this good for? Plenty of use cases, actually. The most common one is that you want to use Cognigy with a channel for which Cognigy does not provide an Endpoint by default. This might, for example, be a new messenger app. And, expanding upon that idea, basically any custom integration that you want to do. Speaking from experience, modifying an Endpoint is much faster than adapting the internal API, going through all the change requests and all of that. And you don't usually have influence over external systems, so the Endpoint Transformer allows you to handle events from third-party systems that are not in your control.
Another very common use case is the auto-translation. Feel free to check out our multi-language Techinar for more information on that.
And the Endpoint Transformers can, of course, be used to modify all user inputs and all platform outputs and of course, much more.
Now, when we talk about custom integrations, you usually want your Input Transformer, your Output Transformer, or both to talk to third-party APIs, or perhaps to your internal logging API. For this, you usually need some way to send an HTTP Request, which is exactly what our HTTP Request function gives you.
Now, how do you use it? Basically, the HTTP Request in the Endpoint Transformers is a function that takes a configuration object in the exact same format as the Options object of the Request module. There are also some default limitations on that function. The first one is that you can only use it once per Transformer execution, and the second one is that you are not allowed to send an HTTP Request to other Endpoints.
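To make the shape of that configuration object concrete, here is a minimal sketch. The `uri` and the logging API it points at are purely hypothetical, and `HttpRequestOptions` is our own illustrative type modeled on the Request module's Options object, not an official Cognigy type:

```typescript
// Illustrative subset of a request-module-style options object.
// Field names follow the Request module's Options format.
interface HttpRequestOptions {
  uri: string;
  method: "GET" | "POST";
  headers?: Record<string, string>;
  body?: unknown;
  json?: boolean; // serialize body as JSON and parse the JSON response
}

// Hypothetical call to an internal logging API
const options: HttpRequestOptions = {
  uri: "https://logging.example.com/events",
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: { event: "user_message", text: "Hi" },
  json: true
};

// Inside a Transformer you would then call, at most once per execution:
// const result = await httpRequest(options);
```

Remember the two default limits: one such call per Transformer execution, and never to another Cognigy Endpoint.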
Both of these restrictions are in place to prevent self-DDoS'ing on a SaaS environment and are, of course, customizable on dedicated environments. You can turn off these limitations in an on-premises scenario.
The second function that we added is the Session Storage. As the name implies, the Session Storage is a storage object that is available in all Transformer functions. The getSessionStorage function takes the user ID and the session ID as parameters, and it returns a promise which resolves with this Session Storage object. The implication is that you should not mutate the Session Storage directly. Instead, you should mutate locally and then reassign to the Session Storage. On the right, there is a small code example where you can see we define a temporary array, put some elements into it, and only after modifying the array do we store it in the Session Storage. This is the recommended pattern. To give you a slightly more graphic overview: basically, the Session Storage is the tool that allows the Input and the Output Transformers to talk to each other. And with that, let's head over to Cognigy and see it in action.
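The pattern from the slide can be sketched as follows. Note that `getSessionStorage` here is a local stub standing in for the function Cognigy injects into Transformers, so the snippet is self-contained; inside a real Transformer you would use the injected function directly:

```typescript
type SessionStorage = Record<string, any>;

// Stub for illustration only: the real getSessionStorage is provided
// by Cognigy and resolves the storage for this user ID and session ID.
const backingStore: SessionStorage = {};
async function getSessionStorage(userId: string, sessionId: string): Promise<SessionStorage> {
  return backingStore;
}

async function rememberItems(userId: string, sessionId: string): Promise<SessionStorage> {
  const sessionStorage = await getSessionStorage(userId, sessionId);

  // Recommended pattern: do NOT push into sessionStorage directly.
  // Build the value locally first...
  const tempArray: string[] = [...(sessionStorage.items ?? [])];
  tempArray.push("first item", "second item");

  // ...and only then reassign the finished value to the Session Storage.
  sessionStorage.items = tempArray;
  return sessionStorage;
}
```

Because both the Input and the Output Transformer can call `getSessionStorage` with the same user and session IDs, this is how the two sides hand data to each other.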
As you can see, we are now in the Cognigy user interface, in the "Chris Techinar Agent" I previously created for this video. Now, before we head to the Endpoints section, let's first create a Flow that will help us understand the Transformers. As you can see, we have our default Main Flow straight from the Agent creation, and let's change it so that the Hello World output only happens the first time. Afterwards, we will repeat the user input.
Let's use a Cognigy Token here. Save Node. All right, wonderful. Now, if I say Hi, and Hi again, the second time it repeats back the user input.
If we now head over to the Endpoints section, we have our little Webchat here. Let's actually open it and see that it currently behaves pretty much the same as the Interaction Panel.
Now, if we head into the Endpoint settings, you can find a Transformer Functions area here. If we expand that, you can actually find the Transformer code here, and a few toggles above to configure it. In our documentation, you will find more details on which Transformer is relevant for which Endpoint, and on what the Transformers are doing exactly. Let's just worry about the Input and the Output Transformer for this Techinar right now.
All right, so I've now enabled both the Input and the Output Transformer. However, I have not yet actually changed anything. This is basically the default configuration, and it will still behave as if I hadn't enabled them. I've refreshed the page here. So if I now say "Hi", I see "Hello world"; if I say "Hi again", it will simply repeat "Hi again", exactly the same as previously.
Now, let's actually do some changes here, so let's have the inputs that we received be encased by some brackets, for example. So let's actually copy payload text here. Let's extract it. And now it should encase everything, all the understood input. Or not, even the understood input, the input that got sent to the Endpoint. Now, let's also change the output, and let's simply append a little "from Cognigy" to the end. For this, we can modify the processedOutput of text. And simply, let's add a little "from Cognigy" to the end. Let's save this.
And now let's actually try it out in the current format, so let's refresh the page so we get the first Hello world. If you now say "Hi", you can see that the "Hello world" output has received " - from Cognigy" in the end. If I now say "hi" again, you can see that the "Hi again" is enclosed in the round brackets. And the " - from Cognigy" got added to the end. So we now have modified the Input and Output Transformer. But if we head back to the platform and into the Interaction Panel, you will see that actually the behaviour in the Interaction Panel is still the same. The Endpoint Transformers are currently only affecting the Webchat. The Flow itself hasn't changed a single bit. As you can see everything is still the same here.
Now that we've added the Transformers, let's actually do something a little more useful with the Input Transformer. If I now talk to this Flow in Spanish, for example, you can see it repeats the Spanish input straight back to me. However, we can translate this in the Input Transformer so that the English Flow is still able to work with English input.
For this, let's head back over to our Endpoint. What we are going to do is add an HTTP Request to the Azure Cognitive Services Translator API. First, let's define the result object, then the options object. I've already prepared the options object beforehand to comply with the Cognitive Services Translator API. You can see it requires a subscription key, which we currently do not have inside the Transformer code, so let's add this here at the top. This makes it easily configurable later on. Let's leave the subscription key empty for now.
All right. Now, of course, we also need to actually send the HTTP Request: we assign the result object the awaited HTTP Request result, passing in our options. Let's also console-log these results, appending the stringified version, which actually makes them readable in the Console log. And of course, we need to catch any error messages, so for now, let's just log the error message with console.error, which basically logs in a red color.
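A sketch of the pieces assembled so far is below. The endpoint URL, query parameters, and `Ocp-Apim-Subscription-Key` header follow the Azure Translator v3 REST API as I understand it from its documentation; double-check them against the current Azure docs before relying on them. The empty `TRANSLATOR_SUBSCRIPTION_KEY` constant is the configurable placeholder mentioned above, and the `httpRequest` call is shown in comments because it only exists inside a Cognigy Transformer:

```typescript
// Configurable at the top of the Transformer; left empty for now
const TRANSLATOR_SUBSCRIPTION_KEY = "";

// Build the options object for the Translator call (assumed v3 API shape)
function buildTranslateOptions(text: string) {
  return {
    uri: "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=en",
    method: "POST" as const,
    headers: {
      "Ocp-Apim-Subscription-Key": TRANSLATOR_SUBSCRIPTION_KEY,
      "Content-Type": "application/json"
    },
    // The API accepts an array of strings to translate
    body: [{ Text: text }],
    json: true
  };
}

// Inside handleInput, roughly:
// let result;
// try {
//   result = await httpRequest(buildTranslateOptions(payload.text));
//   console.log(JSON.stringify(result)); // stringify to make it readable in the log
// } catch (err) {
//   console.error(err.message); // logs in red in the Cognigy logging section
// }
```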
All right. Let's save it, head back to our Webchat, refresh the page, and create a new session. So if I now say "Hi" and "Hi again", you can see that for the user nothing has changed yet, and if I write in Spanish, it currently still doesn't do anything other than repeat the user input back to me.
Now, if we head over to the logging section, you will see that we have actually logged two errors. You can see here that the request is not authorized because credentials are missing or invalid. Of course, we need to add our subscription key to the Endpoint Transformer. So let me do that really quickly here, and let's display a black bar so that you cannot see my subscription key. Let me update it really quickly. All right, let's save that and scroll down a bit so you cannot see it.
All right, and now if we head back to the Webchat, the behavior will still be the same for the user. So "Hi", "Hi again", and you can see it's still not translating. However, if we look at the logging section, you can see there are no error messages. And if we turn on the debug messages and scroll all the way down, you can see we now actually log the HTTP result, and the object we receive back is basically an array containing a detected-language object and a translations object.
And we can, of course, use this now in our Transformer; this is the information we needed to finalize it. So, let's initialize the result for the error case, so that we can still use it and the Transformer doesn't crash on an error in the HTTP Request, which would be rather bad because then the user wouldn't get anything. translations was an array with a text property inside, and we simply set text to be the same as payload.text. All right, and add a semicolon at the end.
So now we've initialized our result object, and what we need to do next is actually use it to append something to the text. Currently we have the original user input, and after the HTTP Request, we can simply append some more.
So let's add a line break here, and then we say: translated is the result at position zero, dot translations at position zero, dot text. The API allows you to send multiple separate strings at once; this is why we receive an array, and also why we need this bit of indexing.
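Putting the fallback and the text assembly together, a sketch could look like this. The `TranslationResult` interface mirrors the response shape seen in the debug log (an array whose entries carry `detectedLanguage` and `translations`); the function name and exact output formatting are my own illustration:

```typescript
// Shape mirroring the Translator response seen in the debug log
interface TranslationResult {
  detectedLanguage?: { language: string; score: number };
  translations: { text: string; to?: string }[];
}

function appendTranslation(
  originalText: string,
  result: TranslationResult[] | undefined
): string {
  // Fallback initialization: if the HTTP Request failed, pretend the
  // "translation" is the original text, so the Transformer never crashes
  // and the user still gets a reply.
  const safeResult: TranslationResult[] =
    result ?? [{ translations: [{ text: originalText }] }];

  // The API can translate several strings at once, hence result[0]
  // and translations[0] to pick the first (and only) one here.
  return `${originalText}\nTranslated: ${safeResult[0].translations[0].text}`;
}
```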
All right, so, yeah, let's save it and now the user should actually see a different behavior. So, we now say "Hi", and now we say "Hi again". You will see that we have both the original and the translated. And if I now say "yo me llamo Chris", you can see now the original is the Spanish version and the translated is actually the English one.
If we now head back to our Flow, as previously seen, you can see, you know, the Flow hasn't changed so far. The Interaction Panel behavior is also still the same as the Interaction Panel is basically its own little Endpoint. And all of the translation and the Input and Output transformation is only happening for Webchat right now.
Now, instead of coding your own Transformer, you can, of course, also simply use our existing Transformer samples from our GitHub repository. Let me show you how to find it. I can either go directly to Cognigy on GitHub via the URL, or if you Google "GitHub Cognigy" you will find our GitHub repository quite quickly. If we head into that and check out the Transformer samples repository, you can see we have an Endpoint folder and also an NLU Connector folder. For this Techinar, the Endpoint folder is relevant. Let's take our Telegram Transformer, which is already in the custom Endpoint Transformer folder here; the code we need is in transformer.ts. And now what you do is basically simply copy the whole thing.
All right. Head back over to Cognigy, and now create a suitable Endpoint. As you can see here, if we scroll all the way up, this is a Webhook Transformer. So let's now also create a Webhook here. Let's call it Webhook and also point it to the Main Flow.
Here we go. Alright.
Now, what you do is basically go into the Transformer Function, select everything, and replace it with what we just copied from the GitHub repository.
You can see it has the handleInput and the handleOutput Transformers inside. You have plenty of interfaces that follow the Telegram API specification, so you don't need to worry about that at all, which is great. And the HTTP Requests and all of that are already configurable via the bot Token and the base API. So let's actually save that already. We also need to enable the Transformers. And again, I need to fill out the Token, so let me bring back the black box for this. Now that it's entered, let me scroll down a bit, save it, and I can disable the box.
And now, with all the configuration done, if we head over to Telegram, I can show you how it is working. All right, we are now in Telegram, and if I say "Hi" and "Hi again", you can see it is using our existing Flow. Wonderful. So this is how you use our existing GitHub repository and the plenty of Transformers that we already have in there. Of course, feel free to modify them. And that's it for the GitHub part.
Now, there is one more feature I want to show you, which is the Cognigy CLI. It allows you to edit your Endpoint Transformers in VS Code on your PC and then push them to Cognigy. If you head to the Cognigy CLI repository on GitHub, you will find it; you'll also find it directly via Google. In this repository, you have everything that you need to set it up. I will not go over all of the details right now: basically, you clone the repository, then you can use Node to run and build the package. Afterwards, let me show you what it looks like. Let's first create an additional Endpoint and call it the VS Code Webchat, pointing to the Main Flow. As you can see, the Transformer functions are the default functions; no changes have been made so far. So now let's enable the Input and the Output Transformer again, save it, and we will not make any other change in Cognigy right now. I've already prepared and set up the CLI, so now let's run the clone command against our Agent. It will now download all of the resources.
Now, if I open up VS Code, you can see here, let me maximize this, we now have this Agent repository, and we can see our VS Code Webchat. If we go to transformer.ts, you can see we now have the Socket Transformer. And here, for example, I can now write let text = "changed in VS Code", with the semicolon at the end. Let's add this and save it. Now we still only have it locally, so instead of clone, we now run restore, and this will push all of our resources to Cognigy.
And now it's run through, so let's check it out. Let's head over to our Endpoints. As you can see, everything has been changed a few seconds ago, and if I now go into the VS Code Webchat Transformer and refresh the page (it might have cached it), you will now see that it has the let text = "changed in VS Code" line.
Thank you very much for watching our session on Endpoint Transformers. Please don't forget to visit our Cognigy Help Center at support.cognigy.com for additional resources and information, and a link to our community where you can ask questions or leave us feedback for this episode. Thank you very much for watching, and see you next time.