The power of AI Agents based on Large Language Models (LLMs) lies in their ability to generate natural, conversational responses, but this also introduces inherent uncertainties. These models rely on a combination of randomness and probability to produce nondeterministic, human-like replies, which can sometimes fall short of expectations and make debugging and identifying issues particularly challenging.
You can always combine AI Agents with classic conversational AI to enable more deterministic interactions where necessary. Cognigy also offers several tools for diagnosing and resolving issues, including features like Debug mode and Live Follow.
To use Debug mode in the Interaction Panel and gain insight into what happens behind the scenes, follow these steps:
- Enable Debug mode. Go to the settings page of the Interaction Panel and enable Debug mode. Alternatively, you can click the … menu in the upper right of the Interaction Panel.
- Specify which details you would like to see. In the Interaction Panel’s settings, you can filter by node types. Additionally, for each individual AI Agent Node, you can enable specific logs under the AI Agent Node’s Debug Settings section, such as job execution details, knowledge retrieval, or token consumption.
- Activate Debug Outputs on the Nodes. All AI Agent Nodes (including Tool Nodes) contain a Debug Settings section in which you can selectively activate or deactivate debug messages.
If you're testing your AI Agent through an endpoint such as a webchat or a voice gateway, Live Follow is a useful feature for observing a conversation:
- Define the User ID. For the Webchat demo page, you can set the user ID via an additional URL parameter in the format url?user=ID (see the sketch below). If you already know the ID or phone number, you can use it directly without setting it here. You can also find Contact IDs in the sidebar under Manage -> Contact Profiles.
- Set the User ID in the Interaction Panel. Select the Live Follow tab at the bottom of the Interaction Panel and enter the user ID there.
This allows you to track the ongoing conversation and its corresponding Flow, inputs, outputs and debug messages in real time.
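If you generate the link to your Webchat demo page programmatically, the user parameter described above can be appended with a few lines of TypeScript. This is a minimal sketch only; the demo page URL and user ID shown are placeholder values, not part of the Cognigy product or its API.

```typescript
// Build a Webchat demo page URL that pins the conversation to a known user ID,
// so the same ID can be entered in the Interaction Panel's Live Follow tab.
function buildDemoPageUrl(baseUrl: string, userId: string): string {
  const url = new URL(baseUrl);
  url.searchParams.set("user", userId); // produces ...?user=<ID>
  return url.toString();
}

// Example usage with placeholder values:
const demoUrl = buildDemoPageUrl(
  "https://webchat-demo.example.com/", // hypothetical demo page URL
  "customer-12345"                     // the user ID you want to follow live
);
console.log(demoUrl); // https://webchat-demo.example.com/?user=customer-12345
```

Entering the same user ID (here, the placeholder customer-12345) in the Live Follow tab then lets you watch that user's conversation as it happens.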