Welcome to this basic tutorial on building your first KAI (Knowledge AI) agent using Cognigy.AI! In this guide, we’ll walk you through the essential steps to get your KAI agent up and running. You’ll learn how to set up a knowledge base, connect to Large Language Models (LLMs), and create your first LLM-powered conversational flows. By the end, you’ll have a functional AI agent ready to handle user queries with intelligence and efficiency.
Preamble: Describing the Use-Case
Before we begin, let's define the use case we will build in the next steps.
You are building an LLM-based bot for a Golf Club and want to answer your members' questions based on their membership level, which is split into three tiers: Bronze, Silver, and Gold.
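This access model can be captured in a small sketch that we will implicitly reuse throughout the tutorial (the tier names come from the use case; the tag names and function are illustrative assumptions, not part of Cognigy.AI):

```python
# Map each membership tier to the knowledge-source tags it may query.
# Tier names come from the use case; tag names are illustrative assumptions.
TIER_TAGS = {
    "bronze": ["bronze", "general"],
    "silver": ["silver", "general"],
    "gold": ["gold", "general"],
    None: ["general"],  # no membership: general knowledge only
}

def allowed_tags(membership):
    """Return the source tags a user may search; unknown tiers fall back to general."""
    return TIER_TAGS.get(membership, ["general"])
```

Users without a membership fall back to the general knowledge only, which matches the flow design later in this tutorial.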
Step 1: Setting Up the Knowledge Base
To ensure your KAI agent can retrieve accurate information, the first step is to create a solid knowledge base. This will involve defining the core data and content your agent will use to respond to user queries.
1. Create a Knowledge Store
- Navigate to “Build” > “Knowledge”.
- In the Knowledge Stores section, enter a name for your Knowledge Store. For this tutorial, let’s call it “Golf Club Knowledge Database”.
2. Set Up Your Knowledge Store
- Follow the instructions in our latest documentation to set up your knowledge store.
3. Upload Knowledge Sources
- You will need to upload one or multiple Knowledge Sources, which will then be split into chunks and vectorized by your chosen Text-Embedding Model.
- For this tutorial, let’s create four Knowledge Sources:
- One for each membership type: Bronze, Silver, and Gold, containing information relevant to that membership tier.
- One general knowledge source containing information applicable to all membership tiers.
4. Additional Resources
- If you need further information on building a reliable Knowledge Database, refer to our other Knowledge Base Articles for best practices.
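Under the hood, each uploaded Knowledge Source is split into chunks before being embedded. A minimal illustration of fixed-size chunking with overlap follows; this is a conceptual sketch, not Cognigy.AI's actual implementation, and the sizes are assumed defaults:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping chunks of roughly chunk_size characters.

    The overlap keeps context that straddles a chunk boundary retrievable
    from either neighboring chunk. Sizes are illustrative defaults.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Each resulting chunk would then be passed to the embedding model and stored in the Knowledge Store alongside its source tags.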
Step 2: Connecting to LLMs
Integrating the LLM is straightforward in Cognigy.AI. Just follow the steps outlined in our latest documentation.
Step 3: Creating Your First Flows
With your knowledge base and LLM connections ready, it's time to design the conversational flows that will dictate how your KAI agent interacts with users. Flows are essential for managing dialogues and determining how the agent responds to different inputs.
1. Designing the Conversational Flows
- In the next steps, we will guide you through building your first LLM-based FAQ bot for our Golf Club.
- The goal is to provide accurate answers based on the membership tier the user specifies. If the user does not have a membership, they will only have access to the general knowledge database.
2. Configuring the Search & Extract Node
- Use the “Search Extract Output” Node to configure how the bot will retrieve information based on user inputs.
- Enable the “Search & Extract” mode to allow for a general LLM prompt that formulates and outputs the answer.
- Assign “Source Tags” accordingly. For the Silver membership branch, use the tags “silver” and “general” to ensure the agent retrieves relevant information from the “GC Silver” and “GC General” sources.
3. Generating Answers with LLM
- After configuring the Search Extract Output Node, connect it to the “LLM Prompt” LLM Node.
- Provide basic LLM instructions to guide the model in generating responses based on the retrieved information.
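Conceptually, the Search Extract Output Node filters the Knowledge Store by Source Tags and retrieves the best-matching chunks, and the LLM Prompt Node then grounds its answer in them. The following toy sketch mirrors that pipeline; a naive word-overlap score stands in for vector similarity, and all data and function names are illustrative assumptions:

```python
def search_chunks(chunks, query, tags, top_k=2):
    """Return the top_k chunks whose source tag is allowed, ranked by
    naive word overlap with the query (a stand-in for vector search)."""
    query_words = set(query.lower().split())
    candidates = [c for c in chunks if c["tag"] in tags]
    scored = sorted(
        candidates,
        key=lambda c: len(query_words & set(c["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, retrieved):
    """Assemble a grounded prompt, as the LLM Prompt Node would."""
    context = "\n".join(f"- {c['text']}" for c in retrieved)
    return (
        "Answer the member's question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

# Illustrative knowledge chunks tagged by source, as in Step 1.
knowledge = [
    {"tag": "silver", "text": "Silver members may book the course 5 days in advance."},
    {"tag": "gold", "text": "Gold members enjoy unlimited guest passes."},
    {"tag": "general", "text": "The clubhouse is open daily from 7am to 9pm."},
]

# Silver branch: only "silver" and "general" sources are searchable.
hits = search_chunks(knowledge, "How far in advance can I book?", ["silver", "general"])
print(build_prompt("How far in advance can I book?", hits))
```

Note how the tag filter guarantees that a Silver member can never retrieve Gold-only content, which is exactly what the Source Tags on the node enforce.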
Step 4: Testing and Improving Your KAI Agent
Congratulations on building your KAI agent! The next crucial phase is testing and improving its performance. Building a knowledge base-driven LLM bot takes time and multiple iterations, and it's essential to refine both your knowledge base and your LLM prompts to enhance the user experience.
1. Testing Your KAI Agent
- Simulate User Interactions: Engage in conversations with your KAI agent as if you were a user. Test it with different questions across all membership tiers to see how well it retrieves and presents information.
- Evaluate Accuracy: Assess whether the answers provided by the agent are accurate and relevant.
- Check Conversational Flow: Pay attention to how smoothly the agent guides the user through interactions. A well-designed flow should feel natural and intuitive.
- Monitor for Hallucinations: Be aware of any instances where the agent generates information that is inaccurate or irrelevant. This phenomenon, known as “hallucination,” can mislead users and should be addressed promptly.
- Consider the Channel: Keep in mind that a chat use case operates differently than a voice use case. Tailor your testing approach accordingly to ensure the agent performs optimally across different channels.
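The simulated interactions above can be captured as a small regression suite: each case pairs a question and membership tier with keywords the answer must (or must not) contain, flagging both knowledge gaps and cross-tier leakage. This is a hypothetical sketch, with `ask_agent` standing in for whatever client calls your bot:

```python
def evaluate(ask_agent, cases):
    """Run each test case through the agent and collect keyword failures.

    ask_agent(question, tier) -> answer string; each case is a
    (question, tier, must_contain, must_not_contain) tuple.
    """
    failures = []
    for question, tier, must_contain, must_not_contain in cases:
        answer = ask_agent(question, tier).lower()
        missing = [k for k in must_contain if k.lower() not in answer]
        leaked = [k for k in must_not_contain if k.lower() in answer]
        if missing or leaked:
            failures.append((question, tier, missing, leaked))
    return failures

# Example with a fake agent that always answers about the clubhouse.
fake_agent = lambda question, tier: "The clubhouse is open daily from 7am to 9pm."
cases = [
    ("When is the clubhouse open?", None, ["7am", "9pm"], []),
    # A Bronze member's answer must not leak Gold-only content.
    ("What are Gold guest privileges?", "bronze", [], ["guest passes"]),
]
print(evaluate(fake_agent, cases))  # an empty list means all checks passed
```

Rerunning such a suite after every knowledge base or prompt change makes regressions visible immediately.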
2. Improving the Knowledge Base and LLM Prompts
Based on your testing and feedback, begin the process of improvement:
- Enhance the Knowledge Base: As you discover gaps in your agent’s responses, add new information or clarify existing content in your knowledge base. This is an ongoing process that will make your KAI agent more robust.
- Iterate on LLM Prompts: Experiment with different LLM prompts to improve the quality of responses. Consider adjusting the phrasing, providing more context, or refining the instructions to the LLM to ensure it generates better answers.
3. Iterate and Repeat
Building a knowledge base-driven LLM bot is not a one-time effort; it requires continuous iteration. Regularly revisit and enhance your KAI agent based on user interactions and evolving needs.
- Schedule Regular Updates: Plan for periodic reviews of your knowledge base and LLM prompts. This can include adding new content, refining existing information, and making necessary adjustments based on user feedback.
- Stay Informed: Keep an eye on advancements in AI and conversational design. Incorporating new techniques or tools can provide fresh opportunities for improving your agent.