All articles on how to create & orchestrate LLM-powered AI Agents
- Getting Started with AI Agents
- Prerequisites: Set up your AI Agent's brain
- Create your AI Agent's persona
- Give your AI Agent a Job
- Make knowledge available to your AI Agent
- Give your AI Agent access to memory
- Deploy and use your AI Agent
- Improve your AI Agent’s skills using Tool Actions
- Enable your AI Agent to understand images
- Talk to your AI Agent via voice or phone
- Debugging your AI Agent
Cognigy’s AI Agents utilize the power of Large Language Models (LLMs). You will need to set up two LLMs in Cognigy.AI:
- An LLM for language generation (e.g., OpenAI GPT-4o)
- An LLM for embedding generation (e.g., OpenAI text-embedding-ada-002)
To set this up, navigate to the sidebar and select Build -> LLMs. Here you configure the language model, such as Azure OpenAI GPT-4o or another supported model. If you want to allow image uploads in a conversation, the model must support images as input (see “Enable your AI Agent to understand images”).
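Before wiring a model into Cognigy.AI, it can help to verify the deployment directly. The following is a minimal sketch using the official `openai` Python package against Azure OpenAI; the deployment name `gpt-4o`, the environment variables, and the image URL are assumptions you should replace with your own values.

```python
import os
from openai import AzureOpenAI

# Assumed environment variables and API version -- adjust to your Azure setup.
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

# Plain text generation: confirms the GPT-4o deployment responds at all.
reply = client.chat.completions.create(
    model="gpt-4o",  # your Azure deployment name (assumption)
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)

# Image input: succeeds only if the deployed model accepts images,
# which is required for image uploads in a conversation.
vision_reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},  # placeholder URL
        ],
    }],
)
print(vision_reply.choices[0].message.content)
```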
If you also want to add knowledge to your AI Agent, you'll need an embedding model to index and search through your knowledge base (see “Make knowledge available to your AI Agent”). The Azure OpenAI Ada embedding model is a popular choice for this purpose.
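You can test an embedding deployment the same way. This sketch assumes an Azure deployment of `text-embedding-ada-002` and the same environment variables as above.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

# Request one embedding vector; ada-002 returns 1536 dimensions.
result = client.embeddings.create(
    model="text-embedding-ada-002",  # your Azure deployment name (assumption)
    input="What are your opening hours?",
)
vector = result.data[0].embedding
print(len(vector))  # expected: 1536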
Once all models are configured, go to the sidebar and select Manage -> Settings -> Knowledge AI Settings. Here, you can assign the embedding model for knowledge search and the generative language model for answer extraction.
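Conceptually, knowledge search combines the two models you just assigned: the embedding model turns the user's question and the knowledge chunks into vectors, the most similar chunk is retrieved, and the generative model extracts the answer from it. The sketch below illustrates that flow with the `openai` package. It is a simplified stand-in for intuition only, not Cognigy's internal implementation; the sample chunks, deployment names, and environment variables are all assumptions.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

# Toy knowledge base (assumption): in practice these come from your indexed sources.
chunks = [
    "Our store is open Monday to Friday, 9 am to 6 pm.",
    "Returns are accepted within 30 days with a receipt.",
]

def embed(text: str) -> list[float]:
    # Embedding model: used for knowledge search.
    return client.embeddings.create(
        model="text-embedding-ada-002", input=text
    ).data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

question = "When are you open?"
q_vec = embed(question)

# Retrieve the chunk whose embedding is closest to the question.
best = max(chunks, key=lambda c: cosine(q_vec, embed(c)))

# Generative model: extracts the answer from the retrieved chunk.
answer = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": f"Answer using only this context: {best}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```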