Source: https://www.virtually-human.com/
allowing you to use a hybrid user interface that combines a Digital Human with a traditional chatbot interface. The Digital Human is animated in real time and can smile, frown, nod, and shake its head (among other expressions) to mimic human facial behavior. The Digital Human also speaks the response defined in the Node, using text-to-speech.
Table of Contents
- Install the Virtually Human Extension
- Adding the Virtually Human Node to your Flow
- Using the Virtually Human Node
Install the Virtually Human Extension
The first step is to install the Extension. Navigate to the Extensions Marketplace inside your Cognigy Virtual Agent and add the Virtually Human extension by clicking the Install button. Once installed, the extension is displayed as follows:
Adding the Virtually Human Node to your Flow
In your Cognigy project, you can now add Virtually Human Flow Nodes to control the behavior of the Digital Human. When adding a Node, first click the Extensions tab and then the Virtually Human logo to make the Node visible.
Clicking the Node adds it to your Flow.
Using the Virtually Human Node
Clicking the Node in the Flow opens the configuration screen shown below:
In the upper field (The text you want the frontend to display), add the response that should be shown as a text balloon on the right-hand side of the front-end.
In the lower field (The text you want your digital human to speak out), add the response that the Digital Human should speak. This response can differ (slightly) from the displayed text, as you may want to tailor it to being spoken rather than written.
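For example (the wording here is purely illustrative), the displayed text might read "Your appointment is on 24 June at 14:30", while the spoken text reads "Your appointment is on the twenty-fourth of June at half past two in the afternoon", so that the text-to-speech output sounds natural.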
Additionally, you can add the Digital Human’s animations to the response by using the buttons for the most common facial expressions. For example, if you want your Digital Human to show a big smile, click ‘Mouth’ and then ‘Smile Large’ to insert the SSML code for the smile at the point in the response where you want the smile to start.
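As a rough sketch of what this looks like (the exact tag and animation names are inserted for you by the buttons and may differ from the assumed names shown here), the spoken text could end up looking something like:

```
Glad I could help! <mark name="Mouth_SmileLarge"/> Is there anything else you would like to know?
```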
You can do the same to control the eyebrows, head movements, and eyes. Adding a break allows an animation to complete before continuing with the next part of the response. The full list of supported animations is available on our website:
https://virtually-human.com/Animation/AnimationCatalog.html
The SSML tags listed in this overview can be added manually to the Node.
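For instance, a response that raises the eyebrows, pauses briefly so the animation can finish, and then nods could look roughly like the snippet below. The animation names used here are assumptions for illustration only; copy the exact SSML tags from the animation catalog linked above.

```
<mark name="Eyebrows_Raise"/> That is a great question. <break time="800ms"/> <mark name="Head_Nod"/> Yes, we can certainly arrange that for you.
```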
For examples and inspiration, please visit: https://www.youtube.com/@virtually-human
If you need help configuring the animations, feel free to reach out to us by phone (+3188 888
9800) or email (info@virtually-human.com).