Virtually Human: Animate Digital Humans in real-time


This extension integrates Cognigy with Virtually Human's Digital Human platform, allowing you to use a hybrid user interface that combines a Digital Human with a traditional chatbot interface. The Digital Human is animated in real time and can smile, frown, nod, and shake its head (among other expressions) to mimic human facial expressions. The Digital Human also speaks the response defined in the Node, using text-to-speech.

Before you can use this Extension, Virtually Human needs to connect the REST Endpoint of your Cognigy project to the Digital Human platform and set up and configure your Digital Human. You can use one of the available Digital Humans, or Virtually Human can create your own unique Digital Human based on your specific requirements. You can also choose from the available TTS voices, and if desired you can customize the look and feel of the frontend so it matches your organization's visual guidelines. You can call Virtually Human at +3188 888 9800 or send an email, and you will be onboarded and ready to use the extension within 1 or 2 business days.


Table of Contents

  1. Install the Virtually Human Extension
  2. Adding the Virtually Human Node to your Flow
  3. Using the Virtually Human Node

Install the Virtually Human Extension

The first step is to install the Extension. Navigate to the Extensions Marketplace inside your Cognigy Virtual Agent and add the Virtually Human extension by clicking the Install button. This installs the extension, after which it is displayed as follows:

Adding the Virtually Human Node to your Flow

In your Cognigy project, you can now select and add Virtually Human Flow Nodes to control the behavior of the Digital Human. When adding a Node, first click the Extensions tab and then the Virtually Human logo to make the Node visible.

When you click on the Node, it will be added to your flow.

Using the Virtually Human Node

Clicking on the Node in the Flow opens the configuration screen shown below:

In the upper part (The text you want the frontend to display), you can add the response that will be shown as a text balloon on the right-hand side of the frontend.

In the lower part (The text you want your digital human to speak out), you can add the response that the Digital Human will speak. This response can be (slightly) different from the text response, as you might want to tailor it to being spoken rather than written.
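For instance, a purely hypothetical pair of responses might look like this, with the spoken version spelling out the time so the TTS voice reads it naturally:

```
Display text: Your appointment is on 12 March at 14:00.
Spoken text:  Your appointment is on the twelfth of March at two PM.
```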

Additionally, you can add the Digital Human's animations to the response by using the buttons for the most common facial expressions.

For example, if you want your Digital Human to show a big smile, you can click on ‘Mouth’ and then ‘Smile Large’; the SSML code for the smile is then inserted at the position in the response where you want the smile to start showing.

You can do the same to control the eyebrows, head movements, and eyes. Adding a break allows an animation to complete before continuing to the next part of the response. The full list of supported animations that you can use is shown on our website:

The SSML tags that you will find in this overview can be added manually into the Node.
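As a sketch, a spoken response combining an animation tag with a break might look like the example below. The `<break>` element is standard SSML; the smile tag shown here is only an illustrative placeholder, so use the exact tag names from the animation overview on our website.

```xml
Welcome back! <mark name="mouth_smile_large"/> It's great to see you again.
<break time="500ms"/> How can I help you today?
```

Placing the tag directly before a phrase starts the animation as that phrase is spoken, and the break gives the animation time to complete before the next sentence begins.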

For examples and inspiration, please visit:

If you need help configuring the animations, feel free to reach out to us by phone (+3188 888 9800) or by email.


