Communications - Other


Chapter 5 explores both the symbolic/language and non-symbolic forms of communication which a virtual human might adopt. This page will no doubt break out into sub-pages as we develop more on each area. Note that our interest is in the communication of digital avatars within virtual worlds, or of avatars reaching out into the physical world (e.g. through a computer screen), not in physical robots or androids per se. Our working assumption is that once you can get something working in the virtual world, getting it working in the physical world is "just" an engineering challenge!

The areas we look at in the book for non-symbolic communication are:
  • Facial Expressions
  • Gesture
  • Stance and Body Language



Facial Expressions

There are two sides to this: being able to create facial expressions (covered in Chapter 3/Bodies & Avatars), and being able to read them.
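As a sketch of the "reading" side, the core step is mapping tracked landmark positions to expression weights an avatar rig can consume. The landmark names, layout, and calibration threshold below are illustrative assumptions, not any particular tracker's output format:

```python
# Hypothetical sketch: turning 2D facial landmarks (e.g. from a webcam
# tracker) into an expression weight an avatar rig can consume.
# Landmark names and the 15% calibration are illustrative assumptions.

def mouth_open_weight(landmarks):
    """Return a 0..1 'mouth open' weight from named 2D landmarks.

    landmarks: dict of name -> (x, y), with y increasing downwards.
    Lip separation is normalised by face height so the weight is
    roughly invariant to how close the face is to the camera.
    """
    lip_gap = landmarks["lower_lip"][1] - landmarks["upper_lip"][1]
    face_height = landmarks["chin"][1] - landmarks["brow"][1]
    if face_height <= 0:
        return 0.0
    # Assumed calibration: lips ~15% of face height apart = fully open.
    weight = (lip_gap / face_height) / 0.15
    return max(0.0, min(1.0, weight))
```

A real pipeline would compute a vector of such weights (brow raise, smile, blink, ...) every frame and stream them to the avatar's blendshape rig.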

For a quick recap on the quality of facial expression now achievable, see https://www.cubicmotion.com/cm-live
Research on reading facial expressions is being driven largely by the goal of using them to drive avatars in games/virtual worlds in real time, using just a webcam. Examples are:



Relevant papers are:




Gesture

Again, there are two sides: being able to animate the limbs to create the gestures (again see Chapter 3/Bodies & Avatars), and being able to read the gestures. Of course in a virtual world there may be a short-circuit for all three of these tasks (expression, gesture, stance): reading the commands sent to the other person's avatar or mesh, if available. Even so, it may still be simpler to work at the image level (even if never rendered) rather than on a sequence of x/y/z transforms.
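To make the "reading gestures" side concrete, here is a minimal sketch of classifying a wave from a tracked wrist trajectory, as one might get from frame-by-frame pose estimation. The oscillation test and thresholds are illustrative assumptions:

```python
# Hypothetical sketch: detecting a side-to-side 'wave' gesture from a
# sequence of wrist x positions (normalised image coordinates, 0..1).
# The reversal count and amplitude thresholds are assumed values.

def is_wave(wrist_x, min_reversals=3, min_amplitude=0.05):
    """A wave is taken to be at least `min_reversals` changes of
    direction, where each completed leg moved more than `min_amplitude`.
    """
    reversals = 0
    direction = 0  # +1 moving right, -1 moving left, 0 unknown
    leg_start = wrist_x[0] if wrist_x else 0.0
    for prev, cur in zip(wrist_x, wrist_x[1:]):
        step = cur - prev
        if step == 0:
            continue
        new_dir = 1 if step > 0 else -1
        if direction and new_dir != direction:
            if abs(prev - leg_start) >= min_amplitude:
                reversals += 1
            leg_start = prev
        direction = new_dir
    return reversals >= min_reversals
```

The same shape of logic (a small state machine over a landmark trajectory) extends to nods, points, and shrugs; in practice one would smooth the trajectory first to suppress tracker jitter.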

Interesting examples:



And papers:



Stance and Body Language

The same logic applies as for the other two, but stance seems to be less studied, although there is some interesting work on detecting the intent of people to jump in front of trains or onto train tracks!
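Even a crude stance reading can be extracted from 2D body landmarks, for instance flagging a forward lean from where the shoulder midpoint sits relative to the hip midpoint. The landmark names and the angle threshold here are illustrative assumptions, not a validated intent model:

```python
# Hypothetical sketch: a simple stance cue from 2D body landmarks.
# Landmark names and the 20-degree threshold are assumed values.
import math

def lean_angle_deg(landmarks):
    """Angle of the shoulder-hip line from vertical, in degrees.

    landmarks: dict of name -> (x, y), with y increasing downwards.
    Positive angles mean the shoulders are ahead of (greater x than)
    the hips.
    """
    sx = (landmarks["l_shoulder"][0] + landmarks["r_shoulder"][0]) / 2
    sy = (landmarks["l_shoulder"][1] + landmarks["r_shoulder"][1]) / 2
    hx = (landmarks["l_hip"][0] + landmarks["r_hip"][0]) / 2
    hy = (landmarks["l_hip"][1] + landmarks["r_hip"][1]) / 2
    return math.degrees(math.atan2(sx - hx, hy - sy))

def is_leaning_forward(landmarks, threshold_deg=20.0):
    return lean_angle_deg(landmarks) > threshold_deg
```

Serious intent detection (such as the railway work mentioned above) layers learned temporal models over cues like this rather than single-frame geometry.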

Interesting examples:




And papers: