Chapter 5 explores both the symbolic (language) and non-symbolic forms of communication that a virtual human might adopt. This page will no doubt break out into sub-pages as we develop more on each area. Note that our interest is in the communication of digital avatars within virtual worlds, or of digital avatars reaching out into the physical world (e.g. through a computer screen), not in physical robots or androids per se. Our working assumption is that if you can get something working in the virtual world, then getting it working in the physical world is "just" an engineering challenge!
The areas we look at in the book for non-symbolic communications are:
- Facial Expressions
- Stance and Body Language
For a quick recap on the current quality of facial expression capture, see https://www.cubicmotion.com/cm-live
- https://www.youtube.com/watch?v=eOjzC_NPCv8 (High Fidelity)
- https://www.smithsonianmag.com/innovation/app-captures-emotions-real-time-180951878/ (app now dead)
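To make the idea concrete, here is a minimal sketch of one way a virtual human might drive its facial expression from a discrete emotion label. The blendshape names, weights, and the `blend_expression` helper are all illustrative assumptions (loosely inspired by FACS-style action units), not any particular avatar system's API; a real rig would define its own controls.

```python
# Illustrative mapping from an emotion label to facial blendshape
# weights in the range 0.0-1.0. Names are hypothetical.
EMOTION_BLENDSHAPES = {
    "happy":    {"mouth_smile": 0.9, "cheek_raise": 0.6, "brow_raise": 0.2},
    "sad":      {"mouth_frown": 0.7, "brow_inner_up": 0.8},
    "surprise": {"brow_raise": 1.0, "jaw_open": 0.6, "eye_widen": 0.8},
    "neutral":  {},
}

def blend_expression(current, target_emotion, alpha=0.3):
    """Move the rig's current blendshape weights a fraction `alpha`
    of the way toward the target emotion's weights per frame, so the
    expression eases in rather than snapping."""
    target = EMOTION_BLENDSHAPES.get(target_emotion, {})
    names = set(current) | set(target)
    return {
        name: round(current.get(name, 0.0) * (1 - alpha)
                    + target.get(name, 0.0) * alpha, 3)
        for name in names
    }

# Example: easing from a neutral face toward "happy" over three frames.
weights = {}
for _ in range(3):
    weights = blend_expression(weights, "happy")
```

The exponential-smoothing step is one simple choice; real systems often layer this with micro-expressions, blinking, and speech-driven lip sync on top of the base emotional pose.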
Relevant papers are:
- Benoit, A., Bonnaud, L., Caplier, A., Ngo, P., Lawson, L., Trevisan, D. G., ... & Chanel, G. (2009). Multimodal focus attention and stress detection and feedback in an augmented driver simulator. Personal and Ubiquitous Computing, 13(1), 33-41.
- Ko, B. C. (2018). A brief review of facial emotion recognition based on visual information. Sensors, 18(2), 401.
- Pfister, T., Li, X., Zhao, G., & Pietikäinen, M. (2011, November). Recognising spontaneous facial micro-expressions. In 2011 IEEE International Conference on Computer Vision (ICCV) (pp. 1449-1456). IEEE.
- Poh, M. Z., McDuff, D. J., & Picard, R. W. (2011). Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE Transactions on Biomedical Engineering, 58(1), 7-11.
- Ramirez, G., Fuentes, O., Crites, S. L., Jimenez, M., & Ordonez, J. (2014, June). Color analysis of facial skin: Detection of emotional state. In 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 474-479). IEEE.
Stance and Body Language