Interfaces

At the end of Chapter 6 we look briefly at some of the interfaces that have been developed to help integrate different parts of a virtual human, or to integrate a virtual human with virtual worlds and other realms. There ought to be a role for interconnection and particularly interface standards within virtual humans development, but at the present time this does not seem to be a priority within the Virtual Humans community.

We summarise each of the following interfaces below, and will add to the list as we identify new interface standards:

  • Virtual Human Markup Language (VHML) (Marriott, 2001)
  • Discourse Plan Markup Language (DPML) and Affective Presentation Markup Language (APML) (De Carolis, 2004)
  • Function Markup Language (FML) and Behavior Markup Language (BML) (Kopp, 2006)
  • Perception Markup Language (PML) (Scherer, 2012)
  • Character Markup Language (CML) (Arafa, 2003)
  • Avatar Markup Language (AML) (Kshirsagar, 2002)
  • Avatar Sensing Markup Language (ASML) and Avatar Action Markup Language (AAML) (Slater, 2009)
There is quite a nice list of a whole variety of markup languages for the physical world (so far a much richer set than for virtual humans) at: http://web.mit.edu/mecheng/pml/standards.htm



Virtual Human Markup Language (VHML)

https://www.vhml.org/

VHML built on existing standards, such as those specified by the W3C Voice Browser Activity, and added new tags to accommodate functionality for Virtual Humans. It was XML/XSL based and included sub-standards to cover areas such as dialogue management, facial and body animation, speech, emotion and gesture.

Whilst broad and laudable in its goals, VHML does not appear to have seen any activity for a decade (just look at that web page background!).

Marriott, A. (2001, November). VHML–Virtual Human Markup Language. In Talking Head Technology Workshop, at OzCHI Conference (pp. 252-264). Available online http://ivizlab.sfu.ca/arya/Papers/Others/Representation%20and%20Agent%20Languages/OzCHI-01-VHML.pdf




Discourse Plan Markup Language (DPML) and Affective Presentation Markup Language (APML)


Discourse Plan Markup Language (DPML) and Affective Presentation Markup Language (APML) work together: DPML describes what an avatar wants to do, and APML enriches that description into a set of action instructions for the avatar's body.
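To give a flavour of the enriched output, the sketch below generates an APML-style fragment: a communicative act annotated with an affective state, wrapping the text to be uttered. The tag and attribute names (performative, affect, rheme) are modelled loosely on the published APML examples and should be treated as illustrative rather than definitive.

```python
import xml.etree.ElementTree as ET

# Illustrative APML-style fragment: an "inform" communicative act
# annotated with an affective state. Tag and attribute names are
# modelled loosely on published APML examples, not a formal schema.
apml = ET.Element("apml")
perf = ET.SubElement(apml, "performative",
                     {"type": "inform", "affect": "happy-for"})
rheme = ET.SubElement(perf, "rheme")
rheme.text = "I have some good news for you."

print(ET.tostring(apml, encoding="unicode"))
```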

De Carolis B., Pelachaud C., Poggi I., Steedman M. (2004) APML, a Markup Language for Believable Behavior Generation. In: Prendinger H., Ishizuka M. (eds) Life-Like Characters. Cognitive Technologies. (pp. 65-85). Springer, Berlin, Heidelberg




Function Markup Language (FML) and Behavior Markup Language (BML)

http://www.mindmakers.org/projects/bml-1-0/wiki

Part of the SAIBA architecture, where FML describes intent without referring to physical behavior, and BML describes the desired physical realization of the actions.
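As a sketch of the realization side, the Python snippet below builds a minimal BML block that speaks a sentence and aligns a beat gesture to the start of the speech. The namespace and sync-point reference style follow the BML 1.0 draft, though the specific gesture lexeme used here is an assumption.

```python
import xml.etree.ElementTree as ET

# Minimal BML block: speak a sentence and align a beat gesture to the
# start of the speech via a BML 1.0 style sync-point reference.
BML_NS = "http://www.bml-initiative.org/bml/bml-1.0"

bml = ET.Element("bml", {"xmlns": BML_NS, "id": "bml1"})
speech = ET.SubElement(bml, "speech", {"id": "s1"})
ET.SubElement(speech, "text").text = "Hello there."
# "s1:start" ties the gesture onset to the start of behaviour s1.
ET.SubElement(bml, "gesture",
              {"id": "g1", "lexeme": "BEAT", "start": "s1:start"})

print(ET.tostring(bml, encoding="unicode"))
```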

Kopp, S., Krenn, B., Marsella, S., Marshall, A. N., Pelachaud, C., Pirker, H., ... & Vilhjálmsson, H. (2006). Towards a common framework for multimodal generation: The behavior markup language. In: Gratch J., Young M., Aylett R., Ballin D., Olivier P. (eds) Intelligent Virtual Agents. IVA 2006. Lecture Notes in Computer Science, vol 4133. (pp. 205-217). Springer, Berlin, Heidelberg.




Perception Markup Language (PML)


PML is inspired by FML and BML and attempts a standardized representation of perceived nonverbal behaviors. The work was funded in part by DARPA.

Scherer, S., Marsella, S., Stratou, G., Xu, Y., Morbini, F., Egan, A., & Morency, L. P. (2012). Perception markup language: Towards a standardized representation of perceived nonverbal behaviors. In: Nakano, Y., Neff, M., Paiva, A., Walker, M. (eds.) Intelligent Virtual Agents, LNCS, vol. 7502. (pp. 455-463). Berlin, Heidelberg: Springer.




Character Markup Language (CML)


Character Markup Language (CML) is a high-level standard for body and facial control which leverages the MPEG-4 standard for avatar control.

Arafa, Y., & Mamdani, A. (2003). Scripting embodied agents behaviour with CML: character markup language. In Proceedings of the 8th international conference on Intelligent user interfaces (pp. 313-316). ACM.




Avatar Markup Language (AML)


The Avatar Markup Language (AML) encapsulated Text to Speech, Facial Animation and Body Animation in a unified manner with appropriate synchronization, and as with CML could be used to drive MPEG-4 animations.

Kshirsagar, S., Guye-Vuilleme, A., Kamyab, K., Magnenat-Thalmann, N., Thalmann, D., & Mamdani, E. (2002). Avatar Markup Language. In Proceedings of the 8th Eurographics Workshop on Virtual Environments (pp. 169-177). ACM Press.




Avatar Sensing Markup Language (ASML) and Avatar Action Markup Language (AAML)

ASML provides a way of defining sensory events (text-speak heard, object appearance/approach, weather etc.) and passing them from the body (avatar) element to the brain element. Its counterpart, AAML, provides a way of defining actions (and sequences of actions) to be passed from a virtual human brain to a manifestation of the virtual human (e.g. an avatar in a 3D world, or a chatbot on a 2D web page).
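The round trip might be sketched as below: the avatar reports a sensory event to the brain, and the brain replies with an action sequence. The element and attribute names here are purely hypothetical illustrations of the idea, not taken from the Slater and Burden schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical ASML/AAML round trip. Tag and attribute names are
# illustrative only, not drawn from the published schema.

# Body -> brain: an ASML-style sensory event (another avatar approaches).
sense = ET.Element("asml")
ET.SubElement(sense, "event",
              {"type": "approach", "object": "avatar",
               "name": "Alice", "distance": "2.0"})

# Brain -> body: an AAML-style sequence of actions in response.
actions = ET.Element("aaml")
seq = ET.SubElement(actions, "sequence")
ET.SubElement(seq, "action", {"type": "face", "target": "Alice"})
ET.SubElement(seq, "action", {"type": "say", "text": "Hello Alice!"})

print(ET.tostring(sense, encoding="unicode"))
print(ET.tostring(actions, encoding="unicode"))
```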

Slater, S., & Burden, D. (2009). Emotionally responsive robotic avatars as characters in virtual worlds. In 2009 Conference in Games and Virtual Worlds for Serious Applications (VS-GAMES'09) (pp. 12-19). IEEE.