Virtual Humans - now out in Paperback!


Our Virtual Humans book is now out in paperback! Get it from Amazon and all good bookshops - COVID willing!


Looks like Hanson Robotics are releasing a new version of Sophia "for research, academic and B2B applications".

Teaser video here

Frightening on so many levels :-)

AI Tech North Presentation

David's talk at AI Tech North on "Enriching Virtual Humans through the Semantic Web and Knowledge Graphs" is now available on video:

It's a 20-minute watch, and is followed by another interesting session on the analysis of language in social media use.

Virtual Humans - Thinkpiece Video

Earlier during lockdown I recorded a short talk on virtual humans inspired by the Virtual Humans book for our publishers.

Taylor and Francis have now posted the talk on their web site in their AI Knowledge Hub strand. You can view it at or below:

My co-author, Prof Maggi Savin-Baden has a similar piece on Digital Afterlife at

Enjoy, and we'd be interested to hear your thoughts.

Aura Blogpost - "From Virtual Personas to a Digital Afterlife"

David has a blog post on "From Virtual Personas to a Digital Afterlife" on the blog of the new Aura website/service, "designed to help you prepare your memories, important information and connect with loved ones before you die". Aura is part of the burgeoning digital afterlife/memorial sector, but is also looking to open up these conversations ahead of time.

You can read the blog post in full at:

It's also interesting to compare some of the ideas in the post with Channel 4's recent documentary on Peter: The Human Cyborg, as there certainly seems to be some common ground worth exploring.

Personal Digital Twins' Role in Epidemic Control

Apart from some sobering thoughts about Digital Immortality, I didn't think I'd see much linking Virtual Humans to COVID-19, but this is an interesting post around the concept of Personal Digital Twins (PDTs). As I understand it, the idea is that if your digital twin is put into the agent-based model of the disease then:

  • you get a more personal sense of whether the virus might get you, and so can modify your behaviour accordingly (although the reverse may be a danger);
  • the model is more accurate if your PDT is an accurate model of you rather than a generic agent; and
  • by combining the model's actions/results with your real-world actions/results, the model can be improved.

Read more at:
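The personalisation idea in the second point can be sketched in a few lines of Python. This is purely our own illustration - the Agent class, its parameters and the contact model are assumptions, not anything from the post - but it shows how a "personal digital twin" is simply an agent whose susceptibility and contact rate are calibrated to one real person rather than drawn from population averages:

```python
import random

random.seed(42)

class Agent:
    """One agent in a toy S/I/R epidemic model (all parameters illustrative)."""
    def __init__(self, susceptibility=0.5, contacts_per_day=8):
        self.susceptibility = susceptibility      # chance a contact with an infected agent infects this one
        self.contacts_per_day = contacts_per_day  # behaviour: how many people they meet per step
        self.state = "S"                          # S(usceptible) / I(nfected) / R(ecovered)

def step(population, recovery_chance=0.1):
    """Advance the model one day: contacts may infect, infected may recover."""
    infected = [a for a in population if a.state == "I"]
    for agent in population:
        if agent.state == "S":
            # each contact is with a random member of the population
            for _ in range(agent.contacts_per_day):
                other = random.choice(population)
                if other.state == "I" and random.random() < agent.susceptibility:
                    agent.state = "I"
                    break
    for agent in infected:
        if random.random() < recovery_chance:
            agent.state = "R"

# Generic agents plus one personalised "digital twin" with a lower contact
# rate (e.g. because the real person works from home) - the model is only
# as personal as this calibration.
population = [Agent() for _ in range(200)]
twin = Agent(susceptibility=0.3, contacts_per_day=2)
population.append(twin)
population[0].state = "I"          # seed one infection

for _ in range(30):
    step(population)

print("twin state:", twin.state)
print("ever infected:", sum(a.state != "S" for a in population))
```

The third point would then correspond to feeding the twin's real-world outcomes back in to re-calibrate those per-agent parameters.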

Digital Afterlife book is out!

Maggi's new book on Digital Afterlife is now out. The book delves deeper and wider into the topic of Digital Immortality introduced in our Virtual Humans book, and encompasses a wider range of artefacts and behaviours linked to the broader term of "Digital Afterlife". The book was co-edited by Victoria Mason-Robbie (with whom we've collaborated on other projects), and I have a chapter on the techie side of Digital Immortality, again digging a bit deeper than we could in the original book.

More information on the CRC site, or you can buy from Amazon in the US or UK, and all other good booksellers!

An Evaluation of a Linguistically Motivated Conversational Software Agent Framework

A cracking paper from Dr Kulvinder Panesar of York St John University on a design for a "Conversational Software Agent" (aka chatbot) which seems to have all the elements we've been championing for some time:

  • Grammatical parsing
  • Speech acts
  • Dialogue management
  • Triples-based memory
  • BDI motivation model

You can read the paper, and as the work dates back to at least 2017 there are a few papers to catch up on!
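To make a couple of the listed elements concrete, here's a minimal sketch - entirely our own, with illustrative class and method names that are assumptions rather than anything from Dr Panesar's framework - of a triples-based memory feeding a simple BDI (belief-desire-intention) deliberation step:

```python
class TripleMemory:
    """Facts stored as (subject, predicate, object) triples."""
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        # None acts as a wildcard, as in a simple triple-store pattern match
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

class BDIAgent:
    """Picks an intention from the first desire not yet satisfied by beliefs."""
    def __init__(self, memory):
        self.beliefs = memory                      # beliefs held as triples
        self.desires = [("user", "name", None)]    # facts the bot wants to establish
        self.intention = None

    def deliberate(self):
        for (s, p, _) in self.desires:
            if not self.beliefs.query(s, p):
                self.intention = ("ask", s, p)     # e.g. drives a question to the user
                return
        self.intention = None                      # all desires satisfied

memory = TripleMemory()
bot = BDIAgent(memory)
bot.deliberate()
print(bot.intention)        # ('ask', 'user', 'name') - the bot intends to ask
memory.add("user", "name", "Alice")
bot.deliberate()
print(bot.intention)        # None - desire satisfied, nothing to pursue
```

The point of the combination is that the bot's questions come from its own goals, not just from reacting to the last input - the intent-driven behaviour we keep championing.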

New Book: Digital Afterlife

Maggi Savin-Baden, co-author of Virtual Humans, has a new book coming out in April on Digital Afterlife, subtitled "Death Matters in a Digital Age". We had a chapter on the topic in Virtual Humans, and Maggi found there was so much interest in it (e.g. a recent story on Radio 4's Today programme, with a contribution from Dr Elaine Kasket, who spoke at our book launch) that she has put a whole book around the wider topic of Digital Afterlife. David has contributed a tech chapter looking more specifically at Virtual Personas within the context of Digital Afterlife/Digital Immortality.

The book is due to be published in April by Taylor and Francis and you can pre-order now.

Korean Virtual Child

Interesting piece on the Today programme this morning about a South Korean woman who has created a virtual copy of her dead child. Shades of Be Right Back, later episodes of Channel 4's Humans and of course presaged in our chapter on Digital Immortality in the book. Dr Elaine Kasket contributed to the BBC discussion and was also one of the speakers at our book launch. You can find several media reports on-line about the story including from the Times and Telegraph.

"The documentary Meeting You has been watched by millions in South Korea. It features Jang Ji-sung putting on a headset and “meeting” her daughter, Na-yeon, who died four years ago at the age of seven. Ms Jang also wore gloves that gave her some sensation of touching her child in a computer-generated park." - The Times

"The digital avatar was created using images collected from a child model and from pictures of Nayeon. The show is designed to give relatives who are grieving the loss of a loved one the chance to say goodbye to a virtual version of their lost family member."

The screengrab video (see Times article) is also very reminiscent of the old Milo demo from Lionhead Studios.

Meena - the latest Google ML Chatbot

There's been quite a bit in the press recently about Meena, Google's new chatbot powered by an "Evolved Transformer seq2seq" neural network model, which is even more powerful than the recently touted GPT-2. The bot has been trained on "341 GB of text, filtered from public domain social media conversations".

Whilst the results for general conversation seem impressive, I can't help but think that this is a bit of a dead end, as the bot really has no idea what it is saying and is in no way directing the conversation; it's just coming up with the "best" response to each input, albeit taking the conversation to date into account. It just doesn't seem to have any intent or goals of its own.

More interestingly, though, Google has been measuring the performance of the bot (and using that as part of its reward function) with just two variables: how sensible a reply is, and how specific it is. Lots of bots go for sensibleness at the expense of specificity: they come back with general statements ("Why do you say that?") rather than actually trying to drive the conversation forward. For a while now we've been using Grice's maxims (quantity, quality, relation, manner) as a way of assessing bot performance, which aim to cover similar ground, but the Google model may be even simpler and sharper - particularly when trying to train a human to do the assessing!

Samsung Neon

At the CES show, Neon (backed by Samsung) showed off its Neon virtual humans. Press response (Engadget, Wired) was somewhat muted and sceptical; the main takeaways appear to be:

- Visuals are good, but really just a version of deep fake, and with uncanny valley issues
- Voice (TTS) is poor quality
- Chatbot engine is pretty basic
- Much of the viral video (e.g. the still above) is of actors, not even the Neon avatars!

So, let's see how far they've got by CES 2021!