A cracking paper from Dr Kulvinder Panesar of York St John University on a design for a "Conversational Software Agent" (aka chatbot), which seems to have all the elements we've been championing for some time:

- Grammatical parsing
- Speech acts
- Dialogue management
- Triples-based memory
- BDI motivation model
You can read the paper on Academia.edu, and the work dates back to at least 2017, so there are a few papers to catch up on!
Maggi Savin-Baden, co-author of Virtual Humans, has a new book coming out in April on Digital Afterlife, subtitled "Death Matters in a Digital Age". We had a chapter on the topic in Virtual Humans, and Maggi found there was so much interest in it (e.g. a recent story on Radio 4's Today programme which had a contribution from Dr Elaine Kasket, who spoke at our book launch) that she has built a whole book around the wider topic of Digital Afterlife. David has contributed a tech chapter looking more specifically at Virtual Personas within the context of Digital Afterlife/Digital Immortality.
The book is due to be published in April by Taylor and Francis and you can pre-order now.
Interesting piece on the Today programme this morning about a South Korean woman who has created a virtual copy of her dead child. Shades of Be Right Back and the later episodes of Channel 4's Humans, and of course presaged in our chapter on Digital Immortality in the book. Dr Elaine Kasket contributed to the BBC discussion and was also one of the speakers at our book launch. You can find several media reports online about the story, including from the Times and Telegraph.
"The documentary Meeting You has been watched by millions in South Korea. It features Jang Ji-sung putting on a headset and “meeting” her daughter, Na-yeon, who died four years ago at the age of seven. Ms Jang also wore gloves that gave her some sensation of touching her child in a computer-generated park." - The Times
"The digital avatar was created using images collected from a child model and from pictures of Nayeon. The show is designed to give relatives who are grieving the loss of a loved one the chance to say goodbye to a virtual version of their lost family member."
The screengrab video (see Times article) is also very reminiscent of the old Milo demo from Lionhead Studios.
There's been quite a bit in the press recently about Meena, Google's new chatbot powered by an "Evolved Transformer seq2seq" neural network model, which is even more powerful than the recently touted GPT-2. The bot has been trained on "341 GB of text, filtered from public domain social media conversations".
Whilst the results for general conversation seem impressive, I can't help but think that this is a bit of a dead end, as the bot really has no understanding and is in no way directing the conversation; it's just coming up with the "best" response to each input, albeit taking the conversation to date into account. It just doesn't seem to have any intent or goals of its own.
More interestingly, though, Google has been measuring the performance of the bot (and using that as part of its reward function) with just two variables: how sensible a reply is, and how specific it is. Lots of bots go for sensibleness at the expense of specificity: they come back with general statements ("why do you say that?") rather than actually trying to drive the conversation forward. For a while now we've been using Grice's maxims (quantity, quality, relation, manner) as a way of assessing bot performance, which aim to cover similar ground, but the Google model may be even simpler and sharper - particularly when trying to train a human to do the assessing!
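To make the two-variable idea concrete, here is a minimal sketch of how such a score could be computed from human judgements: each response gets a binary "sensible" and "specific" label, and the two per-response averages are themselves averaged (this mirrors Google's published "Sensibleness and Specificity Average" approach, though the field names and data shape here are illustrative assumptions, not their actual tooling):

```python
# Sketch of a sensibleness/specificity average over human-rated responses.
# Each judgement is a dict with binary labels; field names are hypothetical.

def ssa_score(judgements):
    """Average the per-response sensibleness and specificity labels (0 or 1)."""
    if not judgements:
        return 0.0
    n = len(judgements)
    sensibleness = sum(j["sensible"] for j in judgements) / n
    specificity = sum(j["specific"] for j in judgements) / n
    return (sensibleness + specificity) / 2

# Example: three rated bot responses
ratings = [
    {"sensible": 1, "specific": 1},  # a good, on-topic reply
    {"sensible": 1, "specific": 0},  # a generic "why do you say that?"
    {"sensible": 0, "specific": 0},  # a nonsensical reply
]
print(ssa_score(ratings))  # → 0.5
```

The appeal for human assessors is clear: two yes/no questions per reply are far easier to answer consistently than a four-way judgement against Grice's maxims.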