This website and blog support the Virtual Humans book by David Burden and Maggi Savin-Baden published by Taylor & Francis in February 2019. Here you can find links to images, videos, papers, articles and application resources related to the topics covered by the book, and a blog maintained by the authors to track the development of this fascinating and vitally important topic.
New Unity Real-Time Avatar Technology
New real-time character rendering demo from Unity - https://unity.com/demos/enemies
Current Avatar Tech Roundup
Quite a nice roundup of current avatar tech.
One thing I did spot in iHuman was the Baby X work, which apparently was the initial testbed for the Soul Machines technology. Looked a bit more convincing than the old Milo demo.
Finally got around to watching iHuman. Feel like watching it again so I can plot all the different ways it uses "AI" onto my AI Landscape model. It also conflates AGI with Superhuman AI, and pushes the idea that AGI will be able to replace almost all human workers, despite a) noting that it might take the energy of a small city to power one, and b) the fact that lots of jobs need physical skills which we haven't cracked for robots yet. It doesn't stray into "sentient AI" at all, and a good 50% of it is about "basic" neural-net-style pattern recognition for marketing/intelligence/surveillance/military purposes. Nice visuals though.
At Connect 21 in Horizon Venues
Watched Mark Zuckerberg's Metaverse presentation in Horizon Venues last night. The avatar designer has been upgraded but otherwise no changes that I could see. Still couldn't photograph the screen!
My full write-up is at https://dadenblog.blogspot.com/2021/10/facebook-aka-meta-connect2021.html but the Virtual Human highlights seemed to be:
- Yep, pretty much everything he talked about doing in the virtual world element of the Metaverse you could do in Second Life 15 years ago
- He talked about interoperability in terms of being able to bring avatars and virtual objects between worlds - but it wasn't clear if this was only between applications within the FB/Meta ecosystem, or out to third-party virtual worlds. I fear the former. At the ReLive conference in 2011 we highlighted interoperability as the key medium-term need - and we're still not there!
- There seemed to be a lot of emphasis on "holograms" and going to real concerts/meetings as holograms with real friends/colleagues. It wasn't at all clear how that was ever going to work, although the AR glasses would sort of make it feasible. Mind you, having tried to stream a bit of video out of a concert recently, I can't see how the bandwidth will ever be there! Oh, virtual after-parties - tick, done that: a great one after The Wall show in SL "back in the day", actually on the stage set.
- The Codec photo-realistic avatars looked quite good - volumetric and based on face/body scans, but with the ability to then change hair, clothes etc. The clothing demo and the talk about hair modelling reminded me of how much we wrote about those areas in our Virtual Humans book.
Not a lot of talk about autonomous avatars to populate the places - I guess they'll be too full of Facebook users!