Quite a nice roundup of current avatar tech.
One thing I did spot in iHuman was the Baby X work, which apparently was the initial testbed for the Soul Machines technology. Looked a bit more convincing than the old Milo demo.
Finally got around to watching iHuman. Feel like watching it again to plot on my AI Landscape model all the different ways it uses "AI". It also conflates AGI with superhuman AI, and with the idea that AGI will be able to replace almost all human workers, despite a) noting that it might take the energy of a small city to power one, and b) the fact that lots of jobs need physical skills which we haven't cracked for robots yet. It doesn't stray into "sentient AI" at all, and a good 50% of it is about "basic" neural-net style pattern recognition for marketing/intelligence/surveillance/military purposes. Nice visuals though.
[At Connect 21 in Horizon Venues]
Watched Mark Zuckerberg's Metaverse presentation in Horizon Venues last night. The avatar designer has been upgraded but otherwise no changes that I could see. Still couldn't photograph the screen!
My full write-up is at https://dadenblog.blogspot.com/2021/10/facebook-aka-meta-connect2021.html but the Virtual Human highlights seemed to be:
- Yep, pretty much everything he talked about being able to do in the VW element of the Metaverse you could do in SL 15 years ago
- He talked about interoperability in terms of being able to bring avatars and virtual objects between worlds - but it wasn't clear if this was only within the FB/Meta ecosystem of applications, or out to third-party virtual worlds. I fear the former. At the ReLive conference in 2011 we highlighted interoperability as the key medium-term need - and we're still not there!
- There seemed to be a lot of emphasis on "holograms" and going to real concerts/meetings as a hologram with real friends/colleagues. It wasn't at all clear how that was ever going to work, although the AR glasses would sort of make it feasible. Mind you, having tried to stream a bit of video out of a concert recently, I can't see how the bandwidth will ever be there! Oh, and virtual after-parties - tick, done that; a great one after The Wall show in SL "back in the day", actually on the stage set.
- The Codec photo-realistic avatars looked quite good - volumetric and based on face/body scans, but with the ability to then change hair, clothes etc. The clothing demo and talk about hair modelling reminded me of how much we wrote about those areas in our Virtual Humans book.
Not a lot of talk about autonomous avatars to populate the places - guess they'll be too full of Facebook users!
More on DeepFake approaches to voice synthesis
The Royal Air Force has just released a second edition of their "Stories from the Future" - fiction pieces designed to get people thinking about the future of the Services and Air Defence.
One of the stories, "Heads Together", draws on some of the work we've done for MOD around the concept of virtual personas:
"Diverse viewpoints make for better decisions, so imagine a world where the whole of society engages with Defence through some form of service and the friends that you make there can convene virtually when you need to discuss a problem, whether they now work in Defence, in industry, in academia – or at all. In this tale, we look at how our people might benefit from this in the future"
The question that the story explicitly poses at the end is:
How would you feel about being perpetuated in virtual form after you had changed jobs, left Defence or even died? Would advice from your virtual self be a liability to your real self?
You can read this, and the other stories at https://www.raf.mod.uk/documents/pdf/stories-from-the-future-second-edition/
Nice article on emotion detection using ML and an Ekman-type emotional model.
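For anyone unfamiliar with the model: Ekman's scheme classifies expressions into six basic emotions (anger, disgust, fear, happiness, sadness, surprise). As a minimal sketch, not taken from the article, the output of an ML classifier can be mapped onto those labels like this (the scores and function name are purely illustrative; a real system would get the scores from a trained model on facial images or speech):

```python
# Hypothetical sketch: map classifier output onto Ekman's six basic emotions.
# The scores below are placeholders, not real model output.

EKMAN_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def top_emotion(scores):
    """Return the Ekman label with the highest score."""
    if len(scores) != len(EKMAN_EMOTIONS):
        raise ValueError("expected one score per Ekman emotion")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EKMAN_EMOTIONS[best]

# Example with placeholder softmax-style scores from a classifier
print(top_emotion([0.05, 0.02, 0.10, 0.70, 0.08, 0.05]))  # happiness
```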