Yesterday, in my first steps out of my “spectator only” comfort zone, I discussed how one might envisage giving the spectator movement, without making them feel like they’re playing a video game.
Now I have to work, at least a little, on how we’re going to achieve this technically, on the spectator’s side.
You all know, I think, that VR is currently accessed via a virtual reality headset: there are two leading brands in this space – and incidentally, to grow, these brands would do well to develop content.
There are more and more technologies that help viewers feel truly immersed.
The latest one – and not too expensive – that I’ve heard about is “point cloud” technology, which, as in video games, keeps in the computer’s memory only what the spectator can currently see rather than the whole 3D environment. This makes things faster and more realistic.
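The principle of keeping only what the spectator can see can be sketched as a simple visibility filter – a toy illustration under assumed names and parameters, not how a real point-cloud engine actually works:

```python
import math

def visible_points(points, cam_pos, cam_dir, fov_deg, max_dist):
    """Keep only the points the viewer can currently see: inside the
    camera's field of view and within draw distance.
    points: list of (x, y, z); cam_dir must be a unit vector.
    (Hypothetical helper, for illustration only.)"""
    cos_half_fov = math.cos(math.radians(fov_deg) / 2)
    kept = []
    for p in points:
        v = tuple(p[i] - cam_pos[i] for i in range(3))
        dist = math.sqrt(sum(c * c for c in v))
        if dist == 0 or dist > max_dist:
            continue  # too far away (or exactly at the camera)
        # cosine of the angle between the view direction and the point
        cos_angle = sum(v[i] * cam_dir[i] for i in range(3)) / dist
        if cos_angle >= cos_half_fov:
            kept.append(p)
    return kept
```

Everything behind the viewer or beyond the draw distance is simply dropped, which is the gain the author describes: less in memory, more budget for realism.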
After that, things get confusing.
My goal is to be able to have a spectator whose real movements and positions would be immediately “transcribed” into virtual reality.
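To make that “transcription” concrete – a minimal sketch, assuming positions are tracked in metres and the virtual scene uses the same scale; the function name, the scene origin, and the scale parameter are my assumptions, not an actual implementation:

```python
def to_virtual(real_pos, scene_origin, scale=1.0):
    """Map a tracked real-world position (x, y, z) in metres into the
    virtual scene's coordinate frame, 1:1 by default so that real
    steps equal virtual steps. (Hypothetical, for illustration.)"""
    return tuple(o + scale * r for o, r in zip(scene_origin, real_pos))
```

The point is only that, once the spectator’s body is tracked, carrying their real position into the virtual scene is a straightforward mapping – the hard part is the tracking itself.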
What are the current tools, in addition to the headset?
There are controllers, one per hand, which do a fairly good job of bringing the use of our hands and arms into VR.
There will be gloves – for those who dreamed so much while watching Minority Report.
There is already “hand tracking” technology, which recognizes your hand movements fairly quickly – though it’s still in its infancy.
That’s good – but not quite enough for what I have in mind.
It will therefore be necessary to offer a virtual reality technology that recognizes the movements of the whole “spectator” – not just their hands.
In this way, we will achieve the whole goal – and having real artists “play” with spectators will become truly interesting, since the spectators’ actual physical performance will be taken into account (too bad for the couch potatoes…).
It seems to me, once again, that it makes sense to follow what is being done technologically in video games – that’s where all of this will be developed furthest, and in the most detail.
Right now, if you’re playing Beat Saber, it’s not only about slicing the blocks coming at you – you also have to physically dodge so the obstacles don’t hit you.
So when I was talking yesterday about putting the spectator in the place of a juggling ball, that could mean – if the spectator has chosen the corresponding virtual mode – that in their living room, your spectator really curls up into a ball and really tries to escape the juggler’s hands. When I wrote this yesterday, I didn’t think to spell it out – I understand what I’m thinking, for the moment – but I’m not so sure I always make myself understood.
I’ve seen that this is starting to exist, especially in cyber cafés, because they have the means to implement it by adding extra hardware: platforms that move in real time for thrill seekers, etc.
Having lived through the arcade-game craze in cafés (remember the ’80s?), I think it’s pointless to invest long-term in this kind of very heavy equipment; better to focus on what is actually going to develop: “at home” technologies. By the time Altair develops, all of this will surely have moved into the home.
So what do we need?
- 360° technology for the sets
- motion capture for the real artists’ movements
- an artificial intelligence capable of adapting the artists’ movements to the movements of the virtual players/spectators
- a video game engine to animate our audience…
- spectator-side technologies: the headset exists; the hands exist; full-body tracking will soon arrive cleanly on all screens.
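The trickiest item on that list is the AI that adapts the artists’ movements to the spectators’. Purely as an illustration – the function name, the blend parameter, and the idea of simple linear blending are my assumptions, not how such a system would really be built – that adaptation step might look like:

```python
def adapt_motion(artist_frames, spectator_pos, blend=0.3):
    """Illustrative only: shift each recorded artist keyframe partway
    toward the spectator's live tracked position, so the performance
    'reaches' the spectator wherever they happen to stand."""
    adapted = []
    for frame in artist_frames:
        adapted.append(tuple(
            a + blend * (s - a) for a, s in zip(frame, spectator_pos)
        ))
    return adapted
```

A real system would of course reason about the whole skeleton and the choreography, not single points – but the interface is the same: recorded artist motion in, spectator state in, adapted motion out.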
With this, and with content that changes as much as possible, we will be able to really exploit virtual reality without ever merely copying real life – thus significantly broadening the entertainment content we can offer our audience.