I originally started writing this article about six months ago, long before this blog even existed. I just knew I had to get the ideas out of my head and organized. Then I saw a video that bummed me out, since it presented pretty much the same thing, and I scrapped the whole article. Later on, however, I realized that the execution in the video (sadly, I can’t seem to locate it anymore) was a bit sloppy and not even about the thing I was going to write. So, here we go: posture input and games.
More after the jump
Posture-guided gaming, you ask? Well, let me explain. You know how “high-end” (as in realistic) FPS games tend to have these lean buttons (lean left, lean right)? If you’re a gamer, you do. Anyway, they are used to change the character’s posture so that he stays standing in the same place but twists his torso to either side, letting him peek around corners. I personally tend to hate those buttons, since they are a tad hard to handle and are usually in the way of more important buttons, like reload. The next logical step was, of course, to think of something to replace such buttons, and if possible, to think outside the box.
This is what I came up with.
Again, if you’re a gamer, you know how, when the going gets intense, you tend to physically try to dodge the rocket that’s coming at your screen by jerking sideways in your chair? Or you’re creeping along the enemy base, getting closer to an edge below which there are supposedly enemies. As you inch closer, you instinctively rise from your chair, trying to look over the edge. This is natural, instinct-driven behaviour, and as such it should definitely be taken advantage of.
So, how in the hell could we achieve such things? At first I was thinking along the lines of the old IR LED trick, where you shine a couple of IR LEDs at the person’s eyes and catch the reflections from the retina with, say, a web camera without an IR filter. All nice and tried-and-true, but not practical for the common gamer, since it would require a completely new set of devices and interfaces to be built, and as we all know, that takes time. So, I had to think of something different. A blank stare and a few blinks later: facial recognition! Such software should easily (as easily as things get in the face recognition world) detect the posture of the gamer, whether he’s sitting straight, peeking over something or around something, and change the view accordingly.
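To make that a bit more concrete, here’s a rough sketch of what I mean, purely illustrative: it uses OpenCV’s stock Haar cascade face detector (just one option among many), and the threshold and the whole neutral-position calibration are made-up placeholder values, not anything from an actual product.

```python
# Rough sketch: read the gamer's posture from a webcam frame.
# Purely illustrative; uses OpenCV's bundled Haar cascade face detector,
# and the threshold value is an invented placeholder.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_posture(frame, neutral_x, neutral_y, threshold=40):
    """Return 'lean_left', 'lean_right', 'rise' or 'neutral' depending on
    where the face sits compared to a calibrated neutral position."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "neutral"  # no face found: don't move the character
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # pick the largest face
    cx, cy = x + w / 2, y + h / 2
    if cy < neutral_y - threshold:
        return "rise"        # player is getting up to peek over an edge
    if cx < neutral_x - threshold:
        return "lean_left"   # note: flip the frame if directions feel mirrored
    if cx > neutral_x + threshold:
        return "lean_right"
    return "neutral"
```

You’d calibrate once while sitting straight (grab a frame from the webcam, detect the face, store its centre as neutral_x/neutral_y), then poll this every frame and feed the result into the game as if the lean keys were being pressed.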
Here’s the basis of how it would work.
These movements would be translated into the player character’s torso/head movements (with the camera nailed to his head, of course), so as the player jumps in his chair to dodge the rocket, the character dodges the rocket in the game world. Now, imagine playing an FPS and standing behind a tree: you would lean right and your character in the game would do the same thing, revealing the view to the right of the tree; lean left and the character would react likewise, he could even switch his weapon to the other side to reflect the stance, so that he could shoot from the left (him being right-handed originally). Next, imagine you’re lying prone on a rooftop, sniping people (in the game, of course); in the neutral/normal stance you would look through the scope, then you could, say, lean right and the character would lean right, ever so gently, looking past the scope (from the right side, the scope and the gun filling the left side of the screen), letting you point the gun in the rough direction of your target, then move back to the normal stance and do the rest through your scope. You could even peek over the scope, seeing the whole landscape at one glance, and of course revealing your precious skull to the counter-sniper.
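And it wouldn’t have to be a crude on/off switch like the lean keys are. The raw horizontal offset of the face could be mapped to a continuous lean angle, so a small shift in your chair becomes a small tilt of the character’s torso. Something along these lines, where the maximum angle and the dead zone are invented values for illustration, not numbers from any real engine:

```python
# Rough sketch: turn a normalized head offset (-1.0 .. 1.0, measured from the
# calibrated neutral position, e.g. (cx - neutral_x) / (frame_width / 2))
# into a torso lean angle for the character.
# MAX_LEAN_DEGREES and DEAD_ZONE are invented, illustrative values.
MAX_LEAN_DEGREES = 25.0
DEAD_ZONE = 0.1  # ignore tiny wiggles so the camera doesn't jitter

def lean_angle(offset: float) -> float:
    """Map the player's physical lean to the character's torso lean."""
    if abs(offset) < DEAD_ZONE:
        return 0.0
    sign = 1.0 if offset > 0 else -1.0
    # Rescale so the angle ramps up smoothly once we leave the dead zone.
    magnitude = (abs(offset) - DEAD_ZONE) / (1.0 - DEAD_ZONE)
    return sign * min(magnitude, 1.0) * MAX_LEAN_DEGREES
```

The dead zone is the important part: without it the character would constantly twitch along with every little shuffle you make in your chair.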
Now, imagine this in an internet-based game, let’s take Enemy Territory: Quake Wars for example, and try to imagine the impact it would have on the immersion of the game. You would feel like you’re actually in the game, doing your thing, as the view and perspective would reflect your movements. Now, imagine looking through this looking glass at Quake Wars scenery where people are peeking around corners, over edges and under fallen trees. Real posture changes are far more natural than computer-generated/tweened animations, hence the other player characters would act much more realistically, with practically limitless variations.
Now, if you could get the software to recognize your face, figure out your posture and use a simple web camera for all of this, practically every gamer could just start using posture input in their games, provided the games supported such an animation/input system.
So, what do you think?