What motion control game development can teach us about Virtual Reality

Hi, my name is Lau Korsgaard. I am a game designer involved with the Copenhagen Game Collective. Over the last five years I have been part of teams and projects that explored a wide range of new interfaces for games: Wii Remotes, the Balance Board, dance pads, Move controllers, Kinect, Buzz buttons and heck, even our own custom hardware. This August I participated in IndieCade and Oculus Rift’s VRjam together with Sebbe Selvig and Simon Nielsen, making the game Virtual Internet Hacker VR, which took first prize and a $10,000 check among the invited developers. I would argue that the lessons learned from years of development for motion controls can help us a lot in understanding development for virtual reality. I wouldn’t say this perspective is the only way to make good virtual reality games, but I hope it will help nuance game developers’ understanding of what virtual reality can and cannot do.

Bring VR to Reality

The promise of motion controllers

First, let me look back at the modern history of motion controlled games. Nintendo took the world by storm in 2006 by introducing the Wii and a philosophy of broadening the audience for games and making more immersive experiences by replacing the traditional “artificial” game controller with a more “natural” one. The rhetoric was clear from the beginning: you were in the game! There is no reason to learn complicated interfaces; just tap into your knowledge of how your body and the physical world behave and you’ll know what to do. Sony and Microsoft soon moved into the same space, each with its own answer. Most radically, Microsoft launched the Kinect with the slogan “You are the controller.” The first product vision video from Microsoft sold this promise pretty clearly: there is no layer between you and the game world. What you do and how your body moves translates to the virtual world and creates great, immersive experiences.

These new technologies failed to live up to their promises. The general consensus among game developers, fans and critics looking back at the last seven years of motion controlled games seems to be “great technologies, but we never saw them realized in a truly great game.” Lots of excuses have been made: the technology wasn’t ready (let’s make a “+” version), lazy developers, cross-platform uniformity.

What does this say about the future of the Oculus Rift? – I think a lot.

The promise of virtual reality


The Wii Remote and the Oculus Rift are two completely different technologies. They are made for different audiences and different play situations; heck, you could even say one is an input device and the other an output device (though I will question that later). So why compare them at all? Because they share the same promise! In fact, the Oculus Rift can be seen as the next piece of technology in a long line of attempts at removing the interface between the player and the game. The reason we are all excited about the Oculus Rift is to finally experience what it is to be “in the game.” So will Oculus succeed where all the other technologies failed?

I think the premise of that question is wrong.

I’m not sure that immersion is what we really want out of these technologies. There have been great motion controlled games made, and there is, especially in the independent games community, a thriving scene of developers making truly great games with these technologies. Motion controlled technologies have in many ways succeeded in fueling a creative discussion about how, when, where and with whom we play games. I believe that some of the lessons learned from successful Wii Remote hacks, Kinect prototypes and physical Move controller games can show a direction to take Virtual Reality. So let me try to sum up some key insights from motion controlled games and what they mean for Oculus Rift development.

Embrace the constraints of the technology

When some parts of the interface between player and game are simulated 1-to-1 it becomes much more obvious when other parts aren’t. The Kinect was excellent at tracking your full body position in real time, but the constraints of the technology forced the player to face the TV and more or less stand in one spot. Everybody thinks a 3rd person action adventure game would be really cool with Kinect, but when you realize you can’t move the player around in the world without reintroducing an artificial layer between the player and the game (some kind of special gesture that means move forward) then that genre becomes less promising.

Kinect Star Wars had more or less a 1-to-1 simulation of your upper body, but to move forward, the player had to place their feet in a special position and lean forward! This creates a conflict of modalities, some elements of the interface are simulation, and some elements are abstraction. This is not a minor problem; it breaks the fundamental promise of immersion.

A known control scheme like a gamepad is more or less invisible to a skilled player; it doesn’t take mental processing power to translate a desire (moving forward in the world) into a successful input (pushing the left thumb stick up). But trying to trigger a gestural binary button like the “move forward” gesture in Kinect Star Wars takes the player out of the experience. What is the solution?


Simply, come up with game ideas that embrace the constraints of the technology. Why was Fruit Ninja for Kinect such a great experience? Because there was no artificial layer between what you did and what happened in the game. You were a ninja slashing fruit, and you felt awesome.



What does that mean for Oculus Rift? Many Rift prototypes have the same problem: one part of the interface is simulated magnificently (your view) while other parts are abstractions (your controls). Many demos truly put you in an immersive virtual world, but when you have to start moving around, immersion is broken. I have to push buttons or move thumb sticks I can’t even see! In fact, if you want to make a game where you can turn around and move backwards, you are faced with the choice of either having the player struggle with wires and chairs or also giving control over the camera to a thumb stick. Suddenly two inputs, one abstracted and one simulated, are fighting over the same in-game actions!

What is the solution? Abandon the idea of First Person Shooters! Yes, I know, this is what we all want to play, but just as the 3rd person action adventure is the genre of broken promises for Kinect, the First Person Shooter (or FPS movement controls in general) is the genre of broken promises for Virtual Reality. If you want to move the player around, then situate them in a vehicle. This creates a much more natural relationship between the constraints of the play situation and the in-game world. Isn’t this limiting creativity? Yes, but that is a fact of all technology. A smart designer has to embrace the constraints of the technology instead of trying to force their dreams into it.
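To make the vehicle idea concrete, here is a minimal sketch (in Python, yaw in degrees; the function names and turn rate are my own assumptions, not from any particular engine) of how the two inputs can be kept from fighting: the thumb stick only ever steers the vehicle, and the tracked head orientation is composed on top of the vehicle’s orientation, so head tracking stays 1-to-1.

```python
def update_vehicle_yaw(vehicle_yaw, stick_x, turn_rate=90.0, dt=1 / 60):
    """Thumb stick input (the abstracted input) steers the vehicle only.

    stick_x is in [-1, 1]; turn_rate is degrees per second at full deflection.
    """
    return (vehicle_yaw + stick_x * turn_rate * dt) % 360.0


def view_yaw(vehicle_yaw, head_yaw):
    """Head tracking (the simulated input) is composed on top of the vehicle.

    The two inputs write to two different state variables, so they never
    compete for control of the same in-game action.
    """
    return (vehicle_yaw + head_yaw) % 360.0
```

For example, the player can push the stick to turn the vehicle while simultaneously looking 45° to the side; the final view is simply the sum of the two yaws, and neither input ever overrides the other.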

Design for the social context

Here is the reason you don’t play Wii Sports alone: you play to perform, and performance is fun only with an audience. Playing Wii Sports tennis alone quickly makes you question why you should swing your arm like a real tennis player when a quick jerk of the wrist is enough. Why act out the tennis fantasy? The problem is, playing tennis WITHOUT acting out the fantasy is incredibly boring, so playing Wii Sports alone quickly gives you the choice of either feeling bored or feeling stupid. All this changes when playing in a group. Suddenly your body is not just an input device; it is also an output device! Playing motion controlled games with friends is much more about looking at each other than looking at the screen. Successful motion controlled games are designed with this social context in mind; for instance, the playback functionality in Dance Central lets everybody re-watch how silly you looked after a session.


Happy Action Theater is another great example of motion controlled gameplay that is about performance, not immersion.

What does that mean for Oculus Rift? The Oculus Rift is also a social machine, believe it or not. Put on an Oculus Rift in a semi-public space like a university campus and start playing, and soon a crowd will gather around the player. My experience playing Oculus Rift is always deeply social: you stand in line waiting to try, you watch the player and comment on the action, you chat with the other spectators about what is going on, maybe you even tease the player with a surprise touch. Sitting alone playing in virtual reality for a longer period of time would feel… weird to me. Can this social context be utilized? Yes: think about how the player looks while playing, what the screen shows to spectators, how long a play session should be, or whether the spectators actually have a role in the game. First of all, think in concepts that work well in a social setting. Those concepts would rarely be immersive.

Terminal Vertigo By Tim Garbos and Lau Korsgaard

Look at the hackers

The greatest lesson learned from years of motion controlled games is that the biggest innovations come from underground, grassroots and subversive game development environments. Just take a look at Thomas Perl’s PS Move API and its UniMove extension; those software hacks alone have spawned more interesting games than mainstream studios have managed to develop for PlayStation Move: games like Johann Sebastian Joust, Glow Tag, Magnetize Me and many more. What is interesting is that this scene of PlayStation Move “hackers” subverts the intended use of the controller and makes games not imagined by Sony at all. The core technological idea behind the PlayStation Move was to achieve precise 3D position tracking using the PlayStation Eye camera and the glowing colored orb on the wand. The light and color of the wand were primarily a tool to give reliable input to the system, but the PlayStation Move hacking scene uses this feature as an output device, ditching the PlayStation Eye and screens in general.

Magnetize Me by Morten Mygind, Lena Mechtchanova, Amani Naseem, Giacomo Neri, Patrick Jarnfelt

What does that mean for Oculus Rift? The Oculus Rift is luckily a much more open platform than motion controllers for consoles. But as the PlayStation Move hacks show, hacking is not only a technological act; intentions, designs and assumptions can also be hacked. Crazy designers are already doing weird stuff with the technology, and looking in those directions might teach us something valuable about the hardware. What happens if we start ignoring some components of the Oculus Rift? If we treat inputs as outputs? If we use the Rift in places and situations that were never intended at all?

oculus experiment 2 by Marie Shreiner Poulsen, Sarah Homewood and Uffe Flarup

Share your thoughts