Designing for Kinect
It’s been a pretty busy week here at Twisted Pixel. We announced The Gunstringer, and the positive reaction we received has been really great and heartening – thanks everyone for your support! If you’re still catching up on the newspocalypse from last week, I’d recommend Russ Frushtick’s great summary on MTV Multiplayer, as well as our fearless CEO Mike Wilford’s quick interview with Brad Nicholson on Giant Bomb.
One common theme in all of the questions has been “how does the game control?”. Is it similar to or different from our other games? Do I have to use my whole body? Why is this a game that only works on Kinect? All of these are good questions that aren’t answered in quick snippets of gameplay.
From the start of development on The Gunstringer, we’ve focused on getting across the feel of puppeteering as well as the feel of being an awesome kickass undead cowboy. It’s really only something we could do with the Kinect sensor for two big reasons: pure analog actions and full skeletal data.
There are two partial exceptions to this worth talking about: mouse control and accelerometer control.
Mouse control is a great example of analog control, but it works best as a target acquisition device rather than a movement device because of the way it moves – there’s no “null zone” to rest your movement in. Because of that, it’s very easy to creep out of a reference position over time if you’re using it for anything besides targetable movement.
Accelerometers try to solve this, but at the end of the day you’re interpreting a single point of data floating in the air. For most developers, the simple way to interpret that data is to build a library of gestures by recording every possible movement of that single reference point, then match the player’s movement against that library to trigger a binary gesture. You’re essentially trying to map binary actions onto analog movements, at which point you could just play the game with a controller. Gesture libraries and waggle are the designer’s way of fitting a square binary peg into a round analog hole.
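To make the “square binary peg” concrete, here’s a minimal sketch of the gesture-library approach described above – not any shipping game’s code, just an illustration of the pattern: each template is a trajectory of a single tracked point, and the player’s movement is matched to the nearest template to fire a yes/no gesture. The resampling, the distance metric, and the threshold are all assumptions.

```python
import math

def resample(points, n=16):
    """Resample a trajectory to n points, evenly spaced by index."""
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def trajectory_distance(a, b):
    """Average pointwise distance between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def match_gesture(movement, library, threshold=0.5):
    """Return the name of the closest template, or None.

    Note the binary outcome: all the analog nuance of the movement
    collapses into 'which gesture fired', exactly the problem the
    article is describing.
    """
    movement = resample(movement)
    best_name, best_dist = None, float("inf")
    for name, template in library.items():
        d = trajectory_distance(movement, resample(template))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None

library = {"swipe_right": [(0, 0), (1, 0), (2, 0)],
           "swipe_up":    [(0, 0), (0, 1), (0, 2)]}
print(match_gesture([(0, 0), (0.9, 0.1), (2.1, 0.0)], library))  # swipe_right
```

However smart the matcher gets, the output is still a button press in disguise.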
Then, there was Kinect. Like most new human-computer interfaces, working with Kinect requires you to rethink and relearn all of the interaction rules and behaviors you’ve learned previously. We spent a lot of time with various prototypes trying to figure out what was fun and what wasn’t, and over the course of many (and I mean many) iterations we found a bunch of really cool things that the Kinect sensor does really well.
One great lesson we learned from our Kinect prototyping: because it’s reading information about your whole body and not just a point, we know where each of your joints is in real, bodily terms. “Your hand is stationary next to your hip” is incredibly more useful than “this dot of information isn’t moving”. Because your limbs have natural resting positions and extents, you also get the same benefits of Fitts’ Law that you’d get with a physical device like a thumbstick!
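A toy sketch of what that semantic question might look like in code – the joint names, units, and thresholds here are all invented for illustration, not taken from the Kinect SDK or The Gunstringer:

```python
import math

def hand_resting_at_hip(joints, hand_speed, rest_radius=0.15, speed_eps=0.05):
    """Is the hand stationary next to the hip?

    joints maps joint names to (x, y, z) positions in metres;
    hand_speed is the hand's speed in m/s since the last frame.
    With only a single floating point of data, the best you could
    ask is "is this dot moving?" -- here we can ask *where* it is
    resting relative to the rest of the body.
    """
    near_hip = math.dist(joints["right_hand"], joints["right_hip"]) < rest_radius
    return near_hip and hand_speed < speed_eps

skeleton = {"right_hand": (0.30, 0.90, 2.0), "right_hip": (0.25, 0.95, 2.0)}
print(hand_resting_at_hip(skeleton, hand_speed=0.01))  # True
```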
If you try to apply normal game mechanics that use binary actions to this analog system, though, you have the same waggle problems as with accelerometers. But if you design your controls for true analog inputs instead, you can really make something new and inventive.
This is what we set out to do with The Gunstringer. Marionetting isn’t about binary actions like “move in this direction at x speed”, it’s about the analog feel of a puppet. Because of that, we ended up building a unique control system that uses your hand, wrist, arm and shoulder to determine how to move the Gunstringer through the environment.
Having that whole tree of skeletal information allows us to make really unique decisions. We know where your hand is relative to your shoulder and body, so you can move the Gunstringer anywhere along the screen just by moving your hand to that location instead of doing the “move left, move left, no move right, okay stop” shuffle that you’d have to do with an analog stick or D-Pad.
This isn’t limited to movement, either. Since we know how your entire arm from your hand to your shoulder is moving, we can accurately extrapolate what you’re aiming at with your hands, and place the reticle exactly where you’re pointing. It allows you to do either huge swipes with your hand, or smaller, more precise movements to target something specifically.
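One plausible way to extrapolate an aim point from the arm – again, an assumed geometry rather than the game’s actual math – is to cast a ray from the shoulder through the hand and intersect it with a virtual screen plane in front of the player:

```python
def aim_point(shoulder, hand, screen_z=0.0):
    """Where the shoulder->hand ray crosses the plane z == screen_z.

    shoulder and hand are (x, y, z) in metres, with the player standing
    at positive z facing the sensor at z = 0. Returns the (x, y) aim
    point, or None if the arm points away from the screen. Because the
    ray uses two joints, small hand motions near the body still produce
    precise reticle moves, and big sweeps produce big ones.
    """
    dz = hand[2] - shoulder[2]
    if dz >= 0:  # arm not pointing toward the screen plane
        return None
    t = (screen_z - shoulder[2]) / dz
    return (shoulder[0] + t * (hand[0] - shoulder[0]),
            shoulder[1] + t * (hand[1] - shoulder[1]))
```

For example, a shoulder at `(0, 1.4, 2.0)` with the hand at `(0.2, 1.4, 1.6)` aims at roughly `(1.0, 1.4)` on the plane.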
It also allows us to implement a real “fire” command using your arm without it conflicting with the hand movements you need to mark targets to kill. Our fire action involves literally firing your six shooter as if you just felt recoil in your arm. Because we can look at the full arm instead of just a point, we can tell the difference between the player firing a gun and the player just moving the reticle upwards.
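A tiny sketch of that distinction, with made-up thresholds and joint speeds (the real system presumably looks at much richer arm data): a recoil flick snaps the hand up sharply while the shoulder stays planted, whereas moving the reticle translates the whole arm together.

```python
def is_fire_gesture(hand_up_speed, shoulder_up_speed,
                    flick=1.2, still=0.2):
    """Distinguish a 'fire' recoil flick from an aiming move.

    Speeds are upward velocities in m/s. With only a single tracked
    point you'd see 'the dot moved up' in both cases; with the full
    arm you can tell a wrist snap (shoulder nearly still) from the
    whole arm rising to re-aim the reticle.
    """
    return hand_up_speed > flick and shoulder_up_speed < still

print(is_fire_gesture(1.5, 0.05))  # True  -- sharp flick, shoulder planted
print(is_fire_gesture(0.9, 0.80))  # False -- whole arm rising to aim
```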
In layman’s terms: you get to pew pew enemies with your hands.
All of this comes together to form a core gameplay experience that’s really awesome, and genuinely unlike anything else you’ve played before. I still get a stupid grin on my face when I dodge a couple of chasms by throwing around the Gunstringer, then tag six puppet guys to kill by swiping over them, then fire my six shooter cap gun to take them all out in a mess of wood and stuffing explosions.
If you’re coming to PAX East, you can try this first hand at our booth #723! We’d love to see you there!