This is the third post, stimulated by the GDC Animators’ Roundtable, on where character animation might be going as we enter the virtual reality phase in gaming.
So what will be the big step forward in player engagement in VR games, if not some variation on shooting non-player-controlled enemies? What is going to have the appeal of shoot-or-be-shot warrior game play in an increasingly real virtual space? What might be transformative in the way Doom was for the FPS genre? I think one answer is going to involve much higher quality NPC acting.
Or rather, NPCs that not only act believably, but react to us, and interact with us. This is the next frontier in VR. It will require a step up in gaming AI, and definitely a big step up in the quality, complexity, and deftness of in-game character animation. But emotionally realistic reactions and real-time interactions, grounded in the game context and player behavior, offer the potential to thrill and engage on a new, deeper level.
In part 2 I suggested that the thrill of shooting and ‘killing’ NPCs in shooter games comes from a developmental pleasure we get from playing ‘warrior,’ not from the innate sadism or sociopathy that some people assume. Because we lack an innate pleasure in violence, realistically killing human-like NPCs will easily be experienced as disturbing in VR. Game designers will need to make enemy VR NPCs more stylized, more absurd, more ugly, more alien, more anything, as long as they are less human. Which means that the thrill of ‘playing warrior’ will be much the same as it is now in non-VR games. The VR environment might put some new spins on the genre, but the ‘killing’ mechanic will still need some degree of abstraction from our emotional sensitivity.
VR is all about presence and physical immersion. It will be up to character designers, animators, and programmers to bring that sense of presence and immersion to the characters the player encounters. What is the low-hanging fruit in that endeavor?
First and foremost, NPCs whose eyes lock onto you as the player when you enter their space. This is overwhelmingly the most important initial human interaction in any culture, and it’s what never happens in current games. There’s a good reason why we always scan the eyes of a person when we encounter them, and why audiences spend the vast majority of their time looking at the eyes of the characters on screen. There is a staggering amount of information we gather from looking at the eyes.
How and when we meet another person’s eyes, and what we do with our eyes, and what they do with theirs, is the most important bit of social signaling we do. We signal, and are signaled to. Dominance, fear, surprise, lust, happiness, wariness, confusion, contempt: it’s all there. VR headsets that support gaze tracking will let players signal the characters within the game when they (the player) are actually looking at them (the NPCs). If those characters respond to the player’s gaze in meaningful ways, with precision, it will be powerful. We’ll believe almost anything about those characters if they seem to actually see us and respond to us. If those characters also blink appropriately, and shift their gaze to follow the player’s, and so on, the player will be sold.
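To make the gaze interaction concrete, here is a minimal sketch in Python. It assumes only what gaze-tracking headsets actually provide, a head position and a gaze direction each frame; everything else (the function names, the `npc` object and its attributes, the fixed blink interval) is my own illustration, not any engine’s API.

```python
import math

def looking_at(npc_pos, player_pos, player_gaze_dir, threshold_deg=10.0):
    """Return True if the player's gaze ray points within threshold_deg
    of the direction from the player to the NPC.
    player_gaze_dir is assumed to be a unit vector."""
    to_npc = tuple(n - p for n, p in zip(npc_pos, player_pos))
    dist = math.sqrt(sum(c * c for c in to_npc))
    if dist == 0.0:
        return True  # player occupies the NPC's position
    # cosine of the angle between the gaze ray and the direction to the NPC
    cos_angle = sum((c / dist) * g for c, g in zip(to_npc, player_gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= threshold_deg

def update_npc_eyes(npc, player_pos, player_gaze_dir, dt):
    """Per-frame eye logic for a hypothetical npc object: meet the
    player's gaze when looked at, and blink on a simple timer."""
    if looking_at(npc.position, player_pos, player_gaze_dir):
        npc.eye_target = player_pos  # lock eyes with the player
    npc.blink_timer -= dt
    if npc.blink_timer <= 0.0:
        npc.blink()                  # play a blink animation
        npc.blink_timer = 3.0        # crude fixed interval; randomize in practice
```

A real system would add the saccades and face-scanning eye darts described above, plus randomized blink timing, but the inputs (head position and gaze ray) are exactly the signals a gaze-tracking headset supplies.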
Consider Alyx Vance, from Valve’s Half-Life 2. She was introduced in 2004, and I think it’s safe to say she’s one of the most effective, and affecting, NPCs ever. Twelve years later, we have more detailed performances from NPCs in newer games, with higher-resolution texturing, modeling, and lighting, but I don’t think they’re significantly more engaging. They, like Alyx, are limited especially by their eyes. Or rather, by what their eyes don’t do. The illusion of interaction is wrecked when the character isn’t quite looking at us, when they don’t blink and make appropriate eye darts, when they never scan our faces.
Add in believable, context-appropriate changes in NPC body language in response to the player’s actions, and the effect should be powerfully convincing. That won’t be easy, but it’s not Manhattan Project hard, either. Right now even the best human NPCs tend to fall into the uncanny valley, and their dead eyes and vague behaviors are the major causes. If we get the eye behavior right, and the basics of body language, these characters will engage players like no NPCs ever have. This will open up options for game play mechanics and gaming complexity that will take years of exploration.
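One minimal way to structure those body-language reactions is a small state machine. The sketch below is hypothetical throughout (the states, thresholds, and input names are mine, not from any shipping engine): the NPC picks a posture each frame from the same context cues a VR rig already reports, player proximity, gaze, and actions.

```python
# Illustrative NPC body-language state machine. States, inputs, and
# thresholds are hypothetical; a production system would layer blended
# animations and hysteresis on top of something like this.

RELAXED, ALERT, WARY = "relaxed", "alert", "wary"

class BodyLanguage:
    def __init__(self):
        self.state = RELAXED

    def update(self, player_distance, player_is_looking, weapon_drawn):
        """Choose a posture from simple context cues, once per frame."""
        if weapon_drawn:
            self.state = WARY      # tense up, step back, raise hands
        elif player_distance < 3.0 or player_is_looking:
            self.state = ALERT     # turn toward the player, open posture
        else:
            self.state = RELAXED   # drift back to the idle loop
        return self.state
```

Even a crude mapping like this, driven by gaze and proximity, would put NPC reactions well ahead of the canned idle loops most games ship with; the hard part is the animation deftness needed to blend between postures believably.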