Theory: Player Expectations of Video Game Stories As a Resource

SPOILER WARNING: This post discusses the stories of Mark of the Ninja and Persona 4. If you haven’t played them yet, you might want to stop here. They’re both great games that deserve to be experienced without knowing what’s coming.

In order to use every resource at their disposal, game designers first have to recognize all of the possibilities. Some are easy and obvious: computer memory, playing cards. Others are more esoteric: tablets as addenda to board games, for example. Finally, there are the little-used, usually-unnoticed, difficult-to-employ resources, the narrow ones that are exceptionally powerful in the right situation. As an example, let’s take a look at player assumptions about how video game stories work.

Most video games have some unrealistic aspects that we accept, expect, and do not even process as out of the ordinary. Invisible walls limit our progress, even though the world appears to extend further. We can’t cross low objects because there’s no jump button, even though in the real world a few books or papers on the floor would pose no barrier. Conversations are limited and short to the point of rudeness. After a while none of these inhibit suspension of disbelief; we grow accustomed to and ignore them.

Mark of the Ninja does something canny with the way we accommodate ourselves to technical limitations on video game stories. Rather than just going along with our expectations, it uses them as a resource to make the story better.

(The SPOILERS start right here!)

In the game players take the role of a ninja who has gained Real Ultimate Power–but the power comes at the cost of gnawing, growing insanity. Players are led through the ensuing adventure by a helpful fellow ninja who provides guidance, commentary, and focus.

The guide routinely appears from nowhere and then disappears off-screen, never to be caught up with, no matter how quickly the player-ninja moves. Occasionally the guide even does the impossible, managing to “find another way” when there’s only one route.

I suspect that most players write all of that off completely, just as I did. Sure, the NPC guide can do unlikely things; NPCs often can. They’re narrators, not subject to the limitations of the reader/player. Having them enter and leave the screen at all is a kindness, a nod to verisimilitude.

The big twist comes at the end of the game, when we learn–

(Seriously, SPOILERS!)

–that the guide was never there at all! She’s a visual representation of a voice in the ninja’s head, created by the ninja’s own mind so that the voice would seem to be coming from somewhere. Far from being an anchor against insanity, she’s a symptom of it!

With that single reveal, the entire game is thrown into question. Up to that point the goal has been clear: first to stop an attack on the ninja’s clan, then to take revenge on the evil megacorporation that attacked the clan, and then to take revenge on the clan’s own leader when it becomes clear that his schemes prompted the attack in the first place. However, all of those goals were presented by the guide–who was never really there, who’s just an expression of the ninja’s insanity. The player has to reevaluate the evidence, based not on the suddenly-unreliable narrator but rather on the player’s own judgment.

At the end of the game the player has to decide whether to strike down the clan leader. It’s a gripping moment because of the way expectations have been turned on their head. Throughout the game the player had a confidence born of the simplicity of most video game storytelling: the NPC tells you what to do and the challenge is in doing it. Now the NPC safety net is gone, and the player feels all the more adrift because it was once there.

By playing with expectations Mark of the Ninja goes beyond telling players that their character is going insane. It makes the player feel confusion and distress, thereby moving some distance toward putting the player in the character’s shoes. Mark of the Ninja uses the player’s assumptions just like it uses art, voice acting, and sound effects: as a resource it can call on to promote immersion.

Persona 4 does something similar, although its implementation of the technique is arguably less impressive. At the end of Persona 4 its characters feel that they have solved a mystery: they’ve discovered the identity of a murderer, learned his motives, and stopped his crime spree. There’s even an end-of-game wrap-up of the sort that typically closes video game stories.

However, attentive players will note that the facts don’t quite add up. Good detectives among them won’t be satisfied, and if they insist on continuing to play rather than letting the credits roll, they’ll discover that the game isn’t over yet! The true ending, with its ultimate reveals and conclusive answers, lies a few hours of gameplay beyond.

Unfortunately, it’s not easy for good detectives to signal that they want to keep investigating. Doing so takes an unintuitive command that isn’t even legal at any other point during the game, with the result that checking an FAQ is almost required. That steals a lot of drama from what’s supposed to be the player’s biggest test.

Nevertheless, Persona 4’s handling of the player’s expectations is interesting and worth studying. The game demands that players rise above genre conventions to look hard at the mystery and decide for themselves whether it’s been solved, rather than going along when the game signals that they’re done. In the process Persona 4 captures something of real-world detective work, which is rife with incentives to stop investigating and close the case. A single weakness in Persona 4’s implementation thus takes nothing away from the superb underlying idea of using the player’s own assumptions to make her participate more fully in the mystery.

Video games have developed, as a medium, certain conceits. Good games can rely on those conceits, using them to hide technical limitations. It’s possible to go further, however, using not just the conceits as a resource but the expectations they’ve given rise to as well. Mark of the Ninja and Persona 4 demonstrate that being conscious of player assumptions regarding “how game stories work” allows designers to play off of them, potentially to very powerful effect.

Theory: Greater Immersion Through Fewer Options

Designers seeking immersive gameplay are urged to give players more choices. Life, though, is more often about working within constraints than it is about total freedom of decision. The way to create a more realistic environment is thus not to allow players unlimited control, but rather to impose realistic boundaries on their influence.

We all deal, in the real world, with limits on our decisions. Circumstances can make some paths unavailable, for example. Lack of money and time prevent us from doing things we think would be fun or even important; obligations to family and work call on us to let certain opportunities pass by.

Even if we can free up the money and time to do something, other people can stymie our efforts through malice or innocent misunderstanding. A co-worker might accidentally trip up a project, or—circumstances rearing their ugly head again—need to prioritize something else. Third parties with their own objectives can get in the way.

Forces beyond our control also impact what we can accomplish. Bad weather, car trouble, a labor strike—there’s no end of external situations that force us to adapt.

To feel realistic, a game should acknowledge these limitations. Life is not subject to comprehensive planning and thoroughgoing control. An immersive game shouldn’t be, either.

Immersion Done Right: Persona 3

As one example of a limitation that we can confront in the real world, consider team dynamics. Coordinating a group can be just as difficult as the problem to be solved. People might have different visions of what the final result should look like, causing them to work at cross-purposes. Lack of communication, especially in emergencies when the group is pressed for time, can result in group members wasting effort or even undermining each other. Sometimes someone just plain messes up, and the error impacts what everyone else is doing. Working well in a team is a challenge unto itself.

Shin Megami Tensei: Persona 3 (usually just called “Persona 3” in the U.S.) is one of the very few games to reflect that challenge. Whereas most adventure games give the player control over a party, allowing the player to operate an entire squad like a well-oiled machine, Persona 3 limits the player to controlling a single adventurer, with everyone else acting in accordance with their own strategies and preferences (as dictated by the AI). The result is an experience that feels like a real-world team activity; all involved are trying to achieve the goal, but they’re not always in perfect sync.

It should be said that the AI teammates’ independence can be deeply frustrating. The AI for one character is so preoccupied with a single move that Google will autocomplete the phrase “Mitsuru Ice Break.” Persona 3’s combat system is all about doing the right moves in the right sequence, and the computer-controlled teammates can’t be relied on to follow the playbook.

Yet, their imperfections contribute greatly to the game’s immersion. The other characters are more realistic and compelling because they behave like independent actors, possessed of their own agency. Put simply, they feel like people, and the game feels like a place where people live.

Persona 3 thus becomes more enthralling even as it limits player control. It poses a real-world problem—a group member’s bounded influence over other members of the group—and obliges players to solve it just the way they would in the real world, by learning about the rest of the group and finding ways to work in concert with them. The result is a game which features monsters and magic and robots, but which also has a nugget of reality that makes it easy to suspend disbelief.

Immersion Done Wrong: Persona 4

Persona 4 is an adventure game much like Persona 3, but—perhaps in response to complaints about Persona 3’s teammate AI—its designers gave the player more control. In this iteration of the series the player is allowed to manage each group member directly, choosing their actions by hand.

As is so often the case, we didn’t know what we were wishing for.

Persona 4’s characters are, like Persona 3’s, well-written and likeable. However, the change in combat control removes some of the sense of independent reality that made Persona 3’s teammates so special. They’re not people anymore; they’re extensions of the player’s will.

Unfortunately, that loss reverberates throughout the experience. Where Persona 3 was enthralling, a chance to step into and inhabit a different world, Persona 4 is a touch game-y. After hours and hours of combat in which the other characters are just extra action points, it’s hard to switch gears and treat them as living, breathing individuals in the story sequences.

To be fair, it’s possible to set the teammates back to AI control. However, even that feels artificial; the player is allowing them their independence, subject to their making one too many mistakes. Having let the control genie out of the bottle, Persona 4 can’t put it back in.

That’s a shame, because Persona 4 does just about everything else right. It’s got some superb writing, an imaginative ending sequence, a combat mechanism that’s just complex enough to generate decisions without being so elaborate as to weigh down the game, and a valuable improvement to Persona 3’s user interface that saves a lot of aggravation. Yet, Persona 3 is ever-present on my “play again at some point” list . . . and Persona 4, which gave me control at the cost of immersion, just isn’t.

Give With One Hand, Take With the Other

Immersion doesn’t require that the player be able to do everything. After all, we don’t get—or even expect—total freedom of action or absolute control in our real lives. For a truly immersive experience, it’s better instead to impose reasonable limitations on the player. Such limits might not sound very exciting, but they make for a more realistic and compelling game.

Theory: Better Spectating Through Strategic Understanding

The New York Times has a superb article on basketball’s “Triangle offense.” It’s interesting for its exploration of basketball strategy and personalities. What I really found gripping as a designer, though, is its discussion of how much people who understand the Triangle enjoy watching it used.

Other articles have pointed toward the idea that the way to have the most fun as a spectator of a game is to really get its inner workings. The classic example, in my mind, is an article written years ago about a Street Fighter match between Justin Wong and Umehara Daigo. Unfortunately the original seems to have been lost to time, but in brief summary: Umehara knows that Wong wants to win with chip damage. He therefore puts himself at the precise distance to get Wong to use a specific move–and then counters all the parts of that move, one after another in rapid succession, before counter-attacking for the win. As the article pointed out, without an understanding of Umehara’s strategy the match video looks like a feat of dexterity, neat but something anyone who’s spent time in practice mode could do; with the necessary understanding, it becomes a one-in-a-million combination of physical and mental achievement that marks out a true master.

Both basketball and Street Fighter are complex games whose strategy is not obvious to the casual observer. Announcers and commentators help bridge that gap, but they can only go as far as they themselves understand; the New York Times article notes that even most basketball professionals can’t explain how the Triangle works, much less pass the knowledge along. I’m left to wonder: what can we do, in-game, to help spectators see what great players know?

Theory: Chambers Bay and the Point of Tournaments

What is a tournament about?

I don’t generally follow professional golf, but it’s just provided a case study that encapsulates a larger debate regarding what tournaments should be used for. The U.S. Open played over the weekend was–at least according to some–a weak measure of overall golf skill. Yet, it was an excellent test of who was tops at solving unique problems on a specific day. Views of the event thus depend on the purpose one thinks tournaments should serve, making it a great vehicle for discussing the issue.

A bit of background is in order. The U.S. Open is a major golf tournament held each summer. It moves from course to course, and this year was held at Chambers Bay in Washington state. Chambers Bay proved controversial. The drought afflicting the West Coast hit the already-difficult course, substantially impacting how it played. One player memorably said that Chambers Bay Golf Course’s dry, bumpy greens were “pretty much like putting on broccoli”; another replied that the greens didn’t have broccoli’s color, and were more like cauliflower. All of this led some to complain that the results were not a proper measure of skill, including one player who went off on network television about good putts being derailed and bad putts being knocked in.

For all the problems, though, the fact remains that the U.S. Open was extremely interesting to watch. Gut checks were constant. Can this guy get out of a giant maw of a sand trap, ten feet deep? One player was ill; could he make it through the last few hours? The course is extremely hilly, and the golfers had to keep it together when they hit a ball up toward the hole . . . and it rolled all the way back down to where it had started. (That happened more than once!) I can’t say whether Jordan Spieth, the ultimate winner, was the best overall golfer—but he was certainly the best at adapting to a crazy, challenging situation.

Is that enough to deserve to win? There are two camps, and it’s good practice as a lawyer to think about both sides’ arguments. Here they are, presented for your consideration.

The prosecution: this isn’t why we have tournaments.

We call the U.S. Open a championship event. That’s because it’s intended to determine who is the champion—the best. Chambers Bay, and by extension this U.S. Open, didn’t accomplish that.

Tournaments are, fundamentally, measures of skill. That’s why we have tournaments, with all their formal rules: to strip away everything that isn’t skill, and to ensure to the greatest extent possible that the most skilled player wins out. At a fundamental level, the difference between a tournament and playing casually with friends is that tournaments have as their purpose showcasing skill, with fun as a secondary goal, whereas most “play” is the other way around.

Being the best golfer means performing difficult tasks reliably. First one must hit a tiny ball with a stick so that it travels accurately to a location hundreds of yards away. Then one must hit the ball, again using a lengthy stick, so that it drops into a hole barely larger than the ball itself. We make tournament golfers do these things again and again, seeing who most consistently chooses the precise angles and forces necessary to get the ball where it is meant to go. Those choices are the essence of skill in golf, and we gather lots of data about how golfers make them so that we can make fine judgments between who is good at these decisions and who is the best.

Chambers Bay’s ragged greens undermined that analysis. We can’t say who was most reliably able to pick the correct angles and forces, because good decisions were undone by flaws in the playing surface. A random element crept in which made it impossible to say whether scores were based on skill or luck.

The 115th U.S. Open might have been won by the best golfer . . . or it might not have. We don’t know, and a tournament that leaves us uncertain on that point is a tournament that hasn’t done its job.

The defense: the event did everything an event can do.

It’s absolutely true that tournaments are meant to measure who’s the best. However, no single event can determine who’s the best in some cosmic sense. Chambers Bay did a fine job of establishing who was the best on the day, and that’s all we can ever ask of a tournament.

There’s never been a game where all involved played completely perfectly, to the outermost limit of their skill, so that we can say the winner was absolutely supreme. We have to accept that everyone involved in a tournament is merely mortal, and that every individual data point about player skill is therefore flawed and approximate. Overall best-ness can only be determined in hindsight, when more such points have accumulated than any single tournament can offer.

Once we acknowledge the limits on all tournaments, it’s clear that the U.S. Open was great. The event made players stretch beyond their normal boundaries, and allowed those who could do so to shine. Perhaps it tested a golfer’s determination and even keel more than the golfer’s putting stroke; if so, that’s not a bad thing. Focus and steadiness are traits of good golfers, too.

Twenty years from now we’ll be able to look back and say whether Jordan Spieth was the best, or whether someone else was and Spieth merely took this particular event. We need that extra time, with all the information it will bring, to make that evaluation. In the interim, the U.S. Open was a demanding challenge that tested the limits even of top players. An interesting game resulted. That’s what it’s reasonable to want from a tournament, and that’s exactly what the Open and Chambers Bay gave us.

Jurors, the matter is now in your hands

I have a personal sympathy for the second position–but the first has its merits. So: was this a good U.S. Open? More generally, are somewhat offbeat tournaments acceptable?

Theory: Contests

First, I wanted to follow up on last time’s post regarding Twitter by noting Eric Lang’s “TCG Design 101” tweets. Each one is a superb distillation of years of design experience. They’re the kind of content that makes Twitter so valuable to designers, and are well worth checking out.

Unfortunately, sometimes one isn’t inspired to create the next great TCG—or much of anything else, for that matter. When that happens to me, I often check out game design contests. They’re very useful for breaking through mental blocks, getting out of comfort zones, and generally putting the creative engine into gear.

Contests provide two invaluable things to designers searching for inspiration: a seed (e.g., “a game involving kings and queens” or “dexterity game”) and a deadline. The value of the former is plain. In game design as in writing, one of the most challenging parts is facing a blank page and having to narrow down the universe of ideas. Having a requirement to work from makes things a lot easier.

The value of a deadline, too, ought not to be underestimated. There’s nothing better for forcing movement, for getting past speculation and starting to design. When you’re in a rut, an impetus to do something now can be very helpful.

There’s a range of contests out there, formal and informal, some with long histories. I’d encourage anyone looking to get past a roadblock, or improve their skills, or get some feedback, or just to try their hand at the art of design to give one a try.

Theory: Add Instead of Subtracting

Sometimes game design is about filing down rough edges, implementing things in ways that remove small but irksome play issues. I’ve run into one opportunity to do that recently: it’s often better to add than to subtract.

This might seem pretty trivial; after all, addition and subtraction are basic skills everyone learns in elementary school. However, it turns out that subtracting can lead to weird rules issues. Rather than have to deal with them, it’s often better to see if the same effect can be achieved through addition.

By way of example, consider a game where players roll dice and try to get above a certain number. (In other words, the vast majority of games with dice in them!) As the designer, you’ve decided that in certain situations the player should be less likely to succeed. Should you subtract from the player’s roll, or add to the total needed?

From a mathematical perspective, the two can be exactly the same: needing to roll 12 or better with a -2 penalty is the same as needing to roll 14 or better. Subtracting, however, can create problems in extreme circumstances. What if the total of (roll – penalty) is less than zero? Does that have meaning?

Don’t laugh—it’s possible that a negative result could. In an economic game, for example, negative cost might serve as a way to reflect economies of scale. In a wargame based on ancient Greece, where morale was the most important factor for a defending army, a negative attack value might represent an attack so weak that it actually reinforces the defenders’ confidence in themselves.

If a roll of less than zero doesn’t have meaning, how exactly will it be handled? The value could just stop at zero, with a rule that it’s impossible to go lower. In that case it becomes necessary to address order-of-operations issues; if there are both penalties and bonuses to a roll, canny players might apply them so that some of the penalties are “wasted” by the not-below-zero rule.
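To make the order-of-operations issue concrete, here is a minimal sketch in Python. The rule, function name, and numbers are all invented for illustration, and it assumes the “can’t go below zero” clamp applies as each modifier resolves:

```python
# A hypothetical clamp-at-zero house rule: the running total of a roll can
# never drop below zero, and the clamp is applied as each modifier resolves.
def modified_roll(roll, modifiers):
    total = roll
    for m in modifiers:
        total = max(0, total + m)  # "can't go below zero" at every step
    return total

roll = 2
penalty, bonus = -5, +3

# Penalty first: 2 - 5 clamps to 0, then +3 gives 3. Three points of the
# penalty are "wasted" by the clamp.
print(modified_roll(roll, [penalty, bonus]))  # 3

# Bonus first: 2 + 3 = 5, then -5 gives 0. The full penalty lands.
print(modified_roll(roll, [bonus, penalty]))  # 0
```

A canny player would always apply penalties first, which is exactly the kind of edge case the rulebook then has to anticipate.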

At least one game I’ve played tried to avoid that problem by tracking negative values, but treating them as zero; the negatives only came into effect when a bonus tried to bring the total back up. The resulting system was mathematically workable, but somewhat hard to explain to new players. “You’re actually at -2, but we play like it’s 0, unless you try to increase it, in which case it’s -2.” Wrapping one’s brain around that while also trying to keep track of the basic game rules was not trivial.
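Here is a rough sketch of that “track it, but play it as zero” system, reconstructed from the description above; the class name and numbers are mine, purely for illustration:

```python
# Track the true (possibly negative) value, but let checks read it as zero;
# bonuses must pay off the hidden deficit before they show any visible effect.
class Stat:
    def __init__(self, value=0):
        self.true_value = value  # may go negative

    def effective(self):
        return max(0, self.true_value)  # what the rest of the game sees

    def modify(self, amount):
        self.true_value += amount

s = Stat(0)
s.modify(-2)
print(s.effective())  # 0 -- "we play like it's 0"
s.modify(+1)
print(s.effective())  # 0 -- the bonus only moved the hidden -2 up to -1
```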

Compare all of that to what happens if we just add to the total needed. In the abstract, that raises absolutely no rules questions. Nor can I think, offhand, of any specific game where it would.
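For comparison, a sketch of the addition approach (again with hypothetical names and numbers): because the modifier raises the target and the comparison happens only once, nothing can go negative and there is no order to game:

```python
# Raise the number needed instead of lowering the roll.
def check(roll, target, added_difficulty=0, bonus=0):
    # Everything is summed before the single comparison, so there is no
    # intermediate value to clamp and no order of operations to exploit.
    return roll + bonus >= target + added_difficulty

print(check(roll=2, target=4, added_difficulty=5, bonus=3))  # False (2+3 < 4+5)
print(check(roll=7, target=4, added_difficulty=5, bonus=3))  # True  (7+3 >= 4+5)
```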

Sometimes a game has to have subtraction. Keep in mind, though, that subtraction has a certain measure of built-in complexity. Where possible, use the mirror-image addition instead; it’s probably equally intuitive, and it will usually avoid creating FAQ entries.

Theory: The Redemption of All-Chat

It’s an article of faith that all-chat is a cesspool. That reputation is richly deserved. However, it’s not a given that channels for communicating with players on other teams will only ever be used for flinging insults. Global chat channels can work in games designed around them.

Let’s start by laying out the problem to be solved. As a rule, all-chat—that is, a communication mechanism in online games that allows every player in a game or match to talk to each other—is silent at best and hurtful at worst. It says something that one of the first things League of Legends did to curb unpleasantness in its playing community was to set all-chat to “off” by default. Perhaps more remarkably, MMOs now allow players to opt out of their global chat channels. That’s how bad the situation is: an entire genre built on the social aspects of gaming has to let players shut down a primary means of socializing because it’s so awful.

What would it take to make all-chat good? There are two things I can think of:

  1. A good all-chat has a gameplay purpose. Everything in a game should have a gameplay purpose. Social features used to get a pass on that, on the theory that more ways for players to talk to each other automatically made for a better overall experience; time has put the lie to that belief. If all-chat is going to be rescued it will have to earn its place.
  2. All-chat needs players to be reasonable when using it. Making all-chat in its current form central to a game would make that game the least pleasant thing on the internet. For it to be beneficial the messages that go through all-chat must be free of the lowest-common-denominator vitriol so common today.

We can discuss each of those in turn.

The simple part: making all-chat important to the game

The former problem is relatively easy. Opposing parties talk with each other all the time, and there are plenty of ways to bring that into a game. Negotiation, for example, can be a centerpiece of strategic play; Diplomacy is a sufficient proof of that. For a sneakier version of communication, a wargame might include the concept of sending false messages to the enemy, or an economic game could involve market manipulation. Co-ops and team games often demand synchronized effort. Semi co-ops involve lots of talking as players try to balance their personal goals with the group’s needs. There’s no kind of game that can’t be built so as to encourage the players to talk to each other.

The hard part: kinder communication

It’s the latter issue, that of achieving good behavior, that’s the tricky one.

Solution 1: Put the players in an environment where dominating others isn’t the goal.

Keith Burgun recently presented an interesting argument that a game’s thematic elements affect how players view what they’re doing, and by extension how they interact with each other. When players are told in advance that the goal is to have fun together, he explains, they generally act in ways that are consistent with everyone having fun. He cites as an example his very different experiences in games with different art styles; players were nicer to each other in Team Fortress 2 than in Counter-Strike, even though they’re both violent games, because TF2’s cartoonish visuals emphasized that everyone was there to have a good time.

It’s when players are told that the goal is to dominate and harm others, Mr. Burgun argues, that they adopt language to suit. “[W]hile a player is operating in a world of violence, he is more likely to think violently.” (emphasis omitted) Players naturally respond to a game that tells them to hurt the enemy by trying to do so in every way they can, cruel words included.

Mr. Burgun’s theory points toward games that are built from the ground up to send specific messages: that winning doesn’t require achieving power over the other players, that the overall project is fun rather than in-game success, that other players are co-participants in the overall project and should be treated as valued teammates rather than as obstacles. Global chat could work fine in such a context. Without the nudge toward unpleasantness that comes from a violent theme, most players will default to a reasonable mode of conversation. Outliers will hopefully be few, and easily dealt with.

Solution 2: Effective deterrence.

There are games that don’t look at all like Mr. Burgun’s ideal, and yet the conversation manages to be civil. Diplomacy is again my go-to example. It’s a wargame that’s expressly about conquering Europe and eliminating players, but it’s unusual to run into someone who’s openly nasty. By and large people are cordial, even when they’re stabbing each other in the back and overrunning each other’s territories. Why does Diplomacy work?

Here’s my theory: Diplomacy, along with Twilight Imperium, the Game of Thrones board game, and others of their ilk, has the most effective deterrence around. In fact, Diplomacy has a level of deterrence that the criminal law envies! The structure of the game ensures that players who want to be mean are powerfully and reliably discouraged from doing so.

I recognize that that’s a pretty bold claim, so let me back up and discuss this more fully. Deterrence requires at least three things: (1) there is a rule you want people to follow, (2) people know about the rule, and (3) people are more afraid of the consequences of violating the rule than they are eager for the rewards to be had from doing so.

(1) is trivial. (2) is very much not trivial. New laws, highly technical laws, laws about unusual issues–all of these can have a weak deterrent effect simply because people don’t understand what’s forbidden or don’t think to ask whether there’s a law on point. Still, for our purposes we can assume that (2) is easily achieved in the context of rules about “don’t be a jerk on the internet”; everyone’s been told not to be unkind at some point.

(3) is the hard one. This is for a couple of reasons. First, humans discount the threat of punishment by the chance that it won’t happen. Put simply, people aren’t afraid of violating rules when they think they can get away with it. The greater the odds of getting away with it, the weaker the deterrence.

Second, humans aren’t very good at weighing future events against current ones; we tend to discount future harms based on how far away they are. The longer it will take for punishment to happen, the less we tend to care about it.

These foibles make it harder for the criminal law to achieve its deterrent purpose. Every time somebody goes to break a law, they implicitly weigh the consequences against the ideas that (a) they might not get caught and (b) the price of getting caught will be paid at an indeterminate point in the future, whereas the rewards will be here promptly. As the continued existence of crime demonstrates, some people do that calculus and come to a regrettable conclusion.

Diplomacy, on the other hand, creates an environment where those human failings aren’t given much room. The negative consequences of being nasty to other players happen right away and are extremely predictable. Negotiations break off; other players won’t provide the assistance necessary to progress; the game ends in swift defeat. The whole process takes a few hours at most.

As the theory of deterrence predicts, that leads to most Diplomacy players being polite. Tempers can flare and the gameplay is often vicious, but the kind of hateful, profanity-laden speech one finds in online games is absent. It’s remarkable: Diplomacy is basically built around all-chat, but it doesn’t sound like the all-chat we’ve come to know and disdain.

Compare this to games that try to achieve deterrence by having rules in the Terms of Service and banning users who break them. They suffer from the very problems of uncertain and distant punishment that the criminal law does, with the added weakness that banning isn’t nearly as severe as what the criminal law can impose. The sad reputation of all-chat is in part due to the fact that the deterrent effect in these games is very weak indeed.
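To put rough numbers on that comparison, here is a toy sketch of the deterrence calculus described above. Every figure and the discount rate are invented purely for illustration; the point is only that certainty and immediacy dominate severity:

```python
# Toy model: perceived deterrent = chance of punishment x severity,
# discounted by how far away the punishment is.
def perceived_deterrent(p_punished, severity, delay, discount=0.9):
    return p_punished * severity * (discount ** delay)

# Diplomacy-style: retaliation is near-certain and arrives within the session.
diplomacy = perceived_deterrent(p_punished=0.9, severity=5, delay=1)

# Terms-of-Service-style: a report might lead to a ban, weeks later.
tos_ban = perceived_deterrent(p_punished=0.05, severity=8, delay=30)

print(f"Diplomacy-style deterrent:        {diplomacy:.2f}")  # ~4.05
print(f"Terms-of-Service-style deterrent: {tos_ban:.2f}")    # ~0.02
```

On made-up numbers like these, even a harsher punishment barely registers once it becomes unlikely and distant, which is exactly the gap between Diplomacy’s table and a moderation queue.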

From Diplomacy and similar examples I think that deterrence can be an effective mechanism for promoting good communication behavior in games. However, strong deterrence isn’t achieved simply by hiring some mods. It requires that the game be designed from the ground up to have a short feedback loop that consistently discourages unkindness.

Build from the right foundation

We’ve learned from sad experience that all-chat isn’t something that can be tossed on top of a game. The results are unsatisfactory, to say the least. However, global chat could be a valuable, positive thing. A game designed with the needs of all-chat in mind from the beginning, tuned in such a way as to bring about friendly communication, could elevate the global channel from cesspool to centerpiece.

Inquiring Minds Want to Know: Gating Power Behind Mechanical Skill

A while ago designers at Riot Games suggested that they didn’t intend to make the more mechanically difficult characters stronger. They viewed mechanical difficulty as an opt-in experience for those interested in that particular kind of challenge.

On the other hand, fighting games often make the harder-to-play characters the strongest ones. The Street Fighter series’ Yun and Guilty Gear’s Zato-1 are both top-tier characters–in some versions of those games, dominant characters–who are very difficult to pick up.

I’ve been struggling with which approach is better for a while, and I haven’t come to any firm conclusions. Certainly I find Riot’s position appealing; it’s not obvious that mechanical challenge directly equates to interesting decisions. Furthermore, when the mechanically difficult characters are better they inevitably rise to the top of the tier lists; players will practice as much as they need to in order to access their power. However, Jay has reminded me previously that getting the mechanics down is part of the fun for some players; they are attaining a kind of mastery that’s important to the game, and perhaps they should be rewarded for it.

Are games with a mechanical component inherently so focused on the physical requirements of playing that we should reward players who are the best at them? Or are mechanics just a buffer between a “real game” that plays out in decisions and a “physical game” that we want to reflect the real game as perfectly as possible?

Theory: Playing Isn’t Working

Being a good game designer involves having a reasonable familiarity with existing games. Every kind of artist learns by studying the works of others, after all. It’s important to recognize, however, that playing other designers’ games is not the same as doing design work. To make real progress, design time needs to be spent hammering away at one’s own games.

One of the perils of game design, I’ve found, is that research can be an awful lot of fun. Part of how I learn about fighting games is by playing them–and I really like fighting games. So too for wargames, worker-placement games, co-ops, semi co-ops, deckbuilders, and on and on. Learning is fun, because “have fun with this” is the default way to interact with the medium.

So far, so good. The danger is that it can feel natural to flip the equation around, turning “learning is fun” into “fun is learning.” From the latter statement, it’s easy to arrive at “having fun is also doing work.”

Unfortunately, that last position is wrong. Playing other people’s games might help one refine ideas for one’s own games, or be a source of inspiration, or demonstrate a useful technique. It will never, however, bring one’s own games into actual physical existence. It will never playtest them or write their rulebooks or do any of the other things that need doing to make one’s own games happen. Having fun isn’t doing work; it’s taking one away from the tough stuff.

This doesn’t mean that a designer should only work, leaving no time for play. Experiencing other designers’ games can be very valuable. Again, no artist would be expected to practice in a vacuum, ignoring the masterworks of his or her field.

What it does mean is that play time and work-on-own-designs time need to be kept separate. Don’t set aside two hours to work on a project, and then spend them playing Flower “to learn about non-conflictual games.” Play Flower during free time, and put those two hours into creating the next generation of non-conflictual gaming.

It’s often said that ideas aren’t worth much in game design, because lots of people have them; what’s rare and valuable is the follow-through to make an idea into a publishable game. Getting into a “playing = working” mindset is an easy way to end up on the wrong side of the ideas/follow-through divide. Play, definitely play–take it from a lawyer, making some time for not-work is a good idea–but recognize that playing doesn’t move one’s own designs forward, and keep the time for that latter goal sacrosanct.

Theory: Morten Monrad Pedersen on Emotional AIs

A while ago I talked about stand-in AIs. I mentioned that they need to imitate human players–but I didn’t have much to say about how that could be achieved.

That question was very ably addressed in Mr. Pedersen’s BGG blog post on Monday, which gives some really fascinating suggestions about how deck-building could be used to change an AI giving random responses into an AI that looks like it hates you personally . . . or loves you!

If you haven’t seen it yet, I would urge you to give his post a look. Even if designing solo games isn’t your thing, the ideas have applications elsewhere.