Theory: Designing for Imbalance

As a Philly native, I’m usually rooting for bad sports teams. The Eagles are often more worth following for the soap opera than for the on-field play. The Sixers are, charitably, rebuilding. The Flyers haven’t been the Broad Street Bullies for a long time, and the Phillies are in such dire straits that their manager recently couldn’t call for a relief pitcher because someone had left the bullpen phone off the hook!

Here’s the thing, though: rooting for those teams is a lot of fun. It’s not disappointing when they lose, and it’s a thrill when they win. What’s more, talking about a lousy team is way more fun than talking about a great one; there’s more room for disagreement, more topics to hash out, and more debate to be had.

I know I’m not the only one who feels this way, either. It’s long established that people like to root for the underdog. We can see that in every sports movie for kids; they feature plucky come-from-behind wins, not the crushing dominance of the best squad.

Given all of that, I’m surprised that more attention hasn’t been given to designing multiplayer games that are intentionally unbalanced. We have imbalance in professional sports, and it makes them more fun, not less. Why not tap into that same dynamic in other genres?

For example, imagine a two-player game in which the players choose their sides. One side is visibly stronger at the outset: it has more pieces, a better position, whatever. However, both players have the same victory condition: to get X points, or to remove all the opponent’s pieces.

This game would absolutely be unfair–but I’m not at all convinced it would be unfun. Whoever picks the stronger side is likely to win, and everybody likes winning. Furthermore, winning could still be a challenge and something to take pride in.

The player on the weak side will probably lose. What he or she gets in return is the chance to be the underdog. Losing isn’t a big deal, and winning means he or she shot the moon!

In this design, then, fun would be decoupled from fairness. The players would know in advance who was likely to win, but that wouldn’t matter because each one would have entertaining things to do. Rather than the fun coming from the players competing for victory, the fun would come from each player helping the other play out a great experience: the player who begins the game with the advantage gets to be the skilled master, while the player who starts out weak gets to be the never-say-die hero.

It would be important, of course, that the players in this game pick their sides. Not everyone enjoys struggling uphill; those players shouldn’t be forced to be the underdog. For those who like the challenge, though, I think this could be an exciting way to make losing fun. If a player isn’t expected to win, but will be hugely rewarded for doing so, then there’s entertainment just in giving it a try.

Obviously, this doesn’t work for every game or every circumstance. Strict tests of skill, for example, should start on a level playing field. Not every design needs to be for tournament play, however, and we don’t need to let the requirements of tournaments limit the avenues we explore. Sports have taught us that it can be fun to start from a bad position; other types of games might benefit from exploring how to apply that lesson in other contexts.

Theory: Player Expectations of Video Game Stories as a Resource

SPOILER WARNING: This post discusses the stories of Mark of the Ninja and Persona 4. If you haven’t played them yet, you might want to stop here. They’re both great games that deserve to be experienced without knowing what’s coming.

In order to use every resource at their disposal, game designers first have to recognize all of the possibilities. Some are easy and obvious: computer memory, playing cards. Others are more esoteric: tablets as addendums to board games, for example. Finally, there are the little-used, usually-unnoticed, difficult-to-employ resources, the narrow ones that are exceptionally powerful in the right situation. As an example, let’s take a look at player assumptions about how video game stories work.

Most video games have some unrealistic aspects that we accept, expect, and do not even process as out of the ordinary. Invisible walls limit our progress, even though the world appears to extend further. We can’t cross low objects because there’s no jump button, even though in the real world a few books or papers on the floor would pose no barrier. Conversations are limited and short to the point of rudeness. After a while none of these inhibit suspension of disbelief; we grow accustomed to and ignore them.

Mark of the Ninja does something canny with the way we accommodate ourselves to technical limitations on video game stories. Rather than just going along with our expectations, it uses them as a resource to make the story better.

(The SPOILERS start right here!)

In the game players take the role of a ninja who has gained Real Ultimate Power–but the power comes at the cost of gnawing, growing insanity. Players are led through the ensuing adventure by a helpful fellow ninja who provides guidance, commentary, and focus.

The guide routinely appears from nowhere and then disappears off-screen, never to be caught up with no matter how quickly the player-ninja moves. Occasionally the guide even does the impossible, “finding another way” when there’s only one route.

I suspect that most players write all of that off completely, just as I did. Sure, the NPC guide can do unlikely things: NPCs often can. They’re narrators, not subject to the limitations of the reader/player. Having them enter and leave the screen at all is a kindness, a nod to verisimilitude.

The big twist comes at the end of the game, when we learn–

(Seriously, SPOILERS!)

–that the guide was never there at all! She’s a visual representation of a voice in the ninja’s head, created by the ninja’s own mind so that the voice would seem to be coming from somewhere. Far from being an anchor against insanity, she’s a symptom of it!

With that single reveal, the entire game is thrown into question. Up to that point the goal has been clear: first to stop an attack on the ninja’s clan, then to take revenge on the evil megacorporation that attacked the clan, and then to take revenge on the clan’s own leader when it becomes clear that his schemes prompted the attack in the first place. However, all of those goals were presented by the guide–who was never really there, who’s just an expression of the ninja’s insanity. The player has to reevaluate the evidence, based not on the suddenly-unreliable narrator but rather on the player’s own judgment.

At the end of the game the player has to decide whether to strike down the clan leader. It’s a gripping moment because of the way expectations have been turned on their head. Throughout the game the player had a confidence born of the simplicity of most video game storytelling: the NPC tells you what to do and the challenge is in doing it. Now the NPC safety net is gone, and the player feels all the more adrift because it was once there.

By playing with expectations Mark of the Ninja goes beyond telling players that their character is going insane. It makes the player feel confusion and distress, thereby moving some distance toward putting the player in the character’s shoes. Mark of the Ninja uses the player’s assumptions just like it uses art, voice acting, and sound effects: as a resource it can call on to promote immersion.

Persona 4 does something similar, although its implementation of the technique is arguably less impressive. At the end of Persona 4 its characters feel that they have solved a mystery: they’ve discovered the identity of a murderer, learned his motives, and stopped his crime spree. There’s even an end-of-game wrapup of the sort that closes video game stories.

However, attentive players will note that the facts don’t quite add up. Good detectives among them won’t be satisfied, and if they insist on continuing to play rather than letting the credits roll they’ll discover that the game isn’t over yet! The true ending, with its ultimate reveals and conclusive answers, lies a few hours of gameplay beyond.

Unfortunately, it’s not easy for good detectives to signal that they want to keep investigating. It requires an unintuitive command that isn’t even legal at any other point during the game, with the result that checking an FAQ is almost required. That steals a lot of drama from what’s supposed to be the player’s biggest test.

Nevertheless, Persona 4’s handling of the player’s expectations is interesting and worth studying. The game demands that players rise above genre conventions to look hard at the mystery and decide for themselves whether it’s been solved, rather than going along when the game signals that they’re done. In the process Persona 4 captures something of real-world detective work, which is rife with incentives to stop investigating and close the case. A single weakness in Persona 4’s implementation thus takes nothing away from the superb underlying idea of using the player’s own assumptions to make her participate more fully in the mystery.

Video games have developed, as a medium, certain conceits. Good games can rely on those conceits, using them to hide technical limitations. It’s possible to go further, however, using not just the conceits as a resource but the expectations they’ve given rise to as well. Mark of the Ninja and Persona 4 demonstrate that being conscious of player assumptions regarding “how game stories work” allows designers to play off of them, potentially to very powerful effect.

Theory: The Last Step on the Age of Sigmar Road

I read the Age of Sigmar rules over the weekend with great interest. Even knowing some of what to expect, it was certainly disorienting when I realized that there’s absolutely no limitation on what players are allowed to put on the table. I don’t mind that, though; in fact, I think it’s possible that Games Workshop didn’t go far enough.

That probably sounds insane—there’s nothing about balance; how could they go further than nothing?—but hear me out. Over the weekend a friend likened Age of Sigmar to Magic: the Gathering’s Commander format. Commander is a casual approach to Magic that only works when the players sit down in advance and discuss what kind of game they want to play: super-competitive, slow and casual, etc. So long as the players do that, though, it’s great.

Age of Sigmar seems to be built on the same principle as Commander: the game allows players to make what they will of it, and trusts them to figure out as a group what that’s going to be. Does everyone want to play a story-driven narrative game, with scenarios based on an overarching plot and armies that grow and shrink with their nations’ fortunes? That’s fine. Would the players prefer instead to play regimented armies marching in formation? That’s supported. Just want to play a bunch of dragons that breathe fire on everything because it’ll be SO METAL? Awesome, you can absolutely do that.

For all of that to work, however, the players have to be on the same page—and the Age of Sigmar rules never actually suggest that the players should talk. Every new-player article about Commander makes it clear up-front that groups picking up the format need to decide on their own ground rules, and that people coming into a group must find out what the group’s rules are. The Age of Sigmar rule sheet lacks that guidance, and given how outside the norm that kind of discussion is in miniatures circles I think it’s going to be sorely missed.

I’m excited to give Age of Sigmar a try. As I read over the rules, though, I can’t help but wish that Games Workshop had taken a page from recent paper RPGs by stating not just what the rules are, but why they are that way. I want Games Workshop to take the final step on Age of Sigmar’s road: having built a game that puts players very much in the role of scenario designers, be open in telling them so.

VarianceHammer

Thinking about Age of Sigmar (which only looks more promising the more I hear about it being freeform and consciously rules-light) reminded me that there’s another Warhammer-related item that’s had my interest: the superb site VarianceHammer. A blog about Warhammer 40K written by a computational scientist, VarianceHammer is an ongoing effort to do dice math right. It sets aside the assumptions behind most analyses of dice to figure out what will actually happen on the tabletop.
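To make that contrast concrete, here is a minimal sketch of the kind of analysis in question. The numbers are a generic example of my own (ten attacks, each needing a 4+ to hit and then a 4+ to wound), not taken from VarianceHammer itself; the point is that the average, where most quick analyses stop, hides how swingy the tabletop result really is:

```python
from math import comb

# Hypothetical sequence: each of 10 attacks hits on 4+ (p = 3/6) and then
# wounds on 4+ (p = 3/6), so each attack ultimately wounds with p = 1/4.
attacks = 10
p_wound = (3 / 6) * (3 / 6)

# The mean ("expect 2.5 wounds") is where most quick analyses stop.
mean = attacks * p_wound
print(f"expected wounds: {mean:.2f}")

# The full binomial distribution shows what actually happens on the table.
dist = {k: comb(attacks, k) * p_wound**k * (1 - p_wound) ** (attacks - k)
        for k in range(attacks + 1)}
print(f"P(0 wounds)  = {dist[0]:.3f}")
print(f"P(4+ wounds) = {sum(v for k, v in dist.items() if k >= 4):.3f}")
```

The mean says to expect 2.5 wounds, but the distribution shows a total whiff roughly one game in eighteen and four or more wounds better than a fifth of the time; that gap between “average” and “what actually happens” is the territory the site explores.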

Not being a mathematician myself, I’ve found the site’s discussion both technically enlightening and subjectively fascinating. The author of VarianceHammer can apply mathematical concepts that I’ve never even heard of to determine how dice will impact the player experience much more precisely than I’m able to. Seeing how it’s done is as nifty as getting the results.

Every time I read one of the heavy-math posts on VarianceHammer I’m reminded of the breadth of game design as a field, of how many approaches there are to thinking about it and how many tools one can use in applying it. It’s humbling and inspiring, all at the same time. For that reason if no other, I would encourage you to give the site a look.

The Best Game of 40K Ever

More than twenty years ago I played the Best Game of Warhammer 40K Ever™. This was in 2nd Edition, using the Dark Millennium expansion and its strategy cards. Before turn 1 my opponent played the “Virus Outbreak” card, and my Imperial Guard army was wiped out. I went from a horde of troopers to having three models on the table: two characters who were immune to the virus and the one single soldier who was lucky enough not to catch the disease. My entire army was destroyed during deployment!

Here’s the thing about that game: it was so much fun. No single match of any minis game I’ve played before or since—and there have been many—has given rise to such a great story. Sure, the Best Game of Warhammer 40K Ever™ wasn’t balanced or reasonable. I didn’t care then, and I don’t care now! The tale of the three plucky survivors trying to play without the army they were meant to be a part of is worth more than ten fair games.

I haven’t bought new Games Workshop product in decades. The forthcoming Age of Sigmar edition of Warhammer Fantasy looks to change that, however, because it appears (we haven’t seen the full item just yet) to be all set to create great stories. Where most miniatures games strive for tournament balance, Age of Sigmar has the courage to say “this is a game, it’s meant to be fun, do remarkable stuff and don’t worry about it.”

You see, most minis games are designed around the central principle that any given match should be even. The players have different armies with asymmetric capabilities, but the overall power level is to be the same at the outset. Usually this is accomplished through a “points” system: each model/group of models is worth a certain number of points, and players spend their budget of points to build their armies.
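The mechanism itself is simple enough to sketch in a few lines. The unit names and point costs below are invented for illustration, not taken from any actual game:

```python
# Hypothetical point costs -- in a real game these come from the army books.
ROSTER_COSTS = {"trooper_squad": 50, "heavy_tank": 180, "hero": 95}

def is_legal_army(army, budget):
    """An army list is legal if its total point cost fits the agreed budget."""
    total = sum(ROSTER_COSTS[unit] * count for unit, count in army.items())
    return total <= budget, total

# Both players build against the same cap, e.g. a 1,000-point game.
legal, cost = is_legal_army({"trooper_squad": 6, "heavy_tank": 2, "hero": 1}, 1000)
print(cost, legal)  # 755 True
```

Both players spending against the same cap is what is supposed to make the match even at the outset; whether the numbers assigned to each unit actually deliver on that promise is another matter.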

There are two problems with this. First, as a practical matter, points systems are very hard to get right. Jake Thornton, an expert of long standing at point-driven games, has even described points systems as “invariably doomed to failure” because there are innumerable contextual factors they cannot realistically incorporate. He explains that we use points systems because “they are . . . the best tool we currently have for picking reasonably even forces from variable lists,” but “they do not account for everything and . . . the more seriously you take the requirement for balance, the poorer job they do.”

The second issue is that fair is not always synonymous with fun. In focusing on equality of power, points systems tend to ignore the question of whether an army is joyless to play with or against. Any minis gamer of even brief experience can cite examples of armies that are moderately effective but highly aggravating.

Age of Sigmar seems, at least based on the information available so far, to be directing its attention away from fairness to emphasize fun and the social nature of miniatures games. Balance will be maintained at least in part through social contracts, or just disregarded entirely in the name of awesome. There is much concern that this will make the game unplayable in a tournament setting, to which I say–

So be it. I have many miniatures games that promise fairness, and achieve it to a greater or lesser extent.

I want to add a game that promises fun to my library. One that promises, and delivers, great stories. If Age of Sigmar is that game, sign me up.

Theory: Greater Immersion Through Fewer Options

Designers seeking immersive gameplay are urged to give players more choices. Life, though, is more often about working within constraints than it is about total freedom of decision. The way to create a more realistic environment is thus not to allow players unlimited control, but rather to impose realistic boundaries on their influence.

We all deal, in the real world, with limits on our decisions. Circumstances can make some paths unavailable, for example. Lack of money and time prevent us from doing things we think would be fun or even important; obligations to family and work call on us to let certain opportunities pass by.

Even if we can free up the money and time to do something, other people can stymie our efforts through malice or innocent misunderstanding. A co-worker might accidentally trip up a project, or—circumstances rearing their ugly head again—need to prioritize something else. Third parties with their own objectives can get in the way.

Forces beyond our control also impact what we can accomplish. Bad weather, car trouble, a labor strike—there’s no end of external situations that force us to adapt.

To feel realistic, a game should acknowledge these limitations. Life is not subject to comprehensive planning and thoroughgoing control. An immersive game shouldn’t be, either.

Immersion Done Right: Persona 3

As one example of a limitation that we can confront in the real world, consider team dynamics. Coordinating a group can be just as difficult as the problem to be solved. People might have different visions of what the final result should look like, causing them to work at cross-purposes. Lack of communication, especially in emergencies when the group is pressed for time, can result in group members wasting effort or even undermining each other. Sometimes someone just plain messes up, and the error impacts what everyone else is doing. Working well in a team is a challenge unto itself.

Shin Megami Tensei: Persona 3 (usually just called “Persona 3” in the U.S.) is one of the very few games to reflect that challenge. Whereas most adventure games give the player control over a party, allowing the player to operate an entire squad like a well-oiled machine, Persona 3 limits the player to controlling one single adventurer, with everyone else acting in accordance with their own strategies and preferences (as dictated by the AI). The result is an experience that feels like a real-world team activity; all involved are trying to achieve the goal, but they’re not always in perfect sync.

It should be said that the AI teammates’ independence can be deeply frustrating. The AI for one character is so preoccupied with a single move that Google will autocomplete the phrase “Mitsuru Ice Break.” Persona 3’s combat system is all about doing the right moves in the right sequence, and the computer-controlled teammates can’t be relied on to follow the playbook.

Yet, their imperfections contribute greatly to the game’s immersion. The other characters are more realistic and compelling because they behave like independent actors, possessed of their own agency. Put simply, they feel like people, and the game feels like a place where people live.

Persona 3 thus becomes more enthralling even as it limits player control. It poses a real-world problem—a group member’s bounded influence over other members of the group—and obliges players to solve it just the way they would in the real world, by learning about the rest of the group and finding ways to work in concert with them. The result is a game which features monsters and magic and robots, but which also has a nugget of reality that makes it easy to suspend disbelief.

Immersion Done Wrong: Persona 4

Persona 4 is an adventure game much like Persona 3, but—perhaps in response to complaints about Persona 3’s teammate AI—its designers gave the player more control. In this iteration of the series the player is allowed to manage each group member directly, choosing their actions by hand.

As is so often the case, we didn’t know what we were wishing for.

Persona 4’s characters are, like Persona 3’s, well-written and likeable. However, the change in combat control removes some of the sense of independent reality that made Persona 3’s teammates so special. They’re not people anymore; they’re extensions of the player’s will.

Unfortunately, that loss reverberates throughout the experience. Where Persona 3 was enthralling, a chance to step into and inhabit a different world, Persona 4 is a touch game-y. After hours and hours of combat in which the other characters are just extra action points, it’s hard to switch gears and treat them as living, breathing individuals in the story sequences.

To be fair, it’s possible to set the teammates back to AI control. However, even that feels artificial; the player is allowing them their independence, subject to their making one too many mistakes. Having let the control genie out of the bottle, Persona 4 can’t put it back in.

That’s a shame, because Persona 4 does just about everything else right. It’s got some superb writing, an imaginative ending sequence, a combat mechanism that’s just complex enough to generate decisions without being so elaborate as to weigh down the game, and a valuable improvement to Persona 3’s user interface that saves a lot of aggravation. Yet, Persona 3 is ever-present on my “play again at some point” list . . . and Persona 4, which gave me control at the cost of immersion, just isn’t.

Go Ahead and Take With the Other

Immersion doesn’t require that the player be able to do everything. After all, we don’t get—or even expect—total freedom of action or absolute control in our real lives. For a truly immersive experience, it’s better instead to impose reasonable limitations on the player. Such limits might not sound very exciting, but they make for a more realistic and compelling game.

Theory: Better Spectating Through Strategic Understanding

The New York Times has a superb article on basketball’s “Triangle offense.” It’s interesting for its exploration of basketball strategy and personalities. What I really found gripping as a designer, though, is its discussion of how much people who understand the Triangle enjoy watching it used.

Other articles have pointed toward the idea that the way to have the most fun as a spectator of a game is to really get its inner workings. The classic example, in my mind, is an article written years ago about a Street Fighter match between Justin Wong and Umehara Daigo. Unfortunately the original seems to have been lost to time, but in brief summary, Umehara knows that Wong wants to win with chip damage. He therefore puts himself at the precise distance to get Wong to use a specific move–and then counters all the parts of that move, one after another in rapid succession, before counter-attacking for the win. As the article pointed out, without an understanding of Umehara’s strategy the match video (found in the summary above) looks like a feat of dexterity, neat but something anyone who’s spent time in practice mode could do; with the necessary understanding, it becomes a one-in-a-million combination of physical and mental achievement that marks out a true master.

Both basketball and Street Fighter are complex games whose strategy is not obvious to the casual observer. Announcers and commentators help bridge that gap, but they can only go as far as they themselves understand; the New York Times article notes that even most basketball professionals can’t explain how the Triangle works, much less pass the knowledge along. I’m left to wonder: what can we do, in-game, to help spectators see what great players know?

Theory: Chambers Bay and the Point of Tournaments

What is a tournament about?

I don’t generally follow professional golf, but it’s just provided a case study that encapsulates a larger debate regarding what tournaments should be used for. The U.S. Open played over the weekend was–at least according to some–a weak measure of overall golf skill. Yet, it was an excellent test of who was tops at solving unique problems on a specific day. Views of the event thus depend on the purpose one thinks tournaments should serve, making it a great vehicle for discussing the issue.

A bit of background is in order. The U.S. Open is a major golf tournament held each summer. It moves from course to course, and this year was held at Chambers Bay in Washington state. Chambers Bay proved controversial. The drought afflicting the West Coast hit the already-difficult course, substantially impacting how it played. One player memorably said that Chambers Bay Golf Course’s dry, bumpy greens were “pretty much like putting on broccoli”; another replied that the greens didn’t have broccoli’s color, and were more like cauliflower. All of this led some to complain that the results were not a proper measure of skill, including one player who went off on network television about good putts being derailed and bad putts being knocked in.

For all the problems, though, the fact remains that the U.S. Open was extremely interesting to watch. Gut checks were constant. Can this guy get out of a giant maw of a sand trap, ten feet deep? One player was ill; could he make it through the last few hours? The course is extremely hilly, and the golfers had to keep it together when they hit a ball up toward the hole . . . and it rolled all the way back down to where it had started. (That happened more than once!) I can’t say whether Jordan Spieth, the ultimate winner, was the best overall golfer—but he was certainly the best at adapting to a crazy, challenging situation.

Is that enough to deserve to win? There are two camps, and it’s good practice as a lawyer to think about both sides’ arguments. Here they are, presented for your consideration.

The prosecution: this isn’t why we have tournaments.

We call the U.S. Open a championship event. That’s because it’s intended to determine who is the champion—the best. Chambers Bay, and by extension this U.S. Open, didn’t accomplish that.

Tournaments are, fundamentally, measures of skill. That’s why we have tournaments, with all their formal rules: to strip away everything that isn’t skill, and to ensure to the greatest extent possible that the most skilled player wins out. At a fundamental level, the difference between a tournament and playing casually with friends is that tournaments have as their purpose showcasing skill, with fun as a secondary goal, whereas most “play” is the other way around.

Being the best golfer means performing difficult tasks reliably. First one must hit a tiny ball with a stick so that it travels accurately to a location hundreds of yards away. Then one must hit the ball, again using a lengthy stick, so that it drops into a hole barely larger than the ball itself. We make tournament golfers do these things again and again, seeing who most consistently chooses the precise angles and forces necessary to get the ball where it is meant to go. Those choices are the essence of skill in golf, and we gather lots of data about how golfers make them so that we can make fine judgments between who is good at these decisions and who is the best.

Chambers Bay’s ragged greens undermined that analysis. We can’t say who was most reliably able to pick the correct angles and forces, because good decisions were undone by flaws in the playing surface. A random element crept in which made it impossible to say whether scores were based on skill or luck.

The 2015 U.S. Open might have been won by the best golfer . . . or it might not have. We don’t know, and a tournament that leaves us uncertain on that point is a tournament that hasn’t done its job.

The defense: the event did everything an event can do.

It’s absolutely true that tournaments are meant to measure who’s the best. However, no single event can determine who’s the best in some cosmic sense. Chambers Bay did a fine job of establishing who was the best on the day, and that’s all we can ever ask of a tournament.

There’s never been a game where all involved played completely perfectly, to the outermost limit of their skill, so that we can say the winner was absolutely supreme. We have to accept that everyone involved in a tournament is merely mortal, and that every individual data point about player skill is therefore flawed and approximate. Overall best-ness can only be determined in hindsight, when more such points have accumulated than any single tournament can offer.

Once we acknowledge the limits on all tournaments, it’s clear that the U.S. Open was great. The event made players stretch beyond their normal boundaries, and allowed those who could do so to shine. Perhaps it tested whether a golfer was determined and even-keeled more than the golfer’s putting stroke; if so, that’s not a bad thing. Focus and steadiness are traits of good golfers, too.

Twenty years from now we’ll be able to look back and say whether Jordan Spieth was the best, or whether he triumphed over some other best to take this particular event. We need that extra time, with all the information it will bring, to make that evaluation. In the interim, the U.S. Open was a demanding challenge that tested the limits even of top people. An interesting game resulted. That’s what it’s reasonable to want from a tournament, and that’s exactly what the Open and Chambers Bay gave us.

Jurors, the matter is now in your hands

I have a personal sympathy for the second position–but the first has its merits. So: was this a good U.S. Open? More generally, are somewhat offbeat tournaments acceptable?

Theory: Advice for Teaching Games

Little, if anything, has more impact on a new player’s experience of a game than how it is taught. Poor teaching sends the player into the game confused, bored, or flatly annoyed, all but guaranteeing a weak experience. By contrast, good instruction encourages active, interested participation—and ultimately more fun.

Below are some lessons I’ve learned over the years about how to teach games. They’ve consistently been true across different groups; I’m absolutely confident that they’ll work for you as well.

1. You can do it. I’ve occasionally heard people say that they “can’t teach.” That’s not true! Anyone who can read aloud from the rulebook can teach a game. Everything after that is refinement of technique. The thoughts below will help you get started.

2. One voice. Decide who’s going to teach the game, and then let that person speak, beginning to end. “Helpful” comments and suggestions are usually just confusing for the learner; they divide attention and break up the logical flow of the instruction. If the instructor misses a rule, wait until the end of the explanation and then mention it.

3. No advice. Rules instruction should be entirely about rules, with no tactical tips. New players have enough to do grasping how to play. Adding how to play well on top of that does them no favors.

3a. No advice during play, either, unless it’s requested. The first play of a game—and sometimes the second and third plays, for complex games—is part of the learning process. New players often need to explore what moves are legal in a concrete way before they can grapple with strategy. Refrain from adding the strategic dimension too early.

If the new player does request advice, stick to generalities and legalities. “In this situation you can do X, or Y, or Z. All of those moves have potential, depending on what you want to accomplish.” Letting the new player make decisions is critical; it’s not fun to feel puppeted about.

(3) and (3a) are especially important, in my experience, when men are teaching women. I have noticed that men are much more likely to give very specific, “you should do this” advice to women—and that the women usually resent being patronized in that fashion. If you don’t give advice unless it’s requested, and stick to generalities when it is, you’ll be fine.

4. Find out whether the new player wants a comprehensive overview, or to learn-as-you-go. Some people get frustrated when they’re partway through a game and are told “actually, you can’t do that;” others are annoyed by having to wait through long rules explanations. Ask specifically what the new player wants, so that you can provide it. If you’re teaching a group, try to get a consensus; failing that, use your best judgment as to which approach is better.

4a. Have a plan for both methods. Teaching in the classroom has shown me that there’s no substitute for preparation. Think through, at the very least, the order in which you’re going to present information. If you’re not sure, following the rulebook is most likely fine.

4b. For “lifestyle” games, default to learn-as-you-go. Warmachine has, I would estimate, about 100 pages of rules. Trying to teach a new player all of them is madness; even someone with a photographic memory would be hard-pressed to grasp everything that was going on. If the game is clearly too complicated to teach all at once, don’t even offer that as an option; just launch into the learn-as-you-go style.

Keep in mind that choosing this option dictates certain things about the game to be played. It’s not OK to tell someone they’re going to learn on the way, and then seed the experience with gotchas that will leave them feeling helpless or like their decisions were unimportant. By committing the other player to go in with incomplete information, you commit yourself to making that information sufficient.

5. Stop after each topic and ask for questions. People often don’t feel comfortable asking about things that have confused them, because they don’t want to interrupt or don’t want to look foolish. Explicitly giving them opportunities for questions makes it clear that (1) this is a good time and (2) having questions is reasonable.

Of those, I would especially emphasize “one voice” and “no advice.” I see those principles violated constantly, and it never works out. Stick to the points above, and your rules teaching will go much more smoothly.

Theory: Contests

First, I wanted to follow up on last time’s post regarding Twitter by noting Eric Lang’s “TCG Design 101” tweets. Each one is a superb distillation of years of design experience. They’re the kind of content that makes Twitter so valuable to designers, and are well worth checking out.

Unfortunately, sometimes one isn’t inspired to create the next great TCG—or much of anything else, for that matter. When that happens to me, I often check out game design contests. They’re very useful for breaking through mental blocks, getting out of comfort zones, and generally putting the creative engine into gear.

Contests provide two invaluable things to designers searching for inspiration: a seed (e.g., “a game involving kings and queens” or “dexterity game”) and a deadline. The value of the former is plain. In game design as in writing, one of the most challenging parts is facing a blank page and having to narrow down the universe of ideas. Having a requirement to work from makes things a lot easier.

The value of a deadline, too, ought not to be underestimated. There’s nothing better for forcing movement, for getting past speculation and starting to design. When you’re in a rut, an impetus to do something now can be very helpful.

There’s a range of contests out there, formal and informal, some with long histories. I’d encourage anyone looking to get past a roadblock, improve their skills, get some feedback, or simply try their hand at the art of design to enter one.