It’s an article of faith that a great opponent makes a game more fun. A good pure co-op game, then, needs a good AI foe to challenge the players. Designing that opponent requires, first and foremost, deciding what kind of co-op you’re creating: a stand-in or a puzzle. Mashing elements from both types together leads to trouble.
A stand-in challenges the players by imitating a living opponent. It tries to do what a human would do in a given situation. In essence, it simulates the experience of having a human sitting across the table (or on the other side of the internet connection) playing against you.
A puzzle challenges the players by presenting a problem for them to solve. It is not concerned with doing as a real person would do; its only goal is to provide an interesting dilemma, and it acts in whatever way the designer thinks will best achieve that. Puzzles may (should?) have a theme, and they may be good simulations of that theme, but they aren’t trying to simulate an opposing player.
Which category a game falls into has a huge impact on what kind of AI is appropriate. Stand-in AIs need capacities that puzzle AIs don’t. A human will respond to his or her opponent’s actions; to feel “real,” the stand-in needs to be able to do the same. It has to be able to find out what the players are doing, determine what an appropriate response might be, and implement that response.
Puzzles, by contrast, don’t have to care what the players are doing. In fact, they don’t have to do any specific thing so long as they’re interesting. The central question is what the game needs, not how a person would behave, and the AI needs only those capabilities relevant to the game’s particular answer.
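To make the distinction concrete, here's a minimal Python sketch of the two kinds of opponent. All of the names and numbers are invented for illustration; the point is the shape of the logic, not any particular game's rules.

```python
from dataclasses import dataclass
import random

@dataclass
class GameState:
    """Hypothetical shared state that either kind of opponent can read."""
    player_progress: dict[str, int]   # player name -> how far along they are
    threat_level: int                 # how close the players are to winning

class StandInOpponent:
    """Stand-in: find out what the players are doing, then respond in kind."""
    def take_turn(self, state: GameState) -> str:
        # 1. Find out what the players are doing.
        leader = max(state.player_progress, key=state.player_progress.get)
        # 2. Determine an appropriate response.
        if state.threat_level > 5:
            plan = f"block {leader}"        # react to the biggest threat
        else:
            plan = "develop own position"   # otherwise play our own game
        # 3. Implement that response.
        return plan

class PuzzleOpponent:
    """Puzzle: ignore the players entirely and run a thematic procedure."""
    def __init__(self, event_deck: list[str]):
        self.deck = list(event_deck)
        random.shuffle(self.deck)

    def take_turn(self, state: GameState) -> str:
        # No reaction to the players: just flip the next event card.
        return self.deck.pop() if self.deck else "the storm subsides"
```

The asymmetry is the whole point: the stand-in's turn depends on what's in the game state, while the puzzle's turn plays out the same way no matter what the players just did.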
Either choice can lead to a great game. Pandemic and Forbidden Desert, for example, are both great puzzles. The diseases to cure in Pandemic and the sandstorm to dig through in Forbidden Desert don’t act like human opponents–but why would they? Diseases and sandstorms aren’t sapient, and it would be weird if they could respond to the players’ actions. Instead they operate in ways that are both thematic for the natural forces they represent and interesting in play. For puzzles, that’s the gold standard.
As an example of a great stand-in, I always go back to the Reaper Bot and Zeus Bot for the original Quake. (Wow. I’ve been playing FPS games for a long time.) At a time when a lot of people were on dial-up and internet play against other humans was a lag-filled affair, the Reaper and Zeus Bots were striking for their ability to navigate without bumping into walls, their good aim, and their freedom from connection lag. Many real players, fighting against 300-500ms pings, couldn’t offer those things. The bots out-humaned the humans!
Designers run into trouble, however, when they mix the two categories. One sees this a lot with “cheating” computer game AIs (which are usually intended for solo play rather than co-op, but the issues involved are comparable). They look like stand-ins but are actually puzzles, and as a result they often end up being unsatisfactory.
For example, players often express frustration with the AI in the Civilization series of games. Civ’s AI promises stand-ins; the player controls one civilization and the others are guided by an AI that has each civilization pursue its own ends–just as it would if a human were guiding it. The goal is to beat the other civilizations, eliminating or outscoring each as though it were controlled by a separate human player. The AI-driven civilizations sometimes cooperate and sometimes attack each other, imitating what humans do. It looks like a stand-in, it quacks like a stand-in . . .
. . . but it’s not a stand-in, and at least anecdotally it ends up irking many players as a result. Civ’s AI doesn’t get much smarter as the difficulty level goes up; it just gets more and more resources, far outstripping what the human player receives. Those resources let the AI challenge a skilled player, but they undermine the simulation; no human could do the things a high-difficulty AI can do. Ultimately the game becomes a puzzle in which the player must find the optimal moves that keep pace with the AIs’ lead in technology and production. Players choose a higher difficulty level looking for a simulation that tests their diplomatic ability and battlefield tactics, but instead find an optimization problem that tests their command of the math behind the game, and walk away aggravated.
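A sketch of that pattern, in Python. The difficulty names match Civ's tiers, but the function and the multipliers are invented for illustration; they are not Civilization's actual mechanics.

```python
# Illustrative only: the multipliers are made up to show the shape of the
# "cheating AI" pattern, not the game's real numbers.
DIFFICULTY_BONUS = {
    "chieftain": 0.8,   # the AI gets less than the player
    "prince":    1.0,   # parity
    "emperor":   1.6,   # same decisions, more stuff
    "deity":     2.5,
}

def ai_production(base_production: float, difficulty: str) -> float:
    """Scale the AI's output. Its decision logic is untouched: it is not
    smarter at higher levels, only richer."""
    return base_production * DIFFICULTY_BONUS[difficulty]

# A true stand-in would scale differently: deeper planning, better target
# selection, fewer blunders; that is, changes to how it decides rather
# than to how much it earns.
```

Because the only knob being turned is a resource multiplier, the opponent stops feeling like a person playing better and starts feeling like a math problem to be optimized against.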
(To be fair, some players greatly enjoy the higher difficulty levels. However, they’re usually knowledgeable about the game, know they’re in for an optimization problem, and are specifically seeking that experience.)
Civilization demonstrates–has in fact been demonstrating for years–that a really tasty apple is not a substitute for an orange. When designing a pure co-op, follow the example of Pandemic, Forbidden Desert, and Quake’s excellent bots: figure out whether you need a puzzle or a stand-in and then deliver fully on that experience. Slipping elements of one into the other is apt to confuse the game’s message and frustrate players.