
Philosophy of Game Design – Part Two


To review from Part 1: Plato valued absolute truth, irrespective of player preferences, and so he argued that good games come from good developers. Aristotle had a slightly more pluralistic, player-dependent account of truth, and so he argued that good games come from good players – and “good players” are skilled players who can beat difficult games.

For Part 2, we’ll derive some additional philosophies from Aristotle’s account – some more modern, mainstream player-centric theories that are all the rage right now.

But first, some history that’s crucial for understanding those approaches:


In 1982, Atari had a wildly popular videogame console in the US, but didn’t regulate who could publish games – so in 1983, the industry crashed under the collective weight of so many poorly designed games made by pet food companies and others of their ilk. Gamers have never forgotten: We’re obsessed with whether a game is “too short” or if it was “worth it,” and videogame reviews, unlike their literary, music, film, and art counterparts, routinely take price into account.

So now we quantify: How many weapons, levels and hours of playtime? You could only fit so many levels into the limited memory of an NES cartridge, so developers found other ways to inflate playtime – Mega Man reuses levels and bosses in more challenging ways, Final Fantasy recolors enemy sprites for more powerful variants – because a more difficult game took longer to beat, which made it a more “valuable” game in the end.

But, as we mentioned before, relatively few people have what it takes to master videogames: Namely, enough disposable income (or allowance) to pay for these games and several long, uninterrupted stretches of free time to master these games, not to mention a whole lot of luck, skill and perseverance.

Such people were usually middle-class teenagers, the source of the “gamer” stereotype that’s thankfully dying today. These gamers had internalized the crash of 1983, and so had the industry: it sought stability through stringent quality control, through an emphasis on general “entertainment” (e.g. “wow, the PlayStation 2 plays DVDs too!”) – and more recently, by expanding its audience through accessibility.

All modern player-centric design philosophies re-cast the “good player” – from the classic Aristotelian notion of “skilled player” to “every player.”

Now as philosophers, we have to ask: What does it mean to be accessible?

For one sense of “accessible,” perhaps we can take the release of Valve’s FPS puzzler Portal as a watershed moment in this field.

Portal defined accessibility as “almost anyone can play and beat this game.” It was rather short, yet few complained about its length. (It was also part of the Orange Box, a collection of five games for $50 that utterly exploded our collective notion of value.)

While accessibility had been an industry concern for many years leading up to the game’s release, never had it been so fundamentally integrated into public accounts of the development process. Much of the press and interviews focused on how frequent testing decided which puzzles to keep and which to reject.

This emphasis on collecting data – most often quantitative data to balance multiplayer games – is an empirical approach to game design. Here, accessibility means posing a hypothesis (“If the build time for a Protoss Zealot is longer, it will balance early game harassment.”) and collecting evidence to confirm or refute that hypothesis (“Protoss are now winning fewer matches under four minutes in the Gold league.”).
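If you’ll forgive a sketch in code: confirming that kind of hypothesis usually reduces to comparing win rates before and after a change. Here’s a hand-rolled two-proportion z-test in Python – the win counts, league, and significance threshold are all invented for illustration:

```python
import math

def two_proportion_z(wins_a: int, games_a: int, wins_b: int, games_b: int) -> float:
    """Z-statistic for the difference between two win rates."""
    p_a, p_b = wins_a / games_a, wins_b / games_b
    p_pool = (wins_a + wins_b) / (games_a + games_b)  # pooled win rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / games_a + 1 / games_b))
    return (p_a - p_b) / se

# Hypothetical Gold-league data: sub-4-minute Protoss wins before/after the patch.
z = two_proportion_z(wins_a=480, games_a=1000,   # before: 48% win rate
                     wins_b=430, games_b=1000)   # after:  43% win rate
print(f"z = {z:.2f}")  # |z| > 1.96 would be significant at the 5% level
```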


Charts, graphs, heat maps, death maps, kill maps, eye tracking, heart rate monitors, player analytics – the empirical approach to game design argues that collecting player data and interpreting it properly makes good games.
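A death map, for instance, is little more than event coordinates binned into grid cells. A toy sketch in Python – the coordinates and bin size here are made up:

```python
from collections import Counter

BIN_SIZE = 128  # world units per heat-map cell (arbitrary for this sketch)

def death_heatmap(deaths: list[tuple[float, float]]) -> Counter:
    """Bin (x, y) death positions into grid cells and count deaths per cell."""
    return Counter((int(x // BIN_SIZE), int(y // BIN_SIZE)) for x, y in deaths)

deaths = [(130.0, 900.5), (140.2, 910.0), (135.7, 905.1), (700.0, 64.0)]
for cell, count in death_heatmap(deaths).most_common(3):
    print(f"cell {cell}: {count} deaths")  # hot spots worth a designer's attention
```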


(Taking that idea a hundred steps further, logical positivism argues that anything unscientific isn’t verifiable and thus is meaningless, which in itself is an unscientific statement, which is partly why logical positivism quickly died the way it did.)

But this kind of data-driven design is plagued by similar problems posed in the philosophy of science:

When is data accurate / pertinent, and how do you go about collecting it?

If you collect data from highly competitive clan servers, or perhaps from someone who’s never played a videogame before in their life, are those sets of data valid for balancing the game for everyone else? (It depends.) Should we instead test on some sort of “average player” and if so, then who is that player? (It depends.) Is that really the best way to achieve accessibility, or do we end up pleasing no one by trying for everyone? (It depends.)
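To make the “it depends” concrete: here’s the same hypothetical metric computed over three different test pools, each of which would pull the tuning in a different direction (all numbers invented):

```python
# Hypothetical time-to-kill samples (seconds) from three test pools.
samples = {
    "clan server":  [0.8, 0.9, 1.1, 0.7],
    "first-timers": [6.2, 7.5, 5.9, 8.1],
    "mixed public": [2.1, 3.4, 1.8, 4.0],
}

for segment, times in samples.items():
    mean = sum(times) / len(times)
    print(f"{segment:>12}: mean TTK {mean:.1f}s")
# Balance around any one pool and the tuning may be wrong for the other two.
```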

And then how do you go about interpreting that data you’ve just collected?

Imagine in Team Fortress 2 that data indicates fewer players are playing as Spies – does that mean Pyros are overpowered or that Engineers are too difficult to kill or something else entirely? (We would need more data.) And if Engineers are too difficult to kill as a Spy, is it actually a level design problem with specific overpowered build sites on popular maps, or is it a sound-related bug where the Spy’s cloak sound is too loud, or is it a balancing issue with how the Spy’s cloak doesn’t last long enough to get past the front line? (We would need more data.) Or is this a good thing, to have so few players playing as Spies? (It depends.)

But now let’s say you want to know why a player keeps falling off a cliff.

Do you track the player’s camera position and vector to produce a heat map of what they look at, to determine whether they notice the “Danger! Don’t Fall Off!” sign, and increase the contrast on the sign texture to compensate?

Do you map their movement vectors against the level’s collision model – maybe their movement speed doesn’t decelerate fast enough – or do you increase the friction parameter on the dirt materials?
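For what it’s worth, the first of those options is mechanically simple. A hedged sketch: replay logged camera samples and ask whether the sign ever fell inside the player’s view cone – the sign position, field of view, and log format are all assumptions here:

```python
import math

FOV_HALF_ANGLE = math.radians(30)   # assume a 60-degree cone of attention
SIGN_POS = (50.0, 0.0, 10.0)        # hypothetical world position of the sign

def looked_at_sign(cam_pos, view_dir):
    """True if the sign lies within the view cone for this camera sample."""
    to_sign = tuple(s - c for s, c in zip(SIGN_POS, cam_pos))
    dist = math.sqrt(sum(d * d for d in to_sign))
    if dist == 0:
        return True
    dot = sum(v * d for v, d in zip(view_dir, to_sign)) / dist  # view_dir is unit-length
    return math.acos(max(-1.0, min(1.0, dot))) <= FOV_HALF_ANGLE

# Replay a logged run: one (camera position, unit view vector) pair per frame.
frames = [((0.0, 0.0, 10.0), (1.0, 0.0, 0.0)),
          ((10.0, 0.0, 10.0), (0.0, 1.0, 0.0))]
noticed = sum(looked_at_sign(p, v) for p, v in frames)
print(f"sign in view on {noticed}/{len(frames)} frames")
```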

Do you just ask them, “Why do you keep falling off the cliff?”

Maybe this behaviorist notion, that we can deduce a player’s intention from observing their actions, is just side-stepping the issue. Why not just solicit player feedback directly and have them verbalize their intentionality? Social liberalism holds that all members of society should have (at least some) input into the running of their government.

While the empirical school of game design collects quantitative player data, this social liberal approach collects a form of qualitative player data through focus groups, surveys, and analyzing player feedback from emails, forums and blogs.

The social liberal account holds that good games come from listening to as many individual players as possible and interpreting that feedback properly. Here, “accessibility” means decentralizing power and sharing the reins of design.
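In practice, even that qualitative feedback tends to get reduced to something countable before anyone acts on it. A crude sketch – tagging hypothetical forum posts with themes via keyword matching (the themes, keywords and posts are all invented):

```python
from collections import Counter

# Hypothetical theme keywords a designer might tag feedback with.
THEMES = {
    "difficulty": ("too hard", "unfair", "impossible"),
    "balance":    ("overpowered", "nerf", "op"),
    "bugs":       ("crash", "glitch", "broken"),
}

def tag_feedback(posts: list[str]) -> Counter:
    """Count how many posts touch each theme (a post may hit several)."""
    tally = Counter()
    for post in posts:
        text = post.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                tally[theme] += 1
    return tally

posts = ["The last boss is too hard!",
         "Please nerf the shotgun, it's overpowered.",
         "Game crashed twice on level 3."]
print(tag_feedback(posts))  # Counter({'difficulty': 1, 'balance': 1, 'bugs': 1})
```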

(As a sort of pseudo-variant, perhaps a neoliberal approach would argue for feedback from clans and guilds, or maybe third-party vendors and game publishers, and value that over individual players’ opinions. The resulting design changes might trickle down and indirectly help individual players.)

In Left 4 Dead 2, players vote for which game modes to keep; in Halo: Reach, Bungie uses voting results to balance multiplayer playlists. Increasingly, players are now making game design decisions through direct democracy.

Don’t stretch this political analogy too far, though. Compared to citizens in real-life constitutional democracies, players have very little political power and rarely get real input on design. It’s still the developers who sort feedback to determine what is signal and what is noise, and they ultimately do the design.


Plus, there’s another reason not to base your game design on player feedback: Players often change their opinions or stop playing entirely.

Let’s return briefly to the empirical approach and quantitative data, with the mindset that social liberal player feedback isn’t actually shared governance but rather just more data – qualitative data.

How do you know that a particular set of data or interpretation will hold true for the future? Many players could suddenly start playing as Spies for some reason. Maybe one day, suddenly your entire player-based economy uses Stone of Jordans as currency instead of gold or gems. Or tomorrow, gravity could suddenly cease to exist.

This is, more or less, the core problem of empiricism as posed by David Hume: How do we know that observable phenomena will continue to act that way, consistently, in the future?

People are much more unstable than the laws of nature, whether in their feedback and rants on forums or their erratic playstyles that could abruptly change upon reading a guide or watching a YouTube video of a strategy.

We can’t collect more player statistics or solicit more player feedback in order to decide whether collecting statistics or feedback is good; that is, we can’t use induction to prove the validity of induction because that’s circular logic.

However, that very reasoning about using logic is a form of deduction from a set of premises – and to prove the validity of deduction, we can’t use deduction because that’s circular logic too – so we must use induction to prove the validity of deduction … but we just used deduction to argue for the fallibility of induction!

It’s okay if you’re confused – so was Hume. In the end, he adopted a kind of common sense “wait and see” approach, a type of practical skepticism. “Don’t worry about whether it will hold true forever, but just worry about whether it holds true for now.”

And that, I guess, is a philosophical justification for frequent game patches, MMOs and Valve’s “games as services” mantra (as of this writing, there have been 150 patches to Team Fortress 2).

Compare this attitude to the classic Aristotelian conception of player-centrism – players might’ve complained that “Mega Man is too hard because of Cold Fusion Man” – and Capcom’s response probably would’ve been, “How did you get this number?”

Suddenly Aristotle doesn’t look so pluralistic and liberal anymore – instead, his account seems immovable, static and unresponsive.

Perhaps we must accept that a “good” game design is only good for a while, until the player data indicates it isn’t good anymore – and then you redesign and rebalance it until it’s good again. This is distinctly a player-centric notion, the idea that a developer must “do right” by the community of players.

So what makes a good game?

Perhaps it’s the willingness to change it.

Robert Yang is currently an MFA student studying “Design and Technology” at Parsons, The New School for Design. Before, he studied English and taught game design at UC Berkeley. If he’s famous for anything, it’s probably for his artsy-fartsy Half-Life 2 mod series “Radiator” that’s still (slowly) being worked on.

