Shawn Elliott put up some questions he planned to pose at a symposium with N'Gai Croal; the first person I know of to answer them was Mitch Krpata. I don't wish to steal his thunder, and will say he's been reviewing longer than I have, and he's done it much more professionally, too. So! Read his first.
I interviewed all three of them when I was even more wet behind the ears than I am now, and ever since then I've pretty much hung on everything they have to say. Since the questions are loosely related to what I talked to them about back then, I've been thinking about them for a while.
So I'm going to hop on the "me-too" train. I know that I'm not the most famous reviewer you've heard of, nor have I done all that many reviews--I just counted, and I've written 20 at Snackbar (formerly etoychest.org), 12 of them in the last two months. I'm also the editor at the site where I wrote them, and have had to decide our policy on content, especially reviews; there have been 53 reviews in the last two months, all of which I've edited. So while I don't necessarily count as part of the "game reviewers club," I feel it's not outrageous to claim these are questions that affect me.
Take this as the viewpoint of a writer and a site that wants to "break in"; the kind that is getting PR copies in the mail, but not of every game; the kind with hundreds of page views every single day, but not thousands.
Question 1: How much is on our minds before we begin playing any given game for review purposes? Will we imagine a range of probable scores that a heavily marketed, highly budgeted, and hugely anticipated game will get? What when the game is branded “budget” or is the work of a lesser-known, less-storied studio? If so, how closely have actual scores correlated with our assumptions?
This ends up being a double-edged sword for us, because some of our reviews are of copies we purchased with our own money. Think free games have an impact on game reviewers? So does a lack of them--we get more of the less-desired titles and trade them in for what we really wanted and anticipated in the first place. We did not receive a copy of Fallout 3, Fable II, or Far Cry 2, but our writers purchased the first two mainly by trading in review copies of games they didn't like.
On the other hand, these less-desired titles do a lot to fuel our content, so I like to think our self-consciousness ensures that we give them a fair shake; budget and indie titles, if they're good, are more likely to receive attention from us.
Question 2: Ought reviewers settle on a score before, during, or after writing a review? How consistent are our practices with our prescriptions? Have we, for instance, revised a score after writing our reviews, even though we advocate against it, and if so, why?
I don't see how settling on a score beforehand could ever be a good idea; you should be discovering the score in the writing process. While the number, if you use one, can be in your head at any time, it should be edited and reviewed like the text. A few times we've had a 4/5 or a 3/5 whose tone was inconsistent with the corresponding text; it's telling that my writers have been more willing to change the text rather than the number (though that may be due to the smaller scale we use).
Question 3: When possible, do we look at the scores that other critics give to the games that we're reviewing, as we review them? If so, are groupthink or iconoclasty potential problems?
I try not to, but I confess there have been a couple of games I reviewed that I had read about much earlier. We got Silent Hill: Homecoming for the 360 over a month after it came out, and I'd already read all the conversation about it by Leigh Alexander and Variety's Ben Fritz. This made it much harder to review--I was reviewing it in the context of everything I knew had been said about it. It was as if I were a literary academic being publicly asked what I thought about Finnegans Wake.
Question 4: Often times we will have repeatedly played and/or previewed games in development prior to reviewing them. Does this familiarity with a particular game's developmental process influence the scores that we assign to the final product in the way that a professor will take into consideration her students' limitations and proven potential when she evaluates papers at the end of the semester?
This hasn't been an issue for us, for obvious reasons. A few of us went to some shows once, but I don't think any review has correlated with pre-release exposure in the time I've been at the site.
Question 5: Review writing carries real consequence, especially among members of the enthusiast press. Once-warm PR people and game producers can become cold upon our publication of undesirable review scores, diminishing or eliminating our ability to secure subsequent interviews and access. Postmortem discussions and exclusive looks at the publisher and/or developer's forthcoming products are less likely. Conversely, a few publishers will permit us to post reviews before competitors, provided our review scores are favorable. Do such pressures produce a subliminal background or even enter our thoughts as we write reviews and assign scores?
As a small site, we usually end up getting games last. EA has recently warmed up to us and sent us Warhammer and Dead Space before release date, but both ended up getting good scores (from me, incidentally), so this hasn't become an issue, though it could. I already tell my writers to prioritize reviewing the games we get earlier over the ones we get later; while that seems fair, it has the potential to become a problem.
Question 6: Is grade inflation an ongoing problem?
As a whole, yes. Not all outlets suffer from it, but many do. Metacritic and GameRankings become problematic, especially in the 50-70, 70-80, and 80-90 ranges. Like, what's the big difference among them? Is a 74 average really "mixed reviews" while a 75 is "generally favorable"?
Question 7: Do scores determine our tone? Can a “3” encourage us to explain an aspect of a game in clearly negative terms where our attitude is actually less decided? Example: Game X's camera obscures the action, combat is irritatingly difficult, and “save” stations are few and far between. In our reviews, is Game X's plot, which we're still thinking through, more likely to become miserable than plain?
Ah, an interesting throwback to question 2. At Snackbar, the 5-point scale was introduced to be more practical and to sidestep many of these problems. However, since one point is a huge swing, the writers stick by their scores once they give them to me.
The process can become muddled, but at the end of the day the final version is what the reader will see, and for me the most important thing is that the score and text correspond and complement each other; it needs to be clear why the game ultimately got the score it did. If a review doesn't do this, I send it back and ask the writer to bring the two into alignment.
Question 8: Do scores encourage our readers to conduct a sort of text-to-number calculus where the two obviously negative statements in an otherwise positive-sounding review necessarily translate into every point deducted from the “10” that the game didn't get? Does this make reviews with high marks more likely to overlook fault, and reviews with low marks less likely to celebrate accomplishment?
I agree that many reviews have this problem. If it looks obvious that a game got a 3/5 or a 4/5 because the writer started at 5 and docked points for "penalties," I send the review back. Due to our scale, though, this usually hasn't been a problem.
Question 9: Which is more important to us, our scores or our copy? If the latter, have our responses revealed any inconsistencies between our attitudes and actions? Are we still convinced of the importance and power of scores?
The copy, obviously--I've already ended up answering this.
As a small site, we have to place some value on the number because we fear readers wouldn't accept a lack of numbers; we aren't a mainstream outlet with highly experienced writers. Also, we (*sigh*) want to get on Metacritic so we can increase our exposure. It's become a necessary evil; it seems like the bigger sites have to set the example, and thus a precedent, before we could ever get away with doing things differently. And the bad part is that I've suggested we switch to a 10-point scale soon for the sake of getting listed, because 20/40/60/80/100 seems too constrictive if we have to play with everyone else.
Related suggestions for Ethics section:
Have we ever submitted review scores to publishers prior to their publication? If so, why?
Have we ever submitted review copy to publishers prior to its publication? If so, why?
Have PR people suggested that specific critics review specific games? Have we complied with their suggestions?
No to all the above as far as Snackbar is concerned. We have, however, been asked when a review will be posted, but that seems harmless.
On the last question, though, I was once approached by a certain PR rep (through Facebook!) to cover a certain game because of an article I once wrote (not for Snackbar). That someone who has written as few paid articles as I have had already been singled out to cover a game because I would be predisposed to liking it was an uncomfortable wake-up call.
It's been fascinating how someone who has written as little as I have, at a site as small as Snackbar, has already experienced firsthand many of the little dances that writers and journos have with PR. I'm convinced no other industry has an environment where newcomers can be so easily...uh, accosted.
Reviews vs. Criticism
Question 1: What is the object of a review? What are the review writer's obligations?
Right now, it seems like most reviews read like automobile reviews: a checklist of features and how well they work mechanically.
What it ought to be is something Ebert said, which I quoted in an earlier post:
"Provide a sense of the experience. No matter what your opinion, every review should give some idea of what the reader would experience in actually seeing the film. In other words, if it is a Pauly Shore comedy, there are people who like them, and they should be able to discover in your review if the new one is down to their usual standard."
For Snackbar, I want writers to craft each review around a thesis statement; basically an "it is good/bad in this way or that; here's why" kind of thesis. We are still trying to figure out what we want to do as a site that offers something of unique value to readers, which makes the issue of reviews a very fuzzy and vexing one.
On a personal note, I am going to shop around an article I'm pitching on the five dealbreakers that can apply to every game: sociability, reckless and deliberate gaming, and (again, from Mitch) games that are rewarding in the areas of skill and content. A lack of recognition of these leads people to say irrelevant, useless things: that Bionic Commando is bad because there's no jumping, that an RPG or FPS campaign is bad solely because it is "linear," or that a game is bad because the player isn't able to change the outcome enough. These are preferences, not standards.
Question 2: If the purpose of a review is to suggest to consumers how they should spend their time and money, why do we avoid less-granular grading scales such as Buy, Try, or Avoid? Example: Giant Bomb founder and former Gamespot editorial director Jeff Gerstmann told MTV's Multiplayer blog that “'How can I save people money today?' is basically the kind of mentality that I tackle this stuff with.” Under Gerstmann's directorship, Gamespot reviewed games on a hundred-point scale. Is a 9.6 different than a 9.7 when the wisdom of a purchase is what the reviewer wants to communicate?
If reviews are serving as buyer's guides, the scale should be no finer than 1-10, and even that can be difficult enough. The fewer points on the scale, the better. Crispy's scale (buy it, try it, fry it) is not something I'd want to see everywhere, but it's a view I can appreciate; I'm always curious what their verdict is on any title I'm interested in.
Question 3: Actual sales rarely correlate with review scores in cases where games are not also heavily hyped and marketed. Increasingly, gamers pre-order games prior to the publication of reviews. Interactive demos allow our audiences to decide for themselves whether or not a game will be worth their dollars. In addition, word of mouth and message board discussions inform our potential audiences' purchasing decisions with an intimacy and directness that we cannot provide. Finally, review aggregation sites such as Metacritic mute the bias of individual reviewers and provide a bigger picture. Do these circumstances suggest that our self-perception is, well, delusional – a throwback to a time when magazines and websites were gaming's gatekeepers? If our audiences believe this, even if we do not, what are they really reading for?
I don't know! This is the question that has gotten everyone talking and self-analyzing again, but I tackled it in my last 3 or 4 blog posts.
I find it telling that it takes a very long time for people to figure out whether a game is one they would like. Even forums struggle; Dead Space is still a game most people cannot figure out just by reading about it, and I wish the recent arguments about innovation had taken place around that title rather than Mirror's Edge, since Dead Space did not innovate that much but did have solid delivery. It really should be the other title mentioned in these discussions (it too came out of EA's push for a little more new IP, interestingly enough), but I think everyone's burned out now.
Question 4: Can criticism (concerned with telling our audiences what they're spending time and/or money playing as opposed to whether or not a game is worth spending time and/or money to play) coexist with reviews? Is a competent review also a critique -- as is so often the case where lit, movies, and music are concerned -- or should we separate the two?
As with most elements of pop culture, it seems inevitable that the two will combine. However, buyer's-guide reviews will still exist without critique, so I hope criticism gets its own spot in the culture at large later on. It drives me crazy that you can discuss music, movies, books, or anything else with a stranger or in a group, and talk about them as critical and cultural products, but not games.
One thing at a time, I guess; ideally there would be space for all three types, with criticism and criticism/reviews in a state of growth. I don't worry about buyer's guides, obviously, because economics is a stronger force there; they will never go away.
Question 5: What can (or should) such criticism take into account? [Note: I don't want to jump the gun on the Evolving Reviews section here, so bear with me if you're wondering why I'm not yet asking certain obvious questions about the shape and challenges of videogame criticism.]
The framing suggests this is a very, very big question. It deserves as much space as the rest of these combined, perhaps. To be uber-brief, though, I do wish for more analysis in the vein of literary criticism, the kind done by Ian Bogost, and for analysis of games as social systems. The latter is a space I try to tackle; I'm kind of bad at it, but I get to mumble about it elsewhere.