COE Reviews RAQ Update — Experiment #627!

You’re back.

Yup. Back with a vengeance.

 

New features, I presume?

Indeed-y do. First up: “Plays Like”, inspired by the folks over at Extra Credits. It’s a small section after “Parent Talk” that likens the game in question to other games out there. We may expand it later by comparing a game to other forms of entertainment like film and TV, but it will mostly focus on comparing games to each other. Also, just as a reminder to all four of you reading this, we’ve added (coughrippedoffkotakucough) a “Review Basis” section, which describes how many hours the reviewer has played, along with the number of modes played and achievements unlocked.

 

Nice. What’s the other feature?

Scores are back.

 

…come again?

Yeah, we’re going to start using numbers again.

 

…the same scores that you’ve been against since the new review format?

More or less.

 

I give up. You guys never make up your minds on anything! Give me one good reason why you’re using scores again.

We’re trying to maximize the viewership of our site. Without scores attached to our game reviews, we can’t get them listed on sites such as Metacritic and GameStats. We need more views so we can go to E3.

 

Fair enough. You did say “more or less”, though. What are you changing?

While we’re basing this on our old system, we want to bring in all sides of the argument and be as fair as possible. As we previously mentioned, reviews are opinion pieces, and we now realize that scores fall under that opinion as well. There’s no perfect way to score, and games are never scored completely fairly when opinion is involved.

 

So here’s how this is going to work. We’re doing this in three steps:

1) First off, we won’t score individual categories, for simplicity’s sake. There will only be a general score on a 10-point scale, moving in increments of 0.5 (again, for simplicity’s sake). However, it will be given twice. The first score is an average scale based on the general opinion of the internet; sites such as Metacritic and GameStats, along with user reviews, will help weigh this scale.

Here’s how this scale will look: 7 +/- 1.

+/- literally means plus/minus, and 7 is the average of this scale. As you can see, this scale is normally limited to plus or minus 1 point, but it can stretch to 2 as well, like so: 7 +/- 2. The latter is reserved for games with highly conflicting scores, while the former represents your standard scale.

 

2) The second is our personal score as a site, which will most likely fall within this scale, though that isn’t required should the reviewer feel that his or her score has to sit outside of it. Let’s bring that example up again: 7 +/- 1. If the reviewer gives the game an 8, the score is termed INFLATED. A 7 is NEUTRAL. A 6 means DEFLATED. The term OUT OF SCALE is used in combination with those inflation and deflation terms should a reviewer insist on scoring outside the scale entirely. So far, everything will be displayed like so:

Score Scale: 7 +/- 1

Personal Score: 8 (INFLATED)

 

3) Here’s where opinion comes into play. Finally, the reviewer will write up the main reasons for the over- or under-inflation of the score, reflecting both the scale itself and what he or she has written in the review. Like so:

Reason for +1 inflation: over-the-top humor

Reason for -1 deflation: conflicting controls

 

The reviewer will then mark which reasons he or she actually used to affect the score, like so:

Reason for +1 inflation: over-the-top humor (used)

Reason for -1 deflation: conflicting controls

 

This reflects our current ongoing example and can vary depending on how the reviewer wants to use this scale; a rough sketch of how the whole thing fits together follows below.
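To make the three steps above a bit more concrete, here’s a rough, hypothetical sketch in Python of how the scale, the personal score label, and the reason list could fit together. The names here (ScoreScale, Reason, review_summary) are made up purely for illustration; this isn’t an actual COE Reviews tool, just a model of the format described above.

    # Rough sketch only: class and function names are invented for this post,
    # not part of any real COE Reviews tooling.
    from dataclasses import dataclass

    @dataclass
    class Reason:
        delta: float        # e.g. +1.0 for inflation, -1.0 for deflation
        note: str           # e.g. "over-the-top humor"
        used: bool = False  # marked once the reviewer applies it to the score

    @dataclass
    class ScoreScale:
        average: float      # the internet-consensus average, e.g. 7.0
        spread: float = 1.0 # usually +/- 1; +/- 2 for highly conflicting scores

    def label(scale: ScoreScale, personal: float) -> str:
        """Classify the personal score against the consensus scale."""
        tag = ("INFLATED" if personal > scale.average
               else "DEFLATED" if personal < scale.average
               else "NEUTRAL")
        if abs(personal - scale.average) > scale.spread:
            tag += ", OUT OF SCALE"
        return tag

    def review_summary(scale: ScoreScale, personal: float, reasons: list) -> str:
        """Build the display block shown in steps 1 through 3 above."""
        lines = [
            f"Score Scale: {scale.average:g} +/- {scale.spread:g}",
            f"Personal Score: {personal:g} ({label(scale, personal)})",
        ]
        for r in reasons:
            kind = "inflation" if r.delta > 0 else "deflation"
            lines.append(f"Reason for {r.delta:+g} {kind}: {r.note}"
                         + (" (used)" if r.used else ""))
        return "\n".join(lines)

    # The running example from this post: scale 7 +/- 1, personal score of 8.
    print(review_summary(ScoreScale(average=7), personal=8,
                         reasons=[Reason(+1, "over-the-top humor", used=True),
                                  Reason(-1, "conflicting controls")]))

Running that prints the same block as our ongoing example, from “Score Scale: 7 +/- 1” down to the unused “conflicting controls” line.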

 

This sounds overly complicated. What gives?

All I want to do is highlight the terms: inflated, neutral, and deflated. This seems like the most efficient way to do so. These terms were inspired by the many gamers who claim that gaming media sites tend to over- and under-inflate scores, constantly countering with their personal opinions on what a fair score for a specific game should be. The truth of the matter is that none of us can ever give a unified score that will satisfy readers around the world, hence the liberal use of these terms attached to our own scores. Like-minded readers may agree with whatever score we give, while others tend to reply with their own. It’s a never-ending cycle of over- and under-inflating numbers.

 

As usual, any questions and suggestions from our staff and readers are welcomed with open arms. Your opinions will help shape this experiment into a healthy scoring system. Don’t forget, we’re keeping everything we’ve created in our current review format; these are just add-ons to help get us into the mainstream. A review template will be up soon for everyone to read.
