The (terrible?) world of “professional” Amazon reviewers

A recent study of some of Amazon.com’s top 1,000 reviewers has PC Magazine writer John Dvorak questioning the validity of their reviews:

In the first academic study of its kind, Trevor Pinch, Cornell University professor of sociology and of science and technology studies, independently surveyed 166 of Amazon’s top 1,000 reviewers, examining everything from demographics to motives. What he discovered was that 85 percent of those surveyed had been approached with free merchandise from authors, agents or publishers.

Pinch also found that the median age range of the reviewers he surveyed was 51 to 60, a surprise, he said, because the image of the internet is more of a young person’s thing. Amazon is encouraging reviewers to receive free products through Amazon Vine, an invitation-only program in which the top 1,000 reviewers are offered a catalog of free products to review…

This is the fraud aspect of the process that cannot be tolerated. And now to find out they are in a much older demographic makes me think they are just product hoarders who will say what they need to say to get more products. This conclusion is hinted at by the professor.

I do not like man-on-the-street reviews. I never have, and I’ve always thought they could be easily corrupted by smart public relations folks who have already dived into what they call social media. This includes phony personas on Twitter and Facebook that are used to sway public opinion, shipping free goodies to “influential” bloggers, and things like this Amazon scandal.

Dvorak is not really arguing that reviews lack value but rather that, because Amazon does not fully disclose how these reviewers operate, customers could be duped. The problem here is trust: Dvorak and others might assume that reviewers write out of the goodness of their hearts when in fact they are “professionals” who are being “paid” in free merchandise. This is a classic gatekeeper problem: how do you know that a reviewer is trustworthy and giving unvarnished opinions? There are plenty of critics these days writing for various media outlets and websites, and I suspect many average readers have to work through multiple reviews from a single critic to see whether the critic’s tastes line up with their own or whether the critic is even consistent.

Of course, Amazon relies on a crowdsourcing approach, just like aggregator websites such as Rotten Tomatoes or Metacritic. Do these top reviewers really sway people’s opinions about products when there are often many others reviewing the same products?

Why not ask Amazon whether critical reviewers have been kicked out of these programs? Dvorak suggests that these reviewers speak positively about products just in order to keep receiving them; couldn’t Amazon fight back against this?

My first thought when I saw this study a while back was to wonder how confident Pinch could be in findings based on 166 reviewers. Why not go for a larger sample out of the top 1,000 reviewers?
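For what it’s worth, here is a quick back-of-the-envelope check in Python. It assumes, and this is my assumption rather than anything stated in the study, that the 166 respondents were a simple random sample of the top 1,000; nonresponse could bias the estimate in either direction:

```python
# Rough margin of error for the "85 percent approached" figure, assuming
# simple random sampling of 166 from a population of 1,000 (my assumption;
# the study's actual response pattern could bias things either way).
import math

N, n, p, z = 1000, 166, 0.85, 1.96  # population, sample, estimate, 95% level
fpc = math.sqrt((N - n) / (N - 1))           # finite population correction
moe = z * math.sqrt(p * (1 - p) / n) * fpc   # standard formula for a proportion

print(f"85% +/- {moe:.1%}")  # about +/- 5 percentage points
```

If that arithmetic is right, 166 respondents already pin the 85 percent figure down to within about five percentage points, so the bigger worry may be who chose to respond rather than how many did.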

(Side note: at the end, Dvorak applauds Pinch for tackling this topic:

By the way (and off topic), you should read my writings over the past 30 years, because I have been hounding sociologists around the world to begin to study these sorts of computer and Internet activities. Give Professor Pinch an award, will you! Maybe that will encourage more studies.

Maybe so.)

Ben Folds + Nick Hornby = new album

Time reports on the collaborative efforts of musician Ben Folds and novelist Nick Hornby. Here is a description of what the creative process looked like for the album that was released September 28:

For Lonely Avenue, Hornby e-mailed lyrics to Folds, who turned them into songs. “The process almost goes against what I’ve learned, which is that songwriting should be a labor,” says Folds. “I find it so easy this way. It’s natural and quick.”

Well, not that quick. The songs on the album took several months to produce, with Hornby writing lyrics in London and sending them to Folds, who arranged and recorded the music in Nashville. An e-mail between the songwriters, reprinted in the liner notes, illustrates the complex process of turning one man’s words into another man’s music. Hornby wrote a song called “Belinda,” about an aging rock star who has to sing his big hit, a love song about someone he no longer loves, at every concert he plays. “You’ve quoted the chorus of this fabled hit song in the second line of the verse,” Folds says to Hornby in the e-mail, before going on to explain the difficulty of writing a song about a song, and the placement of the fake chorus in between the real one. “It was like a hell [of a] crossword puzzle.”

I am going to have to go to Amazon and listen to the song clips right away. To me, Folds and Hornby operate in the same creative genre: tales of sad-sack, hipster, occasionally endearing 20- and 30-somethings. So if the two are put together, will we get an extra-heavy dose of sad-sack hipsterdom? Or will they create something new?

The online “mega-reviewers”

One of the innovations of online stores is the ability for users to rate what they like and then for other users to base decisions on, or respond to, those ratings. A site like Amazon.com is amazing in this regard; within a few minutes, a reader can get a much better sense of a product.

But statistics from Netflix, another site that allows user ratings, indicate that many users don’t rate anything at all while a small percentage might be called “mega-reviewers”:

About a tenth of one percent (0.07%) of Netflix users — more than 10,000 people — have rated more than 20,000 items. And a full one percent, or nearly 150,000 Netflixers, have rated more than 5,000 movies. By contrast, only 60 percent of Netflix users rate any movies at all, and the typical person only gives out 200 starred grades.

This rating pattern might fit a Poisson or, more plausibly given the long tail, a negative binomial model in which many people rate no movies or only a few while a small percentage rate enormous numbers. (A useful statistic for pinning down the shape of the curve: given that 40 percent don’t rate anything, what is the median number of ratings among the 60 percent who do?)
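As a sketch of what such a curve could look like, here is a small Python simulation assuming a zero-inflated negative binomial, with the negative binomial generated as a gamma-mixed Poisson. The parameters are my own illustrative guesses, eyeballed against the quoted figures rather than fitted to any Netflix data:

```python
# Simulate rating counts under a zero-inflated negative binomial:
# 40% of users never rate (per the quoted 60% figure); among the rest,
# counts follow a negative binomial, drawn here as a gamma-mixed Poisson.
# The dispersion r and mean mu are illustrative guesses, not fitted values.
import numpy as np

rng = np.random.default_rng(0)
n_users = 1_000_000
p_never = 0.40
r, mu = 0.35, 700.0

raters = rng.random(n_users) >= p_never
lam = rng.gamma(shape=r, scale=mu / r, size=raters.sum())  # gamma-Poisson mix
counts = np.zeros(n_users, dtype=np.int64)
counts[raters] = rng.poisson(lam)

# Note: "share rating nothing" runs slightly above 40% because some
# nominal raters draw a Poisson count of zero.
print("share rating nothing:", (counts == 0).mean())
print("median among raters :", np.median(counts[raters]))
print("share above 5,000   :", (counts > 5000).mean())
```

With guesses in this range, the simulated median among raters and the share of users above 5,000 ratings land roughly in the ballpark of the quoted Netflix numbers; a heavier tail (a smaller r) would be needed to also reproduce the 20,000-plus crowd.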

The Atlantic talks to two of the mega-reviewers, who seem to be motivated by seeing what the system will recommend to them after taking in all of their input. Interestingly, they report that Netflix still recommends movies that, once watched, they don’t like.