While websites can collect lots of information about readers, how exactly this should all be measured is still unclear. Here are four options:
Uniques: Unique visitors is a useful metric because it counts monthly readers rather than raw clicks. But it's a blunt one: it measures people, not meaningful engagement. For example, Facebook viral hits now account for a large share of traffic at many sites. There are one-and-done nibblers on the Web and there are loyal readers, and monthly unique visitors can't tell you the difference.
Page Views: They’re good because they measure clicks, an indication of engagement that unique visitors doesn’t capture (e.g., a blog with loyal readers will have a higher ratio of page views to visitors, since the same people keep coming back). They’re bad for the same reason: they can be gamed. A 25-page slideshow of the best cities for college graduates can rack up 25X more views than a one-page article with all the same information. If ads are reloaded on each page of the slideshow, the PV metric says the slideshow is up to 25X more valuable. But that’s ludicrous.
Time Spent/Attention Minutes: Page views and uniques tell you an important but incomplete fact: the article page loaded. They don’t tell you what happened after the page loaded. Did the reader click away? Did he stay for 20 minutes? Did he open the browser tab and never read the story? These would be nice things to know, and measures like attention minutes can begin to tell us. But, as Salmon points out, they still don’t paint a complete picture. Watching a five-minute video and deciding it was stupid seems less valuable than watching a one-minute video that you share with friends and praise. Page views matter, and time spent matters, but reaction matters, too. This suggests two more metrics …
Shares and Mentions: “Shares” (on Facebook, Twitter, LinkedIn, or Google+) ostensibly tell you something that neither PVs, nor uniques, nor attention minutes can tell you: They tell you that visitors aren’t just visiting. They’re taking action. But what sort of action? A bad column will get passed around on Twitter for a round of mockery. An embarrassing article can go viral on Facebook. Shares and mentions can communicate the magnitude of an article’s attention, but they can’t always tell you the direction of the share vector: Did people share it because they loved it, or because they loved hating it?
Here are some potential options for sorting this all out:
1. Developing a scale or index that combines all of these factors. It could be as simple as counting each of the four metrics for 25%, or the components could be weighted differently.
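To make the index idea concrete, here is a minimal sketch of option 1. The metric names, benchmark values, and normalization scheme are illustrative assumptions, not an industry standard; the equal 25% weighting is the "each counts for 25%" case described above.

```python
def engagement_index(metrics, benchmarks, weights=None):
    """Combine four traffic metrics into a single 0-100 score.

    metrics and benchmarks map metric name -> raw count. Each metric
    is normalized against its benchmark (capped at 1.0) so that no
    single count, such as slideshow-inflated page views, can dominate.
    """
    names = ["uniques", "page_views", "attention_minutes", "shares"]
    if weights is None:
        # The simple case: each of the four metrics counts for 25%.
        weights = {name: 0.25 for name in names}
    score = 0.0
    for name in names:
        normalized = min(metrics[name] / benchmarks[name], 1.0)
        score += weights[name] * normalized
    return 100 * score

# Hypothetical monthly numbers for one site, against benchmark targets.
site = {"uniques": 800_000, "page_views": 3_000_000,
        "attention_minutes": 1_200_000, "shares": 40_000}
bench = {"uniques": 1_000_000, "page_views": 5_000_000,
         "attention_minutes": 2_000_000, "shares": 100_000}

print(round(engagement_index(site, bench), 1))  # -> 60.0
```

Changing the weights dict is how the "components could be weighted differently" variant would work, e.g. giving attention minutes 40% and page views 10%.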
2. Heavyweights in the industry – whether particular companies, advertisers, or analytics leaders – could make a decision about which of these matters most. For example, comments after this story note the problems with Nielsen television ratings over the decades, yet Nielsen has long had a stranglehold on that area.
3. Researchers outside the industry could “objectively” develop a measure. This may be unlikely, as outside actors have less financial incentive, but perhaps someone sees an opportunity here.
In the meantime, there is plenty of information on online readership to look at, websites and companies can claim various things with different metrics, and websites and advertisers will continue to have a strong financial interest in all of this.