Social scientists critique Facebook’s study claiming the news feed algorithm doesn’t lead to a filter bubble

Several social scientists have raised concerns about Facebook’s recent finding that its news feed algorithm matters less than individual users’ own choices in limiting what they see to what they already agree with:

But even that’s [sample size] not the biggest problem, Jurgenson and others say. The biggest issue is that the Facebook study pretends that individuals choosing to limit their exposure to different topics is a completely separate thing from the Facebook algorithm doing so. The study makes it seem like the two are disconnected and can be compared to each other on some kind of equal basis. But in reality, says Jurgenson, the latter exaggerates the former, because personal choices are what the algorithmic filtering is ultimately based on:

“Individual users choosing news they agree with and Facebook’s algorithm providing what those individuals already agree with is not either-or but additive. That people seek that which they agree with is a pretty well-established social-psychological trend… what’s important is the finding that [the newsfeed] algorithm exacerbates and furthers this filter bubble.”

Sociologist and social-media expert Zeynep Tufekci points out in a post on Medium that trying to separate and compare these two things represents the worst “apples to oranges comparison I’ve seen recently,” since the two things that Facebook is pretending are unrelated have significant cumulative effects, and in fact are tied directly to each other. In other words, Facebook’s algorithmic filter magnifies the already human tendency to avoid news or opinions that we don’t agree with…

Christian Sandvig, an associate professor at the University of Michigan, calls the Facebook research the “not our fault” study, since it is clearly designed to absolve the social network of blame for people not being exposed to contrary news and opinion. In addition to the framing of the research — which tries to claim that being exposed to differing opinions isn’t necessarily a positive thing for society — the conclusion that user choice is the big problem just doesn’t ring true, says Sandvig (who has written a paper about the biased nature of Facebook’s algorithm).

Research on political echo chambers has grown in recent years and has included examinations of blogs and TV news channels. Is Facebook “bad” if it follows the same pattern of reinforcing boundaries? It may not be surprising if it does, but I’m reminded of what I’ve read about Mark Zuckerberg’s intentions for Facebook: to bring people together in ways that wouldn’t happen otherwise. So, if Facebook itself has the goal of crossing traditional boundaries, which are usually maintained by homophily (people choosing to associate with people largely like themselves) and by protecting the in-group against out-group interlopers, does this mean the company is not meeting its own goals? I recently took a user survey from Facebook that included little about crossing boundaries and instead asked about things like having fun, being satisfied with the Facebook experience, and whether I was happy with the number of friends I have.
