Claim: Facebook wants to curate the news through an algorithm

Insiders have revealed how Facebook is selecting its trending news stories:

Launched in January 2014, Facebook’s trending news section occupies some of the most precious real estate in all of the internet, filling the top-right hand corner of the site with a list of topics people are talking about and links out to different news articles about them. The dozen or so journalists paid to run that section are contractors who work out of the basement of the company’s New York office…

The trending news section is run by people in their 20s and early 30s, most of whom graduated from Ivy League and private East Coast schools like Columbia University and NYU. They’ve previously worked at outlets like the New York Daily News, Bloomberg, MSNBC, and the Guardian. Some former curators have left Facebook for jobs at organizations including the New Yorker, Mashable, and Sky Sports.

According to former team members interviewed by Gizmodo, this small group has the power to choose what stories make it onto the trending bar and, more importantly, what news sites each topic links out to. “We choose what’s trending,” said one. “There was no real standard for measuring what qualified as news and what didn’t. It was up to the news curator to decide.”…

That said, many former employees suspect that Facebook’s eventual goal is to replace its human curators with a robotic one. The former curators Gizmodo interviewed started to feel like they were training a machine, one that would eventually take their jobs. Managers began referring to a “more streamlined process” in meetings. As one former contractor put it: “We felt like we were part of an experiment that, as the algorithm got better, there was a sense that at some point the humans would be replaced.”

The angle here seems to be that (1) the journalists who participated did not feel they were treated well and (2) journalists may not be part of the future process because an algorithm will take over. I don’t know about the first, but is the second a major surprise? The trending news will still require content to be generated, presumably created by journalists and news sources all across the Internet. Do journalists want to retain the privilege of not just writing the news but also choosing what gets reported? In other words, the gatekeeper role of journalism may slowly disappear if algorithms guide what people see.

Imagine the news algorithms that people might have available to them in the future: one that doesn’t report any violent crime (it is overreported anyway); one that only includes celebrity news (this might include politics, it might not); one that reports on all forms of government corruption; and so on. I’m guessing, however, that Facebook’s algorithm would be proprietary and would probably try to push people toward certain behaviors (whether that is sharing more on their profiles or pursuing particular civic or political actions).

Facebook wants global guidelines but has local standards

A recent addition to Facebook’s standards in Spain highlights a larger issue for the company: how to have consistent guidelines around the world while remaining respectful or relevant in local contexts.

For Facebook and other platforms like it, incidents such as the bullfighting kerfuffle betray a larger, existential difficulty: How can you possibly impose a single moral framework on a vast and varying patchwork of global communities?

If you ask Facebook this question, the social-media behemoth will deny doing any such thing. Facebook says its community standards are inert, universal, agnostic to place and time. The site doesn’t advance any worldview, it claims, besides the non-controversial opinion that people should “connect” online…

Facebook has modified its standards several times in response to pressure from advocacy groups – although the site has deliberately obscured those edits, and the process by which Facebook determines its guidelines remains stubbornly obtuse. On top of that, at least some of the low-level contract workers who enforce Facebook’s rules are embedded in the region – or at least the time zone – whose content they moderate. The social network staffs its moderation team in 24 languages, 24 hours a day…

And yet, observers remain deeply skeptical of Facebook’s claims that it is somehow value-neutral or globally inclusive, or that its guiding principles are solely “respect” and “safety.” There’s no doubt, said Tarleton Gillespie, a principal researcher at Microsoft Research, New England, that the company advances a specific moral framework – one that is less of the world than of the United States, and less of the United States than of Silicon Valley.

I like the shift in this discussion from free speech issues (mentioned later in the article) to issues of the particular moral frameworks that corporations hold and promote. Some might argue that simply by being a corporation there is a very clear framework: Facebook needs to make money. How can the company claim to be truly about connection when there is such an overriding concern? On the other hand, companies across industries have had to wrestle with this issue: when a company expands into additional cultures, how does it balance its existing moral framework with new ones? Customers are at stake, but so are basic concerns of dealing with people on their own terms and respecting other approaches to the world.

But, within a global capitalist system where Facebook plays a prominent role (in terms of rapid growth, connecting people, and market value), can it truly be “neutral”? Like many other behemoth companies (think McDonald’s or Walmart), it will certainly encounter its share of dissenters in the years to come.

How many Facebook friends can you depend on?

A new study suggests most Facebook friends cannot be depended on in times of trouble:

Robin Dunbar, a professor of evolutionary psychology at Oxford University, undertook a study to examine the connection between having lots of Facebook friends and having real friends.

He found that there was very little correlation between having friends on social networks and actually being able to depend on them, or even talking to them regularly.

The average person studied had around 150 Facebook friends. But only about 14 of them would express sympathy in the event of anything going wrong…

Those numbers are mostly similar to how friendships work in real life, the research said. But the huge number of supposed friends on a friend list means that people can be tricked into thinking they have more close friends than they actually do.

The last paragraph seems key: online or offline, people have a relatively small number of close relationships. As the saying goes, you learn who your friends are in times of trouble. Simply having a connection to someone – whether knowing them as an acquaintance or friending them on social media – is at a different level than having regular contact or providing mutual support. Using the words “real” and “fake” friends tries to get at that, but it would be better to use terms like close friend, acquaintance, or family member to denote the closeness of the relationship. Of course, when Facebook chose the term friends for everyone you link to on the site, this was very intentional and an attempt to prompt more connections and openness.

The Dunbar here is the same researcher behind Dunbar’s number, which suggests humans can maintain a maximum of around 150 stable relationships.

Cruz campaign using psychological data to reach potential voters

Campaigns not working with big data are behind: Ted Cruz’s campaign is working with unique psychological data as it tries to secure the Republican nomination.

To build its data-gathering operation widely, the Cruz campaign hired Cambridge Analytica, a Massachusetts company reportedly owned in part by hedge fund executive Robert Mercer, who has given $11 million to a super PAC supporting Cruz. Cambridge, the U.S. affiliate of London-based behavioral research company SCL Group, has been paid more than $750,000 by the Cruz campaign, according to Federal Election Commission records.

To develop its psychographic models, Cambridge surveyed more than 150,000 households across the country and scored individuals using five basic traits: openness, conscientiousness, extraversion, agreeableness and neuroticism. A top Cambridge official didn’t respond to a request for comment, but Cruz campaign officials said the company developed its correlations in part by using data from Facebook that included subscribers’ likes. That data helped make the Cambridge data particularly powerful, campaign officials said…

The Cruz campaign modified the Cambridge template, renaming some psychological categories and adding subcategories to the list, such as “stoic traditionalist” and “true believer.” The campaign then did its own field surveys in battleground states to develop a more precise predictive model based on issues preferences.

The Cruz algorithm was then applied to what the campaign calls an “enhanced voter file,” which can contain as many as 50,000 data points gathered from voting records, popular websites and consumer information such as magazine subscriptions, car ownership and preferences for food and clothing.
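The articles describe a pipeline of Big Five trait scores mapped to campaign-specific categories. As a rough illustration of what such psychographic sorting might look like, here is a toy sketch: the trait names and the category labels (“stoic traditionalist,” “true believer”) come from the article, but every rule, weight, and threshold below is invented for illustration and has nothing to do with Cambridge Analytica’s actual model.

```python
# Hypothetical sketch of psychographic voter categorization.
# The Big Five trait names and the category labels come from the article;
# all rules and thresholds are invented for illustration only.

BIG_FIVE = ["openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism"]

def assign_category(scores):
    """Map a voter's Big Five scores (each 0-1) to a messaging category.

    The cutoffs below are purely illustrative.
    """
    if scores["conscientiousness"] > 0.7 and scores["openness"] < 0.4:
        return "stoic traditionalist"
    if scores["agreeableness"] > 0.6 and scores["neuroticism"] > 0.6:
        return "true believer"
    return "persuadable"

voter = {"openness": 0.3, "conscientiousness": 0.8, "extraversion": 0.5,
         "agreeableness": 0.4, "neuroticism": 0.2}
print(assign_category(voter))  # -> stoic traditionalist
```

The point of the sketch is how little it takes: once voters carry trait scores, a handful of if-statements sorts an entire “enhanced voter file” into message buckets.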

Building a big data operation behind a major political candidate seems par for the course these days. The success of the Obama campaigns was often attributed to the tech whizzes behind the scenes. Since this is now fairly normal, perhaps we need to move on to other questions: what do voters think about such microtargeting, and how do they experience it? Does this contribute to political fragmentation? What is the role of the mass media amid more targeted approaches? How valid are the predictions of voter behavior (since they are based on particular social science data and theories)? How does this all significantly change political campaigns?

How far are we from just getting rid of the candidates altogether and putting together AI apps/machines/data programs that garner support…

 

More lurking, less sharing on Facebook

Social media interactions can thrive when users share more. Thus, when sharing is down on Facebook, the company is looking to boost it:

Surveys show users post less often on the social network, which relies on users for an overwhelming majority of its content. In the third quarter, market researcher GlobalWebIndex said 34% of Facebook users updated their status, and 37% shared their own photos, down from 50% and 59%, respectively, in the same period a year earlier.

Facebook users still visit the network often. Some 65% of Facebook’s 1.49 billion monthly users visited the site daily as of June. But these days, they are more likely to lurk or “like” and less likely to post a note or a picture…

So Facebook is fighting back with new features. Since May, the social network has placed prompts related to ongoing events at the top of some users’ news feeds, aiming to spur conversations. The prompts are partly based on a user’s likes and location, according to Facebook and companies working with Facebook…

Facebook has introduced other features to encourage sharing, including new emojis that give users a wider range of expressions beyond “like.” In March, Facebook launched “On This Day,” a feature that lets users relive and share past posts.

The article notes that this isn’t necessarily a big problem for now – Facebook is expected to announce a jump in revenue – but it could become a larger issue down the road if the site comes to be seen as boring. If users aren’t gaining new knowledge or reacting to interesting things posted by people they know, why should they keep coming back?

It would be important to find data to answer this question: is the decrease in sharing limited to Facebook or is it down across the board? This could be an issue facing just Facebook, which could then be related to its particular features or its age (it is ancient in social media terms). Or it might be a broader issue facing all social media platforms as users shift their online behavior. Users have certainly been warned enough about oversharing, and social norms have developed about how much an individual should share.

Facebook’s new emoji reactions based on sociological work

Facebook used sociological work to help roll out new emojis next to the “Like” button:

Adam Mosseri has a very important job. As head of Facebook’s news feed, Mosseri and his team were assigned the task of determining which six cartoon images would accompany the social network’s ubiquitous thumbs-up button. They did not take the task lightly. To help choose the right emoji to join “like,” Mosseri said Facebook consulted with several academic sociologists “about the range of human emotion.”…

The decision was reached after much deliberation. Arriving at the best of those trivial and common picture faces followed a lot of data crunching and outside help. Mosseri combined the sociologists’ feedback with data showing what people do on Facebook, he said. The goal was to reduce the need for people to post a comment to express themselves. “We wanted to make it easier,” he said. “When things are easier to do, they reach more people, and more people engage with them.”…

In order for something to qualify for the final list, it had to work globally so users communicating among various countries would have the same options, Mosseri said. One plea from millions of Facebook users, which the company ultimately ignored, was a request for a “dislike” button. Mosseri wanted to avoid adding a feature that would inject negativity into a social network fueled by baby photos and videos of corgis waddling at the beach. A dislike option, Mosseri said, wouldn’t be “in the spirit of the product we’re trying to build.”

Operation emoji continues at Facebook while the company monitors how Spaniards and Irish take to the new feature. The list isn’t final, Mosseri noted. The first phase in two European countries is “just a first in a round of tests,” he said. “We really have learned over the years that you don’t know what’s going to work until it’s out there, until people are using it.”

Facebook and Mark Zuckerberg have been clear for years that they do not want Facebook to spread negative emotions. Rather, the social network site is about finding and strengthening relationships. The emojis both avoid dislike (though the set of six includes one for sad and one for angry, which are different from dislike) and make it easier for people to react to what others post.

Here are two factors that could affect these reaction emojis:

  1. Facebook will be pressured to add more. But how many should they have? At what point do more options slow down reactions? Is there a proper ratio of positive to negative emojis? I’m guessing that Facebook will try to keep the number limited as long as it can.
  2. Users in different countries will use different emojis more and ask for different new options. At some point, Facebook will have to choose between universal emotions and providing country-specific options that appeal to particular values and expressions.

The potential to redline customers through Facebook

If Facebook is used to judge creditworthiness, perhaps it could lead to redlining:

If there was any confusion over why Facebook has so vociferously defended its policy of requiring users to display their real, legal names, the company may have finally laid it to rest with a quiet patent application. Earlier this month, the social giant filed to protect a tool ostensibly designed to track how users are networked together—a tool that could be used by lenders to accept or reject a loan application based on the credit ratings of one’s social network…

Research consistently shows we’re more likely to seek out friends who are like ourselves, and we’re even more likely to be genetically similar to them than to strangers. If our friends are likely to default on a loan, it may well be true that we are too. Depending on how that calculation is figured, and on how data-collecting technology companies are regulated under the Fair Credit Reporting Act, it may or may not be illegal. A policy that judges an individual’s qualifications based on the qualifications of her social network would reinforce class distinctions and privilege, preventing opportunity and mobility and further marginalizing the poor and debt-ridden. It’s the financial services tool equivalent of crabs in a bucket...

But a lot of that data is bad. Facebook isn’t real life. Our social networks are not our friends. The way we “like” online is not the way we like in real life. Our networks are clogged with exes, old co-workers, relatives permanently set to mute, strangers and catfish we’ve never met at all. We interact the most not with our best friends, but with our friends who use Facebook the most. This could lead not just to discriminatory lending decisions, but completely unpredictable ones—how will users have due process to determine why their loan applications were rejected, when a mosaic of proprietary information formed the ultimate decision? How will users know what any of that proprietary information says about them? How will anyone know if it’s accurate? And how could this change the way we interact on the Web entirely, when fraternizing with less fiscally responsible friends or family members could cost you your mortgage?
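To make the worry concrete, here is a minimal sketch of the kind of lending logic the excerpt warns about. Nothing here reflects Facebook’s actual patent or any real lender’s model; the averaging rule and the 620 cutoff are invented for illustration.

```python
# Hypothetical sketch of loan decisions keyed to a social network.
# The averaging rule and cutoff are invented; the patent described in
# the article does not disclose a formula.

def network_loan_decision(applicant_score, friend_scores, cutoff=620):
    """Approve a loan only if the average credit score of the
    applicant's network clears the cutoff -- regardless of the
    applicant's own score, which is the scenario the article fears."""
    if not friend_scores:
        return applicant_score >= cutoff
    network_avg = sum(friend_scores) / len(friend_scores)
    return network_avg >= cutoff

# A creditworthy applicant can be rejected purely because of
# their friends' scores:
print(network_loan_decision(720, [540, 580, 600]))  # -> False
```

Even this toy version shows the due-process problem raised above: the rejected applicant did nothing wrong and has no way to see, let alone correct, the inputs that sank the application.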

On one hand, there is no indication yet that Facebook is doing this. Is there any case of this happening with online data? On the other hand, the whole point of these social network sites is that they hold information that can be used to make money. Plus, they could offer to speed up the loan approval process if people just give them access to their online social networks. Why do you need mortgage officers and others to approve these things if a simple scan of Facebook would provide the necessary information?

Additionally, given how insecure our data is these days, redlining might be the least of our worries…

Zuckerberg on the role of sociology in Facebook’s success

A doctor recommending the liberal arts for pre-med students references Mark Zuckerberg describing Facebook in 2011:

“It’s as much psychology and sociology as it is technology.”

Zuckerberg went further in discussing the social aspects of Facebook:

“One thing that gets blown out of proportion is the emphasis on the individual,” he said. “The success of Facebook is really all about the team that we’ve built. In any company that’s going to be true. One of the things that we’ve focused on is keeping the company as small as possible … Facebook only has around 2,000 people. How do you do that? You make sure that every person you add to your company is really great.”…

On a more positive, social scale, Zuckerberg said the implications of Facebook stretch beyond simple local interactions and into fostering understanding between countries. One of Facebook’s engineers put together a website, peace.facebook.com, which tracks the online relationships between countries, including those that are historically at odds with one another.

Clearly, the sociological incentives for joining Facebook are strong, as users participate without being paid for their personal data. The social network site capitalizes on the human need to be social with the modern twist of having control over what one shares and with whom (though Zuckerberg has suggested in the past that he hopes Facebook opens people up to more sharing with new people).

I still haven’t seen much from sociologists on whether they think Facebook is a positive thing. Some scholars have made their positions clear; for example, Sherry Turkle highlights how humans can become emotionally involved with robots and other devices. Given the explosion of new kinds of sociability in social networks, sociologists could be making more hay of Facebook, Twitter, Instagram, and all of the new possibilities. But perhaps (1) it is difficult to assess these changes so close to their start and (2) the discipline sees much more pressing issues, such as race, class, and gender, in other areas.

To pay or not to pay for Facebook

Would you rather pay Facebook with money or data?

Not long ago, Zeynep Tufekci, a sociologist who studies social media, wrote that she wanted to pay for Facebook. More precisely, she wants the company to offer a cash option (about twenty cents a month, she calculates) for people who value their privacy, but also want a rough idea of what their friends’ children look like. In return for Facebook agreeing not to record what she does—and to not show her targeted ads—she would give them roughly the amount of money that they make selling the ads that she sees right now. Not surprisingly, her request seems to have been ignored. But the question remains: just why doesn’t Facebook want Tufekci’s money? One reason, I think, is that it would expose the arbitrage scheme at the core of Facebook’s business model and the ridiculous degree to which people undervalue their personal data…

The trick is that most people think they are getting a good deal out of Facebook; we think of Facebook to be “free,” and, as marketing professors explain, “consumers overreact to free.” Most people don’t feel like they are actually paying when the payment is personal data and when there is no specific sensation of having handed anything over. If you give each of your friends a hundred dollars, you might be out of money and will have a harder time buying dinner. But you can hand over your personal details or photos to one hundred merchants without feeling any poorer.

So what does it really mean, then, to pay with data? Something subtler is going on than with the more traditional means of payment. Jaron Lanier, the author of “Who Owns the Future,” sees our personal data not unlike labor—you don’t lose by giving it away, but if you don’t get anything back you’re not receiving what you deserve. Information, he points out, is inherently valuable. When billions of people hand data over to just a few companies, the effect is a giant wealth transfer from the many to the few…

Ultimately, Tufekci wants us to think harder about what it means when we pay with data or attention instead of money, which is what makes her proposition so interesting. While every business has slightly mixed motives, those companies that we pay live and die by how they serve the customer. In contrast, the businesses we are paying with attention or data are conflicted. We are their customers, but we are also their products, ultimately resold to others. We are unlikely to stop loving free stuff. But we always pay in the end—and it is worth asking how.

Perhaps we are headed toward a world where companies like Facebook would have to show customers (1) how much data they actually have about the person and (2) what that data is worth. But, I imagine the corporations would like to avoid this because it is better if the user is unaware and shares all sorts of things. And what would it take for customers to demand such transparency or do we simply like the allure of Facebook and credit cards and others products too much to pull back the curtain?

Is it going too far to suggest that personal data is the most important asset individuals will have in the future?

Mark Zuckerberg encouraging people to read sociological material

Mark Zuckerberg has been recommending an important book every two weeks in 2015, and his list thus far includes a number of works that touch on sociological material:

Zuckerberg’s book club, A Year of Books, has focused on big ideas that influence society and business. His selections so far have been mostly contemporary, but for his eleventh pick he’s chosen “The Muqaddimah,” written in 1377 by the Islamic historian Ibn Khaldun…

Ibn Khaldun’s revolutionary scientific approach to history has established him as one of the foundational thinkers of modern sociology and historiography…

The majority of Zuckerberg’s book club selections have been explorations of issues through a sociological lens, so it makes sense that he is now reading the book that helped create the field.

A Year of Books so far:

  • “The End of Power: From Boardrooms to Battlefields and Churches to States, Why Being In Charge Isn’t What It Used to Be” by Moisés Naím
  • “The Better Angels of Our Nature: Why Violence Has Declined” by Steven Pinker
  • “Gang Leader for a Day: A Rogue Sociologist Takes to the Streets” by Sudhir Venkatesh
  • “On Immunity: An Inoculation” by Eula Biss
  • “Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration” by Ed Catmull and Amy Wallace
  • “The Structure of Scientific Revolutions” by Thomas S. Kuhn
  • “Rational Ritual: Culture, Coordination, and Common Knowledge” by Michael Chwe
  • “Dealing with China: An Insider Unmasks the New Economic Superpower” by Henry M. Paulson
  • “Orwell’s Revenge: The 1984 Palimpsest” by Peter Huber
  • “The New Jim Crow: Mass Incarceration in the Age of Colorblindness” by Michelle Alexander
  • “The Muqaddimah” by Ibn Khaldun

An interesting set of selections. At the least, it suggests Zuckerberg is broadly interested in social issues and not just the success of Facebook (whether through gaining users or producing sky-high profits). More optimistically, perhaps Zuckerberg has a sociological perspective and can take a broader view of society. This could be very helpful given that his company is a sociological experiment in the making – not the first social networking site but certainly a very influential one that has helped pioneer new kinds of interactions as well as changed behaviors from news gathering to impression management.

The more cynical take here is that the book list is itself an impression management tool intended to bolster his reputation: look, I really do want the best for our users and society! However, would this be the set of books that would most impress the public or investors? Listing sociology books, as well as books on sociological topics, may only impress some.