Facebook wants global guidelines but has local standards

A recent addition to Facebook’s standards in Spain highlights a larger issue for the company: how to maintain consistent guidelines around the world while remaining respectful of, and relevant to, local contexts.

For Facebook and other platforms like it, incidents such as the bullfighting kerfuffle betray a larger, existential difficulty: How can you possibly impose a single moral framework on a vast and varying patchwork of global communities?

If you ask Facebook this question, the social-media behemoth will deny doing any such thing. Facebook says its community standards are inert, universal, agnostic to place and time. The site doesn’t advance any worldview, it claims, besides the non-controversial opinion that people should “connect” online…

Facebook has modified its standards several times in response to pressure from advocacy groups – although the site has deliberately obscured those edits, and the process by which Facebook determines its guidelines remains stubbornly obtuse. On top of that, at least some of the low-level contract workers who enforce Facebook’s rules are embedded in the region – or at least the time zone – whose content they moderate. The social network staffs its moderation team in 24 languages, 24 hours a day…

And yet, observers remain deeply skeptical of Facebook’s claims that it is somehow value-neutral or globally inclusive, or that its guiding principles are solely “respect” and “safety.” There’s no doubt, said Tarleton Gillespie, a principal researcher at Microsoft Research, New England, that the company advances a specific moral framework – one that is less of the world than of the United States, and less of the United States than of Silicon Valley.

I like the shift in this discussion from free speech issues (mentioned later in the article) to questions about the particular moral framework that corporations hold and promote. Some might argue that simply by being a corporation, Facebook has a very clear framework: it needs to make money. How can the company claim to be primarily about connection when profit is the overriding concern? On the other hand, companies across industries have had to wrestle with this issue: when a company expands into additional cultures, how does it balance its existing moral framework with new ones? Customers are at stake, but so are basic concerns about dealing with people on their own terms and respecting other approaches to the world.

But, within a global capitalist system where Facebook plays a prominent role (in terms of rapid growth, connecting people, and market value), can it truly be “neutral”? Like many other behemoth companies (think McDonald’s or Walmart), it will certainly encounter its share of dissenters in the years to come.

How many Facebook friends can you depend on?

A new study suggests most Facebook friends cannot be depended on in times of trouble:

Robin Dunbar, a professor of evolutionary psychology at Oxford University, undertook a study to examine the connection between having lots of Facebook friends and having friends one can actually rely on.

He found that there was very little correlation between having friends on social networks and actually being able to depend on them, or even talking to them regularly.

The average person studied had around 150 Facebook friends. But only about 14 of them would express sympathy in the event of anything going wrong…

Those numbers are mostly similar to how friendships work in real life, the research said. But the huge number of supposed friends on a friend list means that people can be tricked into thinking that they might have more close friends.

The last paragraph seems key: online or offline, people have a relatively small number of close relationships. As the saying goes, you learn who your friends are in times of trouble. Simply having a connection to someone – whether knowing them as an acquaintance or friending them on social media – is at a different level than having regular contact or providing mutual support. Using the words “real” and “fake” friends tries to get at that, but it would be better to use terms like “close friend,” “acquaintance,” “family member,” or other labels that denote the closeness of the relationship. Of course, when Facebook chose to use the term “friends” for everyone you link to on the site, this was very intentional: an attempt to prompt more connections and more openness.

The Dunbar here is the same researcher behind Dunbar’s number, which suggests humans can maintain a maximum of roughly 150 stable relationships.

Cruz campaign using psychological data to reach potential voters

Campaigns not working with big data are behind the times: Ted Cruz’s campaign is working with unique psychological data as it tries to secure the Republican nomination.

To build its data-gathering operation widely, the Cruz campaign hired Cambridge Analytica, a Massachusetts company reportedly owned in part by hedge fund executive Robert Mercer, who has given $11 million to a super PAC supporting Cruz. Cambridge, the U.S. affiliate of London-based behavioral research company SCL Group, has been paid more than $750,000 by the Cruz campaign, according to Federal Election Commission records.

To develop its psychographic models, Cambridge surveyed more than 150,000 households across the country and scored individuals using five basic traits: openness, conscientiousness, extraversion, agreeableness and neuroticism. A top Cambridge official didn’t respond to a request for comment, but Cruz campaign officials said the company developed its correlations in part by using data from Facebook that included subscribers’ likes. That data helped make the Cambridge data particularly powerful, campaign officials said…

The Cruz campaign modified the Cambridge template, renaming some psychological categories and adding subcategories to the list, such as “stoic traditionalist” and “true believer.” The campaign then did its own field surveys in battleground states to develop a more precise predictive model based on issues preferences.

The Cruz algorithm was then applied to what the campaign calls an “enhanced voter file,” which can contain as many as 50,000 data points gathered from voting records, popular websites and consumer information such as magazine subscriptions, car ownership and preferences for food and clothing.
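
To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of pipeline described above: score a voter on the five traits, match the profile to a campaign-defined archetype, and pair it with an issue preference to pick a message. Every field name, weight, and category below is invented for illustration; this is not Cambridge Analytica’s or the Cruz campaign’s actual model.

```python
# Hypothetical sketch of psychographic targeting of the kind described above:
# score voters on the "Big Five" traits, bucket them into campaign-defined
# archetypes, and choose a messaging track. All names, weights, and categories
# are invented for illustration.

from dataclasses import dataclass

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

@dataclass
class VoterRecord:
    voter_id: str
    traits: dict          # trait name -> score in [0, 1]
    issue_scores: dict    # issue name -> estimated support in [0, 1]

# Campaign-defined archetypes (akin to "stoic traditionalist" or
# "true believer" in the article), expressed as trait profiles.
ARCHETYPES = {
    "stoic_traditionalist": {"openness": 0.2, "conscientiousness": 0.9,
                             "extraversion": 0.3, "agreeableness": 0.6,
                             "neuroticism": 0.3},
    "true_believer":        {"openness": 0.4, "conscientiousness": 0.7,
                             "extraversion": 0.8, "agreeableness": 0.5,
                             "neuroticism": 0.5},
}

def nearest_archetype(voter: VoterRecord) -> str:
    """Assign the archetype whose trait profile is closest (squared distance)."""
    def dist(profile):
        return sum((voter.traits[t] - profile[t]) ** 2 for t in TRAITS)
    return min(ARCHETYPES, key=lambda name: dist(ARCHETYPES[name]))

def message_track(voter: VoterRecord) -> str:
    """Combine the archetype with the voter's top issue to pick a message."""
    archetype = nearest_archetype(voter)
    top_issue = max(voter.issue_scores, key=voter.issue_scores.get)
    return f"{archetype}:{top_issue}"

if __name__ == "__main__":
    voter = VoterRecord(
        voter_id="IA-000123",
        traits={"openness": 0.25, "conscientiousness": 0.85,
                "extraversion": 0.35, "agreeableness": 0.55,
                "neuroticism": 0.30},
        issue_scores={"gun_rights": 0.9, "taxes": 0.6, "immigration": 0.7},
    )
    print(message_track(voter))  # -> "stoic_traditionalist:gun_rights"
```

A real pipeline would presumably fit these profiles statistically from survey, consumer, and social media data rather than hand-setting them, but the basic routing logic – trait scores in, message category out – would look similar.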

Building a big data operation behind a major political candidate seems pretty par for the course these days; the success of the Obama campaigns was often attributed to tech whizzes behind the scenes. Since this is now fairly normal, perhaps we need to move on to other questions: What do voters think about such microtargeting, and how do they experience it? Does it contribute to political fragmentation? What is the role of the mass media amid more narrowly targeted approaches? How valid are the predictions of voter behavior (since they are based on particular social science data and theories)? How significantly does all of this change political campaigns?

How far are we from getting rid of the candidates altogether and putting together AI apps/machines/data programs that garner support…

More lurking, less sharing on Facebook

Social media interactions can thrive when users share more. Thus, when sharing is down on Facebook, the company is looking to boost it:

Surveys show users post less often on the social network, which relies on users for an overwhelming majority of its content. In the third quarter, market researcher GlobalWebIndex said 34% of Facebook users updated their status, and 37% shared their own photos, down from 50% and 59%, respectively, in the same period a year earlier.

Facebook users still visit the network often. Some 65% of Facebook’s 1.49 billion monthly users visited the site daily as of June. But these days, they are more likely to lurk or “like” and less likely to post a note or a picture…

So Facebook is fighting back with new features. Since May, the social network has placed prompts related to ongoing events at the top of some users’ news feeds, aiming to spur conversations. The prompts are partly based on a user’s likes and location, according to Facebook and companies working with Facebook…

Facebook has introduced other features to encourage sharing, including new emojis that give users a wider range of expressions beyond “like.” In March, Facebook launched “On This Day,” a feature that lets users relive and share past posts.

The article notes that this isn’t necessarily a big problem for now – Facebook is expected to announce a jump in revenue – but it could become a larger issue down the road if the site comes to be seen as boring. If users aren’t gaining new knowledge or reacting to interesting things posted by people they know, why should they keep coming back?

It would be important to find data to answer this question: is the decrease in sharing limited to Facebook, or is it down across the board? If the issue faces Facebook alone, it could be related to the site’s particular features or its age (ancient in social media terms). Or it might be a broader issue facing all social media platforms as users shift their online behavior. Users have certainly been warned enough about oversharing, and social norms have developed about how much an individual should share.

Facebook’s new emoji reactions based on sociological work

Facebook used sociological work to help roll out new emojis next to the “Like” button:

Adam Mosseri has a very important job. As head of Facebook’s news feed, Mosseri and his team were assigned the task of determining which six cartoon images would accompany the social network’s ubiquitous thumbs-up button. They did not take the task lightly. To help choose the right emoji to join “like,” Mosseri said Facebook consulted with several academic sociologists “about the range of human emotion.”…

The decision was reached after much deliberation. Arriving at the best of those trivial and common picture faces followed a lot of data crunching and outside help. Mosseri combined the sociologists’ feedback with data showing what people do on Facebook, he said. The goal was to reduce the need for people to post a comment to express themselves. “We wanted to make it easier,” he said. “When things are easier to do, they reach more people, and more people engage with them.”…

In order for something to qualify for the final list, it had to work globally so users communicating among various countries would have the same options, Mosseri said. One plea from millions of Facebook users, which the company ultimately ignored, was a request for a “dislike” button. Mosseri wanted to avoid adding a feature that would inject negativity into a social network fueled by baby photos and videos of corgis waddling at the beach. A dislike option, Mosseri said, wouldn’t be “in the spirit of the product we’re trying to build.”

Operation emoji continues at Facebook while the company monitors how Spaniards and Irish take to the new feature. The list isn’t final, Mosseri noted. The first phase in two European countries is “just a first in a round of tests,” he said. “We really have learned over the years that you don’t know what’s going to work until it’s out there, until people are using it.”

Facebook and Mark Zuckerberg have been clear for years that they do not want Facebook to spread negative emotions; rather, the site is about finding and strengthening relationships. The emojis both avoid “dislike” (though the set of six includes one for sad and one for angry, which are different from dislike) and make it easier for people to react to what others post.

Here are two factors that could affect these reaction emojis:

  1. Facebook will be pressured to add more. But how many should it have? At what point do more options slow down reactions? Is there a proper ratio of positive to negative emojis? I’m guessing that Facebook will try to keep the number limited as long as it can.
  2. Users in different countries will use certain emojis more often and ask for different new options. At some point, Facebook will have to choose between universal emotions and country-specific options that appeal to particular values and expressions.

The potential to redline customers through Facebook

If Facebook is used to judge creditworthiness, perhaps it could lead to redlining:

If there was any confusion over why Facebook has so vociferously defended its policy of requiring users to display their real, legal names, the company may have finally laid it to rest with a quiet patent application. Earlier this month, the social giant filed to protect a tool ostensibly designed to track how users are networked together—a tool that could be used by lenders to accept or reject a loan application based on the credit ratings of one’s social network…

Research consistently shows we’re more likely to seek out friends who are like ourselves, and we’re even more likely to be genetically similar to them than to strangers. If our friends are likely to default on a loan, it may well be true that we are too. Depending on how that calculation is figured, and on how data-collecting technology companies are regulated under the Fair Credit Reporting Act, it may or may not be illegal. A policy that judges an individual’s qualifications based on the qualifications of her social network would reinforce class distinctions and privilege, preventing opportunity and mobility and further marginalizing the poor and debt-ridden. It’s the financial services tool equivalent of crabs in a bucket...

But a lot of that data is bad. Facebook isn’t real life. Our social networks are not our friends. The way we “like” online is not the way we like in real life. Our networks are clogged with exes, old co-workers, relatives permanently set to mute, strangers and catfish we’ve never met at all. We interact the most not with our best friends, but with our friends who use Facebook the most. This could lead not just to discriminatory lending decisions, but completely unpredictable ones—how will users have due process to determine why their loan applications were rejected, when a mosaic of proprietary information formed the ultimate decision? How will users know what any of that proprietary information says about them? How will anyone know if it’s accurate? And how could this change the way we interact on the Web entirely, when fraternizing with less fiscally responsible friends or family members could cost you your mortgage?
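
To see concretely why critics worry, here is a minimal, hypothetical sketch of the kind of screening the patent filing suggests: average the credit scores of an applicant’s connections and reject applications whose network falls below a cutoff. The graph, scores, and threshold below are invented for illustration; this is not Facebook’s patented method or any lender’s actual practice.

```python
# Hypothetical sketch of network-based loan screening of the sort the patent
# filing describes: an applicant is judged partly by the average credit score
# of their social connections. All data and thresholds are invented.

from statistics import mean

# Toy social graph: person -> set of connections
FRIENDS = {
    "applicant": {"a", "b", "c", "d"},
}

# Toy credit scores for the applicant's connections
CREDIT_SCORES = {"a": 720, "b": 540, "c": 610, "d": 690}

MIN_NETWORK_AVERAGE = 620  # arbitrary cutoff for illustration

def network_average(person: str) -> float:
    """Average credit score of a person's connections with known scores."""
    scores = [CREDIT_SCORES[f] for f in FRIENDS.get(person, set())
              if f in CREDIT_SCORES]
    return mean(scores) if scores else float("nan")

def screen_application(person: str) -> str:
    """Accept, reject, or flag an application based on the network average."""
    avg = network_average(person)
    if avg != avg:  # NaN: no usable network data
        return "manual review"
    return "proceed" if avg >= MIN_NETWORK_AVERAGE else "reject"

if __name__ == "__main__":
    print(network_average("applicant"))    # 640.0
    print(screen_application("applicant")) # "proceed"
```

Even in this toy version, a single low-scoring connection drags the whole average down – exactly the “crabs in a bucket” dynamic the excerpt describes – and the applicant has no way to see which connection tipped the result.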

On one hand, there is no indication yet that Facebook is doing this. Is there any case of this happening with online data? On the other hand, the whole point of these social network sites is that they hold information that can be used to make money. Plus, lenders could offer to speed up the loan approval process if people simply give them access to their online social networks. Why would you need mortgage officers and others to approve these things if a simple scan of Facebook could provide the necessary information?

Additionally, given how insecure our data is these days, redlining might be the least of our worries…

Zuckerberg on the role of sociology in Facebook’s success

A doctor recommending the liberal arts for pre-med students references Mark Zuckerberg describing Facebook in 2011:

“It’s as much psychology and sociology as it is technology.”

Zuckerberg went further in discussing the social aspects of Facebook:

“One thing that gets blown out of proportion is the emphasis on the individual,” he said. “The success of Facebook is really all about the team that we’ve built. In any company that’s going to be true. One of the things that we’ve focused on is keeping the company as small as possible … Facebook only has around 2,000 people. How do you do that? You make sure that every person you add to your company is really great.”…

On a more positive, social scale, Zuckerberg said the implications of Facebook stretch beyond simple local interactions and into fostering understanding between countries. One of Facebook’s engineers put together a website, peace.facebook.com, which tracks the online relationships between countries, including those that are historically at odds with one another.

Clearly, the sociological incentives for joining Facebook are strong, since users participate without being paid for their personal data. The site capitalizes on the human need to be social, with the modern twist of giving users control over what they share and with whom (though Zuckerberg has suggested in the past that he hopes Facebook opens people up to sharing more with new people).

I still haven’t seen much from sociologists on whether they think Facebook is a positive thing. Some scholars have made their positions clear; for example, Sherry Turkle highlights how humans can become emotionally involved with robots and other devices. Given the explosion of new kinds of sociability on social network sites, sociologists could be making more hay of Facebook, Twitter, Instagram, and all of the new possibilities. But perhaps it is (1) difficult to assess these changes so close to their start and (2) the discipline sees much more pressing issues, such as race, class, and gender, in other areas.