Cruz campaign using psychological data to reach potential voters

Campaigns not working with big data are behind the curve: Ted Cruz’s campaign is using unique psychological data as it tries to secure the Republican nomination.

To build its data-gathering operation widely, the Cruz campaign hired Cambridge Analytica, a Massachusetts company reportedly owned in part by hedge fund executive Robert Mercer, who has given $11 million to a super PAC supporting Cruz. Cambridge, the U.S. affiliate of London-based behavioral research company SCL Group, has been paid more than $750,000 by the Cruz campaign, according to Federal Election Commission records.

To develop its psychographic models, Cambridge surveyed more than 150,000 households across the country and scored individuals using five basic traits: openness, conscientiousness, extraversion, agreeableness and neuroticism. A top Cambridge official didn’t respond to a request for comment, but Cruz campaign officials said the company developed its correlations in part by using data from Facebook that included subscribers’ likes. That data helped make the Cambridge data particularly powerful, campaign officials said…

The Cruz campaign modified the Cambridge template, renaming some psychological categories and adding subcategories to the list, such as “stoic traditionalist” and “true believer.” The campaign then did its own field surveys in battleground states to develop a more precise predictive model based on issues preferences.

The Cruz algorithm was then applied to what the campaign calls an “enhanced voter file,” which can contain as many as 50,000 data points gathered from voting records, popular websites and consumer information such as magazine subscriptions, car ownership and preferences for food and clothing.
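
The reporting doesn’t spell out the modeling details, but the pipeline it describes (survey a sample, fit a model from observable data points to the five traits, then score everyone else in the voter file and sort them into messaging segments) is easy to sketch. Here is a minimal, purely hypothetical illustration in Python; the features, weights, and cutoffs are invented for the example and come from neither the campaign nor Cambridge Analytica.

```python
# Hypothetical sketch of a psychographic scoring step: a linear model maps
# observable voter-file attributes to Big Five trait estimates, and simple
# rules then map traits to campaign segments. All features, weights, and
# cutoffs are invented for illustration.

TRAIT_WEIGHTS = {
    "openness":          {"magazine_subs": 0.4, "urban_resident": 0.3},
    "conscientiousness": {"votes_in_primaries": 0.6, "owns_home": 0.2},
    "extraversion":      {"social_media_active": 0.5},
    "agreeableness":     {"charity_donor": 0.5},
    "neuroticism":       {"recent_address_changes": 0.4},
}

def score_traits(voter: dict) -> dict:
    """Estimate each trait as a weighted sum of the voter's data points."""
    return {
        trait: sum(weight * voter.get(feature, 0.0)
                   for feature, weight in weights.items())
        for trait, weights in TRAIT_WEIGHTS.items()
    }

def assign_segment(traits: dict) -> str:
    """Map trait estimates to one of the campaign-style labels mentioned above."""
    if traits["conscientiousness"] > 0.5 and traits["openness"] < 0.3:
        return "stoic traditionalist"
    if traits["agreeableness"] > 0.4:
        return "true believer"
    return "persuadable"

voter = {"votes_in_primaries": 1, "owns_home": 1, "magazine_subs": 0}
print(assign_segment(score_traits(voter)))  # -> stoic traditionalist
```

In a real operation the weights would presumably be fit against the survey responses (and, reportedly, Facebook likes) rather than set by hand, and the resulting segments would feed into which ad, mailer, or door-knocking script a given voter receives.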

Building a big data operation behind a major political candidate seems par for the course these days. The success of the Obama campaigns was often attributed to the tech whizzes behind the scenes. Since this is now fairly normal, perhaps we need to move on to other questions: What do voters think about such microtargeting and how do they experience it? Does it contribute to political fragmentation? What is the role of the mass media amid these more targeted approaches? How valid are the predictions about voters and their behavior (since they rest on particular social science data and theories)? How significantly does all of this change political campaigns?

How far are we from getting rid of the candidates altogether and putting together AI apps/machines/data programs that garner support…


More lurking, less sharing on Facebook

Social media interactions can thrive when users share more. Thus, when sharing is down on Facebook, the company is looking to boost it:

Surveys show users post less often on the social network, which relies on users for an overwhelming majority of its content. In the third quarter, market researcher GlobalWebIndex said 34% of Facebook users updated their status, and 37% shared their own photos, down from 50% and 59%, respectively, in the same period a year earlier.

Facebook users still visit the network often. Some 65% of Facebook’s 1.49 billion monthly users visited the site daily as of June. But these days, they are more likely to lurk or “like” and less likely to post a note or a picture…

So Facebook is fighting back with new features. Since May, the social network has placed prompts related to ongoing events at the top of some users’ news feeds, aiming to spur conversations. The prompts are partly based on a user’s likes and location, according to Facebook and companies working with Facebook…

Facebook has introduced other features to encourage sharing, including new emojis that give users a wider range of expressions beyond “like.” In March, Facebook launched “On This Day,” a feature that lets users relive and share past posts.

The article notes that this isn’t necessarily a big problem for now – Facebook is expected to announce a jump in revenue – but it could become a larger issue down the road if the social media site comes to be seen as boring. If users aren’t gaining new knowledge or reacting to interesting things posted by people they know, why should they keep coming back?

It would be important to find data to answer this question: is the decrease in sharing limited to Facebook, or is it down across the board? If this is an issue facing Facebook alone, it could be related to the site’s particular features or its age (it is ancient in social media terms). Or it might be a broader issue facing all social media platforms as users shift their online behavior. Users have certainly been warned enough about sharing too much, and social norms have developed about how much an individual should share.

Facebook’s new emoji reactions based on sociological work

Facebook used sociological work to help roll out new emojis next to the “Like” button:

Adam Mosseri has a very important job. As head of Facebook’s news feed, Mosseri and his team were assigned the task of determining which six cartoon images would accompany the social network’s ubiquitous thumbs-up button. They did not take the task lightly. To help choose the right emoji to join “like,” Mosseri said Facebook consulted with several academic sociologists “about the range of human emotion.”…

The decision was reached after much deliberation. Arriving at the best of those trivial and common picture faces followed a lot of data crunching and outside help. Mosseri combined the sociologists’ feedback with data showing what people do on Facebook, he said. The goal was to reduce the need for people to post a comment to express themselves. “We wanted to make it easier,” he said. “When things are easier to do, they reach more people, and more people engage with them.”…

In order for something to qualify for the final list, it had to work globally so users communicating among various countries would have the same options, Mosseri said. One plea from millions of Facebook users, which the company ultimately ignored, was a request for a “dislike” button. Mosseri wanted to avoid adding a feature that would inject negativity into a social network fueled by baby photos and videos of corgis waddling at the beach. A dislike option, Mosseri said, wouldn’t be “in the spirit of the product we’re trying to build.”

Operation emoji continues at Facebook while the company monitors how Spaniards and Irish take to the new feature. The list isn’t final, Mosseri noted. The first phase in two European countries is “just a first in a round of tests,” he said. “We really have learned over the years that you don’t know what’s going to work until it’s out there, until people are using it.”

Facebook and Mark Zuckerberg have been clear for years that they do not want Facebook to spread negative emotions. Rather, the social network site is about finding and strengthening relationships. The emojis both avoid dislike (the set of six does include a sad face and an angry face, but these are different from dislike) and make it easier for people to react to what others post.

Here are two factors that could affect these reaction emojis:

  1. Facebook will be pressured to add more. But how many should they have? At what point do more options slow down reactions? Is there a proper ratio of positive to negative emojis? I’m guessing that Facebook will try to keep the number limited as long as they can.
  2. Users in different countries will use different emojis more and ask for different new options. At some point, Facebook will have to choose between universal emotions and providing country-specific options that appeal to particular values and expressions.

The potential to redline customers through Facebook

If Facebook is used to judge creditworthiness, perhaps it could lead to redlining:

If there was any confusion over why Facebook has so vociferously defended its policy of requiring users to display their real, legal names, the company may have finally laid it to rest with a quiet patent application. Earlier this month, the social giant filed to protect a tool ostensibly designed to track how users are networked together—a tool that could be used by lenders to accept or reject a loan application based on the credit ratings of one’s social network…

Research consistently shows we’re more likely to seek out friends who are like ourselves, and we’re even more likely to be genetically similar to them than to strangers. If our friends are likely to default on a loan, it may well be true that we are too. Depending on how that calculation is figured, and on how data-collecting technology companies are regulated under the Fair Credit Reporting Act, it may or may not be illegal. A policy that judges an individual’s qualifications based on the qualifications of her social network would reinforce class distinctions and privilege, preventing opportunity and mobility and further marginalizing the poor and debt-ridden. It’s the financial services tool equivalent of crabs in a bucket...

But a lot of that data is bad. Facebook isn’t real life. Our social networks are not our friends. The way we “like” online is not the way we like in real life. Our networks are clogged with exes, old co-workers, relatives permanently set to mute, strangers and catfish we’ve never met at all. We interact the most not with our best friends, but with our friends who use Facebook the most. This could lead not just to discriminatory lending decisions, but completely unpredictable ones—how will users have due process to determine why their loan applications were rejected, when a mosaic of proprietary information formed the ultimate decision? How will users know what any of that proprietary information says about them? How will anyone know if it’s accurate? And how could this change the way we interact on the Web entirely, when fraternizing with less fiscally responsible friends or family members could cost you your mortgage?

On one hand, there is no indication yet that Facebook is doing this. Is there any case of this happening with online data? On the other hand, the whole point of these social network sites is that they hold information that can be used to make money. Plus, they could offer to speed up the loan approval process if people just give them access to their online social networks. Why do you need mortgage officers and others to approve these things if a simple scan of Facebook would provide the necessary information?
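
To make concrete how blunt such a “scan” could be, here is a minimal sketch of the kind of logic the patent filing suggests: average the credit scores of an applicant’s connections and reject the application if that average falls below a cutoff. The graph, scores, and threshold are all invented, and nothing here reflects an actual Facebook or lender implementation.

```python
# Hypothetical sketch of network-based loan screening: the applicant is judged
# by the average credit score of their social connections. All data and the
# cutoff are invented for illustration.

friend_graph = {"applicant": ["alice", "bob", "carol"]}
credit_scores = {"alice": 700, "bob": 580, "carol": 640}

def network_score(person: str) -> float:
    """Average the known credit scores of a person's connections."""
    scores = [credit_scores[f] for f in friend_graph.get(person, [])
              if f in credit_scores]
    return sum(scores) / len(scores) if scores else float("nan")

def screen_application(person: str, cutoff: float = 620) -> str:
    """Approve or reject based solely on the network average."""
    return "approve" if network_score(person) >= cutoff else "reject"

print(network_score("applicant"))       # 640.0
print(screen_application("applicant"))  # approve
```

Note how the “crabs in a bucket” worry falls straight out of the math: a single low-scoring connection drags the average down regardless of how spotless the applicant’s own record is.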

Additionally, given how insecure our data is these days, redlining might be the least of our worries…

Zuckerberg on the role of sociology in Facebook’s success

A doctor recommending the liberal arts for pre-med students references Mark Zuckerberg describing Facebook in 2011:

“It’s as much psychology and sociology as it is technology.”

Zuckerberg went further in discussing the social aspects of Facebook:

“One thing that gets blown out of proportion is the emphasis on the individual,” he said. “The success of Facebook is really all about the team that we’ve built. In any company that’s going to be true. One of the things that we’ve focused on is keeping the company as small as possible … Facebook only has around 2,000 people. How do you do that? You make sure that every person you add to your company is really great.”…

On a more positive, social scale, Zuckerberg said the implications of Facebook stretch beyond simple local interactions and into fostering understanding between countries. One of Facebook’s engineers put together a website, peace.facebook.com, which tracks the online relationships between countries, including those that are historically at odds with one another.

Clearly, the sociological incentives for joining Facebook are strong, as users participate without being paid for their personal data. The social network site capitalizes on the human need to be social, with the modern twist of giving users control over what they share and with whom (though Zuckerberg has suggested in the past that he hopes Facebook opens people up to sharing more with new people).

I still haven’t seen much from sociologists on whether they think Facebook is a positive thing. Some scholars have made their position clear; for example, Sherry Turkle highlights how humans can become emotionally involved with robots and other devices. Given the explosion of new kinds of sociability in social networks, sociologists could be making more hay of Facebook, Twitter, Instagram, and all of the new possibilities. But perhaps it is (1) difficult to assess these changes so close to their start and (2) the discipline sees more pressing issues, such as race, class, and gender, in other areas.

To pay or not to pay for Facebook

Would you rather pay Facebook with money or data?

Not long ago, Zeynep Tufekci, a sociologist who studies social media, wrote that she wanted to pay for Facebook. More precisely, she wants the company to offer a cash option (about twenty cents a month, she calculates) for people who value their privacy, but also want a rough idea of what their friends’ children look like. In return for Facebook agreeing not to record what she does—and to not show her targeted ads—she would give them roughly the amount of money that they make selling the ads that she sees right now. Not surprisingly, her request seems to have been ignored. But the question remains: just why doesn’t Facebook want Tufekci’s money? One reason, I think, is that it would expose the arbitrage scheme at the core of Facebook’s business model and the ridiculous degree to which people undervalue their personal data…

The trick is that most people think they are getting a good deal out of Facebook; we think of Facebook as “free,” and, as marketing professors explain, “consumers overreact to free.” Most people don’t feel like they are actually paying when the payment is personal data and when there is no specific sensation of having handed anything over. If you give each of your friends a hundred dollars, you might be out of money and will have a harder time buying dinner. But you can hand over your personal details or photos to one hundred merchants without feeling any poorer.

So what does it really mean, then, to pay with data? Something subtler is going on than with the more traditional means of payment. Jaron Lanier, the author of “Who Owns the Future,” sees our personal data not unlike labor—you don’t lose by giving it away, but if you don’t get anything back you’re not receiving what you deserve. Information, he points out, is inherently valuable. When billions of people hand data over to just a few companies, the effect is a giant wealth transfer from the many to the few…

Ultimately, Tufekci wants us to think harder about what it means when we pay with data or attention instead of money, which is what makes her proposition so interesting. While every business has slightly mixed motives, those companies that we pay live and die by how they serve the customer. In contrast, the businesses we are paying with attention or data are conflicted. We are their customers, but we are also their products, ultimately resold to others. We are unlikely to stop loving free stuff. But we always pay in the end—and it is worth asking how.
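
Tufekci’s twenty-cent figure is easy to reproduce in rough terms: divide what Facebook earns from the average user by twelve months. The inputs below are illustrative assumptions (a roughly 2015-era global profit figure and the 1.49 billion monthly users cited earlier), not exact company numbers, and the result changes considerably if you use ad revenue instead of profit or restrict the calculation to U.S. users.

```python
# Back-of-the-envelope sketch of the per-user value an ad-free option would
# have to replace. Inputs are rough, illustrative assumptions, not exact figures.

annual_profit_usd = 3.5e9        # assumed global annual profit (illustrative)
monthly_active_users = 1.49e9    # monthly user figure cited earlier in the post

per_user_per_month = annual_profit_usd / monthly_active_users / 12
print(f"${per_user_per_month:.2f} per user per month")  # -> $0.20
```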

Perhaps we are headed toward a world where companies like Facebook would have to show customers (1) how much data they actually have about a person and (2) what that data is worth. But I imagine the corporations would like to avoid this because it is better for them if the user is unaware and shares all sorts of things. And what would it take for customers to demand such transparency, or do we simply like the allure of Facebook, credit cards, and other products too much to pull back the curtain?

Is it going too far to suggest that personal data is the most important asset individuals will have in the future?

Mark Zuckerberg encouraging people to read sociological material

Mark Zuckerberg has been recommending an important book every two weeks in 2015, and his list thus far includes a number of works that touch on sociological material:

Zuckerberg’s book club, A Year of Books, has focused on big ideas that influence society and business. His selections so far have been mostly contemporary, but for his eleventh pick he’s chosen “The Muqaddimah,” written in 1377 by the Islamic historian Ibn Khaldun…

Ibn Khaldun’s revolutionary scientific approach to history has established him as one of the foundational thinkers of modern sociology and historiography…

The majority of Zuckerberg’s book club selections have been explorations of issues through a sociological lens, so it makes sense that he is now reading the book that helped create the field.

A Year of Books so far:

  • “The End of Power: From Boardrooms to Battlefields and Churches to States, Why Being In Charge Isn’t What It Used to Be” by Moisés Naím
  • “The Better Angels of Our Nature: Why Violence Has Declined” by Steven Pinker
  • “Gang Leader for a Day: A Rogue Sociologist Takes to the Streets” by Sudhir Venkatesh
  • “On Immunity: An Inoculation” by Eula Biss
  • “Creativity, Inc.: Overcoming the Unseen Forces That Stand in the Way of True Inspiration” by Ed Catmull and Amy Wallace
  • “The Structure of Scientific Revolutions” by Thomas S. Kuhn
  • “Rational Ritual: Culture, Coordination, and Common Knowledge” by Michael Chwe
  • “Dealing with China: An Insider Unmasks the New Economic Superpower” by Henry M. Paulson
  • “Orwell’s Revenge: The 1984 Palimpsest” by Peter Huber
  • “The New Jim Crow: Mass Incarceration in the Age of Colorblindness” by Michelle Alexander
  • “The Muqaddimah” by Ibn Khaldun

An interesting set of selections. At the least, it suggests Zuckerberg is broadly interested in social issues and not just the success of Facebook (whether through gaining users or producing sky-high profits). More optimistically, perhaps Zuckerberg has a sociological perspective and can take a broader view of society. This could be very helpful given that his company is a sociological experiment in the making – not the first social networking site but certainly a very influential one that has helped pioneer new kinds of interactions as well as changed behaviors from news gathering to impression management.

The more cynical take here is that this book list is itself an impression management tool intended to bolster his reputation: look, I really do want the best for our users and society! However, would this be the set of books that would most impress the public or investors? Listing sociology books, as well as books on sociological topics, may only impress some.

Social scientists critique Facebook’s study claiming the news feed algorithm doesn’t lead to a filter bubble

Several social scientists have some concerns about Facebook’s recent findings that its news feed algorithm is less important than the choices of individual users in limiting what they see to what they already agree with:

But even that’s [sample size] not the biggest problem, Jurgenson and others say. The biggest issue is that the Facebook study pretends that individuals choosing to limit their exposure to different topics is a completely separate thing from the Facebook algorithm doing so. The study makes it seem like the two are disconnected and can be compared to each other on some kind of equal basis. But in reality, says Jurgenson, the latter exaggerates the former, because personal choices are what the algorithmic filtering is ultimately based on: “Individual users choosing news they agree with and Facebook’s algorithm providing what those individuals already agree with is not either-or but additive. That people seek that which they agree with is a pretty well-established social-psychological trend… what’s important is the finding that [the newsfeed] algorithm exacerbates and furthers this filter bubble.”

Sociologist and social-media expert Zeynep Tufekci points out in a post on Medium that trying to separate and compare these two things represents the worst “apples to oranges comparison I’ve seen recently,” since the two things that Facebook is pretending are unrelated have significant cumulative effects, and in fact are tied directly to each other. In other words, Facebook’s algorithmic filter magnifies the already human tendency to avoid news or opinions that we don’t agree with…

Christian Sandvig, an associate professor at the University of Michigan, calls the Facebook research the “not our fault” study, since it is clearly designed to absolve the social network of blame for people not being exposed to contrary news and opinion. In addition to the framing of the research — which tries to claim that being exposed to differing opinions isn’t necessarily a positive thing for society — the conclusion that user choice is the big problem just doesn’t ring true, says Sandvig (who has written a paper about the biased nature of Facebook’s algorithm).
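
The “additive, not either-or” point is easy to see with a toy calculation: if the news feed down-ranks cross-cutting stories and users then click on them less often, the two filters compound. The rates below are invented for illustration; they are not figures from the Facebook study.

```python
# Toy illustration of how algorithmic ranking and individual choice compound.
# All rates are invented; none of these are numbers from the Facebook study.

cross_cutting_share = 0.30    # share of friends' posts that cut across ideology
algorithm_pass_rate = 0.80    # fraction of those the news feed actually surfaces
user_click_rate = 0.50        # fraction of surfaced posts the user then reads

surfaced = cross_cutting_share * algorithm_pass_rate
read = surfaced * user_click_rate

print(f"shared by friends: {cross_cutting_share:.0%}")   # 30%
print(f"surfaced by feed:  {surfaced:.0%}")              # 24%
print(f"actually read:     {read:.0%}")                  # 12%
```

Comparing the feed’s six-point reduction to the user’s twelve-point reduction, as the study effectively does, hides the fact that users only choose from what the algorithm has already filtered, and that the algorithm learns from those choices in turn.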

Research on political echo chambers has grown in recent years and has included examinations of blogs and TV news channels. Is Facebook “bad” if it follows the pattern of reinforcing boundaries? While it may not be surprising if it does, I’m reminded of what I’ve read about Mark Zuckerberg’s intentions for what Facebook would do: bring people together in ways that wouldn’t happen otherwise. So, if Facebook itself has the goal of crossing traditional boundaries, boundaries usually maintained by homophily (people choosing to associate with others largely like themselves) and by protecting the in-group against out-group interlopers, then does this mean the company is not meeting its intended goals? I just took a user survey from them recently that didn’t include much about crossing boundaries; instead it asked about things like having fun, being satisfied with the Facebook experience, and being satisfied with the number of my friends.

Hillary Clinton’s biggest urban Facebook fan base is Baghdad?

Melding political, social media, and urban analysis, a look at Hillary Clinton’s Facebook fans has an interesting geographic dimension:

Hillary Clinton’s Facebook pages have an unexpected fan base. At least 7 percent of Clinton’s Facebook fans list their hometown as Baghdad, way more than any other city in the world, including in the United States.

Vocativ’s exclusive analysis of Clinton’s Facebook fan statistics yielded a number of surprises. Despite her reputation as an urban Democrat favored by liberal elites, Iraqis and southerners are more likely to be a Facebook fan of Hillary than people living on America’s coasts. And the Democratic candidate for president has one of her largest followings in the great red-state of Texas.

While Chicago and New York City, both with 4 percent of fans, round out the top three cities for Hillary’s Facebook base, Texas’ four major centers—Houston (3 percent), Dallas (3 percent), Austin (2 percent) and San Antonio (2 percent)—contain more of her Facebook supporters. Los Angeles with 3 percent of her fans, and Philadelphia and Atlanta, each with 2 percent, round out the Top 10 cities for Facebook fans of Hillary.

On a per capita basis, in which Vocativ compared a town’s population to its percentage of Hillary’s likes, people living in cities and towns in Texas, Kentucky, Ohio, Arkansas, North Carolina and Wisconsin were more likely to be her fans on Facebook than any other American residents.
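
Vocativ doesn’t publish its method, but the per-capita comparison it describes can be sketched simply: take a city’s share of Clinton’s fans and normalize it by the city’s population. The cities and figures below are invented placeholders, not Vocativ’s data.

```python
# Hypothetical sketch of a per-capita fan comparison: a city's share of a
# candidate's Facebook fans normalized by its population. All figures are
# invented placeholders, not Vocativ's data.

total_fans = 1_000_000                                   # assumed fan total
fan_share = {"Springfield": 0.02, "Shelbyville": 0.03}   # share of all fans
population = {"Springfield": 150_000, "Shelbyville": 450_000}

fans_per_100k = {
    city: fan_share[city] * total_fans / population[city] * 100_000
    for city in fan_share
}
for city, rate in sorted(fans_per_100k.items(), key=lambda kv: -kv[1]):
    print(f"{city}: {rate:,.0f} fans per 100k residents")
# Springfield ranks first per capita even though Shelbyville has more fans overall.
```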

This hints at the broader knowledge we might gain from social media and raises the question of how this information could be put to good use. I imagine it could be used for political ends. Is this just a curiosity? Is it something the Clinton campaign would want to change? Would it influence the behavior of other voters? The article itself is fairly agnostic about what this means.

This sounds like data mining and here is how the company behind this – Vocativ – describes its mission:

Vocativ is a media and technology venture that explores the deep web to discover original stories, hidden perspectives, emerging trends, and unheard voices from around the world. Our audience is the young, diverse, social generation that wants to share what’s interesting and what’s valuable. We reach them with a visual language, wherever they are naturally gathering…

Our proprietary technology, Verne, allows us to search and monitor the deep web to spot breaking news quickly, and discover stories that otherwise might not be told. Often we know what we’re looking for, such as witnesses near the front lines of a conflict or data related to an emerging political movement. We also uncover unexpected information, like pro-gun publications giving away assault rifles to fans of their Facebook pages.

Is this the Freakonomicization of journalism?

“Robber barons would have loved Facebook’s employee housing”

Facebook’s new campus includes more residential units. This leads one writer to compare the development to a company town:

Company towns of this era had a barely-hidden paternalistic agenda. Wealthy businessmen saw their workers as family, sort of, and they wanted to provide their wards with safe, modern housing. But many were strict fathers, dictating the minutiae of their grown employees’ lives, from picking the books in the library to restricting the availability of alcohol. It’s hard to imagine Facebook going that far, though the company does try to subtly influence its employees’ lives by offering such healthy freebies as on-site gyms, bike repair, and walking desks. It’s a strategy that mimics what happened with some later company towns, which employed paternalism to better the company, not just employees’ lives. “Company welfare was seen as an important strategy to promote company loyalty and peaceful relations,” Borges says.

Of course, Facebook isn’t exactly like the Pullmans, Hersheys, and Kohlers of olden times. For one, those were all built on what developers call greenfields, or land which hadn’t been previously developed for housing or commercial uses. Borges also points out that they didn’t have to deal with any existing municipal governments, either. Such greenfield freedom allowed industrialists to maintain a level of autonomy that would make even the most libertarian techies blush. Today, in Silicon Valley, there’s not much undeveloped land left, so Facebook will have to renovate or demolish to accommodate its plans.

Those discrepancies mean Facebook won’t be creating a company town from whole cloth, but slowly taking over the existing city of Menlo Park and re-envisioning it for its employees. The Facebook-backed Anton Menlo development, for example, will consist of 394 units when it opens next year. Just 15 of those are reportedly available for non-Facebook employees…

So maybe Facebookville is an arcology—a political one. What Facebook is building is both entirely similar and completely different from Pullman, Illinois, and its turn-of-the-last-century brethren. It’s a 21st century company town—built by slowly, occasionally unintentionally, taking over a public entity, and building a juggernaut of a private institution in its place.

As noted in an earlier post, this isn’t the first time the concern has been raised that Facebook employees or the company itself could wield political power over the official municipality in which it is located. Does it matter here if the company is perceived differently than previous company towns built by manufacturers like Pullman? Does Facebook exploit its workers in the way that some thought manufacturers and robber baron era corporations exploited theirs? What if the tech employees of today don’t mind this arrangement? Perhaps the pricing on these units is a lot more reasonable than the rest of the Bay Area. In the end, are we sure that company towns are doomed to fail or that they represent an inappropriate mingling of corporate and civic interests? It is not as if Facebook or Google or other major corporations don’t have political power through other channels…