Hillary Clinton’s biggest urban Facebook fan base is Baghdad?

Melding political, social media, and urban analysis, a look at Hillary Clinton’s Facebook fans has an interesting geographic dimension:

Hillary Clinton’s Facebook pages have an unexpected fan base. At least 7 percent of Clinton’s Facebook fans list their hometown as Baghdad, way more than any other city in the world, including in the United States.

Vocativ’s exclusive analysis of Clinton’s Facebook fan statistics yielded a number of surprises. Despite her reputation as an urban Democrat favored by liberal elites, Iraqis and southerners are more likely to be Facebook fans of Hillary than people living on America’s coasts. And the Democratic candidate for president has one of her largest followings in the great red state of Texas.

While Chicago and New York City, both with 4 percent of fans, round out the top three cities for Hillary’s Facebook base, Texas’ four major centers—Houston (3 percent), Dallas (3 percent), Austin (2 percent) and San Antonio (2 percent)—contain more of her Facebook supporters. Los Angeles with 3 percent of her fans, and Philadelphia and Atlanta, each with 2 percent, complete the top 10 cities for Facebook fans of Hillary.

On a per capita basis, in which Vocativ compared a town’s population to its percentage of Hillary’s likes, people living in cities and towns in Texas, Kentucky, Ohio, Arkansas, North Carolina and Wisconsin were more likely to be her fans on Facebook than any other American residents.
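Vocativ does not publish its method, but a per capita comparison of the kind described above might be computed roughly as follows. This is a minimal sketch: the city names are real, while the fan shares, populations, and total fan count are hypothetical placeholders.

```python
# Minimal sketch of a per capita comparison like the one described above.
# City names are real; fan shares, populations, and the total fan count
# are hypothetical placeholders, not Vocativ's actual figures.

city_stats = {
    # city: (share of all Facebook fans, city population)
    "Houston": (0.03, 2_300_000),
    "Chicago": (0.04, 2_700_000),
    "New York": (0.04, 8_400_000),
}

TOTAL_FANS = 1_000_000  # hypothetical size of the overall fan base

def fans_per_capita(fan_share: float, population: int) -> float:
    """Estimated fans per resident: (share of all fans * total fans) / population."""
    return (fan_share * TOTAL_FANS) / population

for city, (share, pop) in sorted(
    city_stats.items(), key=lambda kv: fans_per_capita(*kv[1]), reverse=True
):
    print(f"{city}: {fans_per_capita(share, pop) * 1000:.1f} fans per 1,000 residents")
```

By this measure, a smaller city can outrank a much larger one even with a smaller raw share of fans, which would explain how towns in Texas, Kentucky, and Ohio could top the per capita list while New York dominates the raw counts.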

This hints at the broader knowledge we might gain from social media and raises the question of how this information could be put to use. I imagine it could be used for political ends. Is this a curiosity? Is this something the Clinton campaign would want to change? Would this influence the behavior of other voters? The article itself is fairly agnostic about what this means.

This sounds like data mining, and here is how the company behind it, Vocativ, describes its mission:

Vocativ is a media and technology venture that explores the deep web to discover original stories, hidden perspectives, emerging trends, and unheard voices from around the world. Our audience is the young, diverse, social generation that wants to share what’s interesting and what’s valuable. We reach them with a visual language, wherever they are naturally gathering…

Our proprietary technology, Verne, allows us to search and monitor the deep web to spot breaking news quickly, and discover stories that otherwise might not be told. Often we know what we’re looking for, such as witnesses near the front lines of a conflict or data related to an emerging political movement. We also uncover unexpected information, like pro-gun publications giving away assault rifles to fans of their Facebook pages.

Is this the Freakonomicization of journalism?

“Robber barons would have loved Facebook’s employee housing”

Facebook’s new campus includes more residential units. This leads one writer to compare the development to a company town:

Company towns of this era had a barely-hidden paternalistic agenda. Wealthy businessmen saw their workers as family, sort of, and they wanted to provide their wards with safe, modern housing. But many were strict fathers, dictating the minutiae of their grown employees’ lives, from picking the books in the library to restricting the availability of alcohol. It’s hard to imagine Facebook going that far, though the company does try to subtly influence its employees’ lives by offering such healthy freebies as on-site gyms, bike repair, and walking desks. It’s a strategy that mimics what happened with some later company towns, which employed paternalism to better the company, not just employees’ lives. “Company welfare was seen as an important strategy to promote company loyalty and peaceful relations,” Borges says.

Of course, Facebook isn’t exactly like the Pullmans, Hersheys, and Kohlers of olden times. For one, those were all built on what developers call greenfields, or land which hadn’t been previously developed for housing or commercial uses. Borges also points out that they didn’t have to deal with any existing municipal governments, either. Such greenfield freedom allowed industrialists to maintain a level of autonomy that would make even the most libertarian techies blush. Today, in Silicon Valley, there’s not much undeveloped land left, so Facebook will have to renovate or demolish to accommodate its plans.

Those discrepancies mean Facebook won’t be creating a company town from whole cloth, but slowly taking over the existing city of Menlo Park and re-envisioning it for their employees. The Facebook-backed Anton Menlo development, for example, will consist of 394 units when it opens next year. Just 15 of those are reportedly available for non-Facebook employees…

So maybe Facebookville is an arcology—a political one. What Facebook is building is both entirely similar and completely different from Pullman, Illinois, and its turn-of-the-last-century brethren. It’s a 21st century company town—built by slowly, occasionally unintentionally, taking over a public entity, and building a juggernaut of a private institution in its place.

As noted in an earlier post, this isn’t the first time the concern has been raised that Facebook employees or the company could wield political power over the official municipality in which it is located. Does it matter here if the company is perceived differently than previous company towns from manufacturers like Pullman? Does Facebook exploit its workers in the way that some thought manufacturers and robber baron era corporations exploited their workers? What if the tech employees of today don’t mind this arrangement? Perhaps the pricing on these units is a lot more reasonable than the rest of the Bay Area. In the end, are we sure that company towns are doomed to fail or that they represent an inappropriate mingling of corporate and civic interests? It is not as if Facebook or Google or other major corporations don’t have political power through other channels…


Does posting the number of highway deaths in Illinois lead to safer driving?

A columnist discusses the effects of signs on Illinois tollways that post the number of automobile fatalities on area highways:

The first time I saw one of those grim Illinois expressway signs was in 2012. I was merrily driving to the family farm in Indiana to visit my mom when I spotted a roadside sign dishing a little shock and awe to commuters and vacationers. There was something cold about the little electric bulbs in the sign above my expressway lane letting me know: “679 TRAFFIC DEATHS THIS YEAR.”

It made me think…

That’s precisely what the sign was meant to do. While many states were seeing fewer traffic fatalities during the summer of 2012, Illinois was seeing a substantial increase in the number of people killed on Illinois roads in the first half of that year. After the Illinois Department of Transportation started posting a running total of the dead in July, the last half of 2012 saw fewer fatalities than the last half of sign-free 2011.

Still, the number of fatalities went up in 2012, from 918 to 957. Last year, with those same signs updating our death toll daily and urging us to drive more safely, our fatalities inched higher again, to 973.

This evidence suggests the signs had little effect, which would line up with research suggesting drivers don’t pay all that much attention to road signs; hence the suggestion that no signs at all might even be better. Indeed, the Illinois Department of Transportation has moved on to other strategies to reduce traffic deaths:

Michael Rooker, the actor who played Merle Dixon on TV’s “The Walking Dead,” stars in the latest IDOT safety campaign, a series of videos at thedrivingdeadseries.com and Facebook posts titled “The Driving Dead.” The postings don’t have anything close to the power of watching a young mother of two die while pinned in her car, but perhaps they will prove more effective than the road signs. The catchphrase of “The Driving Dead” gives those behind the wheel a new way of thinking about driving.

I would be curious to know whether IDOT is pursuing these strategies based on evidence that suggests they work or whether the agency is simply mounting campaigns it thinks might work and/or are publicly visible. Driving is a dangerous activity – one of the most dangerous the average person will undertake each day – and you would want solutions that work rather than guesses.

Facebook not going to run voting experiments in 2014

Facebook is taking an increasing role in curating your news but has decided not to conduct experiments with the 2014 elections:

Election Day is coming up, and if you use Facebook, you’ll see an option to tell everyone you voted. This isn’t new; Facebook introduced the “I Voted” button in 2008. What is new is that, according to Facebook, this year the company isn’t conducting any experiments related to election season.

That’d be the first time in a long time. Facebook has experimented with the voting button in several elections since 2008, and the company’s researchers have presented evidence that the button actually influences voter behavior…

Facebook’s experiments in 2012 are also believed to have influenced voter behavior. Of course, everything is user-reported, so there’s no way of knowing who is being honest and who is lying; the social network’s influence could be larger or smaller than reported.

Facebook has not been very forthright about these experiments. It didn’t tell people at the time that they were being conducted. This lack of transparency is troubling, but not surprising. Facebook can introduce and change features that influence elections, and that means it is an enormously powerful political tool. And that means the company’s ability to sway voters will be of great interest to politicians and other powerful figures.

Facebook will still have the “I Voted” button this week:

On Tuesday, the company will again deploy its voting tool. But Facebook’s Buckley insists that the firm will not this time be conducting any research experiments with the voter megaphone. That day, he says, almost every Facebook user in the United States over the age of 18 will see the “I Voted” button. And if the friends they typically interact with on Facebook click on it, users will see that too. The message: Facebook wants its users to vote, and the social-networking firm will not be manipulating its voter promotion effort for research purposes. How do we know this? Only because Facebook says so.

It seems like there are two related issues here:

1. Should Facebook promote voting? I would guess many experts would like popular efforts to try to get people to vote. After all, how good is democracy if many people don’t take advantage of their right to vote? Facebook is a popular tool, and if it can help boost political and civic engagement, what could be wrong with that?

2. However, Facebook is also a corporation that is collecting data. Its efforts to promote voting might be part of experiments. Users aren’t immediately aware that they are participating in an experiment when they see an “I Voted” button. Or, the company may decide to try to influence elections.

Facebook is not alone in promoting elections. Hundreds of media outlets promote election news. Don’t they encourage voting? Aren’t they major corporations? The key here appears to be the experimental angle: people might be manipulated. Might this be okay if (1) they know they are taking part (voluntary participation is key to social science experiments) and (2) it promotes the public good? This sort of critique implies that the first part is necessary because fulfilling a public good is not enough to justify the potential manipulation.

Facebook as the new gatekeeper of journalism

Facebook’s algorithms now go a long way in dictating what news users see:

“We try to explicitly view ourselves as not editors,” he said. “We don’t want to have editorial judgment over the content that’s in your feed. You’ve made your friends, you’ve connected to the pages that you want to connect to and you’re the best decider for the things that you care about.”…

Roughly once a week, he and his team of about 16 adjust the complex computer code that decides what to show a user when he or she first logs on to Facebook. The code is based on “thousands and thousands” of metrics, Mr. Marra said, including what device a user is on, how many comments or likes a story has received and how long readers spend on an article…

If Facebook’s algorithm smiles on a publisher, the rewards, in terms of traffic, can be enormous. If Mr. Marra and his team decide that users do not enjoy certain things, such as teaser headlines that lure readers to click through to get all the information, it can mean ruin. When Facebook made changes to its algorithm in December 2013 to emphasize higher-quality content, several so-called viral sites that had thrived there, including Upworthy, Distractify and Elite Daily, saw large declines in their traffic.

Facebook executives frame the company’s relationship with publishers as mutually beneficial: when publishers promote their content on Facebook, its users have more engaging material to read, and the publishers get increased traffic driven to their sites. Numerous publications, including The New York Times, have met with Facebook officials to discuss how to improve their referral traffic.
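As a rough illustration of the kind of engagement-weighted ranking the article describes, here is a toy sketch. The feature names, weights, and penalty below are invented for illustration; Facebook’s actual system reportedly draws on thousands of signals, none of which are public.

```python
# Toy sketch of engagement-weighted feed ranking. Feature names and
# weights are invented for illustration, not Facebook's real code.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    likes: int
    comments: int
    avg_read_seconds: float
    is_teaser_headline: bool  # clickbait-style headline that hides the payoff

# Hypothetical weights standing in for learned model parameters.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "read_time": 0.5, "teaser_penalty": -200.0}

def score(story: Story) -> float:
    """Combine engagement signals into a single ranking score."""
    s = (WEIGHTS["likes"] * story.likes
         + WEIGHTS["comments"] * story.comments
         + WEIGHTS["read_time"] * story.avg_read_seconds)
    if story.is_teaser_headline:
        s += WEIGHTS["teaser_penalty"]  # demote clickbait, as in the December 2013 change
    return s

feed = [
    Story("In-depth report", likes=120, comments=30, avg_read_seconds=90, is_teaser_headline=False),
    Story("You won't believe...", likes=300, comments=10, avg_read_seconds=8, is_teaser_headline=True),
]

for story in sorted(feed, key=score, reverse=True):
    print(f"{score(story):8.1f}  {story.title}")
```

The teaser penalty mirrors the effect described above: a story can win on raw engagement and still be demoted if it matches a pattern the ranking team has decided users do not enjoy, which is how a single code change can devastate a publisher’s traffic.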

Is Facebook a better gatekeeper than news outlets, editors, and the large corporations that often run them? I see three key differences:

1. Facebook’s methods are based on social networks and what your friends and others in your feed like. This may not be too different from checking sites yourself – especially since people often go to the same sites or to ones that tend to agree with them – but the results are out of your hands.

2. Ultimately, Facebook wants to connect you to other people using news, not necessarily give you news for other purposes like being an informed citizen or spurring you to action. This is a different process than seeking out news sites that primarily produce news (even if that is now often a lot of celebrity or entertainment info).

3. The news is interspersed with new pieces of information about the lives of others. This likely catches people’s attention and doesn’t provide an overwhelming amount of news or information that is abstracted from the user/reader.

What if Facebook could consistently improve users’ moods?

There has been a lot of hubbub about the ethics of a mood experiment Facebook ran several years ago. But, what if Facebook could consistently alter what it presents users to improve their mood and well-being? Positive psychology guru Marty Seligman hints at this in Flourish:

It is not only measuring well-being that Facebook and its cousins can do, but increasing well-being as well. “We have a new application: goals.com,” Mark continued. “In this app, people record their goals and their progress toward their goals.”

I commented on Facebook’s possibilities for instilling well-being: “As it stands now, Facebook may actually be building four of the elements of well-being: positive emotion, engagement (sharing all those photos of good events), positive relationships (The heart of what ‘friends’ are all about), and now accomplishment. All to the good. The fifth element of well-being, however, needs work, and in the narcissistic environment of Facebook, this work is urgent, and that is belonging to and serving something that you believe is bigger than the self – the element of meaning. Facebook could indeed help to build meaning in the lives of the five hundred million users. Think about it, Mark.” (page 98)

This might still be a question of ethics and letting users know what is happening. And I’m sure some critics would argue that it is too artificial, that the relationships sustained online are of a different kind than face-to-face relationships (though we know most users interact online with people they already know offline), and that this puts too much power in the hands of Facebook. Yet, what if Facebook could help improve well-being? What if a lot of good could be done by altering the online experience?

Facebook ran a mood-altering experiment. What are the ethics of doing research with online subjects?

In 2012, Facebook ran a one-week experiment, changing news feeds and looking at how people’s moods changed. The major complaint about this seems to be the lack of consent and/or deception:

The backlash, in this case, seems tied directly to the sense that Facebook manipulated people—used them as guinea pigs—without their knowledge, and in a setting where that kind of manipulation feels intimate. There’s also a contextual question. People may understand by now that their News Feed appears differently based on what they click—this is how targeted advertising works—but the idea that Facebook is altering what you see to find out if it can make you feel happy or sad seems in some ways cruel.

This raises important questions about how online research intersects with traditional scientific ethics. In sociology, we tend to sum up our ethics in two rules: don’t harm people, and participants have to volunteer or give consent to be part of studies. The burden falls on the researcher to ensure that the subject is protected. How explicit should this be online? Participants on Facebook were likely not seriously harmed, though it could be quite interesting if someone could directly link their news feed from that week to negative offline consequences. And how well do terms of service line up with conducting online research? Given the public relations issues, it would behoove companies to be more explicit about this in their terms of service or somewhere else, though they might argue that informing people immediately when things are happening online can influence results. This issue will be one to watch, as the sheer number of people online will drive more and more online research.

Let’s be honest about the way this Internet stuff works. There is a trade-off involved: users get access to all sorts of information, other people, products, and the latest viral videos and celebrity news that everyone has to know. In exchange, users give up something, whether that is their personal information, the tracking of their online behaviors, or exposure to advertisements intended to part them from their money. Maybe it doesn’t have to be this way, set up with such bargaining; where exactly the line is drawn is a major discussion point at this time. But you should assume websites and companies and advertisers are trying to get as much from you as possible and plan accordingly. Facebook is not a pleasant entity that just wants to make your life better by connecting you to people; it has its own aims, which may or may not line up with your own. Google, Facebook, Amazon, etc. are mega corporations whether they want to be known as such or not.

Facebook to hold pre-ASA conference

Last year’s ASA meetings included some special sessions on big data, and Facebook is hosting a pre-conference this year at the company’s headquarters:

VentureBeat has learned that Facebook is to hold an academics-only conference in advance of the American Sociological Association 2014 Annual Meeting this August in San Francisco.

Facebook will run shuttles from the ASA conference hotel to Facebook’s headquarters in Menlo Park, Calif. According to the company’s event description, the pre-conference focuses on “techniques related to data collection with the advent of social media and increased interconnectivity across the world.”…

According to the event schedule, Facebook will give a demo of its tools and software stack at the conference…

There seems to be a great demand for sociologists who can code. Corey now spends a lot of time hiring fellow sociologists, according to his article. It is also the case in other big companies. In one interview conducted with the London School of Economics, Google’s Vice President Prabhakar Raghavan claimed that he just couldn’t hire enough social scientists.

This is a growing area of employment for sociologists, who would benefit from access to proprietary yet amazing data but would also have to negotiate the different structures of the private technology world versus academia.

NBC: social media use driven by popular TV shows, not the other way around

The Financial Times reports that after studying media habits related to its Olympic coverage, NBC found less social media activity linked to television broadcasts than might have been expected. In other words, it isn’t apparent that people tune into television programs because they see activity about them on social media. At stake is a lot of advertising money.

It will be interesting to see how this plays out. From its early days, one of the major critiques of television was that it encouraged passivity: people generally sat on the couch in their private homes watching a screen. While they may have had conversations about TV with others (and a lot of this has moved online – just see how many sites have Game of Thrones recaps each week), television watching was a limited social activity practiced alone or with family and close friends. Whether social media changes this fundamental posture of television watching remains to be seen.

Research shows new mothers are less active on Facebook, aren’t flooding news feeds with babies

A researcher finds that new mothers are quite a bit less active on Facebook after their children are born:

Recently, Meredith Ringel Morris—a computer scientist at Microsoft Research—gathered data on what new moms actually do online. She persuaded more than 200 of them to let her scrape their Facebook accounts and found the precise opposite of the UnBaby.Me libel. After a child is born, Morris discovered, new mothers post less than half as often. When they do post, fewer than 30 percent of the updates mention the baby by name early on, plummeting to not quite 10 percent by the end of the first year. Photos grow as a chunk of all postings, sure—but since new moms are so much less active on Facebook, it hardly matters. New moms aren’t oversharers. Indeed, they’re probably undersharers. “The total quantity of Facebook posting is lower,” Morris says.

And therein lies an interesting lesson about our supposed age of oversharing. If new moms don’t actually deluge the Internet with baby talk, why does it seem to so many of us that they do? Morris thinks algorithms explain some of it. Her research also found that viewers disproportionately “like” postings that mention new babies. This, she says, could result in Facebook ranking those postings more prominently in the News Feed, making mothers look more baby-obsessed.

And a reminder of how we could see beyond our personal experiences and anecdotes and look at the bigger picture:

I have another theory: It’s a perceptual quirk called a frequency illusion. Once we notice something that annoys or surprises or pleases us—or something that’s just novel—we tend to suddenly notice it more. We overweight its frequency in everyday life. For instance, if you’ve decided that fedoras are a ridiculous hipster fashion choice, even if they’re comparatively rare in everyday life, you’re more likely to notice them. And pretty soon you’re wondering, why is everyone wearing fedoras now? Curse you, hipsters!…

The way we observe the world is deeply unstatistical, which is why Morris’ work is so useful. It reminds us of the value of observing the world around us like a scientist—to see what’s actually going on instead of what just happens to gall (or please) us. I’d hazard that perceptual illusions lead us to overamplify the incidence of all sorts of ostensibly annoying behavior: selfies on Instagram, people ignoring one another in favor of their phones, Google Glass. We don’t have a plague of oversharing. We have a plague of over-noticing. It’s time to reboot our eyes.

This study suggests the mothers themselves are not at fault. The flip side would be to study the news feeds of new mothers’ friends to see how often these pictures and posts show up (and how algorithms might be pushing them). And who are the people more likely to like such posts and pictures? This study may have revealed the supply side of the equation, but there is more to explore.
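To see how like-weighted ranking alone could produce this over-noticing, consider a toy simulation (all counts invented): baby posts make up only 10 percent of a hypothetical new mother’s posts, but because they draw disproportionate likes, a feed that ranks by likes fills its top slots with them.

```python
# Toy simulation: even if baby posts are a minority of a new mother's
# posts, a feed ranked by likes can fill its top slots with them.
# All counts are invented for illustration.

import random

random.seed(42)

posts = []
for i in range(100):
    is_baby = i < 10  # only 10% of posts mention the baby
    # Hypothetical assumption: baby posts draw roughly 3x the likes.
    likes = random.randint(30, 60) if is_baby else random.randint(5, 25)
    posts.append({"id": i, "baby": is_baby, "likes": likes})

top10 = sorted(posts, key=lambda p: p["likes"], reverse=True)[:10]
baby_share_overall = sum(p["baby"] for p in posts) / len(posts)
baby_share_top = sum(p["baby"] for p in top10) / len(top10)

print(f"Baby posts overall: {baby_share_overall:.0%}")          # 10%
print(f"Baby posts in top 10 by likes: {baby_share_top:.0%}")   # far higher
```

Even though nothing in this simulation makes the mother an oversharer, a reader who only sees the top of the ranked feed would reasonably conclude that she posts about little else.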