Sociology experiment shows how parties can flip positions

Cass Sunstein describes a sociology study that could help explain how attachment to a political party can lead to divergent political positions:

Here’s how the experiment worked. All participants (consisting of thousands of people) were initially asked whether they identified with Republicans or Democrats. They were then divided into 10 groups. In two of them, participants were asked what they thought about 20 separate issues — without seeing the views of either political party on those issues. This was the “independence condition.” In the eight other groups, participants could see whether Republicans or Democrats were more likely to agree with a position. This was the “influence condition.”

In the influence condition, each participant was asked his own view, which was used to update the relative level of support of each party. That updated level was displayed, in turn, to the next participant in the same group.

The authors carefully selected issues on which people would not be likely to begin with strong convictions along party lines. For example: “Companies should be taxed in the countries where they are headquartered rather than in the countries where their revenues are generated.” And, “The exchange of cryptocurrencies (such as Bitcoin, Ethereum, or Litecoin) should be banned in the United States.” Or this: “Artificial intelligence software should be used to detect online blackmailing on email systems.”

The authors hypothesized that in the influence condition, it would be especially hard to predict where Republicans and Democrats would end up. If the early Republican participants in one group ended up endorsing a position, other Republicans would be more likely to endorse it as well — and Democrats would be more likely to reject it. But if the early Republicans rejected it, other Republicans would reject it as well — and Democrats would endorse it.

And the findings:

Across groups, Democrats and Republicans often flipped positions, depending on what the early voters did. On most of the 20 issues, Democrats supported a position in at least one group but rejected it in at least one other, and the same was true of Republicans. As the researchers put it, “Chance variation in a small number of early movers” can have major effects in tipping large populations — and in getting both Republicans and Democrats to embrace a cluster of views that actually have nothing to do with each other.

This seems like a good reminder about humans: attachments to groups matter a great deal. When we take in information, what the groups we identify with think shapes how we respond. This is the case even in an age when we claim to be individuals.
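To make the early-mover dynamic concrete, here is a minimal toy simulation in Python. It only sketches the general mechanism of within-party herding, not the study's actual procedure: the decision rule, the follow_party_prob parameter, and the group sizes are assumptions made up for illustration.

```python
import random

def run_influence_group(n_participants=200, follow_party_prob=0.7, seed=None):
    """Toy simulation of one 'influence condition' group for a single issue.

    Each simulated participant sees the running level of support among
    co-partisans who answered before them and, with some probability,
    adopts that majority position; otherwise they answer at random.
    """
    rng = random.Random(seed)
    tallies = {"D": [0, 0], "R": [0, 0]}  # [number supporting, number asked] per party

    for _ in range(n_participants):
        party = rng.choice(["D", "R"])
        support, total = tallies[party]
        if total > 0 and rng.random() < follow_party_prob:
            answer = support / total >= 0.5  # follow the displayed co-partisan majority
        else:
            answer = rng.random() < 0.5      # no signal yet (or ignoring it): coin flip
        tallies[party][0] += int(answer)
        tallies[party][1] += 1

    return {party: round(s / t, 2) for party, (s, t) in tallies.items()}

# Different seeds stand in for different groups: chance variation among the
# early movers can leave the same issue leaning one way in one group and the
# opposite way in another.
for group in range(5):
    print(f"group {group}: {run_influence_group(seed=group)}")
```

Running it with different seeds is the toy analogue of running separate groups: the same issue can come out coded in opposite directions purely because of the first few answers.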

Studying social change more broadly is a difficult task. It is perhaps easiest to see large-scale change after it has already happened, when observers can look back and pick out the path by which society changed. It can be quite hard to see social change as it is occurring, when it is unclear what exactly is happening or in which direction a trend line will go. It can also be difficult to see changes that did not take off or trends that did not go very far.

Online experiment looks at “who driverless cars should kill”

Experiments don’t have to take place in a laboratory: the MIT Media Lab put together the “Moral Machine” to look into how people think driverless cars should operate.

That’s the premise behind “Moral Machine,” a creation of Scalable Corporation for MIT Media Lab. People who participate are asked 13 questions, all with just two options. In every scenario, a self-driving car with sudden brake failure has to make a choice: continue ahead, running into whatever is in front, or swerve out of the way, hitting whatever is in the other lane. These are all variations on philosophy’s “Trolley Problem,” first formulated in the late 1960s and named a little bit later. The question: “is it more just to pull a lever, sending a trolley down a different track to kill one person, or to leave the trolley on its course, where it will kill five?” is an inherently moral problem, and slight variations can change greatly how people choose to answer.

For the “Moral Machine,” there are lots of binary options: swerve vs. stay the course; pedestrians crossing legally vs. pedestrians jaywalking; humans vs. animals; and crash into pedestrians vs. crash in a way that kills the car’s occupants.

There is also, curiously, room for variation in the kinds of pedestrians the runaway car could hit. People in the scenario are male or female, children, adult, or elderly. They are athletic, nondescript, or large. They are executives, homeless, criminals, or nondescript. One question asked me to choose between saving a pregnant woman in a car, or saving “a boy, a female doctor, two female athletes, and a female executive.” I chose to swerve the car into the barricade, dooming the pregnant woman but saving the five other lives…

Trolley problems, like those offered by the Moral Machine, are eminently anticipated. At the end of the Moral Machine problem set, it informs test-takers that their answers were part of a data collection effort by scientists at the MIT Media Lab, for research into “autonomous machine ethics and society.” (There is a link people can click to opt-out of submitting their data to the survey).
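To see how each of these dilemmas boils down to a pair of outcomes with a handful of binary attributes, here is a rough sketch of how a scenario and a response could be represented. The field names and values are my own illustration, not the Moral Machine's actual data format.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """One side of a dilemma (illustrative fields, not the real schema)."""
    action: str              # "stay the course" or "swerve"
    victims: List[str]       # who gets hit if this option is chosen
    crossing_legally: bool   # were the pedestrians crossing legally?
    kills_occupants: bool    # does this option doom the car's passengers?

@dataclass
class Scenario:
    option_a: Outcome
    option_b: Outcome

# A single made-up dilemma and one participant's answer to it.
dilemma = Scenario(
    option_a=Outcome("stay the course", ["two pedestrians"], False, False),
    option_b=Outcome("swerve", ["the car's passengers"], False, True),
)
response = {"dilemma": dilemma, "choice": "option_b"}
print(response["choice"])
```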

It will be interesting to see what happens with these results. How does the experiment get around the sampling issue of who chooses to participate in such a study? Should the public get a voice in deciding how driverless cars are programmed to operate, particularly when it comes to life and death decisions? Are life and death decisions ultimately reducible to either/or choices?

At the same time, I like how this takes advantage of the Internet. This experiment could be conducted in a laboratory: subjects would be presented with a range of situations and asked to respond. But, the N possible in a lab is much lower than what is available online. Additionally, if this study is at the beginning of work regarding driverless cars, perhaps a big N with a less representative sample is more desirable just to get some idea of what people are thinking.

Get back to the actual behavior in the science of behavior

An interesting look at the replicability of the concept of ego depletion includes this bit toward the end about doing experiments:

If the replication showed us anything, Baumeister says, it’s that the field has gotten hung up on computer-based investigations. “In the olden days there was a craft to running an experiment. You worked with people, and got them into the right psychological state and then measured the consequences. There’s a wish now to have everything be automated so it can be done quickly and easily online.” These days, he continues, there’s less and less actual behavior in the science of behavior. “It’s just sitting at a computer and doing readings.”

Perhaps, just like with the reliance on smartphones in daily life, researchers are also becoming overly dependent on the Internet and computers to help them do the work. On one hand, it certainly speeds up the work, both in data collection and analysis. Speed is very important in academia where the stakes for publishing quickly and often continue to rise. On the other hand, the suggestion here is that we miss something by sitting at a computer too much and not actually analyzing behavior. We might take mental shortcuts, not ask the same kind and number of questions, and perform different analyses compared to direct observation and doing some work by hand.

This reminds me of a reading I had my social research students do last week. The reading involved the different types of notes one should take when doing fieldwork. When it came to doing the analysis, the researcher suggested nothing beat spreading out all the paper notes on the floor and immersing oneself in them. This doesn’t seem very efficient these days; whether one is searching for words in a text document or using qualitative data analysis software, putting paper all over the floor and wading through it seems time-consuming and unnecessary. But, I do think the author was right: the physical practice of immersing oneself in data and observations is simply a unique experience that yields rich data.

Facebook not going to run voting experiments in 2014

Facebook is taking an increasing role in curating your news but has decided not to conduct experiments with the 2014 elections:

Election Day is coming up, and if you use Facebook, you’ll see an option to tell everyone you voted. This isn’t new; Facebook introduced the “I Voted” button in 2008. What is new is that, according to Facebook, this year the company isn’t conducting any experiments related to election season.

That’d be the first time in a long time. Facebook has experimented with the voting button in several elections since 2008, and the company’s researchers have presented evidence that the button actually influences voter behavior…

Facebook’s experiments in 2012 are also believed to have influenced voter behavior. Of course, everything is user-reported, so there’s no way of knowing how many people are being honest and who is lying; the social network’s influence could be larger or smaller than reported.

Facebook has not been very forthright about these experiments. It didn’t tell people at the time that they were being conducted. This lack of transparency is troubling, but not surprising. Facebook can introduce and change features that influence elections, and that means it is an enormously powerful political tool. And that means the company’s ability to sway voters will be of great interest to politicians and other powerful figures.

Facebook will still have the “I voted” button this week:

On Tuesday, the company will again deploy its voting tool. But Facebook’s Buckley insists that the firm will not this time be conducting any research experiments with the voter megaphone. That day, he says, almost every Facebook user in the United States over the age of 18 will see the “I Voted” button. And if the friends they typically interact with on Facebook click on it, users will see that too. The message: Facebook wants its users to vote, and the social-networking firm will not be manipulating its voter promotion effort for research purposes. How do we know this? Only because Facebook says so.

It seems like there are two related issues here:

1. Should Facebook promote voting? I would guess many experts would welcome popular efforts to get people to vote. After all, how good is democracy if many people don’t exercise their right to vote? Facebook is a popular tool, and if it can help boost political and civic engagement, what could be wrong with that?

2. However, Facebook is also a corporation that is collecting data. Its efforts to promote voting might be part of experiments. Users aren’t immediately aware that they are participating in an experiment when they see an “I voted” button. Or, the company may decide to try to influence elections.

Facebook is not alone in promoting elections. Hundreds of media outlets promote election news. Don’t they encourage voting? Aren’t they major corporations? The key here appears to be the experimental angle: people might be manipulated. Might this be okay if (1) they know they are taking part (voluntary participation is key to social science experiments) and (2) it promotes the public good? This sort of critique implies that the first part is necessary because fulfilling a public good is not enough to justify the potential manipulation.

Using randomized controlled trials to test methods for addressing global poverty

Here is a relatively new way to test options for addressing poverty: use randomized controlled trials.

What Kremer was suggesting is a scientific technique that has long been considered the gold standard in medical research: the randomized controlled trial. At the time, though, such trials were used almost exclusively in medicine—and were conducted by large, well-funded institutions with the necessary infrastructure and staff to manage such an operation. A randomized controlled trial was certainly not the domain of a recent PhD, partnering with a tiny NGO, out in the chaos of the developing world…

The study wound up taking four years, but eventually Kremer had a result: The free textbooks didn’t work. Standardized tests given to all students in the study showed no evidence of improvement on average. The disappointing conclusion launched ICS and Kremer on a quest to discover why the giveaway wasn’t helping students learn, and what programs might be a better investment.

As Kremer was realizing, the campaign for free textbooks was just one of countless development initiatives that spend money in a near-total absence of real-world data. Over the past 50 years, developed countries have spent something like $6.5 trillion on assistance to the developing world, most of those outlays guided by little more than macroeconomic theories, anecdotal evidence, and good intentions. But if it were possible to measure the effects of initiatives, governments and nonprofits could determine which programs actually made the biggest difference. Kremer began collaborating with other economists and NGOs in Kenya and India to test more strategies for bolstering health and education…

In the decade since their founding, J-PAL and IPA have helped 150 researchers conduct more than 425 randomized controlled trials in 55 countries, testing hypotheses on subjects ranging from education to agriculture, microfinance to malaria prevention, with new uses cropping up every year (see “Randomize Everything,” below). Economists trained on randomized controlled trials now work in the faculties of top programs, and some universities have set up their own centers to support their growing rosters of experiments in the social sciences.

If this is indeed a relatively new approach, what took so long? Perhaps the trick was realizing that experiments, typically associated with tightly controlled laboratory or medical settings, could be performed in less controlled settings. As the article notes, they are not easy to set up. One of the biggest issues might be randomizing enough people into the different groups to wash out all of the possible factors that might influence the results.
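As a bare-bones illustration of the logic (random assignment washes out confounders, so a simple difference in average outcomes estimates the program's effect), here is a toy sketch. The "students," the textbook program, and the effect size are all invented for the example; this is not Kremer's or J-PAL's actual design.

```python
import random
import statistics

def randomized_trial(units, apply_program, measure_outcome, seed=0):
    """Randomly assign units to treatment or control, apply the program to
    the treatment group only, and compare average outcomes."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    treatment, control = shuffled[:half], shuffled[half:]
    treated = [measure_outcome(apply_program(u)) for u in treatment]
    untreated = [measure_outcome(u) for u in control]
    return statistics.mean(treated) - statistics.mean(untreated)

# Invented data: students with noisy baseline test scores, and a hypothetical
# program whose true effect is a small, noisy boost of about one point.
rng = random.Random(42)
students = [{"score": rng.gauss(50, 10)} for _ in range(2000)]
free_textbooks = lambda s: {"score": s["score"] + rng.gauss(1, 5)}
test_score = lambda s: s["score"]

print(randomized_trial(students, free_textbooks, test_score))
# With enough students, the estimate converges on the true effect; with only a
# handful, chance imbalances between the groups can easily swamp it.
```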

This also seems related to the uptick in interest in natural experiments, where social scientists take advantage of “natural” occurrences, perhaps a policy change or a natural disaster, to compare results across groups. Again, laboratories offer controlled settings, but there are only so many things that can be addressed and the number of people in such studies tends to be pretty small.

Good school districts give homes an average $50 per square foot boost in value

Redfin suggests a home located in a high-performing school district can command a higher price:

How much more do they have to pay for a home that feeds into a top-ranked elementary school as opposed to an average-ranked school? Nationally, try an extra $50 per square foot, on average, according to the data crunchers at Redfin.

In the Chicago area, the median price of a home near top-tier schools was $257,500, 58.5 percent higher than the median price of $162,500 for a home near an average-ranked school.

The findings are a jolt of reality for almost 1,000 consumers who plan to buy a home in the next two years and completed a Realtor.com survey in July. More than half of those potential buyers said they’d be willing to pay as much as 20 percent above their budget to buy a home within certain school boundaries. Apparently, that’s not enough to get into the best schools.

To do its calculations, Redfin compared median sale prices of similar homes in the same neighborhood but which fell within the boundaries of different elementary schools. The transactions studied were those that closed between May 1 and Aug. 31 — a time when home prices were showing recovery in most parts of the country — and were listed on local multiple listing services. Then Redfin boiled those numbers down into median sales prices per square foot.

An interesting experimental design – houses matched by neighborhood but in different school districts – and an interesting finding.
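The underlying calculation is straightforward to sketch. With made-up sale records (the neighborhood name, prices, and square footage below are purely illustrative), the comparison looks something like this:

```python
from collections import defaultdict
from statistics import median

# Hypothetical sale records: same neighborhood, different elementary-school tiers.
sales = [
    {"neighborhood": "Oak Park", "school_tier": "top",     "price": 515_000, "sqft": 2_000},
    {"neighborhood": "Oak Park", "school_tier": "top",     "price": 480_000, "sqft": 1_900},
    {"neighborhood": "Oak Park", "school_tier": "average", "price": 390_000, "sqft": 2_050},
    {"neighborhood": "Oak Park", "school_tier": "average", "price": 360_000, "sqft": 1_850},
]

# Median price per square foot, grouped by neighborhood and school tier.
groups = defaultdict(list)
for s in sales:
    groups[(s["neighborhood"], s["school_tier"])].append(s["price"] / s["sqft"])

per_sqft = {k: median(v) for k, v in groups.items()}
for (hood, tier), value in per_sqft.items():
    print(f"{hood} / {tier}: ${value:,.0f} per sq ft")

# The within-neighborhood gap is the quantity of interest:
gap = per_sqft[("Oak Park", "top")] - per_sqft[("Oak Park", "average")]
print(f"Premium for the top-tier school zone: ${gap:,.0f} per sq ft")
```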

This reminds me of hearing Annette Lareau speak at the American Sociological Association meetings this past August in New York City. When she and her fellow researchers looked at how middle- and upper-class families took schools into account when searching for where to live, they found that these parents were able to quickly eliminate most school districts as not being good enough. In contrast to the lengthy research these parents did in other areas of life, they were quickly able to learn, through word of mouth, which neighborhoods they would buy in.

Putting this all together, if there are only so many homes in the top school districts, buyers can ask for more and expect some competition among people who want to be part of the better school district.

Focus groups examine home designs in a warehouse

Pulte recently put together some new home designs in a Chicago-area warehouse to see how consumers would respond:

Basically, it was the latest incarnation of the company’s ongoing experiment: walking focus groups of consumers through full-size prototypes of floor plans of homes that Pulte intends to build, and asking for reactions before the first shovelful of earth has been dug. The consumers’ input enables the builder to tout the homes as “Life Tested.”

So on this September day, in an 88,000-square-foot warehouse in suburban Franklin Park, nine Chicago-area homeowners were life-testing “houses” framed in lumber and covered with sheets of Tyvek house wrap to simulate walls.

Pulte brought in a team of carpenters to do the framing for 11 houses and the fixtures within, such as kitchen islands and bathroom sinks, which were covered in corrugated paper and marked — in case you weren’t sure what you were looking at — “island,” “sink,” etc…

Total silence ensued — they weren’t supposed to speak to one another, so as not to influence opinions — as they wandered from room to room. Then they moved “upstairs” (that is, next door) to do the same thing.

This sounds like a helpful approach to getting feedback about particular interior features, even if the features aren’t fully constructed. However, I wonder how valuable this feedback is without situating a home within a particular neighborhood. I assume Pulte would say the neighborhood is another important factor and that they build attractive neighborhoods that only enhance the individual homes.

It is also interesting to see that Pulte’s designs are then said to be “life tested.” Pulte has built enough homes over the decades to legitimately claim this for established features, but can they really say this for new designs?