“So what are the rules of ethnography, and who enforces them?”

A journalist looking into the Goffman affair discusses the ethics of ethnography:

To find out, I called several sociologists and anthropologists who had either done ethnographic research of their own or had thought about the methodology from an outside perspective. Ethnography, they explained, is a way of doing research on groups of people that typically involves an extended immersion in their world. If you’re an ethnographer, they said, standard operating procedure requires you to take whatever steps you need to in order to conceal the identities of everyone in your sample population. Unless you formally agree to fulfill this obligation, I was told, your research proposal will likely be blocked by the institutional review board at your university…

The frustration is not merely a matter of academics resenting oversight out of principle. Many researchers think the uncompromising demand for total privacy has a detrimental effect on the quality of scholarship that comes out of the social sciences—in part because anonymization makes it impossible to fact-check the work…

According to Goffman, her book is no less true than Leovy’s or LeBlanc’s. That’s because, as she sees it, what sociologists set out to capture in their research isn’t truths about specific individuals but general truths that tell us how the world works. In her view, On the Run is a true account because the general picture it paints of what it’s like to live in a poor, overpoliced community in America is accurate.

“Sociology is trying to document and make sense of the major changes afoot in society—that’s long been the goal,” Goffman told me. Her job, she said, as a sociologist who is interested in the conditions of life in poor black urban America, is to identify “things that recur”—to observe systemic realities that are replicated in similar neighborhoods all over the country. “If something only happens once, [sociologists are] less interested in it than if it repeats,” she wrote to me in an email. “Or we’re interested in that one time thing because of what it reveals about what usually happens.” This philosophy goes back to the so-called Chicago school of sociology, Goffman added, which represented an attempt by observers of human behavior to make their work into a science “by finding general patterns in social life, principles that hold across many cases or across time.”…

Goffman herself is the first to admit that she wasn’t treating her “study subjects” as a mere sample population—she was getting to know them as human beings and rendering the conditions of their lives from up close. Her book makes for great reading precisely because it is concerned with specifics—it is vivid, tense, and evocative. At times, it reads less like an academic study of an urban environment and more like a memoir, a personal account of six years living under extraordinary circumstances. Memoirists often take certain liberties in reconstructing their lives, relying on memory more than field notes and privileging compelling narrative over strict adherence to the facts. Indeed, in a memoir I’m publishing next month, there are several moments I chose to present out of order in order to achieve a less convoluted timeline, a fact I flag for the reader in a disclaimer at the front of the book.

Not surprisingly, there is disagreement within the discipline of sociology, as well as across disciplines, about how ethnography could and should work. It is a research method that requires so much time and personal effort that it can be easy to tie a study to a particular researcher and their laudable steps or mistakes. This might miss the forest for the trees; I've thought for a while that we need more discussion across ethnographies rather than treating each one as the singular work on its subject. In other words, does Goffman's data line up with what others have found in studying race, poor neighborhoods, and the criminal justice system? And if there are no comparisons to make with Goffman's work, why aren't more researchers wrestling with the same topic?

Additionally, this particular discussion highlights longstanding tensions in sociology: qualitative vs. quantitative data (with one often assumed to be more “fact”); “facts” versus “interpretation”; writing academic texts versus books for more general audiences; emphasizing individual stories (which often appeals to the public) versus the big picture; dealing with outside regulations such as IRBs that may or may not be accustomed to dealing with ethnographic methods in sociology; and how to best do research to help disadvantaged communities. Some might see these tensions as more evidence that sociology (and other social sciences) simply can’t tell us much of anything. I would suggest the opposite: the realities of the social world are so complex that these tensions are necessary in gathering and interpreting comprehensive data.

Teaching student tech designers to treat users more humanely

Here are a few college classes intended to help future tech designers keep the well-being of users in mind:

The class, which she taught at the Rhode Island School of Design and the MIT Media Lab, attempted to teach a sense of responsibility to technology inventors through science fiction, a genre in which writers have been thinking deeply about the impact of today’s technologies for decades. “It encourages people to have that long-term vision that I think is missing in the world of innovation right now,” she says. “What happens when your idea scales to millions of people? What happens when people are using your product hundreds of times a day? I think the people who are developing new technologies need to be thinking about that.”

Students in Brueckner’s class built functional prototypes of technologies depicted by science fiction texts. One group created a “sensory fiction” book and wearable gadget that, in addition to adding lights and sounds to a story, constricts the body through air pressure bags, changing temperature and vibrating “to influence the heart” depending on how the narrative’s protagonist feels. Another group was inspired by a dating technology in Dave Eggers’s The Circle that uses information scraped from the Internet about a date to give suggestions about how to impress him or her. They created an interactive website about a friend using his public information to see how he would react to the idea. A third group imagined how a material that could transition from liquid to solid on command like the killing material “ice-nine” from Kurt Vonnegut’s Cat’s Cradle could be used as a prototyping tool…

Neema Moraveji, the founding director of Stanford’s Calming Technology Lab and a cofounder of breath-tracking company Spire, has a different approach for teaching students to consider the human impact of what they are designing. His classes teach students to create technology that actively promotes a calm or focused state of mind, and he co-authored a paper that laid out several suggestions for technology designers, including:

  • Letting users control or temporarily disable interruptions, the way TweetDeck allows users to choose from whom to receive notifications on Twitter.
  • Avoiding overload in the number of features available and the way information is presented. For instance, a Twitter app that opens to the least-recent tweet “gives users the sense that they must read through all the tweets before they are done.”
  • Using a human tone or humor.
  • Providing positive feedback, such as “Thanks for filling out the form” and “You successfully updated the application,” in addition to error alerts.
  • Including easy ways to interact socially, such as Likes and Retweets, which allow people to interact without worrying about how they appear to others.
  • Avoiding time pressure when it is not necessary.
  • Incorporating natural elements like “soothing error tones, naturalistic animations, and desktop wallpapers taken from the natural world.”
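The first suggestion above, user-controlled interruptions, is concrete enough to sketch in code. Here is a minimal, hypothetical illustration (not from Moraveji's paper; the class and method names are my own invention) of a notification filter that lets a user mute individual senders and temporarily snooze everything, in the spirit of TweetDeck's per-account controls:

```python
from datetime import datetime, timedelta

class NotificationFilter:
    """Hypothetical sketch of user-controlled interruptions:
    per-sender muting plus a temporary 'snooze' that pauses
    all notifications for a chosen number of minutes."""

    def __init__(self):
        self.muted_senders = set()
        self.snoozed_until = None  # None means no snooze is active

    def mute(self, sender):
        """Permanently silence one sender (until the user unmutes)."""
        self.muted_senders.add(sender)

    def snooze(self, minutes, now=None):
        """Temporarily disable all interruptions."""
        now = now or datetime.now()
        self.snoozed_until = now + timedelta(minutes=minutes)

    def should_deliver(self, sender, now=None):
        """Decide whether a notification from `sender` reaches the user."""
        now = now or datetime.now()
        if self.snoozed_until and now < self.snoozed_until:
            return False  # everything is paused during a snooze
        return sender not in self.muted_senders
```

The design point is that the user, not the app, holds the controls: muting is per-source, and the snooze expires on its own so the user is not punished for stepping away.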

These sound like interesting ideas that may help designers think not only about the end goals of a product but also about the user experience along the way. Yet I still wonder about the ability of tech designers to resist the pressure their employers might put on them. Putting these more humane options into practice could be easier when working for your own startup but much harder when a big corporation is breathing down your neck to protect the bottom line or push out the end product. Think of the Milgram experiment: can individual designers follow the ethical path on their own? Perhaps some of this training also needs to happen at the executive and managerial levels so that the emphasis on protecting the user pervades entire organizations.

Facebook not going to run voting experiments in 2014

Facebook is taking an increasing role in curating your news but has decided not to conduct experiments with the 2014 elections:

Election Day is coming up, and if you use Facebook, you’ll see an option to tell everyone you voted. This isn’t new; Facebook introduced the “I Voted” button in 2008. What is new is that, according to Facebook, this year the company isn’t conducting any experiments related to election season.

That’d be the first time in a long time. Facebook has experimented with the voting button in several elections since 2008, and the company’s researchers have presented evidence that the button actually influences voter behavior…

Facebook’s experiments in 2012 are also believed to have influenced voter behavior. Of course, everything is user-reported, so there’s no way of knowing how many people are being honest and who is lying; the social network’s influence could be larger or smaller than reported.

Facebook has not been very forthright about these experiments. It didn’t tell people at the time that they were being conducted. This lack of transparency is troubling, but not surprising. Facebook can introduce and change features that influence elections, and that means it is an enormously powerful political tool. And that means the company’s ability to sway voters will be of great interest to politicians and other powerful figures.

Facebook will still have the “I voted” button this week:

On Tuesday, the company will again deploy its voting tool. But Facebook’s Buckley insists that the firm will not this time be conducting any research experiments with the voter megaphone. That day, he says, almost every Facebook user in the United States over the age of 18 will see the “I Voted” button. And if the friends they typically interact with on Facebook click on it, users will see that too. The message: Facebook wants its users to vote, and the social-networking firm will not be manipulating its voter promotion effort for research purposes. How do we know this? Only because Facebook says so.

It seems like there are two related issues here:

1. Should Facebook promote voting? I would guess many experts would welcome popular efforts to get people to vote. After all, how healthy is a democracy if many people don’t exercise their right to vote? Facebook is a popular tool, and if it can help boost political and civic engagement, what could be wrong with that?

2. However, Facebook is also a corporation that is collecting data. Its efforts to promote voting might be part of experiments; users aren’t immediately aware that they are participating in one when they see an “I Voted” button. Or the company may decide to try to influence elections.

Facebook is not alone in promoting elections. Hundreds of media outlets promote election news. Don’t they encourage voting? Aren’t they major corporations? The key here appears to be the experimental angle: people might be manipulated. Might this be okay if (1) they know they are taking part (voluntary participation is key to social science experiments) and (2) it promotes the public good? This sort of critique implies that the first part is necessary because fulfilling a public good is not enough to justify the potential manipulation.

What if Facebook could consistently improve users’ moods?

There has been a lot of hubbub about the ethics of a mood experiment Facebook ran several years ago. But, what if Facebook could consistently alter what it presents users to improve their mood and well-being? Positive psychology guru Marty Seligman hints at this in Flourish:

It is not only measuring well-being that Facebook and its cousins can do, but increasing well-being as well. “We have a new application: goals.com,” Mark continued. “In this app, people record their goals and their progress toward their goals.”

I commented on Facebook’s possibilities for instilling well-being: “As it stands now, Facebook may actually be building four of the elements of well-being: positive emotion, engagement (sharing all those photos of good events), positive relationships (The heart of what ‘friends’ are all about), and now accomplishment. All to the good. The fifth element of well-being, however, needs work, and in the narcissistic environment of Facebook, this work is urgent, and that is belonging to and serving something that you believe is bigger than the self – the element of meaning. Facebook could indeed help to build meaning in the lives of the five hundred million users. Think about it, Mark.” (page 98)

This might still be a question of ethics and of letting users know what is happening. And I’m sure some critics would argue that it is too artificial, that relationships sustained online are of a different kind than face-to-face relationships (though we know most users interact online with people they already know offline), and that this puts too much power in the hands of Facebook. Yet what if Facebook could help improve well-being? What if a lot of good could be done by altering the online experience?

Facebook ran a mood altering experiment. What are the ethics for doing research with online subjects?

In 2012, Facebook ran a one-week experiment that altered users’ news feeds and looked at how their moods changed. The major complaint about this seems to be the lack of consent and/or the deception involved:

The backlash, in this case, seems tied directly to the sense that Facebook manipulated people—used them as guinea pigs—without their knowledge, and in a setting where that kind of manipulation feels intimate. There’s also a contextual question. People may understand by now that their News Feed appears differently based on what they click—this is how targeted advertising works—but the idea that Facebook is altering what you see to find out if it can make you feel happy or sad seems in some ways cruel.

This raises important questions about how online research intersects with traditional scientific ethics. In sociology, we tend to sum up our ethics in two rules: don’t harm people, and participants have to volunteer or give consent to be part of studies. The burden falls on the researcher to ensure that subjects are protected. How explicit should this be online? Participants on Facebook were likely not seriously harmed, though it would be quite interesting if someone could directly link their news feed from that week to negative offline consequences. And how well do terms of service line up with conducting online research? Given the public relations fallout, it would behoove companies to be more explicit about this in their terms of service or elsewhere, though they might argue that informing people the moment something is happening online can influence the results. This will be an issue to watch, as the sheer number of people online will drive more and more online research.

Let’s be honest about the way this Internet stuff works. There is a trade-off involved: users get access to all sorts of information, other people, products, and the latest viral videos and celebrity news that everyone has to know. In exchange, users give up something, whether that is their personal information, the tracking of their online behaviors, or exposure to advertisements intended to part them from their money. Maybe it doesn’t have to be set up as such a bargain, but where exactly the line is drawn is a major point of discussion right now. In the meantime, you should assume that websites, companies, and advertisers are trying to get as much from you as possible, and plan accordingly. Facebook is not a benevolent entity that just wants to make your life better by connecting you to people; it has its own aims, which may or may not line up with your own. Google, Facebook, Amazon, and the rest are mega-corporations, whether they want to be known as such or not.

Wait, What’s Your Problem: does the Census require people to participate or not?

Sunday’s What’s Your Problem? column in the Chicago Tribune featured a woman irritated by some Census workers who did sound like creepers. Yet, a Census employee is still unclear about whether U.S. residents have to participate in Census surveys:

He said census interviewers are trained to be professional, courteous, and to never use the possibility of a fine to coerce people into participating.

Olson said the American Community Survey is mandatory and there is a potential fine for people who fail to participate, but the Census Bureau relies on public cooperation to encourage responses.

The survey is important because its data guide nearly 70 percent of federal grants, Olson said.

This is a common response from the Census Bureau, but it is still vague. Is participating in the Census and the American Community Survey mandatory or not? Is there a fine for failing to participate or not? The answer seems to be yes and yes: participation is mandatory and a fine is possible, and yet no one really has to worry about incurring a penalty.

Typical social science research, which is akin to what the Census Bureau is doing (and the organization has been led by sociologists), has several basic ethical rules for collecting information from people. Don’t harm people. (See the above story about peeking in people’s windows.) And participation has to be voluntary, even if securing that voluntary participation involves contacting people multiple times. So is participation really voluntary if there is even the implicit threat of a fine? This is where the Census looks less like social science research and more like government action; it is a fine line the bureau is walking. Clearing this up might help improve relations with people who are suspicious about why the Census wants basic information about their lives.


Sociologist on how studying an extremist group led to a loss of objectivity

A retired sociologist who studied the Aryan Nation discusses how his research led to a loss of objectivity and a change of research topics:

Aho began his research in the mid-1980s with a focus on the most notorious group in Idaho, the Aryan Nation Church near Coeur d’Alene and Hayden Lake. Annual conferences were held there with people from all around the world to fight what they called the “race war.” The group, originally formed in California, was forced to relocate to Idaho due to pressure from authorities. Aho was able to interview members of the group face to face, conduct phone interviews and correspond with prison inmates who were part of the organization.

“These individuals were genuinely good, congenial folks,” said Aho. “They were very independent, married, church-going people with deep beliefs. It was only when they gathered in groups and reaffirmed each other’s prejudices that things became dangerous…

In his research, Aho tried to place himself in his subjects’ shoes. He explained that it is important to see yourself in the other person in order to find mutual ground and truths that can only be obtained through this research methodology. However, after nearly a decade of research, he felt that he was losing objectivity and only adding to the problem.

“I spent years trying to understand the people who are attracted to violence, but I began to feel like my fascination with violence made me partly responsible for it,” Aho said. “I think I lost my sociological objectivity, and thought it was time to end my efforts of trying to understand it, and move on to other scholarly activities.”

Some candor about researching a difficult topic. Given recent statements by some that we should not “commit sociology” and should refrain from looking for explanations for violence, we could simply ignore such groups. But looking for explanations is not the same as excusing or condoning behavior, and it may help limit violence in the future. At the same time, spending lots of time with people, whether they are good or bad, can lead to relationships and a humanizing of the research subject. This may yield better data as well as dignity for the subject, but it can also lead to the “going native” problem that anthropologists sometimes discuss. A sociologist wants to remain an observer and analyst, even while trying to put themselves in the shoes of others.

It would be interesting to hear the opinions of sociologists regarding studying clearly unpopular groups like white supremacists/terrorists. Sociologists are often interested in studying disadvantaged or voiceless groups but what about groups with which they profoundly disagree?