Having to “prove” racism versus assuming that it is a common feature of American life

In defending some comments she made regarding white liberals and their support for President Obama, Melissa Harris-Perry looks at three common objections to conversations about race in the United States: “prove it,” “I have black friends,” and “who made you an expert?” While these are all familiar responses, the first one is a particularly sociological point that raises questions about how we view society and how this plays out in court:

The first is a common strategy of asking any person of color who identifies a racist practice or pattern to “prove” that racism is indeed the causal factor. This is typically demanded by those who are certain of their own purity of racial motivation. The implication is if one cannot produce irrefutable evidence of clear, blatant and intentional bias, then racism must be banned as a possibility. But this is both silly as an intellectual claim and dangerous as a policy standard.

In a nation with the racial history of the United States I am baffled by the idea that non-racism would be the presumption and that it is racial bias which must be proved beyond reasonable doubt. More than 100 years of philosophical, psychological and sociological research that begins, at least, with the work of W.E.B. Du Bois has mapped the deeply entrenched realities of racial bias on the American consciousness. If anything, racial bias, not racial innocence is the better presumption when approaching American political decision-making. Just fifty years ago, nearly all white Democrats in the US South shifted parties rather than continuing to affiliate with the party of civil rights. No one can prove that this decision was made on the basis of racial bias, but the historical trend is so clear as to require mental gymnastics to imagine this was a choice not motivated by race.

Progressives and liberals should be particularly careful when they demand proof of intentionality rather than evidence of disparate impact in conversations about racism. Recall that initially the 1964 Civil Rights Act made “disparate impact” a sufficient evidentiary claim for racial bias. In other words, a plaintiff did not need to prove that anyone was harboring racial animus in their hearts, they just needed to show that the effects of a supposedly race neutral policy actually had a discernible, disparate impact on people of color. The doctrine of disparate impact helped to clear many discriminatory housing and employment policies off the books.

Michelle Alexander brilliantly demonstrates in The New Jim Crow, the pernicious effect of the Supreme Court moving away from disparate impact as a standard to forcing plaintiffs to demonstrate racist intention. This new standard has encouraged the explosive growth of incarceration of African-Americans, turning a blind eye to disparate impact while it demands “proof” of racial bias.

I believe we must be careful and judicious in our conversations about racism. But I also believe that those who demand proof of interpersonal intention to create a racist outcome are missing the point about how racism works. Racism is not exclusively about hooded Klansmen; it is also about the structures of bias and culture of privilege that infect the left as well.

I like how Harris-Perry flips this objection: looking at the broad sweep of American history, from its days of more overt racism to more covert racism today, why don’t we assume that racism plays a role in everyday life in this society? Can we really assume, as many seem to do, that the issues with race ended at some point, either in the Civil Rights legislation of the 1960s or in the election of minority politicians or the ending of segregationist society in the South? With plenty of indicators of racial disparity today, from online comments from young adults to incarceration rates to homeownership to wealth to residential segregation, perhaps we should see racism as a default feature of American society until proven otherwise.

Harris-Perry hints at one reason why it is difficult for Americans to see the effects of racism: the court system has shifted the burden of proof to “proving” “racist intention.” Without the proverbial smoking gun, it becomes more difficult to build arguments from data and patterns alone, even when they are overwhelming. While the recent court case involving gender discrimination at Walmart, in which sociologists sided with the plaintiffs, isn’t about race, it illustrates some of these principles. The data suggest discrimination may have taken place, as fewer women received promotions or pay raises. But without “proof” that this was a deliberate Walmart policy meant to harm women, the numbers may not be enough. The same holds true with race: “statistical discrimination,” stereotypes applied to large groups of people, may be deemed acceptable because no individual or corporation can be held directly responsible for the outcome.
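The disparate-impact standard discussed above does have a concrete statistical form: U.S. enforcement agencies have long used the EEOC’s “four-fifths rule” as a screening heuristic, under which a practice warrants scrutiny if a protected group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch of that test in Python follows; the group names and counts are hypothetical, and the rule is a rough screen rather than legal proof:

```python
# Sketch of the EEOC "four-fifths rule," a common screening heuristic for
# disparate impact: if any group's selection rate is less than 80% of the
# highest group's rate, the practice is flagged for scrutiny.
# All counts below are hypothetical, for illustration only.

def shows_disparate_impact(groups, threshold=0.8):
    """groups: dict mapping group name -> (selected, applicants).
    Returns (flagged, ratios), where ratios compare each group's
    selection rate to the highest group's rate."""
    rates = {name: sel / apps for name, (sel, apps) in groups.items()}
    top = max(rates.values())
    ratios = {name: rate / top for name, rate in rates.items()}
    flagged = any(r < threshold for r in ratios.values())
    return flagged, ratios

# Hypothetical promotion data: note that no one's intent is measured,
# only the pattern of outcomes.
groups = {"group_a": (60, 100), "group_b": (30, 100)}
flagged, ratios = shows_disparate_impact(groups)
print(flagged, ratios)  # group_b's rate is 50% of group_a's -> flagged
```

The point of the sketch is that a disparate-impact standard asks only about outcome patterns, while an intent standard demands evidence that no dataset of this kind can supply on its own.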

Loss of Sunday as a day of rest more about sociology than loss of religious beliefs?

I’ve had several conversations in the last year or so about how Sundays have shifted from being days for church to normal days full of athletic activities, football, and shopping. One commentator suggests this change is due to larger sociological forces:

The revolution in the American Sunday was wrought not so much by paganism as it was by sociology. The workweek shrank from six to five days, and with two days free each week, Sunday lost its specialness. Women went to work, and retailers had to adjust their hours to suit them. The traditional American Sunday, which consisted largely of attending church and abstaining from work, was conditioned by cultural circumstances that no longer existed. Americans could not adapt themselves to 19th-century agrarian life.

So Sundays are the way they are because of an extended weekend, more women in the workforce, and an information-age society? Were all these changes necessary for this to come to pass, or would one alone, say the extended weekend, have been enough to erode the importance of Sundays? What about the rise of the NFL? If these larger social forces are responsible for this change, what could religious congregations or others do to re-promote the idea of Sunday as sabbath?

I wonder if someone has some hard data on when and how exactly this shift took place…

Sociologist considers “Humanity 2.0”

A sociologist who is “Auguste Comte chair in social epistemology in Warwick University’s Department of Sociology” discusses his new book titled Humanity 2.0. In my opinion, here is the most interesting part of the interview:

Let’s put it this way: we’ve always been heading towards a pretty strong sense of Humanity 2.0. The history of science and technology, especially in the west, has been about remaking the world in our collective “image and likeness”, to recall the biblical phrase. This means making the world more accessible and usable by us. Consider the history of agriculture, especially animal and plant breeding. Then move to prosthetic devices such as eyeglasses and telescopes.

More recently, and more mundanely, people are voting with their feet to enter Humanity 2.0 with the time they spend in front of computers, as opposed to having direct contact with physical human beings. In all this, it’s not so much that we’ve been losing our humanity but that it’s becoming projected or distributed across things that lack a human body. In any case, Humanity 2.0 is less about the power of new technologies than a state of mind in which we see our lives fulfilled in such things.

Wouldn’t someone like Archimedes describe us as Humanity 3.0 compared to his era?

Yes, Archimedes would probably see us as pretty exotic creatures. He would already be impressed by what we take for granted as Humanity 1.0, since the Greeks generally believed that “humanity” was an elite prospect for ordinary Homo sapiens, requiring the right character and training. Moreover, he would be surprised – if not puzzled – that we appear to think of science and technology as some long-term collective project of self-improvement – “progress” in its strongest sense. While the Greeks gave us many of our fundamental scientific ideas, they did not think of them as a blueprint for upgrading the species. Rather, those ideas were meant either to relieve drudgery or provide high-brow entertainment.

What is considered “normal” for human beings has changed quite a bit over the centuries. This reminds me of something I read months ago about the concept of “normal” in medicine: we tend to focus on more unusual circumstances and so don’t know as much about the possible ranges of “normal.” When first introduced, many technological changes were not “normal,” but humans adapted. As Fuller suggests, perhaps we need to have a conversation about what is “normal,” how much change we are willing to accept, and how quickly it might be implemented.

Were Archimedes and the Greeks correct in focusing more on “character and training” rather than scientific progress?

When people talk about these sorts of topics, readers start thinking about things like robots, prosthetics, and computer chip implants rather than eyeglasses or common crops. Indeed, the book cover plays off these common stereotypes with its “futuristic” look at a human head. Does this jump to future technology and its potential problems immediately turn some possible readers off, while a cover that played more with “safer” ideas like eyeglasses might have attracted more people?

American language about government policy and economic life shifts from community to individualism

Here is an interesting argument about how common American discourse about public policy and economic life has shifted since the 1930s:

In 1934, the focus was on people, family security and the risks to family economic well-being that we all share. Today, the people have disappeared. The conversation is now about the federal budget, not about the real economy in which real people live. If a moral concept plays a role in today’s debates, it is only the stern proselytizing of forcing the government to live within its means. If the effect of government policy on average people is discussed, it is only as providing incentives for the sick to economize on medical costs and for the already strapped worker to save for retirement.

From the 1930s to the 1960s, as the Princeton historian Daniel T. Rodgers demonstrates in his recent book, “The Age of Fracture,” American public discourse was filled with references to the social circumstances of average citizens, our common institutions and our common history. Over the last five decades, that discourse has changed in ways that emphasize individual choice, agency and preferences. The language of sociology and common culture has been replaced by the language of economics and individualism.

In 1934, the government was us. We had shared circumstances, shared risks and shared obligations. Today the government is the other — not an institution for the achievement of our common goals, but an alien presence that stands between us and the realization of individual ambitions. Programs of social insurance have become “entitlements,” a word apparently meant to signify not a collectively provided and cherished basis for family-income security, but a sinister threat to our national well-being.

Over the last 50 years we seem to have lost the words — and with them the ideas — to frame our situation appropriately.

This is a fascinating line: “The language of sociology and common culture has been replaced by the language of economics and individualism.” This reminds me of the findings about how public opinion changes when asked about “welfare” versus “assistance for the poor.” The concepts are similar but the connotations of the specific terms matter.

Is the end argument here that changing the language will lead to more communal understandings or does reversing the “Bowling Alone” phenomenon have to come first? It would be helpful to know what exactly these commentators think happened in this period beyond simply the change in language. Could we argue that the success of the community-oriented policies of the mid 1900s that led to a booming economy, rising incomes, suburbanization, and homeownership was “too successful” in that it led to these shifts in language and focus?

Sociology class at Brown has teams of students give away $15,000

I’m guessing that it is a pretty unique sociology course at Brown that has students work in teams to give away $15,000:

Receiving $15,000 for a college class might sound like a laughable dream, but in SOC 1870A: “Investing in Social Change,” a course offered by the Department of Sociology in conjunction with the Swearer Center for Public Service, that is exactly what happens. There is, of course, a catch — students do not keep the $15,000, but instead work in teams of five to award the money in grants to one or more community organizations.

After reading about a philanthropy-based class at another school, Martin Granoff P’93 approached the Office of the Dean of the College about funding a similar class at the University. They brought the idea to Roger Nozaki MAT’89, director of the Swearer Center for Public Service and associate dean of the College for community and global engagement, who then approached Associate Professor of Sociology Ann Dill about co-teaching the class…

This past year there were 34 applicants for the 18 spots.

In addition to assigned readings, the class also features a number of speakers, a majority of whom are Brown alums who work for Rhode Island or Providence nonprofits.

Obviously, it takes a good amount of money to make a course like this happen but it sounds like an exciting opportunity.

I wonder if a class like this is best suited for a wealthy school like Brown, where students could easily end up in positions to give away corporate, government, or private money, or for less-advantaged schools, where being able to give away this amount would put students in a more unusual position.

“Authentic” Philadelphia Main Line mansions ruined by McMansion interiors?

Common critiques of McMansions spend a lot of time on their exterior: the mishmash of architectural styles, the large garage facing the street, the oversized front door and windows, and the impressive front that doesn’t extend to the sides and back. But what happens if the outside of the home is an “authentic” exterior and the insides are changed to reflect more modern, perhaps McMansion-like, tastes?

Something unsettling has been happening on Philadelphia’s storied Main Line. Magnificent early 20th-century mansions, which are meticulously maintained on the outside, have had their interiors transformed to the very height of muddled McMansion style. This is no isolated incident, but a veritable epidemic among the mansions of this traditional old money bastion. For example, this 1929 stone manor in Haverford is well presented on the outside, but the interior is some post-modernish mess where the lowlights include a garish abstract area rug, a pair of hideous curved couches in the living room, and glossy black tile. The brokerbabble tells it one way—”grand old world made new”—but it looks more like grand old world messed up. Meanwhile, the high price tag, $2.9M, virtually ensures that no one will take on the challenge of restoring this country estate to its former glory…

This raises an interesting question: can a home be a McMansion just because of its interior? This is not the traditional definition of a McMansion but the criticism is along the same lines of the complaints about the exterior: it is not “authentic” and is more garish and driven by popular tastes (granite countertops, stainless steel appliances, etc.).

While the exteriors of homes can be protected by preservation districts and regulations regarding teardowns, how would those who don’t like these McMansion interiors fight against them?

And while this article suggests this is a “veritable epidemic” for older mansions like these, are there any numbers to back this up? Is it unreasonable for people to update the interiors of older homes to match newer tastes?

Americans want smaller homes but are still looking online at big ones

There have been several indicators in recent months that Americans are interested in smaller homes. But what if they say they would purchase smaller homes but are still looking at bigger homes? An economist for Trulia.com discusses this:

We asked people to tell us their ideal home size. They’re shunning super-sized homes, the McMansions. Only 6 percent of Americans say their ideal home size is more than 3,200 square feet. Thirty-two percent said they see the ideal home at 1,401 to 2,000 square feet. About 27 percent said 2,001 to 2,600 square feet.

This is partly due to the economic troubles of the recession and recovery. But this could be part of a permanent shift toward smaller homes. And it could reflect baby boomers wanting to downsize and increasing environmental awareness, with some people wanting a smaller environmental footprint.

On the other hand, when we look at the homes that people view on our site — even though only 6 percent of the people in our survey said the McMansion size range was ideal — 27 percent of the property views people are looking at are of that size. So even though people aren’t saying those large homes are their ideal size, they want to see what these homes look like and want to dream big.
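Putting the quoted figures side by side makes the disconnect concrete. A small illustrative Python snippet follows; the “dream ratio” label is my own, not a measure the economist or Trulia uses:

```python
# Side-by-side comparison of the survey and site-traffic figures quoted
# above. The "dream ratio" (view share / stated-ideal share) is an
# illustrative label of my own, not a Trulia metric.

stated_ideal_share = 0.06  # share naming a 3,200+ sq ft home as their ideal
view_share = 0.27          # share of property views in that size range

dream_ratio = view_share / stated_ideal_share
print(f"Homes over 3,200 sq ft are viewed {dream_ratio:.1f}x "
      f"as often as they are named as the ideal size")
```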

This disconnect could be explained in several different ways:

1. Americans look at bigger houses online because it is free. These days, one can look at hundreds of homes and get a good idea of what is on the market. Perhaps we would need to ask realtors what size homes people actually ask to see.

2. Americans actually do want to buy bigger homes but they know the economic realities and perhaps even the cultural shift and so say they would want a smaller home. As the economist suggests, Americans simply like to dream big. This certainly wouldn’t be the first time that self-reported actions and aspirations don’t match up. If the economy picked up, we could then figure out whether the shift toward smaller homes is real or was a reaction to the economic crisis.

3. Americans want to look at bigger homes because they want the features of the bigger homes in a smaller home.

Time will help us figure out which of these interpretations is most accurate, as would more data.

You can read Trulia.com’s press release concerning the survey here and an interpretation here. The web survey involved some weighting:

Figures for age, sex, race/ethnicity, education, region and household income were weighted where necessary to bring them into line with their actual proportions in the population. Propensity score weighting was used to adjust for respondents’ propensity to be online. These online surveys are not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated. For complete survey methodologies, including weighting variables, click here.
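The quoted methodology mentions demographic weighting plus a propensity score adjustment for being online. As a rough illustration of the simplest ingredient, post-stratification, here is a minimal Python sketch; the cell labels and population shares are hypothetical, and Trulia’s actual procedure is more involved:

```python
# Minimal sketch of post-stratification weighting, one ingredient of the
# adjustment described above: each respondent gets a weight equal to the
# known population share of their demographic cell divided by that cell's
# share of the sample. Cell labels and shares below are hypothetical.

from collections import Counter

def poststratification_weights(sample_cells, population_shares):
    """sample_cells: list of cell labels, one per respondent.
    population_shares: dict mapping label -> known population proportion.
    Returns one weight per respondent, in the same order."""
    n = len(sample_cells)
    sample_shares = {cell: count / n for cell, count in Counter(sample_cells).items()}
    return [population_shares[cell] / sample_shares[cell] for cell in sample_cells]

# A sample that over-represents heavy internet users relative to a
# (hypothetical) 50/50 population split:
sample = ["online_heavy"] * 6 + ["online_light"] * 4
weights = poststratification_weights(
    sample, {"online_heavy": 0.5, "online_light": 0.5})
print(weights)  # heavy users down-weighted (~0.83), light users up-weighted (1.25)
```

Even with such weights, as the excerpt notes, a non-probability sample supports no theoretical sampling-error estimate; weighting corrects known imbalances, not unknown ones.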

“Not based on a probability sample” is usually a problem for surveys, even if proper weights are assigned to results. I would like to see some more thorough survey data on some of these issues.

The civil rights argument against NCAA Division I football and men’s basketball

The cover story of the latest Atlantic, “The Shame of College Sports,” is provocative and fascinating. The article is mainly about a series of court cases involving the civil rights of “student-athletes” and securing a share of the NCAA’s football and men’s basketball profits for these “student-athletes.” After reading the full argument, it is difficult to feel much goodwill toward the NCAA.

Facebook moving toward users being able to “treat their life as a 24/7 reality show”

Wired looks at some of Facebook’s recent changes and future plans and summarizes their intentions:

Combined with other recent Facebook announcements — “friend lists” that help you classify your contacts into groups, a Ticker that gives updates from your cohorts as they happen, and changes in the newsfeed to make it more reflective of what your close friends are doing — Facebook is not so subtly doubling down on its ambitions to enable people to shed the pre-digital cloak of isolation and treat their life as a 24/7 reality show, broadcast to those in their social spheres.

Remember when Time named “you” as the person of the year for 2006, before Facebook had swept across the planet? Here is how the story described the effect of the Internet:

It’s a story about community and collaboration on a scale never seen before. It’s about the cosmic compendium of knowledge Wikipedia and the million-channel people’s network YouTube and the online metropolis MySpace. It’s about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.

This is a more hopeful vision than the one Wired offers, in which individuals can produce and star in their own reality show.

The fulcrum on which Facebook’s future might hinge is whether it is able to help people forge new connections or whether people continue to hunker down in their existing social groups. The desire that Facebook users would forge new connections is not surprising if you have read sources like The Facebook Effect, which highlighted the company’s goal of opening up the world. While research studies still suggest that the majority of Facebook contact and relationships exist between people who already knew each other prior to Facebook, perhaps this will change due to Facebook’s interface changes as well as the growing cultural acceptance of conducting our social lives through this online realm. Or perhaps we are destined to live in a world where our highest goal is to become individual celebrities.

Assembling your own furniture benefits you through “the Ikea effect”

Ikea may be able to offer lower prices because consumers have to put together their own furniture, but there could be another benefit for consumers as well: they value a product they assemble themselves more highly.

“When labor leads to love,” a paper in the Journal of Consumer Psychology experimentally tests “the Ikea effect” that leads to people valuing things that they assemble, customize or build themselves more highly than premade, finished goods. We’ve all heard the story of how cake-mixes didn’t sell until they were reformulated to require the “cook” to stir in a fresh egg, but most of what we know about this effect is marketing lore, not research. It’s fascinating stuff.

The abstract of the paper:

In four studies in which consumers assembled IKEA boxes, folded origami, and built sets of Legos, we demonstrate and investigate boundary conditions for the IKEA effect—the increase in valuation of self-made products. Participants saw their amateurish creations as similar in value to experts’ creations, and expected others to share their opinions. We show that labor leads to love only when labor results in successful completion of tasks; when participants built and then destroyed their creations, or failed to complete them, the IKEA effect dissipated. Finally, we show that labor increases valuation for both “do-it-yourselfers” and novices.

I suspected there may not be much of a positive effect when the consumer can’t successfully assemble the purchase, and the abstract’s findings about failed or destroyed creations bear this out.

While this is interesting in itself, it leads me to another question: were companies like Ikea aware of this effect, and did they therefore require assembly for more items so that consumers would develop more positive feelings toward certain products?