Sports writer reviews new book “A Dreadful Deceit: The Myth of Race from the Colonial Era to Obama’s America”

Not too many football columns include a review of a new book on the social construction of race:

This emerging theory is reflected in a book about to be released, “A Dreadful Deceit: The Myth of Race from the Colonial Era to Obama’s America,” by Jacqueline Jones, a highly regarded University of Texas historian. Your columnist just finished an advance copy, and was impressed — the volume may have a lasting impact on American thought.

Jones persuasively argues that the wealthy and powerful of previous centuries were obsessed with holding back the poor. Pretending blacks represent a different “race” than whites created an excuse, she contends, for the well-off to mistreat blacks; and also a lever to prevent poor blacks and poor whites from joining in common cause. Whites “fashioned their own identity by contrasting themselves to blacks,” Jones writes, ingraining the concept that skin color is somehow fundamentally different from all the other cosmetic distinctions among persons, then using the biases to prevent blacks from achieving the education and economic power that would disprove racial assumptions.

“A Dreadful Deceit” is one of those books that may succeed more because it coincides with developments in public thought, than because of being a great work. Jones employs the “storytelling” structure that is all the rage in academia, which posits that because minorities and women of the past were marginalized, they can be understood only through their personal narratives. This may be true; the trouble is that for every personal narrative of oppression, there is a personal narrative of someone who was not mistreated. Grand themes of history, one of which Jones claims to have discovered, need more than anecdotes, however compelling. Jones also comes perilously close to contending, “Race is an imaginary concept for which the white race should be blamed.”…

Such faults aside, “A Dreadful Deceit” may put into the national conversation the notion that categorizing by “race” is an obsolescent idea. Skin color tells nothing more about a person than eye color; there is simply one human race. That is a powerful, progressive idea.

Sounds like an interesting book. However, I wonder if it could be used to justify a color-blind view: if everyone is more or less the same genetically, why talk about race at all? Even if race is socially constructed, it continues to have real ramifications.

On a separate note, I must say I enjoy sports writers who can also converse intelligently about a broad range of academic topics. Gregg Easterbrook does this quite well, but most do not. Bill Simmons leans too heavily on pop culture and often acts like he wants to be viewed as smart rather than actually being learned. The typical big-city newspaper columnist will often make reference to social issues but does so in a ham-handed way. Think Rick Reilly, who often uses personal narratives to try to make a bigger point. Too often, sports writers act as if sports are the main thing that matters – and the rest of life supports it.

Is the media narrative that bullying directly leads to suicide a social construction?

A member of the Poynter Institute argues the media narrative that bullying leads to suicide is too simple:

The common narrative goes like this: Mean kids, usually the most popular and powerful, single out and relentlessly bully a socially weaker classmate in a systemic and calculated way, which then drives the victim into a darkness where he or she sees no alternative other than committing suicide.

And yet experts – those who study suicide, teen behavior and the dynamics of cyber interactions of teens – all say that the facts are rarely that simple. And by repeating this inaccurate story over and over, journalists are harming the public’s ability to understand the dynamics of both bullying and suicide…

Yet when journalists (and law enforcement, talking heads and politicians) imply that teenage suicides are directly caused by bullying, we reinforce a false narrative that has no scientific support. In doing so, we miss opportunities to educate the public about the things we could be doing to reduce both bullying and suicide…

It is journalistically irresponsible to claim that bullying leads to suicide. Even in specific cases where a teenager or child was bullied and subsequently commits suicide, it’s not accurate to imply the bullying was the direct and sole cause behind the suicide.

I don’t know this literature too well outside of reading some work by Michael Kimmel on gender and bullying and Katherine Newman et al. regarding school shootings. Some thoughts:

1. Bullying is not a good thing, even if it doesn’t lead to tragic outcomes.

2. Even if a majority of kids who are bullied don’t commit suicide, that doesn’t mean there isn’t a relationship. It might be that under certain conditions (perhaps social and environmental conditions, perhaps more individual physiological traits) this relationship is more likely to hold.

3. It seems that the media does not generally do very well in conveying complex stories. Perhaps it is because they don’t lend themselves to soundbites and headlines. Perhaps it is the need to find the winners, just like on ESPN. Perhaps the audience doesn’t want a complex story. But, look at any of the major events of recent years that have drawn a lot of media attention – from invading Iraq to Hurricane Katrina to the Trayvon Martin case – and you see relatively simple narratives for incredibly complex situations. Context matters.

As researchers look more at this issue, this is a reminder that the public perceptions of tragic events matter.

h/t Instapundit

Journalists: stop saying scientists “proved” something in studies

One comment after a story about a new study on innovation in American films over time reminds journalists that scientists do not “prove” things in studies.

The front page title is “Scientist Proves…”

I’m willing to bet the scientist said no such thing. Rather it was probably more along the lines of “the data gives an indication that…”

Terms in science have pretty specific meanings that differ from our day-to-day usage. “Prove” and “theory,” among others, are such terms. Indeed, science tends to avoid “prove” or “proof.” To quote another article: “Proof, then, is solely the realm of logic and mathematics (and whiskey).”

[end pedantry]

To go further, using the language of proof/prove tends to convey a particular meaning to the public: the scientist has shown, beyond any doubt and in 100% of cases, that a causal relationship exists. This is not how science, natural or social, works. We tend to say outcomes are more or less likely. There can also be relationships that are not causal – correlation without causation is a common example. Similarly, a relationship can still be true even if it doesn’t apply to all or even most cases. When teaching statistics and research methods, I try to remind my students of this. Early on, I suggest we are not in the business of “proving” things but rather of looking for relationships between things using methods, quantitative or qualitative, that still have some measure of error built in. If we can’t have 100% proof, that doesn’t mean science is dead – it just means that, done correctly, we can be more confident about our observations.
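The “measure of error” point can be illustrated with a short simulation. This is a minimal sketch, not part of the original post: the true rate, the sample size, and the normal-approximation margin of error are all assumptions chosen for illustration. The idea is that any single survey estimate misses the true value, but roughly 95% of the intervals built around those estimates cover it – confidence, not proof.

```python
import random

random.seed(42)

TRUE_RATE = 0.30   # hypothetical population value; we pretend not to know it
N = 1_000          # respondents per survey

def one_survey():
    """Draw one sample and return the estimated rate with a 95% margin of error."""
    hits = sum(random.random() < TRUE_RATE for _ in range(N))
    p_hat = hits / N
    moe = 1.96 * (p_hat * (1 - p_hat) / N) ** 0.5  # normal-approximation margin
    return p_hat, moe

# Run many surveys: the point estimate moves around, but about 95% of the
# intervals should cover the true value.
trials = 500
covered = 0
for _ in range(trials):
    p_hat, moe = one_survey()
    if p_hat - moe <= TRUE_RATE <= p_hat + moe:
        covered += 1

print(f"intervals covering the true rate: {covered}/{trials}")
```

No single run “proves” the rate is 0.30; what the method delivers is a known, quantifiable error rate over repeated sampling.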

See an earlier post regarding how Internet commenters often fall into similar traps when responding to scientific studies.


Irresponsible to take FBI crime statistics and name a “murder capital”

News stories like this one seem to suggest that the FBI just designated Chicago the murder capital of the United States.

Move over New York, the Second City is now the murder capital of America.

According to new crime statistics released this week by the Federal Bureau of Investigation, Chicago had more homicides in 2012 than any other city in the country. There were 500 murders in Chicago last year, the FBI said, surpassing New York City, which had 419.

In 2011, there were 515 homicides in the Big Apple, compared with the 431 in Chicago.

But as the Washington Post noted, residents of Chicago and New York were much less likely to be victims of a homicide than some Michigan residents. In Flint, for example, there were 63 killings — a staggering number when you consider Flint’s population is 101,632 — “meaning 1 in every 1,613 city residents were homicide victims.” In Detroit, where 386 killings occurred in 2012, 1 in 1,832 were homicide victims.

Check out the FBI press release announcing the 2012 figures: there is no mention of a “murder capital.” In fact, the press release seems to caution against the sort of sensationalistic interpretations that are implied by “murder capital”:

Each year when Crime in the United States is published, some entities use the figures to compile rankings of cities and counties. These rough rankings provide no insight into the numerous variables that mold crime in a particular town, city, county, state, tribal area, or region. Consequently, they lead to simplistic and/or incomplete analyses that often create misleading perceptions adversely affecting communities and their residents. Valid assessments are possible only with careful study and analysis of the range of unique conditions affecting each local law enforcement jurisdiction. The data user is, therefore, cautioned against comparing statistical data of individual reporting units from cities, metropolitan areas, states, or colleges or universities solely on the basis of their population coverage or student enrollment.

To their credit, a number of these news stories include figures like those in the quoted section above: the murder rate is probably more important than the actual number of murders since populations can vary quite a bit. But, that still doesn’t stop media sources from leading with the “murder capital” idea.

My conclusion: this is an example of an irresponsible approach to crime statistics. Even if murders were down everywhere, the media could still designate a “murder capital” referring to whatever city had the most murders.
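The difference between counts and rates is easy to show. In this sketch, the homicide counts are the 2012 figures quoted above, while the population numbers (apart from Flint’s, which the article gives) are rough 2012 estimates I am assuming for illustration:

```python
# 2012 homicide counts from the FBI figures quoted above; populations are
# approximations (Flint's is from the article, the rest are assumed).
cities = {
    "Chicago": (500, 2_700_000),
    "New York": (419, 8_300_000),
    "Detroit": (386, 707_000),
    "Flint": (63, 101_632),
}

# Per-capita rates tell a different story than raw counts.
rates = {city: homicides / pop * 100_000 for city, (homicides, pop) in cities.items()}

for city in sorted(rates, key=rates.get, reverse=True):
    homicides, _ = cities[city]
    print(f"{city:9s} {homicides:4d} homicides, {rates[city]:5.1f} per 100,000")
```

Sorted by rate rather than raw count, Flint and Detroit, not Chicago, come out on top – which is the Washington Post’s point.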

“Why do we believe North America’s biggest cities are dangerous when they are, in fact, among the safest places in the world?”

Just how dangerous are North American cities?

Why do we believe North America’s biggest cities are dangerous when they are, in fact, among the safest places in the world? In large part, because it was once true: For most of the 20th century (and a good part of the 19th), our big cities really were dangerous. Murders, muggings, armed robberies and sexual assaults were big-city phenomena, and the way to escape physical danger was to move away. Today, the opposite is true.

If you really want to find murder city, you need to get out of North America. The most violent cities in the world are places that used to be small and peaceful, but have very recently become huge cities. And no wonder: The cities of the Southern and Eastern hemispheres are doing today what our cities did a century ago: Absorbing huge, formerly rural populations. In 50 years, Kinshasa has grown from 500,000 to 8 million people; Istanbul from 900,000 to 12 million…

“The twenty-first century,” it concludes, “is witness to a crisis of urban violence.” The two billion people becoming city-dwellers are facing the “urban dilemma” – they realize that moving to the city is an improvement in their lives by most known measures, but it does expose them to greater risk and danger. So while urbanization has cut world poverty in half and lifted billions out of starvation, the hives of crime and danger in the city are preventing the next step into prosperity: “This dark side of urbanization threatens to erase its potential to stimulate growth, productivity and economic dividends.”

By no means is this inevitable. Cities are not naturally more violent: Yes, Caracas and Cape Town have horrendous murder rates. On the other hand, very densely-populated cities such as Dhaka and Mumbai have rates below their national averages – they are actually safer places to live than the villages migrants are leaving behind. In poor countries, and here in the West, the really huge cities are often much safer than the small and medium-sized ones, where the real corruption and danger lie. In India, which has been galvanized by a rape crisis in the fast-urbanizing north, new research shows that rates of sexual assault and rape remain higher in rural areas. And we have learned from Brazil and South Africa that big, bold interventions can make dangerous cities safer.

It is helpful to keep a global perspective on this issue. What counts as violent is relative: Americans tend to compare their cities to other American big cities, perhaps within regions or to the biggest cities in the country.

Another reason our big cities are seen as violent: urban violence is a consistent media story, even as violent crime rates have dropped in many cities.

Congressional town halls not necessarily indicative of public opinion

I heard two news reports yesterday from two respected media sources about Congressional members holding town halls in their districts about possible military action in Syria. Both reports featured residents speaking up against military action. Both hinted that constituents weren’t happy with the idea of military action. However, how much do town halls like these really tell us?

I would suggest not much. While they give constituents an opportunity to directly address a member of Congress, these events are great for the media. There are plenty of opportunities for heated speeches, soundbites, and disagreement amongst the crowd. One report featured a soundbite of a constituent suggesting that if he were in power, he would charge both the president and his congressman with treason. The other report featured some people speaking for military action in Syria – some Syrian Americans asking for the United States to stand up to a dictator – and facing boos from others in the crowd.

Instead of focusing on town halls, which provide some political theater, we should look to national surveys of American public opinion. Focus on the big picture, not on town halls, which provide small samples.

Homelessness went down in last decade but not much coverage of this policy success

Here is a story you may not have heard: homelessness in the United States has gone down in the last decade.

The National Alliance to End Homelessness, a leader in homelessness service and research, estimates a 17% decrease in total homelessness from 2005 to 2012. As a refresher: this covers a period when unemployment doubled (2007-2010) and foreclosure proceedings quadrupled (2005-2009)…And what about the presidents responsible for this feat? General anti-poverty measures – for example, expanding the Earned Income Tax Credit — have helped to raise post-tax income for the poorest families. But our last two presidents have made targeted efforts, as well. President George W. Bush’s “housing first” program helped reduce chronic homelessness by around 30% from 2005 to 2007. The “housing first” approach put emphasis on permanent housing for individuals before treatment for disability and addiction.

The Great Recession threatened to undo this progress, but the stimulus package of 2009 created a new $1.5 billion dollar program, the Homeless Prevention and Rapid Re-Housing Program. This furthered what the National Alliance called “ground-breaking work at the federal level…to improve the homelessness system by adopting evidence-based, cost effective interventions.” The program is thought to have aided 700,000 at-risk or homeless people in its first year alone, “preventing a significant increase in homelessness.”

Since then, the Obama administration also quietly announced in 2010 a 10-year federal plan to end homelessness. This is all to say that the control of homelessness, in spite of countervailing forces, can be traced directly to Washington—a fact openly admitted by independent organizations like the National Alliance to End Homelessness.

The article goes on to suggest why there hasn’t been much coverage of this success: homelessness is not much of a social problem in Washington or the national media. The social construction of homelessness as a social problem that should receive a lot of public attention either hasn’t been very successful, was never really attempted, or other social problems (like various wars on crime, poverty, terrorism, etc.) have captured more attention.

But, if all the numbers cited above are correct, it seems a shame that a positive effect of public policies regarding a difficult problem is going relatively unnoticed…

Three possible reasons why the harsh national spotlight is on Chicago

Whet Moser proposes three reasons Chicago has received negative attention recently from the national media:

It’s a big, easy target. Chicago’s “Big Shoulders” image—it was the city that “built the American dream,” to use the historian Thomas Dyja’s words—makes any fall from that perch seem that much more momentous. “We were the future,” says the Northwestern professor Bill Savage.

The Obama factor. Chicago’s problems never used to be much of a national story (unless a governor got indicted). But after a skinny Chicagoan became president—a man whose team has included a Daley, our current mayor, and one of the country’s most powerful political advisers—the light of press attention shone more brightly. “When you look at what’s wrong [with the country],” says Savage, “you look at Chicago.”

It’s our turn. In the 1970s, New York City “was collapsing,” the Reader media critic Michael Miner points out. “The Summer of Sam, ‘Ford to New York: Drop Dead.’?” When Los Angeles hit hard times in the early 1990s, it “was just as much of a [media] whipping boy,” says Savage. Chicago is a logical third. It will be somebody else’s turn soon enough. Prepare yourself, Houston (which is projected to surpass Chicago in population by 2030): You may be next.

Some thoughts about each of these proposed reasons:

#1: Of the three reasons listed above, I find this one the least plausible. Yes, Chicago was once the new American city (see the late 1800s), but it has since been eclipsed by Los Angeles (perhaps Hollywood and the general glitter of that city limit the negative attention?), and Chicago has been suffering from the same kinds of problems it faces today (loss of manufacturing jobs, poverty, crime, inequality) since at least the 1970s, if not all the way back to the early 1900s with the Black Belt and the immigrant experience. Chicago may once have been the future (also see the 1893 Columbian Exposition), but that future disappeared a long time ago (and perhaps Chicagoans hold on to that 1893 fair a little too closely as well). This might be a longer story about Chicago representing the problems of the Rust Belt – a cycle of loss, rebirth (roughly 1990-2006 in Chicago), then problems again – than about the loss of a future.

#2: Chicago has never before had a president so linked to the city. And while Obama spent much of his adult life in Chicago, he isn’t originally from the city. While the Daleys are well known, their rule was much more provincial.

#3: This suggests that such negative attention is cyclical, either because different cities experience trouble at different times or there is a sort of revolving set of cities that receive attention. Houston might be next if people first learn about its growth and changes.

Plus, has Chicago received more negative attention recently than Detroit?

Methodological issues with the “average” American wedding costing $27,000

Recent news reports suggest the average American wedding costs $27,000. But, there may be some important methodological issues with this figure: selection bias and using an average rather than a median.

The first problem with the figure is what statisticians call selection bias. One of the most extensive surveys, and perhaps the most widely cited, is the “Real Weddings Study” conducted each year by TheKnot.com and WeddingChannel.com. (It’s the sole source for the Reuters and CNN Money stories, among others.) They survey some 20,000 brides per annum, an impressive figure. But all of them are drawn from the sites’ own online membership, surely a more gung-ho group than the brides who don’t sign up for wedding websites, let alone those who lack regular Internet access. Similarly, Brides magazine’s “American Wedding Study” draws solely from that glossy Condé Nast publication’s subscribers and website visitors. So before they do a single calculation, the big wedding studies have excluded the poorest and the most low-key couples from their samples. This isn’t intentional, but it skews the results nonetheless.

But an even bigger problem with the average wedding cost is right there in the phrase itself: the word “average.” You calculate an average, also known as a mean, by adding up all the figures in your sample and dividing by the number of respondents. So if you have 99 couples who spend $10,000 apiece, and just one ultra-wealthy couple splashes $1 million on a lavish Big Sur affair, your average wedding cost is almost $20,000—even though virtually everyone spent far less than that. What you want, if you’re trying to get an idea of what the typical couple spends, is not the average but the median. That’s the amount spent by the couple that’s right smack in the middle of all couples in terms of its spending. In the example above, the median is $10,000—a much better yardstick for any normal couple trying to figure out what they might need to spend.

Apologies to those for whom this is basic knowledge, but the distinction apparently eludes not only the media but some of the people responsible for the surveys. I asked Rebecca Dolgin, editor in chief of TheKnot.com, via email why the Real Weddings Study publishes the average cost but never the median. She began by making a valid point, which is that the study is not intended to give couples a barometer for how much they should spend but rather to give the industry a sense of how much couples are spending. More on that in a moment. But then she added, “If the average cost in a given area is, let’s say, $35,000, that’s just it—an average. Half of couples spend less than the average and half spend more.” No, no, no. Half of couples spend less than the median and half spend more.

When I pressed TheKnot.com on why they don’t just publish both figures, they told me they didn’t want to confuse people. To their credit, they did disclose the figure to me when I asked, but this number gets very little attention. Are you ready? In 2012, when the average wedding cost was $27,427, the median was $18,086. In 2011, when the average was $27,021, the median was $16,886. In Manhattan, where the widely reported average is $76,687, the median is $55,104. And in Alaska, where the average is $15,504, the median is a mere $8,440. In all cases, the proportion of couples who spent the “average” or more was actually a minority. And remember, we’re still talking only about the subset of couples who sign up for wedding websites and respond to their online surveys. The actual median is probably even lower.

These are common issues with figures reported in the media. Indeed, these are two questions the average reader should ask when seeing a statistic like the average cost of a wedding:

1. How was the data collected? If this journalist is correct about these wedding cost studies, then this data is likely very skewed. What we would want to see is a more representative sample of weddings rather than having subscribers or readers volunteer how much their wedding cost.

2. What statistic is reported? Confusing the mean and median is a big problem and pops up with issues as varied as the average vs. median college debt, the average vs. median credit card debt, and the average vs. median square footage of new homes. This journalist is correct to point out that the media should know better and shouldn’t get the two confused. However, reporting a higher average with skewed data tends to make the number more sensationalistic. It also wouldn’t hurt if more media consumers knew the difference and adjusted accordingly.

It sounds like the median wedding cost would likely be significantly lower than the $27,000 bandied about in the media if some basic methodological questions were asked.
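Both methodological issues – the biased sample and the mean/median gap – can be sketched in a few lines. The cost distribution below and the assumption that survey respondents skew toward bigger spenders are hypothetical, chosen only to mimic a right-skewed spending pattern like the one the article describes:

```python
import random
import statistics

random.seed(0)

# Hypothetical population of 10,000 weddings with right-skewed costs
# (most modest, a few lavish) -- purely illustrative numbers.
population = [random.lognormvariate(9.3, 0.8) for _ in range(10_000)]

# Issue 1 (selection bias): suppose wedding-website members skew toward
# bigger spenders, so the survey only reaches the costlier half of weddings.
survey = sorted(population)[len(population) // 2:]

# Issue 2 (mean vs. median): the mean is pulled up by the long right tail,
# and sampling only big spenders inflates it further.
print(f"population median: ${statistics.median(population):,.0f}")
print(f"population mean:   ${statistics.mean(population):,.0f}")
print(f"survey mean:       ${statistics.mean(survey):,.0f}")
```

Each step inflates the headline number: the mean exceeds the median because of the skew, and the survey’s mean exceeds the population’s because of who answers the survey.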

Combining sociology and journalism

The efforts of a hyper-local journalism website in Alhambra, California illustrate an intriguing combination: journalism plus sociology.

This fixation on community interaction is part of the site’s DNA. As city newspapers inexorably decline, a smattering of new “hyperlocal” news outlets have sprung up, from Aol’s Patch network to bootstrap start-ups. But the Source has an unusual ingredient: more than a decade of research by University of Southern California communications expert Sandra Ball-Rokeach and her team…

Ball-Rokeach studies what she calls “communication ecologies”—the web of ways in which different communities get and spread information, from Facebook to the grocery-store bulletin board, from the local tabloid to chatting with neighbors. She’s found that these networks can differ dramatically from community to community, ethnic group to ethnic group…

Understanding those differences is crucial for anyone, be they advertisers or political parties, trying to reach specific communities. Ball-Rokeach believes it’s also important for civic engagement. Strong cities with plugged-in citizens tend to have dense “neighborhood storytelling networks”—crisscrossing lines of media outlets, community groups, and other institutions that hold a running conversation about what it means to live there…

Instead of simply sketching out the usual beats—city council, business, sports—they sent out a team of USC researchers who interviewed and held focus groups with residents in all three local languages. Their exploration showed that residents wanted to know more about education, local businesses, dining and entertainment deals, crime, and traffic and parking. “Many of them just said, ‘We don’t know what’s happening in Alhambra,’” says Ball-Rokeach…

Still, even if the Alhambra Source goes the same way, there’s an intriguing idea in this relationship between newspaper and university. What could embattled major dailies from The Boston Globe to the Los Angeles Times learn about their readers by teaming with sociology grad students? Tailoring a news outlet to reflect its community might not always produce the most in-depth journalism—but it might at least help the news business survive.

It sounds like what sociology and social science bring to the table in this combination is the ability to collect and analyze data. However, it still sounds like this social science research is more about marketing or targeting an audience than anything else. In an era of difficulty for newspapers and other news sources, this is not to be underestimated. But, this still puts the social science in more of a marketing role: what do we need to address in order to attract readers? At the same time, I could envision a stronger combination of these two disciplines where the journalism is much more informed and shaped by research and data rather than anecdotes and single cases and the sociologists then have another outlet to share their findings and explanations about the social world.