WikiLeaks cables as historical documents

How should the WikiLeaks cables be viewed as historical documents? One historian suggests caution:

In the short term, this is a potential gold mine for foreign-affairs scholarship. In the long term, however, what WikiLeaks wants to call “Cablegate” will very likely make life far more difficult for my profession.

For now, things certainly look very sweet. Timothy Garton Ash characterized the documents as “the historian’s dream.” Jon Western, a visiting professor of international relations at the University of Massachusetts at Amherst, blogged that WikiLeaks may allow scholars to “leapfrog” the traditional process of declassification, which takes decades. While the first wave of news reports focused on the more titillating disclosures (see: Col. Muammar el-Qaddafi’s Ukrainian nurse), the second wave has highlighted substantive and trenchant aspects of world politics and American foreign policy. The published memos reveal provocative Chinese perspectives on the future of the Korean peninsula, as well as American policy makers’ pessimistic perceptions of the Russian state.

Scholars will need to exercise care in putting the WikiLeaks documents in proper perspective. Some researchers suffer from “document fetishism,” the belief that if something appears in an official, classified document, then it must be true. Sophisticated observers are well aware, however, that these cables offer only a partial picture of foreign-policy decision-making. Remember, with Cablegate, WikiLeaks has published cables and memos only from the State Department. Last I checked, other bureaucracies—the National Security Council, the Defense Department—also shape U.S. foreign policy. The WikiLeaks cables are a source—they should not be the sole source for anything.

Seems like a reasonable argument to me. Much research, history included, involves collecting a variety of evidence from a variety of sources. Claiming that these cables represent THE view of the United States is naive. They do reveal something, particularly about how diplomatic cables and reports work, but not everything. How much one can generalize based on these cables is unclear.

As this article points out, how these cables have been portrayed in the media is interesting. Where are the historians and other scholars to put these cables in perspective?

The methodology behind Money’s 2010 best places to live

Every year, Money magazine publishes a list of “the best places to live.” I’ve always enjoyed this list as it attempts to distill what communities truly match what people would desire in a community. The winner in 2010 (in the August issue) was Eden Prairie, Minnesota.

But one issue with this list is how the communities are selected. In 2009, the list covered small towns, communities with between 8,500 and 50,000 residents. In 2010, the list was restricted to “small cities,” places with 50,000 to 300,000 residents. Here is how the magazine selected its 2010 list of communities to grade and rank (the number before each step is the count of communities remaining):

746: Start with all U.S. cities with a population of 50,000 to 300,000.

555: Exclude places where the median family income is more than 200% or less than 85% of the state median and those more than 95% white.

322: Screen out retirement communities, towns with significant job loss, and those with poor education and crime scores. Rank remaining places based on housing affordability, school quality, arts and leisure, safety, health care, diversity, and several ease-of-living criteria.

100: Factor in additional data on the economy (including fiscal strength of the government), jobs, housing, and schools. Weight economic factors most heavily.

30: Visit towns and interview residents, assessing traffic, parks, and gathering places and considering intangibles like community spirit.

1: Select the winner based on the data and reporting.

A couple of questions I have:

1. I agree that it can be hard to compare communities with 10,000 people and 150,000 people. But can the list from each year be called “the best place to live” if the communities of interest change?

2. I wonder how they chose the median income cutoffs, which cut out places that might be “too exclusive” or “not exclusive enough.” Are these places not desirable to people?

3. Some measure of racial diversity is included in several steps. How many home buyers actually desire this? We know from a good amount of research that whites tend to avoid neighborhoods with even moderate levels of African-Americans.

4. Weighting economic factors heavily seems to make sense. Jobs and economic opportunities are a good enticement for moving.

5. I would be interested to see what kind of information they collected on their 30 community visits. How many residents and leaders did they talk to? How does one measure “community spirit”? If a community says it has “community spirit,” how exactly do you check to see whether that is correct?

Overall, this is a complicated methodology that accounts for a number of factors. What I would like to know is how this list compares with how Americans make decisions about where to live. Do people want to move up to places like this and then stay there, or is the dream for many to move on to more exclusive communities (if possible)? How many Americans could realistically afford to move into these communities?

(A side note: the four Chicago suburbs in the top 100 for 2010: Bolingbrook at #43, Naperville at #54, Mount Prospect at #56, and Arlington Heights at #59. Naperville used to rank much higher earlier in the 21st century – I wonder why it has slipped in the rankings.)

New study on American church attendance: a 10-18 percentage point gap between what people say versus what they actually do

The United States is consistently cited as a religious nation. The contrast is often drawn with a number of European nations where church attendance is usually said to be significantly lower than the American rate of about 40-45% attending on a regular basis. These figures have driven several generations of sociologists to debate the secularization thesis and why the American religious landscape is different.

But what if Americans overstate their church attendance on surveys and in reality attend church at a rate similar to European nations? A new study based on time diary data suggests this is the case:

While conventional survey data show high and stable American church attendance rates of about 35 to 45 percent, the time diary data over the past decade reveal attendance rates of just 24 to 25 percent — a figure in line with a number of European countries.

America maintains a gap of 10 to 18 percentage points between what people say they do on survey questions, and what time diary data says they actually do, Brenner reports. The gaps in Canada resemble those in America, and in both countries, gaps are both statistically and substantively significant…

“The consistency and magnitude of the American gap in light of the multiple sources of conventional survey data suggests a substantive difference between North America and Europe in overreporting.”

Given these findings, Brenner notes, any discussion of exceptional American religious practice should be cautious in using terms like outlier and in characterizing American self-reported attendance rates from conventional surveys as accurate reports of behavior. Rather, while still relatively high, American attendance looks more similar to a number of countries in Europe, after accounting for over-reporting.

A couple of thoughts about this:

1. This is another example where the research method used to collect data matters. Ask people about something on a survey and then compare those answers to what people report in a time diary, and it is not unusual to get differing responses. What exactly is going on here? Surveys ask people to consult their memory, a notoriously faulty source of information. Diaries have their own issues but are supposedly better at capturing information about daily or regular practices.

2. Even if church attendance data is skewed in the US, it doesn’t necessarily mean that America is not exceptional in terms of religion. Religiosity is made up of a number of factors, including doctrinal beliefs, the importance of religion in everyday life, membership in a religious congregation, the prevalence of other religious practices, and more. Church attendance is a common measure of religiosity but not the only one.

3. This is interesting data but it leads to another interesting question: why exactly would Americans overreport their church attendance by this much? Since the time diary data from Europe showed a smaller gap, it suggests that Americans think they have something to gain by overstating their church attendance. Perhaps Americans think they should say they attend church more often – there is still social value and status attached to the idea that one attends church.

Making gratitude part of the socialization process

A sociologist from UC-Berkeley suggests that children can be taught gratitude from a young age:

Most of us are actually born feeling entitled to our parents’ care. That means that if we don’t teach kids gratitude and practice it with them, they grow up feeling entitled, and entitlement does not lead to happiness. On the contrary, it leads to feelings of disappointment and frustration. In contrast, gratitude makes us happy and satisfied with our lives…

Studies of adults and college students show positive outcomes from consciously practicing gratitude. My own experience with children has been that they become kinder, more appreciative, more enthusiastic and just generally happier.

I wonder if there is broad-level data to support her claims that children who have more gratitude are happier. One could do a study of grateful adults and try to trace back where exactly they think they developed this attitude (and where they actually did). Could we also figure out why some children develop gratitude and others do not?

Also, these claims about gratitude leading to happiness sound more like contentment than happiness. If we measured happiness on two levels, immediate happiness and longer-term satisfaction, gratitude would seem to lead to more longer-term satisfaction.

Pew finds that landline-only surveys are biased toward Republicans

Polling techniques have become more complicated in recent years with the introduction of cell phones. In the past, researchers could reasonably assume most US residents could be accessed through a landline. However, Pew now suggests there may be a political bias in surveys that only access people through landlines:

Across three Pew Research polls conducted in fall 2010 — conducted among 5,216 likely voters, including 1,712 interviewed on cell phones — the GOP held a lead that was on average 5.1 percentage points larger in the landline sample than in the combined landline and cell phone sample…

The difference in estimates produced by landline and dual frame samples is a consequence not only of the inclusion of the cell phone-only voters who are missed by landline surveys, but also of those with both landline and cell phones — so called dual users — who are reached by cell phone. Dual users reached on their cell phone differ demographically and attitudinally from those reached on their landline phone. They are younger, more likely to be black or Hispanic, less likely to be college graduates, less conservative and more Democratic in their vote preference than dual users reached by landline…

Cell phones pose a particular challenge for getting accurate estimates of young people’s vote preferences and related political opinions and behavior. Young people are difficult to reach by landline phone, both because many have no landline and because of their lifestyles. In Pew Research Center surveys this year about twice as many interviews with people younger than age 30 are conducted by cell phone than by landline, despite the fact that Pew Research samples include twice as many landlines as cell phones.

This seems to make sense: those who have cell phones and no landlines are likely to differ from those who are reached by landlines.

A few questions that I have: does this issue exist in all phone surveys today (and this article suggests there was a sizable difference between landline people and cell phone people in five of six surveys)? Have other polling firms had similar findings? If Pew now has some ideas about the extent of this issue, is the proper long-term response to call more cell phones or to weight the results more toward cell phone users?

One possible response would be to include multiple methods for more surveys. This might include samples of landline respondents, cell phone respondents, and web respondents. While this is more costly and time-consuming, research firms could then triangulate results.
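A rough sketch of the weighting arithmetic at stake: when each frame is weighted by the share of the population it represents, the combined estimate shifts away from the landline-only figure. All of the percentages below are hypothetical, not Pew's.

```python
# Hypothetical illustration of a dual-frame estimate: a landline-only sample
# over-represents one group, and weighting in the cell sample shifts the result.

def weighted_estimate(samples):
    """samples: list of (pct_support, population_share) pairs."""
    total_share = sum(share for _, share in samples)
    return sum(pct * share for pct, share in samples) / total_share

landline_estimate = 50.0  # GOP support among landline respondents (hypothetical)
cell_estimate = 42.0      # GOP support among cell respondents (hypothetical)

# Suppose 65% of voters are best reached by landline, 35% by cell phone.
combined = weighted_estimate([(landline_estimate, 0.65),
                              (cell_estimate, 0.35)])
print(round(combined, 1))  # 47.2 -- lower than the landline-only figure
```

The same mechanism explains why adding a web frame for triangulation would further change the estimate: each frame reaches a demographically different slice of the population.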

Trying to explain American differences in 12 easy categories

I recently flipped through Our Patchwork Nation, a recent book that tries to explain differences in America by splitting counties into twelve types: “boom towns, evangelical epicenters, military bastions, service worker centers, campus and careers, immigration nation, minority central, tractor community, Mormon outposts, emptying nests, industrial metropolises and monied burbs.” A review in the Washington Post offers a quick overview of this genre of book:

And every few years there’s another book promising to chart the country’s divisions by splitting it into categories more telling than the 50 states. Former Washington Post writer Joel Garreau offered his “Nine Nations of North America” in 1981; two decades later came Richard Florida with “The Rise of the Creative Class,” followed by Bill Bishop’s “The Big Sort,” which sought to explain why so many of us are clustering in enclaves of the like-minded.

The latest aspiring taxonomists are Dante Chinni, a journalist, and James Gimpel, a University of Maryland government professor, who use socioeconomic data to break the country’s 3,141 counties into 12 categories.

This sort of analysis is now fairly common: there is a lot of publicly available data from the Census Bureau and many more people are now interested in looking at the United States as a whole.
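For a sense of the mechanics, here is a minimal sketch of how a county typology could be produced with a standard clustering algorithm (k-means). The counties, indicators, and number of clusters below are all invented; the book's actual method and variables are surely more elaborate.

```python
# Toy county typology: cluster counties on socioeconomic indicators.
# All data is hypothetical.

def kmeans(points, k, iterations=20):
    """Plain k-means using the first k points as initial centroids."""
    centroids = [list(p) for p in points[:k]]
    assignments = [0] * len(points)
    for _ in range(iterations):
        # Assign each point to its nearest centroid (squared distance).
        for i, p in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        # Recompute each centroid as the mean of its cluster members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assignments[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return assignments

# Hypothetical counties: (median income in $1000s, pct college graduates)
counties = [(45, 18), (47, 20), (46, 19),   # lower-income group
            (95, 45), (98, 48), (92, 44)]   # affluent group
labels = kmeans(counties, k=2)
print(labels)  # the two groups separate cleanly
```

Note what such a procedure cannot do: the cluster labels describe where each county sits today on the chosen variables, which is exactly the cross-sectional limitation discussed below.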

I have two concerns about this approach. The first is that the types are developed at the county level. This may be a good level for obtaining data (easy to do from the Census Bureau) but it is debatable whether this is a meaningful level for the lives of Americans. When asked where they live, most people would name a community or city first and then a state or region before getting to a county. County rules and ordinances have limited effect in many places, as municipal regulations take precedence.

A second concern is that this type of sorting or clustering tells us where places are now but doesn’t say as much about how they arrived at this point or how they might change in the future. This is a cross-sectional analysis: it tells us what American counties look like right now. That may be useful for examining recent and upcoming trends, but most of these places have deeper histories and characters than a moniker like “monied burbs” can capture. This would explain some of the Post’s confusion about lumping together “emptying nests” communities in the Midwest and Florida.

Large cities with most, least crime

CQ Press has compiled a list of the safest and least safe big cities in terms of crime:

The study by CQ Press found St. Louis had 2,070.1 violent crimes per 100,000 residents, compared with a national average of 429.4. That helped St. Louis beat out Camden, which topped last year’s list and was the most dangerous city for 2003 and 2004.

Detroit, Flint, Mich., and Oakland, Calif., rounded out the top five. For the second straight year, the safest city with more than 75,000 residents was Colonie, N.Y.
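The rankings rest on a simple normalization: incidents per 100,000 residents, so that cities of very different sizes can be compared. The populations and crime counts below are hypothetical, chosen only to show why raw counts alone would mislead.

```python
# Crime rankings normalize counts by population: incidents per 100,000
# residents. All figures here are invented for illustration.

def rate_per_100k(incidents, population):
    return incidents / population * 100_000

big_city = {"incidents": 5_000, "population": 1_000_000}
small_city = {"incidents": 1_500, "population": 75_000}

big_rate = rate_per_100k(**big_city)
small_rate = rate_per_100k(**small_city)
print(big_rate)    # 500.0  -- more crimes in total...
print(small_rate)  # 2000.0 -- ...but a far higher rate
```

This is also one source of the criticism quoted below: a per-capita rate is sensitive to how city boundaries are drawn, since a city whose limits exclude its suburbs will look worse than a metro-wide figure would suggest.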

I would not have guessed that St. Louis would top this list. Of course, St. Louis doesn’t like this ranking and suggests that the crime situation in the city has been improving:

The annual rankings are based on population figures and crime data compiled by the FBI. Some criminologists question the findings, saying the methodology is unfair.

Greg Scarbro, unit chief of the FBI’s Uniform Crime Reporting Program, said the FBI also discourages using the data for these types of rankings.

Kara Bowlin, spokeswoman for St. Louis Mayor Francis Slay, said the city actually has been getting safer over the last few years. She said crime in St. Louis has gone down each year since 2007, and so far in 2010, St. Louis crime is down 7 percent.

Erica Van Ross, spokeswoman for the St. Louis Police Department, called the rankings irresponsible.

“Crime is based on a variety of factors. It’s based on geography, it’s based on poverty, it’s based on the economy,” Van Ross said.

“That is not to say that urban cities don’t have challenges, because we do,” Van Ross said. “But it’s that it’s irresponsible to use the data in this way.”

It probably doesn’t matter whether the methodology behind these rankings is good or bad because what really matters is public perception. If St. Louis becomes known as a city of crime, comparable to places like Camden, Oakland, Detroit, and Flint, this could have a negative effect on the number of businesses and residents who want to move to the area. It is no surprise to see the City of St. Louis fight back by attacking the data and suggesting that crime rates have gone down in recent years (though this is relative and gives no indication of how its crime rate compares to other places).

(I was curious to see where Chicago and its suburbs, such as Naperville, ranked. Unfortunately, it looks like the data for the whole Chicago MSA was not available.)

An emerging portrait of emerging adults in the news, part 3

In recent weeks, a number of studies have been reported that discuss the beliefs and behaviors of the younger generation, those who are now between high school and age 30 (an age group that could also be labeled “emerging adults”). In a three-part series, I want to highlight three of these studies because they not only suggest what this group is doing but also hint at the consequences. The study in part one showed an association between hyper-texting and hyper social networking use and risky behavior. The study in part two showed that teens and college students today are more tolerant than previous generations but less empathetic.

Another interesting aspect of the lives of emerging adults is living alone. While this is common among the middle-aged, the proportion of emerging adults living alone is growing:

The stats are arresting. In this country, approximately 31 million people live alone, and one-person households make up 28 percent of the total, tying with childless couples as the most common residential type — “more common,’’ Klinenberg pointed out, “than the nuclear family, the multigenerational family, and the roommate or group home.’’

Those who live alone are mostly middle-age, with young adults the fastest-growing segment, and there are more women than men. No longer a transitional stage, living alone is one of the most stable household arrangements. And while one-person households were once scattered in low-density rural settings, they’re now concentrated in cities. “In Manhattan,’’ he said, “more than half of all residences are one-person dwellings.’’

I’ve seen a number of commentators attempt explanations for this: this is part of becoming an adult today; television shows like Friends or How I Met Your Mother glamorized social life in the city (though these shows tend to show roommates living together); outrageous housing costs push younger people into odd living arrangements.

But couldn’t this trend toward living alone be linked to the two prior studies we looked at? If a lot of social life occurs through texting or through social networking sites and emerging adults are more tolerant but less empathetic, then living alone makes some sense. Emerging adults still have a social life – but this social life may look quite different as friends are found and communicated with through technology or social outings rather than through closer ties (such as living together).

And what if living alone or being alone more is the outcome for younger generations? How might this impact society? Such arrangements may be good for self-actualization (or not) but there will be consequences. What will “community” look like in several decades? If these three studies were all the evidence we had, we might conclude that emerging adults like to be social but also like to keep people at an arm’s length.

It is hard to draw conclusions from three studies reported in the news – but here is the emerging portrait: social interaction is changing. It may be easy to dismiss this new interaction as bad or wrong but we need more information and research on this particular topic. We need more measurement of the depth or quality of relationships. Out of these three studies, we have two measures of interaction quality: the prevalence of risky behaviors (though this is only an association or correlation) and levels of empathy. We could be asking other questions, like how many students in college today arrange for single rooms in dorms or would prefer to live in single rooms? How many students who study abroad are actually able to fully understand and appreciate a new culture versus just being able to see the differences between two cultures?

All of this will be interesting to watch in the coming years as emerging adults obtain the power to shape society’s values regarding interaction and community.

The statistical calculations used for counting votes

Some might be surprised to hear that “Counting lots of ballots [in elections] with absolute precision is impossible.” Wired takes a brief look at how the vote totals are calculated:

Most laws leave the determination of the recount threshold to the discretion of registrars. But not California—at least not since earlier this year, when the state assembly passed a bill piloting a new method to make sure the vote isn’t rocking a little too hard. The formula comes from UC Berkeley statistician Philip Stark; he uses the error rate from audited precincts to calculate a key statistical number called the P-value. Election auditors already calculate the number of errors in any given precinct; the P-value helps them determine whether that error rate means the results are wrong. A low P-value means everything is copacetic: The purported winner is probably the one who indeed got the most votes. If you get a high value? Maybe hold off on those balloon drops.

A p-value is a key measure in much statistical analysis: it indicates how likely it would be to observe data at least as extreme as what was actually observed if a given hypothesis were true. In this auditing context, the hypothesis being tested is that the reported outcome is wrong, so a low p-value means the audit evidence strongly contradicts that hypothesis and the purported winner can be confirmed; a high p-value means the audit cannot rule out an incorrect outcome.
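As a rough illustration of the underlying logic (this is a toy binomial test, not Stark's actual risk-limiting audit procedure), suppose an audit finds a handful of errors among the ballots checked, and suppose we know roughly what error rate would be large enough to flip the outcome. We can then ask how likely it is to see so few errors if the true rate were that high. All numbers here are hypothetical.

```python
# Toy election-audit test: if the true error rate were high enough to change
# the outcome, how surprising is it that the audit found so few errors?
from math import comb

def binomial_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def audit_p_value(errors_found, ballots_audited, outcome_changing_rate):
    """One-sided test against the hypothesis 'the outcome is wrong':
    probability of observing this few errors (or fewer) if the error
    rate were large enough to flip the result."""
    return binomial_cdf(errors_found, ballots_audited, outcome_changing_rate)

# Hypothetical audit: 2 errors in 500 ballots; a 2% error rate would flip the race.
p = audit_p_value(errors_found=2, ballots_audited=500, outcome_changing_rate=0.02)
print(p < 0.05)  # True: far too few errors for the outcome to plausibly be wrong
```

With only 2 errors observed where a 2% error rate would predict about 10, the p-value comes out very small and the reported winner stands; many more observed errors would push the p-value up and trigger a fuller recount.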

So what is the acceptable p-value for elections in California?

I would be curious to know whether people might seize upon this information for two reasons: (1) it shows the political system is not exact and therefore, possibly corrupt and (2) they distrust statistics altogether.

The globalization of scientific research

A recent report from the United Nations suggests that while the West (and the United States, in particular) still dominates scientific work, other countries are gaining ground. Here are some of the measures from the UNESCO report:

In 2007 Japan spent 3.4% of its GDP on R&D, America 2.7%, the European Union (EU) collectively 1.8% and China 1.4% (see chart 1). Many countries seeking to improve their global scientific standing want to increase these figures. China plans to push on to 2.5% and Barack Obama would like to nudge America up to 3%. The number of researchers has also grown everywhere. China is on the verge of overtaking both America and the EU in the quantity of its scientists. Each had roughly 1.5m researchers out of a global total of 7.2m in 2007…

One indicator of prowess is how much a country’s researchers publish. As an individual country, America still leads the world by some distance. Yet America’s share of world publications, at 28% in 2007, is slipping. In 2002 it was 31%. The EU’s collective share also fell, from 40% to 37%, whereas China’s has more than doubled to 10% and Brazil’s grew by 60%, from 1.7% of the world’s output to 2.7%…

UNESCO’s latest attempt to look at patents has therefore focused on the offices of America, Europe and Japan, as these are deemed of “high quality”. In these patent offices, America dominated, with 41.8% of the world’s patents in 2006, a share that had fallen only slightly over the previous four years. Japan had 27.9%, the EU 26.4%, South Korea 2.2% and China 0.5%.
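The quoted share changes mix two different comparisons: percentage-point shifts and relative growth. The arithmetic behind the publication figures can be checked directly (the input shares come from the article; the helper function is mine):

```python
# Distinguishing percentage-point change from relative growth in the
# publication-share figures quoted above.

def pct_growth(old_share, new_share):
    """Relative growth of a share, in percent."""
    return (new_share - old_share) / old_share * 100

# Brazil's share of world publications: 1.7% (2002) -> 2.7% (2007).
brazil = pct_growth(1.7, 2.7)
print(round(brazil))  # 59 -- which the article rounds to "grew by 60%"

# America's share: 31% -> 28%, a 3-point drop but roughly a 10% relative decline.
america = pct_growth(31, 28)
print(round(america))  # -10
```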

Even though the United States still dominates a number of measures, UNESCO concluded Asia will be the “dominant scientific continent in the coming years.”

A couple of things are interesting here:

1. Even if jobs have left the United States for cheaper locales, the US still has advantages in scientific research. How long this advantage holds up remains to be seen.

2. These are just three possible measures of scientific output. Others, such as journal citations, could be used, but this set seems a fairly effective way to quickly survey several dimensions at once.

3. It is interesting to think about how science itself will change based on increased research roles in non-Western nations.

h/t Instapundit