From gang member to sociologist

A sociologist tells how he journeyed from being a gang member to obtaining a PhD in sociology:

As a doctoral candidate in ethnic studies at the University of California at Berkeley, Rios spent three years shadowing 40 youths between the ages of 14 and 17, many of whom had arrest records and gang affiliations. He had plenty of opportunity to learn that many police officers had a poor opinion of any efforts to understand inner-city youths. The police were instead part of a system that kept the boys under constant surveillance, criminalized their even relatively benign behavior, and left them demoralized and angry, Rios argues in a new book, Punished: Policing the Lives of Black and Latino Boys (New York University Press).

When police officers demanded to know what he was doing, Rios knew the routine: Be deferential, even when abusively spoken to. He had grown up on those Oakland streets and he knew the costs of stepping out of line. One day, when he was 14, an officer “stomped my face against the ground with his thick, black, military-grade rubber boot,” he writes.

Rios, now an assistant professor of sociology at the University of California at Santa Barbara, was no angel when that happened. He had just been pulled over in a car he had stolen. He had joined a gang at 13, lured by the promise of protection in Oakland’s drug-riddled, gang-controlled neighborhoods. Soon he was dealing drugs. He was witnessing beatings, knifings, and murders. He served a string of juvenile-detention sentences. And he would soon see his best friend, Smiley, killed by a rival gang member, a bullet to his head.

How Rios, now 33, came to escape that life, and earn a Ph.D., is one striking narrative in Punished. Another is his account of the dissertation research that took him back to the neighborhoods where he grew up. Starting in 2002, he wandered the streets with his subjects at all times of day and night. He saw the jeopardy that defined their lives. And he met their families, their probation officers, and the police officers who constantly monitored them. The boys’ encounters with the police were almost always negative.

It sounds like Rios could have some unusual insights into gangs and policing from his experiences. There are also some interesting methodological issues here, as Rios was familiar with what he was studying: on the one hand, this likely allowed him to understand certain things in ways that outsiders could not; on the other hand, he was warned about “going native.”

I also like how he flips the script with this remark:

Over lunch at the beachside faculty club on the Santa Barbara campus, where a whole academic lifetime seems indisputably safer than one day in gang territory, he says: “A great research question would be: Why not more violence? Why aren’t these kids attacking everyday people? Why are they only attacking themselves?” Knowing the answers, “we might get a little closer to finding ways to implement policies that will allow communities to bring in their own controls relating to group violence.”

This goes against many media portrayals of violence, which seem to focus on how violence affects law-abiding (and wealthier?) citizens. I also ask my Intro to Sociology class to think about social order in this way: instead of asking why people are deviant at times, why not ask why many/most people are not deviant most of the time?

Additionally, is this growing evidence (along with this) that sociologists are more interested in including more biographical information in their work?

Expect more communities to challenge 2010 Census counts

Amidst an economic crisis that has also affected many municipal budgets, expect more communities to appeal the 2010 Census counts:

Cities have two years to contest their counts under the Census Bureau’s appeals process, which began this month…

In recent decades, the peak for challenges was 6,600, or 17 percent of all U.S. jurisdictions, in 1990, when the census missed four million people, including five percent of all blacks and Hispanics.

In 2000, roughly 1,200 jurisdictions, or 3 percent, contested the count. The net change due to census challenges that year was just 2,700 people.

Apart from the challenges, analysts later determined the 2000 census had an overcount of 1.3 million people, due mostly to duplicate counts of more affluent whites with multiple residences. About 4.5 million people were ultimately missed, mostly blacks and Hispanics.

Interestingly, the article suggests that while government dollars are behind these challenges, it is also about the “psychological impact” on civic pride. I wonder who exactly will appeal: St. Louis, Chicago, and a host of other Rust Belt cities lost population, and New York City didn’t have the population increase that was expected. Since budgets are tight everywhere, could we even get appeals from places like Houston, which experienced sizable growth?

It would also be interesting to hear how exactly the Census Bureau adjusts these figures based on subsequent analyses of overcounts and undercounts. This is a reminder that Census figures are not perfect, even though many things, including many social science studies that use population proportions calculated from the Census, depend on them. I am not suggesting that the Census figures are wrong but rather that producing them is a complicated process that is bound to be tweaked some after the first figures are released.

Urban ethnographer = “pretty much a good street reporter with a PhD”?

A Baltimore journalist describes Elijah Anderson’s career:

Elijah Anderson, who might be the nation’s leading people-watcher, has spent most of the last 30 years observing human beings of all colors and ethnicities mixing it up in public spaces — Philadelphia’s, mainly — and of late he mostly likes what he sees.

He’s found whites, blacks and immigrants from all over the world shopping shoulder to shoulder in Reading Terminal Market and equally stunning diversity in Philly’s Rittenhouse Square. Attention must be paid, Mr. Anderson says. As segregated as Americans are in terms of where we live, the great melting that occurs in public spaces is a phenomenon of consequence. We might be suspicious of each other on streets, but there are important places where diverse people come together and, for the most part, practice getting along. These “cosmopolitan canopies,” as Mr. Anderson calls them, give us a glimpse of post-racial America.

Mr. Anderson, a sociologist who has been on the faculty of two Ivy League universities, calls himself an urban ethnographer, which is pretty much a good street reporter with a PhD. He’s interviewed Philadelphians in their neighborhoods, homes, bars and workplaces to figure out how they live and what they think. He was in Baltimore last week with copies of “The Cosmopolitan Canopy: Race and Civility in Everyday Life,” his latest book on urban social dynamics.

The journalist describes Anderson in two ways: he “might be the nation’s leading people-watcher” and “an urban ethnographer, which is pretty much a good street reporter with a PhD.” Neither of these seems particularly complimentary. The first suggests that anyone can do what Anderson does – indeed, people-watching is a pastime of many people. The second suggests urban ethnographers do what any good reporter would do by observing and interacting with people in neighborhoods.

I think both of these descriptions shortchange ethnography. To start, ethnography is a process that requires practice and particular skills. It is not enough to show up and start talking to people or sit and watch. It often involves participant observation, taking part in the practices of the people you are studying. Second, the goal of ethnography is to return to theories, sociological or otherwise. Ethnography should not end with description but connect to and provide insights regarding a broader body of knowledge.

Perhaps this journalist was providing his thoughts about Anderson’s latest book (see my review here) through his description of ethnography.

Claim of social desirability bias in immigration polls

Social desirability bias is the idea that people responding to surveys or other forms of data collection will give the socially acceptable answer rather than what they really think. A sociologist argues that this is the case for immigration polls:

A Gallup survey taken last year found 45 percent believe immigration should be decreased, compared to 17 percent saying it should be increased and 34 percent saying it should be kept at present levels. But should such figures be taken at face value? University of California, Berkeley, sociologist Alexander Janus argues not. Using a polling technique designed to uncover hidden bias, he concluded about 61 percent of Americans support a cutoff of immigration. Janus, who published his findings in the journal Social Science Quarterly, argues that “social desirability pressures” lead many on the left to lie about their true feelings on immigration — even when asked in an anonymous poll. In an interview, he discussed the survey he conducted in late 2005 and early 2006:

THE SURVEY: “The survey participants were first split into two similar groups. Individuals in one of the groups were presented with three concepts — ‘The federal government increasing assistance to the poor,’ ‘Professional athletes making millions of dollars per year,’ and ‘Large corporations polluting the environment’ — and asked how many of the three they opposed. Individuals in the second group were given the same three items as individuals in the first group, plus an immigration item: ‘Cutting off immigration to the United States.’ They were asked how many of the four they opposed. The difference in the average number of items named between the two groups can be attributed to opposition to the immigration item. The list experiment is superior to traditional questioning techniques in the sense that survey participants are never required to reveal to the interviewer their true attitudes or feelings.”…

I estimated that about 6 in 10 college graduates and more than 6 in 10 liberals hide their opposition to immigration when asked directly, using traditional survey measures.”

This sounds like an interesting technique because as he mentions, the respondents never have to say exactly which ideas they are opposed to.
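The arithmetic behind the list experiment is a simple difference in means between the two groups. Here is a minimal Python sketch of that calculation; the response counts below are fabricated for illustration, not Janus’s actual data:

```python
# List experiment: estimate opposition to a sensitive item without ever
# asking respondents about it directly. The control group reports how many
# of 3 neutral items they oppose; the treatment group reports how many of
# the same 3 items plus the sensitive item they oppose. The difference in
# group means estimates the share opposing the sensitive item.

def list_experiment_estimate(control_counts, treatment_counts):
    """Return the estimated proportion opposing the sensitive item."""
    mean_control = sum(control_counts) / len(control_counts)
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    return mean_treatment - mean_control

# Fabricated responses: each number is how many items one respondent opposed.
control = [1, 2, 0, 2, 1, 1, 2, 0]    # counting 3 neutral items
treatment = [2, 3, 1, 2, 2, 2, 3, 1]  # counting 3 neutral items + immigration item

print(list_experiment_estimate(control, treatment))
```

Because no individual ever states which items they oppose, only a count, there is no socially desirable answer to give on the sensitive item itself.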

In the long run for immigration policy, does it matter that much to liberals if people are secretly against immigration as long as they are willing to support it publicly? Of course, it could influence individual or small-group interactions and how willing people are to participate in rallies and public events. But if people are still willing to vote in a socially desirable way, is this good enough?

I wonder if there are other numbers out there that are influenced by social desirability bias…

James Q. Wilson on the difficulties of studying culture

In a long opinion piece looking at possible explanations for the reduction in crime in America, James Q. Wilson concludes by suggesting that cultural explanations are difficult to test and develop:

At the deepest level, many of these shifts, taken together, suggest that crime in the United States is falling—even through the greatest economic downturn since the Great Depression—because of a big improvement in the culture. The cultural argument may strike some as vague, but writers have relied on it in the past to explain both the Great Depression’s fall in crime and the explosion of crime during the sixties. In the first period, on this view, people took self-control seriously; in the second, self-expression—at society’s cost—became more prevalent. It is a plausible case.

Culture creates a problem for social scientists like me, however. We do not know how to study it in a way that produces hard numbers and testable theories. Culture is the realm of novelists and biographers, not of data-driven social scientists. But we can take some comfort, perhaps, in reflecting that identifying the likely causes of the crime decline is even more important than precisely measuring it.

I find it a little strange that a social scientist wants to leave culture to the humanities (“novelists and biographers”). This sounds like a traditional social science perspective: culture is a slippery concept that is difficult to quantify and make generalizations about. I can imagine this viewpoint from quantitatively minded social scientists who would ask, “where is the data?”

But there is a lot of good research regarding culture that utilizes data. Some of this data is fuzzier qualitative data that involves ethnographies, long interviews, and observations. But other data regarding culture comes from more traditional sources such as large surveys. And if you put together a lot of these data-driven studies, qualitative and quantitative, I think you could assemble some hypotheses and ideas regarding American culture and crime. Perhaps all of this data can’t fit into a regression, or this isn’t the way crime is traditionally studied, but that doesn’t mean we have to simply abandon cultural explanations and studies.

Getting better data on how students use laptops in class: spy on them

Professors like to talk about how students use laptops in the classroom. Two recent studies shed some new light on this issue and they are unique in how they obtained the data: they spied on students.

Still, there is one notable consistency that spans the literature on laptops in class: most researchers obtained their data by surveying students and professors.

The authors of two recent studies of laptops and classroom learning decided that relying on student and professor testimony would not do. They decided instead to spy on students.

In one study, a St. John’s University law professor hired research assistants to peek over students’ shoulders from the back of the lecture hall. In the other, a pair of University of Vermont business professors used computer spyware to monitor their students’ browsing activities during lectures.

The authors of both papers acknowledged that their respective studies had plenty of flaws (including possibly understating the extent of non-class use). But they also suggested that neither sweeping bans nor unalloyed permissions reflect the nuances of how laptops affect student behavior in class. And by contrasting data collected through surveys with data obtained through more sophisticated means, the Vermont professors also show why professors should be skeptical of previous studies that rely on self-reporting from students — which is to say, most of them.

While these studies might be useful for dealing with the growing use of laptops in classrooms, discussing the data itself would be interesting. A few questions come to mind:

1. What discussions took place with an IRB? It seems that this might have been a problem in the study using spyware on student computers, and this was reflected in the generalizability of the data, with just 46% of students agreeing to have the spyware on their computers. The other study could also run into issues if students were identifiable. (Just a thought: could a professor insist on spyware being on student computers if the students insisted on having a laptop in class?)

2. These studies get at the disparities between self-reported data and other forms of data collection. I would guess that students underestimate their distracted laptop use on self-reported surveys because they suspect that this is the answer they should give (social desirability bias). But it could also reveal things about how cognizant computer/Internet users are of how many windows and applications they actually cycle through.

3. Both of these studies are on a relatively small scale: one had 45 students, the other had a little more than 1,000, but the latter’s data was “less precise” since it involved TAs sitting in the back monitoring students. Expanding the Vermont study and linking laptop use to outcomes on a larger scale would be even better: move beyond just talking about the classroom experience and look at laptops’ impact on learning outcomes. Why doesn’t someone do this on a larger scale and in multiple settings? Would it be too difficult to get past some of the IRB issues?

In looking at the comments about this story, it seems like having better data on this topic would go a long way toward moving the discussion beyond anecdotal evidence.

The rankings of liveable cities

Architecture critic Edwin Heathcote of the Financial Times asks why the most livable cities in the world, such as Vancouver, are not necessarily the most loved cities.

This is another argument that deals with methodology: how exactly does one determine which cities are the “most liveable”? If just one or two factors are tweaked by certain publications, the list changes. Just like college rankings (recent thoughts here), such lists should be viewed with some skepticism.

Additionally, the criteria used by publications are not necessarily the criteria used by citizens who have some choice about where to move. Indeed, such lists seem to presume that these are the choices people would make if they had equal opportunity to move within their own country and/or around the world. Of course, most people have more restricted options due to job availability, price, personal preferences, location of family, and more.

In reading about this, it also strikes me that lists of liveable cities also might not make sense to many Americans: why would they want to live in a city when a majority have already chosen a suburban life?

h/t Instapundit

Rising debt for college loans better than debt for a McMansion

The college Class of 2011 might expect more in life than simply to be known as “the most indebted ever”:

$22,900: Average student debt of newly minted college graduates

The Class of 2011 will graduate this spring from America’s colleges and universities with a dubious distinction: the most indebted ever.

Even as the average U.S. household pares down its debts, the new degree-holders who represent the country’s best hope for future prosperity are headed in the opposite direction. With tuition rising at an annual rate of about 5% and cash-strapped parents less able to help, the mean student-debt burden at graduation will reach nearly $18,000 this year, estimates Mark Kantrowitz, publisher of student-aid websites Fastweb.com and FinAid.org. Together with loans parents take on to finance their children’s college educations — loans that the students often pay themselves – the estimate comes to about $22,900. That’s 8% more than last year and, in inflation-adjusted terms, 47% more than a decade ago.

In the long run, the investment is probably worth it. Education is a much better reason to borrow money than buying cars or McMansions, and it endows people with economic advantages that the recession and slow recovery have only accentuated. As of 2009, the annual pre-tax income of households headed by people with at least a college degree exceeded that of less-educated households by 101%, up from 91% in 2006. As of April, the unemployment rate among college graduates stood at 4.5%, compared to 9.7% for those with only a high-school diploma and 14.6% for those who never finished high school.

I am intrigued by the McMansion comparison here as it is used to illustrate the foolishness of overspending on a big or expensive house versus the possible “good debt” of college loans. Of course, this is all in economic terms, as the education is expected to pay off down the road while McMansion purchases of the last 15 years are not expected to yield such great values in this poor housing market. (And using a car as a debt comparison seems a bit strange: a car is rarely an investment but rather a black hole for money.) But this view of a house as an investment opportunity is a relatively recent development.

There is something about this data that could warrant a closer look: while it appears that the average college student debt has increased, is the average really the best measure here? I would much rather see a distribution of college debt in order to better know whether this mean is heavily influenced by people with massive amounts of college debt. Here is a paragraph from a recent New York Times article regarding college loans:

Two-thirds of bachelor’s degree recipients graduated with debt in 2008, compared with less than half in 1993. Last year, graduates who took out loans left college with an average of $24,000 in debt. Default rates are rising, especially among those who attended for-profit colleges.

And here is some additional data from recent years that sheds more light on the distribution of college debt:

These figures were calculated using the data analysis system for the 2007-2008 National Postsecondary Student Aid Study (NPSAS) conducted by the National Center for Education Statistics at the US Department of Education. (For comparison, cumulative education debt statistics from the 2003-2004 NPSAS are also available.) The 2007-2008 NPSAS surveyed 114,000 undergraduate students and 14,000 graduate and professional students. These statistics are not necessarily available from published NPSAS reports.

The median cumulative debt among graduating Bachelor’s degree recipients at 4-year undergraduate schools was $19,999 in 2007-08. One quarter borrowed $30,526 or more, and one tenth borrowed $44,668 or more. 9.5% of undergraduate students and 14.6% of undergraduate student borrowers graduating with a Bachelor’s degree graduated with $40,000 or more in cumulative debt in 2007-08. This compares with 6.4% and 10.0%, respectively, for Bachelor’s degree recipients graduating with $40,000 or more (2008 dollars) in cumulative debt in 2003-04.

This data provides a median that is somewhat similar to the two figures cited above. Based on these three figures and interpretations, it sounds like the increase is driven more by more college students taking on debt than by some students taking on much larger debts.
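The mean-versus-median question raised above is easy to illustrate. A quick Python sketch with fabricated debt figures (not the NPSAS data) shows how a few very large borrowers pull the mean well above the median, which is why a distribution tells you more than either single number:

```python
import statistics

# Fabricated cumulative-debt figures for ten graduates (dollars).
# Most borrow moderately; two borrow heavily.
debts = [0, 8000, 12000, 15000, 18000, 20000, 22000, 25000, 60000, 90000]

mean_debt = statistics.mean(debts)      # pulled upward by the two big borrowers
median_debt = statistics.median(debts)  # closer to the "typical" graduate

print(mean_debt, median_debt)  # mean exceeds median in this right-skewed data
```

If the mean rose over time while the median stayed flat, that would suggest a minority taking on much larger loans; if both rose together, that would suggest borrowing becoming more widespread.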

More appealing measurements of the American economy

The Economist looks at several ways in which the US federal government calculates certain economic statistics that might make our economic situation look more appealing. Here is their conclusion:

Conspiracy theorists might conclude that the American government is trying to nip and tuck its way to attractiveness. The persistent downward revisions to GDP growth do look suspicious. But in other areas American number-crunchers seem to believe that their measures are better; indeed, history shows that European statistical agencies have often later adopted their methods. The world’s biggest economy is also much less bothered about the international comparability of its numbers than smaller European countries. True, when the statisticians at the IMF or the OECD produce comparative data, they do so on the basis of standardised definitions. The snag comes if investors fail to grasp that official national figures can show the American economy in an overly flattering light.

Complicated figures such as these can be difficult to operationalize or calculate, but they also need to be interpreted. Economic experts may know about these methodological differences and can account for them, but I’m guessing that the average citizen of the US or European countries has less of an idea of what is going on.

Another US figure that has recently attracted methodological attention is unemployment. While the US unemployment rate has undoubtedly risen in the economic crisis of recent years, it has its own quirks. One part that has been discussed is that people have to be actively looking for work in the last 4 weeks, and once people move beyond that cut-off point, they are no longer counted as unemployed. Another area involves those who work less than full-time but want full-time work and could be classified as “underemployed.” (You can see how the Bureau of Labor Statistics calculates unemployment here.)
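These definitional quirks matter for the headline number. A hedged sketch of the arithmetic, simplified from the BLS definitions (the official headline rate is U-3, and broader measures like U-6 also count discouraged and involuntarily part-time workers; all counts below are invented):

```python
def unemployment_rates(employed, unemployed_searching, discouraged,
                       part_time_for_economic_reasons):
    """Simplified versions of the headline (U-3-style) and broad (U-6-style) rates.

    unemployed_searching: jobless AND actively looked in the last 4 weeks.
    discouraged: want work but stopped searching (excluded from U-3).
    part_time_for_economic_reasons: working part-time but wanting full-time.
    """
    labor_force = employed + unemployed_searching
    u3 = unemployed_searching / labor_force

    broad_labor_force = labor_force + discouraged
    u6_like = (unemployed_searching + discouraged +
               part_time_for_economic_reasons) / broad_labor_force
    return u3, u6_like

# Invented numbers (thousands of people):
u3, u6 = unemployment_rates(employed=139_000, unemployed_searching=13_900,
                            discouraged=2_500, part_time_for_economic_reasons=8_600)
print(round(u3 * 100, 1), round(u6 * 100, 1))  # the broad rate is much higher
```

The same underlying labor market produces a much higher number under the broader definition, which is exactly why the cut-off choices attract methodological attention.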

(It is also interesting in this story that they compare the calculation of these statistics to cosmetic surgery, apparently an important marker of American culture.)

The mean population center of Illinois is close to Chicago but this wasn’t always the case

The mean population center of Illinois is relatively close to Chicago:

[B]ased on new data from the U.S. Census, the true center point of Illinois’ population is about 70 miles southwest of Chicago’s bustling Magnificent Mile.

Situated in a corn field east of the intersection of U.S. Route 47 and Illinois Route 113 in Grundy County is the point referred to by the census as the Mean Center of Population for Illinois…

The center point can tell a lot about a state.

It can help explain why Illinois has a state government controlled largely by Chicago politicians.

It can help explain how money gets distributed around the state. It can help explain why some issues — say, gun control — can pit rural interests against urban interests.

“Somewhere close to half the population of the state is within 40 miles of the Loop,” Illinois State University geography professor Mike Sublett said.

This is not too surprising: by far, Chicago is the largest city in the state and the population of the Chicago metropolitan region (2009 estimate of the Illinois portion only – not counting Wisconsin and Indiana populations) is just under 8 million while Illinois’ total population is just over 12.8 million (2010 figures).

But the value of such a measure seems to be not exactly where this mean is located but rather how this population mean has shifted over time. The article goes on to note how the population mean wasn’t always so close to Chicago:

In the 1840s, the center point was located east of Springfield, relatively close to Illinois’ geographical center point in the Logan County community of Chestnut.

But, as Chicago began to grow as an urban center, the population center point began its northward trek along a line nearly mirroring what would become Interstate 55.

The 1880 center of the state’s population was on the south side of Bloomington, near U.S. Route 150 south of where State Farm Insurance Cos. has its Illinois regional office complex.

In 1910, the center moved out of McLean County for the first time in 50 years. The new center was in a farm field just a few miles southeast of Pontiac.

The only time it took a break from its northeasterly trek until recently was in 1940, when the center — then located in Livingston County — briefly moved southward…

The northern movement of the center point also has stalled in recent decades. The 2010 center point, south of Morris in Grundy County, is somewhat south of the 2000 and 1990 population centers, located just a few miles away.

Sublett attributes the stall to the rapid growth of Chicago’s western suburbs and the loss of population within the state’s largest city.

“The center point has kind of stagnated. It has just been migrating around Grundy County,” Sublett said.

As I’ve written before regarding the US population mean (see here), the population mean measure seems to make the most sense when placed in a historical context so that people can get a quick look at larger population and migration trends.
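The mean center of population the article describes is just a population-weighted average of coordinates. A rough Python sketch of the idea (the place groupings and weights below are illustrative, and averaging raw latitude/longitude ignores the projection corrections the Census Bureau applies to account for the Earth's curvature):

```python
def mean_population_center(places):
    """Population-weighted mean of coordinates.

    places: iterable of (population, latitude, longitude) tuples.
    """
    total = sum(pop for pop, _, _ in places)
    lat = sum(pop * la for pop, la, _ in places) / total
    lon = sum(pop * lo for pop, _, lo in places) / total
    return lat, lon

# Illustrative Illinois groupings (approximate coordinates, made-up weights):
places = [
    (8_000_000, 41.88, -87.63),  # Chicago metro (Illinois portion)
    (2_000_000, 39.80, -89.65),  # central Illinois, near Springfield
    (2_800_000, 38.60, -90.00),  # southern Illinois / Metro East
]

lat, lon = mean_population_center(places)
print(round(lat, 2), round(lon, 2))  # lands between Chicago and downstate
```

Even with crude inputs like these, the center lands well north of the state's geographic middle because Chicago's weight dominates, which is the dynamic the article traces decade by decade.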

I wonder how many Chicago-area residents know that the bulk of the state’s early population lived in the central and southern portions of the state, and that it wasn’t until the 1840s and 1850s that the population of northeastern Illinois really began to grow and tilt the balance of power in the state.