How hot is the sociology of zombies?

An unemployed sociologist discusses his difficulties in securing a tenure track job with papers in the growing area of the sociology of zombies:

My job market struggles are made all the more inexplicable by the fact that I maintain an active publication record in a hot field of study – zombies. In the past year alone I have published three articles, have an additional three under review, and have numerous projects in the pipeline.

While my research on zombies may be an odd topic for sociologists to tackle, my scholarship garners much interest. My first sole-authored article “Locating Zombies in the Sociology of Popular Culture,” for instance, can net 100+ downloads in a day on my academia.edu webpage. The same piece has been quoted in numerous press outlets, elicits interview requests, and even gets me open invitations to present at professional conferences that, ironically, I cannot afford to attend.

One of my “under review” articles “The New Horror Movie” is required reading for a graduate seminar taught by a friend at Aarhus University in Denmark. While some in sociology may be turned off to my research on zombies (something they do without reading it), I have also published and received grant money in the sociology of race – a topic of perennial sociological interest.

Zombies are a hot area in popular culture, so it makes sense that academics would address this and think about what it says about or means for American society. At the same time, I wonder how many people within the larger discipline of sociology are thinking about zombies. Traditional markers of a topic’s status within the discipline include: papers on the topic at professional meetings of sociologists (conferences involving other disciplines may not matter as much); respected faculty tackling the subject; important programs or departments teaching courses on it, having a concentration of faculty working on it, and attracting graduate students interested in it; numerous articles, book chapters, and books published and in the pipeline; citations of those publications; and perhaps a research network, ASA section, or some other permanent sociology group addressing the topic. All of this takes quite a bit of time to develop and for the benefits to trickle down to those who study the subject.

I wonder if there is some easy way to track trends in sociological subjects over time to see which hot topics of past decades made it and which did not.
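One rough way to do this, assuming you had a bibliographic export of article titles by year (the data below is entirely invented for illustration), would be simple keyword counting grouped by decade:

```python
from collections import Counter

# Hypothetical corpus of (year, article title) pairs, standing in for a real
# bibliographic export of sociology journal articles.
articles = [
    (1985, "Toward a Sociology of Emotions"),
    (1992, "Emotions and Social Movements"),
    (2009, "Locating Zombies in Popular Culture"),
    (2011, "Zombies and the Apocalyptic Imagination"),
    (2013, "Zombies, Race, and the New Horror Film"),
]

def topic_trend(corpus, keyword):
    """Count titles mentioning the keyword, grouped by decade."""
    counts = Counter()
    for year, title in corpus:
        if keyword.lower() in title.lower():
            counts[(year // 10) * 10] += 1
    return dict(sorted(counts.items()))

print(topic_trend(articles, "zombie"))  # {2000: 1, 2010: 2}
```

A real version would need to handle synonyms and abstracts, not just titles, but even this crude count would show which hot topics of past decades persisted and which faded.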

Getting around the anger or apathy students have for taking a sociology qualitative research methods class

In a review that describes how a book’s author practices “stealth sociology,” one sociologist describes how he tries to get his students excited about a qualitative research methods class:

Every semester, I teach a course in qualitative research methods. Revealing this at a dinner party or art opening invariably prompts sympathy, no response at all or variations on “Yuck! That was the worst course I ever had.”

Teaching what students dread and remember in anger robs my equilibrium. I tell students qualitative methods happen to be about stories, not numbers and measurements. And who doesn’t love a story and need one—many—daily? I merely teach ways to collect people’s stories, how to observe everyday life and narrate the encounter, and ways to discover stories “contained” in every human communication medium, from movies and tweets to objects of material culture, cars to casseroles.

Hearing this, students perk up. Momentarily. I continue in the liberal arts college spirit and urge students, “Bring to our class discussion and your research planning the skills you developed in English, literature and art classes.”

Hearing this, spirits deflate. Although some take to the freedom in narrative research methods, many students can’t give up the security they find in objective hypotheses, measured variables and reassuring numbers.

“How can we be objective about ourselves?” I argue. “How can anyone?”

Today in the wake of so-called identity studies, we sociologists and anthropologists expect each other to write ourselves into our research. We reveal our social addresses, identify our perspectives, and justify our intent. Sociologists and women’s studies scholars call it standpoint theory. No more pretense of the all-seeing-eye. No more fly on the wall invisibility.

As I think back on my experiences teaching many sections of Intro to Sociology, Statistics, and Research Methods (covering both quantitative and qualitative methods), I have often found the opposite to be true: undergraduates more often understand the value of stories and narratives and have more difficulty thinking about scientifically studying people and society. Perhaps this is the result of a particular subculture that values personal relationships.

At the same time, sociologists collect stories in particular ways. It isn’t just one person offering an interpretation while other people see very different things in the stories; it involves rigorous data collection and analysis by looking across cases. But this is done without statistical tests and often with smaller samples (which can limit generalizability). Coding “texts” can be a time-consuming and involved process, and interviewing people takes quite a bit of work: crafting good questions, interacting with respondents to build rapport without influencing their answers, and then understanding and applying what you have heard. We know that we might bias the process, even in the selection of a research question, but we can find ways to limit this, including utilizing multiple coders as well as sharing our work with others so they can check our findings and help us think through the implications.
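The multiple-coders check mentioned above is often quantified with an inter-rater agreement statistic such as Cohen’s kappa, which corrects raw agreement for the agreement two coders would reach by chance. A minimal sketch, with invented codes for ten interview excerpts:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes two coders assigned to the same ten interview excerpts
coder_1 = ["work", "family", "work", "faith", "work",
           "family", "faith", "work", "family", "work"]
coder_2 = ["work", "family", "work", "work", "work",
           "family", "faith", "work", "faith", "work"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.67
```

Here the coders agree on 8 of 10 excerpts, but kappa discounts that to 0.67 because both apply “work” so often that some agreement is expected by chance alone.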

Just how different is Canadian society from American society?

I don’t know about the validity of this argument, but two sociologists argued a while back that Canada and the United States could be better understood by breaking them into four total regions:

Our research, covering almost 30 years of contemporary and historical analysis, shows the four-regions model fits the evidence much better than a simple two-nations model, in which Canada and the US in general are portrayed as very different. There certainly are other internal differences that could be considered, like those between the US west coast and New England, or between British Columbia and Canada’s Atlantic region. However, we found clear and consistent evidence that the strongest lines of demarcation separate Québec and the rest of Canada, on the one hand, and the American South and non-South, on the other, with national differences usually far less prominent.

In Regions Apart, and in other studies that we and others have conducted, Québec is clearly the most left-liberal region of North America on topics like gay rights, same-sex marriage, common-law marriage, adolescent sexuality, capital punishment, taxation, government spending, unionization, military intervention and so on. The US South is the most conservative or traditional on these same issues. The rest of Canada and the US are usually quite similar on these and other cultural, social, political and economic questions…

What Jim and I called the four “deep structural” principles of the two nations are still intact, though more as ideals to strive for, and not as perfectly achieved realities in either country. These include liberty, individual freedom to pursue one’s goals, while also accepting the rights of others to pursue their goals; equality, the same rights and opportunities for all citizens, though not necessarily the same life outcomes; popular sovereignty, government of the people, by the people and for the people, as Abraham Lincoln so eloquently put it; and pluralism, the belief that all individuals have the fundamental right to be different, even if other people don’t always like or agree with their differences.

As for divergences, I think we have long been divergent in the area of criminal justice, where we see consistently much higher US incarceration and homicide rates, for example. However, even here some differences are exaggerated, for, as shown in Regions Apart, Canada actually has somewhat higher rates for some non-violent crimes, like auto thefts and break-and-entry.

Another area of substantial difference or divergence over the years concerns our roles in the world. The US is far more powerful politically, economically and culturally than Canada, and such differences inevitably give rise to occasionally different views about how to address some of the world’s problems. But we have also been close political allies and economic partners for many decades, so even here our divergent positions can be overstated in many instances, and can regularly change toward more convergence again at a later time.

I don’t know how accurate such an analysis is without looking further at the methodology of how these regions were developed. Why four regions? How was the cluster analysis undertaken? How much variation is within these categories?
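On the cluster-analysis question: one simple sanity check, shown here with invented attitude scores rather than the authors’ actual data, is to compute pairwise distances between regional mean scores and see whether Québec and the US South really sit far from the middle pair:

```python
from math import dist

# Hypothetical mean scores (0 = most traditional, 1 = most left-liberal) on
# two survey items per region; illustrative numbers only, not the authors' data.
regions = {
    "Quebec":         (0.85, 0.80),
    "Rest of Canada": (0.60, 0.55),
    "US non-South":   (0.55, 0.50),
    "US South":       (0.25, 0.20),
}

# Pairwise Euclidean distances: if the four-regions model holds, the
# "Rest of Canada"/"US non-South" pair should be close while Quebec and
# the US South sit at opposite extremes.
names = list(regions)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: {dist(regions[a], regions[b]):.2f}")
```

In this toy version, the national border barely registers (Rest of Canada and US non-South are near neighbors), which is exactly the pattern the quoted passage claims. How much within-region variation those regional means hide is a separate question the averages cannot answer.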

At the same time, this made me think: just how much do Americans know about Canada? Could they even identify these two broad regions or some of the key tensions in Canadian life today? On the other hand, I suspect Canadians know more about American life. This could be due to a variety of factors yet it seems odd that we wouldn’t know much about Canada given some of our overlapping background and interests as well as geographic proximity.

Tree diagrams as important tool in human approach to big data

Big data may seem like a recent phenomenon but for centuries tree diagrams have helped people make sense of new influxes of data:

The Book of Trees: Visualizing Branches of Knowledge catalogs a stunning diversity of illustrations and graphics that rely on arboreal models for representing information. It’s a visual metaphor that’s found across cultures throughout history–a data viz tool that has outlived empires and endured huge upheavals in the arts and sciences…

For the first several hundred years at least, the use of the tree metaphor is largely literal. A graphic from 1552 classifies parts of the Code of Justinian–a hugely important collection of a thousand years of Roman legal thought–as a trunk with a dense tangle of leafless branches. An illustration from Liber Floridus, one of the best-known encyclopedias from the Middle Ages, lays out virtues as fronds of a palm. In the early going, classifying philosophical knowledge and delineating the moral world were frequent use cases. In nearly every case, foliage abounds…

At some point in the 18th or 19th century, the tree model made the leap to abstraction. This led to much more sophisticated visuals, including complex organization charts and dense genealogies. One especially influential example arrived with Darwin’s On the Origin of Species, in 1859…

While the impulse to visualize is more alive today than ever, our increasingly technological society may be outgrowing this enduring representational model. “Trees are facing this paradigm shift,” Lima says. “The tree, as a representational hierarchy, cannot accommodate things like the web and Wikipedia–things with linkage. The network is replacing the tree as the new visual metaphor.” In fact, the idea to do a collection solely on trees was born during Lima’s research on his first book–a collection of visualizations based on the staggering complexity of networks.

A few quick thoughts:

1. We talk a lot now about being in a visual age (why can’t audio clips go viral?) yet humans have a long history of utilizing visuals to help them understand the world.

2. We’ve seen big leaps forward in data dissemination in the past – think the invention of writing, the printing press, the telegraph, etc. The leap forward to the Internet may seem quite monumental but such shifts have been tackled before.

3. Designing infographics took skill in the past just as it does today. The tree is a widely understood symbol that lends itself to certain kinds of data. Throw in some color and flair and it can work well. Yet it can also be done poorly, detracting from the graphic’s ability to convey information quickly.

Chicago crime stats: beware the “official” data in recent years

Chicago magazine has a fascinating look at some of the choices made about how to classify homicides in Chicago – choices seemingly made with the goal of reducing the murder count.

The case of Tiara Groves is not an isolated one. Chicago conducted a 12-month examination of the Chicago Police Department’s crime statistics going back several years, poring through public and internal police records and interviewing crime victims, criminologists, and police sources of various ranks. We identified 10 people, including Groves, who were beaten, burned, suffocated, or shot to death in 2013 and whose cases were reclassified as death investigations, downgraded to more minor crimes, or even closed as noncriminal incidents—all for illogical or, at best, unclear reasons…

Many officers of different ranks and from different parts of the city recounted instances in which they were asked or pressured by their superiors to reclassify their incident reports or in which their reports were changed by some invisible hand. One detective refers to the “magic ink”: the power to make a case disappear. Says another: “The rank and file don’t agree with what’s going on. The powers that be are making the changes.”

Granted, a few dozen crimes constitute a tiny percentage of the more than 300,000 reported in Chicago last year. But sources describe a practice that has become widespread at the same time that top police brass have become fixated on demonstrating improvement in Chicago’s woeful crime statistics.

And has there ever been improvement. Aside from homicides, which soared in 2012, the drop in crime since Police Superintendent Garry McCarthy arrived in May 2011 is unprecedented—and, some of his detractors say, unbelievable. Crime hasn’t just fallen, it has freefallen: across the city and across all major categories.

Two quick thoughts:

1. “Official” statistics are often taken for granted and it is assumed that they measure what they say they measure. This is not necessarily the case. All statistics have to be operationalized, taken from a more conceptual form into something that can be measured. Murder seems fairly clear-cut but as the article notes, there is room for different people to classify things differently.

2. Fiddling with the statistics is not right but, at the same time, we should consider the circumstances within which this takes place. Why exactly does the murder count – the number itself – matter so much? Are we more concerned about the numbers or the people and communities involved? How happy should we be that the number of murders was once over 500 and now is closer to 400? Numerous parties mentioned in this article want to see progress: aldermen, the mayor, the police chief, the media, the general public. Is progress simply reducing the crime rate or rebuilding neighborhoods? In other words, we might consider whether the absence of major crimes is the best end goal here.
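The operationalization point in (1) can be made concrete: the same records yield different “official” totals depending on which classification rule you adopt. A tiny sketch with invented incident records:

```python
# Hypothetical incident records; "manner" is the classification investigators
# settled on, which is exactly where discretion enters.
incidents = [
    {"id": 1, "cause": "gunshot",      "manner": "homicide"},
    {"id": 2, "cause": "asphyxiation", "manner": "undetermined"},
    {"id": 3, "cause": "blunt trauma", "manner": "homicide"},
    {"id": 4, "cause": "burns",        "manner": "death investigation"},
]

# Two operationalizations of the "murder count" from the same records:
strict = sum(1 for i in incidents if i["manner"] == "homicide")
broad = sum(1 for i in incidents if i["manner"] != "natural")

print(strict, broad)  # 2 4
```

Nothing about the underlying deaths changed between the two counts; only the rule did. That is the sense in which “official” statistics are constructed rather than simply found.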

Steps for cities trying to brand themselves

Most cities would love to attract more business and visitors and thereby expand their tax base. But, how can cities brand themselves today amidst so much competition?

Cities of varying sizes struggle with two related, but seemingly opposing, global and local forces. At one level, every city would like to benefit from the global flow of capital and the emerging landscapes of prosperity seen in “other” places. At another level, to be a recipient of such attention, a city has to offer something more than cheaper real estate and tax benefits.

What cities need is a sense of uniqueness; something that separates them from other cities. Without uniqueness, a city can easily be made invisible in a world of cities. In other words, without defining the “local,” there is no “global.” Here is where identifying a coherent message about a place, based on its identity, becomes crucial. One of the major challenges facing many cities, small and large, is how to make themselves visible, and how to identify, activate, and communicate their place identity – their brand – through actions.

The challenge of urban branding is that cities are not commodities. As such, urban branding is not the same as product or corporate-style branding. Cities are much more complex and contain multiple identity narratives; whatever the business and leadership says, there are other local voices that may challenge the accepted “script”. In fact, while city marketing may focus mainly on attracting capital through economic development and tourism, urban branding needs to move beyond the simply utilitarian, and consider memories, urban experiences, and quality of life issues that affect those who live in a city. A brand does not exist outside the reality of a city. It is not an imported idea. It is an internally generated identity, rooted in the history and assets of a city…

To make a city visible takes more than a logo. The future of a city region depends on a diversity of political, managerial, community and business leaders who will participate and sustain a process that will lead to an inclusively created brand, followed by actions that embrace it. Cities without articulated identities will remain invisible, lamenting at every historical turn the loss of yet another opportunity to be like their more successful neighbors.

The primary parts of this argument are: (1) have a cohesive and dynamic set of local leaders; (2) identify and/or develop a key unique feature or identity to build upon; and (3) focus not just on economic factors but also on cultural scenes. I don’t know that these have changed all that much in recent decades, though the second and third pieces may seem more difficult today due to increased competition, both for perceived limited resources and because cities now compete against a wider set of cities. Boosterism has been a consistent dimension of American cities for a long time, but their status anxiety may have increased in recent decades.

I wonder if part of the branding issue today is defining what makes a city successful. What should the average city strive for in terms of development? Is it better to shoot for the moon, or should a city set more realistic goals? Is it okay for a city to be a regional center appealing to a more immediate population, or should everyone go all in on the global game? Is this about increasing population, having more tourists, attracting more businesses, rehabbing rundown neighborhoods, being able to pay their own bills, a combination of all of these, or something else? Communities tell all sorts of narratives about themselves, ranging from the stable community that pays its bills, to the friendly, helpful place, to the city with all the quality-of-life amenities, to the suburb with a disproportionate share of valuable white-collar jobs. Some of this branding and narrative development happens in relation to other cities nearby or in a perceived similar category (Chicago might compare itself to New York City, but New York compares itself to cities like London and Tokyo), but there is also an internal dimension that may not be intended for outsiders.

Chicago area transit problem: “Only 12 percent of suburbanites can get to work in less than 90 minutes via mass transit”

As Chicago area leaders debate how local groups should approach regional mass transit, a Chicago Tribune editorial in favor of shaking things up says changes would make mass transit more accessible:

The group’s 95-page report suggests measures to curb the sort of political meddling that led to the resignations of six Metra board members. It also makes a case that a streamlined organizational chart would reduce corruption simply by limiting the number of actors…

Our region’s three transit agencies waste tax dollars on lobbyists to compete with one another for more tax dollars for parochial priorities, instead of developing a consensus vision that would lead to more investment. From 2002 to 2012, consolidated transit systems serving Boston, New York, Philadelphia, San Francisco and Washington, D.C., have spent almost twice as much per resident on transit as Chicago has, the task force says.

Lack of coordination between the CTA, Metra and Pace means that riders whose commutes involve switching from bus to train or vice versa are stuck with long waits, poor connections and multiple fare systems. The task force says only 12 percent of suburbanites can get to work in less than 90 minutes via mass transit.

That last figure is important: mass transit is really a limited option in the Chicago suburbs. While there are still transit issues in Chicago itself (expanding L lines, building more bicycle paths and lanes), the issues in the broader region often get overlooked. Suburban job centers are not connected. The railroad lines run into the city, meaning commuters often can’t make connections to other lines until they are in Chicago’s Loop. If the region were still centered on jobs in the Loop, this might all make sense. But it hasn’t been that way for decades, and suburban mass transit options have not kept pace.
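The 12 percent figure is a threshold statistic, and statistics like it are easy to recompute under different cutoffs. A sketch with invented commute times:

```python
# Hypothetical door-to-door transit commute times (minutes) for ten suburban workers
commute_minutes = [45, 110, 75, 130, 95, 60, 150, 88, 125, 100]

def share_under(times, threshold):
    """Fraction of commutes shorter than the threshold."""
    return sum(t < threshold for t in times) / len(times)

print(f"{share_under(commute_minutes, 90):.0%} can get to work in under 90 minutes")
```

One reason to recompute at several cutoffs (60, 90, 120 minutes) is that a single threshold can hide how many riders sit just above it; a 12 percent figure at 90 minutes looks very different if another 30 percent of commutes take 95.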

“Why Did Chicago’s Middle Class Disappear?”

Whet Moser explains the GIF of Chicago’s disappearing middle-class through the work of sociologist Lincoln Quillian:

What’s most striking about Hertz’s map is the transition from 1970 onwards; when the map begins, the lowest-income census tracts are extremely concentrated. Then, as if a switch was flipped, they radiate out from the city center by 1980. (It almost looks like watching Conway’s Game of Life.) The change in those 20 years is immense. And Quillian gives a clue as to why, laying the groundwork for what was happening before Hertz’s analysis begins (emphasis mine):

Modern poor urban neighborhoods, formed after 1970 or so, thus stand in sharp demographic
contrast to poor and minority neighborhoods earlier in the century. Accounts of racial succession of neighborhoods in the 1950s indicate that neighborhoods undergoing racial transition tended to increase in population density, especially in passing through a late phase in racial succession referred to as “piling up,” in which previously white-owned homes and apartments were subdivided into smaller dwellings to accommodate the housing demands of black immigrants (Duncan and Duncan 1957). Although the affluent have always made efforts to segregate themselves from the poor, immigration into cities before about 1970 was proceeding at too rapid a pace to allow inner city neighborhoods to drop substantially in population as part of this process. Indeed, a chief reason blacks desired to exit predominantly black areas of the city before 1970 was because the housing supply in black neighborhoods was insufficient to keep up with demand (Aldrich 1975). With the end of black immigration to urban areas, poor African-American neighborhoods have changed from densely packed communities of recently arrived immigrants to areas gradually abandoned by the nonpoor. The cessation of the flow of black immigrants to the nation’s cities, and the corresponding decline in the population density of poor neighborhoods, may be one unexplored factor responsible for the change in the nature of poor African-American neighborhoods in the early 1970s that Wilson (1987) describes.

The Second Great Migration ends in 1970. To paraphrase Hunter S. Thompson, Hertz’s 1970 map appears to be the point where you can see the wave break and roll back.

Quillian’s data then picks up the narrative, which adds texture to Hertz’s map. Between 1980 and 1990, there’s a substantial leap in the lowest-income-level census tracts, then things plateau from 1990-2000. Here’s Quillian again:

There is no indication in the PSID data that stayers in black and/or poor neighborhoods experienced increases in their poverty rates in the 1970s and 1980s, except during the recession of the early 1980s. During this recession, increases in the poverty rate among the nonpoor were spatially concentrated in black moderately poor neighborhoods. Since these neighborhoods were already moderately poor to begin with, this suggests that increasing poverty rates in the early 1980s had a strong effect in increasing the number of extremely poor neighborhoods.

Quillian was writing in 1998 (here’s another paper from him in 2012, addressing similar issues), but his conclusions accurately foretell the changes you can see from 2000-2012: “Neighborhoods in transition to high-poverty status empty first of whites, then of many middle-class blacks, leaving more-disadvantaged and less-populous areas. The overall result is that high-poverty neighborhoods have been becoming geographically larger and less densely settled.”

So some of these neighborhoods that changed over to high levels of poverty are not necessarily the result of an increasing number of poor people but rather of the departure of higher-income and white residents. They may be poor neighborhoods, but they are not necessarily dense, because few people of any background are moving in.
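Quillian’s mechanism can be illustrated with a toy census tract (all numbers invented): the count of poor residents never changes, yet the tract crosses the conventional 40 percent “extreme poverty” threshold purely through out-migration of the non-poor:

```python
# Hypothetical census tract: the number of poor residents never changes, but
# non-poor residents leave, so the poverty *rate* climbs past the conventional
# 40% "extreme poverty" threshold without any new poor residents arriving.
tract_by_decade = {
    1970: {"poor": 800, "nonpoor": 3200},
    1980: {"poor": 800, "nonpoor": 1600},
    1990: {"poor": 800, "nonpoor": 900},
}

for year, tract in tract_by_decade.items():
    rate = tract["poor"] / (tract["poor"] + tract["nonpoor"])
    label = "extremely poor" if rate >= 0.40 else "below threshold"
    print(year, f"{rate:.0%}", label)
```

The tract goes from 20 percent poor to 47 percent poor while housing exactly the same number of poor people, which is why a map of spreading high-poverty tracts is not, by itself, evidence of more poor residents.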

Another thought: some conversation about white flight focuses on the 1950s and 1960s, when whites moved to the suburbs due to (1) policies that helped make the suburbs more attractive (interstate construction, new rules about mortgages that made home purchases available to more Americans) plus (2) continued waves of the Great Migration of blacks to Northern cities. All this is true, but this map is a reminder that the processes affecting poor neighborhoods continued from the 1970s to the 1990s. It wasn’t until the 1980s that academics started writing important books about this, whether from William Julius Wilson or Paul Jargowsky.

Of course, a key question is how much this is still happening today. Can poor neighborhoods spread even further as better-off urban residents and suburban residents move to wealthier pockets while lower-class and poorer residents are left in emptying out locales? The process may not be over yet and it is hard to find cases where truly poor neighborhoods from recent decades made substantial turnarounds.

Iconic image of American McMansions from Plano, Texas

I’ve seen this picture of a Plano, Texas McMansion numerous times around the Internet:

I’ve wondered at the origin of this photo, and now I see: this image and others from the same area are part of Dean Terry’s Flickr stream, with the photos originating from his 2007 documentary Subdivided.

What makes this particular McMansion photo stand out? Some reasons:

1. The home has a “typical” McMansion design: brick exterior, multi-gabled roof, clearly a big home, lots of big windows in the front at various levels, a two-story foyer.

2. The surrounding area: the looming water tower, the big power lines out nearby, a neighborhood of similar sized houses with little evidence of anyone being around. (Some of the later photos in the Flickr set illustrate this further: the home backs up to a wide right-of-way for power lines and that water tower really is huge.) Setting the picture beneath a stop sign and lamppost seems to add to the ominousness of the photo.

3. This is Texas, a place where everything is big, including the homes, water towers, and sky. And not just any part of Texas: Plano is a booming suburb in the Dallas-Fort Worth metropolitan area that went from just 17,872 people in 1970 to 259,841 people in 2010. That is explosive, sprawling suburban growth.

Now, I may just have to get my hands on this documentary to see more of the home and its context…

Fake Georgian office building to hide electric substation next to fake Hard Rock Cafe in Chicago

It is not uncommon for cities to have fake buildings or facades to hide infrastructure and here is an example in Chicago where the same architect designed the Hard Rock Cafe and fake mansion next door:

The most noteworthy, a faux Georgian mansion in the River North area of downtown, was designed by perhaps the city’s most famous living architect, Stanley Tigerman, former director of the School of Architecture at the University of Illinois at Chicago.

“The building is somewhat tongue-in-cheek, a bit of a joke,” said Tigerman, who had first designed a restaurant just west of the site. “The Hard Rock Cafe: fake stucco, fake Georgian, nothing real about it. Then they came to me and wanted me to do the ComEd substation next door, but to be contextual, to relate it to this ersatz piece of junk.”

So rather than construct a bogus building based on a fake, albeit one he designed, Tigerman cut the other direction.

“I decided to go absolutely hard core, as classically designed as I could, done authentically Georgian,” he said. “The brick bonding is English cross bond, the one Mies van der Rohe used whenever he used brick. It’s very expensive to lay bricks that way, but it makes the walls sturdy and impervious to cracking. I knew the building would never receive any maintenance, so the idea was to do as good a building as I could.”

He also had to take into account the building’s true purpose — so if you look closely, what seem to be windows are actually vents, to help cool the 138 kV electrical transmission equipment inside.

Hiding in plain sight. Here is the Google Streetview image of the two buildings, the covered substation on the left and the Hard Rock Cafe on the right:


This could lead to a great architecture conversation: which of the buildings is more fake or authentic? The restaurant which is about evoking a particular spirit (a museum? an imposing older structure intended to lend more gravitas to rock ‘n’ roll?) to make money? Or the fake mansion with more pure design that does nothing but hide the infrastructure that is necessary for big cities? Both could be considered postmodern for their application of old styles to new purposes, their exteriors projecting certain images that don’t match their interiors.