The urban theory behind SimCity

In constructing the game SimCity, Will Wright worked with the ideas of Jay Forrester:

Looking to understand how real cities worked, Wright came across a 1969 book by Jay Forrester called Urban Dynamics. Forrester was an electrical engineer who had launched a second career as an expert on computer simulation; Urban Dynamics deployed his simulation methodology to offer a controversial theory of how cities grew and declined. Wright used Forrester’s theories to transform the cities he was designing in his level editor from static maps of buildings and roads into vibrant models of a growing metropolis. Eventually, Wright became convinced that his “guinea-pig city” was an entertaining, open-ended video game. Released in 1989, the game became wildly popular, selling millions of copies, winning dozens of awards, and spawning an entire franchise of successors and dozens of imitators. It was called SimCity.

Largely forgotten now, Jay Forrester’s Urban Dynamics put forth the controversial claim that the overwhelming majority of American urban policy was not only misguided but actually aggravated the very problems it was intended to solve. In place of Great Society-style welfare programs, Forrester argued that cities should take a less interventionist approach to the problems of urban poverty and blight, and instead encourage revitalization indirectly through incentives for businesses and for the professional class. This hands-off approach to urban policy proved popular among conservative and libertarian writers, Nixon Administration officials, and other critics of the Great Society. This outlook, supposedly backed up by computer models, remains highly influential among establishment pundits and policymakers today…

Forrester spent months tinkering with this model, tested and corrected it for errors, and ran a “hundred or more system experiments to explore the effects of various policies on the revival of a city that has aged into economic decline.” Six months after beginning the project, and over 2000 pages of teletype printouts later, Forrester declared that he had reduced the problems of the city to a series of 150 equations and 200 parameters…
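
To give a flavor of what a system dynamics model looks like in practice, here is a minimal stock-and-flow sketch in Python. The stocks, feedback rules, and parameter values are invented purely for illustration – a toy in the spirit of Forrester’s approach, not any part of his actual 150 equations and 200 parameters.

```python
# Toy stock-and-flow simulation in the spirit of system dynamics.
# Stocks, feedback rules, and parameters are invented for illustration;
# this is not Forrester's actual urban model.

def simulate(years=50):
    population = 50_000   # stock: residents
    businesses = 1_000    # stock: business structures
    housing = 20_000      # stock: housing units

    for year in range(years):
        # Flows are simple functions of the current stocks (feedback loops).
        jobs_per_person = businesses * 20 / population   # assume 20 jobs per structure
        crowding = population / housing                  # people per housing unit

        population = int(population * (1 + 0.03 * min(jobs_per_person, 1) - 0.01 * crowding))
        businesses = int(businesses * (1 + 0.02 * min(jobs_per_person, 1) - 0.005 * crowding))
        housing = int(housing * (1 + 0.02 * max(crowding - 1.5, 0)))

        if year % 10 == 0:
            print(f"year {year:2d}: pop={population:,} "
                  f"businesses={businesses:,} housing={housing:,}")

simulate()
```

The point of such a model is the feedback structure: each stock’s growth depends on the state of the others, so small changes in parameters can produce counterintuitive long-run behavior – exactly the kind of dynamics Forrester argued our mental models handle poorly.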

Forrester thought that the basic problem of urban planning—and making social policy in general—was that “the human mind is not adapted to interpreting how social systems behave.” In a paper serialized in two early issues of Reason, the libertarian magazine founded in 1968, Forrester argued that for most of human history, people have only needed to understand basic cause-and-effect relationships, but that our social systems are governed by complex processes that unfold over long periods of time. He claimed that our “mental models,” the cognitive maps we have of the world, are ill-suited to help us navigate the web of interrelationships that make up the structure of our society.

Three quick thoughts:

  1. How many people dream that cities could be reduced to equations and parameters? Cities are both fascinating and frustrating because they are so complex. And the quest to find overarching rules governing urban life continues – see the work of Geoffrey West as an example.
  2. Figuring out when more government intervention is helpful or not is a difficult task, particularly when it comes to complex cities. Housing is an area I have written about before: free markets do not bring about fair results and the federal government has promoted one kind of housing, single-family homes, over others for decades.
  3. This is a reminder that games can teach users about how the world works – they are not just mindless entertainment – but players learn under the conditions or terms set up by the designer. Cities are indeed complex and SimCity presents them in one particular way. All games have a logic to them and this may or may not match reality. How much theory do we imbibe on a daily basis through different activities? At the least, we are forming our own individual theoretical explanations of how we think society operates.

Max Weber, Bernie Sanders, and a difficult revolution

Why not have more sociological theory applied to the 2016 election? Here is one application of Weber’s ideas to Bernie Sanders’s chances of starting a revolution:

Max Weber, the great sociologist best remembered for coining the phrase “Protestant work ethic,” would have loved Sunday’s Democratic debate. Leaving aside the sad and quixotic figure of Martin O’Malley, the two main contenders, Hillary Clinton and Bernie Sanders, perfectly illustrated a distinction Weber made in his classic 1919 essay “Politics as a Vocation.” In that essay, Weber distinguished between two different ethical approaches to politics, an “ethics of moral conviction” and an “ethics of responsibility.”

Sanders is promoting an “ethics of moral conviction” by calling for a “political revolution” seeking to overthrow the deeply corrupting influence of big money on politics by bringing into the system a counterforce of those previously alienated, including the poor and the young. Clinton embodies the “ethics of responsibility” by arguing that her presidency won’t be about remaking the world but trying to preserve and build on the achievements of previous Democrats, including Obama.

The great difficulty Sanders faces is that given the reality of the American political system (with its divided government that has many veto points) and also the particular realities of the current era (with an intensification of political polarization making it difficult to pass ambitious legislation through a hostile Congress and Senate), it is very hard to see how a “political revolution” could work.

Read Weber’s piece here and a summary here. As I skim through the original piece, it is a reminder of Weber’s broad insights as well as his occasional interest in addressing current conditions (political unrest in Germany). Wouldn’t Weber suggest that Sanders needs either (1) a ridiculous amount of charisma (which he has to some degree to come this far in politics) and/or (2) unusually large-scale support from the public in order to counter the power of existing government? Reaching either objective this time around may prove too difficult…

The Chicago School model of urban growth doesn’t quite fit…but neither do other models

Sociologist Andy Beveridge adds to the ongoing debate within urban sociology over the applicability of the Chicago School’s model of growth:

Ultimately, Beveridge’s interesting analysis found that the basic Chicago School pattern held for the early part of the 20th century and even into the heyday of American post-war suburbanization. But more recently, the process and pattern of urban development has diverged in ways that confound this classic model…

The pattern of urban growth and decline has become more complicated in the past couple of decades as urban centers, including Chicago, have come back. “When one looks at the actual spatial patterning of growth,” Beveridge notes, “one can find evidence that supports exponents of the Chicago, Los Angeles and New York schools of urban studies in various ways.” Many cities have vigorously growing downtowns, as the New York model would suggest, but outlying areas that are developing without any obvious pattern, as in the Los Angeles model.

The second set of maps (below) gets at this, comparing Chicago in the decades 1910-20 and 1990-2000. In the first part of the twentieth century, decline was correlated with decline in adjacent downtown areas, shown here in grey. Similarly, growth was correlated with growth in more outlying suburbs, shown here in black. In the earlier period growth radiated outwards — a close approximation of the Chicago school concentric zone model. But in the more recent map, growth and decline followed less clear patterns. Some growth concentrated downtown, while other areas outside the city continued to boom, in ways predicted more accurately by the New York and Los Angeles models. The islands of grey and black – which indicate geographic correlations of decline and growth, respectively – are far less systematic. As Beveridge writes, the 1990-2000 map shows very little patterning. There were “areas of clustered high growth (both within the city and in the suburbs), as well as decline near growth, growth near decline, and decline near decline.”
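
One standard way to put a number on this kind of “growth near growth, decline near decline” clustering is a spatial autocorrelation statistic such as Moran’s I. The Python sketch below uses invented tract growth rates and a made-up adjacency list purely to illustrate the calculation; it is not Beveridge’s data or method.

```python
import numpy as np

# Moran's I: a common measure of spatial autocorrelation.
# The growth rates and neighbor structure below are invented for illustration.

growth = np.array([0.12, 0.10, 0.08, 0.05, -0.03, -0.05, -0.07, -0.09])  # toy tract growth rates

# Binary adjacency: neighbors[i] lists the tracts bordering tract i (made up).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4],
             4: [3, 5], 5: [4, 6], 6: [5, 7], 7: [6]}

n = len(growth)
z = growth - growth.mean()          # deviations from the mean growth rate

cross_products, total_weight = 0.0, 0.0
for i, js in neighbors.items():
    for j in js:
        cross_products += z[i] * z[j]   # positive when neighboring tracts move together
        total_weight += 1.0

moran_i = (n / total_weight) * cross_products / (z ** 2).sum()
# Positive values: growth clusters near growth and decline near decline;
# values near zero: little spatial patterning.
print(f"Moran's I = {moran_i:.3f}")
```

On these invented numbers growth sits next to growth and decline next to decline, so the statistic comes out well above zero; the weakly patterned 1990-2000 map Beveridge describes, with growth near decline and decline near growth, corresponds to the near-zero case.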

Interesting research. It sounds like the issue is not necessarily the models of growth but how widely they are applied within a metropolitan region. Assuming the same processes are taking place over several hundred square miles is too much of a leap. We might then need to look at smaller areas or types of areas as well as micro processes.

This reminds me that when I taught urban sociology this past spring and we read as a class about the Chicago School, New York School, and Los Angeles School, students wanted to discuss why sociologists seem to want one theory to explain all cities. This isn’t necessarily the case; we know cities are different, particularly when you get outside of an American or Western context. At the same time, we are interested in trying to better understand the underlying processes surrounding city change. Plus, Chicago, New York, and LA have had organized groups (sometimes more tightly, sometimes more loosely) based in important schools pushing theories (and we don’t have such schools in places like Miami, Atlanta, Dallas, Portland, etc.).

Another call for theory when working with big data

Big data is not just about allowing researchers to look at really large samples or lots of information at once. It also requires using theory and asking new kinds of questions:

Like many other researchers, sociologist and Microsoft researcher Duncan Watts performs experiments using Mechanical Turk, an online marketplace that allows users to pay others to complete tasks. While the platform is used largely to fill in gaps in applications where human intelligence is required, social scientists are increasingly turning to it to test their hypotheses…

This is a point political forecaster and author Nate Silver discusses in his recent book The Signal and the Noise. After discussing economic forecasters who simply gather as much data as possible and then make inferences without respect for theory, he writes:

This kind of statement is becoming more common in the age of Big Data. Who needs theory when you have so much information? But this is categorically the wrong attitude to take toward forecasting, especially in a field like economics, where the data is so noisy. Statistical inferences are much stronger when backed up by theory or at least some deeper thinking about their root causes…

The value of big data isn’t simply in the answers it provides, but rather in the questions it suggests that we ask.

This follows a similar recent argument made on the Harvard Business Review website.

I like the emphasis here on the new kinds of questions that might be possible with big data. There are a few ways this could happen:

1. Uniquely large datasets might allow for different comparisons, particularly among smaller groups, that are more difficult to look at even with nationally representative samples.

2. Platforms like Amazon’s Mechanical Turk allow experiments to be conducted quickly, so more can be done in less time. Additionally, I wonder if this could help alleviate some of the replication issues that pop up with scientific research.

3. Instead of being constrained by data limitations, researchers might find that big data gives them creative space to think on a larger scale and more outside of the box.

Of course, lots of topics are not well-suited to big data approaches, but such information does offer unique opportunities for researchers and theories.

Summarizing sociological theories in 140 characters or less

A sociology instructor is having his students tweet criminal-justice theories:

“They have all these theories to learn,” Atherton said. “Some of them are very dense, and complex. What I try to get them to do, and I tie some extra credit to it, is see if they can boil the theory down, the essence of it, to 140 characters.”…

In a recent class session, Atherton shared tweets from a lesson on a theory of social disorganization, displaying the tweets under Twitter’s signature bluebird.

“Social disorganization refers to communities as a whole not coming together for common goals, ultimately causing a disruption,” the first tweet stated.

Another tweet on the topic read: “theory suggests criminal activity comes from the neighborhood where someone lives and how it shapes them living there.”

If the American Sociological Association is working on a Wikipedia initiative, why not also start a Twitter push? Since it looks like Karl Marx’s Das Kapital is being tweeted (over 41,000 tweets and counting), there is work to be done.

While I think this could be an interesting pedagogical exercise as it allows students to use a current medium as well as put complex theories into their own terms, I wonder if this doesn’t perfectly illustrate the issues with Twitter. Sociological theories are often messy and complex, taking some time to explain and think through. For a very basic understanding, 140 characters could work, but if this is all students know about sociological theories, is this worthwhile in the long run?

Argument: still need thinking even with big data

Justin Fox argues that the rise of big data doesn’t mean we can abandon thinking about data and relationships between variables:

Big data, it has been said, is making science obsolete. No longer do we need theories of genetics or linguistics or sociology, Wired editor Chris Anderson wrote in a manifesto four years ago: “With enough data, the numbers speak for themselves.”…

There are echoes here of a centuries-old debate, unleashed in the 1600s by protoscientist Sir Francis Bacon, over whether deduction from first principles or induction from observed reality is the best way to get at truth. In the 1930s, philosopher Karl Popper proposed a synthesis, in which the only scientific approach was to formulate hypotheses (using deduction, induction, or both) that were falsifiable. That is, they generated predictions that — if they failed to pan out — disproved the hypothesis.

Actual scientific practice is more complicated than that. But the element of hypothesis/prediction remains important, not just to science but to the pursuit of knowledge in general. We humans are quite capable of coming up with stories to explain just about anything after the fact. It’s only by trying to come up with our stories beforehand, then testing them, that we can reliably learn the lessons of our experiences — and our data. In the big-data era, those hypotheses can often be bare-bones and fleeting, but they’re still always there, whether we acknowledge them or not.

“The numbers have no way of speaking for themselves,” political forecaster Nate Silver writes, in response to Chris Anderson, near the beginning of his wonderful new doorstopper of a book, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t. “We speak for them.”

These days, finding and examining data is much easier than before, but it is still necessary to interpret what these numbers mean. Observing relationships between variables doesn’t necessarily tell us something valuable. We also want to know why variables are related and this is where hypotheses come in. Careful hypothesis testing means we can rule out spurious associations driven by other variables that may be producing the observed relationship, look for the influence of one variable on another while controlling for other factors (the essence of regression), or examine more complex models where we can see how a variety of variables affect each other at the same time.
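
To make the regression point concrete, here is a minimal sketch of “controlling for other factors” using simulated data in Python. The variable names, the confounding structure, and the effect sizes are all invented for illustration; this shows the general technique, not an analysis of any real dataset.

```python
import numpy as np
import statsmodels.api as sm

# Toy illustration of controlling for a confounder in regression.
# All variables and effect sizes are simulated, purely for illustration.

rng = np.random.default_rng(0)
n = 1_000
education = rng.normal(14, 2, n)                        # confounder
income = 2.0 * education + rng.normal(0, 1, n)          # driven by education
health = 1.5 * education + rng.normal(0, 1, n)          # no true income effect

# Naive model: income appears to "affect" health because both track education.
naive = sm.OLS(health, sm.add_constant(income)).fit()

# Controlled model: once education is included, income's coefficient shrinks toward zero.
controls = sm.add_constant(np.column_stack([income, education]))
controlled = sm.OLS(health, controls).fit()

print("naive income coefficient:     ", round(naive.params[1], 2))
print("controlled income coefficient:", round(controlled.params[1], 2))
```

In the simulated data, health depends only on education, yet the naive model assigns income a sizeable coefficient because income and health both track education; adding education as a control pushes the income coefficient back toward zero, which is what ruling out a spurious association looks like in practice.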

At the opposite end of the scientific process from the hypotheses, utilizing findings to create and implement policies also requires thinking. Once we have established that relationships likely exist, it takes even more work to respond to them in useful and effective ways.

The intellectual bloodlines of Talcott Parsons

In response to a review of Robert Bellah’s new book, a sociologist writes to the New York Times to link Robert Bellah and Clifford Geertz to Talcott Parsons:

His contrast of Bellah’s theories of religious evolution with Clifford Geertz’s outlook was also illuminating, but I was surprised he did not mention that both Bellah and Geertz were students of Talcott Parsons, a towering figure of mid-20th-century sociology. Indeed, a fuller understanding of Bellah’s and Geertz’s intellectual trajectories demands appreciation of their continuity with Parsonsian theory as well as their breaks with it. Parsons struggled to provide a vision of human agency that makes a place for morality, reason, emotions and biology, and of social order as the product of both human initiative and pre-existing collective forces, which are themselves both cultural and coercive. As Wolfe points out, his two illustrious students continued to struggle with the complexities of how we can be agents as well as the products of external forces — and the unique role religion has played in how we struggle to manage these elements.

This seems like prescient analysis to me. While undergraduate sociology majors hear in theory classes that Parsons was the end of functionalism and quickly faded from prominence, isn’t this intellectual bloodline a good measure of Parsons’s abilities? I never knew that Bellah and Geertz, both well-respected and well-known, were his students and this puts Parsons in a slightly different light.

Has anyone ever put together a sociological genealogy where we could see how generations of scholars have emerged from others? While these would no doubt be socially constructed and emphasize famous scholars, I think it would be fascinating to see.