Collecting big data the slow way

One of the interesting side effects of the era of big data is finding out how much information is not actually automatically collected (or is at least not available to the general public or researchers without paying money). A quick example from the work of sociologist Matthew Desmond:

The new data, assembled from about 83 million court records going back to 2000, suggest that the most pervasive problems aren’t necessarily in the most expensive regions. Evictions are accumulating across Michigan and Indiana. And several factors build on one another in Richmond: It’s in the Southeast, where the poverty rates are high and the minimum wage is low; it’s in Virginia, which lacks some tenant rights available in other states; and it’s a city where many poor African-Americans live in low-quality housing with limited means of escaping it.

According to the Eviction Lab, here is how they collected the data:

First, we requested a bulk report of cases directly from courts. These reports included all recorded information related to eviction-related cases. Second, we conducted automated record collection from online portals, via web scraping and text parsing protocols. Third, we partnered with companies that carry out manual collection of records, going directly into the courts and extracting the relevant case information by hand.

In other words, it took a lot of work to put together such a database: various courts, websites, and companies each held different pieces of information, and a researcher had to access all of that data and assemble it.
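For readers curious what “automated record collection from online portals, via web scraping and text parsing” can look like in practice, here is a minimal, hypothetical sketch in Python. The portal URL, the page layout, and the field names are all invented for illustration; the Eviction Lab’s actual pipeline is documented in its Methodology Report.

```python
# Hypothetical sketch of scraping eviction case listings from a court portal.
# The URL, page structure, and field names are invented for illustration; real
# portals differ and often require pagination, logins, or rate limiting.
import re

import requests
from bs4 import BeautifulSoup

PORTAL_URL = "https://example-county-court.gov/civil/eviction-cases"  # hypothetical


def fetch_cases(url: str) -> list[dict]:
    """Download a case listing page and parse each table row into a record."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    cases = []
    for row in soup.select("table.case-list tr")[1:]:  # skip the header row
        cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
        if len(cells) < 4:
            continue
        case_number, defendant, address, filed = cells[:4]
        # Text parsing step: keep only rows whose case number matches the
        # (assumed) eviction docket pattern, e.g. "EV-2016-001234".
        if re.match(r"EV-\d{4}-\d+", case_number):
            cases.append({
                "case_number": case_number,
                "defendant": defendant,
                "address": address,
                "filing_date": filed,
            })
    return cases


if __name__ == "__main__":
    for case in fetch_cases(PORTAL_URL):
        print(case)
```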

Without a researcher, company, or government body explicitly setting out to record or collect certain information, a big dataset on that topic will not happen. Someone or some institution, typically with resources at its disposal, needs to set a process into motion. And simply having the data is not enough; it needs to be cleaned up so that all the pieces work together. Again, from the Eviction Lab:

To create the best estimates, all data we obtained underwent a rigorous cleaning protocol. This included formatting the data so that each observation represented a household; cleaning and standardizing the names and addresses; and dropping duplicate cases. The details of this process can be found in the Methodology Report (PDF).
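As a rough sketch of what such a cleaning step can involve, consider the following Python example using pandas. The column names and normalization rules are assumptions made for illustration; the real protocol is far more involved (see the Methodology Report).

```python
# Hypothetical cleaning sketch: standardize names and addresses, then drop
# duplicate cases so each observation represents a household. Column names
# are assumed for illustration.
import pandas as pd


def clean_cases(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()

    # Standardize defendant names and addresses: uppercase, trim whitespace,
    # drop punctuation, collapse repeated spaces, expand common abbreviations.
    for col in ("defendant", "address"):
        df[col] = (
            df[col]
            .str.upper()
            .str.strip()
            .str.replace(r"[.,]", "", regex=True)
            .str.replace(r"\s+", " ", regex=True)
            .str.replace(r"\bST\b", "STREET", regex=True)
            .str.replace(r"\bAVE\b", "AVENUE", regex=True)
        )

    # Drop duplicates: same household (name + address) filed on the same date.
    return df.drop_duplicates(subset=["defendant", "address", "filing_date"])


# Toy example: these two rows collapse into a single household after cleaning.
raw = pd.DataFrame({
    "defendant": ["Jane Doe ", "JANE DOE"],
    "address": ["12 Main St.", "12 MAIN STREET"],
    "filing_date": ["2016-03-01", "2016-03-01"],
})
print(clean_cases(raw))
```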

All of that work led to a fascinating dataset of roughly 83 million records on an important topic.

We are probably still a ways off from a scenario where this information would automatically become part of a dataset. This data had a definite start and required much work. There are many other areas of social life that require similar efforts before researchers and the public have big data to examine and learn from.

New standard and platform for city maps

Maps are important for many users these days, and a new open data standard and platform aims to bring all the street data together:

Using giant GIS databases, cities from Boston to San Diego maintain master street maps to guide their transportation and safety decisions. But there’s no standard format for that data. Where are the intersections? How long are the curbs? Where’s the median? It varies from city to city, and map to map.

That’s a problem as more private transportation services flood the roads. If a city needs to communicate street closures or parking regulations to Uber drivers, or Google Maps users, or new dockless bikesharing services—which all use proprietary digital maps of their own—any confusion could mean the difference between smooth traffic and carpocalypse.

And, perhaps more importantly, it goes the other way too: Cities struggle to obtain and translate the trip data they get from private companies (if they can get their hands on it, which isn’t always the case) when their map formats don’t match up.

A team of street design and transportation data experts believes it has a solution. On Thursday, the National Association of City Transportation Officials and the nonprofit Open Transport Partnership launched a new open data standard and digital platform for mapping and sharing city streets. It might sound wonky, but the implications are big: SharedStreets brings public agencies, private companies, and civic hackers onto the same page, with the collective goal of creating safer, more efficient, and democratic transportation networks.
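To make the idea of a shared standard concrete, here is a purely illustrative sketch in Python of the kind of common street-segment record such a platform might define, so that a city, a ride-hail company, and a bikeshare operator could all refer to the same stretch of road. This is not the actual SharedStreets specification; every field here is an assumption made for illustration.

```python
# Illustrative only: a hypothetical shared street-segment record, not the real
# SharedStreets spec. The key idea is a stable identifier plus basic geometry
# and rules that every party can reference in the same way.
from dataclasses import dataclass, field


@dataclass
class StreetSegment:
    segment_id: str                       # stable ID shared across all parties
    street_name: str
    start_intersection: str               # IDs of the bounding intersections
    end_intersection: str
    geometry: list[tuple[float, float]]   # (longitude, latitude) points
    curb_length_m: float | None = None
    regulations: list[str] = field(default_factory=list)  # parking rules, closures, etc.


# A city publishes a closure; any consumer keyed to the same segment_id sees it.
segment = StreetSegment(
    segment_id="seg-0001",
    street_name="Main St",
    start_intersection="int-12",
    end_intersection="int-13",
    geometry=[(-87.6298, 41.8781), (-87.6292, 41.8781)],
    regulations=["closed 2018-06-10 for street festival"],
)
print(segment.segment_id, segment.regulations)
```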

It will be interesting to see whether this step forward simply makes what is currently happening easier to manage or whether it becomes a catalyst for new opportunities. In a number of domains, having access to data is necessary before creative ideas and new collaborations can emerge.

This also highlights how more of our infrastructure is entering the digital realm. I assume there are at least a few people who are worried about this. For example, what happens if the computers go down or all the data is lost? Does the digital distance from physical realities – streets are tangible things, not just manipulable objects on a screen – remove us from authentic street life? Data like this may be no substitute for a Jane Jacobs-esque immersion in vibrant blocks.

“Tiny Houses Are Big” – with 10,000 total in the United States

Tiny houses get a lot of attention – including this recent Parade story – but rarely are numbers provided about how big (or small) this trend really is. The Parade story did provide some data (though without any indication of how this was measured) on the number of tiny houses in the US. Ready for the figure?

10,000.

Without much context, it is hard to know what to do with this figure or how accurate it might be. Assuming the figure’s veracity, is that a lot of tiny houses? Not that many? Some comparisons might help:

Between February 2016 and March 2017, housing starts ran at a seasonally adjusted annual rate of more than 1,000,000 units in every month. (National Association of Home Builders)

Within data going back to 1959, the lowest point for housing starts after the 2000s housing bubble burst was an annual rate of roughly 500,000 units. (Census Bureau data at TradingEconomics.com)

The RV industry shipped over 430,000 units in 2016. This follows a recent low point in 2009, when only about 165,000 units were shipped. (Recreation Vehicle Industry Association)

The number of manufactured homes that have shipped in recent years – 2014 to 2016 – has surpassed 60,000 each year. (Census Bureau)

The percent of new homes that are under 1,400 square feet has actually dropped since 1999 to 7% in 2016. (Census Bureau)

Based on these comparisons, 10,000 units is not much at all; it is barely a drop in the bucket within the broader housing market.
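A rough back-of-the-envelope calculation with the figures cited above makes the point (note that the 10,000 is a cumulative total, while the other figures are annual):

```python
# Back-of-the-envelope comparison using the figures cited above.
tiny_houses_total = 10_000          # cumulative US total, per the Parade story
housing_starts_annual = 1_000_000   # roughly the recent annual rate of housing starts
rv_shipments_2016 = 430_000
manufactured_homes_annual = 60_000

print(f"vs. one year of housing starts:     {tiny_houses_total / housing_starts_annual:.1%}")
print(f"vs. 2016 RV shipments:              {tiny_houses_total / rv_shipments_2016:.1%}")
print(f"vs. one year of manufactured homes: {tiny_houses_total / manufactured_homes_annual:.1%}")
# Roughly 1.0%, 2.3%, and 16.7% -- and since the 10,000 is a running total rather
# than an annual figure, even these small shares overstate the trend.
```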

Perhaps the trend is sharply on the rise? There is a little evidence of this. I wrote my first post here on tiny houses back in 2010 and it involved how to measure the tiny house trend. The cited article in that post included measures like the number of visitors to a tiny house blog and sales figures from tiny house builders. Would the number of tiny house shows on HGTV and similar networks provide some data? All trends have to start somewhere – with a small number of occurrences – but it doesn’t seem like the tiny house movement is taking off in exponential form.

Ultimately, I would ask for more and better data on tiny houses. Clearly, there is some interest. Yet, calling this a major trend would be misleading.

 

Measuring attitudes by search results rather than surveys?

An author suggests Google search result data gives us better indicators of attitudes toward insecurity, race, and sex than surveys:

I think there’s two. One is depressing and kind of horrifying. The book is called Everybody Lies, and I start the book with racism and how people were saying to surveys that they didn’t care that Barack Obama was black. But at the same time they were making horrible racist searches, and very clearly the data shows that many Americans were not voting for Obama precisely because he was black.

I started the book with that, because that is the ultimate lie. You might be saying that you don’t care that [someone is black or a woman], but that really is driving your behavior. People can say one thing and do something totally different. You see the darkness that is often hidden from polite society. That made me feel kind of worse about the world a little bit. It was a little bit frightening and horrifying.

But, I think the second thing that you see is a widespread insecurity, and that made me feel a little bit better. I think people put on a front, whether it’s to friends or on social media, of having things together and being sure of themselves and confident and polished. But we’re all anxious. We’re all neurotic.

That made me feel less alone, and it also made me more compassionate to people. I now assume that people are going through some sort of struggle, even if you wouldn’t know that from their Facebook posts.

We know surveys have flaws, and there are multiple ways – from sampling to bad questions to nonresponse to social desirability bias (the issue at hand here) – in which they can be skewed.

But these flaws would not lead me to any of the following conclusions:

  1. Thinking that search results data provides better information. Who is doing the searching? Are they a representative population? How clear are the patterns? (It is common to see stories based on this data that provide no actual numbers. “Illinois” might be the most misspelled word in the state, for example, but only by a margin of one search, 486 to 485; see the sketch after this list.)
  2. Thinking that surveys are worthless on the whole. They still tell us something, particularly if we know the responses to some questions might be skewed. In the example above, why would Americans tell pollsters they have more progressive racial attitudes than they actually do? They have indeed internalized something about race.
  3. Thinking that the goal is simply to measure attitudes as accurately as possible. People’s attitudes often don’t line up with their actions. Perhaps we need more measures of attitudes and behaviors rather than a single good one. The search result data cited above could supplement survey data and voting data to better inform us about how Americans think about race.
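Regarding the one-search margin mentioned in the first point, here is a quick sketch of why 486 to 485 is statistical noise. Treating each count as roughly Poisson (an assumption made for illustration), the uncertainty on the difference dwarfs a one-search gap:

```python
# Quick sketch: is a 486-to-485 margin meaningful? If each search count is treated
# as approximately Poisson, the standard error of the difference is about
# sqrt(486 + 485) ~= 31 searches -- far larger than the one-search gap.
from math import sqrt

count_a, count_b = 486, 485
difference = count_a - count_b
std_error = sqrt(count_a + count_b)

print(f"difference = {difference}, standard error ~ {std_error:.0f}")
print(f"difference / standard error = {difference / std_error:.2f}")  # ~0.03, i.e. noise
```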

Good data is foundational to doing good sociological work

I’ve had conversations in recent months with a few colleagues outside the discipline about debates within sociology over the work of ethnographers like Alice Goffman, Matt Desmond, and Sudhir Venkatesh. It is enlightening to hear how outsiders see the disagreements, and this has pushed me to consider more fully how I would explain the issues at hand. What follows is my one-paragraph response to what is at stake:

In the end, what separates the work of sociologists from perceptive non-academics or journalists? (An aside: many of my favorite journalists often operate like pop sociologists as they try to explain and not just describe social phenomena.) To me, it comes down to data and methods. This is why I enjoy teaching both our Statistics course and our Social Research course: undergraduates rarely come into them excited, but they are foundational to who sociologists are. What we want is data that is (1) scientific – reliable and valid – and (2) generalizable – allowing us to see patterns across individuals, cases, or settings. I don’t think it is a surprise that the three sociologists under fire above wrote ethnographies, where it is perhaps more difficult to fit the method under a scientific rubric. (I do think it can be done, but it doesn’t always appear that way to outsiders or even some sociologists.) Sociology is unique both in its methodological pluralism – we do everything from ethnography to historical analysis to statistical models to lab or natural experiments to mass surveys – and in its aim to find causal explanations for phenomena rather than just describing what is happening. Ultimately, if you can’t trust a sociologist’s data, why bother considering their conclusions, and why would you prioritize their explanations over those of an astute person on the street?

Caveats: I know no data is perfect, and sociologists are not in the business of “proving” things; rather, we look for patterns. There is also plenty of disagreement within sociology about these issues. In a perfect world, we would have researchers using different methods to examine the same phenomena and develop a more holistic picture. I also don’t mean to exclude the role of theory in my description above; data has to be interpreted. But if you don’t have good data to start with, the theories are abstractions.

Claim: we see more information today so we see more “improbable” events

Are more rare events happening in the world or are we just more aware of what is going on?

In other words, the more data you have, the greater the likelihood you’ll see wildly improbable phenomena. And that’s particularly relevant in this era of unlimited information. “Because of the Internet, we have access to billions of events around the world,” says Len Stefanski, who teaches statistics at North Carolina State University. “So yeah, it feels like the world’s going crazy. But if you think about it logically, there are so many possibilities for something unusual to happen. We’re just seeing more of them.” Science says that uncovering and accessing more data will help us make sense of the world. But it’s also true that more data exposes how random the world really is.
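A quick illustration of Stefanski’s point, with numbers chosen only for the sake of example: even a one-in-a-million event becomes a near-certainty once you observe enough independent events.

```python
# If an event has probability p on any single observation, the chance of seeing
# it at least once across n independent observations is 1 - (1 - p)**n.
p = 1e-6  # a "one in a million" event
for n in (1_000, 1_000_000, 1_000_000_000):
    at_least_once = 1 - (1 - p) ** n
    print(f"n = {n:>13,}: P(at least one occurrence) = {at_least_once:.4f}")
# With a billion observations -- roughly the "billions of events" now visible
# online -- witnessing the one-in-a-million event is essentially guaranteed.
```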

Here is an alternative explanation for why all these rare events seem to be happening: we are bumping up against our limited ability to predict all the complexity of the world.

All of this, though, ignores a more fundamental and unsettling possibility: that the models were simply wrong. That the Falcons were never 99.6 percent favorites to win. That Trump’s odds never fell as low as the polling suggested. That the mathematicians and statisticians missed something in painting their numerical portrait of the universe, and that our ability to make predictions was thus inherently flawed. It’s this feeling—that our mental models have somehow failed us—that haunted so many of us during the Super Bowl. It’s a feeling that the Trump administration exploits every time it makes the argument that the mainstream media, in failing to predict Trump’s victory, betrayed a deep misunderstanding about the country and the world and therefore can’t be trusted.

And maybe it isn’t very easy to reconcile these two explanations:

So: Which is it? Does the Super Bowl, and the election before it, represent an improbable but ultimately-not-confidence-shattering freak event? Or does it indicate that our models are broken, that—when it comes down to it—our understanding of the world is deeply incomplete or mistaken? We can’t know. It’s the nature of probability that it can never be disproven, unless you can replicate the exact same football game or hold the same election thousands of times simultaneously. (You can’t.) That’s not to say that models aren’t valuable, or that you should ignore them entirely; that would suggest that data is meaningless, that there’s no possibility of accurately representing the world through math, and we know that’s not true. And perhaps at some point, the world will revert to the mean, and behave in a more predictable fashion. But you have to ask yourself: What are the odds?

I know there is a lot of celebration of having so much information available today, but it isn’t necessarily easy to adjust to the change. Taking it all in requires effort on its own, but the hard work lies in interpreting it and knowing what to do with it all.

Perhaps a class in statistics – in addition to existing efforts involving digital or media literacy – could help many people better understand all of this.

Richard Florida: we lack systematic data to compare cities

As he considers Jane Jacobs’ impact, Richard Florida suggests we need more data about cities:

MCP: Some of the research around the built environment is pretty skimpy and not very scientific, in a lot of cases.

RF: Right. And it’s done by architects who are terrific, but are basically looking at it from the building level. We need a whole research agenda. A century or so ago Johns Hopkins University invented the teaching hospital, modern medicine. They said medicine could be advanced by underpinning the way doctors treat people and develop clinical methodologies with a solid, scientific research base. Think of it as a system that runs from laboratory to bedside. We don’t have that for cities and urbanism.

But at the same time we know that the city is the key economic and social unit of our time. Billions of people across the world are pouring into cities and we are spending trillions upon trillions of dollars building new cities and rebuilding, expanding and upgrading existing ones. We’re doing it with little in the way of systematic research. We lack even the most basic data we need to compare and assess cities around the world. There’s no comparable grand challenge that we have so terribly underfunded as cities and urbanism. We need to develop everything from the underlying science to better understand cities and their evolution, the systematic data to assess them and the educational and clinical protocols for building better, more prosperous and inclusive cities. Right now, mayors are out there winging it. Economic developers are out there winging it. There’s no clinical training program. There are some, actually, but they’re scattered about and they’re not having much impact. It’s going to take a big commitment. But we need to build the equivalent of the medical research infrastructure, with the equivalent of “teaching hospitals” for our cities.

When you think of it, cities are our greatest laboratories for advancing our understanding of the intersection of natural, physical, social and human environments—they’re our most complex organisms. This is going to be my next big research project: I’m calling it the Urban Genome Project. It’s what I hope to devote the rest of my career to.

The cities-as-laboratories language echoes that of the Chicago School. But much of the sociological literature suggests a basic tension in this area: how similar are cities to one another, and how different? Are there common processes across most or all cities that we can highlight and work with, or do their unique contexts limit how much generalizing can be done? Hence, we have a range of studies, from those examining large sets of cities at once or processes said to operate across all cities (as Florida argues in The Rise of the Creative Class) to studies of particular neighborhoods and cities that uncover their idiosyncratic patterns.

Of course, we could just look at cities like a physicist might and argue there are power laws underlying cities…
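For a sense of what that physicist’s view looks like, the urban scaling literature (for example, work by Bettencourt and West) models many city-level quantities as power laws of population, Y = Y0 * N^beta, with beta of roughly 1.15 for socioeconomic outputs such as wages or patents. A minimal sketch, with the constant and the exponent treated as illustrative values rather than fitted estimates:

```python
# Minimal sketch of superlinear urban scaling: Y = Y0 * N**beta. The beta ~ 1.15
# figure is the commonly cited value for socioeconomic outputs; Y0 is an arbitrary
# constant chosen only for illustration.
BETA = 1.15
Y0 = 1.0


def scaled_output(population: float) -> float:
    return Y0 * population ** BETA


small, large = 100_000, 1_000_000   # a tenfold difference in population
ratio = scaled_output(large) / scaled_output(small)
print(f"10x the population -> {ratio:.1f}x the output")  # ~14.1x, more than proportional
```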