“Tiny Houses Are Big” – with 10,000 total in the United States

Tiny houses get a lot of attention – including this recent Parade story – but rarely are numbers provided about how big (or small) this trend really is. The Parade story did provide some data (though without any indication of how this was measured) on the number of tiny houses in the US. Ready for the figure?

10,000.

Without much context, it is hard to know what to do with this figure or how accurate it might be. Assuming the figure’s veracity, is that a lot of tiny houses? Not that many? Some comparisons might help:

Between February 2016 and March 2017, housing starts ran at a seasonally adjusted annual rate of over 1,000,000 in each month. (National Association of Home Builders) Within data going back to 1959, the lowest point for housing starts after the 2000s housing bubble burst was an annualized rate of about 500,000. (Census Bureau data at TradingEconomics.com)

The RV industry shipped over 430,000 units in 2016, up from a recent low point in 2009, when only 165,000 units were shipped. (Recreation Vehicle Industry Association)

The number of manufactured homes shipped each year in recent years – 2014 to 2016 – has surpassed 60,000. (Census Bureau)

The percentage of new homes under 1,400 square feet has actually dropped since 1999, falling to 7% in 2016. (Census Bureau)

Based on these comparisons, 10,000 units is not much at all; it is barely a drop in the bucket of all housing.

Perhaps the trend is sharply on the rise? There is a little evidence of this. I wrote my first post here on tiny houses back in 2010, and it addressed how to measure the tiny house trend. The cited article in that post included measures like the number of visitors to a tiny house blog and sales figures from tiny house builders. Would the number of tiny house shows on HGTV and similar networks provide some data? All trends have to start somewhere – with a small number of occurrences – but it doesn’t seem like the tiny house movement is taking off exponentially.

Ultimately, I would ask for more and better data on tiny houses. Clearly, there is some interest. Yet, calling this a major trend would be misleading.

Measuring attitudes by search results rather than surveys?

An author suggests Google search result data gives us better indicators of attitudes toward insecurity, race, and sex than surveys:

I think there’s two. One is depressing and kind of horrifying. The book is called Everybody Lies, and I start the book with racism and how people were saying to surveys that they didn’t care that Barack Obama was black. But at the same time they were making horrible racist searches, and very clearly the data shows that many Americans were not voting for Obama precisely because he was black.

I started the book with that, because that is the ultimate lie. You might be saying that you don’t care that [someone is black or a woman], but that really is driving your behavior. People can say one thing and do something totally different. You see the darkness that is often hidden from polite society. That made me feel kind of worse about the world a little bit. It was a little bit frightening and horrifying.

But, I think the second thing that you see is a widespread insecurity, and that made me feel a little bit better. I think people put on a front, whether it’s to friends or on social media, of having things together and being sure of themselves and confident and polished. But we’re all anxious. We’re all neurotic.

That made me feel less alone, and it also made me more compassionate to people. I now assume that people are going through some sort of struggle, even if you wouldn’t know that from their Facebook posts.

We know surveys have flaws and there are multiple ways they can be skewed – from sampling to bad questions to nonresponse to social desirability bias (the issue at hand here).

But these flaws wouldn’t lead me to these conclusions:

  1. Thinking that search result data provides better information. Who is doing the searching? Are they a representative population? How clear are the patterns? (It is common to see stories based on such data that provide no actual numbers: “Illinois” might be the most misspelled word in the state, for example, but by a one-search margin of 486 searches to 485.)
  2. Thinking that surveys are worthless on the whole. They still tell us something, particularly if we know the responses to some questions might be skewed. In the example above, why would Americans tell pollsters they have more progressive racial attitudes than they do? They have indeed internalized something about race.
  3. Thinking that attitudes simply need to be measured as accurately as possible. People’s attitudes often don’t line up with their actions. Perhaps we need more measures of attitudes and behaviors rather than a single good one. The search result data cited above could supplement survey data and voting data to better inform us about how Americans think about race.
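
The one-search margin in the “Illinois” example is a place where a quick calculation helps. Here is a minimal sketch – the 486-to-485 counts come from the hypothetical above, and treating each misspelling as an independent 50/50 draw is my simplifying assumption – showing that such a split is statistically indistinguishable from a tie:

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test p-value: the probability of an
    outcome at least as extreme (as unlikely) as k under Binomial(n, p)."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = pmf[k]
    # Sum the probabilities of all outcomes no more likely than the observed one
    return sum(q for q in pmf if q <= observed + 1e-12)

# 486 "Illinois" misspellings vs. 485 for the runner-up, treated as
# 486 successes in 971 trials against a null of an even split.
p_value = binom_two_sided_p(486, 971)
print(round(p_value, 3))  # → 1.0: a one-search edge is exactly what a 50/50 split predicts
```

With a p-value this high, a 486-to-485 split carries essentially no evidence that one word is “really” more misspelled than the other.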

Good data is foundational to doing good sociological work

I’ve had conversations in recent months with a few colleagues outside the discipline about debates within sociology over the work of ethnographers like Alice Goffman, Matt Desmond, and Sudhir Venkatesh. It is enlightening to hear how outsiders see the disagreements and this has pushed me to consider more fully how I would explain the issues at hand. What follows is my one paragraph response to what is at stake:

In the end, what separates the work of sociologists from perceptive non-academics or journalists? (An aside: many of my favorite journalists often operate like pop sociologists as they try to explain and not just describe social phenomena.) To me, it comes down to data and methods. This is why I enjoy teaching both our Statistics course and our Social Research course: undergraduates rarely come into them excited but the courses are foundational to who sociologists are. What we want is data that is (1) scientific – reliable and valid – and (2) generalizable – allowing us to see patterns across individuals and cases or settings. I don’t think it is a surprise that the three sociologists under fire above wrote ethnographies, where it is perhaps more difficult to fit the method under a scientific rubric. (I do think it can be done but it doesn’t always appear that way to outsiders or even some sociologists.) Sociology is unique both in its methodological pluralism – we do everything from ethnography to historical analysis to statistical models to lab or natural experiments to mass surveys – and in its aim to find causal explanations for phenomena rather than just describe what is happening. Ultimately, if you can’t trust a sociologist’s data, why bother considering their conclusions, and why would you prioritize their explanations over those of an astute person on the street?

Caveats: I know no data is perfect, and sociologists are not in the business of “proving” things; rather, we look for patterns. There is also plenty of disagreement within sociology about these issues. In a perfect world, we would have researchers using different methods to examine the same phenomena and develop a more holistic approach. I also don’t mean to exclude the role of theory in my description above; data has to be interpreted. But if you don’t have good data to start with, the theories are abstractions.

Claim: we see more information today so we see more “improbable” events

Are more rare events happening in the world or are we just more aware of what is going on?

In other words, the more data you have, the greater the likelihood you’ll see wildly improbable phenomena. And that’s particularly relevant in this era of unlimited information. “Because of the Internet, we have access to billions of events around the world,” says Len Stefanski, who teaches statistics at North Carolina State University. “So yeah, it feels like the world’s going crazy. But if you think about it logically, there are so many possibilities for something unusual to happen. We’re just seeing more of them.” Science says that uncovering and accessing more data will help us make sense of the world. But it’s also true that more data exposes how random the world really is.
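
The quoted logic can be made concrete. Under a standard independence assumption (my simplification; the event probability below is invented for illustration), the chance of witnessing at least one “one in a million” event grows quickly with the number of events you are exposed to:

```python
def p_at_least_one(p_event, n_observations):
    """P(at least one occurrence in n independent observations) = 1 - (1 - p)^n."""
    return 1 - (1 - p_event) ** n_observations

p = 1e-6  # a "one in a million" event
for n in (1_000, 1_000_000, 10_000_000):
    print(n, round(p_at_least_one(p, n), 3))
# 1,000 observations  → 0.001 (almost never seen)
# 1,000,000           → 0.632 (more likely than not)
# 10,000,000          → 1.0   (virtually certain)
```

Nothing about the event got more common; only our exposure changed, which is exactly Stefanski’s point.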

Here is an alternative explanation for why all these rare events seem to be happening: we are bumping up against our limited ability to predict all the complexity of the world.

All of this, though, ignores a more fundamental and unsettling possibility: that the models were simply wrong. That the Falcons were never 99.6 percent favorites to win. That Trump’s odds never fell as low as the polling suggested. That the mathematicians and statisticians missed something in painting their numerical portrait of the universe, and that our ability to make predictions was thus inherently flawed. It’s this feeling—that our mental models have somehow failed us—that haunted so many of us during the Super Bowl. It’s a feeling that the Trump administration exploits every time it makes the argument that the mainstream media, in failing to predict Trump’s victory, betrayed a deep misunderstanding about the country and the world and therefore can’t be trusted.

And maybe it isn’t very easy to reconcile these two explanations:

So: Which is it? Does the Super Bowl, and the election before it, represent an improbable but ultimately-not-confidence-shattering freak event? Or does it indicate that our models are broken, that—when it comes down to it—our understanding of the world is deeply incomplete or mistaken? We can’t know. It’s the nature of probability that it can never be disproven, unless you can replicate the exact same football game or hold the same election thousands of times simultaneously. (You can’t.) That’s not to say that models aren’t valuable, or that you should ignore them entirely; that would suggest that data is meaningless, that there’s no possibility of accurately representing the world through math, and we know that’s not true. And perhaps at some point, the world will revert to the mean, and behave in a more predictable fashion. But you have to ask yourself: What are the odds?

I know there is a lot of celebration of having so much available information today, but adjusting to the changes isn’t necessarily easy. Taking it all in requires some effort on its own; the harder work is in the interpretation and knowing what to do with it all.

Perhaps a class in statistics – in addition to existing efforts involving digital or media literacy – could help many people better understand all of this.

Richard Florida: we lack systematic data to compare cities

As he considers Jane Jacobs’ impact, Richard Florida suggests we need more data about cities:

MCP: Some of the research around the built environment is pretty skimpy and not very scientific, in a lot of cases.

RF: Right. And it’s done by architects who are terrific, but are basically looking at it from the building level. We need a whole research agenda. A century or so ago Johns Hopkins University invented the teaching hospital, modern medicine. They said medicine could be advanced by underpinning the way doctors treat people and develop clinical methodologies with a solid, scientific research base. Think of it as a system that runs from laboratory to bedside. We don’t have that for cities and urbanism. But at the same time we know that the city is the key economic and social unit of our time. Billions of people across the world are pouring into cities and we are spending trillions upon trillions of dollars building new cities and rebuilding, expanding and upgrading existing ones. We’re doing it with little in the way of systematic research. We lack even the most basic data we need to compare and assess cities around the world. There’s no comparable grand challenge that we have so terribly underfunded as cities and urbanism. We need to develop everything from the underlying science to better understand cities and their evolution, to the systematic data to assess them, to the educational and clinical protocols for building better, more prosperous and inclusive cities. Right now, mayors are out there winging it. Economic developers are out there winging it. There’s no clinical training program. There are some, actually, but they’re scattered about and they’re not having much impact. It’s going to take a big commitment. But we need to build the equivalent of the medical research infrastructure, with the equivalent of “teaching hospitals” for our cities. When you think of it, cities are our greatest laboratories for advancing our understanding of the intersection of natural, physical, social and human environments – they’re our most complex organisms. This is going to be my next big research project: I’m calling it the Urban Genome Project. It’s what I hope to spend the rest of my career doing.

The cities-as-laboratories language echoes that of the Chicago School. But much of the sociological literature suggests a basic tension in this area: how much are cities alike compared to how much are they different? Are there common processes across most or all cities that we can highlight and work with, or do their unique contexts limit how much generalizing can be done? Hence, we have a range of studies, from examinations of large sets of cities at once or processes across all cities (like Florida would argue in The Rise of the Creative Class) to studies of particular neighborhoods and cities that uncover their idiosyncratic patterns.

Of course, we could just look at cities like a physicist might and argue there are power laws underlying cities…
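
As a rough illustration of that power-law idea: under Zipf’s law, the k-th largest city has a population proportional to 1/k, so a log-log regression of population on rank yields a slope near -1. A sketch with idealized, invented populations:

```python
import math

# Hypothetical city populations following Zipf's law exactly:
# the k-th largest city has population largest_pop / k.
largest_pop = 8_000_000
pops = [largest_pop / rank for rank in range(1, 101)]

# Fit log(pop) = a + b * log(rank) by ordinary least squares;
# a Zipf-style power law implies a slope b near -1.
xs = [math.log(r) for r in range(1, 101)]
ys = [math.log(p) for p in pops]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(b, 3))  # → -1.0 for this idealized data
```

Real city-size data is noisier, but this is the kind of regularity the physics-style view of cities has in mind.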

Social science assumes “human living is not random”

Noted sociologist of religion Grace Davie gives a brief description of her work:

My work, like that of all social scientists, rests on the assumption that human living is not random. Why is it, for example, that Christian churches in the West are disproportionately attended by women? That requires an explanation.

This is a good starting point for describing the social sciences. There are patterns to human social life, and we can’t rely on anecdotes or casual interpretations to tell us whether there are patterns or how to understand them. We want to apply a scientific perspective to these patterns and explain why those patterns, and not others, exist. Then, we might delve deeper into levels of analysis, theoretical assumptions, and techniques of data collection and analysis – three areas where the various social science disciplines differ.

Researchers fact-checking their own ethnographic data

Toward the end of a long profile of sociologist Matthew Desmond is an interesting section regarding ethnographic methods:

Desmond has done an especially good job spelling out precisely how he went about his research and verified his findings, says Klinenberg. At the start of Evicted, an author’s note states that most of the events in the book took place between May 2008 and December 2009. Except where it says otherwise in the notes, Desmond writes, all events that happened between those dates were observed firsthand. Every quotation was “captured by a digital recorder or copied from official documents,” he adds. He also hired a fact-checker who corroborated the book by combing public records, conducting some 30 interviews, and asking him to produce field notes that verified a randomly selected 10 percent of its pages.

Desmond has been equally fastidious about taking himself out of the text. Unlike many ethnographic studies, including Goffman’s, his avoids the first person. He wants readers to react directly to the people in Evicted. “Ethnography often provokes very strong feelings,” he says. “So I wanted the book to do that. But not about me.”

Ethnographers should be more skeptical about their data, Desmond believes. In his fieldwork, for example, he saw women getting evicted at higher rates than men. But when he crunched the data, analyzing hundreds of thousands of court records, it turned out that was only the case in predominantly black and Latino neighborhoods. Women in white neighborhoods were not evicted at higher rates than men. The field had told him a half-truth.

Still, beyond acknowledging that the reception of Goffman’s book shaped his fact-checking, he will say nothing about the controversy. Even an old journalism trick — letting a silence linger, in the hope that an interviewee will fill it — fails to wring a quote from him. “This is such a good technique,” he says after a few seconds, “where you just kind of let the person talk.” Then he sips his Diet Coke, waiting for the next question.

This gets at some basic questions about what ethnography is. Should it be participant observation with a reflexive and involved researcher? Letting the research subjects speak for themselves with minimal interpretation? Should it involve fact-checking and verifying data? Each of these could have its merits, and sociologists pursue different approaches. Contrasting the last two, for example, how people describe their own circumstances and understandings could be very important even if what is reported is not necessarily true. On the other hand, more and more ethnographies involve reflexive commentary from the researcher on how their presence and personal characteristics influenced the data collection and interpretation.

It sounds to me like Desmond is doing some mixed methods work: starting with ethnographic data that he directly observes but then using secondary analysis (in the example above, using official records) to better understand both the micro level that he observed as well as the broader patterns. This means more work for each study but also more comprehensive data.
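
Desmond’s eviction example shows why crunching the full records matters: an overall gap can be driven entirely by particular subgroups. A minimal sketch with entirely invented counts (not Desmond’s actual data), in the spirit of his court-record analysis:

```python
# Hypothetical eviction counts by neighborhood type and gender.
records = {
    # neighborhood type: (women evicted, women total, men evicted, men total)
    "black/latino": (600, 5_000, 400, 5_000),
    "white":        (250, 5_000, 250, 5_000),
}

def rate(evicted, total):
    return evicted / total

# Subgroup rates: the gender gap appears only in one neighborhood type.
for hood, (we, wt, me, mt) in records.items():
    print(hood, round(rate(we, wt), 3), round(rate(me, mt), 3))
# black/latino 0.12 0.08   (women evicted at a higher rate)
# white        0.05 0.05   (no gap)

# Pooled across all neighborhoods, a gap still shows up...
women = rate(600 + 250, 10_000)  # 0.085
men = rate(400 + 250, 10_000)    # 0.065
print(round(women, 3), round(men, 3))
# ...but the subgroup breakdown reveals it is driven entirely by one
# neighborhood type – the "half-truth" the field alone suggested.
```

The same few lines of disaggregation are what turned Desmond’s field impression into a more precise claim.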