What could lead to Americans considering what they want the suburbs to be

Yesterday, I wrote about competing visions of American suburbs. Under what circumstances might a national conversation, debate, and/or reckoning take place regarding what suburbs should be in the future? Here are a few possibilities:

  1. An election. As noted yesterday, elections can help to bring issues to the forefront. The suburbs are not a key issue in the 2020 presidential election but this does not mean they could not be down the road.
  2. Building concern about housing. The need for cheaper housing in certain metropolitan areas has led to local and state-level debate but this has rarely reached the national level. I am pessimistic about national-level discussions about and solutions for housing – but it could happen.
  3. Some sort of crisis or unusual occurrence in suburbia that pushes people to rethink what suburbs are about. Perhaps it is ongoing police violence – like in Ferguson, Missouri – or an unusual place – like Columbia, Maryland – that people want to emulate.
  4. Declining interest in living in suburbs among future generations. Whether millennials and their successors want to or can live in suburbs is up for debate.
  5. A redefinition of the American Dream away from single-family homes, driving, and private spaces. This could involve different kinds of spaces (perhaps more cosmopolitan canopies?), a declining interest in or ability to pursue homeownership compared to securing health care and basic income, or a rise in AI, robots, and technology that renders physical spaces less important than ever.
  6. Black swan events or large changes beyond the control of the average suburbanite. Imagine no more gasoline or a disease that strikes suburbanites at higher rates or a collapse of the global economy rendering the suburban lifestyle difficult. (Because these are black swan events, they are hard or impossible to predict.)

For roughly seventy years, the United States has promoted suburbs on a massive scale (and there is evidence that a suburban vision has existed for roughly 170 years). With a majority of Americans living in suburbs, it would take significant work or events to spark a robust conversation, and any subsequent wind-down of the suburbs and shift toward other spaces would likely take decades. At the same time, future researchers and pundits might look back at important conversations, events, decisions, or changes that started the United States down a path away from suburbs. Those precipitating factors could occur today, in the near future, somewhere down the road, or never. While the suburbs in the United States have tremendous inertia pushing them into the future, they do not necessarily have to continue.

Seeking insurance for black swan events

Lloyd’s of London is interested in black swan insurance that would help protect against losses from unusual events:

Commercial insurance market Lloyd’s has said insurers worldwide will pay out more than $100 billion in coronavirus-related claims this year.

But many firms are frustrated that their business interruption policies do not cover the pandemic and some in Europe and the United States are in dispute with insurers.

The Black Swan cover could be used to ensure payments after catastrophes such as a cyber attack or solar storm destroying critical infrastructure, as well as for pandemics, Lloyd’s said in a report published on Wednesday.

In The Black Swan, Nassim Nicholas Taleb defines black swan events this way:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable. (xxii)

What phenomena fall into this category? According to Taleb:

Fads, epidemics, fashion, ideas, the emergence of art genres and schools. All follow these Black Swan dynamics. (xxii)

It seems like a conundrum: how exactly can insurers provide monies for events that are unknown and unpredictable? One of the defining features of the insurance industry is the ability to estimate risks and likely payouts. A black swan event makes this very difficult, if not impossible. At the same time, we know black swan events are possible – even if we do not know which ones might occur or what new phenomena might arise – so having money available to address the situation seems wise.
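
To see why black swans break the usual actuarial machinery, here is a minimal Python sketch (the distributions and figures are hypothetical, chosen only for illustration): with thin-tailed losses, the average payout estimate stabilizes as data accumulates; with fat-tailed losses, a single extreme draw can swamp everything seen before, so the estimated premium never settles.

```python
import random

random.seed(42)

def estimated_premium(draw, n):
    """Naive premium estimate: the average of n simulated annual losses."""
    losses = [draw() for _ in range(n)]
    return sum(losses) / n

# Thin-tailed losses (hypothetical units): ordinary, insurable events.
thin = lambda: random.gauss(10, 3)

# Fat-tailed losses: Pareto with alpha < 2, so variance is infinite and
# a single "black swan" draw can dominate the entire sample.
fat = lambda: 10 * random.paretovariate(1.1)

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: thin-tailed estimate {estimated_premium(thin, n):7.2f}, "
          f"fat-tailed estimate {estimated_premium(fat, n):10.2f}")
```

The thin-tailed column converges; the fat-tailed column keeps jumping around, which is roughly the insurer’s predicament.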

It would be interesting to see how this plays out in the court of public opinion. When crisis hits, I would guess many people want governments and large corporations to be able to respond quickly and dispatch needed monies. Yet having a large slush fund or unlimited monies to address potential situations could strike some as problematic.

Using humorists to predict the future because they can push beyond plausibility

Predictions made by experts are often not very good, so why not let humorists try their hand at looking at the future?

This is not because “Simpsons” creator Matt Groening and his teams of writers through the decades are sinister geniuses. They are, of course, but the phenomenon of jokes coming uncannily true is not at all unique to “The Simpsons.” So at this time of year, when lots of people are making forecasts or looking back at how last year’s predictions went, I’d like to make the case that humorists may make the best futurists of all.

The writers of “The 80s” would not have won one of Philip Tetlock’s forecasting competitions: The great majority of their “predictions” were wildly wrong. Congress didn’t ban the consumption of meat, Muhammad Ali didn’t become chairman of the Joint Chiefs of Staff, Disney didn’t buy the United Kingdom, a musical version of “1984” starring Leif Garrett, Tracy Austin and Marlon Brando (as “Big Brother”) did not become the movie of the decade, cancer was not cured with “a substance secreted in the cranium of the baby harp seal when its head was struck repeatedly.” But given that the aim of the book was not to make predictions but to entertain, that was OK. It’s like with “The Simpsons”: You’re not watching it to get a rundown on the world to come; the fact that you sometimes do is a happy bonus…

The humorist’s approach to looking into the future bears some resemblance to scenario planning, a practice developed in the 1950s and 1960s at the Rand Corp. and Hudson Institute. Scenario planning involves coming up with alternative story lines of how things might plausibly develop in the future, and thinking about how a business or other organization can adapt to them. It’s not about picking the right scenario, but about opening your mind to different possibilities.

To make stories about the future funny, they usually have to be pushed beyond the bounds of plausibility. If they’re not pushed too far beyond, though, they can sometimes come true — with the advantage that few “serious” forecasters will have predicted them. The Trump presidency is a classic case of this. He had been talking about running since the late 1980s, but those in the media and political circles had learned over the years not to take him seriously. So it was left to the jokers.

Looking into the future is a difficult task since the future is a complex system with many variables at play. Even with all the data we have at our disposal these days, future trends do not necessarily follow from past results. This reminds me of Nassim Taleb’s writings from The Black Swan and onward: there are certain parts of reality that are fairly predictable, other areas that are complex but more knowable, and still other areas where we do not even know what we do not know. See this chart adapted from Taleb by Garry Peterson for an overview:

[Chart: Taleb’s quadrants]
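
One way to make the chart’s core distinction concrete is a quick simulation (a rough sketch in Python; the example domains are my own illustration, not Taleb’s code): in the predictable domains, no single observation moves the total much, while in the least knowable quadrant a single observation can dominate everything else.

```python
import random

random.seed(0)

def top_share(draw, n=10_000):
    """Share of the sample total contributed by the single largest draw."""
    xs = [draw() for _ in range(n)]
    return max(xs) / sum(xs)

# Predictable domain: human heights in cm -- no giant can skew an average.
heights = lambda: random.gauss(170, 10)

# Unpredictable domain: Pareto-distributed wealth -- one draw can
# dwarf the other 9,999 combined.
wealth = lambda: random.paretovariate(1.1)

print(f"largest height's share of the total:  {top_share(heights):.5f}")
print(f"largest fortune's share of the total: {top_share(wealth):.5f}")
```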

This also gets at an important aspect of creativity: being able to think beyond existing realities.

Another bonus of looking to humorists to think about the future: you might get some extra laughs along the way.

Claim: we see more information today so we see more “improbable” events

Are more rare events happening in the world or are we just more aware of what is going on?

In other words, the more data you have, the greater the likelihood you’ll see wildly improbable phenomena. And that’s particularly relevant in this era of unlimited information. “Because of the Internet, we have access to billions of events around the world,” says Len Stefanski, who teaches statistics at North Carolina State University. “So yeah, it feels like the world’s going crazy. But if you think about it logically, there are so many possibilities for something unusual to happen. We’re just seeing more of them.” Science says that uncovering and accessing more data will help us make sense of the world. But it’s also true that more data exposes how random the world really is.

Here is an alternative explanation for why all these rare events seem to be happening: we are bumping up against our limited ability to predict all the complexity of the world.

All of this, though, ignores a more fundamental and unsettling possibility: that the models were simply wrong. That the Falcons were never 99.6 percent favorites to win. That Trump’s odds never fell as low as the polling suggested. That the mathematicians and statisticians missed something in painting their numerical portrait of the universe, and that our ability to make predictions was thus inherently flawed. It’s this feeling—that our mental models have somehow failed us—that haunted so many of us during the Super Bowl. It’s a feeling that the Trump administration exploits every time it makes the argument that the mainstream media, in failing to predict Trump’s victory, betrayed a deep misunderstanding about the country and the world and therefore can’t be trusted.

And maybe it isn’t very easy to reconcile these two explanations:

So: Which is it? Does the Super Bowl, and the election before it, represent an improbable but ultimately-not-confidence-shattering freak event? Or does it indicate that our models are broken, that—when it comes down to it—our understanding of the world is deeply incomplete or mistaken? We can’t know. It’s the nature of probability that it can never be disproven, unless you can replicate the exact same football game or hold the same election thousands of times simultaneously. (You can’t.) That’s not to say that models aren’t valuable, or that you should ignore them entirely; that would suggest that data is meaningless, that there’s no possibility of accurately representing the world through math, and we know that’s not true. And perhaps at some point, the world will revert to the mean, and behave in a more predictable fashion. But you have to ask yourself: What are the odds?

I know there is a lot of celebration of having so much information available today, but adjusting to the changes isn’t necessarily easy. Taking it all in requires some effort on its own, but the hard work is in the interpretation and knowing what to do with it all.

Perhaps a class in statistics – in addition to existing efforts involving digital or media literacy – could help many people better understand all of this.
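
As a small example of the kind of calculation such a class might walk through: when you observe millions of independent events, a “one-in-a-million” coincidence becomes a near certainty. A quick sketch in Python:

```python
# Chance of seeing at least one "one-in-a-million" event among n
# independent observations: 1 - (1 - p)^n.
p = 1e-6

for n in (1_000, 1_000_000, 1_000_000_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>13,} events observed -> P(at least one) = {at_least_one:.4f}")
```

At a thousand observations the rare event is genuinely rare; at a billion – roughly the Internet-scale exposure Stefanski describes – it is all but guaranteed.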

“Normal accidents” and black swans in the complex systems of today

A sociological idea about the problems that can arise in complex systems is related to Taleb’s ideas of black swans:

This near brush with nuclear catastrophe, brought on by a single foraging bear, is an example of what sociologist Charles Perrow calls a “normal accident.” These frightening incidents are “normal” not because they happen often, but because they are almost certain to occur in any tightly connected complex system.

Today, our highly wired global financial markets are just such a system. And in recent years, aggressive traders have repeatedly played the role of the hungry bear, setting off potential disaster after potential disaster through a combination of human blunders and network failures…

In his book Normal Accidents, Perrow stresses the role that human error and mismanagement play in these scenarios. The important lesson: failures in complex systems are caused not only by hardware and software problems but also by people and their motivations.

See an earlier post dealing with the same sociological ideas. Nassim Taleb discusses this quite a bit and suggests that knowing about this complexity should lead us to different kinds of actions: trying to minimize disastrous risks while finding opportunities for extraordinary success (if there are inevitable yet unknown opportunities for crisis, there could also be moments where low-risk investments pay off spectacularly).
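
A rough sketch of that asymmetric logic, sometimes called a barbell strategy (my illustrative numbers, not Taleb’s): keep most resources where the worst case is capped and a small slice where the upside is large, so the maximum loss is known in advance while the occasional windfall remains possible.

```python
import random

random.seed(1)

def barbell_year(safe_frac=0.9):
    """One hypothetical year: 90% in a near-riskless asset earning ~2%,
    10% in long-shot bets that usually expire worthless but occasionally
    pay 20x. The downside is capped at losing the risky slice."""
    safe = safe_frac * 1.02
    risky = (1 - safe_frac) * (20.0 if random.random() < 0.05 else 0.0)
    return safe + risky

years = [barbell_year() for _ in range(100_000)]
print(f"worst year:   {min(years):.3f}x of capital")   # known in advance
print(f"average year: {sum(years) / len(years):.3f}x of capital")
print(f"best year:    {max(years):.3f}x of capital")   # the rare windfall
```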

If these are inherent traits of complex systems, does this mean more people will argue against such systems in the future? I could imagine some claiming this means we should have smaller systems and more local control. However, we may be at the point where even much smaller groups can’t escape a highly interdependent world. And, as sociologist Max Weber noted, bureaucratic structures (a classic example of complex organizations or systems) may have lots of downsides but they are relatively efficient at dealing with complex concerns. Take the recent arguments about health care: people might not like the government handling more of it but even without government control, there are still plenty of bureaucracies involved, it is a complex system, and there is plenty of potential for things to go wrong.

Risk and reward as more complexity leads to new and more problems

In discussing the recent fine levied against BP for the 2010 oil spill in the Gulf of Mexico, an interesting question can be raised: are events and problems like this simply inevitable given the growing complexity of society?

In 1984, a Yale University sociologist named Charles Perrow published a book called “Normal Accidents: Living with High-Risk Technologies.” He argued that as technologies become more complex, accidents become inevitable.

The more complex safety features that are built in, the more likely it is that something will go wrong. You not only add technical complexity (more things to go wrong) but you add a human element of complacency. The more often things don’t go wrong, the more likely it is that people think they won’t. The phrase for this is “normalization of deviance,” coined by Boston College sociologist Diane Vaughan, part of the team that examined the 1986 explosion of space shuttle Challenger.

“Normal accident” and “normalization of deviance” come to mind because 10 days ago, the oil company BP agreed to plead guilty to 12 felony and two misdemeanor criminal charges in connection with the 2010 explosion of the Deepwater Horizon drilling rig in the Gulf of Mexico. Eleven workers were killed and nearly 5 million barrels of oil (210 million gallons) poured into the Gulf over 87 days…

But it requires complex systems that will, at some point, fail. Politically, the government can only seek to explain those risks, try to minimize them with tough regulation and make sure those who take big risks have the means to redress inevitable failure.

If these sorts of events are inevitable given more complexity and activity (particularly in drilling and extraction), how do we balance the risks and rewards of such activity? How much money and effort should be spent trying to minimize risky outcomes? This is a complex social question that involves a number of factors. Unfortunately, such discussions often happen after the fact rather than ahead of possible occurrences. This is what Nassim Taleb discusses in The Black Swan: we can do certain things to prepare for, or at least think about, known and unknown events. We shouldn’t be surprised that oil accidents happen and should have some idea of how to tackle the problem or make things better after the fact. A fine against the company is punitive, but will it remedy the consequences of the event or guarantee that no such event happens in the future? Probably not.
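
One conventional way to frame that balancing question (with entirely hypothetical numbers) is to weigh mitigation spending against the expected loss it averts; the catch, in Taleb’s terms, is that for black swans the probability term is exactly what we cannot estimate.

```python
def expected_net_benefit(p_accident, loss, mitigation_cost, risk_reduction):
    """Expected value of a safety program: the expected loss it averts
    minus what the program costs."""
    averted = p_accident * risk_reduction * loss
    return averted - mitigation_cost

# Hypothetical: a 1% annual chance of a $20B spill; a $100M/year safety
# program cuts that risk in half. Result: exactly break-even ($0).
print(expected_net_benefit(p_accident=0.01,
                           loss=20_000_000_000,
                           mitigation_cost=100_000_000,
                           risk_reduction=0.5))
```

Shift the probability estimate even slightly and the answer flips, which helps explain why these debates tend to happen after the fact.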

At the same time, I wonder if such events are more difficult for us to understand today because we do have strong narratives of progress. Although it is not often stated this explicitly, we tend to think such problems can be eliminated through technology, science, and reason. Yet, complex systems have points of frailty. Perhaps technology hasn’t been tested in all circumstances. Perhaps unforeseen or unpredictable environmental or social forces arise. And, perhaps most of all, these systems tend to involve humans who make mistakes (unintentionally or intentionally). This doesn’t necessarily mean that we can’t strive for improvements but it also means we should keep in mind our limitations and the possible problems that might arise.

A call to collect better data in order to predict economic crises

Economist Robert Shiller says that we would be better able to predict economic crises if we only had better data:

Eventually, these advances led to quantitative macroeconomic models with substantial predictive power — and to a better understanding of the economy’s instabilities. It is likely that the “great moderation,” the relative stability of the economy in the years before the recent crisis, owes something to better public policy informed by that data.

Since then, however, there hasn’t been a major revolution in data collection. Notably, the Flow of Funds Accounts have become less valuable. Over the last few decades, financial institutions have taken on systemic risks, using leverage and derivative instruments that don’t show up in these reports.

Some financial economists have begun to suggest the kinds of measurements of leverage and liquidity that should be collected. We need another measurement revolution like that of G.D.P. or flow-of-funds accounting. For example, Markus Brunnermeier of Princeton, Gary Gorton of Yale and Arvind Krishnamurthy of Northwestern are developing what they call “risk topography.” They explain how modern financial theory can guide the collection of new data to provide revealing views of potentially big economic problems.

Even if more data were collected, it would still require interpretation. If we had the right data before the current economic crisis, I wonder how confident Shiller would be that we would have made the right predictions (50%? 70%? 95%?). From the public narrative that has developed, it looks like there was enough evidence that the mortgage industry was doing some interesting things but few people were looking at the data or putting the story together.
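
To make the quoted point about hidden risk concrete (all numbers invented for illustration): a classic balance-sheet leverage ratio can look tame while off-balance-sheet derivative exposure – the kind the Flow of Funds Accounts miss – multiplies the real exposure.

```python
def leverage_ratio(exposure, equity):
    """Leverage: dollars of exposure per dollar of equity."""
    return exposure / equity

# Hypothetical bank, $ billions.
equity = 10
on_balance_assets = 150
derivative_exposure = 450  # notional exposure absent from standard reports

print(f"reported leverage:         {leverage_ratio(on_balance_assets, equity):.0f}x")
print(f"leverage with derivatives: "
      f"{leverage_ratio(on_balance_assets + derivative_exposure, equity):.0f}x")
```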

And for the future, do we even know what data we might need to be looking at in order to figure out what might go wrong next?

The predictive power of sociology and learning from the past

In recent years, the predictive element of social science has been discussed by a few people: how much can we use data from the past to predict the future? In an interview with Scientific American, a mathematical sociologist who works at Yahoo! Labs talks about our predictive abilities:

A big part of your book deals with the problem of ignoring failures—a selective reading of the past to draw erroneous conclusions, which reminds me of the old story about the skeptic who hears about sailors who survived a shipwreck supposedly because they’d prayed to the gods. The skeptic asked, “What about the people who prayed and perished?”
Right—if you look at successful companies or shipwrecked people, you don’t see the ones who didn’t make it. It’s what sociologists call “selection on the dependent variable,” or what in finance is called survivorship bias. If we collected all the data instead of just some of it, we could learn more from the past than we do. It’s also like Isaiah Berlin’s distinction between hedgehogs and foxes. The famous people in history were hedgehogs, because when those people win they win big, but there are lots of failed hedgehogs out there.

Other scholars have pointed out that ignoring this hidden history of failures can lead us to take bigger risks than we might had we seen the full distribution of past outcomes. What other problems do you see with our excessive focus on the successful end of the distribution?

It causes us to misattribute the causes of success and failure: by ignoring all the nonevents and focusing only on the things that succeed, we don’t just convince ourselves that things are more predictable than they are; we also conclude that these people deserved to succeed—they had to do something right, otherwise why were they successful? The answer is random chance, but that would cause us to look at them in a different light, and changes the nature of reward and punishment.
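
The “selection on the dependent variable” point is easy to demonstrate with a few lines of Python (a toy model with made-up odds): give every firm identical coin-flip fortunes, then look only at the survivors, and their track record looks like skill.

```python
import random

random.seed(7)

def run_firm(years=10):
    """A firm whose value rises 30% or falls 30% each year, 50/50.
    No skill is involved anywhere."""
    value = 1.0
    for _ in range(years):
        value *= 1.3 if random.random() < 0.5 else 0.7
    return value

firms = [run_firm() for _ in range(10_000)]
survivors = [v for v in firms if v > 1.0]  # the ones the business books profile

print(f"average outcome, all firms:      {sum(firms) / len(firms):.2f}x")
print(f"average outcome, survivors only: {sum(survivors) / len(survivors):.2f}x")
print(f"share of firms that 'succeeded': {len(survivors) / len(firms):.1%}")
```

Study only the second line and success looks explainable; include the whole distribution and it looks like what it is: chance.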

Interesting material, and Watts’s just-published book (Everything Is Obvious: *Once You Know the Answer) sounds worthwhile. There are also some interesting thoughts later in the interview about how information in digital social networks doesn’t really get passed along through influential people.

I haven’t seen too much discussion within sociology about predictive abilities: how much do we suffer from these blind spots that Watts and others point out?

(As a reminder, Nassim Taleb has also written well on this subject in The Black Swan.)

Thinking about a legal framework for a potential apocalypse

This story about the State of New York thinking through the legal challenges of an apocalyptic event might cause one to wonder: why spend time on this when there are other pressing concerns? Here is a description of some of the issues that could arise should an apocalypse occur:

Quarantines. The closing of businesses. Mass evacuations. Warrantless searches of homes. The slaughter of infected animals and the seizing of property. When laws can be suspended and whether infectious people can be isolated against their will or subjected to mandatory treatment. It is all there, in dry legalese, in the manual, published by the state court system and the state bar association.

The most startling legal realities are handled with lawyerly understatement. It notes that the government has broad power to declare a state of emergency. “Once having done so,” it continues, “local authorities may establish curfews, quarantine wide areas, close businesses, restrict public assemblies and, under certain circumstances, suspend local ordinances.”…

“It is a very grim read,” Mr. Younkins said. “This is for potentially very grim situations in which difficult decisions have to be made.”…

The manual provides a catalog of potential terrorism nightmares, like smallpox, anthrax or botulism episodes. It notes that courts have recognized far more rights over the past century or so than existed at the time of Typhoid Mary’s troubles. It details procedures for assuring that people affected by emergency rules get hearings and lawyers. It mentions that in the event of an attack, officials can control traffic, communications and utilities. If they expect an attack, it says, they can compel mass evacuations.

But the guide also presents a sober rendition of what the realities might be in dire times. The suspension of laws, it says, is subject to constitutional rights. But then it adds, “This should not prove to be an obstacle, because federal and state constitutional restraints permit expeditious actions in emergency situations.”

Isn’t it better that authorities are doing some thinking about these situations now rather than simply reacting if something major happens? This reminds me of Nassim Taleb’s book The Black Swan, where he argues that a problem we face as a society is that we don’t consider the odd things that could, and occasionally still do, happen. Taleb suggests we tend to extrapolate from past historical events, but this is a poor predictor of future happenings.
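
Taleb’s favorite illustration of that extrapolation problem is the turkey, sketched here in a few lines of Python (my rendering of the thought experiment, not his code):

```python
# The "turkey problem": 1,000 days of feeding make the turkey ever more
# confident that humans are benevolent -- right up until Thanksgiving.
history = [1] * 1000  # 1 = "a good day," for every day on record

def naive_forecast(past):
    """Extrapolate tomorrow from the average of all past days."""
    return sum(past) / len(past)

print(f"forecast for day 1,001: {naive_forecast(history):.2f} (another good day)")
print("actual day 1,001: the axe")  # the black swan the data never showed
```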

Depending on the size or scope of the problem, the government may be limited or even unable to respond. Then we would have a landscape like the one painted by numerous books and movies of the last few decades, where every person simply has to find a way to survive. But even a limited yet effective government response would be better than no response.

It would be interesting to know how much time has been spent putting together this manual.