Bringing sociology and understanding culture to Wall Street

A sociologist argues for the value of bringing a sociological perspective to Wall Street after noting how William Dudley, the president of the Federal Reserve Bank of New York, recently used the term “culture” repeatedly and defined it:

“Culture relates to the implicit norms that guide behavior in the absence of regulations or compliance rules—and sometimes despite those explicit restraints. … Culture reflects the prevailing attitudes and behaviors within a firm.  It is how people react not only to black and white, but to all of the shades of grey. Like a gentle breeze, culture may be hard to see, but you can feel it. Culture relates to what “should” I do, and not to what “can” I do.”

Dudley has a doctorate in economics, and spent a decade as chief economist at Goldman Sachs. But in his remarks he sounded more like a sociologist than an economist. His many mentions of “culture” could be significant. I’m hoping they mark the beginning of a change in how regulators think about reining in law-breaking and excessive risk-taking at banks. I’m also hoping that I had something to do with them…

So I studied sociology, and for my doctoral dissertation focused on the organizational culture of Goldman Sachs. The dissertation became a book, titled What Happened to Goldman Sachs: An Insider’s Story of Organizational Drift and Its Unintended Consequences (HBR Press, 2013). One of the changes I document in the book is how Goldman drifted from a focus on ethical standards of behavior to legal ones — from what one “should” do to what one “can” do.

After the book was published, Dudley got in touch. I met with him and his people, and discussed what I had learned in my study of sociology and, in particular, my in-depth study of Goldman. I made recommendations on how to improve regulation. Also, I sent him two pieces I wrote for HBR.org, one on the importance of focusing on organizational behavior and not just individuals, the other asserting that culture had more to do with the financial crisis than leverage ratios did.

One of the key conclusions I drew from my study was that to achieve sustained success and avoid firm-endangering risks, a firm like Goldman has to cultivate financial interdependence among its top employees.

When employees can make big financial decisions on their own, many will take big risks and a lot of money can be lost. This reminds me of some of the arguments of Nassim Taleb, who suggests losses should not be shared, especially in unpredictable areas like the stock market or with innovative, cutting-edge financial instruments. Instead, organizational cultures could promote more prudent financial decisions that are still innovative and profitable but limit the possibility of major black swan losses.

“Normal accidents” and black swans in the complex systems of today

A sociological idea about the problems that can arise in complex systems is related to Taleb’s ideas of black swans:

This near brush with nuclear catastrophe, brought on by a single foraging bear, is an example of what sociologist Charles Perrow calls a “normal accident.” These frightening incidents are “normal” not because they happen often, but because they are almost certain to occur in any tightly connected complex system.

Today, our highly wired global financial markets are just such a system. And in recent years, aggressive traders have repeatedly played the role of the hungry bear, setting off potential disaster after potential disaster through a combination of human blunders and network failures…

In his book Normal Accidents, Perrow stresses the role that human error and mismanagement play in these scenarios. The important lesson: failures in complex systems are caused not only by hardware and software problems but also by people and their motivations.

See an earlier post dealing with the same sociological ideas. Nassim Taleb discusses this quite a bit and suggests that knowing about this complexity should lead us to act differently: we should try to minimize the disastrous risks while staying open to opportunities for extraordinary success (if inevitable yet unpredictable crises lurk in complex systems, there may also be moments when low-risk investments pay off spectacularly).
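
To see why Perrow says such failures are “almost certain,” it helps to run the arithmetic. Here is a minimal sketch (a toy model of my own, not anything from Perrow’s or Taleb’s books) showing that even when each component of a system is individually very reliable, the chance that a large, tightly connected system avoids every failure shrinks toward zero:

```python
# Toy model: a tightly connected system fails if ANY of its n components
# fails. Even with very reliable components, P(at least one failure)
# = 1 - (1 - p)^n climbs toward certainty as n grows.

def system_failure_prob(p: float, n: int) -> float:
    """Probability that at least one of n independent components fails,
    where each component fails with probability p."""
    return 1 - (1 - p) ** n

p = 0.001  # each component fails only once per 1,000 operations
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} components -> P(system failure) = {system_failure_prob(p, n):.3f}")

# Output:
#     10 components -> P(system failure) = 0.010
#    100 components -> P(system failure) = 0.095
#   1000 components -> P(system failure) = 0.632
#  10000 components -> P(system failure) = 1.000
```

The model assumes failures are independent, which if anything understates the danger; Perrow’s point is precisely that in tightly coupled systems one component’s failure triggers others.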

If these are inherent traits of complex systems, does this mean more people will argue against such systems in the future? I could imagine some claiming this means we should have smaller systems and more local control. However, we may be at the point where even much smaller groups can’t escape a highly interdependent world. And, as sociologist Max Weber noted, bureaucratic structures (a classic example of complex organizations or systems) may have lots of downsides, but they are relatively efficient at dealing with complex concerns. Take the recent arguments about health care: people might not like the government handling more of it, but even without government control there are plenty of bureaucracies involved; it is a complex system, and there is plenty of potential for things to go wrong.

Krugman: prediction problems in economics due to the “sociology of economics”

Looking at the predictive abilities of macroeconomics, Paul Krugman suggests there is an issue with the “sociology of economics”:

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and rewards research that fits preconceptions and uses hard math instead.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.

This is an occasional discussion in social sciences like economics or sociology: how much are they really sciences in the sense of making testable predictions (not about the natural world but about social behavior), and how much are they more interpretive? I’m not surprised Krugman takes this stance, but it is interesting that he locates the issue within the discipline itself, which he says rewards the wrong things. If this is the case, what could be done to reward successful predictions? At this point, Krugman is raising a problem without offering much of a solution. As a number of people, like Nassim Taleb and Nate Silver, have noted in recent years, making predictions is quite difficult and requires both a more humble approach and particular methodological and statistical tools.

Argument: humans like causation because they like to feel in control

Here is an interesting piece that summarizes some research and concludes that humans like to feel in control and therefore like the idea of causality:

This predisposition for causation seems to be innate. In the 1940s, psychologist Albert Michotte theorized that “we see causality, just as directly as we see color,” as if it is omnipresent. To make his case, he devised presentations in which paper shapes moved around and came into contact with each other. When subjects—who could only see the shapes moving against a solid-colored background—were asked to describe what they saw, they concocted quite imaginative causal stories…

Nassim Taleb noted how ridiculous this is in his book The Black Swan. In the hours after former Iraqi dictator Saddam Hussein was captured on December 13, 2003, Bloomberg News blared the headline, “U.S. TREASURIES RISE; HUSSEIN CAPTURE MAY NOT CURB TERRORISM.” Thirty minutes later, bond prices retreated and Bloomberg altered their headline: “U.S. TREASURIES FALL; HUSSEIN CAPTURE BOOSTS ALLURE OF RISKY ASSETS.” A more correct headline might have been: “U.S. TREASURIES FLUCTUATE AS THEY ALWAYS DO; HUSSEIN CAPTURE HAS NOTHING TO DO WITH THEM WHATSOEVER,” but that isn’t what editors want to post, nor what people want to read.

This trend doesn’t merely manifest itself for stocks or large events. Take scientific studies, for example. Many of the most sweeping findings, ones normally reported in large media outlets, originate from associative studies that merely correlate two variables—television watching and death, for example. Yet headlines—whose functions are partly to summarize and primarily to attract attention—are often written as “X causes Y” or “Does X cause Y?” (I have certainly been guilty of writing headlines in the latter style). In turn, the general public usually treats these findings as cause-effect, despite the fact that there may be no proven causal link between the variables. The article itself might even mention the study’s correlative, not causative, nature, and this still won’t change how it is perceived. Co-workers across the world will still congregate around coffee machines the next day, chatting about how watching The Kardashians is killing you, albeit very slowly.

Humanity’s need for concrete causation likely stems from our unceasing desire to maintain some iota of control over our lives. That we are simply victims of luck and randomness may be exhilarating to a madcap few, but it is altogether discomforting to most. By seeking straightforward explanations at every turn, we preserve the notion that we can always affect our condition in some meaningful way. Unfortunately, that idea is a facade. Some things don’t have clear answers. Some things are just random. Some things simply can’t be controlled.

I like the reference to Taleb here. His books make just this argument: people want to see patterns even where none exist and thus are completely unprepared for changes in the stock market, governments, or the natural world. The trick is to know when you can rely on patterns and when you can’t – and Taleb even offers general investment strategies in his most recent book Antifragile that try to minimize losses while preserving the chance of outsized gains.
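
As a rough illustration of that minimize-loss, keep-the-upside logic, here is a toy version of the barbell allocation Taleb describes: most capital in very safe assets, a small slice in speculative bets. The numbers below are mine, for illustration only, not figures from Antifragile:

```python
# Toy "barbell" portfolio: 90% in safe assets, 10% in speculative bets.
# The worst case is capped by the safe slice; the upside stays open.

def barbell_value(capital: float, speculative_return: float,
                  safe_rate: float = 0.02, risky_share: float = 0.10) -> float:
    """Portfolio value after one period. The speculative slice may return
    anything from -1.0 (total wipeout) to a large multiple."""
    safe = capital * (1 - risky_share) * (1 + safe_rate)
    risky = capital * risky_share * (1 + speculative_return)
    return safe + risky

capital = 100_000
for scenario, r in [("total wipeout", -1.0), ("flat", 0.0), ("10x payoff", 9.0)]:
    print(f"{scenario:>13}: ${barbell_value(capital, r):>10,.0f}")

# Worst case loses ~8% of capital; the rare 10x payoff nearly doubles it.
```

A fully invested medium-risk portfolio can lose far more than 8% in a crash; the barbell trades steady returns for a hard floor on losses plus exposure to rare big wins.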

I wonder if this isn’t lurking behind the discussion of big data: some scientists and others seem to suggest that all we need to understand the world is more data and better pattern-recognition tools. If only we could get enough, we could figure things out. But what if the world turns out to be too complex? What if we can’t know everything about the social or natural world? Does this then change our perceptions of human ingenuity and progress?
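
A quick simulation makes the pattern-recognition worry concrete (a sketch of my own, using only Python’s standard library): scan enough purely random variables and strong “correlations” show up by chance alone, with no cause behind them.

```python
# With enough variables, some pair of pure-noise series will correlate
# strongly by luck alone -- a "pattern" with nothing behind it.
import random
import statistics

def pearson(x: list, y: list) -> float:
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n_vars, n_obs = 200, 20  # 200 random "variables", 20 observations each
data = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]

strongest = max(
    abs(pearson(data[i], data[j]))
    for i in range(n_vars) for j in range(i + 1, n_vars)
)
print(f"Strongest correlation found in pure noise: r = {strongest:.2f}")
# With ~20,000 pairs to scan, correlations above |r| = 0.7 routinely
# appear by chance -- more data mined means more spurious patterns.
```

The point is not that big data is useless, but that more variables mean more chances for the causation-hungry mind to find a story in noise.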

h/t Instapundit

Thinking about a legal framework for a potential apocalypse

This story about the State of New York thinking through the legal challenges of an apocalyptic event might cause one to wonder: why spend time on this when there are other pressing concerns? Here is a description of some of the issues that could arise should an apocalypse occur:

Quarantines. The closing of businesses. Mass evacuations. Warrantless searches of homes. The slaughter of infected animals and the seizing of property. When laws can be suspended and whether infectious people can be isolated against their will or subjected to mandatory treatment. It is all there, in dry legalese, in the manual, published by the state court system and the state bar association.

The most startling legal realities are handled with lawyerly understatement. It notes that the government has broad power to declare a state of emergency. “Once having done so,” it continues, “local authorities may establish curfews, quarantine wide areas, close businesses, restrict public assemblies and, under certain circumstances, suspend local ordinances.”…

“It is a very grim read,” Mr. Younkins said. “This is for potentially very grim situations in which difficult decisions have to be made.”…

The manual provides a catalog of potential terrorism nightmares, like smallpox, anthrax or botulism episodes. It notes that courts have recognized far more rights over the past century or so than existed at the time of Typhoid Mary’s troubles. It details procedures for assuring that people affected by emergency rules get hearings and lawyers. It mentions that in the event of an attack, officials can control traffic, communications and utilities. If they expect an attack, it says, they can compel mass evacuations.

But the guide also presents a sober rendition of what the realities might be in dire times. The suspension of laws, it says, is subject to constitutional rights. But then it adds, “This should not prove to be an obstacle, because federal and state constitutional restraints permit expeditious actions in emergency situations.”

Isn’t it better that authorities are doing some thinking about these situations now rather than simply reacting if something major happens? This reminds me of Nassim Taleb’s book The Black Swan, where he argues that a problem we face as a society is that we don’t consider the odd things that could happen, and still do (even if rarely). Taleb suggests we tend to extrapolate from past historical events, but the past is a poor predictor of future happenings.

Depending on the size or scope of the problem, the government may be limited in its response or even unable to respond. Then we would have the landscape painted by numerous books and movies of the last few decades, where every person simply has to find a way to survive. But even a limited but effective government response would be better than no response.

It would be interesting to know how much time has been spent putting together this manual.