Other cities learned from Chicago’s privatization of parking meters

Failures in one city can help other cities learn what not to do:

Chicago Mayor Rahm Emanuel aggressively pushed to privatize 311 in 2015, telling journalists it would save the city “about a million dollars a year” to run the system using contractors. Hiring an outside operator would save the city from shouldering the cost of sorely needed improvements to a 20-year-old system, he suggested.

City officials weren’t thrilled at the idea. A famously unpleasant privatization effort was still in people’s minds. About 10 years ago, Chicago made an 80-year deal to pass control over its parking meters to a private firm in exchange for a $1.2 billion lump sum. The firm promptly made more than half that lump sum in revenue for itself—and still has 70 years of returns. (I wrote about this in WIRED last year.)

But that parking meter deal has been remarkably generative: It has dampened enthusiasm for privatization in cities around the country. Left to its own rational profit-making devices, a private company will systematically squeeze services to the bare minimum and avoid additional investments. That’s fine for margins, but not always great for the public.

And so when Emanuel proposed privatizing 311, scores of Chicago aldermen felt emboldened to fight.

At the least, Chicago and other cities now think twice before privatizing certain services. This could also lead to at least a few interesting research questions:

  1. Part of the pitch for privatization was increased efficiency. Would greater reluctance to make such deals hold cities back in certain ways?
  2. How have private companies shifted their efforts now that cities may be wiser about making such deals? I assume this means that profit margins on such deals are smaller…

Errors committed by surgeons, plagiarism, and disciplinary errors

Megan McArdle highlights the work of a sociologist who studied the categories of errors made by surgeons, and she then connects those findings to plagiarism in journalism:

For my book on failure, I thought a lot about what constitutes failure. One of the most interesting interviews I did was with Charles Bosk, a sociologist who has spent his career studying medical errors. Bosk did his first work with surgical residents, and his book divides the errors into different categories: technical errors (failures of skill or knowledge), judgment errors (failing to make the right decision in a difficult case), and normative errors. The last category includes not being prepared to discuss every facet of your patient’s case, and interestingly, trying to cover up one of the other kinds of error.

Surgeons, he said, view the first two kinds of errors as acceptable, indeed inevitable, during residency. You learn to do surgery by doing surgery, and in the early days, you’re going to make some mistakes. Of course, if you just can’t seem to acquire the manual skills needed to do surgery, then you may have to leave the program for another branch of medicine, but some level of technical and judgment error is expected from everyone. Normative error is different; it immediately raises the suspicion that you shouldn’t be a surgeon…

Plagiarism might actually fall into Bosk’s fourth category of error, the one I find most interesting: quasi-normative error. That’s when a resident does something that might be acceptable under the supervision of a different attending physician, but is forbidden by the attending physician he reports to. In the program he studied, if your attending physician did a procedure one way, that’s the way you had to do it, even if you thought some other surgeon’s way was better.

In other words, quasi-normative error is contextual. So with plagiarism. In college and in journalism, it’s absolutely wrong, because “don’t plagiarize” is — for good reason — in your job description. In most of the rest of corporate America, lifting copy from somewhere else might be illegal if the material is copyrighted, but in many situations, maybe even most situations, no one, including the folks from whom you are lifting the copy, will care. They certainly won’t care if you “self-plagiarize” (as Jonah Lehrer was also accused of doing), and I’m very thankful for that, because I wrote a lot of proposals for my company, and there are only so many original ways to describe a computer network. Yet I’d never copy and paste my own writing for Bloomberg without a link, a block quote and attribution.

All errors are not created equal, yet I suspect most professional and academic fields could come up with similar lists. The third and fourth types of error above seem tied to professional boundaries: how exactly are surgeons supposed to act, in surgery and out of it? The first two are tied to the practice of surgery itself: could you make the right decision and then execute it? Somewhat frustratingly, the same language might be used across fields yet be defined differently. Plagiarism in journalism looks different than it does in academic settings, where the practice McArdle describes of “re-researching” a story without attributing the original researcher would not pass in a peer-reviewed article.

Using statistics to find lost airplanes

Here is a quick look at how Bayesian statistics helped searchers find Air France Flight 447 in the Atlantic Ocean:

Stone and co are statisticians who were brought in to reëxamine the evidence after four intensive searches had failed to find the aircraft. What’s interesting about this story is that their analysis pointed to a location not far from the last known position, in an area that had almost certainly been searched soon after the disaster. The wreckage was found almost exactly where they predicted at a depth of 14,000 feet after only one week’s additional search…

This is what statisticians call the posterior distribution. To calculate it, Stone and co had to take into account the failure of four different searches after the plane went down. The first was the failure to find debris or bodies for six days after the plane went missing in June 2009; then there was the failure of acoustic searches in July 2009 to detect the pings from underwater locator beacons on the flight data recorder and cockpit voice recorder; next, another search in August 2009 failed to find anything using side-scanning sonar; and finally, there was another unsuccessful search using side-scanning sonar in April and May 2010…

That’s an important point. A different analysis might have excluded this location on the basis that it had already been covered. But Stone and co chose to include the possibility that the acoustic beacons may have failed, a crucial decision that led directly to the discovery of the wreckage. Indeed, it seems likely that the beacons did fail and that this was the main reason why the search took so long.

The key point, of course, is that Bayesian inference by itself can’t solve these problems. Instead, statisticians themselves play a crucial role in evaluating the evidence, deciding what it means and then incorporating it in an appropriate way into the Bayesian model.

It is not just about knowing where to look – it is also about knowing how to look. Finding a needle in a haystack is a difficult business, whether you are looking for small social trends in mounds of big data or for a crashed plane in the middle of the ocean.

This is also a good reminder that a single search may not be enough in such circumstances. When working with data, failures are not necessarily bad as long as they help move you toward a solution.
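To make the mechanics of the quoted analysis concrete, here is a minimal sketch of Bayesian search updating in Python. It is not Stone and co’s actual model: the one-dimensional grid, the beacon reliability, and the detection probabilities are all invented for illustration.

```python
import numpy as np

# Hypothetical numbers throughout: this is a sketch of Bayesian search
# updating, not the actual analysis of the AF447 search.

n_cells = 100                        # 1-D grid of candidate ocean cells
prior = np.ones(n_cells) / n_cells   # flat prior over the crash site

def update_after_failed_search(belief, searched, p_detect):
    """Return the posterior after a search of `searched` cells finds nothing.

    p_detect is the probability the search would have detected the
    wreck *given* that the wreck lies in a searched cell.
    """
    likelihood = np.ones_like(belief)
    likelihood[searched] = 1.0 - p_detect  # P(no detection | wreck here)
    posterior = belief * likelihood
    return posterior / posterior.sum()     # renormalize

# The acoustic search near the last known position. The crucial modeling
# choice: admit the possibility that the locator beacons had failed.
p_beacon_works = 0.8                       # assumed beacon reliability
p_detect_given_works = 0.9                 # assumed sensor effectiveness
p_detect_acoustic = p_beacon_works * p_detect_given_works  # < 1.0

near_lkp = np.arange(0, 20)                # cells near the last known position
post = update_after_failed_search(prior, near_lkp, p_detect_acoustic)

# Because p_detect_acoustic < 1, the searched cells are down-weighted but
# never zeroed out, so probability mass can flow back to an area that had
# "almost certainly been searched" as other searches also come up empty.
print(f"mass still near last known position: {post[near_lkp].sum():.3f}")
```

The same structure handles all four failed searches: each one multiplies in another likelihood term, and any term whose detection probability is below 1 leaves the searched cells alive in the posterior rather than ruling them out.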

Sir James Dyson discusses the value of failure

Sir James Dyson, the inventor of the Dyson vacuum cleaner, discusses how failure is necessary on the path to innovation:

It’s time to redefine the meaning of the word “failure.” On the road to invention, failures are just problems that have yet to be solved…

From cardboard and duct tape to ABS polycarbonate, it took 5,127 prototypes and 15 years to get it right. And, even then there was more work to be done. My first vacuum, DC01, went to market in 1993. We’re up to DC35 now, having improved with each iteration. More efficiency, faster motors, new materials…

The ability to learn from mistakes — trial and error — is a valuable skill we learn early on. Recent studies show that encouraging children to learn new things on their own fosters creativity. Direct instruction leads to children being less curious and less likely to discover new things.

Unfortunately, society doesn’t always look kindly on failure. Punishing mistakes doesn’t lead to better solutions or faster results. It stifles invention.

If the American Dream is now about attaining perfection, where is there room for failure? Dyson goes on to talk about how education might be changed to make more room for failure, but getting the broader society to be more accepting of failure is another matter.

I wonder how much this idea about innovation and failure could be tied to issues regarding publishing “negative findings” in academia.