Errors committed by surgeons, plagiarism, and disciplinary differences in defining error

Megan McArdle highlights the work of a sociologist who studied the categories of errors made by surgeons and then connects those findings to plagiarism in journalism:

For my book on failure, I thought a lot about what constitutes failure. One of the most interesting interviews I did was with Charles Bosk, a sociologist who has spent his career studying medical errors. Bosk did his first work with surgical residents, and his book divides the errors into different categories: technical errors (failures of skill or knowledge), judgment errors (failing to make the right decision in a difficult case), and normative errors. The last category includes not being prepared to discuss every facet of your patient’s case, and interestingly, trying to cover up one of the other kinds of error.

Surgeons, he said, view the first two kinds of errors as acceptable, indeed inevitable, during residency. You learn to do surgery by doing surgery, and in the early days, you’re going to make some mistakes. Of course, if you just can’t seem to acquire the manual skills needed to do surgery, then you may have to leave the program for another branch of medicine, but some level of technical and judgment error is expected from everyone. Normative error is different; it immediately raises the suspicion that you shouldn’t be a surgeon…

Plagiarism might actually fall into Bosk’s fourth category of error, the one I find most interesting: quasi-normative error. That’s when a resident does something that might be acceptable under the supervision of a different attending physician, but is forbidden by the attending physician he reports to. In the program he studied, if your attending physician did a procedure one way, that’s the way you had to do it, even if you thought some other surgeon’s way was better.

In other words, quasi-normative error is contextual. So with plagiarism. In college and in journalism, it’s absolutely wrong, because “don’t plagiarize” is — for good reason — in your job description. In most of the rest of corporate America, lifting copy from somewhere else might be illegal if the material is copyrighted, but in many situations, maybe even most situations, no one, including the folks from whom you are lifting the copy, will care. They certainly won’t care if you “self-plagiarize” (as Jonah Lehrer was also accused of doing), and I’m very thankful for that, because I wrote a lot of proposals for my company, and there are only so many original ways to describe a computer network. Yet I’d never copy and paste my own writing for Bloomberg without a link, a block quote and attribution.

All errors are not created equal, yet I suspect all professional and academic fields could come up with similar lists. The third and fourth types of error above seem to be related to professional boundaries: how exactly are surgeons supposed to act, whether in surgery or not? The first two are more closely tied to the practice of surgery itself: could you make the right decision and then execute it? Somewhat frustratingly, the same language might be used across fields yet be defined differently. Plagiarism in journalism looks different than it does in academic settings, where the practice McArdle describes of "re-researching" a story without attributing the original researcher would not pass muster in a peer-reviewed article.

Internet commenters can’t handle science because they argue by anecdote, think studies apply to 100% of cases

Popular Science announced this week that it will no longer allow comments on its stories because “comments can be bad for science”:

But even a fractious minority wields enough power to skew a reader’s perception of a story, recent research suggests. In one study led by University of Wisconsin-Madison professor Dominique Brossard, 1,183 Americans read a fake blog post on nanotechnology and revealed in survey questions how they felt about the subject (are they wary of the benefits or supportive?). Then, through a randomly assigned condition, they read either epithet- and insult-laden comments (“If you don’t see the benefits of using nanotechnology in these kinds of products, you’re an idiot”) or civil comments. The results, as Brossard and coauthor Dietram A. Scheufele wrote in a New York Times op-ed:

Uncivil comments not only polarized readers, but they often changed a participant’s interpretation of the news story itself.
In the civil group, those who initially did or did not support the technology — whom we identified with preliminary survey questions — continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology.
Simply including an ad hominem attack in a reader comment was enough to make study participants think the downside of the reported technology was greater than they’d previously thought.

Another, similarly designed study found that just firmly worded (but not uncivil) disagreements between commenters impacted readers’ perception of science…

A politically motivated, decades-long war on expertise has eroded the popular consensus on a wide variety of scientifically validated topics. Everything, from evolution to the origins of climate change, is mistakenly up for grabs again. Scientific certainty is just another thing for two people to “debate” on television. And because comments sections tend to be a grotesque reflection of the media culture surrounding them, the cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science.

In addition to rude comments and ad hominem attacks changing perceptions of scientific findings, here are two common misunderstandings of how science works that often show up in online comments (these are common misconceptions offline as well):

1. Internet conversations are ripe for argument by anecdote. This happens all the time: a study is described and the comments fill up with people saying that the study doesn’t apply to them or to someone they know. A single counterexample usually says very little, since scientific studies are typically designed to be as generalizable as possible. Think of the jokes made about global warming: one blizzard or one cold season doesn’t invalidate a general upward trend in temperatures.

2. Argument by anecdote is related to a misconception about scientific studies: the findings often do not apply to 100% of cases. Scientific findings are probabilistic, meaning there is some room for error. (This does not mean science tells us nothing; it means the real world is hard to measure and analyze, and scientists try to limit error as much as possible.) Thus, scientists tend to talk in terms of relationships being more or less likely, a nuance that tends to get lost in news stories suggesting 100% causal relationships. The short simulation after this list illustrates both points.
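A minimal simulation makes both points concrete. The numbers below are made up for illustration (they are not real climate data), but they show how a genuine upward trend coexists with plenty of individual cold years, and how the trend only emerges across many observations:

```python
import random

random.seed(42)

# Hypothetical data: a true upward trend of 0.02 degrees per year
# plus substantial year-to-year noise.
TRUE_TREND = 0.02
years = list(range(1960, 2021))
temps = [14.0 + TRUE_TREND * (y - 1960) + random.gauss(0, 0.3) for y in years]

# Point 1: argument by anecdote. Individual years often buck the trend.
cold_snaps = sum(1 for prev, cur in zip(temps, temps[1:]) if cur < prev)
print(f"{cold_snaps} of {len(temps) - 1} years were colder than the year before")

# Point 2: findings are probabilistic and show up in the aggregate.
# A simple least-squares slope recovers roughly the true trend even though
# no single year "proves" it.
n = len(years)
mean_y = sum(years) / n
mean_t = sum(temps) / n
slope = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps)) \
        / sum((y - mean_y) ** 2 for y in years)
print(f"Estimated trend: {slope:.3f} degrees per year (true value: {TRUE_TREND})")
```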

In other words, having online conversations about science requires readers who know the basics of how scientific studies work. I’m not sure my two points above are necessarily taught before college, but I know I cover these ideas in both Statistics and Research Methods courses.

“The Nate Silver of immigration reform”

Want a statistical model that tells you which Congressman to lobby on immigration reform? Look no further than a political scientist at UC San Diego:

In the mold of Silver, who is famous for his election predictions, Wong bridges the gap between equations and shoe-leather politics, said David Damore, a political science professor at the University of Nevada, Las Vegas and a senior analyst for Latino Decisions, a political opinion research group.

Activists already have an idea of which lawmakers to target, but Wong gives them an extra edge. He can generate a custom analysis for, say, who might be receptive to an argument based on religious faith. With the House likely to consider separate measures rather than a comprehensive bill, Wong covers every permutation.

“In the House, everybody’s in their own unique geopolitical context,” Damore said. “What he’s doing is very, very useful.”

The equations Wong uses are familiar to many political scientists. So are his raw materials: each lawmaker’s past votes and the ethnic composition of his or her district. But no one else appears to be applying those tools to immigration in quite the way Wong does.

So is there something extra in Wong’s models that others don’t have, or is he simply better at interpreting the results? The article suggests he starts from the common factors any political scientist would consider, but it also hints at some less obvious ones, like religiosity or district-specific circumstances.
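The article doesn’t spell out Wong’s equations, but given the raw materials it describes (past votes and district ethnic composition), a standard approach would be a logistic regression that turns those inputs into a probability of a yes vote for each member. The sketch below uses that familiar setup with entirely made-up numbers; it is not Wong’s actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up illustrative data: one row per House member.
# Columns: share of past pro-immigration votes, percent Latino in district,
# percent foreign-born in district. The target is whether the member
# voted yes on a prior immigration measure (1 = yes).
X = np.array([
    [0.80, 0.35, 0.20],
    [0.10, 0.05, 0.03],
    [0.55, 0.22, 0.12],
    [0.30, 0.10, 0.06],
    [0.90, 0.45, 0.30],
    [0.20, 0.08, 0.05],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score a hypothetical undecided member: moderate voting record,
# sizable Latino and foreign-born populations in the district.
undecided = np.array([[0.45, 0.28, 0.15]])
prob_yes = model.predict_proba(undecided)[0, 1]
print(f"Estimated probability of a yes vote: {prob_yes:.2f}")
```

Ranking members by predicted probability, and seeing how that probability shifts when a variable such as religiosity is added, is roughly the kind of custom analysis the article describes, though again this is only a guess at the setup.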

A fear I have for Nate Silver as well: what happens when the models are wrong? Those who work with statistics know that these are just predictions and that statistical models always have error, but this isn’t necessarily how the public sees things.

Finding the right model to predict crime in Santa Cruz

Science fiction is usually the setting when people talk about predicting crime. But it appears that the police department in Santa Cruz is working with an academic to forecast where crimes will take place:

Santa Cruz police could be the first department in Northern California that will deploy officers based on forecasting.

Santa Clara University assistant math professor Dr. George Mohler said the same algorithms used to predict aftershocks from earthquakes work to predict crime. “We started with theories from sociological and criminological fields of research that says offenders are more likely to return to a place where they’ve been successful in the past,” Mohler said.

To test his theory, Mohler plugged in several years worth of old burglary data from Los Angeles. When a burglary is reported, Mohler’s model tells police where and when a so-called “after crime” is likely to occur.

The Santa Cruz Police Department has turned over 10 years of crime data to Mohler to run in the model.
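The story doesn’t give Mohler’s equations, but the earthquake-aftershock analogy points to a self-exciting model in which each reported burglary temporarily raises the expected rate of follow-up burglaries in the same area. Here is a minimal sketch of that idea; the parameter values are invented for illustration and are not taken from Mohler’s work:

```python
import math

# Hypothetical parameters for a self-exciting ("aftershock") model:
# after each burglary, the expected rate of follow-up burglaries nearby
# jumps and then decays exponentially over time.
BASE_RATE = 0.2     # background burglaries per day in a grid cell
BOOST = 0.5         # extra rate immediately after a burglary
DECAY = 0.1         # per-day exponential decay of that extra rate

def expected_rate(day, past_burglary_days):
    """Expected burglaries per day in this cell, given past incident days."""
    aftershock = sum(
        BOOST * math.exp(-DECAY * (day - d))
        for d in past_burglary_days
        if d <= day
    )
    return BASE_RATE + aftershock

# A cell with recent burglaries on days 1 and 3 is "hotter" on day 4
# than a cell with no recent incidents, so it would get more patrol attention.
print(expected_rate(4, [1, 3]))   # elevated rate
print(expected_rate(4, []))       # background rate only
```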

I wonder if we will be able to read about the outcome of this trial, good or bad. If the results are poor, the police department or the academic might not want to publicize them.

On one hand, this simply seems to be a problem of getting enough data to make sufficiently accurate predictions. On the other hand, there will always be some error in the predictions. For example, how could a model predict something like what happened in Arizona this past weekend? Of course, one could build some random noise into the model, but those random guesses could easily be wrong.

And knowing where a crime is likely to happen doesn’t necessarily mean the crime can be prevented.

The presence of error in statistics as illustrated by basketball predictions

TrueHoop has an interesting paragraph from this afternoon illustrating how there is always error, even in complicated statistical models:

A Laker fan wrings his hands over the fact that advanced stats prefer the Heat and LeBron James to the Lakers and Kobe Bryant. It’s pitched as an intuition vs. machine debate, but I don’t see the stats movement that way at all. Instead, I think everyone agrees the only contest that matters takes place in June. In the meantime, the question is, in clumsily predicting what will happen then (and stats or no, all such predictions are clumsy) do you want to use all of the best available information, or not? That’s the debate about stats in the NBA, if there still is one.

By suggesting that predictions are clumsy, Abbott is highlighting an important fact about statistics and statistical analysis: there is always some room for error. Even with the best statistical models, there is always a chance that a different outcome will result. Anomalies pop up, such as a player who has an unexpected breakout year or a young star who suffers an unfortunate injury early in the season. Or perhaps something like “chemistry,” which I imagine is difficult to model, plays a role. The better the model, meaning the better the input data and the better the statistical techniques, the more accurate the predictions will tend to be; but even the best predictions remain probabilistic.
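A toy simulation shows how much room for error remains even when a model correctly identifies the better team. Assume (hypothetically; the 60 percent figure is made up) that the model’s favorite wins any single game 60 percent of the time, and simulate many best-of-seven series:

```python
import random

random.seed(0)

def simulate_series(p_win_game, n_games=7):
    """Return True if the favored team wins a best-of-n series."""
    wins_needed = n_games // 2 + 1
    wins = sum(random.random() < p_win_game for _ in range(n_games))
    return wins >= wins_needed

# Hypothetical: the model's favorite wins any single game 60% of the time.
trials = 100_000
favorite_wins = sum(simulate_series(0.60) for _ in range(trials))
print(f"Favorite wins the series in {favorite_wins / trials:.1%} of simulations")
# Roughly 70%: the "better" team still loses about 3 series in 10,
# which is why even well-built predictions are routinely wrong.
```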

But in the short term, there are plenty of analysts (and fans) who want some way to think about the outcome of the 2010-2011 NBA season. Some predictions are made purely on intuition and basketball knowledge; others are based on a statistical model. But all of these predictions will serve as talking points during the NBA season, providing an overarching framework for understanding the game-by-game results. Ultimately, as Gregg Easterbrook has pointed out in his TMQ column during the NFL off-season, many of the predictions turn out to be wrong, though the people who made them are rarely punished for poor results.