Debate over data on the mental fragility of college students

A recent study suggests more data are needed before we can claim that today’s college students are more fragile:

The point, overall, is that given the dizzying array of possible factors at work here, it’s much too pat a story to say that kids are getting more “fragile” as a result of some cultural bugaboo. “I think it’s not only an oversimplification, I think it’s unfair to the kids, many of whom are very hardworking and tremendously diligent, and working in systems that are often very competitive,” said Schwartz. “Many of the kids are doing extraordinarily well, and I think it’s unfair to portray this whole group of people as being somehow weakhearted or weak-minded in some sense, when there’s no evidence to really support it.”

It hasn’t gone unnoticed among those who study college mental health that there’s an interesting divide at work here: College counselors are so convinced kids’ mental health is getting worse that it’s become dogma in some quarters, and yet it’s been tricky to find any solid, rigorous evidence of this. Some researchers have tried to dig into counseling-center data in an attempt to explain this discrepancy. One recent effort, published in the October issue of the Journal of College Student Psychotherapy, comes from Allan J. Schwartz, a psychiatry professor at the University of Rochester who has devoted a chunk of his career to studying college suicide. Schwartz examined data from “4,755 clients spanning a 15-year period from 1992-2007” at one university, poring over the records to determine whether students who came in contact with that school’s counseling services had, over that period, exhibited increasing levels of distress in the form of suicidality, anxiety and phobic disorders, overall signs of serious mental illness, and other measures. (The same caveat I mentioned above applies here — such a study can only tell us about rates of pathology among kids who go to counseling centers. But it can at least help determine whether counselors are right that among the kids they see every day, things are getting worse.)

Schwartz found no evidence to support the pessimistic view. With the exception of suicidality, where he noted a “significant decline” over the years, every other measure he looked at held stable over the study’s 15-year span. In his paper, Schwartz rightly notes that there are limitations to what we can extrapolate from a study of a single campus. But he goes on to explain that four other, similar studies, published between 1996 and 2007, also sought to track changes in pathology over time in single-university settings, and they too found no empirical evidence that things have been getting worse. This doesn’t definitively prove that kids who seek counseling aren’t getting sicker, of course. But statistically, Schwartz argues, it’s unlikely that five studies looking at different schools would all come up with null findings if, in fact, there was a widespread increase in student pathology overall.
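
To see why that last point carries weight, here is a rough back-of-the-envelope illustration. The 80% detection rate below is a hypothetical number of my own, not a figure from Schwartz’s paper, but it shows why five independent null findings would be surprising if a real, widespread increase existed:

```python
# Hypothetical illustration (assumed numbers, not from the study): if a real
# increase in student pathology existed and each single-campus study had an
# 80% chance (statistical power) of detecting it, how often would all five
# independent studies still come up empty?
power = 0.80                      # assumed chance one study detects a real increase
miss_prob = 1 - power             # chance a single study misses it
all_five_miss = miss_prob ** 5    # chance all five independent studies miss it

print(f"Probability all five studies miss a real increase: {all_five_miss:.5f}")
# => about 0.00032, i.e. roughly 3 in 10,000 under these assumed numbers
```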

I don’t know this area of research well, but it sounds like there is room for disagreement and/or a need for more definitive data about what is going on among college students.

A broader observation: claims about cultural zeitgeists are not always backed by data. On one hand, perhaps the change is coming so quickly or so far under the radar (it takes time for scientists and others to measure things) that the data simply aren’t there yet. On the other hand, claims about trends are often based on anecdotes and particular points of view that break down pretty quickly when compared to the data that are available.

The FBI doesn’t collect every piece of data about crime

The FBI released the 2014 Uniform Crime Report on Monday, but it doesn’t contain every piece of information we might wish to have:

As I noted in May, much statistical information about the U.S. criminal-justice system simply isn’t collected. The number of people kept in solitary confinement in the U.S., for example, is unknown. (A recent estimate suggested that it might be between 80,000 and 100,000 people.) Basic data on prison conditions is rarely gathered; even federal statistics about prison rape are generally unreliable. Statistics from prosecutors’ offices on plea bargains, sentencing rates, or racial disparities, for example, are virtually nonexistent.

Without reliable data on crime and justice, anecdotal evidence dominates the conversation. There may be no better example than the so-called “Ferguson effect,” first proposed by the Manhattan Institute’s Heather MacDonald in May. She suggested a rise in urban violence in recent months could be attributed to the Black Lives Matter movement and police-reform advocates…

Gathering even this basic data on homicides—the least malleable crime statistic—in major U.S. cities was an uphill task. Bialik called police departments individually and combed local media reports to find the raw numbers because no reliable, centralized data was available. The UCR is released on a one-year delay, so official numbers on crime in 2015 won’t be available until most of 2016 is over.

These delays, gaps, and weaknesses seem exclusive to federal criminal-justice statistics. The U.S. Department of Labor produces monthly unemployment reports with relative ease. NASA has battalions of satellites devoted to tracking climate change and global temperature variations. The U.S. Department of Transportation even monitors how often airlines are on time. But if you want to know how many people were murdered in American cities last month, good luck.

There could be several issues at play here, including:

  1. A lack of measurement ability. Perhaps we have some major disagreements about how to count certain things.
  2. Local law enforcement jurisdictions want some flexibility in working with the data.
  3. A lack of political will to get all this information.

My guess is that the most important issue is #3. If we wanted this data, we could get it. Yet it may require concerted efforts by individuals or groups to make these issues enough of a social problem that collecting good data becomes a priority. This means the government and/or the public needs a compelling enough reason to push for uniformity in measurement and consistency in reporting.

How about this reason: consistent and timely reporting of such data would help cut down on anecdotes and keep the American public accurately up to date, allowing people to make more informed political and civic choices. Right now, many Americans don’t quite know what is happening with crime rates because their primary sources are anecdotes or mass media reports (which can be quite sensationalistic).

Internet commenters can’t handle science because they argue by anecdote and think studies apply to 100% of cases

Popular Science announced this week that they are no longer allowing comments on their stories because “comments can be bad for science”:

But even a fractious minority wields enough power to skew a reader’s perception of a story, recent research suggests. In one study led by University of Wisconsin-Madison professor Dominique Brossard, 1,183 Americans read a fake blog post on nanotechnology and revealed in survey questions how they felt about the subject (are they wary of the benefits or supportive?). Then, through a randomly assigned condition, they read either epithet- and insult-laden comments (“If you don’t see the benefits of using nanotechnology in these kinds of products, you’re an idiot” ) or civil comments. The results, as Brossard and coauthor Dietram A. Scheufele wrote in a New York Times op-ed:

Uncivil comments not only polarized readers, but they often changed a participant’s interpretation of the news story itself.
In the civil group, those who initially did or did not support the technology — whom we identified with preliminary survey questions — continued to feel the same way after reading the comments. Those exposed to rude comments, however, ended up with a much more polarized understanding of the risks connected with the technology.
Simply including an ad hominem attack in a reader comment was enough to make study participants think the downside of the reported technology was greater than they’d previously thought.

Another, similarly designed study found that just firmly worded (but not uncivil) disagreements between commenters impacted readers’ perception of science…

A politically motivated, decades-long war on expertise has eroded the popular consensus on a wide variety of scientifically validated topics. Everything, from evolution to the origins of climate change, is mistakenly up for grabs again. Scientific certainty is just another thing for two people to “debate” on television. And because comments sections tend to be a grotesque reflection of the media culture surrounding them, the cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science.

In addition to rude comments and ad hominem attacks changing perceptions of scientific findings, here are two common misunderstandings of how science works that often show up in online comments (they are common misconceptions offline as well):

1. Internet conversations are ripe for argument by anecdote. This happens all the time: a study is described, and then the comments fill up with people saying the study doesn’t apply to them or to someone they know. A single counterexample usually says very little, since scientific studies are typically designed to be as generalizable as possible. Think of jokes made about global warming: one blizzard or one cold season doesn’t invalidate a general upward trend in temperatures (see the toy simulation after this list).

2. Argument by anecdote is related to a misconception about scientific studies: the findings often do not apply to 100% of cases. Scientific findings are probabilistic, meaning there is some room for error (this does not mean science doesn’t tell us anything – it means the real world is hard to measure and analyze – and scientists try to limit error as much as possible). Thus, scientists tend to talk in terms of relationships being more or less likely. This tends to get lost in news stories that suggest 100% causal relationships.
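
A small toy simulation can make both points concrete. The numbers below are invented for illustration (not real temperature data), but they show how an on-average upward relationship can coexist with plenty of individual cases that run the other way:

```python
import random

random.seed(42)

# Toy data, not real temperatures: a gradual upward trend plus year-to-year
# noise. The trend is real "on average" even though many individual years
# come in colder than the year before.
years = list(range(1980, 2020))
temps = [0.02 * (y - 1980) + random.gauss(0, 0.15) for y in years]

colder_than_previous = sum(1 for a, b in zip(temps, temps[1:]) if b < a)
overall_change = temps[-1] - temps[0]

print(f"Years colder than the year before: {colder_than_previous} of {len(temps) - 1}")
print(f"Overall change across the period:  {overall_change:+.2f}")
# A single cold year (an anecdote/counterexample) does not erase the
# probabilistic, on-average upward relationship.
```

Even with a clear overall rise, a sizable share of the individual year-to-year changes come out negative, which is exactly the opening that argument by anecdote exploits.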

In other words, to have online conversations about science, you need readers who know the basics of scientific studies. I’m not sure the two points above are necessarily taught before college, but I know I cover these ideas in both my Statistics and Research Methods courses.

Use data to describe the Anacostia neighborhood in Washington, D.C.

A recent NPR report described the changes taking place in the Anacostia neighborhood in Washington, D.C. The report called Washington “Chocolate City” (setting off another line of debate), and one of the residents quoted in the story is unhappy with how the neighborhood was portrayed:

Kellogg wrote that “in recent years, even areas like Anacostia — a community that was virtually all-black and more often than not poor — have seen dramatic increases in property values. The median sales price of a home east of the river — for years a no-go zone for whites and many blacks — was just under $300,000 in 2009, two to three times what it was in the mid-’90s.” After profiling one black resident who moved out, Kellogg spoke with David Garber, a “newcomer” among those who “see themselves as trailblazers fighting to preserve the integrity of historic Anacostia.”

But Garber and others didn’t like the portrayal, as even WAMU’s Anna John noted in her DCentric blog, where she headlined a post “‘Morning Edition’ Chokes On Chocolate City.”

On his own blog And Now, Anacostia, Garber wrote that the NPR story “was a dishonest portrayal of the changes that are happening in Anacostia. First, his evidence that black people are being forced out is based entirely on the story of one man who chose to buy a larger and more expensive house in PG County than one he was considering near Anacostia. Second, he attempts to prove that Anacostia is becoming ‘more vanilla’ by talking about one white person, me — and I don’t even live there anymore.”

Garber also complained that Kellogg “chose to sensationalize my move out of Anacostia” by linking it to a break-in at his home, which Garber says was unrelated to his move. Garber says Kellogg chose to repeat the “canned story” of Anacostia — which We Love D.C. bluntly calls a “quick and dirty race narrative.”

Garber continues, “White people are moving into Anacostia. So are black people. So are Asian people, Middle Eastern people, gay people, straight people, and every other mix. And good for them for believing in a neighborhood in spite of its challenges, and for meeting its hurdles head on and its new amenities with a sense of excitement.”

This seems like it could all be solved rather easily: let’s just look at the data on what is happening in this neighborhood. I have not listened to the initial NPR report, but it would be fairly easy for NPR, Garber, or anyone else to look up some Census figures for this neighborhood to see who is moving in or out. If the NPR story is built around Garber’s story (and some other anecdotal evidence), then it is lacking. If it includes the hard data but the story is still one-sided or doesn’t give the complete picture, then that is a different issue. Then we can have a conversation about whether Garber’s story is an appropriate or representative illustration.
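
As one hedged illustration of what that lookup could involve (my own sketch, not what NPR did): the Census Bureau’s ACS 5-year API can return race counts by census tract, though the specific tracts covering Anacostia would still need to be identified and compared across years. The endpoint vintage and variable codes below are assumptions chosen to illustrate the idea:

```python
import json
import urllib.request

# Minimal sketch (illustrative assumptions): pull total population (B02001_001E)
# and Black or African American population (B02001_003E) for every census tract
# in Washington, D.C. (state FIPS 11, county 001) from one ACS 5-year vintage.
# Comparing Anacostia's tracts across vintages would show who is moving in or out.
BASE = "https://api.census.gov/data/2019/acs/acs5"
QUERY = "get=NAME,B02001_001E,B02001_003E&for=tract:*&in=state:11&in=county:001"

with urllib.request.urlopen(f"{BASE}?{QUERY}") as resp:
    rows = json.load(resp)

header, data = rows[0], rows[1:]
for name, total, black, *_ in data[:5]:   # print a few tracts as a sanity check
    share = int(black) / int(total) if int(total) else 0.0
    print(f"{name}: {share:.0%} Black or African American")
```

Pulling the same two variables from an earlier vintage would give the over-time comparison the post calls for.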

Beyond the data issue, Garber also hints at another matter: a “canned story” or image of a community versus what residents experience on the ground. This is a question about the “character” of a location, and the perspectives of insiders (residents) and outsiders (like journalists) can differ. Both perspectives could be correct; each view has merit but a different scope. A journalist is likely to place Anacostia in the larger framework of the whole city (or perhaps the whole nation), while a resident is likely working from personal experiences and observations.