Myers-Briggs not scientifically valid but offers space for self-reflection, ideal types

Critics argue the Myers-Briggs Personality Test doesn’t stand up to scientific scrutiny:

The obvious criticism of this test is that it’s based on dichotomies. Are you perceiving or judging? Introverted or extroverted? You must choose. This reeks of pseudo-science. Of course, most of us don’t fall clearly on one side or the other. When the specific introvert vs. extrovert duality was a hot topic a few years ago, many writers persuasively argued against reducing socialization patterns to a simplistic either/or. Indeed, reams of psychological literature debunk MBTI as wildly inconsistent—many people will test differently within weeks—and over-reliant on polarities. For instance, someone can certainly be both deeply thinking and feeling, and we all know folks who appear to be neither. “In social science, we use four standards: are the categories reliable, valid, independent, and comprehensive? For the MBTI, the evidence says not very, no, no, and not really,” organizational psychologist Adam Grant wrote in Psychology Today after reviewing all the science on MBTI. It’s pretty damning.

But the same journalist admits she still finds the test useful:

Any means for busy adults to take time to comprehend ourselves and see how our styles converge and diverge from others has a use—and more honestly, it’s fascinating. So while I remain skeptical of MBTI’s accuracy and I don’t think the test should be given to children and then treated like a blueprint for their future life, I’m optimistic about its potential to make us feel less alone and less hamstrung by our imperfections. A smart aleck might observe drily that this idealistic conclusion was foreordained: “how typically ENFP of you.” Guilty as charged.

So perhaps the Myers-Briggs is only helpful in that it gives people an excuse to engage in self-reflection. Is self-reflection only possible today (and not viewed as indulgent or unnecessary) when given a pseudo-scientific veneer?

Organizational psychologist Adam Grant gives two reasons Myers-Briggs has been so popular:

Murphy Paul argues that people cling to the test for two major reasons. One is that thousands of people have invested time and money in becoming MBTI-certified trainers and coaches. As I wrote over the summer, it’s awfully hard to let go of our big commitments. The other is the “aha” moment that people experience when the test gives them insight about others—and especially themselves. “Those who love type,” Murphy Paul writes, “have been seduced by an image of their own ideal self.” Once that occurs, personality psychologist Brian Little says, raising doubts about “reliability and validity is like commenting on the tastiness of communion wine. Or how good a yarmulke is at protecting your head.”

Perhaps this “ideal self” concept could be analogous to Max Weber’s ideal types. Social scientists do a lot of categorizing as they empirically observe the social world, but it can be difficult (Weber suggests pretty much impossible) to exhaustively describe and explain social phenomena. Ideal types can provide analytical anchors that may not often be found in reality but provide a starting point. Plus, using ideal types of personality might help give individuals something to aspire to.

Adding creative endeavors to GDP

The federal government is set to change how it measures GDP, and the new measure will include creative work:

The change is relatively simple: The BEA will incorporate into GDP all the creative, innovative work that is the backbone of much of what the United States now produces. Research and development has long been recognized as a core economic asset, yet spending on it has not been included in national accounts. So, as the Wall Street Journal noted, a Lady Gaga concert and album are included in GDP, but the money spent writing the songs and recording the album is not. Factories buying new robots counted; Pfizer’s expenditures on inventing drugs were not.

As the BEA explains, it will now count “creative work undertaken on a systematic basis to increase the stock of knowledge, and use of this stock of knowledge for the purpose of discovering or developing new products, including improved versions or qualities of existing products, or discovering or developing new or more efficient processes of production.” That is a formal way of saying, “This stuff is a really big deal, and an increasingly important part of the modern economy.”

The BEA estimates that in 2007, for example, adding in business R&D would have added 2 percent to U.S. GDP, or about $300 billion. Adding in the various inputs into creative endeavors such as movies, television and music will mean an additional $70 billion. A few other categories bring the total addition to over $400 billion. That is larger than the GDP of more than 160 countries…

The new framework will not stop the needless and often harmful fetishizing of these numbers. GDP is such a simple round number that it is catnip to commentators and politicians. It will still be used, incorrectly, as a proxy for our economic lives, and it will still frame our spending decisions more than it should. Whether GDP is up 2 percent or down 2 percent affects most people minimally (down a lot, quickly, is a different story). The wealth created by R&D that was statistically less visible until now benefited its owners even though the figures didn’t reflect that, and faster GDP growth today doesn’t help a welder when the next factory will use a robot. How wealth is used, who benefits from it and whether it is being deployed for sustainable future growth, that is consequential. GDP figures, even restated, don’t tell us that.

On one hand, changing a measure so that it more accurately reflects the economy is a good thing. This could help increase the validity of the measure. On the other hand, measures can still be used well or poorly, the change may not be a complete improvement over previous measures, and it may be difficult to reconcile new figures with past figures. It is not quite as easy as simply “improving” a measure; a lot of other factors are involved. It will be interesting to see how this measurement change sorts out in the coming years and how the information is utilized.

A company offers to replicate research study findings

A company formed in 2011 is offering a new way to validate the findings of research studies:

A year-old Palo Alto, California, company, Science Exchange, announced on Tuesday its “Reproducibility Initiative,” aimed at improving the trustworthiness of published papers. Scientists who want to validate their findings will be able to apply to the initiative, which will choose a lab to redo the study and determine whether the results match.

The project sprang from the growing realization that the scientific literature – from social psychology to basic cancer biology – is riddled with false findings and erroneous conclusions, raising questions about whether such studies can be trusted. Not only are erroneous studies a waste of money, often taxpayers’, but they also can cause companies to misspend time and resources as they try to invent drugs based on false discoveries.

This addresses a larger concern about how many research studies found their results by chance alone:

Typically, scientists must show that results have only a 5 percent chance of having occurred randomly. By that measure, one in 20 studies will make a claim about reality that actually occurred by chance alone, said John Ioannidis of Stanford University, who has long criticized the profusion of false results.

With some 1.5 million scientific studies published each year, by chance alone some 75,000 are probably wrong.
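The arithmetic behind that 75,000 figure is worth making explicit. A quick sketch, assuming (as the quoted article implicitly does) that the 5 percent significance threshold translates directly into a 5 percent rate of chance findings:

```python
# Back-of-the-envelope arithmetic from the article's figures:
# a p < .05 threshold means roughly 1 in 20 studies could report
# a result that occurred by chance alone.
alpha = 0.05                  # conventional significance threshold
studies_per_year = 1_500_000  # studies published annually, per the article

false_by_chance = alpha * studies_per_year
print(false_by_chance)  # 75000.0
```

This is, of course, a simplification: the 5 percent figure applies to studies of effects that do not actually exist, so the real number of erroneous findings depends on how many published studies test true versus false hypotheses.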

I’m intrigued by the idea of having an independent company assess research results. This could work in conjunction with other methods of verifying research results:

1. The original researchers could run multiple studies. This works better with smaller studies, but it could be difficult when the N is larger and more resources are needed.

2. Researchers could also make their data available as they publish their paper. This would allow other researchers to take a look and see if things were done correctly and if the results could be replicated.

3. The larger scientific community should endeavor to replicate studies. This is the way science is supposed to work: if someone finds something new, other researchers should adopt a similar protocol and test it with similar and new populations. Unfortunately, replicating studies is not seen as being very glamorous and it tends not to receive the same kind of press attention.
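A small simulation can show why the replication in options 1 and 3 is so valuable. This is a hypothetical sketch, not any study’s actual data: it simulates studies of an effect that does not exist, where each study has a 5 percent chance of a false-positive finding, and then requires an independent replication to also come up positive:

```python
import random

random.seed(42)

# Hypothetical sketch: simulate studies of a nonexistent effect.
# Under a true null, each study crosses the p < .05 bar about
# 5% of the time by chance; a replicated finding requires an
# independent second study to cross it as well.
ALPHA = 0.05
TRIALS = 100_000

single = sum(random.random() < ALPHA for _ in range(TRIALS))
replicated = sum(random.random() < ALPHA and random.random() < ALPHA
                 for _ in range(TRIALS))

print(single / TRIALS)      # roughly 0.05: about 1 in 20 null studies "succeeds"
print(replicated / TRIALS)  # roughly 0.0025: replication cuts that to about 1 in 400
```

The point is simple: chance findings rarely survive an independent rerun, which is exactly what the Reproducibility Initiative is selling.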

The primary focus of this article seems to be on medical research. Perhaps this is because it can affect the lives of many and involves big money. But it would be interesting to apply this to more social science studies as well.