Changing the measurement of poverty leads to 400 million more in poverty around the world

Researchers took a new look at global poverty, developed more specific measures, and found a lot more people living in poverty:

So OPHI reconsidered poverty from a new angle: a measure of what the authors term generally as “deprivations.” They relied on three datasets that do more than capture income: the Demographic and Health Survey, the Multiple Indicators Cluster Survey, and the World Health Survey, each of which measures quality of life indicators. Poverty wasn’t just a vague number anymore, but a snapshot of on-the-ground conditions people were facing.

OPHI then created a new index (the MPI) that tracks ten needs beyond “the basics” in three broader categories: nutrition and child mortality under Health; years of schooling and school attendance under Education; and cooking fuel, sanitation, water, electricity, floor, and assets under Living Conditions. If a person is deprived in a third or more of the indicators, he or she is considered poor under the MPI. And degrees of poverty were measured, too: Did your home lack a roof, or did you have no home at all?
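That cutoff rule can be sketched in code. As I understand the published MPI (the Alkire-Foster counting method), each of the three dimensions is weighted equally at one third, split evenly among its indicators, so a weighted deprivation score of 1/3 or more marks a person as poor. The functions below are an illustration of that counting rule under those assumed weights, not OPHI's implementation:

```python
# Sketch of an MPI-style counting rule: each dimension carries weight
# 1/3, divided evenly among its indicators; a person whose weighted
# deprivation score reaches the 1/3 cutoff is counted as poor.

DIMENSIONS = {
    "health": ["nutrition", "child_mortality"],
    "education": ["years_of_schooling", "school_attendance"],
    "living_conditions": ["cooking_fuel", "sanitation", "water",
                          "electricity", "floor", "assets"],
}

# Per-indicator weights: 1/3 per dimension, split evenly within it.
WEIGHTS = {
    ind: 1 / 3 / len(inds)
    for inds in DIMENSIONS.values()
    for ind in inds
}

def deprivation_score(deprived: set) -> float:
    """Weighted share of indicators in which the person is deprived."""
    return sum(WEIGHTS[ind] for ind in deprived)

def is_mpi_poor(deprived: set, cutoff: float = 1 / 3) -> bool:
    return deprivation_score(deprived) >= cutoff

# Deprived in one health indicator (weight 1/6) and two living-condition
# indicators (weight 1/18 each): score = 5/18, just under the cutoff.
person = {"nutrition", "water", "electricity"}
print(deprivation_score(person), is_mpi_poor(person))
```

Note how the nested weights matter: being deprived in both health indicators alone (1/6 + 1/6 = 1/3) is enough to be counted as poor, while three living-condition deprivations (3/18) are not.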

Perhaps the MPI’s greatest feature is that it can locate poverty. Where the HPI would just tell you where a country stood in comparison to others, the MPI maps poverty at a more granular level. With poverty mapped in greater detail, aid workers and policy makers have the opportunity to be more targeted in their work.

So what did we find out about poverty now that we can measure it better? Sadly, the world is more impoverished than we previously thought. The HPI put this figure at 1.2 billion people; under the MPI's measurements, it's 1.6 billion. More than half of the impoverished population in developing countries lives in South Asia, and another 29 percent in Sub-Saharan Africa. Seventy-one percent of the MPI's poor live in what are considered middle-income countries: countries where development and modernization in the face of globalization are in full swing, but some people are left behind. Niger is home to the highest concentration of the multidimensionally poor, with nearly 90 percent of its population lacking in the MPI's socioeconomic indicators. Most of the poor live in rural areas.

This reminds me of Bill Gates’ suggestion a few years ago that one of the best ways to help address global issues is to set goals and collect better data. Based on this, the world could use more people who can work at collecting and analyzing data. If poverty is at least somewhat relative (beyond the basic needs of absolute poverty) and multidimensional, then defining it is an important ongoing task.

Hard to measure school shootings

It is difficult to decide on how to measure school shootings and gun violence:

What constitutes a school shooting?

That five-word question has no simple answer, a fact underscored by the backlash to an advocacy group’s recent list of school shootings. The list, maintained by Everytown, a group that backs policies to limit gun violence, was updated last week to reflect what it identified as the 74 school shootings since the massacre in Newtown, Conn., a massacre that sparked a national debate over gun control.

Multiple news outlets, including this one, reported on Everytown’s data, prompting a backlash over the broad methodology used. As we wrote in our original post, the group considered any instance of a firearm discharging on school property as a shooting — thus casting a broad net that includes homicides, suicides, accidental discharges and, in a handful of cases, shootings that had no relation to the schools themselves and occurred with no students apparently present.

None of the incidents rise to the level of the massacre that left 27 victims, mostly children, dead in suburban Connecticut roughly 18 months ago, but multiple reviews of the list show how difficult quantifying gun violence can be. Researcher Charles C. Johnson posted a flurry of tweets taking issue with incidents on Everytown’s list. A Hartford Courant review found 52 incidents involving at least one student on a school campus. (We found the same, when considering students or staff.) CNN identified 15 shootings that were similar to the violence in Newtown — in which a minor or adult was actively shooting inside or near a school — while Politifact identified 10.

Clearly, there’s no clean-cut way to quantify gun violence in the nation’s schools, but in the interest of transparency, we’re throwing open our review of the list, based on multiple news reports per incident. For each, we’ve summarized the incident and included casualty data where available.

This is a good example of the problems of conceptualization and operationalization. The idea of a “school shooting” seems obvious until you start looking at a variety of incidents and have to decide whether they hang together as one definable phenomenon. It is interesting here that the Washington Post then goes on to provide more information about each case but doesn’t come down on any side.

So how might this problem be solved? In the academic or scientific world, scholars would debate this through publications, conferences, and public discussions until some consensus (or at least some agreement about the contours of the argument) emerges. This takes time, a lot of thinking, and data analysis. This runs counter to more media or political-driven approaches that want quick, sound bite answers to complex social problems.

Wait, What’s Your Problem: the Census does or does not require people to participate?

Sunday’s What’s Your Problem? column in the Chicago Tribune featured a woman irritated by some Census workers who did sound like creepers. Yet a Census employee’s answer still leaves it unclear whether U.S. residents have to participate in Census surveys:

He said census interviewers are trained to be professional, courteous, and to never use the possibility of a fine to coerce people into participating.

Olson said the American Community Survey is mandatory and there is a potential fine for people who fail to participate, but the Census Bureau relies on public cooperation to encourage responses.

The survey is important because its data guide nearly 70 percent of federal grants, Olson said.

This is a common response from the Census Bureau but it is still vague. Is participating in the Census and the American Community Survey mandatory or not? Is there a fine for failing to participate or not? The answer seems to be yes and yes: participation is mandatory and a fine is possible, and yet no one really has to worry about incurring a penalty.

Typical social science research, which is akin to what the Census Bureau is doing (and the organization has been led by sociologists), has several basic rules regarding ethics in collecting information from people. Don’t harm people. (See the above story about peering in people’s windows.) And participation has to be voluntary, even if that can include contacting people multiple times. So is participation really voluntary if there is even the implicit threat of a fine? This is where the effort looks less like social science research and more like government action, and that is the fine line the Census is walking here. Clearing this up might help improve relations with people who are suspicious of why the Census wants basic information about their lives.


Today’s social interactions: “data is our currency”

Want to interact with the culturally literate crowds of today? You need to be aware of lots of online data:

Whenever anyone, anywhere, mentions anything, we must pretend to know about it. Data has become our currency. (And in the case of Bitcoin, a classic example of something that we all talk about but nobody actually seems to understand, I mean that literally.)…

We have outsourced our opinions to this loop of data that will allow us to hold steady at a dinner party, though while you and I are ostensibly talking about “The Grand Budapest Hotel,” what we are actually doing, since neither of us has seen it, is comparing social media feeds. Does anyone anywhere ever admit that he or she is completely lost in the conversation? No. We nod and say, “I’ve heard the name,” or “It sounds very familiar,” which usually means we are totally unfamiliar with the subject at hand.

Knowing about all of the latest Internet memes, videos, and headlines may just be the cultural capital of our times. On one hand, cultural capital is important. This is strikingly seen in the influence of Pierre Bourdieu in recent decades, after he argued that different social classes have different cultural tastes and expressions. Want to move up in the world? You need to be able to operate in the cultural spheres of the upper classes. On the other hand, the writer of this article suggests this cultural capital may not be worth having. This Internet-data-based cultural capital emphasizes broad, populist knowledge rather than a deep consideration of life’s important issues. If we are all at the whim of the latest Internet craze, we are all chasing ultimately unsatisfying data.

But I think you can take this in another direction from the long debate about what counts as proper cultural literacy. I recently heard an academic suggest we should ask one question about all of this: how much do we get wrapped up in these online crazes and controversies versus engaging in important relationships? Put in terms of this article, having all the data currency in the world doesn’t help if you have no one to really spend that currency with.


The factors behind the rise of viral maps

Here is a short look at how viral maps (“graphic, easy to read, and they make a quick popular point”) are put together by one creator:

When I need to find a particular data set, it’s often as straightforward as a search for the topic with the word “shapefile” or “gis” attached. There’s so much data just sitting on servers that if you can imagine it, it’s probably out there somewhere (often for free). Sometimes though, finding data requires a deeper search. A lot of government-provided data sits inside un-indexed data portals or clearinghouses. Depending on the quality of the portal, these can be tedious to sort through…

Simplicity and ease-of-use: Interactive maps are great, but I want the maps I make to be straightforward to read and understand. I don’t want viewers to have to figure out how to use the map; they should just be able to look at it and figure out what’s going on.

Projections: Typical web maps are limited to the Web Mercator projection. I don’t have any objection to Mercator in principle (in fact it’s brilliant for what it does), but I can’t in good conscience use it for maps at a continental or global scale. Sticking to static maps allows me to choose more appropriate projections for the data and region I’m depicting.

Uniformity: I want everyone who visits my maps to be presented with the same information. I don’t want some algorithm deciding that one visitor is shown a particular view while another visitor gets a different one.

These principles sound similar to what one would expect for any sort of online chart or infographic. There is plenty of data available online, but it takes some skill to present the data clearly and then market the map to the appropriate audience.

Now that I think about it, it is a little surprising that it took this long for viral maps to catch on. First, the Internet makes a lot of geographic data easily accessible. Second, it is a visual medium, and maps are essentially graphics (audio is another story). Third, geographic data feeds into a lot of hot-button topics of conversation these days, as people of different races (think residential segregation), cultural viewpoints (think the American South or the Bible Belt), education levels (think the Creative Class looking for exciting urban neighborhoods), and other groupings tend to live in different places.

I wonder if the real story here isn’t the technology that makes mapping on a large scale relatively easy today. GIS software has been around for a while, but it is generally pretty expensive and has a learning curve. Now, there are numerous websites that offer access to data and mapping capability (think the Census or Social Explorer). Shapefiles are used by a variety of local governments and researchers and can be downloaded. There are good freeware GIS programs like GeoDa. You need some bandwidth and computing power to get the data and crunch the numbers. Altogether, the pieces have now come together for more people to access, manipulate, and publish maps in a way that wasn’t possible even just five years ago.


Only 56% of Twitter accounts have ever sent a tweet

There are over 900 million Twitter accounts but not everyone is actually sending tweets:

A report from Twopcharts, a website that monitors Twitter account activity, states that about 44% of the 974 million existing Twitter accounts have never sent a tweet…

Twitter said it has 241 million monthly active users the last three months of 2013. Twitter defines a monthly active user as an account that logs in at least once a month. By Twitter’s standards, a person does not have to tweet to be considered a monthly active user…

But having engaged users–those who are active participants in the online conversation–are particularly valuable to Twitter. For one thing, activity tends to make users more inclined to continue using the service.

Secondly, user tweets, retweets, favorites and other actions help Twitter generate advertising revenue. Over the last year, the company has made it easier for users to do those things and introduced user-friendly features such as pictures into the timeline…

Moreover, the report highlights Twitter’s user retention issue. It estimates 542.1 million accounts have sent at least one tweet since they’ve been created, suggesting that more than half of the accounts in existence have actively tried out the service. But just 23% of those accounts have tweeted sometime in the last 30 days.

And how many of these accounts are fake?

Altogether, the number of people actively using Twitter (tweeting themselves, retweeting, interacting with others) is still limited. If you read a lot of Internet stories from journalists and bloggers, it sounds like lots of people are on Twitter doing important things. But these users are likely a limited slice of the population: more educated, younger, and with regular access to smartphones and Internet connections. This doesn’t mean Twitter is worthless, but it does suggest it is not exactly representative of Americans.
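The figures quoted above hang together arithmetically, and it is worth checking that they do. A quick back-of-the-envelope sketch (all numbers from the quote; the variable names are mine):

```python
# Consistency check of the Twopcharts/Twitter figures quoted above.
total_accounts = 974e6      # existing Twitter accounts
share_never_tweeted = 0.44  # per the Twopcharts report
ever_tweeted = 542.1e6      # accounts that have sent at least one tweet
recent_share = 0.23         # of those, tweeted in the last 30 days
monthly_active = 241e6      # Twitter's own monthly-active-user count

# 56% of 974M should roughly match the 542.1M "ever tweeted" figure.
implied_ever_tweeted = total_accounts * (1 - share_never_tweeted)
print(f"implied ever-tweeted: {implied_ever_tweeted / 1e6:.1f}M")  # ~545.4M

# Accounts tweeting in the last 30 days: about 125M, i.e. roughly half
# of Twitter's 241M monthly actives log in without tweeting at all.
recent_tweeters = ever_tweeted * recent_share
print(f"recent tweeters: {recent_tweeters / 1e6:.1f}M")
print(f"share of monthly actives: {recent_tweeters / monthly_active:.0%}")
```

The two sources line up within rounding, and the comparison to the monthly-active count is what makes the retention point: only about half of the people Twitter counts as "active" are visibly participating.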

Research shows new mothers are less active on Facebook, aren’t flooding news feeds with babies

A researcher finds that new mothers are quite a bit less active on Facebook after their children are born:

Recently, Meredith Ringel Morris—a computer scientist at Microsoft Research—gathered data on what new moms actually do online. She persuaded more than 200 of them to let her scrape their Facebook accounts and found the precise opposite of the UnBaby.Me libel. After a child is born, Morris discovered, new mothers post less than half as often. When they do post, fewer than 30 percent of the updates mention the baby by name early on, plummeting to not quite 10 percent by the end of the first year. Photos grow as a chunk of all postings, sure—but since new moms are so much less active on Facebook, it hardly matters. New moms aren’t oversharers. Indeed, they’re probably undersharers. “The total quantity of Facebook posting is lower,” Morris says.

And therein lies an interesting lesson about our supposed age of oversharing. If new moms don’t actually deluge the Internet with baby talk, why does it seem to so many of us that they do? Morris thinks algorithms explain some of it. Her research also found that viewers disproportionately “like” postings that mention new babies. This, she says, could result in Facebook ranking those postings more prominently in the News Feed, making mothers look more baby-obsessed.

And a reminder of how we could see beyond our personal experiences and anecdotes and look at the bigger picture:

I have another theory: It’s a perceptual quirk called a frequency illusion. Once we notice something that annoys or surprises or pleases us—or something that’s just novel—we tend to suddenly notice it more. We overweight its frequency in everyday life. For instance, if you’ve decided that fedoras are a ridiculous hipster fashion choice, even if they’re comparatively rare in everyday life, you’re more likely to notice them. And pretty soon you’re wondering, why is everyone wearing fedoras now? Curse you, hipsters!…

The way we observe the world is deeply unstatistical, which is why Morris’ work is so useful. It reminds us of the value of observing the world around us like a scientist—to see what’s actually going on instead of what just happens to gall (or please) us. I’d hazard that perceptual illusions lead us to overamplify the incidence of all sorts of ostensibly annoying behavior: selfies on Instagram, people ignoring one another in favor of their phones, Google Glass. We don’t have a plague of oversharing. We have a plague of over-noticing. It’s time to reboot our eyes.

This study suggests the mothers themselves are not at fault. The flip side would be to study the news feeds of friends of new mothers to see how often these pictures and posts actually show up (and how algorithms might be pushing them). And who are the people more likely to like such posts and pictures? This study may have revealed the supply side of the equation, but there is more to explore.

The difficulty of getting good data on heroin use

Heroin use is getting more public attention but how exactly can researchers go about measuring its use?

For as long as it’s been around, the NSDUH has provided a pretty good picture of marijuana use in the U.S., and is a reliable source for annual stories about teens and pot (a perennial sticking point in the debate over marijuana legalization). But the NSDUH data on hard drug use seldom makes as big a splash. In a new report from the RAND Corporation, researchers suggest that one reason for this disparity may be that the NSDUH survey underestimates heroin use by an eye-boggling amount. “Estimates from the 2010 NSDUH suggest there were only about 60,000 daily and near daily heroin users in the United States,” drug policy researchers Beau Kilmer and Jonathan Caulkins, both of the RAND Corporation, wrote in a recent editorial. “The real number is closer to 1 million.”…

Kilmer and Caulkins came up with their much higher figures for heroin and hard-drug use by combining county-level treatment and mortality data with NSDUH data and a lesser known government survey called the Arrestee Drug Abuse Monitoring Program. Instead of calling people at home and asking them about their drug use, the ADAM survey questions arrestees when they’re being booked and tests their urine. “ADAM goes where serious substance abuse is concentrated — among those entangled with the criminal justice system, specifically arrestees in booking facilities,” Kilmer and Caulkins write. The survey also asks questions about street prices, as well as how and where drugs are bought. The data collected by the ADAM Program enabled RAND to put together a report looking at what Americans spent on drugs between 2000 and 2010.

In short, ADAM is a crucial tool for crafting hard-drug policy. Which is why researchers are alarmed that after being scaled back several times (including a brief shutdown between 2004 and 2006), funding for ADAM has completely run out. “Folks in the research world have known that this was coming,” Kilmer writes in an email. “I wanted to use the attention around our new market report to highlight the importance of collecting information about hard drug users in non-treatment settings. ADAM was central to our estimates for cocaine, heroin, and meth.”

Despite providing a wealth of information since the early 2000s, the budget for ADAM has slowly been chipped away. The survey was originally conducted in more than 35 counties, then 10, then five. The program disappeared completely between 2004 and 2006, but was revived by the Office of National Drug Control Policy in 2006. At its most expensive, ADAM cost $10 million a year.

For a variety of reasons, including public health and the resource-intensive war on drugs, this information is important. But, illegal activities are often difficult to measure. This requires researchers to be more creative in finding reliable and valid data. Even then, two other issues emerge:

1. How much do the researchers feel like they are estimating? What are their margins of error?

2. This can become a political football: is the data being collected worth the money it costs? For bean counters, is this the most efficient way to collect the data?

I wonder if this could be part of arguments for legalizing certain activities: it would be much easier for researchers (and governments, the public, etc.) to get good data.

The difficulty in wording survey questions about American education

Emily Richmond points out some of the difficulties in creating and interpreting surveys regarding public opinion on American education:

As for the PDK/Gallup poll, no one recognizes the importance of a question’s wording better than Bill Bushaw, executive director of PDK. He provided me with an interesting example from the September 2009 issue of Phi Delta Kappan magazine, explaining how the organization tested a question about teacher tenure:

“Americans’ opinions about teacher tenure have much to do with how the question is asked. In the 2009 poll, we asked half of respondents if they approved or disapproved of teacher tenure, equating it to receiving a “lifetime contract.” That group of Americans overwhelmingly disapproved of teacher tenure 73% to 26%. The other half of the sample received a similar question that equated tenure to providing a formal legal review before a teacher could be terminated. In this case, the response was reversed, 66% approving of teacher tenure, 34% disapproving.”

So what’s the message here? It’s one I’ve argued before: That polls, taken in context, can provide valuable information. At the same time, journalists have to be careful when comparing prior years’ results to make sure that methodological changes haven’t influenced the findings; you can see how that played out in last year’s MetLife teacher poll. And it’s a good idea to use caution when comparing findings among different polls, even when the questions, at least on the surface, seem similar.
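The PDK/Gallup tenure example is a classic split-ballot experiment: two random half-samples see different framings of the same question, and the gap between framings measures the wording effect. A minimal sketch using the reported percentages (the data structure and names are mine, not Gallup's):

```python
# Split-ballot wording experiment, per the PDK/Gallup tenure example:
# each half-sample sees a different framing of teacher tenure, and the
# approval gap between framings is the wording effect.
framings = {
    "lifetime contract": {"approve": 0.26, "disapprove": 0.73},
    "formal legal review": {"approve": 0.66, "disapprove": 0.34},
}

wording_effect = (framings["formal legal review"]["approve"]
                  - framings["lifetime contract"]["approve"])
print(f"approval swing from wording alone: {wording_effect:.0%}")
```

A 40-point swing from wording alone dwarfs the few-point margins of error that usually get reported, which is exactly why comparing "similar" questions across polls is risky.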

Surveys don’t write themselves, nor is the interpretation of the results necessarily straightforward. Change the wording or the order of the questions and the results can change. I like the link to the list of “20 Questions A Journalist Should Ask About Poll Results” put out by the National Council on Public Polls. Our public life would be improved if journalists, pundits, and average citizens paid attention to these questions.

Questioning the open kitchen

Lots of newer homes have kitchens open to great rooms or other gathering spaces. However, there are a few people questioning the trend:

J. Bryan Lowder, an assistant editor at Slate, recently slammed the open concept in a widely read article called “Close Your Open-Concept Kitchen.” He called the trend a “baneful scourge” that has spread through American homes like “black mold through a flooded basement.”

Lowder’s point, and one echoed through the anti-open-kitchen movement, is that we have walls and doors for a reason. While open-kitchen lovers champion the ease of multitasking cooking and entertainment and appreciate how the cook can keep an eye on the kids (or an eye on a favorite TV show), the haters reply that open kitchens do neither effectively. Instead, the detractors say, open kitchens leave guests with an eyeful of kitchen mess, distract cooks, and leave Mom and Dad with no place to hide from their noisy brood.

And apparently defenders of the open kitchen are quite vocal:

Roxanne, who blogs at Just Me With … under her first name only (and chose not to reveal her last name in this article for fear of backlash from open-kitchen devotees), ranted against the concept on her blog. For Roxanne, the open kitchen destroys coveted privacy.

Who knew this topic was so controversial? And how did we move from older homes with kitchens at the back of the house to the open kitchens of today?

Design psychologist Toby Israel, author of “Some Place Like Home: Using Design Psychology to Create Ideal Places,” said open kitchens have gained such momentum because the kitchen is often the heart of family existence and a central gathering point.

All interesting. But there is another issue with this article: the headline suggests there is a backlash against this design yet presents limited evidence of it. Sure, it quotes a few people who don’t like the open kitchen. And there is a citation of an odd statistic that just over 75 percent of home remodelers are knocking down walls. All of this indicates more of a discussion about open kitchens than a big trend.

This is a common tactic today from journalists and others online: suggest there may be a trend, present limited evidence, and then leave it to readers to sort out whether a big trend really exists. There are several ways around this. First, present more data. A few articles that start heated online discussions do not tell us much; in this case, tell us what builders are actually building or what homes people are buying. Second, wait it out a bit. More time tends to reveal whether there is really a trend or just a minor blip. While this doesn’t help meet regular deadlines, it does mean we can be more certain that there is a discernible pattern.