New way of measuring poverty gives California highest rate

The Census Bureau tried changing the definition of poverty, and the new measure put California at the top of the list:

California continues to have – by far – the nation’s highest level of poverty under an alternative method devised by the Census Bureau that takes into account both broader measures of income and the cost of living.

Nearly a quarter of the state’s 38 million residents (8.9 million) live in poverty, a new Census Bureau report says, a level virtually unchanged since the agency first began reporting on the method’s effects.

Under the traditional method of gauging poverty, adopted a half-century ago, California’s rate is 16 percent (6.1 million residents), somewhat above the national rate of 14.9 percent but by no means the highest. That dubious honor goes to New Mexico at 21.5 percent.

But under the alternative method, California rises to the top at 23.4 percent while New Mexico drops to 16 percent and other states decline to as low as 8.7 percent in Iowa.

Not surprisingly, the new methodology has become political:

It’s now routinely cited in official reports and legislative documents, and Neel Kashkari, the Republican candidate for governor, has tried to make it an issue in his uphill challenge to Democratic Gov. Jerry Brown, even spending several days in Fresno posing as a homeless person to dramatize it.

The definition of poverty is an interesting methodological topic that certainly has social and political implications. I assume the Census Bureau argues the new definition is a better one since it accounts for more information and adjusts for regional variation. But “better” could also simply mean a definition that raises or lowers the official number, which can then be put to different ends.

US unemployment figures distorted by lack of response across repeated rounds of the survey

Two new studies suggest unemployment figures are pushed downward by the data collection process:

The first report, published by the National Bureau of Economic Research, found that the unemployment number released by the government suffers from a problem faced by other pollsters: Lack of response. This problem dates back to a 1994 redesign of the survey when it went from paper-based to computer-based, although neither the researchers nor anyone else has been able to offer a reason for why the redesign has affected the numbers.

What the researchers found was that, for whatever reason, unemployed workers, who are surveyed multiple times, are most likely to respond to the survey when they are first given it and ignore the survey later on.

The report notes, “It is possible that unemployed respondents who have already been interviewed are more likely to change their responses to the labor force question, for example, if they want to minimize the length of the interview (now that they know the interview questions) or because they don’t want to admit that they are still unemployed.”

This ends up inaccurately weighting the later responses and skewing the unemployment rate downward. It also seems to have increased the number of people who once would have been designated as officially unemployed but today are labeled as out of the labor force, which means they are neither working nor looking for work.

And the second study suggests some of this data could be collected via Twitter by looking for key phrases.

This highlights the broader issue of survey fatigue, in which respondents become less likely to respond to or completely fill out a survey. Survey fatigue hampers important data collection efforts across a wide range of fields. Given the importance of the unemployment figures for American politics and economic life, this is a data problem worth solving.
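To see why differential nonresponse matters, here is a toy calculation in Python. It is not the actual CPS weighting procedure, just an illustration with made-up response rates: if unemployed respondents answer later rounds of the survey less often than employed respondents do, a rate computed from respondents alone understates the true rate.

```python
# Toy illustration (not the CPS methodology) of how differential nonresponse
# can pull an estimated unemployment rate below the true rate.
# All figures below are made up for the sake of the example.

true_unemployed = 100            # truly unemployed people in the sample
true_employed = 900              # truly employed people in the sample

response_rate_unemployed = 0.60  # assumed: unemployed answer less often in later rounds
response_rate_employed = 0.90    # assumed: employed keep answering

responding_unemployed = true_unemployed * response_rate_unemployed   # 60
responding_employed = true_employed * response_rate_employed         # 810

true_rate = true_unemployed / (true_unemployed + true_employed)
estimated_rate = responding_unemployed / (responding_unemployed + responding_employed)

print(f"true unemployment rate:      {true_rate:.1%}")       # 10.0%
print(f"rate among respondents only: {estimated_rate:.1%}")  # about 6.9%
```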

A side thought: instead of searching Twitter for key words, why not deliver survey instruments like this through Twitter or smartphones? The surveys would have to be relatively short but they could have the advantage of seeming less time-consuming and could get better data.

Changing the measurement of poverty leads to 400 million more in poverty around the world

Researchers took a new look at global poverty, developed more specific measures, and found a lot more people living in poverty:

So OPHI reconsidered poverty from a new angle: a measure of what the authors term generally as “deprivations.” They relied on three datasets that do more than capture income: the Demographic and Health Survey, the Multiple Indicators Cluster Survey, and the World Health Survey, each of which measures quality of life indicators. Poverty wasn’t just a vague number anymore, but a snapshot of on-the-ground conditions people were facing.

OPHI then created the new index (the MPI) that collected ten needs beyond “the basics” in three broader categories: nutrition and child mortality under Health; years of schooling and school attendance under Education; and cooking fuel, sanitation, water, electricity, floor, and assets under Living Conditions. If a person is deprived of a third or more of the indicators, he or she would be considered poor under the MPI. And degrees of poverty were measured, too: Did your home lack a roof or did you have no home at all?

Perhaps the MPI’s greatest feature is that it can locate poverty. Where the HPI would just tell you where a country stood in comparison to others, the MPI maps poverty at a more granular level. With poverty mapped in greater detail, aid workers and policy makers have the opportunity to be more targeted in their work.

So what did we find out about poverty now that we can measure it better? Sadly, the world is more impoverished than we previously thought. The HPI has put this figure at 1.2 billion people. But under the MPI’s measurements, it’s 1.6 billion people. More than half of the impoverished population in developing countries lives in South Asia, and another 29 percent in Sub-Saharan Africa. Seventy-one percent of MPI’s poor live in what is considered middle income countries—countries where development and modernization in the face of globalization is in full swing, but some are left behind. Niger is home to the highest concentration of multidimensionally poor, with nearly 90 percent of its population lacking in MPI’s socioeconomic indicators. Most of the poor live in rural areas.

This reminds me of Bill Gates’ suggestion a few years ago that one of the best ways to help address global issues is to set goals and collect better data. Based on this, the world could use more people who can work at collecting and analyzing data. If poverty is at least somewhat relative (beyond the basic needs of absolute poverty) and multidimensional, then defining it is an important ongoing task.
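Since the cutoff rule is the heart of the MPI, here is a minimal sketch of how such a deprivation score might be computed. The nested equal weights (each of the three dimensions counts for one third, split evenly across its indicators) are an assumption drawn from the published MPI methodology; the excerpt above only states the one-third cutoff.

```python
from fractions import Fraction

# Minimal sketch of an MPI-style deprivation score. The three dimensions and
# ten indicators come from the excerpt above; the nested equal weighting is an
# assumption (the excerpt only states the one-third cutoff).
INDICATORS = {
    "health": ["nutrition", "child_mortality"],
    "education": ["years_of_schooling", "school_attendance"],
    "living_conditions": ["cooking_fuel", "sanitation", "water",
                          "electricity", "floor", "assets"],
}

def deprivation_score(deprived):
    """Weighted share of indicators on which a household is deprived."""
    score = Fraction(0)
    for dimension, indicators in INDICATORS.items():
        weight_each = Fraction(1, 3) / len(indicators)  # dimension weight split evenly
        score += weight_each * sum(1 for ind in indicators if ind in deprived)
    return score

def is_mpi_poor(deprived, cutoff=Fraction(1, 3)):
    """A household is multidimensionally poor if its score meets the cutoff."""
    return deprivation_score(deprived) >= cutoff

# Example: deprived in nutrition (1/6) plus water, sanitation, and electricity
# (1/18 each) gives exactly 1/3, so the household counts as MPI poor.
print(is_mpi_poor({"nutrition", "water", "sanitation", "electricity"}))  # True
```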

A video goes viral with 320,000+ views in one week?

This silent newsreel of the 1919 Black Sox World Series is a great find. A news story about the video suggests it went viral with over 320,000 views in its first week online. Is this enough views to go viral?

This is an ongoing issue for stories and reports regarding online behavior. When does something go from being an online object of interest to some people to being a trend? Reporters often find Facebook groups or a few blog posts and turn that into a trend. Perhaps this is better than interviewing a few people on the street – also still done – but with so many online groups, tweets, and posts out there, a handful of examples hardly establishes a trend.

We need some sort of metric or guidelines for making such proclamations. Unfortunately, there is little agreement about this for websites: should we count page views, unique visitors, click-throughs or something else? Should we just count the number of Twitter followers even though they can be purchased? Other media have agreed-upon metrics like Nielsen ratings or book sales or digital downloads.

In the meantime, I would suggest 320,000 or so views is not quite going viral.

Adding creative endeavors to GDP

The federal government is set to change how it measures GDP and the new measure will include creative work:

The change is relatively simple: The BEA will incorporate into GDP all the creative, innovative work that is the backbone of much of what the United States now produces. Research and development has long been recognized as a core economic asset, yet spending on it has not been included in national accounts. So, as the Wall Street Journal noted, a Lady Gaga concert and album are included in GDP, but the money spent writing the songs and recording the album is not. Factories buying new robots counted; Pfizer’s expenditures on inventing drugs were not.

As the BEA explains, it will now count “creative work undertaken on a systematic basis to increase the stock of knowledge, and use of this stock of knowledge for the purpose of discovering or developing new products, including improved versions or qualities of existing products, or discovering or developing new or more efficient processes of production.” That is a formal way of saying, “This stuff is a really big deal, and an increasingly important part of the modern economy.”

The BEA estimates that in 2007, for example, adding in business R&D would have added 2 percent to U.S. GDP, or about $300 billion. Adding in the various inputs into creative endeavors such as movies, television and music will mean an additional $70 billion. A few other categories bring the total addition to over $400 billion. That is larger than the GDP of more than 160 countries…

The new framework will not stop the needless and often harmful fetishizing of these numbers. GDP is such a simple round number that it is catnip to commentators and politicians. It will still be used, incorrectly, as a proxy for our economic lives, and it will still frame our spending decisions more than it should. Whether GDP is up 2 percent or down 2 percent affects most people minimally (down a lot, quickly, is a different story). The wealth created by R&D that was statistically less visible until now benefited its owners even though the figures didn’t reflect that, and faster GDP growth today doesn’t help a welder when the next factory will use a robot. How wealth is used, who benefits from it and whether it is being deployed for sustainable future growth, that is consequential. GDP figures, even restated, don’t tell us that.

On one hand, changing a measure so that it more accurately reflects the economy is a good thing. This could help increase the validity of the measure. On the other hand, measures can still be used well or poorly, the change may not be a complete improvement over previous measures, and it may be difficult to reconcile new figures with past figures. It is not quite as easy as simply “improving” a measure; a lot of other factors are involved. It will be interesting to see how this measurement change plays out in the coming years and how the information is utilized.

Help needed in measuring online newspaper readership

The newspaper industry is in trouble and it doesn’t help that there is not an agreed-upon way to measure online readership:

It’s no longer uncommon for someone to own three or four devices that can access news content at home, work or almost anywhere. This array causes headaches for newspaper publishers and editors and sows confusion for advertisers who want to know how many readers a newspaper has. How should they be counted? Where should advertisers put their dollars? How many readers does an online advertisement reach? What’s an ad worth anymore?

Perhaps as vexing is who is counting readers and who counts them best. Unlike the methods Arbitron and Nielsen use to develop radio and TV ratings, the science of counting online and digital news consumers has existed only for a short time. At least nine companies have crowded into the business of measuring digital audiences over the past 15 years. Each company employs its own methodology to collect data. And because digital technology seems to leap forward almost every day, measurement techniques that were acceptable yesterday may not be adequate tomorrow.

With the money and prestige at stake in advertising, you would think there would be more agreement here. Without agreed-upon standards, newspapers can claim very different numbers and there is no way to really sort it out.

Why can’t newspapers themselves pick a provider or two they like, perhaps one that is more generous in its counting, and run with it as an industry?

Dana Chinn, a lecturer at the University of Southern California’s Annenberg School for Communication and Journalism, said newspapers haven’t kept up with other industries that do business online.

“There is a stark contrast between the news industry and e-commerce, in that e-commerce is saying analytics is do or die for us because we are a digital business,” Chinn said. “News organizations don’t say that, because if they did they would use the right metrics. All the news organizations I know are usually using the wrong metrics to make the decisions that are needed to survive.”

This is a reminder that money-making today is very closely tied to measurement, particularly when you are selling online information.

Disagreeing lists: most religious US metro area vs. the most Bible-minded cities

There are multiple ways to measure religion and two lists about religiosity in American cities illustrate this:

According to Gallup, Provo-Orem is the most religious U.S. metro area, with 77 percent of residents identifying as “very religious.” That’s a full 13 percentage points higher than the second-ranked city—Montgomery, Alabama—where 64 percent of residents say they are very religious.

Of the top 10 most religious cities identified by Gallup, only three are outside of the South: Provo-Orem; Ogden-Clearfield, Utah; and Holland-Grand Haven, Mich.

But of greater interest, Gallup’s list looks significantly different from one released by Barna Group and American Bible Society earlier this year. Barna’s list of America’s most “Bible-minded” cities, based on “highest combined levels of regular Bible reading and belief in the Bible’s accuracy,” listed Knoxville, Tenn., as the top city. However, Gallup’s ranking shows that fewer than 50 percent of Knoxville residents identify as “very religious”; Knoxville was nowhere near Gallup’s top 10—or even the top 20.

In fact, only two of Barna’s top 10 most Bible-minded cities correspond with Gallup’s: Barna’s fifth-ranked Jackson, Miss., and ninth-ranked Huntsville, Ala., are third and fifth among Gallup’s cities, respectively. Two other top Barna picks (Shreveport, La., and Chattanooga, Tenn.) fell within Gallup’s top 20.

The lists’ least-religious/least Bible-minded cities don’t exactly line up either. Whereas most of Barna’s picks are in the New England region, Gallup reports the lowest percentages of “very religious” believers in West coast cities.

While these two lists may both be dealing with aspects of religion, we shouldn’t be surprised they have different findings. Barna, as it often does, is looking at a specific aspect of Christian practice as understood by a particular Christian group while Gallup is taking a broader view and ends up with a city with a heavy concentration of Mormons at the top of the list (and the only Utah city on the list, Salt Lake City, is #84 out of 96 on Barna’s list). We could take other aspects of religiosity, such as church attendance or giving to churches and religious organizations or feeling “spiritual,” and the results across cities could differ.

It does appear, however, that the two lists generally agree that the South and Midwest/Great Plains (+ Utah) are more religious than the Northeast and West.

Measuring audience reaction: from the applause of crowds to Facebook likes

Megan Garber provides an overview of applause, “the big data of the ancient world”:

Scholars aren’t quite sure about the origins of applause. What they do know is that clapping is very old, and very common, and very tenacious — “a remarkably stable facet of human culture.” Babies do it, seemingly instinctually. The Bible makes many mentions of applause – as acclamation, and as celebration. (“And they proclaimed him king and anointed him, and they clapped their hands and said, ‘Long live the king!'”)

But clapping was formalized — in Western culture, at least — in the theater. “Plaudits” (the word comes from the Latin “to strike,” and also “to explode”) were the common way of ending a play. At the close of the performance, the chief actor would yell, “Valete et plaudite!” (“Goodbye and applause!”) — thus signaling to the audience, in the subtle manner preferred by centuries of thespians, that it was time to give praise. And thus turning himself into, ostensibly, one of the world’s first human applause signs…

As theater and politics merged — particularly as the Roman Republic gave way to the Roman Empire — applause became a way for leaders to interact directly (and also, of course, completely indirectly) with their citizens. One of the chief methods politicians used to evaluate their standing with the people was by gauging the greetings they got when they entered the arena. (Cicero’s letters seem to take for granted the fact that “the feelings of the Roman people are best shown in the theater.”) Leaders became astute human applause-o-meters, reading the volume — and the speed, and the rhythm, and the length — of the crowd’s claps for clues about their political fortunes.

“You can almost think of this as an ancient poll,” says Greg Aldrete, a professor of history and humanistic studies at the University of Wisconsin, and the author of Gestures and Acclamations in Ancient Rome. “This is how you gauge the people. This is how you poll their feelings.” Before telephones allowed for Gallup-style surveys, before SMS allowed for real-time voting, before the Web allowed for “buy” buttons and cookies, Roman leaders were gathering data about people by listening to their applause. And they were, being humans and politicians at the same time, comparing their results to other people’s polls — to the applause inspired by their fellow performers. After an actor received more favorable plaudits than he did, the emperor Caligula (while clutching, it’s nice to imagine, his sword) remarked, “I wish that the Roman people had one neck.”…

So the subtleties of the Roman arena — the claps and the snaps and the shades of meaning — gave way, in later centuries, to applause that was standardized and institutionalized and, as a result, a little bit promiscuous. Laugh tracks guffawed with mechanized abandon. Applause became an expectation rather than a reward. And artists saw it for what it was becoming: ritual, rote. As Barbra Streisand, no stranger to public adoration, once complained: “What does it mean when people applaud? Should I give ’em money? Say thank you? Lift my dress?” The lack of applause, on the other hand — the unexpected thing, the relatively communicative thing — “that I can respond to.”…

Mostly, though, we’ve used the affordances of the digital world to remake public praise. We link and like and share, our thumbs-ups and props washing like waves through our networks. Within the great arena of the Internet, we become part of the performance simply by participating in it, demonstrating our appreciation — and our approval — by amplifying, and extending, the show. And we are aware of ourselves, of the new role a new world gives us. We’re audience and actors at once. Our applause is, in a very real sense, part of the spectacle. We are all, in our way, claqueurs.

Fascinating, from the human tendency across cultures to clap, to the planting of people in the audience to clap and cheer, to the rules that developed around clapping.

A few thoughts:

1. Are there notable moments in history when politicians and others thought the crowd was going one way because of applause but quickly found out that wasn’t the case? Simply going by the loudest noise seems rather limited, particularly with large crowds and outdoors.

2. The translation of clapping into Facebook likes loses the embodied nature of clapping and crowds. Yes, likes allow you to see that you are joining with others. But, there is something about the social energy of a crowd that is completely lost. Durkheim would describe this as collective effervescence and Randall Collins describes the physical nature of “emotional energy” that can be generated when humans are in close physical proximity to each other. Clapping is primarily a group behavior and is difficult to transfer to a more individualistic setting.

3. I have noticed in my lifetime the seemingly increasing prevalence of standing ovations. Pretty much every theater show I have been to in recent years has been followed by a standing ovation. My understanding is that at one point such ovations were reserved for truly spectacular performances, but now they are simply routine. Thus, the standing ovation now has a very different meaning.

Nielsen changes the definition of watching TV to include streaming

When people start watching TV in new ways, companies have to adjust and collect better data:

The decisions made by the [What Nielsen Measures] committee are not binding but a source at one of the big four networks was ecstatic at the prospect of expanded measurement tools. The networks for years have complained that total viewing of their shows isn’t being captured by traditional ratings measurements. This is a move to correct that.

By September 2013, when the next TV season begins, Nielsen expects to have in place new hardware and software tools in the nearly 23,000 TV homes it samples. Those measurement systems will capture viewership not just from the 75 percent of homes that rely on cable, satellite and over the air broadcasts but also viewing via devices that deliver video from streaming services such as Netflix and Amazon, from so-called over-the-top services and from TV enabled game systems like the X-Box and PlayStation.

While some use of iPads and other tablets that receive broadband in the home will be included in the first phase of measurement improvements, a second phase is envisioned to include such devices in a more comprehensive fashion. The second phase is envisioned to roll out on a slower timetable, according to sources, with the overall goal of capturing video viewing of any kind from any source.

Nielsen is said to have an internal goal of being able to measure video viewing on an iPad by the end of this year, a process in which the company will work closely with its clients.

This is a good example of how operationalization and measurement are not just for scientists. Here, possibly millions of dollars are at stake in advertising. It would be interesting to hear the advertisers’ side of the story; higher numbers could mean they pay more but it would also mean that they can reach bigger audiences.

So can we assume that better measurement means we will find that Americans watch more TV than we currently think?

Using the newer measure of population-weighted density

Richard Florida writes about how the Census Bureau is using a new measure of population density:

A new report from the U.S. Census Bureau helps to fill the gap, providing detailed estimates of different types of density for America’s metros. This includes new data on “population-weighted density” as well as of density at various distances from the city center. Population-weighted density, which essentially measures the actual concentration of people within a metro, is an important improvement on the standard measure of density. For this reason, I like to think of it as a measure of concentrated density. The Census calculates population-weighted density based on the average densities of the separate census tracts that make up a metro.

The differences in the two density measures are striking. The overall density across all 366 U.S. metro areas is 283 people per square mile. Concentrated or population-weighted density for all metros is over 20 times higher, at 6,321 people per square mile.

This Census report is not the first to use population-weighted density. A 2001 study by Gary Barnes of the University of Minnesota developed such a measure to examine sprawl and commuting patterns. In 2008, Jordan Rappaport of the Kansas City Fed published an intriguing study in the Journal of Urban Economics (non-gated version here), which looked at the relationship between density (including population-weighted density) and the productivity of regions. Christopher Bradford, who blogs at his Austin Contrarian, has also advocated for using population-weighted density to better understand urban development…

New York and Los Angeles are good examples of the differences between these two density measures. While they are close in the average density — 2,826 for New York versus 2,646 for L.A. — the New York metro has much higher levels of concentrated or population-weighted density, 31,251 versus 12,114 people per square mile. San Francisco, which has lower average density than L.A. (1,755 people per square mile), tops L.A. on population-weighted density with 12,145 people per square mile.

It sounds like the new measure averages the densities of the separate Census tracts, weighted by how many people live in each one, which limits the effect of sprawl: the sparsely settled outlying tracts, of which there are necessarily more in burgeoning metropolitan regions, carry far less weight than the dense tracts where most residents actually live. In other words, the effects of sprawl are less pronounced in this newer measure.
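To make the contrast concrete, here is a minimal sketch of the two measures using made-up tract figures; only the formulas follow the description quoted above (total population over total land area versus tract densities averaged with each tract weighted by its population).

```python
# Minimal sketch of conventional vs. population-weighted density for a
# hypothetical metro of three census tracts. The tract figures are made up;
# only the formulas follow the Census description.

tracts = [
    # (population, land area in square miles)
    (8000, 0.5),   # dense core tract: 16,000 people per sq mi
    (4000, 1.0),   # inner suburb: 4,000 people per sq mi
    (1000, 10.0),  # exurban fringe: 100 people per sq mi
]

def conventional_density(tracts):
    """Total population divided by total land area."""
    total_pop = sum(pop for pop, area in tracts)
    total_area = sum(area for pop, area in tracts)
    return total_pop / total_area

def population_weighted_density(tracts):
    """Average of tract densities, each tract weighted by its population,
    i.e. roughly the density experienced by the average resident."""
    total_pop = sum(pop for pop, area in tracts)
    return sum(pop * (pop / area) for pop, area in tracts) / total_pop

print(round(conventional_density(tracts)))          # about 1,130 people per sq mi
print(round(population_weighted_density(tracts)))   # about 11,085 people per sq mi
```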

This reminds me of an interesting density fact: if you use the basic measure of density (total population divided by total land area), the Los Angeles urbanized area comes out denser than the New York urbanized area. But, of course, New York City is much more dense at its core while LA is more known for its sprawl.