Measuring Presidential popularity with merchandise

There are traditional ways to measure Presidential popularity: polls that in some way measure approval or disapproval. Here is another possible way: sales of Presidential merchandise.

I’ve always wondered why Presidents or other political officials allow such merchandising, using their images and words to make money. Perhaps it is simply publicity (even if it comes from their opponents). Or perhaps they don’t want to appear to be the politician who cracks down on such things. Or perhaps by running for or entering public office, there is a tacit understanding that they are now in the public eye and can be used for money-making purposes.

And what does it mean culturally to reduce any politician to a piece of merchandise?

LA Times portal on value-added analysis of teachers

The Los Angeles Times has put together an information- and opinion-filled portal regarding its recent publication of a value-added analysis of Los Angeles teachers.

Measuring teacher performance is a tricky subject, as a number of factors are at play in a student’s academic performance. In an article, the newspaper summarizes how value-added scores are estimated:

Value-added estimates the effectiveness of a teacher by looking at the test scores of his students. Each student’s past test performance is used to project his performance in the future. The difference between the child’s actual and projected results is the estimated “value” that the teacher added or subtracted during the year. The teacher’s rating reflects his average results after teaching a statistically reliable number of students.
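The method the newspaper describes can be sketched in a few lines of code. This is a toy version with made-up numbers; the Times’ actual model is far more elaborate and controls for additional factors.

```python
# Toy sketch of the value-added idea: a teacher's rating is the average
# gap between each student's actual and projected test scores.
def value_added(results):
    """results: list of (projected_score, actual_score) pairs,
    one per student the teacher taught that year."""
    diffs = [actual - projected for projected, actual in results]
    return sum(diffs) / len(diffs)

# Hypothetical students: (projected, actual)
students = [(70, 75), (82, 80), (65, 72)]
print(round(value_added(students), 2))  # positive = "value added"
```

The real estimates also require "a statistically reliable number of students," which this sketch ignores.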

In addition to these methodological questions, there are a number of other fascinating issues: should this sort of information be publicly available, and how will it affect teachers’ performance? Is it an accurate assessment of what teachers do? What should be done for the teachers who fall outside the normal range? How will the politics of all of this play out?

For those interested in education and measuring outcomes, this all makes for interesting reading.

(As a side note: I can only imagine what discussions would ensue if similar information were published regarding college professors.)

Criteria in the college rating process across publications

There are numerous publications that rate colleges. According to this story and a very helpful graphic in The Chronicle of Higher Education, publications tend not to use the same criteria:

That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.

This suggests each publication is measuring something different as their overall scores have different inputs. This is a classic measurement issue: each publication is operationalizing “college quality” in a different way.

The suggestion about using student outcomes as a criterion is a good one. How different would the rankings look if this were taken into account? And isn’t this what administrators, faculty, and students are really concerned about? While students and families may worry about job outcomes, I’m sure faculty want to know that their students are learning and maturing.

Another debate over Washington crowd estimate

The actors are different but the question is the same: just how many people attended Glenn Beck’s “Restoring Honor” rally over the weekend in Washington, D.C.?

This is not an isolated question. The National Park Service bowed out of official estimates back in 1997:

The media, in years past, would typically cite the National Parks Service estimate, along with the organizer’s estimates (which tend to be higher). But the Parks Service stopped providing crowd estimates in 1997 after organizers of the 1995 Million Man March assailed the agency for allegedly undercounting the turnout for that event.

So various media outlets (and interested parties) are now left making competing estimates based on aerial photos, how much space a person typically takes up, and other sources.
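The arithmetic behind these competing estimates is simple, which is partly why the results vary so much: everything hinges on the assumed density of the crowd. A sketch with illustrative numbers (not actual figures from any rally):

```python
# Photo-based crowd estimation: divide the occupied area by an assumed
# amount of space per person. The assumed density drives the answer.
def crowd_estimate(occupied_sq_m, sq_m_per_person):
    return occupied_sq_m / sq_m_per_person

area = 50_000  # square meters of occupied ground, read off aerial photos
for density in (0.5, 1.0, 2.5):  # sq m per person: packed to loose
    print(density, round(crowd_estimate(area, density)))
```

With the same photo, the estimate swings by a factor of five depending on the density assumption, which helps explain why organizers and media outlets can report such different numbers.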

There has to be a better solution to this problem.

Older age = more wisdom, happiness

In a youth-oriented culture like that of the United States, growing older may not appear appealing to many. But recent research suggests that growing older leads to more wisdom and increased levels of happiness:

Contrary to largely gloomy cultural perceptions, growing old brings some benefits, notably emotional and cognitive stability. Laura Carstensen, a Stanford social psychologist, calls this the “well-being paradox.” Although adults older than 65 face challenges to body and brain, the 70s and 80s also bring an abundance of social and emotional knowledge, qualities scientists are beginning to define as wisdom. As Carstensen and another social psychologist, Fredda Blanchard-Fields of the Georgia Institute of Technology, have shown, adults gain a toolbox of social and emotional instincts as they age. According to Blanchard-Fields, seniors acquire a feel, an enhanced sense of knowing right from wrong, and therefore a way to make sound life decisions.

That may help explain the finding that old age correlates with happiness. A study published this year in the Proceedings of the National Academy of Science found a U-shaped relationship between happiness and age: Adults were happiest in youth and again in their 70s and early 80s, and least happy in middle age. A 2007 University of Chicago study similarly concluded that rates of happiness — “the degree to which a person evaluates the overall quality of his present life positively” — crept upward from age 65 to 85 and beyond, in both sexes.

These are interesting findings. Now how could American culture go about showing and sharing these benefits of growing old? Wisdom, in particular, might be a challenge to portray in commercial advertisements.

Also, there is an interesting discussion in the article about how to define and measure “wisdom.”

Varying statistics about DNA matches

New Scientist has a story about a criminal case that demonstrates how scientists can disagree about statistics regarding DNA analysis:

The DNA analyst who testified in Smith’s trial said the chances of the DNA coming from someone other than Jackson were 1 in 95,000. But both the prosecution and the analyst’s supervisor said the odds were more like 1 in 47. A later review of the evidence suggested that the chances of the second person’s DNA coming from someone other than Jackson were closer to 1 in 13, while a different statistical method said the chance of seeing this evidence if the DNA came from Jackson is only twice that of the chance of seeing it if it came from someone else…

[W]e show how, even when analysts agree that someone could be a match for a piece of DNA evidence, the statistical weight assigned to that match can vary enormously.

I recall reading something recently that suggested that while the public thinks having DNA samples makes a criminal case very clear, this is not necessarily so. This article suggests it is a lot more complicated: the statistical weight depends on which lab and which scientists are looking at the DNA samples.

Determining the best colleges…using RateMyProfessors.com?

Forbes recently published another installment of its rankings of the best colleges in America. One of the questions that arises with such a list is the methodology behind the rankings. To its credit, Forbes provides a lengthy explanation.

Even though the ranking is supposedly from the point of view of students, I initially had some questions about one of the major criteria, which accounts for 17.5% of a college’s score: student evaluations of professors at RateMyProfessors.com. At first, this sounded crazy to me – how representative is the data from RateMyProfessors.com, and does it accurately reflect what is going on in the classroom?

Forbes sums up why they used this data:

In spite of some drawbacks of student evaluations of teaching, they apparently have value for the 86% of schools that have some sort of internal evaluation system. RMP ratings give similar results to these systems. Moreover, they are a measure of consumer preferences, which is what is critically important in rational consumer choice. When combined with the significant advantages of being uniform across different schools, not being subject to easy manipulation by schools, and being publicly available, RMP data is a preferred data source for information on student evaluations of teaching–it is the largest single uniform data set we know of student perceptions of the quality of their instruction.

To recap why they used data from RateMyProfessors.com:

1. RMP ratings are similar to evaluation scores gathered by colleges. There is some scholarly research to back this up.

2. RMP ratings are “a measure of consumer preference.” This is data generated voluntarily by students. If Forbes wants the students’ perspective, this website offers it. (Though it is still a question whether it is a representative measure – but point #1 may take care of that.)

3. RMP ratings are perhaps the only data source to answer the question of what students experience in the classroom. It may not be perfect data but it can be used as an approximation.

Overall, Forbes’ logic makes some sense: RateMyProfessors.com offers a unique dataset that, when cleaned up (and they describe how they weighted and standardized the scores), offers some insights into the classroom experience.
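To make "weighted and standardized" concrete, here is a minimal sketch of how such a composite score is typically built: convert each criterion to z-scores, then combine them with weights. The colleges, raw numbers, and the second criterion are all hypothetical; only the 17.5% weight on RMP scores comes from Forbes’ explanation.

```python
# Sketch of a weighted composite of standardized (z-scored) criteria.
from statistics import mean, stdev

def zscores(values):
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical raw scores for three colleges on two criteria
rmp = [3.8, 3.2, 4.1]          # RateMyProfessors averages
grad_rate = [0.85, 0.90, 0.70] # graduation rates

weights = {"rmp": 0.175, "grad_rate": 0.825}  # RMP counts for 17.5%
z_rmp, z_grad = zscores(rmp), zscores(grad_rate)
overall = [weights["rmp"] * a + weights["grad_rate"] * b
           for a, b in zip(z_rmp, z_grad)]
print([round(x, 2) for x in overall])
```

Standardizing first matters: without it, a criterion measured on a bigger numeric scale would dominate the composite regardless of its assigned weight.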

However, I’m still leery of giving 17.5% of the total score over to RateMyProfessors.com evaluations. Perhaps the scholarly literature will continue to examine this website and determine the value of its ratings. And you can see that Forbes is tweaking its measurements: the 2009 methodology explanation has some differences, and the RateMyProfessors.com score then counted for 25% of the total score (compared to 17.5% in the 2010 edition).

Using Twitter as a data source; examining emotions and more

In April, the Library of Congress announced plans to archive all public tweets since the start of Twitter in March 2006. So what might researchers do with this data?

A recent study provides an example. Scholars from Northeastern and Harvard examined the emotions of Americans through their tweets. By coding certain words as having positive or negative emotional value, the researchers were able to map the data across time and geography. According to New Scientist:

[T]hese “tweets” suggest that the west coast is happier than the east coast, and across the country happiness peaks each Sunday morning, with a trough on Thursday evenings.

The mood map is cool.
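The word-coding approach behind the mood map can be mimicked in a toy version: count positive-coded words in a tweet and subtract negative-coded ones. The word lists here are tiny and purely illustrative; the actual study used a much larger word list with graded emotional values.

```python
# Toy sentiment scoring of tweets via positive/negative word lists.
POSITIVE = {"happy", "love", "great", "good"}
NEGATIVE = {"sad", "hate", "awful", "bad"}

def tweet_sentiment(text):
    words = text.lower().split()
    return (sum(w in POSITIVE for w in words)
            - sum(w in NEGATIVE for w in words))

print(tweet_sentiment("Such a great sunny morning, feeling happy"))  # 2
print(tweet_sentiment("Awful commute, I hate Thursdays"))            # -2
```

Aggregating scores like these by location and hour is essentially how the researchers produced the coast-to-coast and Sunday-morning patterns quoted above.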

While the findings about when people are happy may not be too surprising, the research does raise the question of the value of tweets as a data source. Since Twitter users likely skew younger, and perhaps wealthier and more educated, the data is not representative. But it could provide some insights into reactions to certain events or help identify the beginning and end of certain trends.

So what else will researchers study using tweets?

Ongoing issue of measuring online audiences

If you were examining Hulu.com’s online audience figures from the last few months, you would find some fluctuation: 43.5 million viewers in May and then 24 million viewers in June. What happened? Did something radically change with the website? Are people abandoning the practice of watching television online?

No, the main change is that ComScore changed its methodology for measuring who used the website. According to the Los Angeles Times:

The three dominant measurement firms — ComScore, Nielsen and Quantcast — have been working since 2007 with an independent media auditing group to make improvements so the Web data they report don’t have a fun-house quality, in which the same site’s traffic can look emaciated or bulging, depending on the viewer’s angle.

These firms have used different measurements over time, including panels of users (like Nielsen uses for television and radio) and embedded tags in videos and websites to track viewership. These numbers matter more than ever for advertisers, who will spend around $25 billion on online advertising in the United States in 2010.

As in many cases, knowing the means of measurement matters tremendously for interpreting statistics.

A disappearing middle class?

Yahoo Finance has a story that contains 22 statistics to “prove” the American middle class is “radically shrinking.” Interestingly, some of these statistics don’t prove much of anything about the middle class, even if they do indicate something about America as a whole. The post does show that the wealthy have gotten wealthier, but without more context (comparable statistics from the past, rates from other nations, etc.), there are better statistics to use to make this argument. Some of the statistics are tied to the latest economic downturn, such as a rising number of bankruptcies and longer job searches.

Some examples of weaker statistics:

-“36 percent of Americans say that they don’t contribute anything to retirement savings.” How does this compare to previous rates? Perhaps the Americans of today don’t save like people in the past?

-“More than 40 percent of Americans who actually are employed are now working in service jobs, which are often very low paying.” Service jobs are often low paying – but we don’t know much more from this statistic.

-“For the first time in U.S. history, more than 40 million Americans are on food stamps, and the U.S. Department of Agriculture projects that number will go up to 43 million Americans in 2011.” Sounds bad – but since we now have more people in the country, a percentage would be a much better measure.

-“Average Wall Street bonuses for 2009 were up 17 percent when compared with 2008.” This is a shot at Wall Street more than an explanation about the middle class.
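The food-stamp point above is worth making concrete: a raw count can climb simply because the population grows, while the rate barely moves. With hypothetical numbers:

```python
# Why a percentage beats a raw count: the count of recipients rises,
# but so does the population, so the rate is nearly flat.
def rate(count, population):
    return 100 * count / population

print(round(rate(30_000_000, 230_000_000), 1))  # earlier year: 13.0%
print(round(rate(40_000_000, 309_000_000), 1))  # later year: 12.9%
```

The count went up by a third while the share of the population actually dipped slightly, which is why "more than 40 million Americans" alone tells us little.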

Other statistics do back up the author’s point (even though they would all benefit from more explanation):

-“66 percent of the income growth between 2001 and 2007 went to the top 1% of all Americans.”

-“Only the top 5 percent of U.S. households have earned enough additional income to match the rise in housing costs since 1975.”

-“The bottom 50 percent of income earners in the United States now collectively own less than 1 percent of the nation’s wealth.”

On the whole, this seems like more of an alarmist piece. There is evidence to back up the argument – but the evidence here is not presented well and needs a lot more context.