On list of generous nations, US ranks 5th

Gallup has released “The World Giving Index 2010” and the United States is tied for fifth with Switzerland and behind Australia and New Zealand (tied for first) and Ireland and Canada (tied for third).

It looks like respondents were asked whether they did three things within the past month: gave money to an organization, volunteered for an organization, or helped someone they didn’t know.

Gallup suggests “the level of satisfaction or happiness of the population is emerging as the key driver for increasing the giving of money.” They also argue there could be “a positive cycle of giving” where happier people give to others who then are more likely to give.

I would be interested to know how much a country’s culture affects this. Are there certain societal traits that lead to more giving? Or are there certain economic and governmental structures that encourage more giving?

Measuring Presidential popularity with merchandise

There are traditional ways to measure Presidential popularity: polls that in some way measure approval or disapproval. Here is another possible way: sales of Presidential merchandise.

I’ve always wondered why Presidents or other political officials allow such merchandising that uses their figures and words to make money. Perhaps it is simply publicity (even if it is in opposition to them). Or perhaps they don’t want to appear to be the politician who cracks down on such things. Or perhaps by running for or entering public office, there is a tacit understanding that they are now in the public eye and can be used for money-making purposes.

And what does it mean culturally to reduce any politician to a piece of merchandise?

Discussions about student-learning outcomes among college boards

As discussions about assessment and student-learning outcomes build on college campuses, a new report looks at what governing boards think about their discussions of student-learning outcomes:

While oversight of educational quality is a critical responsibility of college boards of trustees, a majority of trustees and chief academic officers say boards do not spend enough time discussing student-learning outcomes, and more than a third say boards do not understand how student learning is assessed, says a report issued on Thursday by the Association of Governing Boards of Universities and Colleges.

According to the survey respondents, boards tend to focus on business matters. But this issue of assessment and student-learning outcomes is one that is likely to affect all levels of colleges and universities.

(A note about how the results were obtained: the survey asked 1,300 chief academic officers and chairs of board committees on academic affairs how boards oversee academic quality. The response rate was only 38%.)

Untangling the effects of income on happiness

Examining the relationship between income and happiness can be tricky. A recent study conducted by two Princeton researchers, and summarized by LiveScience, illustrates some of the issues in this research field:

-The researchers were working with a large dataset that is built around a daily survey of Americans: “they analyzed more than 450,000 responses to the Gallup-Healthways Well-Being Index, a daily survey of 1,000 U.S. residents conducted by the Gallup Organization.”

-Changes in income were measured in terms of percentages rather than absolute numbers. This was done because a percentage change in income allows for better comparisons across income levels. As the researchers note: “In the context of income, a $100 raise does not have the same significance for a financial services executive as for an individual earning the minimum wage, but a doubling of their respective incomes might have a similar impact on both.”
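
The researchers’ point can be illustrated with a few lines of arithmetic. This is a hypothetical example (the incomes are made up), showing why studies like this often work with the logarithm of income: a doubling is the same size change on a log scale regardless of the starting income.

```python
import math

# Hypothetical incomes: a minimum-wage earner and an executive.
low, high = 15_000, 500_000

# A flat $100 raise is a very different relative change for each.
print(100 / low)   # about 0.67% of the low income
print(100 / high)  # only 0.02% of the high income

# Doubling either income is the same change on a log scale,
# which is why income is often modeled as log(income).
print(math.log(2 * low) - math.log(low))    # log(2) ≈ 0.693
print(math.log(2 * high) - math.log(high))  # log(2) ≈ 0.693
```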

-Survey respondents answered questions related to two measures of happiness: overall life satisfaction and what their emotions were the day before. According to the LiveScience article: “For life evaluation, participants indicated on a scale from zero to 10, from worst to best possible, how they would rate their lives. For emotional well-being, participants answered yes/no questions about whether they had experienced various positive and negative emotions a lot during the prior day.” Having both of these dimensions is critical as a general question about happiness might be interpreted differently (do the researchers mean happy right now or overall?) by respondents.

-Some of the findings: “Low income seemed to magnify the emotional pain of life’s misfortunes, including divorce, illness and loneliness.” However, there was a tipping point of $75,000 beyond which more money didn’t help improve one’s well-being:

The researchers suggest that making anything more than $75,000 no longer improves a person’s ability to spend time with friends, avoid pain and disease and enjoy leisure time – all factors involved in emotional well-being.

“It also is likely that when income rises beyond this value, the increased ability to purchase positive experiences is balanced, on average, by some negative effects,” they write. For instance, a past study revealed a link between high income and a reduced ability to savor small pleasures, the researchers noted.

This tipping point of $75,000 is above the median income in the United States. I would be curious to know if individuals feel this tipping point when their income does rise to this level – are they cognizant of this point? Or once they reach $75,000, are they still locked into a mindset that having more money will lead to increasing levels of well-being?

Also, this $75,000 point could be quite fluid. Over time, this point would change based on economic conditions and cultural understandings of what is a “good income.”

The poor cleanliness of home kitchens

Occasionally, one can find stories about how dirty homes can be. Here is more evidence, this time regarding unclean kitchens:

The small study from California’s Los Angeles County found that only 61 percent of home kitchens would get an A or B if put through the rigors of a restaurant inspection. At least 14 percent would fail — not even getting a C.

In comparison, nearly all Los Angeles County restaurants — 98 percent — get A or B scores each year.

On their own, these are interesting results: restaurant kitchens are generally cleaner than home kitchens. But there is more to this story: how exactly the researchers found out about the kitchens.

The study, released Thursday, is believed to be one of the first to offer a sizable assessment of food safety in private homes. But the researchers admit the way it was done is hardly perfect.

The results are based not on actual inspections, but on an Internet quiz taken by about 13,000 adults.

So it’s hard to use it to compare the conditions in home kitchens to those in restaurants, which involve trained inspectors giving objective assessments of dirt, pests, and food storage and handling practices.

What’s more, experts don’t believe the study is representative of all households, because people who are more interested and conscientious about food safety are more likely to take the quiz.

A more comprehensive look would probably find that an even smaller percentage of home kitchens would do well in a restaurant inspection, he suggested.

On one hand, this sounds like innovative research that is the first to provide a broad overview of the cleanliness of American kitchens. On the other hand, the way the data was collected suggests one should be wary of drawing definitive conclusions.

The online quiz is also reliant on self-reporting.

LA Times portal on value-added analysis of teachers

The Los Angeles Times has put together an information- and opinion-filled portal regarding its recent publication of a value-added analysis of Los Angeles teachers.

Measuring teacher performance is a tricky subject as there are a number of factors at play in a student’s academic performance. In an article, the newspaper summarizes how value-added scores are estimated:

Value-added estimates the effectiveness of a teacher by looking at the test scores of his students. Each student’s past test performance is used to project his performance in the future. The difference between the child’s actual and projected results is the estimated “value” that the teacher added or subtracted during the year. The teacher’s rating reflects his average results after teaching a statistically reliable number of students.
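
The calculation described above can be sketched in a few lines. This is a deliberately simplified, hypothetical version (the Times’ actual model controls for more factors and relies on many years of data); it shows only the core idea of averaging the gaps between actual and projected scores.

```python
def value_added(students):
    """Average gap between actual and projected test scores
    across a teacher's students."""
    gaps = [actual - projected for projected, actual in students]
    return sum(gaps) / len(gaps)

# Hypothetical (projected score, actual score) pairs for two classes.
class_a = [(60, 68), (75, 74), (50, 59)]  # mostly beat projections
class_b = [(80, 72), (65, 60), (70, 71)]  # mostly fell short

print(value_added(class_a))  # positive: the teacher "added value"
print(value_added(class_b))  # negative: the teacher "subtracted value"
```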

In addition to these methodological questions, there are a number of other fascinating issues: should this sort of information be publicly available, and how will it affect teachers’ performance? Is it an accurate assessment of what teachers do? What should be done for the teachers who fall outside the normal range? How will the politics of all of this play out?

For those interested in education and measuring outcomes, this all makes for interesting reading.

(As a side note: I can only imagine what discussions would ensue if similar information were published regarding college professors.)

Criteria in the college rating process across publications

There are numerous publications that rate colleges. According to this story and a very helpful graphic in The Chronicle of Higher Education, publications tend not to use the same criteria:

That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.

This suggests each publication is measuring something different as their overall scores have different inputs. This is a classic measurement issue: each publication is operationalizing “college quality” in a different way.

The suggestion about using student outcomes as a criterion is a good one. How different would the rankings look if this were taken into account? And isn’t this what administrators, faculty, and students are really concerned about? While students and families may worry about job outcomes, I’m sure faculty want to know that their students are learning and maturing.

WEIRD (Western, educated, industrialized, rich, democratic) people may indeed be weird

A new article in Brain and Behavioral Sciences makes a thought-provoking cross-cultural conclusion about WEIRD people:

The article, titled “The weirdest people in the world?”, appears in the current issue of the journal Brain and Behavioral Sciences. Dr. Henrich and co-authors Steven Heine and Ara Norenzayan argue that life-long members of societies that are Western, educated, industrialized, rich, democratic — people who are WEIRD — see the world in ways that are alien from the rest of the human family. The UBC trio have come to the controversial conclusion that, say, the Machiguenga are not psychological outliers among humanity. We are…

WEIRD people, the UBC researchers argue, have unusual ideas of fairness, are more individualistic and less conformist than other people. In many of these respects, Americans are the most “extreme” Westerners, especially young ones. And educated Americans are even more extremely WEIRD than uneducated ones…

One consequence of this argument, pointed out by the authors, is that WEIRD people make a poor population for studies and experiments because the results may not be generalizable.

I wonder how average Westerners, and Americans in particular, would react after reading this argument. Perhaps it might fit in with some of the ideas regarding “American exceptionalism” – though whether this is good or bad could be debated.

Regardless, if other researchers agree with these conclusions, it suggests that social science studies about humanity need to be expanded across the globe. The era of the undergraduate research subject might then be over.

The difficulties of polling for primary elections

Some polls about recent primary races in several states have been off. In a report from ABC, some of the difficulties in predicting primary elections are discussed:

Experts say this year’s primaries are highlighting some of the pitfalls of political polling.

“As a general rule, primaries are much harder to predict than general elections,” said Brad Coker, managing director of Mason-Dixon Polling and Research. “The hard part is figuring out who’s going to show up.”

Pollsters say developing a fundamental sense of who is going to vote is harder to do in primary elections when turnout is historically lower and more variable. There’s also the early and absentee voting factor.

This sounds like a sampling issue. Political polls aim to survey people who are likely to vote. But if a large percentage of the sample that pollsters reach isn’t going to vote, the results are not very trustworthy.
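
A quick simulation illustrates the problem. In this hypothetical setup (all numbers are made up), actual primary voters favor candidate A, while non-voters who still answer the poll lean the other way; the more non-voters slip into the sample, the further the poll drifts from the true result.

```python
import random

random.seed(1)

# Hypothetical electorate: actual primary voters favor A 55/45,
# while non-voters who answer polls favor A only 40/60.
voters    = ["A"] * 55 + ["B"] * 45
nonvoters = ["A"] * 40 + ["B"] * 60

def poll(share_nonvoters, n=2000):
    """Share of respondents backing A when a poll can't screen out
    non-voters: each respondent is a non-voter with the given probability."""
    sample = [random.choice(nonvoters if random.random() < share_nonvoters
                            else voters)
              for _ in range(n)]
    return sample.count("A") / n

print(poll(0.0))  # close to 0.55, the true electorate
print(poll(0.5))  # nearer 0.475, dragged down by the non-voters
```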

The value of using multiple coders

A well-known psychologist from Harvard is in trouble for allegedly reporting false data from laboratory studies. How the allegations surfaced is illustrative of why researchers should have more than just one person looking at data. As reported in the Chronicle of Higher Education, here is what happened after the psychologist and a graduate student coded an experiment involving rhesus monkeys:

According to the document that was provided to The Chronicle, the experiment in question was coded by Mr. Hauser and a research assistant in his laboratory. A second research assistant was asked by Mr. Hauser to analyze the results. When the second research assistant analyzed the first research assistant’s codes, he found that the monkeys didn’t seem to notice the change in pattern. In fact, they looked at the speaker more often when the pattern was the same. In other words, the experiment was a bust.

But Mr. Hauser’s coding showed something else entirely: He found that the monkeys did notice the change in pattern—and, according to his numbers, the results were statistically significant. If his coding was right, the experiment was a big success.

The second research assistant was bothered by the discrepancy. How could two researchers watching the same videotapes arrive at such different conclusions? He suggested to Mr. Hauser that a third researcher should code the results. In an e-mail message to Mr. Hauser, a copy of which was provided to The Chronicle, the research assistant who analyzed the numbers explained his concern. “I don’t feel comfortable analyzing results/publishing data with that kind of skew until we can verify that with a third coder,” he wrote.

A graduate student agreed with the research assistant and joined him in pressing Mr. Hauser to allow the results to be checked, the document given to The Chronicle indicates. But Mr. Hauser resisted, repeatedly arguing against having a third researcher code the videotapes and writing that they should simply go with the data as he had already coded it. After several back-and-forths, it became plain that the professor was annoyed.

These discrepancies in the data suggested that something similar had happened in other experiments.

Having multiple coders is good for several reasons:

1. It helps eliminate or catch problems such as these, where someone might be tempted to falsify data.

2. It helps interpret ambiguous situations.

3. It demonstrates to the broader research community that the results are more than just one person’s conclusions. (This should also be aided by the review process as other researchers look over the work.)
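
When two coders do watch the same material, their agreement can be quantified. A standard measure is Cohen’s kappa, which corrects raw agreement for the agreement expected by chance; the sketch below uses made-up codes for whether a monkey “looked” on each trial.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical trial-by-trial codes from two coders.
coder1 = ["look", "look", "none", "look", "none", "none", "look", "none"]
coder2 = ["look", "none", "none", "look", "none", "look", "look", "none"]
print(cohens_kappa(coder1, coder2))  # 0.5: moderate agreement
```

A kappa near 1 means near-perfect agreement; a kappa near 0 means the coders agree no more often than chance would predict, which is exactly the kind of discrepancy a third coder should resolve.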