Determining the best colleges…using RateMyProfessors

Forbes recently published another installment of their rankings of the best colleges in America. One of the questions that arises with such a list is the methodology behind the rankings. To their credit, Forbes provides a lengthy explanation.

Even as the ranking is supposedly from the point of view of students, I initially had some questions about one of the major criteria, which accounts for 17.5% of the score for a college: using student evaluations of professors at RateMyProfessors.com. At first, this sounded crazy to me – how representative is the data from RateMyProfessors and does it accurately reflect what is going on in the classroom?

Forbes sums up why they used this data:

In spite of some drawbacks of student evaluations of teaching, they apparently have value for the 86% of schools that have some sort of internal evaluation system. RMP ratings give similar results to these systems. Moreover, they are a measure of consumer preferences, which is what is critically important in rational consumer choice. When combined with the significant advantages of being uniform across different schools, not being subject to easy manipulation by schools, and being publicly available, RMP data is a preferred data source for information on student evaluations of teaching–it is the largest single uniform data set we know of student perceptions of the quality of their instruction.

To recap why they used data from RateMyProfessors:

1. RMP ratings are similar to evaluation scores gathered by colleges. There is some scholarly research to back this up.

2. RMP ratings are “a measure of consumer preference.” This is data generated voluntarily by students. If Forbes wants the students’ perspective, this website offers it. (Though it is still a question whether it is a representative measure – but point #1 may take care of that.)

3. RMP ratings are perhaps the only data source to answer the question of what students experience in the classroom. It may not be perfect data but it can be used as an approximation.

Overall, Forbes’ logic makes some sense: RateMyProfessors offers a unique dataset that, when cleaned up (and they describe how they weighted and standardized the scores), offers some insight into the classroom experience.
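Forbes does not publish its code, but the standardize-and-weight step it describes might look something like this minimal sketch. The school names and raw scores here are made up for illustration; only the 17.5% weight comes from the article.

```python
# Sketch of standardizing RateMyProfessors scores and applying a weight,
# in the spirit of the step Forbes describes. All data below is invented.
from statistics import mean, pstdev

# Hypothetical average RateMyProfessors scores per school (1-5 scale).
rmp_scores = {"School A": 3.9, "School B": 3.4, "School C": 4.2}

mu = mean(rmp_scores.values())
sigma = pstdev(rmp_scores.values())

# Standardize to z-scores so schools are compared on a common scale...
z_scores = {school: (score - mu) / sigma for school, score in rmp_scores.items()}

# ...then apply the 17.5% weight this criterion carries in the 2010 ranking,
# yielding each school's contribution from the RMP component.
WEIGHT = 0.175
rmp_component = {school: WEIGHT * z for school, z in z_scores.items()}
```

The z-scores sum to zero by construction, so a school only gains or loses relative to the average; the weight then controls how much that relative standing can move the overall ranking.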

However, I’m still leery of giving 17.5% of the total score over to RateMyProfessors evaluations. Perhaps the scholarly literature will continue to examine this website and determine the value of its ratings. And you can see that Forbes is tweaking its measurements: the 2009 methodology explanation has some differences, and the score then counted for 25% of the total (compared to 17.5% in the 2010 edition).
