Forbes’ college rankings signal possible trend of looking at alumni earnings and status

The college rankings business is a lucrative one, and there are a number of different players with a number of different measures. Forbes recently released its 2011 rankings, and it takes a particular angle that seems aimed at unseating the rankings of US News & World Report:

Our annual ranking of the 650 best undergraduate institutions focuses on the things that matter the most to students: quality of teaching, great career prospects, graduation rates and low levels of debt. Unlike other lists, we pointedly ignore ephemeral measures such as school “reputation” and ill-conceived metrics that reward wasteful spending. We try and evaluate the college purchase as a consumer would: Is it worth spending as much as a quarter of a million dollars for this degree? The rankings are prepared exclusively for Forbes by the Center for College Affordability and Productivity, a Washington, D.C. think tank founded by Ohio University economist Richard Vedder.

With phrases like “ephemeral measures” and “ill-conceived metrics,” Forbes claims to have a better methodology. This new approach helps fill a particular niche in the college rankings market: those looking for the “biggest bang for your educational buck.”

In their rankings, 30% of the final score is based on “Post-Graduate Success.” This comprises three values: “Listings of Alumni in Who’s Who in America” (10%), “Salary of Alumni” (15%), and “Alumni in Forbes/CCAP Corporate Officers List” (5%). These may be reliable measures (Forbes goes to some effort to defend them), but I think there is a larger issue at play here: are these good measures by which to evaluate a college degree and experience? Is a college degree simply about obtaining a certain income and status?
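
For concreteness, here is a back-of-the-envelope sketch of how weighted sub-scores like these combine. The component names and weights come from the Forbes breakdown above; the per-component scores and the 0–100 scale are invented purely for illustration:

```python
# Weights for the "Post-Graduate Success" components, per Forbes
# (together they make up 30% of a school's total score).
weights = {
    "whos_who_listings": 0.10,
    "alumni_salary": 0.15,
    "corporate_officers": 0.05,
}

# Hypothetical component scores for one school (0-100 scale, made up).
scores = {
    "whos_who_listings": 60.0,
    "alumni_salary": 80.0,
    "corporate_officers": 40.0,
}

# Contribution of "Post-Graduate Success" to the school's final score:
# 0.10*60 + 0.15*80 + 0.05*40 = 20.0 points
post_grad_points = sum(weights[k] * scores[k] for k in weights)
print(post_grad_points)
```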

At this point, many rankings and assessment tools rely on the experiences of students while they are in school. But with the rising price of a college degree and a growing interest in showing that students actually learn important skills and content in college, I think we’ll see more measures of, and a greater emphasis placed on, post-graduation information. This push will probably come both from outsiders (Forbes, parents and students, the government, etc.) and from college insiders. This could be good and bad. On the good side, it could help schools tailor their offerings and training to what students need to succeed in the adult world. On the bad side, if value or bang-for-your-buck becomes the overriding concern, college and particular degrees simply become paths to higher- or lower-income outcomes. This could particularly harm liberal arts schools and non-professional majors.

In the coming years, perhaps Forbes will steal some of the market away from US News with the financial angle. But this push is not without consequences for everyone involved.

(Here is another methodological concern: 17.5% of a school’s total score is based on ratings from RateMyProfessors.com. Forbes suggests this data cannot be manipulated by schools and is uniform across schools, but this is a pretty high percentage to stake on one website.)

(Related: a new report rates colleges by debt per degree. A quick explanation:

Its authors say they aim to give a more complete picture of higher education — rather than judging by graduation rates alone or by default rates alone — by dividing the total amount of money undergraduates borrow at a college by the number of degrees it awards.

We’ll see if this catches on.)
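
The debt-per-degree calculation described in that report is simple to state precisely. A minimal sketch, with the dollar and degree figures invented for illustration:

```python
def debt_per_degree(total_borrowed: float, degrees_awarded: int) -> float:
    """Total undergraduate borrowing at a college divided by the
    number of degrees it awards, per the report's description."""
    return total_borrowed / degrees_awarded

# Hypothetical school: $50M borrowed by undergraduates, 2,500 degrees awarded.
print(debt_per_degree(50_000_000, 2_500))  # 20000.0 dollars of debt per degree
```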

Determining the best colleges…using RateMyProfessors.com

Forbes recently published another installment of its rankings of the best colleges in America. One of the questions that arises with such a list is the methodology behind the rankings. To their credit, Forbes provides a lengthy explanation.

Even though the ranking is supposedly from the point of view of students, I initially had some questions about one of the major criteria, which accounts for 17.5% of the score for a college: student evaluations of professors at RateMyProfessors.com. At first, this sounded crazy to me – how representative is the data from this website, and does it accurately reflect what is going on in the classroom?

Forbes sums up why they used this data:

In spite of some drawbacks of student evaluations of teaching, they apparently have value for the 86% of schools that have some sort of internal evaluation system. RMP ratings give similar results to these systems. Moreover, they are a measure of consumer preferences, which is what is critically important in rational consumer choice. When combined with the significant advantages of being uniform across different schools, not being subject to easy manipulation by schools, and being publicly available, RMP data is a preferred data source for information on student evaluations of teaching–it is the largest single uniform data set we know of student perceptions of the quality of their instruction.

To recap why they used data from RateMyProfessors.com:

1. RMP ratings are similar to evaluation scores gathered by colleges. There is some scholarly research to back this up.

2. RMP ratings are “a measure of consumer preference.” This is data generated voluntarily by students. If Forbes wants the students’ perspective, this website offers it. (Though it is still a question whether this is a representative measure – point #1 may take care of that.)

3. RMP ratings are perhaps the only data source to answer the question of what students experience in the classroom. It may not be perfect data but it can be used as an approximation.

Overall, Forbes’ logic makes some sense: RateMyProfessors.com offers a unique dataset that, when cleaned up (they describe how they weighted and standardized the scores), offers some insights into the classroom experience.
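
Forbes doesn’t reproduce its exact formula here, but a standard way to make ratings comparable across schools is z-score standardization. A minimal sketch under that assumption, with hypothetical average ratings:

```python
import statistics

# Hypothetical average RateMyProfessors ratings for a handful of schools.
# A z-score puts each school on a common scale (mean 0, standard deviation 1),
# one plausible form of the "standardization" Forbes describes.
ratings = {"School A": 3.9, "School B": 3.4, "School C": 4.2, "School D": 3.7}

mean = statistics.mean(ratings.values())   # 3.8
sd = statistics.stdev(ratings.values())    # sample standard deviation

z_scores = {school: (r - mean) / sd for school, r in ratings.items()}
for school, z in sorted(z_scores.items()):
    print(f"{school}: {z:+.2f}")
```

On standardized scores like these, a school’s rating reflects how far it sits above or below the group average rather than its raw 1-to-5 number, which is what makes cross-school comparison defensible.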

However, I’m still leery of giving 17.5% of the total score over to RateMyProfessors.com evaluations. Perhaps the scholarly literature will continue to examine this website and determine the value of its ratings. And you can see that Forbes is tweaking its measurements: the 2009 methodology explanation has some differences, and this component then counted for 25% of the total score (compared to 17.5% in the 2010 edition).