Criteria in the college rating process across publications

There are numerous publications that rate colleges. According to a story, with a very helpful graphic, in The Chronicle of Higher Education, these publications tend not to use the same criteria:

That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.

This suggests each publication is measuring something different, since their overall scores draw on different inputs. This is a classic measurement issue: each publication operationalizes “college quality” in its own way.

The suggestion about using student outcomes as a criterion is a good one. How different would the rankings look if this were taken into account? And isn’t this what administrators, faculty, and students really care about? While students and families may worry primarily about job outcomes, I’m sure faculty want to know that their students are learning and maturing.

