There are numerous publications that rate colleges. According to this story, and a very helpful graphic, in The Chronicle of Higher Education, publications tend not to use the same criteria:
That indicates a lack of agreement among them on what defines quality. Much of the emphasis is on “input measures” such as student selectivity, faculty-student ratio, and retention of freshmen. Except for graduation rates, almost no “outcome measures,” such as whether a student comes out prepared to succeed in the work force, are used.
This suggests that each publication is measuring something different, since their overall scores draw on different inputs. This is a classic measurement issue: each publication is operationalizing “college quality” in a different way.
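To make the point concrete, here is a small sketch showing how the same schools can rank differently under different weighting schemes. The college names, metrics, and weights are entirely invented for illustration; no real publication's formula is implied.

```python
# Hypothetical illustration: the same three colleges, scored under two
# different weighting schemes ("publications"), rank in different orders.
# All names, metric values, and weights below are invented.

colleges = {
    "College A": {"selectivity": 0.9, "faculty_ratio": 0.5, "grad_rate": 0.7},
    "College B": {"selectivity": 0.6, "faculty_ratio": 0.9, "grad_rate": 0.8},
    "College C": {"selectivity": 0.7, "faculty_ratio": 0.7, "grad_rate": 0.9},
}

# Two publications weighting the same metrics differently:
# one emphasizes selectivity (an input), the other graduation rate (an outcome).
weights_pub1 = {"selectivity": 0.6, "faculty_ratio": 0.3, "grad_rate": 0.1}
weights_pub2 = {"selectivity": 0.1, "faculty_ratio": 0.2, "grad_rate": 0.7}

def rank(colleges, weights):
    # Weighted sum of each college's metrics, sorted best-first.
    scores = {name: sum(metrics[m] * weights[m] for m in weights)
              for name, metrics in colleges.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(rank(colleges, weights_pub1))  # ['College A', 'College C', 'College B']
print(rank(colleges, weights_pub2))  # ['College C', 'College B', 'College A']
```

Nothing about the colleges changed between the two lists; only the weights did. That is the measurement issue in miniature: the “best” college depends on what the score-maker decides to count.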
The suggestion to use student outcomes as a criterion is a good one. How different would the rankings look if outcomes were taken into account? And isn’t this what administrators, faculty, and students are really concerned about? While students and families may worry about job outcomes, I’m sure faculty want to know that their students are learning and maturing.