Northeastern University moved up 113 spots in the USN&WR rankings in 17 years

Northeastern University successfully moved itself up the US News & World Report college rankings in a relatively short amount of time:

Figuring out how much Northeastern needed to adjust was one thing; actually doing it was another. Point by point, senior staff members tackled different criteria, always with an eye to U.S. News’s methodology. Freeland added faculty, for instance, to reduce class size. “We did play other kinds of games,” he says. “You get credit for the number of classes you have under 20 [students], so we lowered our caps on a lot of our classes to 19 just to make sure.” From 1996 to the 2003 edition (released in 2002), Northeastern rose 20 spots. (The title of each U.S. News “Best Colleges” edition actually refers to the upcoming year.)

Admissions stats also played a big role in the rankings formula. In 2003, ranked at 127, Northeastern began accepting the online Common Application, making it easier for students to apply. The more applications NU could drum up, the more students they could turn away, thus making the school appear more selective. A year later, NU ranked 120. Since studies showed that students who lived on campus were more likely to stay enrolled, the school oversaw the construction of dormitories like those in West Village—a $1 billion, seven-building complex—to improve retention and graduation rates. NU was lucky in this regard—not every urban school in the country had vast land, in the form of decrepit parking lots, on which to build a new, attractive campus.

There was one thing, however, that U.S. News weighted heavily that could not be fixed with numbers or formulas: the peer assessment. This would require some old-fashioned glad-handing. Freeland guessed that if there were 100 or so universities ahead of NU and if three people at each school were filling out the assessments, he and his team would have to influence some 300 people. “We figured, ‘That’s a manageable number, so we’re just gonna try to get to every one of them,’” Freeland says. “Every trip I took, every city I went to, every conference I went to, I made a point of making contact with any president who was in that national ranking.” Meanwhile, he put less effort into assessing other schools. “I did it based on what was in my head,” he says. “It would have been much more honest just to not fill it out.”…

In many ways, Aoun tries to distance himself from Freeland. He resists talking about the school’s meteoric rise over 17 years—from 162 to 49 in 2013—and plays down the rankings, brushing them aside like an embarrassment or a youthful mistake. “The focus on the ranking is not a strategy, for a simple reason,” he says. “You have thousands of rankings. So you will lose sleep if you start chasing all of them.” While it’s true that U.S. News no longer appears in the university’s strategic plan, it does appear in NU’s portrayal of itself: The school has no qualms using its high ranking in recruiting materials and publicity videos. Yet multiple Northeastern administrators expressed concern over this article’s focus on the rankings. One vice president telephoned Boston’s editors in a panic.

Despite Aoun’s carefully crafted image, the school’s actions undercut his words, as gaming U.S. News is now clearly part of the university’s DNA. And Aoun is a willing participant. “He may not admit to it, but that’s definitely what’s going on,” says Bob Lowndes, who is retiring as vice provost for global relations. Ahmed Abdelal, provost under both Freeland and Aoun, says the two presidents have shared “the same goal: further advancement in national ranking.”

These rankings clearly matter, and few schools can ignore them completely. A few parts of this story that I found interesting:

1. There are indications in the article that some faculty resisted this rankings push; it would be interesting to hear more. At the same time, doesn’t being ranked #49 now mean faculty also benefit?

2. The article suggests, but doesn’t say exactly, how much Northeastern was able to budge the reputational assessments. These can take a long time to move. Another difficulty is that for a school like Northeastern to move up, others have to move down. But it sounds like the glad-handing campaign had some effect.

3. Articles like these suggest that gaming the rankings is a bad thing, and lots of academics would say it goes against the true values of a college education. Yet the rankings matter to the public. The success of the US News & World Report rankings has helped spawn a whole cottage industry of other assessments: the best financial-value schools, the best schools for public service, and so on. And it is hard to imagine that introducing a quantifiable system like this into an industry so heavily based on status – and academia is perhaps a status industry par excellence – would not lead different actors to try to work their way to the top.

Measuring colleges by their service to community and country

Many publications want to get into the college rankings business, and Washington Monthly released its own take today. The difference? It emphasizes how colleges give back to society:

The Monthly’s list aims to be a corrective to the annual ranking of colleges published by U.S. News & World Report–the industry-standard roster that typically leads with well-endowed Ivy League schools that turn away the vast majority of applicants.

Instead, the Monthly ranks schools using three main categories: how many low-income students the college enrolls, how much community and national service a given college’s students engage in, and the volume of groundbreaking research the university produces (in part measured by how many undergraduates go on to get PhDs). To paraphrase the long-ago dictum of President John F. Kennedy, the Monthly is seeking, in essence, to ask not so much what colleges can do for themselves as what they can be doing for their country.

By that measure, only one Ivy cracked the top 10–Harvard. The University of California system dominated, with six California state schools among the top 30 national universities. Texas A&M, which is ranked 63rd by U.S. News, shot into the top 20 in part because of how many of its students participate in ROTC. Meanwhile, Washington University in St. Louis plunged in these rankings to 112 from U.S. News’ 13, because only 6 percent of its student body qualifies for federal Pell grants, an indication that Washington’s students come almost entirely from upper- and middle-class backgrounds.

U.S. News & World Report “relies on crude and easily manipulated measures of wealth, exclusivity, and prestige for its rankings,” Washington Monthly editor Paul Glastris wrote. The U.S. News rankings take into account freshman retention rate, admissions selectivity, high school counselors’ opinions of the school, faculty salary, per-pupil spending, and the rate of alumni giving, among other things.

While the editor suggests these new rankings are not as influenced by status and wealth, I wonder if the measures really get away from these entirely. It takes resources to enroll low-income students, to support ground-breaking research, and perhaps to give students the extra time needed to engage in community and national service. On the other hand, colleges make decisions about how to spend their money and could choose to put their resources into these particular areas.

I’m sure there will be questions about methodology: how did they measure impactful research? How much should ROTC count for and how did they measure community engagement?

New rankings also give more schools an opportunity to claim that they are at the top. For example, Northwestern College in Iowa now trumpets on its main page that “Washington Monthly ranks NWC third in the nation.” Read past the headline and you find that it is third among baccalaureate colleges. On the other side, will schools like Washington University in St. Louis even acknowledge these new rankings, since they don’t look so good?

Forbes’ college rankings signal a possible trend of looking at alumni earnings and status

The college rankings business is a lucrative one, with a number of different players using a number of different measures. Forbes recently released its 2011 rankings, and it has a particular angle that seems aimed at unseating the rankings of US News & World Report:

Our annual ranking of the 650 best undergraduate institutions focuses on the things that matter the most to students: quality of teaching, great career prospects, graduation rates and low levels of debt. Unlike other lists, we pointedly ignore ephemeral measures such as school “reputation” and ill-conceived metrics that reward wasteful spending. We try and evaluate the college purchase as a consumer would: Is it worth spending as much as a quarter of a million dollars for this degree? The rankings are prepared exclusively for Forbes by the Center for College Affordability and Productivity, a Washington, D.C. think tank founded by Ohio University economist Richard Vedder.

With phrases like “ephemeral measures” and “ill-conceived metrics,” Forbes claims to have a better methodology. This new approach helps fill a particular niche in the college rankings market: those looking for the “biggest bang for your educational buck.”

In the Forbes rankings, 30% of the final score is based on “Post-Graduate Success.” This is composed of three values: “Listings of Alumni in Who’s Who in America” (10%), “Salary of Alumni from payscale.com” (15%), and “Alumni in Forbes/CCAP Corporate Officers List” (5%). These may be reasonable proxies (Forbes goes to some effort to defend them), but I think there is a larger issue at play here: are these good measures by which to evaluate a college degree and experience? Is a college degree simply about obtaining a certain income and status?
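
To make the weighting arithmetic concrete, here is a minimal sketch of how a composite component like this could be combined from its sub-scores. Only the 10%/15%/5% weights come from the Forbes breakdown above; the 0-100 sub-score scale, the function name, and the example school are hypothetical, since Forbes does not spell out its normalization here.

```python
# A minimal sketch (not Forbes' actual method) of how a weighted ranking
# component such as "Post-Graduate Success" could be combined.
# Only the 10%/15%/5% weights come from the Forbes breakdown above;
# the 0-100 sub-score scale and the example school are hypothetical.

# Weights expressed as a share of the total score.
WEIGHTS = {
    "whos_who_listings": 0.10,   # Listings of Alumni in Who's Who in America
    "payscale_salary": 0.15,     # Salary of Alumni from payscale.com
    "corporate_officers": 0.05,  # Alumni in Forbes/CCAP Corporate Officers List
}

def post_graduate_success(sub_scores: dict) -> float:
    """Combine normalized sub-scores (assumed 0-100) into the points this
    component contributes toward a school's total score (at most 30)."""
    return sum(weight * sub_scores[name] for name, weight in WEIGHTS.items())

# Hypothetical school: strong alumni salaries, weaker Who's Who presence.
example_school = {
    "whos_who_listings": 40.0,
    "payscale_salary": 85.0,
    "corporate_officers": 55.0,
}

# 0.10*40 + 0.15*85 + 0.05*55 = 19.5 of a possible 30 points.
print(post_graduate_success(example_school))
```

A sketch like this also makes the gaming incentive obvious: anything that moves a heavily weighted sub-score (here, alumni salary at 15%) moves the total far more than the lightly weighted ones.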

At this point, many rankings and assessment tools rely on the experiences of students while they are in school. But with the rising price of a college degree and a growing interest in showing that students actually learn important skills and content in college, I think we’ll see more measures of, and greater emphasis placed on, post-graduation information. This push will probably come both from outsiders (Forbes, parents and students, the government, and so on) and from college insiders. This could be good and bad. On the good side, it could help schools tailor their offerings and training to what students need to succeed in the adult world. On the bad side, if value or bang-for-your-buck becomes the overriding concern, college and particular degrees simply become paths to higher- or lower-income outcomes. This could particularly harm liberal arts schools or non-professional majors.

In the coming years, perhaps Forbes will steal some of the market away from US News with the financial angle. But this push is not without consequences for everyone involved.

(Here is another methodological concern: 17.5% of a school’s total score is based on ratings from RateMyProfessors.com. Forbes suggests these ratings cannot be manipulated by schools and are uniform across schools, but this is a pretty high percentage.)

(Related: a new report rates colleges by debt per degree. A quick explanation:

Its authors say they aim to give a more complete picture of higher education — rather than judging by graduation rates alone or by default rates alone — by dividing the total amount of money undergraduates borrow at a college by the number of degrees it awards.

We’ll see if this catches on.)
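
For concreteness, the debt-per-degree measure described in the aside above is just a simple ratio. Here is a minimal sketch with entirely hypothetical figures; the report’s actual data and any adjustments it makes are not reproduced here.

```python
# Debt per degree, as described in the quoted explanation: the total amount
# undergraduates borrow at a college divided by the number of degrees it awards.
# The figures below are hypothetical and purely illustrative.

def debt_per_degree(total_undergrad_borrowing: float, degrees_awarded: int) -> float:
    return total_undergrad_borrowing / degrees_awarded

# Hypothetical college: $48 million borrowed, 2,400 degrees awarded.
print(debt_per_degree(48_000_000, 2_400))  # -> 20000.0 dollars per degree
```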

Gladwell on US News college rankings

In his latest New Yorker piece, Malcolm Gladwell takes aim at the US News & World Report college rankings (the full story requires a subscription). This is a familiar target and I have some thoughts about Gladwell’s analysis.

Even though I like Gladwell, I found this article underwhelming. It doesn’t give us much new information, though it is an easy yet thought-provoking read about indexes (and could easily be used for a class discussion about research methods and rankings). And if his main conclusion is that the ranking depends on who is doing the ranking… we already knew that.

Some things that would be beneficial to know (and some of these ideas are prompted by recently reading Mitchell Stevens’ Creating a Class, for which he spent 1.5 years working in the admissions department of a New England, Division III liberal arts school):

1. Gladwell seems to suggest that US News is just making arbitrary decisions. Not quite: they think they have a rationale for these decisions. As the head guy said, they have talked to a lot of experts and this is how they think it works. They could be wrong in their measures but they have reasons. Other publications use other factors (see a summary of those different factors here) but their lists are also not arbitrary – they have reasons for weighting factors differently or introducing new factors.

2. Stevens argues that the rankings work because they denote status. The reputational rankings are just that. And while they may be silly measures of “educational quality,” human beings are influenced by status and want to know relative rankings. Gladwell seems to suggest that the US News rankings have a huge impact – making them a circular status system dependent on their own rankings – but there are other status systems that sometimes agree and sometimes disagree with US News.

2a. Additionally, it is not as if these sorts of rankings created the status system of colleges. Before US News, people already had ideas about this; US News simply codified them and opened the conversation to a lot more schools. One could even argue that the rankings opened up the college status system to participants who wouldn’t have been part of the discussion before.

3. Stevens also suggests that parents and potential students often need to feel a good “emotional fit” with a school before making the final decision. Much of the decision-making is still based on status – Stevens says that when students have two or more schools to choose from, they will likely choose the one with the higher status. But the campus visits and interactions are important, even if they just confirm the existing status structure.

Ultimately, this discussion of US News rankings can get tiresome. Lots of academics (and others) don’t like the rankings. Schools claim not to like the rankings. Then why doesn’t somebody do something about it? Stevens suggests it is because if a school drops out of this game, its relative status will drop (and he makes the same argument for athletics: schools have to have some athletics to keep up, not necessarily to win). However, there are a lot of colleges that don’t need the extra applicants that a good US News ranking would bring. Plus, there are alternative guides and rankings available that examine different factors and develop different orderings.