Northeastern University moved up 113 spots in the USN&WR rankings in 17 years

Northeastern University successfully moved itself up the US News and World Report college rankings in a relatively short amount of time:

Figuring out how much Northeastern needed to adjust was one thing; actually doing it was another. Point by point, senior staff members tackled different criteria, always with an eye to U.S. News’s methodology. Freeland added faculty, for instance, to reduce class size. “We did play other kinds of games,” he says. “You get credit for the number of classes you have under 20 [students], so we lowered our caps on a lot of our classes to 19 just to make sure.” From 1996 to the 2003 edition (released in 2002), Northeastern rose 20 spots. (The title of each U.S. News “Best Colleges” edition actually refers to the upcoming year.)

Admissions stats also played a big role in the rankings formula. In 2003, ranked at 127, Northeastern began accepting the online Common Application, making it easier for students to apply. The more applications NU could drum up, the more students they could turn away, thus making the school appear more selective. A year later, NU ranked 120. Since studies showed that students who lived on campus were more likely to stay enrolled, the school oversaw the construction of dormitories like those in West Village—a $1 billion, seven-building complex—to improve retention and graduation rates. NU was lucky in this regard—not every urban school in the country had vast land, in the form of decrepit parking lots, on which to build a new, attractive campus.

There was one thing, however, that U.S. News weighted heavily that could not be fixed with numbers or formulas: the peer assessment. This would require some old-fashioned glad-handing. Freeland guessed that if there were 100 or so universities ahead of NU and if three people at each school were filling out the assessments, he and his team would have to influence some 300 people. “We figured, ‘That’s a manageable number, so we’re just gonna try to get to every one of them,’” Freeland says. “Every trip I took, every city I went to, every conference I went to, I made a point of making contact with any president who was in that national ranking.” Meanwhile, he put less effort into assessing other schools. “I did it based on what was in my head,” he says. “It would have been much more honest just to not fill it out.”…

In many ways, Aoun tries to distance himself from Freeland. He resists talking about the school’s meteoric rise over 17 years—from 162 to 49 in 2013—and plays down the rankings, brushing them aside like an embarrassment or a youthful mistake. “The focus on the ranking is not a strategy, for a simple reason,” he says. “You have thousands of rankings. So you will lose sleep if you start chasing all of them.” While it’s true that U.S. News no longer appears in the university’s strategic plan, it does appear in NU’s portrayal of itself: The school has no qualms using its high ranking in recruiting materials and publicity videos. Yet multiple Northeastern administrators expressed concern over this article’s focus on the rankings. One vice president telephoned Boston’s editors in a panic.

Despite Aoun’s carefully crafted image, the school’s actions undercut his words, as gaming U.S. News is now clearly part of the university’s DNA. And Aoun is a willing participant. “He may not admit to it, but that’s definitely what’s going on,” says Bob Lowndes, who is retiring as vice provost for global relations. Ahmed Abdelal, provost under both Freeland and Aoun, says the two presidents have shared “the same goal: further advancement in national ranking.”
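The class-size maneuver Freeland describes is easy to make concrete. Below is a minimal sketch, in Python, of a "share of classes under 20 students" metric; the exact U.S. News formula is not given in the article, so the function and enrollment figures are hypothetical, purely to show why capping sections at 19 pays off.

```python
# Hypothetical illustration of the class-size threshold described above:
# U.S. News gives credit for classes with fewer than 20 students, so capping
# a section at 19 instead of 20 flips it across the threshold. The metric and
# enrollment numbers here are assumptions, not the actual U.S. News formula.

def share_under_20(class_sizes):
    """Fraction of class sections enrolling fewer than 20 students."""
    return sum(1 for size in class_sizes if size < 20) / len(class_sizes)

before = [25, 20, 20, 20, 18, 15, 40, 12]  # several sections capped at 20
after = [25, 19, 19, 19, 18, 15, 40, 12]   # the same sections re-capped at 19

print(f"share under 20 with caps at 20: {share_under_20(before):.0%}")  # 38%
print(f"share under 20 with caps at 19: {share_under_20(after):.0%}")   # 75%
```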

These rankings clearly matter, and few schools can ignore them completely. A few parts of this story I found interesting:

1. The article hints that some faculty resisted this rankings push. It would be interesting to hear more. At the same time, doesn’t being ranked #49 now mean the faculty also benefit?

2. The article suggests, but doesn’t say exactly, how much Northeastern was able to budge the reputational assessments. These can take a long time to move. Another difficulty is that for a school like Northeastern to move up, others have to move down. But it sounds like the glad-handing campaign had some effect.

3. Articles like these suggest that gaming the rankings is a bad thing. Lots of academics would argue that it goes against the true values of a college education. Yet the rankings matter to the public. The success of the US News & World Report rankings has helped spawn a whole cottage industry of other assessments: the best financial-value schools, the best schools for public service, and so on. And once you introduce a quantifiable system like this into an industry built on status (academia is perhaps the status industry par excellence), it is hard to imagine that different actors will not try to work their way to the top.

US News & World Report changing up its college ranking methodology

US News & World Report recently announced changes to how it ranks colleges:

  • The “student selectivity” portion of the methodology will count for 12.5 percent of a college’s total, not 15 percent.
  • Within the student selectivity formula, class rank will count for 25 percent, not 40 percent. The change is attributed to the increase in the proportion of high schools that do not report class rank. SAT/ACT scores, meanwhile, rise to 65 percent from 50 percent of that score. (The rest isn’t explained but has in the past been based on colleges’ acceptance rates.)
  • Graduation rate performance (a measure that attempts to reward colleges for doing better than expected with their student body) will be applied to all colleges, not just the “national” ones at the top of the rankings.
  • “Peer assessment” — one of the most widely criticized criteria, based on a survey of presidents — will be cut from 25 to 22.5 percent of the formula for evaluating regional colleges. (One of the questions U.S. News declined to answer was whether there would be any change in the weighting for national universities.)
  • Graduation and retention rates will matter more for national universities, going from 20 percent to 22.5 percent.
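Since several of these changes are simply reweightings, here is a minimal sketch of how a weighted composite score shifts when the weights move. The component scores are invented, the "other" bucket stands in for everything not listed above, and the real changes apply differently to national and regional institutions, so this is purely an illustration of the arithmetic.

```python
# Simplified sketch of a weighted composite ranking score. The weights loosely
# follow the figures quoted above; the component scores and the "other" bucket
# are made up, and U.S. News's real formula includes normalization steps and
# additional factors not modeled here.

OLD_WEIGHTS = {"selectivity": 0.150, "peer_assessment": 0.250,
               "grad_retention": 0.200, "other": 0.400}
NEW_WEIGHTS = {"selectivity": 0.125, "peer_assessment": 0.225,
               "grad_retention": 0.225, "other": 0.425}

def composite(scores, weights):
    """Weighted sum of component scores, each on a 0-100 scale."""
    return sum(weights[name] * scores[name] for name in weights)

# A hypothetical school that is strong on selectivity but weaker on retention
school = {"selectivity": 90, "peer_assessment": 70, "grad_retention": 60, "other": 75}

print(f"old weighting: {composite(school, OLD_WEIGHTS):.1f}")  # 73.0
print(f"new weighting: {composite(school, NEW_WEIGHTS):.1f}")  # 72.4
```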

This would really get interesting if these changes led to significant shake-ups in the rankings. If some colleges move up quite a bit and, perhaps more importantly, others fall (people and institutions feel a loss more acutely than an equivalent gain), there could be a lot of discussion. Schools that drop would likely decry the changes while colleges that rise would praise the new system.

It is too bad we don’t get an explanation of why these changes were made. The validity of the methodology is always in question but US News could at least try to make a case.

Trying to ensure more accountability in US News & World Report college ranking data

The US News & World Report college rankings are big business but also a big headache in data collection. The company is looking into ways to ensure more trustworthy data:

A new report from The Washington Post‘s Nick Anderson explores the increasingly common problem, in which universities submit inflated standardized test scores and class rankings for members of their incoming classes to U.S. News, which doesn’t independently verify the information. Tulane University, Bucknell University, Claremont McKenna College, Emory University, and George Washington University have all been implicated in the past year alone. And those are just the schools that got caught:

A survey of 576 college admissions officers conducted by Gallup last summer for the online news outlet Inside Higher Ed found that 91 percent believe other colleges had falsely reported standardized test scores and other admissions data. A few said their own college had done so.

For such a trusted report, the U.S. News rankings don’t have many safeguards ensuring that their data is accurate. Schools self-report these statistics on the honor system, essentially. U.S. News editor Brian Kelly told Inside Higher Ed’s Scott Jaschik, “The integrity of data is important to everybody … I find it incredible to contemplate that institutions based on ethical behavior would be doing this.” But plenty of institutions are doing this, as we noted back in November 2012 when GWU was unranked after being caught submitting juiced stats. 

At this point, U.S. News shouldn’t be surprised by acknowledgments like those from Tulane and Bucknell. It turns out that if you let schools misreport the numbers — especially in a field of fierce academic competition and increasing budgetary hardship — they’ll take you up on the offer. Kelly could’ve learned that by reading U.S. News‘ own blog, Morse Code. Written by data researcher Bob Morse, the blog has devoted almost half of its recent posts to fraud. To keep schools more honest, the magazine is considering requiring university officials outside of enrollment offices to sign a statement vouching for submitted numbers. But still, no third-party accountability would be in place, and many higher ed experts are already saying that the credibility of the U.S. News college rankings is shot.

Three quick thoughts:

1. With the amount of money involved in the entire process, this should not be a surprise. Colleges want to project the best image they can, so a weakly regulated system (built on a suspect methodology and set of factors to start with) can lead to abuses.

2. If the USNWR rankings can’t be trusted, couldn’t someone provide a more honest system? This sounds like an opportunity.

3. I wonder if there are parallels to PED use in baseball. To some degree, it doesn’t matter if lots of schools are gaming the system as long as the perception among schools is that everyone else is doing it. With this perception, it is easier to justify one’s own cheating because colleges need to catch up or compete with each other.

Sociologist argues that SATs not the best predictor of college success

In another round of the battles over standardized testing, a Wake Forest sociologist argues that the SAT is not the best predictor of college performance:

His conclusion? SATs don’t tell us much about how well a student will perform in college.

A better predictor of college success lies in a student’s high school grade-point average, class rank and course selection, Soares said…

Soares is editor of a new book, “SAT Wars: The Case for Test-Optional College Admissions,” that takes a critical look at the SAT while calling for a rethinking of the college admissions process…

When it dropped the SAT option, Wake Forest revamped its admissions process, beefing up its written response section and encouraging students to be interviewed by an admissions officer, a move that created a huge logistical challenge for the school.

This is not a small argument: as the article notes, this is a multi-billion dollar industry.

I wouldn’t be surprised if more schools continued to play around with their admissions processes, both to get around some of the difficulties with particular measures and to gain a competitive advantage in grabbing good students before other schools realize what is going on (the Moneyball approach to admissions?).

A new way to do the college search process: one comprehensive website to match students to colleges

The policy director of an education think tank writes in Washington Monthly, itself a purveyor of college rankings, that the future of college admissions will come in the form of a single, comprehensive website that will match prospective students and colleges:

This is the future of college admissions. The market for matching colleges and students is about to undergo a wholesale transformation to electronic form. When the time comes for Jameel to apply to colleges, ConnectEDU will take all of the information it has gathered and use sophisticated algorithms to find the best colleges likely to accept him—to find a match for Jameel in the same way that Amazon uses millions of sales records to advise customers about what books they might like to buy and Match.com helps the lovelorn find a compatible date. At the same time, on the other side of the looking glass, college admissions officers will be peering into ConnectEDU’s trove of data to search for the right mix of students.

This won’t just help the brightest, most driven kids. Bad matching is a problem throughout higher education, from top to bottom. Among all students who enroll in college, most will either transfer or drop out. For African American students and those whose parents never went to college, the transfer/dropout rate is closer to two-thirds. Most students don’t live in the resource-rich, intensely college-focused environment that upper-middle-class students take for granted. So they often default to whatever college is cheapest and closest to home. Tools like ConnectEDU will give them a way to find something better.
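The article does not describe how ConnectEDU’s matching actually works, so here is a deliberately naive sketch of what a matching score of this general kind might look like; every field, weight, and number below is a made-up assumption, not ConnectEDU’s method.

```python
# Deliberately naive student-college matching sketch. ConnectEDU's real
# algorithm is not described in the article; the fields, weights, and scoring
# rule here are entirely hypothetical.

def match_score(student, college):
    """Higher is better: combines a crude academic fit with an affordability fit."""
    # Academic fit: penalize the gap between the student's GPA and the
    # college's median admitted GPA (a rough proxy for admission likelihood).
    academic_fit = 1.0 - min(abs(student["gpa"] - college["median_gpa"]) / 4.0, 1.0)
    # Affordability fit: full credit if the net price fits the budget,
    # scaled down otherwise.
    affordability = min(student["budget"] / college["net_price"], 1.0)
    return 0.6 * academic_fit + 0.4 * affordability

student = {"gpa": 3.4, "budget": 18_000}
colleges = [
    {"name": "College A", "median_gpa": 3.9, "net_price": 32_000},
    {"name": "College B", "median_gpa": 3.3, "net_price": 16_000},
    {"name": "College C", "median_gpa": 3.5, "net_price": 24_000},
]

# Rank the hypothetical colleges for this hypothetical student
for college in sorted(colleges, key=lambda c: match_score(student, c), reverse=True):
    print(f"{college['name']}: {match_score(student, college):.2f}")
```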

We can think of getting into college like this: students need to be slotted into the appropriate school. At this point, students can do certain things to improve their fit, and colleges use certain information (though it often comes in the form of a narrative about students that admissions officers construct; I highly recommend Creating a Class). Our current system is highly dependent on students doing the initial legwork of searching out colleges that might fit them, but as this article suggests, a number of students, particularly poorer students, don’t do well in this system.

If this website idea catches on, wouldn’t it create more competition within the college market for students? If so, would middle- and upper-class students start complaining?

Also, while the article suggests a website like this is the answer to helping kids who can’t currently play the college game, doesn’t it rest on the ideas that (1) people have equal access to the website and (2) users have the ability or “cultural capital” to sort through the information it presents? Neither of these is necessarily true.

h/t Instapundit

Measuring colleges by their service to community and country

Many publications want to get into the college rankings business, and Washington Monthly released its own take today. The difference? It emphasizes how a college gives back to society:

The Monthly’s list aims to be a corrective to the annual ranking of colleges published by U.S. News & World Report, the industry-standard roster that typically leads with well-endowed Ivy League schools that turn away the vast majority of applicants.

Instead, the Monthly ranks schools using three main categories: how many low-income students the college enrolls, how much community and national service a given college’s students engage in, and the volume of groundbreaking research the university produces (in part measured by how many undergraduates go on to get PhDs). To paraphrase the long-ago dictum of President John F. Kennedy, the Monthly is seeking, in essence, to ask not so much what colleges can do for themselves as what they can be doing for their country.

By that measure, only one Ivy cracked the top 10: Harvard. The University of California system dominated, with six of California’s state schools among the top 30 national universities. Texas A&M, which is ranked 63rd by U.S. News, shot into the top 20 in part because of how many of its students participate in ROTC. Meanwhile, Washington University in St. Louis plunged in these rankings to 112 from U.S. News’ 13, because only 6 percent of its student body qualifies for federal Pell grants, an indication that Washington’s students come almost entirely from upper- and middle-class backgrounds.

U.S. News & World Report “relies on crude and easily manipulated measures of wealth, exclusivity, and prestige for its rankings,” Washington Monthly editor Paul Glastris wrote. The U.S. News rankings take into account freshman retention rate, admissions selectivity, high school counselors’ opinions of the school, faculty salary, per-pupil spending, and the rate of alumni giving, among other things.

While the editor suggests these new rankings are not as influenced by status and wealth, I wonder if the measures really get away from these entirely. It takes resources to enroll low-income students, to support groundbreaking research, and perhaps to give students the extra time to engage in community and national service. On the other hand, colleges make decisions about how to spend their money and could choose to put their resources into these particular areas.

I’m sure there will be questions about methodology: how did they measure impactful research? How much should ROTC count for and how did they measure community engagement?

New rankings also give more schools an opportunity to claim that they are at the top. For example, Northwestern College in Iowa now trumpets on its main page that “Washington Monthly ranks NWC third in the nation.” Read past the headline and you find that it is third among baccalaureate colleges. On the other side, will schools like Washington University in St. Louis even acknowledge these new rankings, since they don’t come out looking so good?

Forbes’ college rankings signal a possible trend of looking at alumni earnings and status

The college rankings business is a lucrative one, with a number of players using a number of different measures. Forbes recently released its 2011 rankings, and its particular angle seems aimed at unseating the rankings of US News & World Report:

Our annual ranking of the 650 best undergraduate institutions focuses on the things that matter the most to students: quality of teaching, great career prospects, graduation rates and low levels of debt. Unlike other lists, we pointedly ignore ephemeral measures such as school “reputation” and ill-conceived metrics that reward wasteful spending. We try and evaluate the college purchase as a consumer would: Is it worth spending as much as a quarter of a million dollars for this degree? The rankings are prepared exclusively for Forbes by the Center for College Affordability and Productivity, a Washington, D.C. think tank founded by Ohio University economist Richard Vedder.

With phrases like “ephemeral measures” and “ill-conceived metrics,” Forbes claims to have a better methodology. This new approach helps fill a particular niche in the college rankings market: those looking for the “biggest bang for your educational buck.”

In the Forbes rankings, 30% of the final score is based on “Post-Graduate Success.” This is composed of three components: “Listings of Alumni in Who’s Who in America” (10%), “Salary of Alumni from payscale.com” (15%), and “Alumni in Forbes/CCAP Corporate Officers List” (5%). These may be reasonable data points (Forbes goes to some effort to defend them), but I think there is a larger issue at play: are these good measures by which to evaluate a college degree and experience? Is a college degree simply about obtaining a certain income and status?
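As a quick illustration of how that 30% slice adds up, here is a minimal sketch combining the three components with the quoted weights; the component values are invented, and how Forbes normalizes each underlying measure is not modeled.

```python
# Sketch of the "Post-Graduate Success" piece of the Forbes score using the
# weights quoted above (10% + 15% + 5% = 30% of the overall score). The
# component values are made up, and each is assumed to already be normalized
# to a 0-1 scale, which is not necessarily how Forbes does it.

POST_GRAD_WEIGHTS = {
    "whos_who_listings": 0.10,   # Listings of Alumni in Who's Who in America
    "payscale_salary": 0.15,     # Salary of Alumni from payscale.com
    "corporate_officers": 0.05,  # Alumni in Forbes/CCAP Corporate Officers List
}

def post_grad_contribution(component_scores):
    """Contribution of post-graduate success to the overall score (max 0.30)."""
    return sum(POST_GRAD_WEIGHTS[k] * component_scores[k] for k in POST_GRAD_WEIGHTS)

example = {"whos_who_listings": 0.4, "payscale_salary": 0.7, "corporate_officers": 0.2}
print(f"post-graduate success contributes {post_grad_contribution(example):.3f} out of a possible 0.300")
```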

At this point, many rankings and assessment tools rely on the experiences of students while they are in school. But with the rising price of a college degree and a growing interest in showing that students actually learn important skills and content in college, I think we’ll see more measures of, and a greater emphasis placed on, post-graduation outcomes. This push will probably come from both outsiders (Forbes, parents and students, the government, and so on) and college insiders. This could be good and bad. On the good side, it could help schools tailor their offerings and training to what students need to succeed in the adult world. On the bad side, if value or bang-for-your-buck becomes the overriding concern, college and particular degrees simply become paths to higher- or lower-income outcomes. This could particularly harm liberal arts schools or non-professional majors.

In the coming years, perhaps Forbes will steal some of the market away from US News with the financial angle. But this push is not without consequences for everyone involved.

(Here is another methodological concern: 17.5% of a school’s total score is based on ratings from RateMyProfessors.com. Forbes suggests the site cannot be manipulated by schools and is uniform across them, but that is a pretty high percentage to rest on student ratings.)

(Related: a new report rates colleges by debt per degree. A quick explanation:

Its authors say they aim to give a more complete picture of higher education — rather than judging by graduation rates alone or by default rates alone — by dividing the total amount of money undergraduates borrow at a college by the number of degrees it awards.

We’ll see if this catches on.)
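Since debt per degree is just a ratio, a short sketch with invented figures shows how two schools with the same total borrowing can look very different once degree production is factored in.

```python
# Debt per degree is total undergraduate borrowing divided by degrees awarded.
# The figures below are invented, purely to show how the ratio separates
# schools with identical borrowing totals.

def debt_per_degree(total_undergrad_borrowing, degrees_awarded):
    return total_undergrad_borrowing / degrees_awarded

print(debt_per_degree(50_000_000, 2_500))  # 20000.0 dollars of debt per degree
print(debt_per_degree(50_000_000, 1_000))  # 50000.0 dollars of debt per degree
```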

World rankings of sociology departments

If ranking sociology departments in the United States is not enough (see here and here regarding the NRC rankings), one can now look at world rankings. Seven of the top ten programs are in the United States, as are eleven of the top twenty. There also appears to be quite a bit of variation in the “employer” score for schools in the top ten, with a range of 41.7 to 100.0.

At the bottom of the page: “Since 2004 QS has produced the leading and most trusted world university rankings. Focusing globally and locally, we deliver world university rankings for students and academics alike.” Does anyone pay any attention to these world rankings?

American Sociological Association committee on doctoral program rankings

While the ranking of undergraduate programs is contentious (read about Malcolm Gladwell’s latest thoughts on the subject here), the ranking of doctoral programs can also draw attention. In February, a five-person American Sociological Association (ASA) committee released a report about the 2010 National Research Council (NRC) rankings of doctoral sociology programs (see a summary here).

The ASA committee summarized their concerns about the NRC rankings:

Based on our work, we recommend that the ASA Council issue a resolution criticizing the 2010 NRC rankings for containing both operationalization and implementation problems; discouraging faculty, students, and university administrators from using the core 2010 NRC rankings to evaluate sociology programs; encouraging them to be suspicious of the raw data accompanying the 2010 NRC report; and indicating that alternative rankings, such as those based on surveys of departments’ reputations, have their own sets of biases.

The explanation of these issues is an interesting methodological analysis. Indeed, the document suggests that many of these ranking efforts have had problems, starting with the 1987 US News & World Report rankings, which were based primarily on reputation.

So what did the committee conclude should be done? Here are their final thoughts:

At this time, the committee believes that ASA should encourage prospective students, faculty, university administrators or others evaluating a given program to avoid blind reliance on rankings that claim explicitly or implicitly to list departments from best to worst. The heterogeneity of the discipline suggests that evaluators should first determine what characteristics they value in a program and then employ available sources of information to assess the program’s performance. In addition, the ASA should help facilitate, within available means, the dissemination of such information.

So the final recommendation is to be skeptical about these rankings. This seems to be a fairly common approach for those who find issues with rankings of schools or programs.

How might we get past this kind of conclusion? If the ranking process were done by just sociologists, could we decide on even a fuzzy rank order of graduate programs that most could agree upon?

Gladwell on US News college rankings

In his latest New Yorker piece, Malcolm Gladwell takes aim at the US News & World Report college rankings (the full story requires a subscription). This is a familiar target and I have some thoughts about Gladwell’s analysis.

Even though I like Gladwell, I found this article underwhelming. It doesn’t give us much new information, though it is an easy yet thought-provoking read about indexes (and could easily be used for a class discussion about research methods and rankings). And if his main conclusion is that the ranking depends on who is doing the ranking… we already knew that.

Some things it would be beneficial to know (some of these ideas are prompted by recently reading Mitchell Stevens’ Creating a Class, for which he spent a year and a half working in the admissions department of a New England, Division III liberal arts school):

1. Gladwell seems to suggest that US News is just making arbitrary decisions. Not quite: they think they have a rationale for these decisions. As the head guy said, they have talked to a lot of experts and this is how they think it works. They could be wrong in their measures, but they have reasons. Other publications use other factors (see a summary of those different factors here), but their lists are also not arbitrary; they have reasons for weighting factors differently or introducing new ones.

2. Stevens argues that the rankings work because they denote status. The reputational rankings are just that. And while they may be silly measures of “educational quality,” human beings are influenced by status and want to know relative rankings. Gladwell seems to suggest that the US News rankings have a huge impact, making the college status system circular and dependent on their list, but there are other status systems that both agree and disagree with US News.

2a. Additionally, it is not as if these sorts of rankings created the status system of colleges. Before US News, people already had ideas about this; US News simply codified them and opened the system up to a lot more schools. One could even argue that the rankings opened the college status system to participants who wouldn’t have been part of the discussion before.

3. Stevens also suggests that parents and potential students often need to feel a good “emotional fit” with a school before making the final decision. Much of the decision-making is driven by status; Stevens says that when students have two or more schools to choose from, they will likely choose the one with the higher status. But the campus visits and interactions are important, even if they just confirm the existing status structure.

Ultimately, this discussion of US News rankings can get tiresome. Lots of academics (and others) don’t like the rankings. Schools claim not to like the rankings. Then why doesn’t somebody do something about it? Stevens suggests it is because if a school drops out of this game, its relative status will drop (and he makes the same argument for athletics: schools have to field some athletics to keep up, not necessarily to win). However, there are a lot of colleges that don’t need the extra applicants that a good US News ranking would bring. Plus, there are alternative guides and rankings available that examine different factors and produce different orderings.