Why Elite College Rankings Need a Rethink

Much of higher education has a love/hate relationship with college rankings: colleges love them when they do well and refuse to acknowledge their existence when they drop a spot. But most colleges — and selective institutions in particular — play the rankings game in two main ways. First, they spend a great deal of time and effort submitting data to U.S. News & World Report for use in its annual rankings. Second, they often employ in-house research staff to figure out how to move up the rankings as quickly as possible. Sometimes colleges fake their numbers, as evidenced by the recent scandals at Temple University and the University of Southern California, in which programs submitted false data for years and now face lawsuits from angry students.

Breaking Ranks: How the Rankings Industry Rules Higher Education and What to Do About It by Colin Diver, Johns Hopkins University Press, 368 pp.

Enter Colin Diver. As president of Reed College in Oregon, he continued his predecessor's tradition of refusing to provide data to U.S. News and being prepared to bear the consequences of not ranking well. After a long and distinguished career in higher education, he wrote a book, Breaking Ranks, which is part treatise against prestige-based college rankings that drive colleges to make bad decisions, and part account of how he would rate colleges if given the chance.

In my day job as an education professor and department head, I study higher education accountability while experiencing firsthand the pressure to climb the U.S. News rankings. But I have also moonlighted as the Washington Monthly's rankings guy over the past decade, which gives me some insight into how the rankings industry works and how colleges respond to rankings. That made me eager to read this book, and it generally does not disappoint.

Diver focuses most of his ire on U.S. News, even though the title indicts the rankings industry as a whole. I had to laugh at the Washington Monthly being labeled a cousin of the 800-pound gorilla that is U.S. News. He devotes almost half of the book to two lines of attack that preach to the Monthly choir: how rankings reinforce the existing prestige-based hierarchy and encourage colleges to focus on selectivity rather than inclusiveness. These are the reasons the Monthly started publishing college rankings nearly two decades ago, and we get some credit from Diver for our alternative approach, such as including the net prices that working-class students face and excluding acceptance rates.

Diver then discusses the challenges of producing a single number that reflects a college's performance. He raises legitimate concerns about the selection of variables, the way weights are assigned, and the high correlation among the selected variables. We at the Monthly are questioned by Diver for "somehow guessing that [our] Pell degree gap metric … accounted for 5.56 percent of [the] overall grade, while the number of first-generation students at a college deserved a measly 0.92 percent." He also expresses frustration with rankings that seemingly change their methodology every year, either upending the results or trying to keep colleges from gaming them.

These are all questions I think about every year, along with the rest of the Monthly team, when we put together our college guide. We pride ourselves on using publicly available data and not requiring colleges to complete onerous surveys to be included in our rankings, both because data submitted directly by colleges to U.S. News have suffered from accuracy problems in recent years and because we believe colleges could put those resources to better use helping students directly. When we change variables, it is because new measures have become available or old ones are no longer maintained. Our general principle has been to assign equal weights to groups of variables that measure the same concept, and we have used a panel of experts to give us feedback on the weights and variables. Is this all perfect? Absolutely not. But we believe we do our best to be transparent about our decisions and to produce a reasonable set of rankings that highlight the public good of higher education.
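The equal-weighting principle described above is simple enough to sketch in a few lines of code. This is an illustration only: the group names, variables, and weights below are invented for the example, not the Monthly's actual methodology.

```python
# Hypothetical sketch of an equal-weighting scheme: variables that
# measure the same concept are grouped, and each group's overall
# weight is split evenly among its variables.

def variable_weights(groups, group_weight):
    """Split each group's weight equally among its member variables."""
    weights = {}
    for group, variables in groups.items():
        per_variable = group_weight[group] / len(variables)
        for v in variables:
            weights[v] = per_variable
    return weights

# Invented groups and weights, purely for illustration.
groups = {
    "social_mobility": ["pell_grad_rate", "net_price", "first_gen_share"],
    "research": ["phd_production", "research_spending"],
    "service": ["rotc_participation", "voting_engagement"],
}
group_weight = {"social_mobility": 1/3, "research": 1/3, "service": 1/3}

w = variable_weights(groups, group_weight)
# Each social-mobility variable gets (1/3)/3 of the total score,
# and the weights across all variables sum to 1.
```

The appeal of this design is that adding or dropping a variable within a group rebalances that group automatically without changing how much the concept as a whole counts.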

Diver uses the fourth part of Breaking Ranks to share his philosophy for assessing the quality of individual colleges. He begins by discussing the possibility of using student learning outcomes to measure academic quality, and he is much more optimistic than I am on this front. While this can be done fairly easily for the more technical skills acquired in a student's major, efforts to test general critical thinking and reasoning skills have been a challenge for decades. There was plenty of hype around the Collegiate Learning Assessment in the 2000s, culminating in Richard Arum and Josipa Roksa's book Academically Adrift, which claimed only modest learning gains for students, but the test never gained widespread acceptance or came to be seen as a good measure of skills.

The next proposed quality measure is teaching quality, which is even more difficult to capture. Diver discusses the possibility of counting the types of teaching practices used, soliciting others' opinions of teaching practices, or even using students' ratings of instructors. Yet he overlooks research showing that all of these measures work better in theory than in practice; students often rate their professors lower if they are women, underrepresented minorities, or teaching in STEM fields. He then floats the idea of using instructional spending as an indicator of quality, but I think this rewards the wealthiest institutions, which can spend lavishly even if that spending does not generate learning for students.

He then speaks favorably of the approach the Monthly's rankings take to other potential measures of quality. He likes the use of social mobility measures such as graduation rates for Pell Grant recipients (a proxy for students from low-income families) and net prices for students of modest financial means. He also endorses examining graduation rates and earnings through a value-added approach that compares actual and predicted outcomes after adjusting for student and institutional characteristics.
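The value-added idea can be illustrated with a toy example: predict each college's graduation rate from its student characteristics, then score the college by how far its actual rate beats the prediction. The sketch below uses a single made-up predictor (share of Pell recipients) and invented numbers; real value-added models adjust for many more covariates.

```python
# Toy value-added calculation: regress actual graduation rates on one
# student characteristic, then compare actual vs. predicted outcomes.
# All colleges and numbers here are fabricated for illustration.

def ols_fit(x, y):
    """Ordinary least squares with one predictor: (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

colleges = {  # name: (share of Pell recipients, actual graduation rate)
    "A": (0.20, 0.85),
    "B": (0.50, 0.65),
    "C": (0.35, 0.80),
    "D": (0.65, 0.55),
}
pell = [v[0] for v in colleges.values()]
grad = [v[1] for v in colleges.values()]
b0, b1 = ols_fit(pell, grad)

# Value-added = actual rate minus the rate predicted from Pell share.
value_added = {name: actual - (b0 + b1 * share)
               for name, (share, actual) in colleges.items()}
# Colleges with positive value_added graduate more students than
# their student mix alone would predict.
```

In this toy data, college C comes out ahead: its graduation rate exceeds what its Pell share predicts, which is exactly the kind of performance a raw graduation-rate ranking would miss.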

The Monthly gets another shoutout for our service metrics, which Diver calls "an original choice of variables," and for our use of the number of graduates who go on to earn doctorates in the research portion of the rankings. It is a somewhat odd set of choices, using things like ROTC participation and voter engagement, but those metrics capture different aspects of service and have available data. This speaks to both an advantage and a limitation of our rankings: we use data that are readily available rather than submitted directly by colleges.

Finally, Diver concludes by offering recommendations for students and educators on how to approach the wild world of college rankings. He recommends that students focus more on the underlying data than on a college's position in the rankings, and that they use rankings as a resource for learning more about particular institutions. These are reasonable recommendations, although they assume that students have the time and social capital to access numerous rankings and can choose from a wide range of colleges. This is great advice for students from upper-middle-class families whose parents went to college, but it is probably overwhelming for first-generation students, who tend to choose institutions based on price more than other factors.

He begins his recommendations for educators by stating that college rankings should be ignored, which is extremely difficult to do when legislators and governing boards pay such close attention to them. Maybe that could work for a president with a national brand and plenty of political capital, like Arizona State University's Michael Crow. But for a leader at a status-conscious institution? No chance. The pressure also trickles down to deans, department heads, and professors, as rankings are often baked into strategic plans.

However, Diver's steps for weaning colleges off rankings are worth considering. First, he advises college leaders not to complete the U.S. News peer reputation survey, which is frequently gamed and has declining response rates. No argument from me there. He then recommends that college leaders ignore rankings that do not align with their values and celebrate those that do. This is crucial, in my view, but colleges need to hold that position consistently instead of ignoring rankings only in the years when they drop. Whether the Monthly or the U.S. News rankings suit you better, be prepared to own both the good and the bad changes.

Overall, Breaking Ranks is an easy, breezy read that serves as a helpful primer on the pros and cons of college rankings, with an emphasis on U.S. News. The one thing I want to point out, and the reason I have stayed with the Monthly rankings for so many years, is that rankings are not going away. It is up to us to produce rankings that try to measure what we think is important, and I take that responsibility seriously. I think the Monthly's rankings do this by focusing on the public good of higher education and by highlighting data points that would otherwise be unknown outside a small circle of higher education insiders.

Norma A. Roth