
University and Department Rankings: A Custom Solution

Lots of people pay close attention to the US News National University Rankings. But those rankings assume all users have the same priorities. Moreover, some people want field-specific rankings that compare universities at the department level (e.g., the Philosophy department at Harvard vs. the Philosophy department at MIT). Ranking-obsessed philosophers have had the Philosophical Gourmet Report (PGR) to rank philosophy Ph.D. programs since at least 1996 (1989, if you count the pre-internet version). For many reasons, academic philosophers are becoming more vocal in their criticism of these philosophy rankings (e.g., Bruya 2015; De Cruz 2016, 2018). In this post, I will propose a (new?) custom ranking system. This system will address common complaints about philosophy’s existing ranking system: a custom ranking system would be more versatile, up-to-date, and generalizable.

1.  THE COMPLAINTS

The complaints about the rankings are voluminous — what else would you expect from philosophers? In lieu of an outline of every blog post and every public statement, I provide a list of major themes that fall into three different categories: the practice of ranking, the current process of ranking, and the current leadership of the ranking.

Complaints About Ranking

  1. Rankings might misrepresent the magnitude of the differences between departments.
  2. Rankings might indicate a false sense of hierarchy and/or prestige.
  3. Ordinal lists just aren’t that informative.

Complaints About Process

  1. The PGR rankings are based on a fixed set of variables, so the rankings are not useful to those who wish to compare departments according to a different set of variables.
  2. The PGR rankings aren’t representative since very few philosophers are invited to weigh in.
  3. The current process of the PGR rankings relies on qualitative reports, which makes for an unnecessarily large margin for error or dispute.

Complaints About People

  1. The selection of the board is concerning.
  2. The primary organizer and editor of the PGR is a subject of concern for a non-negligible portion of philosophers.

I think that my proposal can address many of these concerns. I will try to explain and visualize my proposal below. But first, let’s take a step back.

2.  WHAT DO ACADEMICS WANT?

Imagine that we never had the US News University Rankings or the Philosophical Gourmet Report or any other ranking system. And imagine that you are commissioned to create a tool that allows us to compare and rank academic departments. Before you start the project, you ask yourself: what information do academics want? And how much weight should be assigned to each kind of information? The latter question points to the central issue with ranking.

2.1  Versatility

To address the central issue of how much weight to give each variable, we need a reporting tool that lets users select the variable(s) they care about, assign their own weights to them, and ignore whatever variables they don’t care about. If we can do this, then the central issue with ranking is avoided: this custom reporting tool can accommodate our plurality of ranking ideals.
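To make this concrete, here is a minimal sketch of how user-weighted scoring might work. Everything in it is hypothetical: the metric names, the example departments, and the min-max normalization are illustrative choices, not a spec.

    # A minimal sketch of user-weighted department ranking (Python).
    # All names and numbers below are hypothetical.

    def rank(departments, weights):
        """Score departments by a user-supplied weighting of metrics.

        Each metric is min-max normalized to [0, 1] so that no metric
        dominates merely because of its scale; metrics the user omits
        are simply ignored.
        """
        scores = {}
        for metric, weight in weights.items():
            values = [d[metric] for d in departments.values()]
            lo, hi = min(values), max(values)
            spread = (hi - lo) or 1  # avoid division by zero
            for name, d in departments.items():
                scores[name] = scores.get(name, 0) + weight * (d[metric] - lo) / spread
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    departments = {
        "Dept A": {"publications": 120, "citations": 900,  "metaphysicians": 4},
        "Dept B": {"publications": 80,  "citations": 1400, "metaphysicians": 2},
        "Dept C": {"publications": 150, "citations": 700,  "metaphysicians": 6},
    }

    # One user weights citations heavily; another cares only about metaphysics.
    print(rank(departments, {"citations": 0.7, "publications": 0.3}))
    print(rank(departments, {"metaphysicians": 1.0}))

The same data yield different rankings under different weightings, which is exactly the versatility we want.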

2.2  Quantification

To address the remaining complaints, you want to make sure that the tool is based on quantitative metrics like publications, citations, areas of specialization (AOS), and demographic information rather than, say, individuals’ qualitative reports. Getting this information shouldn’t be too difficult.

The Bottom-up Approach

One way to gather the information involves departments and faculty submitting it. Many departments already report information to their own institution. And many researchers already report this information on their CV, websites, and academic social network profiles. So reporting this information would not involve any new practices — just a tweak to existing practices.
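For illustration only, a single submitted record might look something like the following; the field names are my guesses at what CVs and departmental reports already contain, not a settled schema.

    # A hypothetical faculty record for the bottom-up approach (Python).
    from dataclasses import dataclass, field

    @dataclass
    class FacultyRecord:
        name: str
        department: str
        institution: str
        areas_of_specialization: list[str] = field(default_factory=list)
        publications: int = 0
        citations: int = 0

    record = FacultyRecord(
        name="A. Philosopher",
        department="Philosophy",
        institution="Example University",
        areas_of_specialization=["Metaphysics", "Philosophy of Mind"],
        publications=23,
        citations=410,
    )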

The Artificially Intelligent Approach

Another option is to have bots crawl the web for the information. This option will probably be more error-prone — even if professional web developers were in charge. The output would be severely flawed, requiring perpetual (human) monitoring and correction. So either way, we will need the bottom-up approach.
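To see why, consider what a naive crawler would look like. In the sketch below, the CSS selector is pure guesswork; every department site whose markup deviates from that guess yields missing or wrong data, which is the source of the perpetual correction work.

    # A deliberately naive scraper sketch (Python); the selector is a guess.
    import requests
    from bs4 import BeautifulSoup

    def scrape_faculty(url):
        """Scrape faculty names from a department page, fragilely."""
        page = requests.get(url, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        # Assumes every name sits in an element with class "faculty-name";
        # on most real department sites this assumption will be false.
        return [el.get_text(strip=True) for el in soup.select(".faculty-name")]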

2.3  Visualization

Once you decide on how to get your information, you compile it into a database. Now, all you need is a friendly web user interface that creates visualizations of the data. This makes the custom reports and rankings much easier to consume and share.

3.  THE SOLUTION

With your fancy new website, users can produce reports, comparisons, and rankings based on their preferred metrics and weighting scheme. Maybe you want to see which departments would be a good fit based on your interest in metaphysics. So you search for the departments with the most metaphysicians — metaphysicists? To get a closer look at each department, you generate pie charts representing each department’s distribution of specializations.
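With the data already in a database, such a chart is only a few lines of plotting code. A minimal sketch, with an invented AOS distribution:

    # Pie chart of one department's (hypothetical) AOS distribution.
    import matplotlib.pyplot as plt

    aos_counts = {"Metaphysics": 4, "Ethics": 6, "Philosophy of Mind": 3, "Logic": 2}
    plt.pie(list(aos_counts.values()), labels=list(aos_counts.keys()), autopct="%1.0f%%")
    plt.title("Dept A: Distribution of Specializations")
    plt.show()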

And maybe you have certain criteria in mind for choosing a graduate program. So you narrow your search to a few departments that have attracted your attention. Then you select a few metrics by which to compare these departments.

And maybe that last report did a poor job of capturing the big picture, so you turn the last comparison into a ranking. By default, each variable is treated equally. But you care about some variables more than others, so you begin adjusting the weights of some variables. You get the idea.
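In terms of the scoring sketch from Section 2.1, that adjustment is nothing more than calling the same ranking function with a different weighting. Continuing that sketch:

    # Equal weighting by default...
    print(rank(departments, {"publications": 1, "citations": 1, "metaphysicians": 1}))
    # ...then tilted toward what this particular user cares about.
    print(rank(departments, {"citations": 0.5, "metaphysicians": 0.3, "publications": 0.2}))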

3.1  Bigger Picture

Now imagine how other academics could use this tool. Department chairs could look at longitudinal reports of their department. And they could compare their department to departments that they admire. Hiring committees could produce reports of their department to improve the representation of certain research and researchers. You might be thinking of other ways to use the tool. The point is that this one tool can serve many purposes — certainly more purposes than the existing rankings.

3.2  Even Bigger Picture

Now imagine how our larger academic community could use this tool. Imagine that the database includes all academic fields across all academic institutions. A single website could serve as a one-stop reporting tool for all kinds of purposes. We could compare all the aforementioned metrics, but across disciplines. If such a tool existed, and was updated regularly, then it would easily generate enough traffic to fund itself with advertising income — come to think of it, I wonder if Google Scholar, Academia.edu, or US News would be interested in this kind of project. (See the “Updates” section below for the proposed system coming to life.)

4.  RECAP

This tool offers lots of advantages. Consider the following:

  1. Easy to maintain. We can maintain the database just like we maintain a CV and other reports. Or, we could update the database more passively with, say, artificial intelligence. Either way, the actual database would be very simple: the backend could be a single (albeit large) spreadsheet (see the sketch after this list).
  2. Easy to use. The visual reports would allow for much more nuanced and meaningful reports than static lists.
  3. Customizable. People with various interests, criteria, and goals could produce optimally useful reports based on the same data by selecting different variables.
  4. Broadly useful. Professional and aspiring academics would find it helpful; even para-academic institutions and non-academics might find it useful. And because it’s not a static ranking, even the people who are against rankings could find a use for it.
  5. Expandable. With few changes, the tool could be used not just by a single academic field, but by the larger academic community.
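To give a feel for how simple the backend could be, here is a sketch of a department-level report computed from that single spreadsheet; the file name and column names are invented for illustration.

    # Report generation from the hypothetical single-spreadsheet backend (Python).
    import pandas as pd

    # One row per faculty member; columns: name, department, institution,
    # aos, publications, citations (illustrative, not a proposed standard).
    df = pd.read_csv("faculty.csv")

    # A department-level report is then just a groupby away.
    report = df.groupby("department")[["publications", "citations"]].sum()
    print(report.sort_values("citations", ascending=False))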

Updates

2018-01-09

The PhilPapers Foundation announced a new service, PhilPeople, which will deliver — among other things — multiple parts of this rating/ranking system via the bottom-up approach: “We will also shortly be contacting philosophy department administrators in order to make sure that our information about department members is as complete and correct as possible. […] the service will be widely used by prospective students. We will offer the opportunity to compare departments along various dimensions, and it will be in everyone’s interests for the information to be as complete as possible.”

2023-03-27

The New York Times launched “Build Your Own College Rankings”, which actualizes my custom ranking vision for university rankings (Section 3.2). Users can indicate the degree to which they care about factors like “High earnings”, “Economic mobility”, “Low net price”, “Academic profile”, “Graduation rate”, “Student-faculty ratio”, “Party scene”, “Campus safety”, “Racial… divers[ity]”, “Economic… divers[ity]”, and more. The result is a ranking that is relative to the user’s priorities. Cheers, NYT!

Published by

Nick Byrd

Nick is a cognitive scientist at Florida State University studying reasoning, wellbeing, and willpower. Check out his blog at byrdnick.com/blog

2 thoughts on “University and Department Rankings: A Custom Solution”

  1. Hey Nick,

    Nice graphs! I think the idea of a massive database of information that allows for customizable comparisons of data is great in theory, but I worry about its practical application in this context. Obviously, if implemented, the details would need to be sorted out, but two aspects of your suggestion jump out at me: (1) reliance on quantification, and (2) the feasibility of data management.

    Regarding (1) you write:
    “Also, based on some of the complaints above, you might want a tool that relies on quantitative metrics about, say, publications, citations, areas of specialization (AOS), demographic information, etc. Using quantitative information will help minimize the margin of error.”

    I don’t deny that the information you’ve listed is quite important. Although I am sensitive to the fallibility of subjective qualitative judgments (about faculty members, areas of study, journals, etc.), I think a primarily quantitative approach has a few drawbacks. First, as Brian Leiter and others have pointed out, there must be some kind of ordering of data; not all jobs or journals are created equal. For example, perhaps the faculty in department A publish a lot of articles in non-peer-reviewed or open-access journals, while the faculty in department B publish fewer articles but in journals that are well established and have editors and reviewers who are very well respected in their areas of specialization. Certainly a database will tell me which department has more publications, and perhaps we can even divide it up between certain types of journals. However, I feel as though there must be some qualitative judgments somewhere along the line. Some person or group must make decisions as to how to divide categories of data, whether and how certain pieces of data should be weighted, etc. Can a database model really capture the complexity of journal quality?

    Similarly, perhaps a database will tell me that department A has a lot of philosophers working in philosophy of language, while department B has fewer in that area. But can it tell me whether department B includes particularly influential philosophers who are leaders in the field of philosophy of language?

    Even if we could find a way to divide all of this data up in a way that minimizes the need for qualitative judgments, the complexity of such an endeavor (as well as its maintenance and upkeep) would seem to be massive. This leads to (2).

    Regarding (2) you write:
    “More than that, to avoid some of the complaints above, you might want an independent party to manage things anyway. Fortunately, there isn’t a shortage of competent web developers who could easily pull this off.”

    How would such a project be funded, especially in a way that would not bias against departments that could not afford inclusion fees, etc.? Someone must be paid to design the site, provide tech support, maintain the database, request information from departments, etc., and I doubt any of this would be very cheap. There may not be a shortage of qualified individuals to run such a project, but I think (particularly in philosophy) there is a shortage of funds.

    1. Jared,

      You raise some very important concerns. I’ll address them as best I can.

      (1) Reliance on quantification. I agree that some element of human judgment is helpful. As much as possible, I want that to be in the hands of the user, not the data-collectors or organizers. So, if someone wants to see whether a department is influential, then they might choose to look at the department’s longitudinal publication and citation record by AOS. Once you get down to the nitty-gritty details of philosophy journal rankings, I think things might become too complicated for the system. But think about the following: perhaps one could drill down on publications per department for a select number of journals. In other words, a user looks at a handful of departments’ publication records in, say, PhilReview, J Phil, Nous, and Mind over the last 5 years. Beyond that, however, you might be right: the ability to compare departments by publications and across ranked journals might get tricky.
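      In case it helps to see it, that kind of drill-down would be a simple filter over a publications table. A rough sketch (the file, columns, and journal spellings are invented for illustration):

          # Filter a hypothetical publications table by journal and recency (Python).
          import pandas as pd

          pubs = pd.read_csv("publications.csv")  # columns: department, journal, year
          journals = ["Phil Review", "J Phil", "Nous", "Mind"]
          recent = pubs[pubs["journal"].isin(journals) & (pubs["year"] >= pubs["year"].max() - 5)]
          print(recent.groupby(["department", "journal"]).size())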

      (2) Feasibility of data management. I agree that the project would require a substantial chunk of money up front. I imagine this being something like a PhilPapers project that is funded by grants at the beginning. Once it is up and running (and used), however, I would be surprised if it wasn’t able to sustain itself (if not profit) simply in virtue of advertising revenue—a quick look at what Leiter charges (up to $4500/month) shows that someone could have a more than modest income from a website like this. And it might be worth pointing out that the website would attract much more traffic than static ranking pages simply because one could use multiple features of the site to create a multitude of reports and comparisons. This would mean more “page views” and greater “visit duration” than a website that merely posts a static ranking. I wonder if these details sway you at all, one way or another. Or, maybe I am still missing something.

      Also, I wouldn’t be unhappy if this kind of tool merely supplemented the various rankings philosophers use. The thing is, some people seem to want rankings (evidenced by the wide adoption of the rankings), but others want only information and are outspokenly against rankings. This tool, though imperfect, could be useful to both groups.

      And I have no doubt that this proposal is incomplete and not yet fit for launch. I leave out many details that one would want in order to have robust confidence in the project. Still, philosophers have conjured up the resources and the talent to make other ambitious projects happen (PhilPapers, PhilEvents, Philosopher’s Index, etc.). And non-philosophers have made useful and beautiful websites for academics and been able to sustain themselves (e.g., Academia.edu). I have no doubt that the next generation of philosophers or academics, with some help from web-savvy slash programming-savvy folks, could make this ambitious project a reality.

      Thanks for engaging!
