Many academic philosophers and aspiring academic philosophers look to the Philosophical Gourmet Report (PGR) for philosophy rankings — i.e., to rank philosophy PhD programs. For many reasons, academic philosophers are becoming more vocal in their criticism of this ranking system (e.g., this recent paper in Metaphilosophy). In this post, I will propose a new system based on a variety of the common complaints and suggestions about philosophy’s existing ranking system. In the end, it should be clear that the proposed system could be (i) more useful and visually informative than existing rankings, (ii) achievable, and (iii) generalizable to the broader academic community.
The complaints about the rankings are voluminous — what else would you expect from philosophers? In lieu of an outline of every blog post and every public statement, I provide a list of major themes that fall into three different categories: the practice of ranking, the current process of ranking, and the current leadership of the ranking.
Complaints About Ranking
- Rankings might misrepresent the magnitude of the differences between departments.
- Rankings might indicate a false sense of hierarchy and/or prestige.
- Ordinal lists just aren’t that informative.
Complaints About Process
- The PGR rankings are based on a fixed set of variables, so the rankings are not useful to those who wish to compare departments according to a different set of variables.
- The PGR rankings aren’t representative since very few philosophers are invited to weigh in.
- The current process of the PGR rankings relies on qualitative reports, which makes for an unnecessarily large margin for error or dispute.
Complaints About People
- The selection of the board is concerning.
- The primary organizer and editor of the PGR is a subject of concern for a non-negligible portion of philosophers.
I think that my proposal can address many of these concerns. I will try to explain and visualize my proposal below. But first, let’s take a step back.
WHAT WOULD ACADEMICS WANT?
Let’s imagine that philosophers never had the PGR or any other ranking system. Now imagine you are commissioned to create a tool that philosophers would use to analyze, compare, and maybe even rank their various academic departments. Before you start the project, you ask yourself: what kind of information would academics (and aspiring academics) want to know about their field? No doubt, academics will disagree about what information they want, in general. But more importantly, they will disagree about how much weight should be assigned to each bit of information. This latter disagreement is the central problem for your project, as I see it.
To address that problem you need a reporting tool that features a customizable set of variables. That is, users need to be able to select the variable(s) they care about, weight them the way they see fit, and ignore whatever variables they want to ignore. If people can do this, then there is no longer a problem with people disagreeing about which information matters or which information matters most. With a tool that allows users to select and weight variables, a community with a diverse set of ideals can get individually suited rankings from one and the same dataset.
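At its core, this is a small computation. Here is a minimal sketch, where the department names, variable names, and numbers are all invented for illustration: rank by a weighted sum over only the variables a user selects.

```python
# Hypothetical department metrics the tool might store.
departments = {
    "Dept A": {"citations_per_faculty": 120, "placement_rate": 0.70},
    "Dept B": {"citations_per_faculty": 200, "placement_rate": 0.55},
    "Dept C": {"citations_per_faculty": 90,  "placement_rate": 0.85},
}

def rank(departments, weights):
    """Rank departments by a weighted sum of only the variables the
    user selected; variables absent from `weights` are simply ignored."""
    def score(metrics):
        return sum(metrics[var] * w for var, w in weights.items())
    return sorted(departments, key=lambda d: score(departments[d]), reverse=True)

# Two users, one dataset, different priorities -- different rankings.
research_first = rank(departments, {"citations_per_faculty": 1.0})
placement_first = rank(departments, {"placement_rate": 1.0})
```

On this toy data, the research-weighted user sees Dept B on top while the placement-weighted user sees Dept C, which is exactly the point: one dataset, individually suited rankings.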
Also, based on some of the complaints above, you might want a tool that relies on quantitative metrics about, say, publications, citations, areas of specialization (AOS), demographic information, etc. Gathering quantitative information will help address concerns about purely qualitative data.
Getting this information shouldn’t be too difficult. One option is to have departments submit this information. Many departments already have to report certain information to their own institution. Reporting it again needn’t be an enormous hassle, although the information you want in your database might go beyond the basic information that most departments keep tabs on. Another option is to have professional and aspiring academics create their own profiles and maintain their own metrics. Once the database has enough users from each department, meaningful reports could be made. I find this second option less attractive since it is based on the continued participation of lots of busy people — think about how many academics don’t have or maintain Google Scholar, ResearchGate, or Academia.edu profiles, and how relatively few philosophers have or maintain PhilPapers profiles. A third option is to have bots crawling the web for the information, a.k.a., the Google approach. This third option is probably unrealistic for anyone other than Google. And even if it were realistic, the output would be hopelessly flawed without perpetual (human) monitoring and correction.
Once you decide on how to get your information, you compile it into a single dataset and store it on a server. Now, all you need is a friendly web-based user interface that allows for visualizations of your data. This is the part where you might want some outside help — unless you’re a computer scientist, of course. More than that, to avoid some of the complaints above, you might want an independent party to manage things anyway. Fortunately, there isn’t a shortage of competent web developers who could easily pull this off.
A CUSTOMIZABLE, VISUALIZABLE, DATA-BASED TOOL
With your fancy new website, users can produce reports, comparisons, and rankings based on all sorts of variables and sets of variables. For example, maybe an aspiring grad student wants to see which departments would be a good fit based on her interests in metaphysics. So, the student searches for the departments with the most metaphysicians — metaphysicists? To get a closer look at each department, she can generate pie charts representing their distribution of specializations:
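Under the hood, a pie chart like that needs nothing more than the department’s AOS counts. A minimal sketch, with a made-up list of one department’s faculty AOS tags:

```python
from collections import Counter

# Hypothetical AOS tags for one department's faculty.
faculty_aos = ["metaphysics", "ethics", "metaphysics", "epistemology",
               "ethics", "metaphysics", "logic"]

counts = Counter(faculty_aos)

# Percentages for the pie chart's slices.
distribution = {aos: round(100 * n / len(faculty_aos), 1)
                for aos, n in counts.items()}
```

The same counts also answer the student’s first query: sorting departments by `counts["metaphysics"]` surfaces the ones with the most metaphysicians.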
And maybe the aspiring grad student has certain criteria in mind when it comes to choosing a graduate program. So, she selects a few departments that have attracted her attention. Then she selects a few metrics by which to compare these departments.
And maybe that last report did a poor job of capturing the big picture, so she opts to turn her comparison into a ranking (the variables of which would be treated equally by default, but could also be weighted by the user).
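Treating variables “equally by default” hides one step worth making explicit: metrics on very different scales (raw citation counts versus placement rates, say) have to be normalized before they can be averaged. A sketch of that default behavior, again with invented departments and numbers:

```python
# Hypothetical department metrics on very different scales.
data = {
    "Dept A": {"citations": 1200, "placement_rate": 0.70},
    "Dept B": {"citations": 2000, "placement_rate": 0.55},
    "Dept C": {"citations": 1400, "placement_rate": 0.85},
}

def normalized(data, var):
    """Min-max scale one variable to [0, 1] across all departments."""
    vals = [m[var] for m in data.values()]
    lo, hi = min(vals), max(vals)
    return {d: (m[var] - lo) / (hi - lo) for d, m in data.items()}

def equal_weight_rank(data, variables):
    """Average the normalized variables so each one counts equally."""
    norms = [normalized(data, v) for v in variables]
    scores = {d: sum(n[d] for n in norms) / len(variables) for d in data}
    return sorted(scores, key=scores.get, reverse=True)

ranking = equal_weight_rank(data, ["citations", "placement_rate"])
```

Swapping the equal average for user-supplied weights would then give the customized rankings described above, all from the same stored data.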
Now imagine how other academics could use this tool. Department chairs could look at longitudinal reports of various metrics in their department. They could even compare their department to departments they admire. Hiring committees could produce reports of their department’s areas of specialization, areas of publication, and areas of citation to reveal areas that are underrepresented in the department or to identify a departmental strength. This might help make the decision about which AOS to highlight in their next job post. You might be thinking of other ways to use the tool. The point is that this tool would serve a variety of purposes — certainly more purposes than the existing rankings.
Even Bigger Picture
Before we close, I want to ask you to briefly imagine how this tool could be used by an even larger audience. Imagine the database includes all academic fields across all academic institutions. A single website could serve as a one-stop reporting tool for all kinds of purposes. This would allow for reports, comparisons, and rankings to be made across departments, fields of study, institutions, etc. If such a tool existed, and it were kept up to date, then it would easily generate enough traffic to fund itself with advertising income — come to think of it, I wonder if Google Scholar, Academia.edu, or US News would be interested in this kind of project. …or maybe ResearchGate is already working on this.
This tool offers lots of advantages. Consider the following:
- Easy to maintain. The database could be a single spreadsheet. When new information is received, the spreadsheet is updated. The rest of the system, once it’s built, remains unchanged.
- Easy to use. The visual reports would be much more nuanced and meaningful than static lists.
- Customizable. People with various interests, criteria, and goals could produce optimally useful reports based on the same data by selecting different variables.
- Broadly useful. Professional and aspiring academics would find it helpful; even para-academic institutions and non-academics might find it useful. And because it’s not a static ranking, even the people who are against rankings could find a use for it.
- Expandable. With few changes, the tool could be used not just by a single academic field, but by the larger academic community.
Oh, and if you were wondering: I’m definitely not the first one to make this kind of proposal. Noëlle McAfee and Robert Vallier have made similar proposals. I got part of this idea from Noëlle. I hadn’t come across Robert’s idea until I was writing this. I wouldn’t be surprised if others have made similar, or better, proposals.