
On Philosophy Ranking & Reporting: A Custom Solution?

Many academic philosophers and aspiring academic philosophers look to the Philosophical Gourmet Report (PGR) for philosophy ranking, i.e., to rank philosophy PhD programs. For many reasons, academic philosophers are becoming more vocal in their criticism of this ranking system (e.g., a recent paper in Metaphilosophy). In this post, I will propose a new system based on a variety of the common complaints and suggestions about philosophy’s existing ranking system. In the end, it should be clear that the proposed system could be (i) more useful and more visually informative than existing rankings, (ii) achievable, and (iii) generalizable to the broader academic community.

1.  THE COMPLAINTS

The complaints about the rankings are voluminous — what else would you expect from philosophers? Rather than outline every blog post and every public statement, I provide a list of major themes, which fall into three categories: the practice of ranking, the current ranking process, and the current leadership of the rankings.

Complaints About Ranking

  1. Rankings might misrepresent the magnitude of the differences between departments.
  2. Rankings might indicate a false sense of hierarchy and/or prestige.
  3. Ordinal lists just aren’t that informative.

Complaints About Process

  1. The PGR rankings are based on a fixed set of variables, so the rankings are not useful to those who wish to compare departments according to a different set of variables.
  2. The PGR rankings aren’t representative since very few philosophers are invited to weigh in.
  3. The current process of the PGR rankings relies on qualitative reports, which makes for an unnecessarily large margin for error or dispute.

Complaints About People

  1. The selection of the board is concerning.
  2. The primary organizer and editor of the PGR is a subject of concern for a non-negligible portion of philosophers.

I think that my proposal can address many of these concerns. I will try to explain and visualize my proposal below. But first, let’s take a step back.

2.  WHAT WOULD ACADEMICS WANT?

Let’s imagine that philosophers never had the PGR or any other ranking system. Now imagine you are commissioned to create a tool that philosophers would use to analyze, compare, and maybe even rank their various academic departments. Before you start the project, you ask yourself: what kind of information would academics (and aspiring academics) want to know about their field? No doubt, academics will disagree about what information they want, in general. But more importantly, they will disagree about how much weight should be assigned to each bit of information. This latter disagreement is the central problem for your project, as I see it.

Versatility

To address that problem, you need a reporting tool that features a customizable set of variables. That is, users need to be able to select the variable(s) they care about, weight them as they see fit, and ignore the rest. If users can do this, then disagreement about which information matters, or which matters most, is no longer a problem. With a tool that allows users to select and weight variables, a community with a diverse set of ideals can get individually suited rankings from one and the same dataset.
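
To make this concrete, here is a minimal sketch of user-weighted ranking in Python. Everything in it is hypothetical: the department names, the metrics, and the values are invented for illustration, and a real tool would draw them from the database.

```python
# A toy dataset; every department name and metric value is hypothetical.
departments = {
    "Dept A": {"publications": 120, "citations": 3400, "faculty": 25},
    "Dept B": {"publications": 95, "citations": 4100, "faculty": 18},
    "Dept C": {"publications": 140, "citations": 2900, "faculty": 30},
}

def rank(departments, weights):
    """Rank departments by a weighted sum of min-max normalized metrics.

    `weights` maps each user-selected metric to a user-assigned weight;
    metrics the user omits are simply ignored.
    """
    scores = {name: 0.0 for name in departments}
    for metric, weight in weights.items():
        values = [d[metric] for d in departments.values()]
        lo, hi = min(values), max(values)
        for name, d in departments.items():
            normalized = (d[metric] - lo) / (hi - lo) if hi > lo else 0.0
            scores[name] += weight * normalized
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# A user who cares twice as much about citations as about publications:
print(rank(departments, {"citations": 2.0, "publications": 1.0}))
```

Min-max normalization is just one simple choice among many; the key point is that the weights, not the data, are what vary from user to user.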

Quantification

Also, based on some of the complaints above, you might want a tool that relies on quantitative metrics about, say, publications, citations, areas of specialization (AOS), demographic information, etc. Gathering quantitative information will help address concerns about purely qualitative data.
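
For concreteness, one department’s record might look something like the sketch below; the field names and example values are illustrative, not a proposed standard.

```python
from dataclasses import dataclass

# A sketch of one department's record; all fields are illustrative.
@dataclass
class DepartmentRecord:
    name: str
    publications_per_year: dict[int, int]  # e.g. {2013: 42, 2014: 51}
    citations_per_year: dict[int, int]     # e.g. {2013: 880, 2014: 1020}
    aos_counts: dict[str, int]             # e.g. {"Metaphysics": 4, "Ethics": 6}
    demographics: dict[str, int]           # e.g. {"women": 8, "men": 14}
```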

Getting this information shouldn’t be too difficult. One option is to have departments submit it. Many departments already have to report certain information to their own institution, so reporting it again needn’t be an enormous hassle, although the information you want in your database might go beyond the basics that most departments keep tabs on. Another option is to have professional and aspiring academics create their own profiles and maintain their own metrics. Once the database had enough users from each department, meaningful reports could be generated. I find this second option less attractive since it depends on the continued participation of lots of busy people — think about how many academics don’t have or maintain Google Scholar, ResearchGate, or Academia.edu profiles, and how many philosophers don’t even maintain PhilPapers profiles. A third option is to have bots crawling the web for the information, a.k.a. the Google approach. This third option is probably unrealistic for anyone other than Google. And even if it were realistic, the output would be hopelessly flawed without perpetual (human) monitoring and correction.

Visualization

Once you decide how to get your information, you compile it into a single dataset and store it on a server. Now all you need is a friendly web-based user interface that allows for visualizations of your data. This is the part where you might want some outside help — unless you’re a computer scientist, of course. More than that, to avoid some of the complaints above, you might want an independent party to manage things anyway. Fortunately, there is no shortage of competent web developers who could pull this off.

3.  AN EXAMPLE OF THE SOLUTION

With your fancy new website, users can produce reports, comparisons, and rankings based on all sorts of variables and sets of variables. For example, maybe an aspiring grad student wants to see which departments would be a good fit given her interest in metaphysics. So the student searches for the departments with the most metaphysicians — metaphysicists? To get a closer look at each department, she can generate pie charts representing each department’s distribution of specializations.
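
Here is a rough sketch of how such a chart might be generated with matplotlib; the AOS labels and counts are made up for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical AOS distribution for one department.
aos_counts = {"Metaphysics": 5, "Ethics": 6, "Epistemology": 3,
              "Phil. of Mind": 4, "Logic": 2}

plt.pie(list(aos_counts.values()), labels=list(aos_counts.keys()),
        autopct="%1.0f%%")
plt.title("Dept A: distribution of specializations")
plt.show()
```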

And maybe the aspiring grad student has certain criteria in mind when it comes to choosing a graduate program. So, she selects a few departments that have attracted her attention. Then she selects a few metrics by which to compare these departments.

And maybe that last report did a poor job of capturing the big picture, so she opts to turn her comparison into a ranking (whose variables would be weighted equally by default but could be re-weighted by the user).
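
Reusing the hypothetical rank() helper and toy departments data from the sketch in the Versatility section, the default equal-weight ranking would be a one-liner:

```python
# Equal weights by default: every selected metric counts the same.
selected = ["publications", "citations", "faculty"]
print(rank(departments, {metric: 1.0 for metric in selected}))
```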

BIGGER PICTURE

Now imagine how other academics could use this tool. Department chairs could look at longitudinal reports of various metrics in their department. They could even compare their department to departments they admire. Hiring committees could produce reports of their department’s areas of specialization, areas of publication, and areas of citation to reveal areas that are underrepresented in the department or to identify a departmental strength. This might help make the decision about which AOS to highlight in their next job post. You might be thinking of other ways to use the tool. The point is that this tool would serve a variety of purposes — certainly more purposes than the existing rankings.
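
As a sketch, a longitudinal report could be as simple as a line chart of one metric over time; the numbers below are invented.

```python
import matplotlib.pyplot as plt

# Hypothetical publications-per-year for one department.
years = [2010, 2011, 2012, 2013, 2014]
pubs = [31, 36, 29, 44, 40]

plt.plot(years, pubs, marker="o")
plt.xlabel("Year")
plt.ylabel("Publications")
plt.title("Dept A: publications over time")
plt.show()
```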

EVEN BIGGER PICTURE

Before we close, I want to ask you to briefly imagine how this tool could be used by an even larger audience. Imagine that the database includes all academic fields across all academic institutions. A single website could serve as a one-stop reporting tool for all kinds of purposes. This would allow reports, comparisons, and rankings to be made across departments, fields of study, institutions, etc. If such a tool existed, and it were kept up to date, then it would easily generate enough traffic to fund itself with advertising income — come to think of it, I wonder if Google Scholar, Academia.edu, or US News would be interested in this kind of project. …or maybe ResearchGate is already working on this.

SUMMARY

This tool offers lots of advantages. Consider the following:

  1. Easy to maintain. The database could be a single spreadsheet. When new information is received, the spreadsheet is updated. The rest of the system, once it’s built, remains unchanged.
  2. Easy to use. The visual reports would be much more nuanced and meaningful than static lists.
  3. Customizable. People with various interests, criteria, and goals could produce optimally useful reports based on the same data by selecting different variables.
  4. Broadly useful. Professional and aspiring academics would find it helpful; even para-academic institutions and non-academics might find it useful. And because it’s not a static ranking, even the people who are against rankings could find a use for it.
  5. Expandable. With few changes, the tool could be used not just by a single academic field, but by the larger academic community.

Oh, and if you were wondering: I’m definitely not the first one to make this kind of proposal. Noëlle McAfee and Robert Vallier have made similar proposals. I got part of this idea from Noëlle. I hadn’t come across Robert’s idea until I was writing this. I wouldn’t be surprised if others have made similar, or better, proposals.

Published by

Nick Byrd

Nick is a cognitive scientist at Florida State University studying reasoning, wellbeing, and willpower. Check out his blog at byrdnick.com/blog

2 thoughts on “On Philosophy Ranking & Reporting: A Custom Solution?”

  1. Hey Nick,

    Nice graphs! I think the idea of a massive database of information that allows for customizable comparisons of data is great in theory, but I worry about its practical application in this context. Obviously, if implemented, the details would need to be sorted out, but two aspects of your suggestion jump out at me: (1) the reliance on quantification, and (2) the feasibility of data management.

    Regarding (1) you write:
    “Also, based on some of the complaints above, you might want a tool that relies on quantitative metrics about, say, publications, citations, areas of specialization (AOS), demographic information, etc. Using quantitative information will help minimize the margin of error.”

    I don’t deny that the information you’ve listed is quite important. Although I am sensitive to the fallibility of subjective qualitative judgments (about faculty members, areas of study, journals, etc.), I think a primarily quantitative approach has a few drawbacks. First, as Brian Leiter and others have pointed out, there must be some kind of ordering of data; not all jobs or journals are created equal. For example, perhaps the faculty in department A publish a lot of articles in non-peer-reviewed or open-access journals, while the faculty in department B publish fewer articles but in journals that are well-established and have editors and reviewers who are very well respected in their areas of specialization. Certainly a database will tell me which department has more publications, and perhaps we can even divide it up between certain types of journals. However, I feel as though there must be some qualitative judgments somewhere along the line. Some person or group must make decisions about how to divide categories of data, whether and how certain data points should be weighted, etc. Can a database model really capture the complexity of journal quality?

    Similarly, perhaps a database will tell me that department A has a lot of philosophers working in philosophy of language, while department B has fewer in that area. But can it tell me whether department B includes particularly influential philosophers who are leaders in the field of philosophy of language?

    Even if we could find a way to divide all of this data up in a way that minimizes the need for qualitative judgments, the complexity of such an endeavor (as well as its maintenance and upkeep) would seem to be massive. This leads to (2).

    Regarding (2) you write:
    “More than that, to avoid some of the complaints above, you might want an independent party to manage things anyway. Fortunately, there isn’t a shortage of competent web developers who could easily pull this off.”

    How would such a project be funded, especially in a way that would not bias against departments that could not afford inclusion fees? Someone must be paid to design the site, provide tech support, maintain the database, request information from departments, etc., and I doubt any of this would be very cheap. There may not be a shortage of qualified individuals to run such a project, but I think (particularly in philosophy) there is a shortage of funds.

    1. Jared,

      You raise some very important concerns. I’ll address them as best I can.

      (1) Reliance on quantification. I agree that some element of human judgment is helpful. As much as possible, I want that judgment to be in the hands of the user, not the data collectors or organizers. So, if someone wants to see whether a department is influential, then they might choose to look at the department’s longitudinal publication and citation record by AOS. Once you get down to the nitty-gritty details of philosophy journal rankings, I think things might become too complicated for the system. But consider the following: perhaps one could drill down on publications per department for a select number of journals. In other words, a user looks at a handful of departments’ publication records in, say, PhilReview, J Phil, Nous, and Mind over the last 5 years. Beyond that, however, you might be right: the ability to compare departments by publications across ranked journals might get tricky.
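
      A minimal sketch of that drill-down, with invented records of the form (department, journal, year):

      ```python
      # Hypothetical publication records: (department, journal, year).
      records = [
          ("Dept A", "Mind", 2012), ("Dept A", "Nous", 2014),
          ("Dept B", "J Phil", 2011), ("Dept B", "Mind", 2013),
      ]

      def pubs_in(records, journals, since):
          """Count each department's publications in selected journals since a year."""
          counts = {}
          for dept, journal, year in records:
              if journal in journals and year >= since:
                  counts[dept] = counts.get(dept, 0) + 1
          return counts

      print(pubs_in(records, {"PhilReview", "J Phil", "Nous", "Mind"}, since=2010))
      ```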

      (2) Feasibility of data management. I agree that the project would require a substantial chunk of money up front. I imagine this being something like a PhilPapers-style project that is funded by grants at the beginning. Once it is up and running (and used), however, I would be surprised if it weren’t able to sustain itself (if not profit) simply in virtue of advertising revenue—a quick look at what Leiter charges (up to $4500/month) shows that someone could make a more than modest income from a website like this. And it might be worth pointing out that the website would attract much more traffic than static ranking pages simply because one could use multiple features of the site to create a multitude of reports and comparisons. This would mean more “page views” and greater “visit duration” than a website that merely posts a static ranking. I wonder if these details sway you at all, one way or another. Or maybe I am still missing something.

      Also, I wouldn’t be unhappy if this kind of tool merely supplemented the various rankings philosophers use. The thing is, some people seem to want rankings (as evidenced by the wide adoption of the rankings), but others want only information and are outspokenly against rankings. This tool, though imperfect, could be useful to both groups.

      And I have no doubt that this proposal is incomplete and, as it stands, unfit for launch. I leave out many details that one would want in order to have robust confidence in the project. Still, philosophers have conjured up the resources and the talent to make other ambitious projects happen (PhilPapers, PhilEvents, Philosopher’s Index, etc.). And non-philosophers have made useful and beautiful websites for academics and been able to sustain themselves (e.g., Academia.edu). I have no doubt that the next generation of philosophers or academics, with some help from web-savvy slash programming-savvy folks, could make this ambitious project a reality.

      Thanks for engaging!
