The ranking of scholars and institutions using metrics such as the number of publications, the number of citations, or the h-index is not new. Many entities, including but not limited to Google Scholar, Scopus, Web of Science, and US News and World Report, have ranked scholars (and institutions) using these or similar metrics. Unfortunately, simply listing scholars or institutions in rank order based on any of these metrics, without accounting for the publication and citation traditions of particular Fields, Disciplines, and Specialties, unfairly penalizes scholars (and institutions) working in areas where publication counts, citations, and h-indices are low relative to other areas. To address this reality, ScholarGPS™ analyzes metrics for individual scholars in terms of their Fields, Disciplines, and Specialties. ScholarGPS™ therefore ranks each individual in a rational manner: scholars in the Humanities, Law, or Social Sciences are fairly compared to their counterparts in Medicine, the Life Sciences, or Engineering, who typically have significantly larger publication and citation counts.
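To make the idea of field-normalized comparison concrete, the sketch below ranks each scholar only against peers in the same Discipline and reports a within-discipline percentile. The scholar records, citation numbers, and the within_discipline_percentiles helper are invented for illustration; this is a minimal sketch of the general technique, not ScholarGPS™'s actual algorithm or data.

```python
from collections import defaultdict

# Hypothetical scholar records: (name, discipline, citations).
# Illustrative numbers only; not real ScholarGPS data.
scholars = [
    ("A", "History",     900),
    ("B", "History",     400),
    ("C", "Engineering", 9000),
    ("D", "Engineering", 4000),
    ("E", "Law",         300),
    ("F", "Law",         1200),
]

def within_discipline_percentiles(records):
    """Rank each scholar against peers in the same discipline,
    returning a percentile (1.0 = top of that discipline)."""
    by_discipline = defaultdict(list)
    for name, discipline, citations in records:
        by_discipline[discipline].append((name, citations))

    percentiles = {}
    for discipline, group in by_discipline.items():
        group.sort(key=lambda item: item[1])      # ascending citation count
        n = len(group)
        for rank, (name, _) in enumerate(group, start=1):
            percentiles[name] = rank / n          # fraction of peers at or below
    return percentiles

print(within_discipline_percentiles(scholars))
# A historian with 900 citations scores as highly within History
# as an engineer with 9000 citations scores within Engineering.
```

The key design point is that no scholar is ever compared across discipline boundaries: a raw citation count that would look small next to Engineering totals can still place a humanities scholar at the top of their own field.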
Other issues associated with the ranking of scholars, such as how publications with a large number of authors should be counted, how the citations to such publications should be allocated among the authors, whether self-citations should be included or excluded, and how to minimize the perceived bias toward senior scholars who continue to accumulate citations to publications that may be decades old, are all addressed by ScholarGPS™. In fact, we provide subscribers with a wide variety of choices in how scholars are ranked: weighting publications, citations, and h-indices by the number of authors; evaluating either lifetime or prior five-year activity; and including or excluding self-citations. Issues associated with the ranking of institutions, such as how institutions that focus on the Humanities, Law, or Social Sciences can be fairly compared to institutions that focus on Medicine, the Life Sciences, or Engineering and Computer Science; the perceived bias of reputational rankings toward more established institutions; and how to minimize the influence of institutional size on institutional rankings, are also addressed by the purely quantitative ScholarGPS™ ranking methodologies available here. In short, ScholarGPS™ ranks institutions and programs from around the world based on the quantity and quality (relative to their own Disciplines) of the active scholars at each institution.
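As a rough illustration of the scholar-level options described above (author-count weighting, a prior-five-year window, and self-citation exclusion), the sketch below computes a weighted citation count under each choice. The Publication record and the weighted_citation_count helper are hypothetical, and the weighting shown (dividing each publication's citations equally among its authors) is one plausible interpretation, not necessarily the formula ScholarGPS™ uses.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical publication record; field names are illustrative,
# not ScholarGPS's actual data model.
@dataclass
class Publication:
    year: int
    n_authors: int
    citing_author_ids: List[str]   # one entry per citation, keyed by citing author
    author_id: str                 # the scholar being evaluated

def weighted_citation_count(pubs, *, weight_by_authors=True,
                            exclude_self_citations=True,
                            window_start_year=None):
    """Sum citations under the chosen options:
    - divide each publication's citations by its author count,
    - optionally drop citations coming from the scholar themselves,
    - optionally restrict to publications from a recent window
      (e.g. the prior five years)."""
    total = 0.0
    for pub in pubs:
        if window_start_year is not None and pub.year < window_start_year:
            continue
        citations = pub.citing_author_ids
        if exclude_self_citations:
            citations = [c for c in citations if c != pub.author_id]
        count = len(citations)
        if weight_by_authors:
            count /= max(pub.n_authors, 1)
        total += count
    return total

# Example: a two-author 2021 paper with 10 citations, one of them a
# self-citation, contributes (10 - 1) / 2 = 4.5 under these options.
pubs = [Publication(year=2021, n_authors=2,
                    citing_author_ids=["x"] * 9 + ["me"],
                    author_id="me")]
print(weighted_citation_count(pubs, window_start_year=2020))  # 4.5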
It is likely impossible to find any encompassing set of metrics that would create a perfect scholarly ranking model: one embraced by all scholars that ranks every scholar with complete fairness and accuracy. ScholarGPS™ recognizes that great care should be taken in using any ranking (whether from ScholarGPS™ or any other ranking system) as the final statement of any scholar's true influence or any institution's true value. While ScholarGPS™ metrics are derived largely from those used in the sciences and social sciences, other factors, including the quality of teaching, outreach activities, and other modes of scholarly or artistic dissemination such as exhibitions, performances, and musical compositions, should be considered when warranted. Users should therefore not construe a lower ranking as necessarily representative of lesser influence or prestige.