I've never found the phrase 'one size fits all' useful for T-shirts. Maybe it is just as wrong when it comes to ranking academic economists.
Much heated debate has ensued from the 2001 working paper Rankings of Academic Journals and Institutions in Economics by Pantelis Kalaitzidakis and colleagues. The 'KMS' paper, later published in the December 2003 issue of the Journal of the European Economic Association, proposed a worldwide ranking of economic research institutions based on a ranking of economics journals computed from current impact factors. Harvard, Chicago and MIT were ranked first, second and third respectively. But US institutions accounted for less than half (44%), compared with one-third from Europe and a fair smattering of Asian institutions.
But was the methodology the right one? A new paper by Magnus Henrekson and Daniel Waldenström argues it wasn't. In Should Research Performance be Measured Unidimensionally? Evidence from Rankings of Academic Economists, they applied seven established measures of research performance to all professors of economics in Sweden, exploring how the choice of measure affects the skewness of the performance distribution and the ranking of individual researchers. The Kalaitzidakis et al. method did not perform well:
We find large differences across all measures, but some deviate more than others. In particular, the journal ranking of Kalaitzidakis et al. 2003 (KMS), which was endorsed by the European Economic Association and has been extremely influential especially in Europe, appears to be an outlier among the available measures. Its distribution of performances is the most skewed, and its ranking of scholars corresponds the least with the rankings of the other measures.
Hence, relying on one single metric of research quality, especially one that is as extreme as KMS, is associated with a great risk given that researchers tend to adjust behavior in order to maximize the assessed relative and absolute value of their work.
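To get a feel for the kind of comparison the authors run, here is a minimal sketch (in Python, using made-up scores rather than their data, with hypothetical measure names) of how one might check the skewness of each measure's score distribution and how closely the implied rankings agree via Spearman rank correlations.

```python
# A minimal sketch (not the authors' code): compare hypothetical ranking measures
# by the skewness of their score distributions and by Spearman rank correlations.
import numpy as np
from scipy.stats import skew, spearmanr

rng = np.random.default_rng(0)
n = 100  # hypothetical number of researchers

# Hypothetical scores under three stylised measures: a heavily weighted
# journal-quality index (KMS-like), a raw publication count, and citations.
scores = {
    "kms_like": rng.lognormal(mean=0.0, sigma=1.5, size=n),  # highly skewed
    "pub_count": rng.poisson(lam=10, size=n).astype(float),
    "citations": rng.lognormal(mean=1.0, sigma=0.8, size=n),
}

# Skewness of each measure's distribution of performances.
for name, s in scores.items():
    print(f"{name}: skewness = {skew(s):.2f}")

# Pairwise Spearman correlations show how closely the implied rankings agree;
# an outlier measure correlates weakly with the rest.
names = list(scores)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, _ = spearmanr(scores[names[i]], scores[names[j]])
        print(f"rank corr({names[i]}, {names[j]}) = {rho:.2f}")
```

A measure whose distribution is far more skewed than the others, and whose rank correlations with the rest are weakest, is the kind of outlier the authors identify in KMS.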
The authors conclude on a cautious but sensible note:
Our results do not imply that we should refrain from efforts to rank individual researchers and ban all attempts to quantify the value of research output. But our results make clear that there is no single unequivocal catch-all measure that can be used.
All seven measures provide relevant information about the performance of individual researchers, and no doubt there are additional aspects that may be important that are largely overlooked by all of these measures. For instance, only a small subset of all journals are included in the KMS, IF and KY measures, and most measures either ignore or give little weight to impact outside economics or on policymaking. Hence, quantitative measures cannot fully substitute for careful reading and individual assessment of the works of individual researchers.