The trouble with evaluating anything
09 Feb 2015

It is very hard to evaluate people’s productivity or work in any meaningful way. This problem is the source of:
- Consternation about peer review
- The reason post-publication peer review doesn’t work
- Consternation about faculty evaluation
- Major problems at companies like Yahoo and Microsoft
Roger and I were just talking about this problem in the context of evaluating the impact of software as a faculty member, and Roger suggested that the problem is:
Evaluating people requires real work and so people are always looking for shortcuts
To evaluate a person’s work or their productivity requires three things:
1. To be an expert in what they do
2. To have absolutely no reason to care whether they succeed or not
3. To have time available to evaluate them
These three fundamental things are at the heart of why it is so hard to get good evaluations of people and why peer review and other systems are under such fire. The main source of the problem is the conflict between requirements 1 and 2. The group of people in any organization, or at any scale, who are truly world class at a given topic, from software engineering to history, is small. It has to be, by definition. This group of people inevitably has some reason to care about the success of the other people in that same group. Either they work with the other world-class people and want them to succeed, or they are competing with them, intentionally or unintentionally.
The conflict between being an expert and having no stake wouldn’t be such a problem if it weren’t for issue number 3: the time to evaluate people. To truly get good evaluations, you need someone who isn’t an expert in a field, and so has no stake, to take the time to become an expert and then evaluate the person or software. But this requires a huge amount of effort from a reviewer who has to become expert in a new field. Given that reviewing is often treated as the least important task in people’s workflow - as evidenced by how little we value people acting as peer reviewers for journals, or doing a good job evaluating colleagues for promotion in companies - it is no wonder people don’t take the time to become experts.
I actually think that tenure review committees at forward-thinking places may be the best at this (Rafa said the same thing about NIH study sections). They at least attempt to get outside reviews from people who are unbiased about the work a faculty member is doing before they are promoted. This system, of course, has large and well-documented problems, but I think it is better than having a person’s direct supervisor - who clearly has a stake - be the only person evaluating them. It is also better than only using quantifiable metrics like the number of papers and the impact factor of the corresponding journals. I also think that most senior faculty who evaluate people take the job very seriously, despite the only incentive being good citizenship.
Since real evaluation requires hard work and expertise, most of the time people are looking for a shortcut. These shortcuts typically take the form of quantifiable metrics. In the academic world these shortcuts are things like:
- Number of papers
- Citations to academic papers
- The impact factor of a journal
- Downloads of a person’s software
I think all of these things are associated with quality but none of them define quality. You could try to model the relationship, but it is very hard to come up with a universal definition for the outcome you are trying to model. In academics, some people have suggested that open review or post-publication review solves the problem. But this is only true for a very small subset of cases that violate requirement number 2. The only papers that get serious post-publication review are those where people have an incentive for the paper to go one way or the other. This means that papers in Science will be post-publication reviewed much, much more often than equally important papers in discipline-specific journals - just because people care more about Science. This leaves the vast majority of papers unreviewed - as evidenced by the relatively modest number of papers reviewed on PubPeer or PubMed Commons.
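To make concrete why the modeling route is harder than it sounds, here is a minimal sketch in Python using entirely synthetic data and a made-up “quality” score (every variable and number here is hypothetical, for illustration only). The regression itself is trivial; the hard part is that the outcome you regress on already encodes someone’s definition of quality.

```python
import numpy as np

# Toy illustration with synthetic data: regress a made-up "quality" score on
# the shortcut metrics listed above. The point is not the fit - it is that you
# have to invent the outcome column before you can model anything.
rng = np.random.default_rng(0)
n = 200

papers = rng.poisson(20, n)              # number of papers
citations = rng.poisson(30 * papers)     # citations, loosely tied to papers
impact_factor = rng.gamma(2.0, 2.0, n)   # mean impact factor of journals
downloads = rng.poisson(500, n)          # downloads of a person's software

# A stand-in outcome: weak signal plus noise. Any real choice (retractions,
# panel scores, post-tenure output) encodes a different definition of quality.
quality = 0.01 * citations + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), papers, citations, impact_factor, downloads])
coef, *_ = np.linalg.lstsq(X, quality, rcond=None)

names = ["intercept", "papers", "citations", "impact_factor", "downloads"]
print(dict(zip(names, coef.round(3))))
```

Swap in a different outcome - retraction rates, expert panel scores, post-tenure productivity - and you get a different set of coefficients and a different ranking, which is exactly the problem.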
I’m beginning to think that the only way to do evaluation well is to hire people whose only job is to evaluate something well. In other words, peer reviewers who are paid to review papers full time and are only measured by how often those papers are retracted or proved false. Or tenure reviewers who are paid exclusively to evaluate tenure cases and are measured by how well the post-tenure process goes for the people they evaluate and whether there is any measurable bias in their reviews.
The trouble with evaluating anything is that it is hard work and right now we aren’t paying anyone to do it.