On ‘excellence’ in research

On one or two of the websites that I visit the topic of ‘excellence’ in research is being discussed, and I thought I might add my ten cents’ worth. It is a topic that is endlessly fascinating if you are involved in assessing other people’s research, and there is no right answer, or even a right mechanism for judging. You learn by doing.

In sport the question of excellence is dealt with in a straightforward way: you decide who is the best by matching the teams or players until they have all played one another, and you compare the scores. ‘Can you beat Federer?’ is a question that can be answered through a test, if Roger is available to play you. If you can’t then beat him, he is better than you — on that day, anyway.

Research is not like that. A good research project is a game against nature: if we did this, we might learn about that, which would then tell us about the other, and would add to the sum of knowledge we have about this important domain. Nature does not give up her secrets easily, and the best research is ingenious, well-thought-out, and often simple.

Who is to decide whether a given project is excellent? The answer is ‘the peers’, meaning those who work in the same field. I don’t want to get into the world of peer review here. I’ve done that recently, and there’s no point in repeating myself. How big is anyone’s field? When I was seriously interested in ‘human knowledge’ as a domain of enquiry I came to the view that ‘fields’ at what is called ‘the cutting edge’ were not bigger than about 500 people, spread all over the world. If they got larger than this they would break up, and form two or more smaller fields, a little as cells do.

Why would that happen? Because the field usually revolves around one or two inter-related questions that seem important, answers to which would define what we really know about something, and point to what we need to know next. The questions may be difficult to answer, and they might hang about for a few years before they are solved. Some of them don’t get solved quickly, and many researchers will grow tired of the game, and move on to more easily answered questions.

When we come to provide money for research we need to know whether the researcher knows what he or she is doing, and whether or not what is proposed is worth doing in the first place. Reasonable people will disagree about both issues. In general we ask physicists to rule on these issues where the discipline is physics, chemists where the discipline is chemistry, and so on. On the whole, however, physicists tend to see other disciplines as intrinsically boring, so that the only interesting bits of geology, for example, are in geophysics. For historians, the only interesting bits of political science are the historical bits, and so on.

If we have to choose between a good proposal in physics and a good one in chemistry, what do we do then? Well, we can either leave it to the physicists, and tell them to decide within a given budget, or we can sit around and argue about it, including those of us who are not really physicists or chemists. What would be the criterion in the latter case? Well, a judgment about which of the marginal projects would better advance either humanity, or the discipline; or who was the more junior researcher, in the belief that we should encourage younger aspirants; or which of the institutions, all other things being equal, would benefit more from the award.

You don’t like those criteria, and would rather leave it to the disciplines? Well, how do you decide how much money each discipline gets? In terms of their relative proportion of ‘excellent’ projects? Hmmm. That will lead to all disciplines inflating their grades, won’t it? But surely excellence is obvious, you cry. Well, I think I am a decent judge in areas I am familiar with, but another person with my background might disagree with my judgment. What then?

In fact, ‘excellence’ is not a useful criterion at all. There’s far too much of it about, especially in universities which think they are, not to put too fine a point on it, embodiments of excellence themselves. There has to be a judge, and usually a jury. Yes, some kind of acumen (let’s not call it ‘excellence’) has to be there, but the judge and jury will be affected by all sorts of things, and they mightn’t even agree on what constituted excellence.

I once thought some attention ought to be paid to the national interest (what would Australia get out of this?), since public money ought to be spent in the interest of the public. But my feeling is that the swing has occurred and probably gone too far, so that we are now getting far too much ‘policy-based evidence-making’, especially in climate science. And that is the danger of associating research funding with national goals, which have to be, and should be, political.

As researchers have discovered, often to their surprise, research doesn’t often point neatly to political solutions. But I’ll leave that for another day.
