Regular readers will know that I have become disappointed with The Conversation, a publicly funded website that gives academics an opportunity to put forward their ideas and opinions about just about anything. It seems to me to be infected with what I have called elsewhere ‘the ABC culture’, a point of view in which the planet is in trouble, women meet glass ceilings everywhere, all boat people are genuine refugees from political terror, poverty is caused by rich people, biodiversity is disappearing everywhere, and so on. And the Comments section is populated by small groups of acolytes who congratulate the author(s) of the essay and defend their point of view against all comers.
Nonetheless a summary of The Conversation appears on my screen every morning during the week, and I scroll down to see if there is anything worth reading. A couple of days ago I came across an essay on understanding research: ‘Have you ever tried to interpret some new research to work out what the study means in the grand scheme of things?’ All the time, I thought, and read on. It’s a good piece, with one great howler, and I’ll summarise it for you.
1. Wait! That’s just one study
If you base your views on just one study you’re making a great mistake. You’re either cherry-picking or falling for the exception fallacy. Indeed so, and I said much the same a couple of times last week.
2. Significant doesn’t mean important
Yes, ‘significant’ has a particular meaning in statistics, and readers can often drift into the view that an apparently significant finding, where p < 0.001, must therefore be important. It may not be so, because in a study with a large N you are likely to get all sorts of significant relationships that have not much meaning. As a postdoc in the USA I ventured the view in a seminar that a result was ‘interesting’, and was pounced on at once. ‘What’s your theory? Why did you think it was interesting? What relationship did you have in mind? Why did you have it? Was it important?’
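A toy calculation makes the large-N point concrete. All the numbers here are invented for illustration: the same negligible effect — a twentieth of a standard deviation — is nowhere near ‘significant’ with a hundred subjects, yet becomes overwhelmingly so with a million, without becoming one whit more important.

```python
import math

def two_sample_z(diff, sd, n):
    """z-statistic for a mean difference `diff` between two groups of
    size n, each with standard deviation sd (equal-variance sketch)."""
    standard_error = sd * math.sqrt(2.0 / n)
    return diff / standard_error

def p_value(z):
    """Two-sided p-value from the normal approximation."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# The same tiny effect (0.05 of a standard deviation) at two sample sizes:
print(f"n = 100:       p = {p_value(two_sample_z(0.05, 1.0, 100)):.2f}")
print(f"n = 1,000,000: p = {p_value(two_sample_z(0.05, 1.0, 1_000_000)):.2g}")
```

With n = 100 the p-value is about 0.72; with n = 1,000,000 it is effectively zero. The effect itself has not changed at all — only the sample size has.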
3. And effect size doesn’t mean useful
In medical research you might encounter a treatment that lowers the risk of something by 50 per cent. But what is the risk of your developing the condition in the first place? If it is really very small, there is no point in treating everybody so that one or two have the condition improved.
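The arithmetic behind this is worth a moment. With invented figures (the baseline risk and the 50 per cent reduction are assumptions for illustration), a halving of relative risk can still mean treating a thousand people to spare one of them:

```python
baseline_risk = 0.002            # hypothetical: 2 in 1,000 develop the condition
relative_risk_reduction = 0.50   # the headline '50 per cent' figure

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - treated_risk   # only 1 in 1,000
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Absolute risk falls by {absolute_risk_reduction:.3f}; "
      f"treat {number_needed_to_treat:.0f} people to prevent one case.")
```

The ‘50 per cent’ headline shrinks to an absolute reduction of 0.001 — one prevented case per thousand people treated.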
4. Are you judging the extremes by the majority?
Not all trends are linear, though you might think so from the number of graphs you encounter. People with very high salt intakes have a greater risk of cardiovascular disease. But people with very low salt intakes can have a similarly high risk too. We need to remember bell-shaped and U-shaped curves.
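A made-up U-shaped risk function (every number here is invented, not a real dose-response curve) shows why a straight line fitted through the bulk of the data misleads at the extremes:

```python
def cvd_risk(salt_grams_per_day, optimum=4.0):
    """Hypothetical U-shaped risk: lowest at the optimum intake,
    rising on both sides. Purely illustrative numbers."""
    return 1.0 + 0.05 * (salt_grams_per_day - optimum) ** 2

for intake in (1, 4, 10):
    print(f"{intake:>2} g/day -> relative risk {cvd_risk(intake):.2f}")
```

Both the very low intake (1 g/day) and the very high intake (10 g/day) carry elevated risk; a linear trend estimated from the middle of the distribution would miss the rise at the left-hand end entirely.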
5. Did you maybe even want to find that effect?
We are all prone to confirmation bias, and need to look hard at findings that we like.
6. Were you tricked by sciencey snake oil?
The authors provide one of my favourite little videos, the fabulous Turbo Encabulator, in which a straight-faced engineer rattles off a complex stream of technical nonsense. Another example: ‘In one study, non-experts found even bad psychological explanations of behaviour more convincing when they were associated with irrelevant neuroscience information.’
7. Qualities aren’t quantities and quantities aren’t qualities
While it is par for the course to attempt a mathematical account of whatever it is you are studying, numbers may not be the way to go. Human emotions don’t lend themselves to numerical treatment, and the numbers will very likely lead one astray.
8. Models by definition are not perfect representations of reality
Here comes the Whoops! ‘A common battle-line between climate change deniers and people who actually understand evidence is the effectiveness and representativeness of climate models.’ Oh chaps, you were doing so well, too. This is the whole of the rest of No. 8:
But we can use much simpler models to look at this. Just take the classic model of an atom. It’s frequently represented as a nice stable nucleus in the middle of a number of neatly orbiting electrons. While this doesn’t reflect how an atom actually looks, it serves to explain fundamental aspects of the way atoms and their sub-elements work. This doesn’t mean people haven’t had misconceptions about atoms based on this simplified model. But these can be modified with further teaching, study and experience.
So how does that help us with these poor deluded ‘climate change deniers’? In my experience, it is precisely the sceptics who go to evidence, and the orthodox who keep insisting that the models explain everything — which they can’t and don’t, as No. 8 says explicitly.
9. Context matters
Individual scientists — and scientific disciplines — might be great at providing advice from just one frame. But for any complex social, political or personal issue there are often multiple disciplines and multiple points of view to take into account. Yes indeed, and ‘climate change’ provides a splendid example.
10. And just because it’s peer reviewed that doesn’t make it right
Amen. Peer review is the beginning of a study’s active public life, not the culmination. Yep.
The authors are members of the Centre for Public Awareness of Science at the ANU, and they have written a series of related articles, of which the present one is the last. They wrote one on correlation and causation, too.
I think these are excellent guidelines, and they ought to be used by anyone who is interested in the whole ‘climate change’ debate — indeed in any debate in which ‘research’ is said to show this or that. I would like to say that I use them as a matter of course, because that’s the way I was trained all those years ago. And with all due modesty, I’ll add a number 11.
11. Always go to the evidence for the claim, and try to make sense of it yourself. Don’t just accept what others say.