Of mice and men: when peer review fails

I’ve come across a fascinating story in the field of medical research. It is about the use of mice and rats as research subjects for diseases in humans, a role that they have played for decades. The story has two parts, the study itself, and the difficulty the authors had in getting it published. The second explains the title I have used, but the first is interesting in itself.

Mice have been the ‘species of choice’ for those who study human diseases, which explains why research centres that do this work have, and have to have, a controlled environment in which mice are bred and maintained. These environments can be very expensive. We use mice, rats and rabbits as experimental animals because they breed abundantly. Humans and mice share 99 per cent of their genes and 97.5 per cent of their DNA. And we have learned a great deal from the use of mice, to the point where they provide the standard vehicle for experimental research into human disease.

The study in question found that in three critical areas — sepsis, burns and trauma — the mouse model simply didn’t work. Every one of nearly 150 drugs tested at great expense in humans with sepsis failed. The drugs were all based on studies in mice, and it seems that the sepsis mice experience is not the same as the human variety. One reason is that mice scavenge food material that humans could not tolerate, and therefore handle bacterial infection more easily than humans do. A mouse can survive a million times more bacteria in its blood than would kill a human being.

Was sepsis (a potentially deadly bodily reaction to infection, and the leading cause of death among patients in intensive-care units) the same in mice as it was in humans? Over ten years the researchers collected data from humans and compared what they had with data from mice. They discovered that there were no similarities: humans and mice dealt with sepsis through different mechanisms — a gene that might be activated in mice was suppressed in humans, and so on.

What is more, different conditions in mice — burns, trauma and sepsis — used different groups of genes, while in humans similar genes were used in all three conditions. For humans, then, you might find a drug that worked for all three conditions, but that wouldn’t be the case for mice.

When they tried to discuss their findings, and then to publish them, there were objections that they had not shown that the same gene response also occurred in mice!

‘They were so used to doing mouse studies that they thought that was how you validate things,’ the lead author of the study said. ‘They are so ingrained in trying to cure mice that they forget we are trying to cure humans.’

Their paper argued that there was no relationship between the genetic responses of mice and those of human beings, and journal after journal rejected it, including both Science and Nature. According to one of the authors, reviewers from the journals that rejected the paper did not point to scientific errors. In his words, ‘the most common response was, “It has to be wrong. I don’t know why it is wrong, but it has to be wrong.”’

Eventually they had the paper published in the Proceedings of the National Academy of Sciences, an option available to one of the authors, who was a member. Now that the paper is out, there is a chorus of amazement from within the research community.

‘When I read the paper, I was stunned by just how bad the mouse data are,’ said one researcher. ‘It’s really amazing — no correlation at all. These data are so persuasive and so robust that I think funding agencies are going to take note. Until now, to get funding, you had to propose experiments using the mouse model.’

It is not the end of the line for mice as research subjects, but it is the end of a widespread assumption that mice and men are, at least in the research environment, virtually interchangeable. And that should mean that we get better outcomes for pre-clinical research in these areas.

And it should remind us again that the world of peer review is governed by widespread assumptions about how things actually work, assumptions that aren’t always correct. I’ve made this point before about climate change, and about peer review itself. It is a necessary instrument in deciding whether or not a given research proposal should be funded, or a given paper should be published, but it is not at all infallible, and it operates on assumptions of all kinds.

As it happens, one of the authors of the study in question is a Dr Warren, who researches sepsis at Massachusetts General Hospital. Another Dr Warren, senior pathologist at the Royal Perth Hospital, was one of the two discoverers of Helicobacter pylori, the cause of stomach ulcers, for which the pair were later to win the Nobel Prize.

They had a lot of trouble getting their research accepted and published, too. It was assumed that stress was the main cause of those ulcers, as I know to my cost. I had a dangerous operation, which took away a good deal of my pleasure in eating and drinking, in order to deal with this imagined stress. A few years later my condition could have been treated orally, cheaply and quickly.

I can give only two cheers for peer review, as I said in my first essay on it.


One Comment

  • Legal Eagle says:

    That is a fascinating story. As an academic, I can certainly say that peer review has its ups and downs. I have reviewed articles myself and had useful reviews from submissions to journals. Peer review can be a very positive thing (sorts the wheat from the chaff) but if you are really challenging the dominant academic paradigm, it might lead to exactly the kind of thing you see here.

    I was having a chat with a colleague yesterday, who advised me that I should make sure that my students’ PhDs were sent to people who would assess them with an open mind – most particularly, even if the PhD criticised that person’s own analysis or mind-set, I should be sure that the reviewer was capable of acknowledging that it was a good piece of work. In the event, I was lucky enough to have my own PhD judged by two such academics, but I can quite easily imagine that if it had been sent to someone less open-minded, I might have been in trouble. The point is that the peer review process works well if you have open-minded reviewers who are prepared to have their own preconceptions challenged, but it doesn’t work so well if you have reviewers who are closed-minded and just want to defend their own turf, or are offended by people who challenge their ideas.
