Why isn’t more research reproducible?

Some six years ago I wrote an essay about John Ioannidis, now at Stanford, who stirred up the medical research community with a paper arguing that more than half of all medical research papers could not be trusted because the work described in them could not be replicated. Ioannidis’s original work dates from 2005, but he and others have since extended it into fields beyond medicine. The amount of money wasted because of poor research, by both private enterprise and governments, is enormous.

From time to time since then I have heard murmurings that positive things are happening in medical science and elsewhere — occasionally even in climate science. Last week the National Association of Scholars in the US published a long paper on the whole issue, The Irreproducibility Crisis of Modern Science. It is sobering reading if you are interested in the ethical structure of scientific endeavour. The NAS is politically conservative in the context of higher education, opposed to political correctness on campuses, as well as to age, sex and diversity quotas, and in favour of Great Books, rigour in argument and good data. It probably doesn’t need saying that these are by and large my own intellectual values too.

The introduction to the paper starts with the case of a postdoc, Oona Lönnstedt at Uppsala University. She and her supervisor, Peter Eklöv, published a paper in Science in June 2016, warning of the dangers of microplastic particles in the ocean. The microplastics, they reported, endangered fish. It turns out that Lönnstedt never performed the research that she and Eklöv reported.

The initial June 2016 article achieved worldwide attention and was heralded as the revelation of a previously unrecognized environmental catastrophe. When doubts about the research integrity began to emerge, Uppsala University investigated and found no evidence of misconduct. Critics kept pressing and the University responded with a second investigation that concluded in April 2017 and found both Lönnstedt and Eklöv guilty of misconduct. The university then appointed a new Board for Investigation of Misconduct in Research. In December 2017 the Board announced its findings: Lönnstedt had intentionally fabricated her data and Eklöv had failed to check that she had actually carried out her research as described. By then the postdoc had become an international environmental celebrity.

Deliberate fraud of this kind, the paper claims, is uncommon, though it may be increasing. At the heart of the problem is a failure both to follow good research design practices and to understand statistics properly (or at all, in some astonishing cases). Why does it happen? Why does so much research fail to replicate? Bad methodology, inadequate constraints on researchers, and a professional scientific culture that creates incentives to produce new results — innovative results, trailblazing results, exciting results — have combined to create the reproducibility crisis.

I have written about these issues before (for example, here), and take no pleasure in doing so again. Many of the examples cited come from the social sciences, which is most embarrassing to me, and I am sure to others of my old tribe. The extensive use of statistics is now almost universal in the social sciences and some of the natural sciences, and today’s researchers can employ statistical packages that allow them, having assembled the data, to do little more than press a button. But what does the output mean, and does the researcher understand what the package actually does? According to the paper, far too frequently the answers are ‘Don’t know’ and ‘No’. Anyone who reads academic journal articles will see what looks like an obsession with finding low p values (a low p value means only that data at least as extreme as those observed would be unlikely if the null hypothesis were true; it does not, by itself, make the researcher’s own hypothesis likely to be true). In March 2016 the American Statistical Association issued a “Statement on Statistical Significance and p-Values” to address common misconceptions. The Statement’s six enunciated principles included the warning that by itself a p value does not provide a good measure of evidence about a model or a hypothesis. That was drummed into me fifty years ago.
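The ASA’s warning is easy to demonstrate with a short simulation (my own sketch, not something from the NAS paper): when two groups are drawn from the same distribution, so that the null hypothesis is true by construction, a “significant” p value below 0.05 still turns up about once in every twenty experiments. A low p value alone cannot distinguish a real effect from this background rate of chance findings.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n, trials = 50, 5000
false_positives = 0
for _ in range(trials):
    # Both groups come from the SAME distribution: the null hypothesis is true.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / math.sqrt(2 / n)  # known unit variance, so a z-test suffices
    if two_sided_p(z) < 0.05:
        false_positives += 1

# Roughly one experiment in twenty is "significant" purely by chance.
print(false_positives / trials)
```

The point of the exercise is that a single low p value is exactly what one expects to see occasionally even when there is nothing there.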

Also emphasised in those pre-desktop-computer days was the importance of deriving a hypothesis from one body of data and testing it on another. It was an absolute no-no to test the hypothesis on the same body of data — for obvious reasons. The NAS paper finds that this practice is widespread, and that it leads to other malpractices. Scientists also produce supportive statistical results from recalcitrant data by fiddling with the data itself. Researchers commonly edit their data sets, often by excluding apparently bizarre cases (“outliers”) from their analyses. But in doing this they can skew their results: scientists who systematically exclude data that undermine their hypotheses bias their data to show only what they want to see. Perhaps the BoM, the CSIRO and other climate data-mongers could pause and think harder about what they are doing in homogenisation.
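The bias from this kind of “cleaning” is mechanical, and a toy example makes it plain (a hypothetical illustration, not data from any study): discard the lowest few observations from effect-free data and the mean can only move toward the hoped-for positive result.

```python
import random
import statistics

random.seed(1)
# Thirty measurements with no real effect: the true mean is zero.
data = [random.gauss(0, 1) for _ in range(30)]

# "Cleaning" the data by discarding the three lowest values as outliers.
# Dropping the smallest observations necessarily pulls the mean upward,
# toward the hoped-for positive effect.
trimmed = sorted(data)[3:]

print(round(statistics.mean(data), 3), round(statistics.mean(trimmed), 3))
```

A principled analysis would specify outlier criteria before looking at the data; trimming whichever points undermine the hypothesis guarantees a shift in the desired direction.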

Researchers can also bias their data by ceasing to collect data at an arbitrary point, perhaps the point at which the data already collected finally support their hypothesis. Conversely, a researcher whose data don’t support his hypothesis can decide to keep collecting additional data until they yield a more congenial result. Such practices are all too common: a survey of 2,000 psychologists found that 36% of those surveyed “stopped data collection after achieving the desired result”.
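Optional stopping of this kind inflates false positives dramatically, which a small simulation can illustrate (again my own sketch, with arbitrary batch sizes): peeking at the data every ten observations and stopping at the first p value below 0.05 produces a “significant” result far more often than the nominal 5%, even though no effect exists.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(2)
trials, batch, max_n = 2000, 10, 200
stopped_early = 0
for _ in range(trials):
    data = []
    while len(data) < max_n:
        # Collect another batch of effect-free observations (true mean zero).
        data.extend(random.gauss(0, 1) for _ in range(batch))
        n = len(data)
        z = (sum(data) / n) * math.sqrt(n)  # z-test of the mean against zero
        if two_sided_p(z) < 0.05:
            stopped_early += 1  # declared "significant" -- but the null is true
            break

# Far above the nominal 5%, because each peek is another chance at a false positive.
print(stopped_early / trials)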

Another sort of problem arises when scientists try to combine, or “harmonize,” multiple pre-existing data sets and models in their research — while failing to account sufficiently for how such harmonization magnifies the uncertainty of their conclusions. Claudia Tebaldi and Reto Knutti concluded in 2007 that the entire field of probabilistic climate projection, which often relies on combining multiple climate models, had no verifiable relation to the actual climate, and thus no predictive value. Absent “new knowledge about the [climate] processes and a substantial increase in computational resources,” adding new climate models won’t help: “our uncertainty should not continue to decrease when the number of models increases”.

And so it goes. Researchers are allowed too much freedom under current protocols: going back, for example, and changing the research design to something more useful once the data are in. Researchers are reluctant to share their data or their methodology with others, and that makes reproducibility difficult from the beginning. Data sought from researchers are said to be lost, or in the wrong format, or a victim of a shift in computers or offices, and so on. Many journals state that they require openness with data and methods, but the requirement seems not to be policed. For all researchers there is a premium on positive results, both to get published and to get a grant renewed. So researchers strive to find significant statistical relationships in their data. And of course there is ‘groupthink’, which is abundantly illustrated in climate science but by no means restricted to it.

What can be done? The NAS proposes a long list of recommendations, some of which I agree with, and some of which I think somewhat pie-in-the-sky. We cannot, for example, look to governments to solve the crisis, because governments are part of the problem: they now search for policy-based evidence. But there is said to be a general improvement in laboratory practices — at least, Nature says so — and a number of new journals are springing up that set out to follow the canons of reproducibility. And let us not forget the efforts of people like Anthony Watts, Judith Curry and others, who keep the doors open for those who want to argue with what should never be called ‘settled science’: if it’s science, it’s never settled, and if it’s settled, it isn’t science.

The NAS paper is long, but well written and most accessible. I recommend it to those who are interested in how we come to know things, and how best to do so. And, though it says little about the issue, the paper points to real problems with peer review, which include groupthink, the replacement of editors with others more favourable to the groupthink, and a sheer failure to think hard about what the proposed article is actually saying. The latter has become so obvious that spoof papers have been offered and published without anyone, apparently, even reading them closely. There is even a computer program available that will generate one for you. I doubt that the reproducibility problems will be solved quickly, but at least they are being recognised.

Comments (60)

  • JMO says:

    Thanks Don for a great post. I have long suspected this issue was lingering in today’s scientific world. As a former climate alarmist, I knew this issue would be prevalent in so-called climate science, especially where salaries depended on alarmist, catastrophic, even doomsday results or predictions. The more I delved, read and recalled my astronomy and physics knowledge, and applied some due diligence, the more sceptical I became. I thought for myself.

    Now I am not embarrassed to say I was wrong to “believe” the climate alarmists, catastrophists and doomsters. The one remaining question is how could I have been duped (even for a relatively short time) by those merchants of climate doom? A partial answer is “scientists” were saying these dire predictions and scientists are beyond reproach. I now know differently.

  • Bryan Roberts says:

    At least some people recognised them decades ago. In those days, I was visiting a very distinguished academic, who, in the course of conversation, mentioned that he was asked to review a paper for one of the top journals in the field. His response to the Editor was “I do not believe this paper, but I can find no reason to reject it”. Errors, misinterpretations of data, do happen – perhaps not 30% of the time, but they do happen.

    I have been in the business for 45 years now. In my field, these sorts of frauds do exist (I have certainly known of some) but established frauds are relatively uncommon, and spectacular results are scrutinised very carefully. Vide the Ted Steele ‘Lamarckian’ controversy, and of course, The Mann ‘hockey stick’.

    I will leave the last word with my PhD supervisor, an irascible Liverpudlian. He said “My boy, if you need statistics to prove your results, you should have done a better experiment”.

    • Don Aitkin says:


      I am too old now to be asked to comment on MSS, but I recall one of the last ones, a few years ago. There was nothing wrong with it, in a technical sense, but it was a trite answer to a non-problem. I said so, in more measured language, in my review. But the article was published anyway, the editor writing to me to say (in effect) that the chap had done a lot of work, and needed a publication…

    • Chris Warren says:

      You may have needed a better supervisor.

  • spangled drongo says:

    The world is becoming more aware of this daily and because Trump is right onto this “fakery at the bakery” the Dems and the MSM [including our Auntie] never let up on him:

    “This is why liberals are really mad at Scott Pruitt and demand his resignation – he’s demanding accountability and transparency in environmental science, something they didn’t have to do before

    “Environmental Protection Agency Administrator Scott Pruitt signed a proposed rule on Tuesday to prevent the agency from relying on scientific studies that don’t publish the underlying data.”

    “The era of secret science at EPA is coming to an end,” Pruitt said in a statement. “The ability to test, authenticate, and reproduce scientific findings is vital for the integrity of rulemaking process.”


  • Aert Driessen says:

    Good one Don. The ‘microplastic particles in the ocean’ story you refer to is still alive and well. I’m reasonably certain that in only the last few days I heard about it in a news bulletin (probably the ABC, who love this sort of stuff) with reference to Arctic ice. I would have thought that the pronouncement of such an environmental problem would have raised questions from investigative journalists sooner rather than later because, as I recall from school days, the purest water comes from melted ice, which is to say that ice crystals cannot accommodate ‘foreign’ matter. Perhaps someone else can clarify this.

  • David says:

    “…….arguing that more than half of all medical research papers could not be trusted because the work described in them could not be replicated.”

    Who would know how these papers were selected. But research on any subject conducted on the edge of what we know is going to be more uncertain than core knowledge. But there is obviously a process of assessment and re-evaluation that results in medical treatments that are much better than a 50/50 bet.

  • Tezza says:

    Excellent post, Don. John Ioannidis now has an article published on the power of bias in economics research, and features in an excellent podcast with Russ Roberts on Econtalk at http://www.econtalk.org/archives/2018/01/john_ioannidis.html

    • Don Aitkin says:

      Thanks, Tezza. Readers don’t have to listen to the talk, for the text can be found if you scroll down. It seems that economics is very like neuroscience.

      • Don Aitkin says:

        I’ve listened and read it all. Excellent discussion, and so widely relevant in research. There is a second discussion in the Comments, and I liked this little gem (from a medical researcher): ‘Epidemiologists have struggled with these problems. I am strongly on the side of the guest [Ioannidis] — most of what we do is garbage but this is true across the majority of science even if the methods are good. Most science does not accumulate new useful knowledge. The problem is how much wrong knowledge we have that distracts people.’

  • Bryan Roberts says:

    What I find astonishing about this whole business is the conviction that the guys doing the replication are ‘better’ scientists than those who did the work in the first place. Says who?

    I once worked in a lab with a guy who was technically extraordinarily clever. To the point that even his co-workers had trouble replicating his results, because they were inferior at bench work. Were his results wrong because they apparently could not be replicated? No, they were not.

    Medical research is a money-making business. Maybe the ‘replication’ researchers were incompetent, or maybe there were vested interests in moving research towards/away from a particular line. I’ve never been swayed by money, because I’ve never been offered any, and I don’t care either way, but if those questions are not asked, the Ioannidis claims are just hot air.

    Follow the money. What was in it for the group making the original claim, and what was in it for the group refuting it?

    • Don Aitkin says:


      My understanding, not verified from ten minutes of trying to track down a published paper to that effect, is that Ioannidis used his students and colleagues to try to repeat the experiments given the published information about them, and reported results where they believed they had all the data and methodology needed to do so. More than half of the replications failed. Now you could argue that in some cases the replicators weren’t as able as the original investigator. But so many failed that relative ability is unlikely to be the common reason.

      • Bryan Roberts says:


        The following extract is from Wikipedia. “In another 2005 paper, Ioannidis analyzed 49 of the most highly regarded research findings in medicine over the previous 13 years. The paper compared the 45 studies that claimed to have uncovered effective interventions to subsequent studies with larger sample sizes: 7 (16%) of the studies were contradicted, 7 (16%) had effects that were smaller in the second study than in the first, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged.[15]”. In other words, 84% were not directly contradicted, which is vastly different to the idea that 30% were fraudulent or just wrong.

        From the context (larger sample sizes), it can be assumed that most were clinical trials of one sort or another, where numbers matter, and objective criteria for success may be weak or disputed. These latter also pertain to research in the social sciences. I very much doubt that you would find similar trends in the ‘harder’ disciplines, such as Molecular Genetics, Geology, Astrophysics, or God forbid, Statistics.

        • Don Aitkin says:

          I think you are right. The long discussion about economics mentioned above makes it clear that Ioannidis is particularly critical of large claims from small datasets. The trouble with the very large ones is that every finding will be statistically significant… That gets us back to the issue of importance, not of statistical significance. I think Briggs wrote about that too (type in Briggs in the magnifying glass option at the top right of the website).

      • David says:

        “But so many failed that relative ability is unlikely to be the common reason.”

        How does that follow?

        • Don Aitkin says:

          If Dr X based his findings on a sample of twenty patients in a university or repatriation hospital, then no matter how clever he is, the uncertainties involved remain very large. If Dr Y has a sample of 2000 patients drawn at random from the nation’s hospitals, and he is approximately as clever, then, provided he has done the work properly, the uncertainties will not be nearly so large (unless he has gone into minute sub-group analysis), and he may find that there is no relationship — or he may find that Dr X was lucky, and there appears to be one. (All else being equal.)
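The arithmetic behind the Dr X and Dr Y comparison is the familiar square-root law (a minimal sketch with an assumed unit standard deviation): the standard error of a sample mean shrinks as 1/sqrt(n), so a hundredfold larger sample is only tenfold more precise, and conversely twenty patients leave ten times the uncertainty of two thousand.

```python
import math

def standard_error(sigma, n):
    """Standard error of a sample mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

se_small = standard_error(1.0, 20)    # Dr X's twenty patients
se_large = standard_error(1.0, 2000)  # Dr Y's two thousand patients

print(se_small / se_large)  # 10.0: a hundredfold sample, tenfold precision
```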

  • Neville says:

    So how does the mitigation fra-d and con enjoy such a long life?
    The data has been available for decades, but from Kyoto through COP 21 until today the same old con trick is used and nearly every govt on the planet continues to believe this lunacy.
    It couldn’t be easier to understand the corruption involved, yet we’re supposed to sit back and say nothing and even accept the waste of endless trillions of dollars for no measurable change to temp by 2040 or 2100.

    • Bryan Roberts says:

      Because the criteria for success or failure are so vague. The best comment in this area was made by a well-respected biologist, who said “You can’t argue with dead mice”.

  • spangled drongo says:

    When gatekeepers and other scientists have a particular, partisan POV and promote their dogma on to the rationally sceptical public, not only to pander to their own philosophical vanity but line their pockets at those same rational sceptics expense, it requires extreme obtuseness not to see the bleedin’ obvious.

    When we have such degrees of “consensual science”, irreproducibility is only to be expected.

    The basics have been going on for a long time.

    Well before current “climate” observations.

    Brisbane Courier Mail, Jan 10 1871:


  • spangled drongo says:

    Of course, groupthink is always reproducible, as long as it is reproduced by similar groupthinkers.

    One aspect of science that always had me puzzled was the naturalisation of the dingo in Australia when it is obvious that in places where it is adored and protected [Fraser Is] no other ground dwelling native wildlife exists but places where it never got [Tasmania] there is the greatest biodiversity.

    Also, Tasmanian Devils, which used to be numerous on the mainland, are now only in Tasmania.

    Maybe it’s just me but when our native ground dwelling marsupials have never evolved alongside canids of any type and these canids had only populated the coastline for what is a blink of an eye in evolutionary terms, there seems to be a certain denial of very obvious science happening.

    When science can deny the bleedin’ obvious like this, I suppose if they get only half their papers right they are doing well.

  • Bryan Roberts says:

    Scientists tend to be rather innocent. Very few envisage a career ending in a Nobel Prize. Most just enjoy what they are doing, and they report it absolutely honestly. Few would know the name of their local politician, or his/her views on any scientific topic.

    The (apparently) widespread view that a significant proportion of their cohort will be crooked is patently false, as I have illustrated above. The politicisation of scientific results is an appalling disservice to young men and women interested in the sciences. Controversies have always existed, and have always been resolved by science, not by ‘consensus’.

    I like to think our young men and women will overcome this current rubbish, and my contacts with them suggest I am correct.

    • spangled drongo says:

      Bryan, I don’t doubt you are right with some younger scientists but there are many older ones that are very comfortable in their groupthink and will not give an inch in spite of contrary evidence.

  • JimboR says:

    Spare a thought for the poor climate scientists, giving up $200K jobs as data scientists in the financial industry, to instead slog away on $80K on limited tenure because they believe in what they’re doing…. only to be told by RWNJs that they’re just in it for the money.



    • spangled drongo says:

      And jimb, also spare a thought for those poor scientists who are paid to live in paradise on great salaries at the public’s expense to tell us if we have a problem with the GBR.

      Naturally they are not going to say, “sack us, there’s no problem” [even though plenty of good scientists say exactly that].

      Instead, they have told and sold their groupthink so well that the RWNJs are giving them an extra $500 million to play with.

      Ah! the power of groupthink!

    • Don Aitkin says:

      Jimbo, I hope that indeed the poor climate scientists don’t ‘believe in what they are doing’. Belief is about religion, not science. They no doubt think that they are doing what the prof, or the director, wants them to do, and they are moving upward through writing papers. In so doing they are likely to see that their general enterprise is a good thing, and will speak for it if that is necessary. In doing so they are defending the enterprise. There’s nothing conspiratorial about it, nor is it reprehensible in itself. It is what human beings do. You can generalise such a small example and see that what UNEP, the IPCC, the learned academies and the leading climate scientist/activists do is simply defending and expanding (if they can) their general enterprise. That there are competing views of the data doesn’t concern them particularly, and they will ignore them if that seems sensible.

      It doesn’t apply simply to climate science. It is human nature on parade. In time things will change. They always do.

      • JimboR says:

        I suspect it comes down to whether you think they’re in it for the public good, or for their own personal good. As the one interviewed in that link above said, he’d have no trouble swapping his 80K climate scientist salary (with limited tenure) for a permanent 200K financial analyst salary, so at least at first blush, personal good doesn’t seem to be his top priority.

        • Don Aitkin says:

          Why do you make a binary distinction? There is an almost infinite variety of motives in the universe of science, and anyone who has worked there can see many of them on parade every day. It doesn’t mean that their results are always bad. But it does mean that you need to be sceptical, always.

  • JimboR says:

    DA: “Jimbo, I hope that indeed the poor climate scientists don’t ‘believe in what they are doing’. ”

    Perhaps I could have been more precise……

    Jimbo-meant-to-say: “instead slog away on $80K on limited tenure because they believe by finding the truth they will help society, either by showing there’s no problem here and we can burn as much coal as we like, or showing that there is a problem and we need to change our ways”.

    DA: “They no doubt think that they are doing what the prof, or the director, wants them to do….. In so doing they are likely to see that their general enterprise is a good thing…. it is what human beings do.”

    That seems to be an extremely simplistic model of human behaviour, and it completely ignores job satisfaction. I know a lot of scientists and engineers and not one of them has ever hung around once they’ve decided their employer is headed down the wrong track (and they’ve been unable to change that track from within the organisation). These are highly skilled people in huge demand across a surprisingly wide range of industries. Why would they keep turning up to work each day at the CSIRO if their research kept getting quashed because it pointed to something contrary to their masters’ “policy” position? I guarantee they would quit the next day.

    DA: “But it does mean that you need to be sceptical, always.”

    If only you would apply that same scepticism to the stuff you find on the web that you think supports your position.

    • Don Aitkin says:

      Jimbo, I am sceptical about anything that takes my interest, including what I read on the Internet, and I am sceptical about a lot of the sceptical claims too. I’ve made that clear in the past.

      You, who have a binary position in which scientists are either after their own good or striving for the public interest, accuse me of having a simplistic model of human behaviour! Good grief. I repeat what I said above: ‘There is an almost infinite variety of motives in the universe of science, and anyone who has worked there can see many of them on parade every day. It doesn’t mean that their results are always bad.’

  • Bryan Roberts says:

    The guy in question is not a dedicated ‘climate scientist’, he is a statistician fiddling with numbers, and the reason he is not making millions on Wall Street is that he is not very bright.

    • JimboR says:

      You should check out some of the prize money being handed out in the kaggle competitions Bryan. Companies (and governments) are paying big bucks to data fiddlers. Even the US Homeland Security are crowd-sourcing their passenger screening algorithms and forked out $1.5 million in prizes:

      First Place: $500,000
      Second Place: $300,000
      Third Place: $200,000
      Fourth Place: $100,000
      Fifth Place: $100,000
      Sixth Place: $100,000
      Seventh Place: $100,000
      Eighth Place: $100,000


  • JimboR says:

    Don, what do you think a bright young scientist working in the climate section of CSIRO would do if he roughly shared your position on climate change? I’m genuinely interested in what you think he’d do, or what you’d do if you were a bright young scientist working there holding your current views.

    • Don Aitkin says:

      Jimbo, oddly enough I have the germ of a novel on just that theme. It all depends on circumstance. For example, a single person with good contacts has a wider range of options than a married one with a wife and small child and a newly purchased house. For the most part you do what is asked of you until you can’t stand it much longer — you feel that your sense of personal worth is being challenged. Then you leave. Or you follow Socrates’s view: you accept the law (i.e. the current doctrine); if you don’t like it you try to change it; and if you can’t do that you take your possessions and family and leave.

    • JimboR says:

      Is the novel set in the 80s? It all sounds a bit old school. I can think of a dozen companies that would hire him to work from home in his boxer shorts while helping look after the kids. And with the doubling of his CSIRO salary, he’ll pay that mortgage off faster too.

      • Don Aitkin says:

        You are perhaps a literary critic as well? Have no fear, that fiction idea is a long way down the list.

      • JimboR says:

        Actually, I’m quite a fan of fiction set in the 80s, so I wouldn’t consider it a criticism. The point is that this meme, that CSIRO is flush with bright young scientists who roughly share your view on climate change but manage to set that aside and churn out party-line research because their financial situation demands it, is a myth. It belongs in your fiction section.

        • Don Aitkin says:

          Not my meme, Jimbo, whatever you think a meme is. I know a couple of youngish scientists in CSIRO, but know many more older ones. How many do you know, and how many of them work in the climate science section?

        • JimboR says:

          None personally, but I do know someone who’s collaborated closely with a handful of them (all from climate modelling) at the NCI. She assures me they’re fiercely independent, professional, highly skilled researchers who would not tolerate any pressure to fake results, particularly from their own management chain, whom they apparently don’t have a lot of respect for.

          I ran the meme by her, and she found it immensely entertaining. When the laughter eventually subsided she asked “RWNJs?”. She posits that one’s susceptibility to falling for that meme is probably a pretty good proxy for an IQ test.

          “Not my meme”

          Indeed. According to that Guardian article above it can be traced back to some cashed up neoliberals from the Mont Pelerin Society. But I’m glad to hear you didn’t fall for it, you pass the proxy IQ test. We’ll put comments like these down to some harmless dog-whistling to those who don’t pass:

          DA: “a notion introduced by climate scientists to get their models to run properly (ie. show a lot of warming)”
          DA: “They no doubt think that they are doing what the prof, or the director, wants them to do”

          And replying to a different thread:

          DA: “and I am sceptical about a lot of the sceptical claims too”

          Judging by some of the junk science you attach to your essays, I can only say I’d hate to see the stuff you reject, so thank-you for saving us from that.

    • spangled drongo says:

      One thing we do know, jimb, is that if he shared your position he wouldn’t be a very “bright young scientist”.

      Even though plenty of “scientists” share your groupthink:


  • JimboR says:

    “You, who have a binary position in which scientists are either after their own good or the public interest”

    I have no such binary position, you need to re-read what I wrote. I do suspect the RWNJ “they’re just in it for the money” crowd have a very binary position on that. I’m glad to hear you’re not one of them… at least this week.

    • spangled drongo says:

      You’d have to be a bit obtuse, jimb, as well as in denial of the facts, not to see how the groupthinker scientists insist that their colleagues all think the same way on “climate change”.

      How many instances and examples would you like?

      Did you ever read the climategate emails, BTW?

      Or was that all too embarrassing?

      And I doubt if even the groupthinkers could reproduce John Cook’s ol’ 97 special.

      Without their own specially selected reviewers, that is.

    • Don Aitkin says:

      Jimbo, you asked me to re-read what you wrote. This is it: ‘I suspect it comes down to whether you think they’re in it for the public good, or for their own personal good.’ That seems binary to me.

      • Bazza says:

        I was drawn to this binary opposition comment because of Don’s most recent essay on optimism about the future.
        Is one either an optimist or a pessimist?
        Isn’t that binary?

  • spangled drongo says:

    Money is only the lube, jimb:

    Nietzsche observed that “madness is the exception in individuals but the rule in groups”; groupthink occurs when “subtle constraints … prevent a [group] member from fully exercising his critical powers and from openly expressing doubts when most others in the group appear to have reached a consensus.”

  • Don Aitkin says:

    A reader from far away has sent this Comment, which I post here (slightly edited):

    Your essay confirms my perception here. Outside climate science, which is considered settled by students and teachers alike, PG research is justified as a training process.

    Eliminating outliers or inconvenient observations is common where data come from public sources rather than from an experiment designed for a specific purpose. There seems to be a fashion for aggregating unfruitful analyses as Big Data, in the hope that something might be squeezed out of it.

    I can’t see things getting better soon. Apparently much Big Pharma research is done in China and the results destroyed after use.
