I’ve written about the measurement of temperature quite a few times, and in my first published foray into this field, six years ago, I said that the measurement of temperature was fraught with every conceivable kind of error. The passage of time has only confirmed me in that view. The apparent precision of these measurements, averaged as they are to three decimal places, seems to me quite bogus. The global temperature anomaly is a statistical construct, and means nothing in practice to anybody, even though climate scientists, and those of us interested in the global warming scare, pay great attention to its movement.
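To make concrete what that statistical construct involves, here is a deliberately simplified sketch in Python, with invented station names and figures: each station’s reading is compared with its own baseline average, and the departures are then averaged and quoted to three decimal places. Real compilations involve gridding, area-weighting and much else; this only shows the kind of arithmetic behind the headline number.

```python
# Illustrative sketch only: a toy "global anomaly" computed as the average of
# station anomalies relative to a baseline mean. Station names and figures are
# invented; real agency methods involve gridding, area-weighting and
# homogenisation. The point is that the result is a statistical construct,
# not a temperature anyone experiences.

baseline = {"StationA": 14.2, "StationB": 21.7, "StationC": 2.9}   # baseline-period means (hypothetical)
this_year = {"StationA": 14.6, "StationB": 21.9, "StationC": 3.4}  # this year's means (hypothetical)

anomalies = [this_year[s] - baseline[s] for s in baseline]
global_anomaly = sum(anomalies) / len(anomalies)

print(f"Global anomaly: {global_anomaly:+.3f} °C")  # quoted to three decimals, here +0.367
```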
Having said all that, it seems to me nonetheless very likely that the earth has warmed a little over the last century. The satellite measurements since 1979 I take more seriously, since they are very close to being global, and they show gradual warming in fits and starts over the last 35 years, with virtual stasis over the last decade (that applies to the RSS dataset more than to that of UAH).
Now the compilation and averaging of temperature data are done largely by government agencies of many kinds, and those responsible for that work often have problems when, for one reason or another, there are no data from one or more measuring points. A man may be sick and unable to go to the thermometer; the instrument may be faulty; the power supply may be down; and so on. What the agencies do then varies from country to country, but the choices are straightforward: you can simply report no data, or you can estimate, by looking at the measurements from surrounding instruments and filling in. Perhaps the instrument has been moved from one location to another, for good reason. Again you have choices: make some kind of adjustment, or start a new trend line.
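To illustrate the ‘filling in’ option, here is a minimal sketch, with invented stations and figures, and not any bureau’s actual algorithm: the missing month is estimated from neighbouring stations, each corrected by its usual offset from the target site.

```python
# A minimal sketch of infilling a missing reading from neighbouring stations.
# The data and the method are illustrative only, not any agency's procedure.

def infill(target_history, neighbour_histories, month):
    """Estimate the target station's value for `month` from neighbours that did report."""
    estimates = []
    for hist in neighbour_histories:
        if month not in hist:
            continue
        # Typical offset between the two sites, taken from months where both reported.
        common = [m for m in target_history if m in hist]
        if not common:
            continue
        offset = sum(target_history[m] - hist[m] for m in common) / len(common)
        estimates.append(hist[month] + offset)
    return sum(estimates) / len(estimates) if estimates else None  # None = report no data

# Hypothetical records: the target station missed "2014-06".
target = {"2014-04": 17.1, "2014-05": 14.0}
neighbours = [
    {"2014-04": 16.8, "2014-05": 13.6, "2014-06": 11.2},
    {"2014-04": 18.0, "2014-05": 14.9, "2014-06": 12.3},
]
print(infill(target, neighbours, "2014-06"))  # about 11.5
```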
It is likely that over time each agency develops a technique that is uniformly applied in specific cases, and since weather is a global thing, agencies like to do things the same way; there is, after all, a World Meteorological Organisation, which has that as one of its goals. Moreover, there is some pressure to keep trend lines going, and not break them, which means that in most cases agencies will estimate and make adjustments, rather than create even more trend lines and leave empty cells.
From the very beginning of my interest in this field I was aware that many sceptics felt that, whatever the ostensible reason, adjustments always seemed to make the past cooler and the present warmer. What were the reasons? Official agencies, like our Bureau of Meteorology, usually declined to explain. From time to time someone would write in and point out what had happened to the temperature readings for his or her town over time, and the trend was always the same — the past had become cooler and the present warmer, supporting the global warming scare.
During the last Labor governments it seemed to me that spokespeople for our Bureau of Meteorology too often spoke as advocates for the imperative need to ‘combat climate change’, rather than as public servants speaking to evidence. It is always difficult to draw the right line here, but conventionally it is the Minister who does the public speaking, not a senior civil servant. And the BoM papers about the state of the climate seemed to me also to be right at the edge of global warming hysteria, and I wrote about one of them not so long ago. Jo Nova wondered at the time whether or not the Bureau had become nothing more than a PR section for the Gillard Government; I found it hard to disagree with her.
All that is background. In the last week or two there have been loud rumblings both here and in the USA about the issue of adjustments, and they have been highly publicised. Jennifer Marohasy has combined with Liberal MP Dr Dennis Jensen (the only scientist in Parliament) and two others to produce a serious and scholarly paper whose import is that over the past century New South Wales has cooled rather than warmed, notwithstanding the Bureau’s advice to the contrary. The reason, which the authors go into in considerable detail, is that the original temperature data have been ‘adjusted’ and ‘homogenised’ to a ridiculous degree. They give examples.
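To see why such adjustments matter, here is a toy illustration, not the paper’s analysis or the Bureau’s method, using an invented series: cooling the early part of a record by a fixed amount is enough to turn a slight cooling trend into an apparent warming one.

```python
# A toy illustration of how a single step adjustment to the early part of a
# record can flip the sign of a century trend. The series and the -0.6 °C
# adjustment are invented for the example; this is not the paper's analysis.

def trend_per_century(years, temps):
    """Ordinary least-squares slope, scaled to degrees per 100 years."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    slope = (sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
             / sum((y - mean_y) ** 2 for y in years))
    return slope * 100

years = list(range(1910, 2011, 10))
raw = [17.8, 17.7, 17.9, 17.6, 17.7, 17.5, 17.6, 17.4, 17.5, 17.6, 17.5]    # slight cooling
adjusted = [t - 0.6 if y < 1960 else t for y, t in zip(years, raw)]          # cool the past

print(f"raw trend:      {trend_per_century(years, raw):+.2f} °C/century")       # about -0.33
print(f"adjusted trend: {trend_per_century(years, adjusted):+.2f} °C/century")  # about +0.49
```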
A similar claim has been made in the USA, and you can read about it on Watts Up With That here, and on Climate etc here. The response from the agency concerned there was breathtaking in its loftiness. Asked ‘Are the examples in Texas and Kansas prompting a deeper look at how the algorithms change the raw data?’ the agency (NCDC) replied, ‘No – our algorithm is working as designed…’
Now you would expect all those concerned in any agency to defend what they have done, at least in the beginning. But so much of what governments do about ‘climate change’ is based on these supposedly accurate measurements. Dr Jensen has raised questions in Parliament, and so far there have been no answers. These two major incidents suggest that sooner or later the Auditors-General or their equivalents will have to be called in.
From my own experience I have a lot of faith in public servants. But I also know that their bias is conservative, in that they like to stick to what has been done in the past. In the measurement of temperature, I think it’s time for a fresh look at what has been done, and why it has been done.