I’ve reported before on the fuss about ‘adjustments’ to Australian temperature data, which has caused the Minister to ask for an audit. It’s not only here that the fuss is occurring. There has been similar and continual agitation in the USA over the ten years I have been reading about global warming, and now in the UK the Global Warming Policy Foundation (GWPF) has enlisted a group of scientists with qualifications in the area to look at the whole question. The GWPF is a London-based sceptic think-tank whose best-known member is Lord Lawson, formerly a Chancellor of the Exchequer to Margaret Thatcher. On the whole the GWPF is sensible about what it reports and what it does. Whether or not the world is warming is a question whose answer relies on temperature data and their accuracy.
The GWPF mission has caused some indignation on the part of the BEST (Berkeley Earth Surface Temperature Project) people, particularly Professor Richard Muller, who created the project because he too was worried about the reliability of the temperature data. He now believes that his team has done the job, and that the BEST data are reliable. He thinks that adjusting the data is necessary, and that not to do so would be ‘poor science’. His view is that the earth has warmed about 1.5 degrees C in the last 250 years, with most of the net warming having occurred in the last sixty years or so.
I don’t dispute this general claim. It says nothing about the causes of the warming, and is consistent with lots of little bits of observational evidence. I was a bit worried about this comment:
There are cases in which the temperature suddenly jumps from Celsius to Fahrenheit – and ignoring that would be unjustifiable. Another example: when measurements on ships were suddenly shifted from buckets to boat inlets, there was a sudden and significant jump in the value of the temperatures being recorded.
Converting a temperature from C to F does nothing to the intrinsic value of the reading, and is not an ‘adjustment’ as the term is used at the moment, while there was no ‘sudden’ shift from buckets to ship inlets in measuring ocean temperature — that happened over a considerable period. What puzzles those who have looked closely at the Australian data, and at some of the American data, too, are the shifts that occurred for no obvious reason at all — no apparent move from the town to the aerodrome, no death on the part of the operator, nothing of any consequence. Why change the data at all? OK, so there is a funny reading. My preference would be not to change it, unless we know why it is funny. Best of all: leave it out, and call it missing data if it is way off the mark. To give it another value requires great justification.
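The point about unit conversion can be made concrete: the C-to-F formula is exact and reversible, so converting a reading loses no information at all, which is quite unlike a substantive ‘adjustment’ to its value. A minimal sketch, using made-up station readings:

```python
def c_to_f(c):
    """Convert Celsius to Fahrenheit -- an exact linear transformation."""
    return c * 9 / 5 + 32

def f_to_c(f):
    """Invert the conversion; the original reading is fully recoverable."""
    return (f - 32) * 5 / 9

# Hypothetical station readings in Celsius.
readings_c = [14.2, 15.1, 13.8]
converted = [c_to_f(t) for t in readings_c]
recovered = [f_to_c(t) for t in converted]

# Round-tripping through Fahrenheit changes nothing (beyond float rounding),
# which is why a unit conversion is not an 'adjustment' in the contested sense.
assert all(abs(a - b) < 1e-9 for a, b in zip(readings_c, recovered))
```

A record that ‘jumps’ because someone started logging in Fahrenheit can therefore be repaired mechanically and without judgement, whereas an unexplained shift cannot.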
Muller says, with some pride, that his whole system is untouched by human hands.
What Berkeley Earth did was to avoid totally any human intervention and to make all adjustment automatic. Our adjustment program has been put online for all to see. In addition, we put the data online in its original as well as adjusted form, so everyone can see what the automated programs did.
On the whole, and because I simply don’t have the time to go through the millions of data-points and see for myself, I accept the BEST system, though I’m uneasy about going back much before 1900, simply because there are so few stations then, especially in the Southern Hemisphere. Nonetheless, I look forward to the findings of the Australian team and those of the GWPF.
And at much the same time, someone else, Dr Roy Spencer, has done a lot of homework in ‘adjusting’ the methodology that produces the data from the UAH satellites, and you can read about that endeavour here. I have preferred the satellite data over the land and ocean surface data because the satellite data are truly global, and the others are not, not at all. What is measured here is the temperature of the lower troposphere, the atmosphere from the surface up to the stratosphere. But the satellite data come with their own problems. Dr Spencer anticipates a major question in his discussion:
Why do the satellite data have to be adjusted at all? If we had satellite instruments that (1) had rock-stable calibration, (2) lasted for many decades without any channel failures, and (3) were carried on satellites whose orbits did not change over time, then the satellite data could be processed without adjustment. But none of these things are true.
And he explains why, thoroughly. To start with, there have been 15 satellites on the job, and their orbits are different, and they drift, and you have to keep on correcting. It’s a most interesting story if you like reading about methodologies of various kinds (I do). And there is an outcome. After several years of work, and re-doing the computer programs from scratch to get rid of all the band-aid patches that have been added in 25 years, Dr Spencer has come up with a long-term trend of warming that is lower, appreciably lower, than the one he has been publishing until now. What was a long-term (1979 to 2015) warming trend of +0.140 degrees C per decade has become one of +0.114 degrees C, and that makes the UAH trend very similar to that of RSS, the other satellite temperature measuring system. Both show a cooling trend since 1998, too. You can see that in the following pair of graphs, from Bob Tisdale at WUWT. The similarity is extraordinary, and someone sceptical of satellites is about to say so shortly, I am sure.
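The per-decade figures quoted here are ordinary least-squares slopes fitted to a monthly anomaly series. As a minimal sketch of how such a trend is computed (using synthetic data with a built-in trend, standing in for the real UAH series, which I do not reproduce here):

```python
import numpy as np

# Synthetic monthly anomalies, 1979-2015, with a built-in trend of
# +0.114 C/decade plus noise -- a stand-in for the real UAH series.
rng = np.random.default_rng(0)
months = np.arange(12 * 37)
years = 1979 + months / 12.0
true_trend_per_year = 0.114 / 10.0
anomalies = true_trend_per_year * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

# Fit a straight line; the slope is the warming trend in degrees C per year.
slope_per_year = np.polyfit(years, anomalies, 1)[0]
trend_per_decade = slope_per_year * 10
print(f"fitted trend: {trend_per_decade:+.3f} C/decade")
```

With 37 years of monthly data the fitted slope recovers the built-in trend closely, which is why a change from +0.140 to +0.114 degrees C per decade, though it looks small, is a genuine revision of the series rather than noise in the fitting.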
Another outcome of the new UAH methodology is a strong suggestion that warming has been substantially greater on land than in the ocean. The map below shows this in a most striking way.
Dr Spencer regards his new system as a kind of final draft, and is open to suggestions for improvement. I don’t have any to offer. What we have now are 35 years of temperature data that cover the globe and show a slightly cooler trend than the land and ocean datasets. They will do me as evidence, at least for the moment. If real qualifications emerge in the coming months, I’ll report on them here. The search for better temperature data will continue indefinitely.