I’ve reported before on the fuss about ‘adjustments’ to Australian temperature data, which has caused the Minister to ask for an audit. It’s not only here that the fuss is occurring. There has been similar and continual agitation in the USA over the ten years I have been reading about global warming, and now in the UK the Global Warming Policy Foundation (GWPF) has enlisted a group of scientists with qualifications in the area to look at the whole question. The GWPF is a London-based sceptic think-tank whose best-known member is Lord Lawson, formerly a Chancellor of the Exchequer to Margaret Thatcher. On the whole the GWPF is sensible about what it reports and what it does. Whether or not the world is warming is a question whose answer relies on temperature data and their accuracy.
The GWPF mission has caused some indignation on the part of the BEST (Berkeley Earth Surface Temperature Project) people, particularly Professor Richard Muller, who created the project because he too was worried about the reliability of the temperature data. He now believes that his team has done the job, and that the BEST data are reliable. He thinks that adjusting the data is necessary, and that not to do so would be ‘poor science’. His view is that the earth has warmed about 1.5 degrees C in the last 250 years, with most of the net warming having occurred in the last sixty years or so.
I don’t dispute this general claim. It says nothing about the causes of the warming, and is consistent with lots of little bits of observational evidence. I was a bit worried about this comment:
There are cases in which the temperature suddenly jumps from Celsius to Fahrenheit – and ignoring that would be unjustifiable. Another example: when measurements on ships were suddenly shifted from buckets to boat inlets, there was a sudden and significant jump in the value of the temperatures being recorded.
Adjusting temperature from C to F does nothing to the intrinsic value of the reading, and is not an ‘adjustment’ as the term is used at the moment, while there was no ‘sudden’ shift from buckets to ship inlets in measuring ocean temperature — that happened over a considerable period. What puzzles those who have looked closely at the Australian data, and at some of the American data, too, are the shifts that occurred for no obvious reason at all — no apparent move from the town to the aerodrome, no death on the part of the operator, nothing of any consequence. Why change the data at all? OK, so there is a funny reading. My preference would be not to change it, unless we know why it is funny. Best of all: leave it out, and call it missing data if it is way off the mark. To give it another value requires great justification.
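That preference can be put in code. A minimal sketch, assuming a simple series of daily readings and a hypothetical tolerance of my own choosing (this is not anyone’s official quality-control procedure): flag a reading far outside the station’s normal range as missing, rather than replacing it with an estimated value.

```python
import numpy as np

def flag_suspect_readings(temps, n_sigma=5.0):
    """Return a copy of `temps` with extreme outliers set to NaN (missing),
    rather than adjusted to some estimated value."""
    temps = np.asarray(temps, dtype=float)
    mean = np.nanmean(temps)
    std = np.nanstd(temps)
    cleaned = temps.copy()
    # A reading more than n_sigma standard deviations from the mean
    # becomes missing data; everything else is left untouched.
    cleaned[np.abs(temps - mean) > n_sigma * std] = np.nan
    return cleaned

# Example: a plausible run of daily maxima with one wildly wrong reading
raw = [21.3, 22.1, 20.8, 23.0, 98.6, 22.4]   # 98.6 looks like a Fahrenheit entry
clean = flag_suspect_readings(raw, n_sigma=2.0)
```

The point of the sketch is only that the funny value drops out as missing; nothing is invented to put in its place.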
Muller says, with some pride, that his whole system is untouched by human hands.
What Berkeley Earth did was to avoid totally any human intervention and to make all adjustment automatic. Our adjustment program has been put online for all to see. In addition, we put the data online in its original as well as adjusted form, so everyone can see what the automated programs did.
On the whole, and because I simply don’t have the time to go through the millions of data-points and see for myself, I accept the BEST system, though I’m uneasy about going back much before 1900, simply because there are so few stations then, especially in the Southern Hemisphere. Nonetheless, I look forward to the findings of the Australian team and those of the GWPF.
And at much the same time, someone else, Dr Roy Spencer, has done a lot of homework in ‘adjusting’ the methodology that produces the data from the UAH satellites, and you can read about that endeavour here. I have preferred the satellite data over the land and ocean surface data because the satellite data are truly global, and the others are not, not at all. What is measured here is the temperature of the lower troposphere, the atmosphere from the surface up to the stratosphere. But the satellite data come with their own problems. Dr Spencer anticipates a major question in his discussion:
Why do the satellite data have to be adjusted at all? If we had satellite instruments that (1) had rock-stable calibration, (2) lasted for many decades without any channel failures, and (3) were carried on satellites whose orbits did not change over time, then the satellite data could be processed without adjustment. But none of these things are true.
And he explains why, thoroughly. To start with, there have been 15 satellites on the job; their orbits are different, and they drift, so you have to keep on correcting. It’s a most interesting story if you like reading about methodologies of various kinds (I do). And there is an outcome. After several years of work, and re-doing the computer programs from scratch to get rid of all the band-aid patches that had accumulated over 25 years, Dr Spencer has come up with a long-term trend of warming that is appreciably lower than the one he had been publishing until now. What was a long-term (1979 to 2015) warming trend of +0.140 degrees C per decade has become one of +0.114 degrees C per decade, and that makes the UAH trend very similar to that of RSS, the other satellite temperature-measuring system. Both show a cooling trend since 1998, too. You can see that in the following pair of graphs, from Bob Tisdale at WUWT. The similarity is extraordinary, and someone sceptical of satellites will say so shortly, I am sure.
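For readers who wonder what a figure like ‘+0.114 degrees C per decade’ actually is: it is simply the slope of a least-squares straight line fitted through the monthly anomaly series. A minimal sketch, using made-up synthetic data rather than the actual UAH series:

```python
import numpy as np

def decadal_trend(monthly_anomalies):
    """Least-squares linear trend of a monthly anomaly series,
    returned in degrees C per decade."""
    y = np.asarray(monthly_anomalies, dtype=float)
    t = np.arange(y.size) / 12.0             # time in years
    slope_per_year = np.polyfit(t, y, 1)[0]  # degrees C per year
    return slope_per_year * 10.0             # degrees C per decade

# Illustrative only: a synthetic 37-year series (1979-2015) warming at
# 0.114 C/decade, plus random month-to-month noise.
rng = np.random.default_rng(0)
months = np.arange(37 * 12)
synthetic = 0.0114 * (months / 12.0) + rng.normal(0, 0.1, months.size)
trend = decadal_trend(synthetic)             # recovers roughly +0.114
```

The real UAH and RSS processing is of course vastly more involved; the sketch only shows what the headline number means once the adjusted series is in hand.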
Another outcome of the new UAH methodology is a strong suggestion that warming has been substantially greater on land than in the ocean. The map below shows this in a most striking way.
Dr Spencer regards his new system as a kind of final draft, and is open to suggestions for improvement. I don’t have any to offer. What we have now are 35 years of temperature data that cover the globe and show a slightly cooler trend than the land and ocean datasets. They will do me as evidence, at least for the moment. If real qualifications emerge in the coming months, I’ll report on them here. The search for better temperature data will continue indefinitely.
6 Comments
Very interesting charts from Roy Spencer and Bob Tisdale. Concerning the UAH map above, I’m not surprised we see more warming on land than over the oceans – the oceans can absorb the heat far more readily by mixing it down through their layers.
Judith Curry seems happy at present with the BEST data, but she is a person with admirable intellectual integrity and personal courage, and if the GWPF-initiated analysis produces an improved result, I’m sure she will examine that and endorse it if she agrees.
An obvious question: if it is clearly demonstrated that the temperature increases since the early 1900s have been significantly less than the official bodies have consistently presented to us, where does that leave global warming itself? Even less associated with rising carbon dioxide levels, and less “alarming”. But fear not, those fighting to save the planet are prepared even to spill their hot lattes in their laps, if necessary, to defend their inestimable truth.
The AGW debate is lop-sided; evidence on one hand, emotion on the other. A bit like apples and oranges. Both full of juice.
That is an interesting map from Dr Spencer showing the global temperature change over the past 36 years – that would be 18 years of rise with 18 years of pause. What I find remarkable is that, for all the El Niños we have had during this period, they have not left any signature over the oceans; most of the warming appears over land. As Peter K says, the land surfaces heat more quickly than the ocean. This can also be seen in the annual uptick in global temps during the northern hemisphere summer, ironically when the earth receives less solar radiation because it is further from the sun.
But where is this heat coming from? Is it the enhanced greenhouse effect, a change in solar output, changes in clouds and aerosols, or a global lack of wind, which has also occurred during this period? See
Off topic, but it is funny
Delicious! Thanks very much.
I thought the skeptic in you might appreciate 🙂
Hi Don, enjoy reading your blog and your wry skepticism.
I’ve long been suspicious about the tide gauges around the shores of the inner west of Sydney. They purport to show the past and projected sea level rise marked at 1920, 2006 & 2050.
Last year’s highest king tide was predicted for the night of July 13, 2014 at 2.06m. I visited the gauges that night to record and take photos. The water level was 9cm above the 2006 marker and 15cm below the 2050 marker.
I wasn’t sure how they’d been calibrated so I started searching through local council records and found this from Leichhardt council:
“The heights proposed are based on the highest king tide level for 2006 which was 2.01m on 31/1/06. 1920 levels are based on subtracting an average 1.2mm rise in sea level for each year (as determined by CSIRO) from the 2006 level i.e. 1.91m.”
So the 1920 marker is based on a backward projection and not on observed records.
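The council’s arithmetic is easy to reproduce from the figures in the quote: subtract 1.2 mm per year over the 86 years from 1920 to 2006. A quick check (the numbers are the council’s, not mine):

```python
# Back-projecting the 1920 king tide marker from the 2006 figure,
# as the Leichhardt council quote describes: 1.2 mm/yr over 86 years.
level_2006 = 2.01          # metres (council's 2006 figure)
rate = 0.0012              # metres per year (the CSIRO average, per the quote)
years = 2006 - 1920        # 86 years
level_1920 = level_2006 - rate * years
# level_1920 works out to 1.9068 m, which the council rounds to 1.91 m.
```

So the 1.91m marker is a calculation, not a measurement.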
Consulting the tide records showed that the highest king tide for 2006 was 2.078m in January, and there were other tides of 2.022m and 2.058m in February and September 2006. Furthermore, the highest king tide of 1920 was 2.050m in June of that year. Not much difference at all when one looks at the actual data, and a completely incorrect reading of the 2006 records.
I don’t wish to take up too much space in the comments, but if you are interested there are a couple of postscripts to this story of citizen science. I find it annoying that councils take it upon themselves to “do something” about global warming at a cost of $4k per ill-defined gauge.