I wrote a few weeks ago about the nagging problem of the measurement of temperature. The basic weather data are collected by national meteorological bodies, both as a public service for citizens and for ships, aeroplanes and other forms of transport. There is a World Meteorological Organisation, which has established standards that nations are urged to follow. The developed countries do so, while the developing countries do their best. It is most important that the data are accurate, because the utility of forecasts depends on their reliability and validity.
Over the last few years those with a sceptical mind have begun to be concerned about the reliability and validity of the data underlying the forecasts and reports issued by our Bureau of Meteorology, the British Met Office and their counterparts in the USA. Indeed, like Jo Nova, I wondered in print last year whether or not the BoM was best described as a PR outfit for global warming, pushing the view of the then Gillard Government.
Dr Jennifer Marohasy, whose website is listed on my blogroll, has been a persistent critic of the Bureau’s methodology, and has put forward actual examples of places, like Bourke in NSW and Rutherglen in Victoria, where the actual data have been ‘homogenised’, or adjusted, so that what would have been a cooling trend over time has become a significant warming trend. Others, like Tom Quirk, Des Moore and William Kininmonth, have raised their own comparable questions, mostly in Quadrant. As I’ve explained in the past, when new instruments are brought in, or data-collecting stations are moved because the site is no longer appropriate (there are rules about the desirable local environment of thermometers, for example), the organisation has to make a choice. Do you end one trend and start anew? Do you make an adjustment for what you think would have been the case had the new station always been where it has now been placed? And if you do, on what basis do you make the adjustment?
Well, the Bureau steadfastly declined to deal with these inconvenient questions, and it was only recently, after one of Dr Marohasy’s papers had an MP (Dr Dennis Jensen, a scientist himself) as a co-investigator, and her investigations began to be noticed internationally, that it offered any response. You can read it in The Conversation, a government-funded website to allow academics to voice their thoughts and opinions. The authors are both strong supporters of the IPCC, and one of them, Dr Andy Pitman, is most dismissive of sceptics: ‘the sceptics are so well funded, so well organized, have nothing else to do, they kind of don’t have day jobs, they can put all of their efforts into misinforming and miscommunicating climate science to the public, whereas the climate scientists have day jobs and this isn’t one of them’. He said this in an ABC interview. I don’t know where he gets this guff from, but it is mind-bendingly inaccurate as far as I am concerned, and indeed as far as all the sceptics I know are concerned.
Well, you ask, what was the response? It was bland and unhelpful, and its tone comes across as superior and condescending. It opens: ‘Over the past week or so, the Bureau of Meteorology has stood accused of fudging its temperature data records to emphasise warming, in a series of articles in The Australian. The accusation hinges on the method that the Bureau uses to remove non-climate-related changes in its weather station data, referred to as “data homogenisation”. If true, this would be very serious because these data sets underpin major climate research projects, including deducing how much Australia is warming. But it’s not true.’
Great, you think. Now they’ll show why it isn’t true. Alas, no. There are many methods, they say, to deal with ‘inhomogeneities’ (an unlovely word), and the Bureau uses a technique ‘that can adjust the data to make sure it is consistent through time’. What is it? Well, we never learn, but are directed to a 2010 article by a senior Bureau official, Blair Trewin, which is a well written summary of the problems that face organisations like his when they try to deal with missing and new data. But the paper doesn’t answer the question, and that was not its intention. By this time you’d be expecting the authors at least to pick up the two named sites, Bourke and Rutherglen, and show what was done, and why. The Bureau had a go, claiming that one of the adjustments must have been caused by a station move. Then a retired Bureau official who worked in the area said that was news to him.
The authors of The Conversation piece didn’t try that one on. They ramble on, claiming that if the Bureau hadn’t homogenised the data the outcome would have been an apparently and inaccurately hotter Australia, and finish with the lofty slap-down to the effect that if you’re unhappy with what the Bureau does, your task is to show how the Bureau is wrong, in a paper to be submitted to a peer-reviewed journal. Then they’ll take you seriously. Really! The contempt for those who raise questions is apparent.
Once again, this is dreadful, empty stuff, though this time it comes from the Bureau’s supporters, rather than from the Bureau itself. It cannot be all that difficult to present a few cases where the adjustments have led to cooling over time rather than warming or to show in much more detail actually what happens and why that gives a preferred outcome for reasons that would be clear to anybody.
Until that is done, this dispute will go on indefinitely, to the growing discredit of the Bureau.
Hi Don,
I agree with all you have written above.
The issue is the trust that one can put in the Bureau’s amended temperature set.
The examples cited by Dr Marohasy and others cast doubt on three aspects:
1. The appropriateness/accuracy/correctness of the methods used to adjust the observations.
2. The review mechanism in place for correction of the method, and the disclosure of the method.
3. The configuration management of the data set (the recording of changes and the archiving of previous versions, so that one can remove corrections if it becomes obvious that these introduced errors).
The retreat behind the curtain of “Peer Review” ignores the issue, and is completely unhelpful. Peer Review is not the gold standard – it is merely a sanity check done at the time of publication of a paper, by no means a guarantee of correctness.
The gold standard is that the method proves accurate over time. And the work of the critics has exposed that there is a problem with the method.
Instead of circling the wagons, the appropriate response by the Bureau would be to immediately acknowledge the problem and institute a review with a view to identifying exactly what the causes are so they can be addressed.
I would like to emphasise that the issue is not that there are apparent errors with the Bureau’s adjustment method. Such errors are to be expected as part of normal business. No, the issue is that there appears to be no Bureau response to address and correct those errors.
For that reason, there can be no trust in the Bureau’s data set.
Yes Colin – excellent point. Peer review is mostly a post-WWII phenomenon designed to tackle the meteoric rise in scientific publication. It may be the best we’ve got for limiting the number of howlers that get published, but we all know it has many problems (read Retraction Watch for examples).
Gold standards would be replication or successful predictions, as you say, what works.
The requirement to submit a paper in a peer-reviewed journal before any questions about the Bureau’s methods can be answered, where the paper would need to demonstrate deficiencies in those methods, is a classic example of Doc Daneeka’s Catch-22, of Joseph Heller fame. “No, you can’t have the original data records, nor the rationale and/or algorithms for their homogenisation, until you can tell us (in a peer-reviewed paper, of course), where you think we’ve erred.” Beautiful!
Each time I hear that appeal to authority, “leave it to the experts – it’s very complex – only in a peer-reviewed paper . . . “, I hear also the frantic cry “Quick! Pull up the drawbridge! The Vandals are coming!”
“the sceptics are so well funded, so well organized, have nothing else to do, they kind of don’t have day jobs” . . . Gosh, Andy Pitman, could you sling some of that funding my way? No? Oh well, I guess I’d better carry on with my little task for today, fencing another paddock. Seriously, that’s what I’m doing. No day job indeed?
As I review the lame excuses offered by the warmists about the current 17-year-long period of generally even temperatures (“the heat is going into the deep ocean” – sneaky stuff, that heat), about why the predictions of the climate models have failed so badly (“it’s just natural variability”), about the build-up of ice in the Antarctic (“it’s all just wind, it’s been pushed further south”), the more I am reminded of the quote by George Orwell that a good friend mentioned the other day: “Some ideas are so stupid that only intellectuals believe them”.
Hi Peter, if Orwell was around today he could have fun writing a parable about pigs in white coats.
It seems obvious that if BoM had valid reasons for their adjustments to the temperature record they would simply respond by saying specifically what they did and why. Their waffling, evasive, condescending reply leaves one with little choice but to assume they must have no legitimate answers. An adjustment made because a station “might” have been moved, when in fact it wasn’t, would be a pathetic excuse even for a habitual liar, much less a scientist. The willingness of their defenders to accept the adjusted record on blind faith, and their anger at any questioning of its validity, also make obvious a deep emotional commitment to the existence of the problem rather than any desire to understand the truth of the matter, whatever it may prove to be.
Homogenisation is a well-documented subject and the details are all published in the literature.
Tell me something, I’m curious, are you the same “Walter Starck” that believes crop circles are made by aliens?
Here in Canberra, the weather station was moved from Fairbairn airport to Canberra airport next door on 19 September 2008. On 18 January 2013 it recorded a maximum of 41.7 C. A “record breaking” temperature, according to the local BOM, ignoring the fact that records at the Canberra airport station dated back only a little over four years! Also ignoring the records of the previous weather station, dating back to 1939, which showed its hottest day of 42.2 C on 1 February 1968.
Before those two weather stations, there was one a few kilometres away at Acton, which recorded a temperature of 42.4 C on 11 January 1939 during a nine-day heatwave above 38 degrees.
Both my wife and I experienced a 42 C temperature when we visited Canberra in January 1982 (at the time we were living in Townsville in North Qld), so we both knew the “record breaking” temperature claim was poppycock.
So from my experience, moving weather stations around and ignoring (or adjusting/hiding/deleting) prior records encourages erroneous alarmist climate claims.
The BoM’s homogenised temperature figures leave a sour taste. Its failure to answer the questions leaves an impression that it has something to hide.
Dear Don,
Follow the link in the Conversation piece marked “provided the details of how it is done”, and look at the BoM page you reach. The first item in the sidebar is “Full details on how the Bureau has prepared ACORN-SAT are available from the technical report”, which links to a 92-page technical report describing the details of the construction of the temperature series.
This is the link:
http://www.cawcr.gov.au/publications/technicalreports/CTR_049.pdf
They describe the details of the inhomogeneity detection, which as far as I can tell amounts to looking at the neighbouring stations: when any particular station exhibits a time-series discontinuity which is not also substantially exhibited by its close neighbour stations, an inhomogeneity gets flagged – essentially a statistical test to see if a station is behaving in the same way as its neighbours. Flagging is a separate process to adjustment. There’s then an analysis of 16 different adjustment methods, a comparison of how they work, and a discussion of how they choose which adjustment method – it’s something to do with the number of useful neighbouring stations, I didn’t get into the weeds here. There’s an in-depth discussion of how corrections are applied to particular stations, which I’m sure would give a lot of insight into the Bourke and Rutherglen examples.
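The general idea of that neighbour-based flagging can be sketched in a few lines of Python. To be clear, this is not the Bureau’s actual algorithm (the ACORN-SAT report uses formal statistical tests); the five-year windows, the 1.0 C threshold and the synthetic data are all invented here purely for illustration:

```python
import numpy as np

def flag_inhomogeneity(candidate, neighbours, threshold=1.0):
    """Crude sketch of neighbour-based discontinuity flagging.

    candidate  : 1-D array of annual mean temperatures for the test station
    neighbours : 2-D array, one row per neighbouring station
    threshold  : step size (deg C) treated as suspicious -- an invented value
    """
    # Subtracting the neighbour mean removes the climate signal the
    # stations share, leaving station-specific behaviour in `diff`.
    diff = candidate - neighbours.mean(axis=0)
    flags = []
    for i in range(5, len(diff) - 5):  # need a few years either side
        step = diff[i:i + 5].mean() - diff[i - 5:i].mean()
        if abs(step) > threshold:
            flags.append((i, round(step, 2)))
    return flags

# Synthetic example: a 1.5 C step introduced at year 20 of a 40-year record,
# mimicking something like an undocumented site move.
rng = np.random.default_rng(0)
climate = rng.normal(0, 0.3, 40)                      # shared year-to-year variation
neighbours = climate + rng.normal(0, 0.1, (4, 40))    # 4 nearby stations
candidate = climate + rng.normal(0, 0.1, 40)
candidate[20:] += 1.5                                 # the discontinuity
print(flag_inhomogeneity(candidate, neighbours))      # flags cluster near year 20
```

The key design point is that the shared regional signal cancels in the difference series, so only changes peculiar to the candidate station survive to be flagged.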
To my eye this seems to overcome all the questions and objections you raise in your article above.
Patrick,
I have indeed read that long paper, and much of its methodology turns on how to deal with individual puzzles. The section on how to deal with inhomogeneity is informative, but here much turns on exactly what ‘neighbours’ are being used to cross-reference a station. What are Bourke’s neighbours, for example, or those of Rutherglen? We know enough about individual variations within one city, like Canberra, where I live, to puzzle at the thought that a station a hundred or more kilometres away is a useful check on another site. How does a cooling trend over a century become adjusted to a strong warming trend? The paper alas does not explain how this happens.
And it is also about the ACORN group of stations — about ten per cent of the whole system. No doubt the same techniques are used for the rest, too. But wouldn’t it be so easy to set out the Bourke and Rutherglen examples and show what happened?
I would imagine that those neighbours are obtained by employing the methodology in the middle of page 46. As I read the methodology, the neighbours will vary year by year as stations have or do not have data. For instance, for Bourke in 1915, half an hour of stuffing about in python and Excel gives me the list below. I also looked at the Pearson correlations between annual temperature deviations, and I’ve placed these below as well. You’ll note the correlations are all greater than 0.95, which would lead one to suspect that there is some information about Bourke available from its neighbour stations as at 1915. I compared correlations with a couple of WA stations and saw correlations nearer 0.4.
I can’t comment on how easy or hard it would be to set out the Bourke or Rutherglen stations as I’ve not tried it.
Correlation between neighbours of Bourke as at 1915:
41038 44022 46037 48013 48030 52026 55023 56017 63004 63005 65016
41038 1.000000 0.981347 0.973383 0.982110 0.977861 0.986833 0.976811 0.987419 0.970968 0.968518 0.968830
44022 0.981347 1.000000 0.967179 0.975930 0.967989 0.977086 0.958355 0.970831 0.952533 0.956943 0.950220
46037 0.973383 0.967179 1.000000 0.988875 0.985632 0.983043 0.966048 0.972243 0.973168 0.975498 0.980417
48013 0.982110 0.975930 0.988875 1.000000 0.987949 0.989954 0.974651 0.978422 0.980841 0.978687 0.983444
48030 0.977861 0.967989 0.985632 0.987949 1.000000 0.988996 0.972063 0.980174 0.980991 0.980977 0.982513
52026 0.986833 0.977086 0.983043 0.989954 0.988996 1.000000 0.975084 0.985666 0.975096 0.977792 0.977235
55023 0.976811 0.958355 0.966048 0.974651 0.972063 0.975084 1.000000 0.977821 0.974827 0.968638 0.971549
56017 0.987419 0.970831 0.972243 0.978422 0.980174 0.985666 0.977821 1.000000 0.977116 0.980557 0.975108
63004 0.970968 0.952533 0.973168 0.980841 0.980991 0.975096 0.974827 0.977116 1.000000 0.986658 0.988937
63005 0.968518 0.956943 0.975498 0.978687 0.980977 0.977792 0.968638 0.980557 0.986658 1.000000 0.986913
65016 0.968830 0.950220 0.980417 0.983444 0.982513 0.977235 0.971549 0.975108 0.988937 0.986913 1.000000
Neighbours of Bourke as at 1915:
IX Site Name Lat Lon Start End Years PC AWS distance
892 48013 BOURKE POST OFFICE -30.0917 145.9358 1871-04-01 1996-08-01 124.3 98 N 0.000000
895 48030 COBAR POST OFFICE -31.5000 145.8000 1881-02-01 1965-12-01 78.2 91 N 84.842921
926 52026 WALGETT COUNCIL DEPOT -30.0236 148.1218 1878-08-01 1993-06-01 113.7 97 N 113.668073
877 46037 TIBOOBURRA POST OFFICE -29.4345 142.0098 1910-01-01 2014-02-01 103.8 96 N 208.378032
867 44022 CHARLEVILLE POST OFFICE -26.4025 146.2381 1889-05-01 1953-09-01 64.3 98 N 222.073017
1102 65016 FORBES (CAMP STREET) -33.3892 148.0081 1873-04-01 1998-05-01 122.1 96 N 224.466052
940 55023 GUNNEDAH POOL -30.9841 150.2540 1876-12-01 2011-12-01 128.8 87 N 229.620677
838 41038 GOONDIWINDI POST OFFICE -28.5481 150.3075 1891-03-01 1991-06-01 98.5 96 N 246.879335
953 56017 INVERELL COMPARISON -29.7783 151.1114 1874-02-01 1997-11-01 122.6 98 N 269.917594
1062 63004 BATHURST GAOL -33.4167 149.5500 1858-01-01 1983-05-01 118.1 89 N 271.797439
1063 63005 BATHURST AGRICULTURAL STATION -33.4289 149.5559 1908-07-01 2014-02-01 102.1 94 N 272.531348
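For anyone wanting to reproduce that sort of matrix, the calculation itself is straightforward. The sketch below uses synthetic data (the BoM annual series aren’t reproduced here, and the station numbers are just labels taken from the table above); `np.corrcoef` does the Pearson arithmetic on the anomaly series:

```python
import numpy as np

# Pearson correlations between annual temperature anomaly series for a
# handful of stations. Station numbers follow the table above, but the
# data are synthetic stand-ins for the real BoM series.
stations = ["48013", "48030", "52026", "46037"]  # Bourke, Cobar, Walgett, Tibooburra

rng = np.random.default_rng(1)
years = 30
shared = rng.normal(0, 0.5, years)  # regional signal common to all stations
series = {s: shared + rng.normal(0, 0.1, years) for s in stations}

# Convert each series to anomalies (deviation from its own mean), then
# correlate every pair in one call.
data = np.vstack([series[s] - series[s].mean() for s in stations])
corr = np.corrcoef(data)

for s, row in zip(stations, corr):
    print(s, " ".join(f"{v:.3f}" for v in row))
```

With a strong shared signal and small station-specific noise, the off-diagonal correlations come out high, which is exactly the pattern the real matrix above shows for Bourke’s eastern neighbours.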
Did you read the disclaimer at the beginning of the report that you link?
“CSIRO and the Bureau of Meteorology advise that the information contained in this publication comprises general statements based on scientific research. The reader is advised and needs to be aware that such information may be incomplete or unable to be used in any specific situation. No reliance or actions must therefore be made on that information without seeking prior expert professional, scientific and technical advice.”
Your conclusion appears to be ill-advised according to the BoM.
Don,
Well written as always. But are you going to grace your post with a statistic or two? You know, a mean, or a standard deviation. All you have done is cherry-pick two weather stations. Pitman has argued that on balance the homogenisation has reduced the warming trend, not increased it!
Muller analysed unadjusted temperature data and still reported a warming trend.
And the published analysis of Judith Curry among others refutes your hyperventilation about the nagging problem of measurement of temperature.
Welcome back, David. No, this post was about an argument that is going on in the press. I don’t have any data myself, and nor did the article in The Conversation. I didn’t ‘cherry-pick’, for the same reason.
I believe there are only two weather stations in the Northern Territory, a place the size of France, that have been there, continuously, for more than a century. What can we say about Northern Territory temperatures a century ago on this basis? Nothing.
I think there are two separate issues here, or perhaps three.
(1) The people who are attempting to squeeze temperature trends from a system that was never designed to record climate change feel they must homogenize the temperature data and they apply a set of techniques to flag, evaluate and change such temperatures that they think anomalous. Their goal is a temperature record “that fluctuates and changes only in response to weather and climate variations” (Trewin 2012* quoting Conrad & Pollack 1950 p. 42). Of course, this is an impossible goal (“no century-long temperature records meeting such standards exist in Australia” p. 42*), but as long as the methods are transparent, the data readily available, and the error and uncertainty clearly acknowledged (David will be disappointed that there are no error bars), then this is no worse than many other modelling enterprises. Unfortunately none of these criteria appear to be met.
* http://www.cawcr.gov.au/publications/technicalreports/CTR_049.pdf
(2) Mistaking or misrepresenting the homogenized records as being real temperature data. The best approximation to actual temperatures at historic weather stations is the original data. Possible errors should be flagged, but not altered. Playing ‘let’s make the planet burn’ on a government computer is one thing, misrepresenting historical data is fraud. It’s bad enough when the media does it, but there is no excuse for the BoM to be hawking ‘hottest summer ever’ malarky. They know how unreliable the data is.
Here’s the first of Trewin’s* examples: Inverell, 1967: site move of 100 m from post office (surrounded by buildings) to library grounds (more open) resulted in “mean annual adjustment of −1.0°C for maximum temperature and −1.2°C for minimum temperature” (p. 77). Yet, we are supposed to accept homogenization using stations many to hundreds of kilometres distant?
(3) Corruption of the historical record. Let’s hope this isn’t happening. Original records in the US appear to be protected and I hope ours are too.
One record high temperature which has been removed from the official records: the 53.1 C at Cloncurry on 16 January 1889. The reason? It was not measured using the Stevenson screen. OK, but it would have been extremely hot nevertheless and worth a mention, perhaps altered oops… homogenised in the updated records.
The current record is now a cooler 50.7 C at Oodnadatta on 2 January 1960.
For anyone interested in the relevance of peer review to nonconformity, and why our BoM panjandrums would like to filter any criticism through it, this makes interesting reading:
http://arstechnica.com/science/2014/09/is-there-a-creativity-deficit-in-science/