An Agnostic’s Guide to Global Temperature, part 2: Measuring Temperature

August 11, 2012

When I entered the literature on global warming, my first worry arrived with the notion of an average global temperature. There is one, usually expressed as the ‘global temperature anomaly’ (GTA), which is the difference between the average temperature for a month and the average temperature for that month over a baseline period, typically 1961 to 1990. Twelve months’ averages give you a year’s average. Any day’s temperature is the average of the maximum and minimum temperatures for that day. If your thermometer showed -4 as the minimum (yes, you can have a thermometer that does this for you) and +12 as the maximum, then the average for that day will be +4 degrees C. Think about that for a moment: what does it mean to you? Nothing much? Nor for me. And this single figure is susceptible to variations like cloud cover. It might be cloudy all morning and then be hot for a couple of hours before the clouds roll in again…
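The arithmetic above can be set out in a few lines. This is only a sketch with made-up numbers; the function names and the example readings are mine, not anyone's official method.

```python
# A minimal sketch of how a daily mean and a monthly anomaly are computed.
# All numbers here are illustrative only.

def daily_mean(t_min: float, t_max: float) -> float:
    """The daily 'average' is just the midpoint of the min and max readings."""
    return (t_min + t_max) / 2

# The midpoint of -4 and +12 is +4 C, whatever actually happened in between.
assert daily_mean(-4, 12) == 4.0

def monthly_anomaly(monthly_mean: float, baseline_mean: float) -> float:
    """Anomaly = this month's mean minus the baseline (e.g. 1961-1990) mean."""
    return monthly_mean - baseline_mean

# If (say) a July averaged 15.3 C and the July baseline was 14.8 C:
print(round(monthly_anomaly(15.3, 14.8), 1))  # 0.5
```

Note that the midpoint tells you nothing about how the day's temperature was distributed between those two readings.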

Where are the thermometers? Ah, well, by and large they are on land, where people live, because someone has to read and record the findings. On land the largest set by far is in the USA. At the moment there are a thousand or so to cover the land surface; there have been as many as 6,000 and as few as a few hundred. The measuring organisation averages them, and makes adjustments too (there will always be missing data, and measuring stations may have moved). With a thousand measurements it is legitimate (though not sensible) to go to three decimal places with your average, and two or three decimal places are common when we are told that this has been the fifth-hottest year (or whatever the claim is) since measurements began. That happened at the end of last month, when Americans were told that the July they had just had was the hottest ever. Even on the available data that wasn’t so, but then, as must be becoming plain, these data contain a great deal of error. The GTA is an index — an average of averages of averages. I doubt that anything meaningful can be said with such a statistic.
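The "legitimate but not sensible" point can be illustrated. The sketch below averages a thousand invented station readings, each carrying an assumed half-degree of error, and computes the standard error of the mean; the numbers are random and illustrative, not real data.

```python
# Why quoting an average of ~1,000 noisy readings to three decimal places
# is misleading.  Station values here are random, purely for illustration.
import random
import statistics

random.seed(1)
# Pretend each station reading carries up to +/-0.5 C of error
# around a 'true' value of 15.0 C.
stations = [15.0 + random.uniform(-0.5, 0.5) for _ in range(1000)]

mean = statistics.mean(stations)
sem = statistics.stdev(stations) / len(stations) ** 0.5  # standard error of the mean

print(f"mean = {mean:.3f} C, standard error ~ {sem:.3f} C")
# The standard error here is of order 0.01 C, so the third decimal of the
# mean carries no information -- and this calculation assumes the stations
# are a random sample of the earth's surface, which they are not.
```

And the standard-error formula only covers sampling noise; it says nothing about adjustments, moved stations, or missing data.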

But what about the oceans, which cover approximately 70 per cent of the globe? Well, we now have a well-distributed set of Argo buoys, 3,000 or so of them, that can give us sea surface temperatures, and water temperatures below the surface as well. But they have only been in operation for a few years. Before that the measurers relied on temperatures provided by ships, via half a dozen different procedures, or, after 1979, on satellite measurements of a given column of air that can be tweaked to give temperature equivalents. Where were the ships? On recognised shipping routes, for the most part — routes which have only existed since the Titanic disaster — and the southern ocean has had hardly any traffic since the opening of the Suez Canal.

So how do the measurers deal with the temperatures where no one lives or with the temperatures where there aren’t any ships? The answer is that they do it by one or other variety of extrapolation — that is, by extending what data they have to areas where they haven’t any. The resulting averages are the averages of the original data, no matter how clever the extrapolation. It’s not an unreasonable procedure, and we do it all the time ourselves in our daily lives. In Canberra, where I live, there is a small range in the maximum temperatures recorded at the airport and in some suburbs. The minima have a wider range, because of the packets of cold air that roll in from the mountains and collect in valleys during the night.

Now all these data are collected and put into the General Circulation Models that purport to tell us what is going to happen to our climate. The surface of the earth is divided into 2,592 grid boxes measuring five degrees of latitude by five degrees of longitude, and ideally there should be at least one measuring point inside each box. In fact, in 1900 only about one hundred boxes had one; the number rose to 260 by 1950, after which it fell again. The rest, the great majority, are filled in by one or other form of extrapolation.
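The grid arithmetic is worth doing explicitly, using only the figures quoted above:

```python
# The 5-degree grid from the paragraph above, and the coverage
# implied by the box counts it quotes (100 in 1900, 260 by 1950).
lat_bands = 180 // 5   # 36 bands from pole to pole
lon_bands = 360 // 5   # 72 bands around the globe
boxes = lat_bands * lon_bands
assert boxes == 2592

for year, with_data in [(1900, 100), (1950, 260)]:
    pct = 100 * with_data / boxes
    print(f"{year}: {with_data}/{boxes} boxes observed ({pct:.0f}%); "
          "the rest extrapolated")
```

So even at its 1950 peak, roughly nine boxes in ten held no measurement at all.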

How we measure and have measured temperature is a complex and dense business, and a fascinating story in itself. I’ve summarised the situation very briefly, but I think my account is accurate. In my postgraduate and postdoctoral days I became more and more conscious of the need for extreme accuracy in measurement, and what I have given you is, in my opinion, cause for reservation about the reliability and validity of the global temperature record — and about what it can mean to us as human beings living on the planet. To take the latter first, it means nothing to anyone to know that the average temperature in Australia last Tuesday was, for the sake of a figure, 14 degrees. We want to know about where we live, and what the highs and lows were. Index figures of anything are at a remove from whatever it is that is being measured, and the GTA has no human meaning.

That does not mean, however, that temperature measurements are worthless. People have been writing about temperature, rain and weather extremes since writing began: we are alert to our weather. We are able to say that there have been unusually warm and unusually cold periods in human history, because those living at the time not only wrote about them, but did things that require certain weather conditions, like holding Frost Fairs in the late 17th century on the frozen Thames in London, or growing wine grapes near the Scottish border in the 14th century.

What it does mean is that the GTA, and the other monthly and annual averages that we have, offer a rough guide only to what has happened. The further back in the 20th century we go, the larger the error in the temperature data. For my part, I prefer the satellite measurements, which cover the whole globe, but they started as recently as 1979. I can agree that over the twentieth century there was an irregular and small warming. I could add that on the evidence the globe does not seem to have cooled over this time. What we cannot point to is an accurate figure to describe that warming, because the errors involved are of the same order as the GTA itself, or larger. They are large because of sampling error (the thermometer sites are not a good sample of all measurement points) and measurement error (until recently the thermometers had to be read by eye, and the error in reading is not trivial). There are other errors, too: every observation station has its own micro-climate, and each is therefore subject to local influences of which we have no knowledge.
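If the sampling, reading and siting errors above are independent, the standard way to combine them is in quadrature (the root-sum-of-squares rule). The magnitudes below are invented purely to show the arithmetic; they are not estimates of the real errors.

```python
# The standard quadrature rule for combining independent errors.
# The component magnitudes are made up, for illustration only.

def combined_error(*components: float) -> float:
    """Root-sum-of-squares of independent error components."""
    return sum(c ** 2 for c in components) ** 0.5

sampling, reading, siting = 0.10, 0.05, 0.08   # degrees C, illustrative
total = combined_error(sampling, reading, siting)
print(f"combined error ~ {total:.2f} C")
```

The point is that the combined error shrinks more slowly than any one component, and a claimed shift smaller than it is simply inside the noise.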

So, yes, I can agree that the earth has been warming, and that it probably warmed during the 20th century. It may well have started warming earlier still, too. Anything more precise than that is not, at least in my judgment, supported by the methodology. I therefore argue that those who follow the movement of temperature data with attention to minute changes are having themselves on. The errors are much larger than the shifts they exclaim at, and this comment applies to the sceptics as well as to the believers.

And a final comment: I learned over my working life that people love to play with numbers, and they will invest their numbers with passionate belief. They do so because they love to measure, and they’re good at it. Economists, stock-market analysts, gamblers and shoppers all do this, not just the climate brigade, and so do I. But you have to be very careful about what it is you are measuring, and how relevant your product is to the reality from which you abstracted the numbers. In the climate domain I do not think that due care is observed enough.

In the next instalment I will write about whether or not this warming has been ‘unprecedented’.

(I thank John McLean, whose special intellectual interest this issue is, for pointing out ambiguities in an earlier draft. Those that remain are my fault!)
