A little time ago, in a post on climate, I made a remark that caused a couple of readers to object. What I said was this: ‘Since the relationship between carbon dioxide increases and temperature is logarithmic it will take a good deal of time to tell [whether or not an increase in CO2 will be harmful to us].’ One of the readers then went on to guess at what I meant, and he was pretty close. But the objections forced me to stop and think. What was I trying to say, and had I said it correctly?

A stickler for accuracy might say that the relationship I was talking about was between ‘radiative forcing’ and temperature, and that has something to it. But if you search you’ll find plenty of references to the logarithmic link’s being between carbon dioxide and temperature. I’ve been assuming there is one for a long time, without really checking, and that it implies a lengthy period between changes in the one and consequent changes in the other.

The logarithmic relationship is accepted by the IPCC and just about everyone else you can consult. What varies is the implications that people draw from it. The basis is the fact that carbon dioxide radiates only within a narrow frequency spectrum, and the first parts per million have the strongest effect. The more CO2 you add, the smaller the outcome. In short, each additional doubling has the same effect on temperature: the increase from 280 to 560 ppm is the same as from 560 to 1120, and so on.
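The arithmetic behind ‘each doubling has the same effect’ is easy to check. Here is a minimal sketch in Python, using the commonly cited simplified forcing formula ΔF ≈ 5.35 ln(C/C0) W/m^2; the 5.35 coefficient is a standard approximation, not a figure from the post itself:

```python
import math

def forcing(c, c0):
    # Approximate CO2 radiative forcing in W/m^2, using the widely
    # quoted simplification dF = 5.35 * ln(C/C0).
    return 5.35 * math.log(c / c0)

# Each doubling adds the same forcing, however high you start:
print(forcing(560, 280))    # 280 -> 560 ppm: about 3.7 W/m^2
print(forcing(1120, 560))   # 560 -> 1120 ppm: the same again
```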

It’s like adding blankets and jumpers when it gets really cold. The first one does most of the work. Or in painting, a better example, if you apply transparent paint with the finest touch of colour, the first painting gives the strongest colouring; later layers only faintly deepen the tint.

You can see the relationship over time in the following graph. Modtran5 is the current computer program devised (and apparently owned) by the USAF to model the atmospheric propagation of electromagnetic radiation. The vertical axis shows net downwards forcing, while the horizontal axis shows parts per million of CO2. The red lines relate the two at the time of the Industrial Revolution (conventionally 1780), the green lines show the present, and the black lines the result when the Industrial Revolution level has been doubled. Increased concentrations clearly have a progressively smaller warming effect. (I’m sorry the graph is so small; I haven’t found how to enlarge it.)

Although Svante Arrhenius is also credited with it, the first mention I can find for an apparent logarithmic relationship is that of Guy Callendar, an English engineer, in 1938. His paper, read to the Royal Society, is astonishingly modern in its attack, and his simple approach seems to have been ignored by the IPCC. Steve McIntyre of ClimateAudit is a big fan of Callendar, and you can read more about Callendar here. In the 1938 paper you will find Callendar’s projection of the relationship over time. He didn’t describe it as being logarithmic, but he drew it, and it plainly has that shape.

OK, so where is the rub? The orthodox, including the IPCC, accept that there is a logarithmic relationship, and that it will produce around 1 degree Celsius for a doubling of CO2. But it is as though they find that quite uninteresting. They are fixated on climate sensitivity, which they see as far more important. I’ve written about climate sensitivity before in a tangential way, and it’s probably worth a proper exploration in a future post.

In the orthodox presentation the 1 degree Celsius consequence of doubling CO2 becomes the basis for multiplication. Climate sensitivity is the sum of the proposed feedback consequences of a change in forcing – in this case an increase in the proportion of carbon dioxide in the atmosphere. The feedback factors include clouds, water vapour, ice and snow and a few others. The current IPCC view is that the resulting range is from 1.5 to 4.5 degrees C. It no longer proposes a midpoint, but the obvious one here is 3. In the IPCC picture, the logarithmic effect of doubling (1 degree C) is multiplied by feedbacks to produce an outcome of between 1.5 and 4.5 degrees C.

If the IPCC is right, then my proposed slow change from a doubling, let alone from two doublings, is called into question. But what are the facts? Well, there aren’t any. What we have in the literature are estimates resting on a great variety of assumptions. There are so many, in fact, that the IPCC has abandoned fixing on one of them, or on an average. What is more, there are quite a number of papers that propose a sensitivity of around 1, and one or two that place it at less than 1.

Without over-doing it, my current feeling is that the orthodox may be committed to a high sensitivity level, because without it there is no AGW scare. The gentle warming that appears to have been the case over the past 150 years seems to have been beneficial to humanity at least, and I can’t see a good reason to suppose that a continuation of it at the same rate has to be seen with horror.

What is more, the current rate of warming is very small, and we may even be in a cooling phase. While that continues, the year in which we see the 1 degree increase from the time of the Industrial Revolution moves further and further away. On the current evidence, carbon dioxide is important, but it is not a super-power.

Don

I appreciate your response but you still have not got it. You can’t present a relationship between X and Y to imply something about Z. Specifically, you present the relationship between CO2 and temperature to imply something about time.

Today you introduced “radiative forcing” into the discussion, but this only repeats your error. Radiative forcing relates temperature to the difference between the radiant energy (sunlight) received by the Earth and the energy radiated back to space. See http://en.wikipedia.org/wiki/Radiative_forcing

There is no mention of Time! So when you write

“But if you search you’ll find plenty of references to the logarithmic link’s being between carbon dioxide and temperature. I’ve been assuming there is one for a long time, without really checking, and that it implies a lengthy period between changes in the one and consequent changes in the other.”

No, this does not “impl[y] a lengthy period between changes in the one and consequent changes in the other”. The answer is: it depends. Climate science is complex. 🙂

Regards David

P.S. I hope your holiday was great!

Now I understand your objection. I originally wrote that I ‘assum[ed] …that it implies …’ And indeed I was wrong to assume that. In the new piece, I wrote ‘If the IPCC is right, then my proposed slow change from a doubling, let alone from two doublings, is called into question.’ So I recognised that one couldn’t just do that.

In practice, of course, we are talking about a lengthy period of time, at least in human terms. I accepted the estimate of 280 ppm for 1780, because I read the paper and it seemed reasonable. After 230 years we are still a long way from reaching 560 ppm, and it is impossible to know what the increase in temperature has been since 1780. At the moment, or over the last decade or so, there has hardly been any warming at all, according to the evidence.

But you are right. My assumption was flawed logically.

We can use the relationship between CO2 and temperature to tell us something about time, or rather temperature vs time, and vice versa.

The empirically determined CO2 concentration vs time curve looks like this:

http://1.bp.blogspot.com/-9TuvgwlerZ8/UvBROnTHaRI/AAAAAAABJUc/7b67VqPJ2iE/s1600/global-co2-levels-since-1700.png

My post above showed a graph of temperature vs time.

So combining these data sets we can get an empirical plot of temperature directly against CO2 concentration (or log CO2 concentration, as shown in the other graph above).

Assuming (a) a causal connection between CO2 concentration and temperature, and (b) that the more rapid increase in temperature with CO2 concentration, compared with the theoretical Modtran graph (strictly radiative forcing rather than temperature in this case), is due to positive feedback causing a sensitivity factor higher than one, then the graph of temperature vs CO2 concentration (or log CO2 concentration) in the future will not depend on how quickly the CO2 concentration rises with time.

However, the future shape of the temperature vs time curve, which currently appears to be accelerating in a pseudo-exponential manner, will depend on the rate of future CO2 emissions.

The CO2 time data show that there has been an approximately exponential rise in CO2 emissions and atmospheric CO2 concentration with time. (Again, the CO2 concentration looks compressed on this graph due to vertical scaling.)

http://scienceblogs.com/startswithabang/files/2011/06/fig1.png

Don notes that “After 230 years we are still a long way from reaching 560 ppm”.

But if rapid industrialization of the developing world continues, so that emissions continue to rise at a pseudo-exponential rate, a rapid rise in CO2 concentration, reaching and exceeding 560 ppm with a consequent increase in temperature, may occur sooner rather than later, in spite of the logarithmic dependence of temperature on CO2 dampening that rise somewhat.
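To put a rough number on ‘sooner rather than later’: assuming, purely for illustration, that the concentration keeps growing exponentially at about 0.5% a year (roughly 2 ppm/yr at today’s 400 ppm – my assumption, not a figure from the thread), the time to reach 560 ppm works out as:

```python
import math

growth = 0.005            # assumed annual growth rate (illustrative only)
c_now, c_target = 400.0, 560.0

# Solve c_now * (1 + growth)**years = c_target for years:
years = math.log(c_target / c_now) / math.log(1 + growth)
print(round(years))       # roughly 67 years from now
```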

If future emissions and the rate of rise in CO2 concentration were drastically reduced, the temperature vs time curve would not show as rapid a rise in future.

Hi Don,

I have been doing some work on this over the years.

You are quite correct, there is a logarithmic relationship between CO2 concentration and Radiative Forcing.

And it’s not a great change: 3.5 W/m^2 or so for a doubling to 800 ppmv. (The reason for this is that the main absorption band is nearly saturated – all the radiation is absorbed across most of the band, so the only increase in absorption is in the far wings – and in the centre of the band there is increased radiation to space, reducing the “forcing”.)

As you correctly point out, the big issue is sensitivity – DegC/W/m^2 of “forcing” (or indeed of change in insolation). Does the planet respond in opposition to the change (Le Chatelier’s Principle) or is there positive feedback?

Nature carries out this experiment on a daily and an annual basis. In Canberra, the average insolation changes by about 150W/m^2 from January to July. And the mean temperature changes by 15DegC. So in Canberra, the Climate Sensitivity is 0.1DegC/W/m^2 – that is we would expect a change of about 0.4DegC if CO2 were to double.
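Colin’s back-of-envelope sum can be written out in a few lines; the ~3.5 W/m^2 per doubling figure is the one he gives earlier in his comment:

```python
dT = 15.0    # DegC: Canberra mean temperature change, January to July
dF = 150.0   # W/m^2: change in average insolation over the same period

sensitivity = dT / dF                # 0.1 DegC per W/m^2
warming_2x = sensitivity * 3.5       # ~3.5 W/m^2 for a doubling of CO2
print(sensitivity, round(warming_2x, 2))  # 0.1 and 0.35 (about 0.4 DegC)
```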

Claims of positive feedback are unproven and counter-intuitive. They also run counter to the evidence, and counter to the calculated response of the surface to a change in forcing. If you do that sum, putting in reasonable estimates for evaporation (the major net energy transport mechanism from the surface into the atmosphere), you get a sensitivity of between 0.05 and 0.1 DegC/W/m^2, entirely in agreement with what happens in Canberra.

Quite interesting Colin. I had a look at the seasonal temperature variation for Cape Reinga on the northern tip of New Zealand. It is only one degree north of Canberra but has a variation of only 6.8 DegC, which would give you a seasonal sensitivity of around 0.045 DegC/W/m^2. Amazing how the massive heat sink of the ocean can iron out temperature sensitivities.

Also interesting is that the yearly temperature average of Canberra and Cape Reinga is somewhat similar, being 13.1C and 15.6C respectively. If you do a rough correction for differences in altitude and latitude then Canberra would be around 2C warmer than Cape Reinga.

Don you state: “The basis is the fact that carbon dioxide radiates only within a narrow frequency spectrum, and the first parts per million have the strongest effect.” The first part of this statement breaks Wien’s law, which states that the peak of emitted wavelength (the inverse of frequency, I believe, or at least it was when I studied electronics) is dependent on the emitting object’s temperature, not its composition. Has that law been repealed or disproved? Just about everyone involved in the discussion seems to think that CO2 emits the same wavelengths as it absorbs. That is just not possible, as it is against Wien’s Law.

I have not been able to find anything supporting the oft-claimed “CO2 emitting at the same wavelength as it absorbs”.

I would be interested in your comments on this point.

John

Wien’s Law unifies temperature and electromagnetic radiation (ie light) by using the concept of a black body radiator – a theoretical material that has 100% absorption and re-radiation (think of a lump of coal or coke and you get the drift). A black body radiator at a certain temperature radiates a broad range of radiation, its peak wavelength determined by its temperature, across a bell-shaped curve skewed toward longer wavelengths (or lower frequencies), with a dramatic drop-off on the shorter-wavelength side of the curve. (This dramatic drop-off explains why there is no “ultraviolet catastrophe”, which 19th century physics predicted – but that is really another story.)

By and large most materials do emit radiation close to the theoretical Wien’s curve; our sun (a giant ball of glowing gas at about 5800 K) is a good example. All matter, whether solid, liquid or gas, at a particular temperature follows (more or less) the Wien’s skewed bell-shaped curve.

Wien’s Law does not deal with absorption / re-radiation due to atomic or molecular properties of gases. Generally in these cases a wavelength absorbed is re-emitted at the same wavelength – eg Hydrogen Alpha at 6563 Angstroms (1 Angstrom = 10^-10 metres), in the deep red part of the visible spectrum, or CO2 at 15 microns (1 micron = 10^-6 metres), in the infrared (IR) spectrum.

Generally if absorption / re-radiation is in the visible spectrum, it’s due to electron(s) rising and falling over different atomic energy levels, whereas if it is in the IR spectrum it’s due to molecular vibration. The absorption / re-radiation time is much slower at the molecular level than at the atomic level.

CO2 absorption lines lie at 2.8, 4.2 and 15 microns (in the IR, which ranges from 0.7 to 100 microns). The light is absorbed and either re-emitted (at the same wavelength) or the equivalent energy is released through collision with another molecule, increasing its velocity (ie increasing its kinetic energy, resulting in an increase in temperature).

Wien’s law calculator works out the temperature of a black body whose radiation peaks at a particular wavelength: 2.8 microns equates to about 760 C, 4.2 microns to around 420 C, and 15 microns to -80 C. CO2’s 15 micron line is +/-2 microns wide, but well over half of the absorption is between 14 and 16 microns. There is only a small percentage of absorption between 13 and 14 microns and between 16 and 17 microns. However, a black body radiator at a temperature of -80 C would have a broader radiation band trailing deep into the infrared, and hence would have more energy than CO2’s narrower 15 micron line.
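The ‘Wien’s law calculator’ mentioned here amounts to a one-line application of the displacement law λ_max = b/T, rearranged for temperature; a sketch, assuming the usual constant b ≈ 2898 micron·K:

```python
B = 2898.0  # Wien's displacement constant, micron.K

def blackbody_temp_c(peak_wavelength_um):
    # Temperature (C) of a black body whose emission peaks
    # at the given wavelength in microns: T = b / lambda_max.
    return B / peak_wavelength_um - 273.15

for wl in (2.8, 4.2, 15.0):
    print(wl, round(blackbody_temp_c(wl)))  # roughly 760, 420 and -80 C
```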

If you could see at 15 microns, our atmosphere would appear opaque after about 5 metres at our current CO2 concentration. Increasing the CO2 concentration would slightly reduce that distance; even doubling the current CO2 concentration would only shorten it by a relatively small margin (to say 4.5 metres). At the current CO2 concentration most of Earth’s 15 micron radiation is already absorbed. If you could see the Earth from space through a 15 micron filter it would appear rather dark.

It is the 15 micron wavelength, radiated out towards space from the Earth and then absorbed, which causes a small degree of warming in our atmosphere. At -80 C there is not much warming, but it is sufficient to reduce by a tiny amount the Earth’s net radiation out to space (nearly all of Earth’s radiation into space lies between 8 and 13 microns).

John Tyndall (the 19th century physicist who discovered the absorption properties of gases) noted that CO2 was the weakest absorber of all the absorbing gases he experimented on.

John,

I would put it this way. Wien’s Law refers to a black body, which the Earth more or less is. But the earth’s atmosphere is not a black body at all, and as John Morland has explained earlier, Wien’s Law does not apply to absorption and re-radiation due to the molecular properties of gases. Wien’s Law has to a degree been supplanted by Planck, but that is another matter.

The reason IPCC proposes high climate sensitivity is because they couple CO2 presence in the atmosphere to the water vapor, which has a far more profound effect on climate. Were it down to CO2 alone, there’d be hardly any observable change at all. In effect IPCC ends up with their “climate sensitivity” hugely exaggerated and… unphysical.

Why is it unphysical? Because, in the first place, the proportionality coefficients between the various couplings are pulled out of thin air – the couplings may not even be linear. They are all simplistic assumptions that are then trained through climate model runs and comparisons with observed temperature trends throughout the 20th century. There are a great many such parameters in the models that can be tweaked – give me enough parameters, one of my friends used to say, and I can fit an elephant. The resulting parameter vector is not even unique: you can tweak the parameters differently and still get the same outcome.

But a more fundamental reason is that they *assume* all warming in the 1980s and 1990s to be due to anthropogenic CO2. This assumption is false, and it leads to the highly exaggerated climate sensitivity to CO2. There were several natural factors, very powerful, in that time that were coincident: multi-decadal ocean oscillations with periods of roughly 20 and 60 years that happened to be in phase, plus extremely high solar activity, the most intense in… 9000 years (how can anyone knowing this ignore it – the mind boggles). This is what really produced the warming, yet IPCC climate models ignore both, for purely political reasons, as we all know well enough.

As these natural factors are abating now, we end up, first, with “the pause”, but down the road – so most solar scientists believe – with cooling that should be most pronounced around 2030.

The climate, of course, is not driven by CO2. That this is so we know from geology. Setting known variations in the atmospheric CO2 concentration over the past 500 million years against known temperatures over that time shows no correlation. What this is telling us is that other climate drivers – clouds, ocean oscillations and currents, polar ice caps, the biosphere, solar activity, etc. – dominate, CO2 itself being insignificant in comparison, even at concentrations much higher than today’s.

Often when the logarithmic nature of the relationship between CO2 concentration and forcing is discussed, the impression is given that we are near the plateau region, and that there will be very little additional warming with further CO2.

The Modtran diagram, with its sharp rise up to about 100 ppm, tends to compress the curve at the concentrations that matter – those from the industrial revolution through the present and beyond. A better graphical representation is shown here:

http://knowledgedrift.files.wordpress.com/2010/05/log1-co2.jpg

With regard to the sensitivity parameter, an empirical first approximation can be taken from the temperature and CO2 data, the simplifying assumption being that “natural” forcings such as solar cycles, ENSO, volcanic eruptions etc tend to average to zero over the 160-year period of this calculation of the rise in temperature with CO2 concentration. The effects of very long term variations can also be neglected for this period.

http://oi46.tinypic.com/29faz45.jpg

The calculated value of temperature rise with doubling of CO2 is 2.04 ± 0.07 C.

Mauna Loa began measuring CO2 concentrations in 1958. Calculations using these data with HadCRUT4 temperature data since 1958, and with UAH satellite data since 1979, give temperature rises for a doubling of CO2 concentration of 2.01 ± 0.38 °C and 1.80 ± 0.91 °C respectively. The error margins rise with shorter time periods due to increasing uncertainty in the temperature trends.

Most interesting, Philip. Why do the two logarithmic graphs differ so much, and why should we prefer the one you give? I haven’t seen it before, while the one I gave is widely used.

Oops put the last reply in the wrong place. But to continue:

The HadCRUT4 temperature and Mauna Loa CO2 concentration data from 1958 are shown here:

http://tinyurl.com/lur8xso

The least squares fit of the temperature data gives a trend of

0.124 ± 0.023 °C/decade (2σ)

giving a temperature increase for the entire 56-year period of

0.694 ± 0.124 °C (an error margin of 19%)

The change in CO2 concentration for that period is from 315 to 400 ppm. (We will neglect the small error in CO2 concentration as this data is much less noisy than the temperature data.)

The equation for temperature rise with increasing CO2 is therefore

0.694 = k ln(400/315), where k is the proportionality constant and ln is the natural logarithm.

0.694 = k × 0.239

k = 0.694/0.239 = 2.91

The temperature rise for a doubling of CO2 concentration is therefore

2.91 × ln 2 =

2.01 ± 0.38 °C

Given the noisy data, the remarkable agreement with the figure from 1850 (2.04 ± 0.07 °C) should be regarded as fortuitous.
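The whole calculation above can be reproduced in a few lines, using natural logarithms throughout and the same rounded inputs Philip gives:

```python
import math

dT = 0.694              # DegC rise over the 56-year period (HadCRUT4 trend)
c0, c1 = 315.0, 400.0   # ppm: Mauna Loa, 1958 and present

k = dT / math.log(c1 / c0)      # proportionality constant, ~2.91
per_doubling = k * math.log(2)  # temperature rise per doubling, ~2.0 DegC
print(round(k, 2), round(per_doubling, 2))
```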

These figures fit well within the IPCC range of 1.5 – 4.5 °C.

The temperature trend for the UAH data from 1979 to the present is

http://tinyurl.com/mz4kfve

Trend: 0.138 ± 0.070 °C/decade (2σ)

(With a 35-year data set the error margin has blown out to 51%, from 19% for the 56-year data set, as the signal (the rise in temperature) to noise (the vertical variation of the data about the trend line) ratio decreases over shorter time periods.)

The rise in CO2 concentration for this period is 338 to 400 ppm

Repeating the calculations above for this data means that doubling of CO2 concentration gives a temperature rise of

1.80 ± 0.91 °C

This is in experimental agreement with the earlier calculations, but the error margin is very high.

Philip, your first graph would be better with the Y axis labelled “additional forcing since the start of the industrial revolution”. It is exactly the same as the one Don presents, except the X axis has been compressed so that it looks more scary.

The rest of your comment I can’t comprehend, do you have any references?

I might add here that it is not a question of graphs looking more or less “scary”. It is presenting them so that they are useful.

The Modtran graph is useful for seeing how the forcing varies from a concentration of zero CO2 upwards, but this makes it difficult to see what is going on over the concentrations we are concerned with on Earth in the recent past, the present and the coming decades and centuries, which are what matter to the discussion of anthropogenic global warming.

I was once involved in an online discussion with a person who was dead set against the idea that the rate of warming had increased with time over the last century or so. He hated the curved fit I showed above and denounced it, anyone connected with it and me for posting it in the most vituperative terms.

To back his claim that temperature had been increasing linearly he posted this graph:

http://tinyurl.com/bkoy8or

Now, he has deliberately placed a totally useless line at 9 on the vertical scale for the sole purpose of compressing the temperature data, in order to claim that the data are “unequivocally” fitted by a straight line.

Shorn of the extraneous camouflage, his graph looks like this:

http://tinyurl.com/lus5pgd

You can see why he was so anxious to flatten the appearance of the curve. This is hardly “unequivocally” fitted by a straight line, and is certainly inferior to the accelerating curve fit.

Graphs should be presented to inform not obscure the point under discussion, regardless of how “scary” or “comforting” alternative presentations may be.

Thanks Philip, I now understand what Robert was doing in that second graph. I plotted some HadCRUT4 data for the same period and got a similar sensitivity value, though with slightly more error. The R value for his graph is 0.91, which means the R-squared would be 0.83, so one could say that 83% of the warming since 1958 could be explained by CO2.

But as we well know, correlation does not necessarily mean causation. The sensitivity value Don quotes of around 1 C is for no feedbacks, while the range the IPCC quotes is higher, largely due to applying the positive feedback of water vapour. The models poorly handle clouds and oceanic cycles such as ENSO, and there may even be unknown forcings that have yet to be discovered. Given that stable natural systems involve negative feedbacks, I would put money on them dominating in the long term.

So what has caused an apparent sensitivity of 2.0 since 1958? My guess would be some CO2 warming which has been attenuated by negative feedbacks and a number of natural forcings and cycles which have operated in the past warm periods when CO2 was not an issue e.g. 6000 years BP.

The figure since 1958 is actually 2.0 ± 0.4 °C with the uncertainty being calculated from the 2 sigma or 95% confidence limits of the fit for the linear least squares trend of the temperature data. (See my post below). As I said I have ignored errors in the CO2 data which are small by comparison, but will add a bit more to the uncertainty of my calculation.

As I noted that was an error margin of 19% which I have carried into the value for temperature for doubling of CO2.

In other words there is a 95 % probability that the true figure based on that temperature data is between 1.6 and 2.4 C.

As I noted that is just a first approximation ignoring contributions other than CO2 increase and consequent feedbacks, on the assumption that shorter term natural forcings will tend to even out over the period, and longer term forcings, such as orbital changes responsible for ice ages will be too long term to contribute much.

So it is a rough first approximation.

Interestingly skeptic Steve McIntyre has promoted a model which uses a sensitivity factor of 1.65 C.

http://climateaudit.org/2013/07/21/results-from-a-low-sensitivity-model/

I must disagree that the r squared value of 83% means that the graph attributes 83% of the warming to CO2.

The r2 value just tells how well the data points match up with the straight line of best fit. A perfect fit would see all data points exactly on the straight line and the r and r2 values would be 1.

The actual contribution could be greater or smaller than 83%. Again the empirical value of 2 C for doubling of CO2 concentration simply says what the slope of a straight line fit of the data is. There are no theoretical assumptions here, other than that the data can be fit by a straight line.

The simplifying assumptions I mention above are clearly only that, simplifying assumptions, which may or may not correspond closely to reality (although I think they are not a million miles off) but the real contribution of other forcings cannot be determined just from the data presented.

Dlb and Don,

Thank you for the replies.

Firstly, as Dlb says, the graphs are not really different. It’s just that they have a different ratio of vertical to horizontal axis, and the second has cut out the part of the graph before 280 ppm CO2, so that the remaining vertical scale is effectively “stretched”. The effect on temperature above that concentration (the level prior to the industrial revolution) can be seen more clearly than in the Modtran graph, which must also display the curve below that level, and thus compresses the whole graph in the vertical direction relative to the horizontal. Alternatively, if you “squashed” in the Modtran graph from the right-hand end, the curves would look similar, at least above the 280 ppm level. The vertical axis on the second graph gives the additional forcing from 280 ppm rather than the absolute forcing value on the Modtran graph.

The plot of temperature vs log CO2 is (I think) from Robert Way. His “All Method Temperature Index” takes the average of 10 different temperature record data sets. The plot of temperature vs. time is shown here. (Unfortunately Dr Way does not tell us what the function for the curve is; it is probably a polynomial.)

http://www.skepticalscience.com/pics/AMTI.png

The log graph takes the CO2 concentrations for the corresponding period (from 1850 to 2010; prior to the Mauna Loa readings, ice core data are used).

You can see by comparing the temperature–time graph and the temperature–log CO2 graph that the temperature peaks and troughs match up (the spike at 1880/0.03, the local peak around 1940/0.12, the el nino peak at 1998/0.37 etc), but as one moves to the right, with increasing values of log CO2, the temperature axis is being “stretched”.

I will give the calculations for data from 1958 and 1979 in a reply immediately following.



I think you’ve misunderstood the graph, somehow seeing it as one that involves time. If the relationship is logarithmic (and I have yet to be completely convinced that it is), then whenever in the time interval [T0,T1] you double the concentration of CO2, the effect is fully realized at T1. So if you double CO2 over 150 years, the effect (say a 2.5 degC increase) will have happened by the end of those 150 years.