Klotzbach et al. revisited, a reply by John Christy

Reposted with permission from Marcel Crok’s blog: De staat van het klimaat

Recently Jos Hagelaars published a very extensive blog post (on the blog of Bart Verheggen) about the widely discussed paper of Klotzbach et al. 2009. The title of the blog post – Klotzbach revisited – is in English; the post itself, however, was written in Dutch. As a fellow Dutchman I understand that writing in Dutch is easier than writing in English. In this case, however, the blog post focuses so much on one single paper that Jos Hagelaars, in my opinion, should have chosen an English version, in order to give the authors of the Klotzbach papers the chance to react. I translated the article with Google Translate and did some minor editing. I then shared the article with a few of the coauthors. John Christy looked at some of the issues raised by Hagelaars and wrote the following reaction, which I publish here as a guest blog.

Guest blog by John Christy

In a blog post entitled “Klotzbach Revisited” Jos Hagelaars updated the results of Klotzbach et al. 2009, 2010, suggesting that the main point of Klotzbach was no longer substantiated. Klotzbach et al.’s main point was that a direct comparison of the relationship of the magnitude of surface temperature trends vs. temperature trends of the troposphere revealed an inconsistency with model projections of the same quantities.  Klotzbach et al. offered suggestions for this result which included the notion that near-surface air temperatures are easily affected by factors unrelated to greenhouse gas increases, which then implies they may be poor proxies for detecting the magnitude of the greenhouse effect which has its main impact in the deep atmosphere.

It appears Hagelaars’ key point is that when the data from Klotzbach et al. are extended beyond 2008 to include data through 2012, the discrepancies, i.e. the observed difference between surface and tropospheric trends relative to what models project, are reduced somewhat.

Confusion
The reader must understand that there are two issues that have unfortunately been conflated and misinterpreted.  The first issue deals specifically with the relationship between a surface temperature trend and the temperature trend of the corresponding tropospheric layer above (roughly surface to 10 km altitude and referred to as LT for “Lower Troposphere”).  The second issue deals with the actual magnitude of the surface and tropospheric trends.  Thus the first issue is a question of the physics of the vertical temperature structure (i.e. internal model processes) and the second issue is a question of trend magnitudes (i.e. rates of warming or climate sensitivity).  The two are, of course, related.

Here is how the confusion often happens.  As shown in many results, the observed tropospheric trend is often near (or slightly below) the magnitude of the surface trend.  Thus, someone may say “the surface and troposphere agree” as if that validates greenhouse warming theory.  However, in model results (i.e. according to theory) the surface and tropospheric trends should NOT agree because in models the troposphere warms faster than the surface.  So, if surface and tropospheric trends agree, then by implication, model output is incorrect. Below we shall look at this more closely.

Regarding the first issue, there have been many studies which have looked at the relationship between the magnitude of the surface temperature trend relative to that of the tropospheric layer as defined above (e.g. Douglass et al. 2007).  Global climate models when forced by extra greenhouse gases on average indicate their global average troposphere warms at a rate about 1.25 times that of the surface, i.e. the trend of the troposphere is amplified by a factor of 1.25 over that of the surface.  When confined to the tropics (20°S – 20°N) the amplification is about 1.4 times that of the surface.  This model-generated tropospheric warming in the tropics is known as the “hot spot” and has been claimed to be a signature of greenhouse warming because of its prominence in models.

Amplification
When separated by land and ocean, the model amplification factor is found to be larger over oceans than land.  Klotzbach et al. 2010 calculated the ratio over global land to be 1.1, and this was confirmed by independent analysis (see http://climateaudit.org/2011/11/07/un-muddying-the-waters/).  Hagelaars follows an early calculation by Gavin Schmidt, claiming the land value should be 0.95.  As noted however, several additional calculations confirm the value of 1.1 utilized by Klotzbach et al. 2010.  The model amplification of the ocean trends is close to 1.6 as determined by the NASA-GISS model.

The second issue is the simple magnitude of global temperature trends of the surface and troposphere as depicted by models and as observed by instruments.  Since both issues can be examined by investigating the observational record, we have created the Table below to update Klotzbach et al. 2010 and address the concerns of Hagelaars.
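As context for the table: trends of this kind are ordinary least-squares slopes fitted to monthly anomaly series and scaled to °C/decade. A minimal sketch of that calculation (the synthetic anomaly series and its built-in 0.15 °C/decade slope are illustrative assumptions, not any of the actual datasets discussed here):

```python
import numpy as np

def trend_per_decade(anomalies_monthly):
    """OLS slope of a monthly anomaly series, scaled to degrees C per decade."""
    t_years = np.arange(len(anomalies_monthly)) / 12.0
    slope_per_year, _intercept = np.polyfit(t_years, anomalies_monthly, 1)
    return 10.0 * slope_per_year

# Illustrative only: 34 years (1979-2012) of synthetic monthly anomalies
# with a built-in trend of 0.015 C/year (= 0.15 C/decade) plus noise.
rng = np.random.default_rng(0)
months = 34 * 12
t = np.arange(months) / 12.0
series = 0.015 * t + rng.normal(0.0, 0.1, months)

print(round(trend_per_decade(series), 2))  # recovers roughly 0.15 C/decade
```

With 408 monthly values the noise barely moves the fitted slope, which is why trends in the table below are quoted to three decimals.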

In his last table, Hagelaars appears to be subtracting the actual observed values of LT and Sfc which produces values very similar to those shown in the upper half of our table.  It is true that these differences are a little closer to zero than shown in Klotzbach et al., but that is due to the fact that there has been no warming in the past 10 years in both types of data.  (Note too, that if the surface and tropospheric trends “agree” in absolute magnitude, that means they do not agree with model output as noted earlier – hence closer agreement of absolute trends can imply greater disagreement with model results.)

Now, a more direct, “apples to apples” comparison test for the model output is to amplify the surface trends (with model factors) for comparison with the LT trends.  We have been conservative with the amplification factors, but even so, the differences are large – and very large over land.  Thus the basic point of Klotzbach et al. 2010 is confirmed, i.e. that the average climate model warms its atmosphere, relative to its land surface, more than seen in observations. (Other studies focus on the tropical “hot spot” where it is clear models also significantly warm the troposphere relative to observations, e.g. Christy et al. 2010.) This raises at least some suspicion as to the ability of the near-surface air temperature to be used as a proxy for greenhouse detection.

Table:  1979-2012 trends (°C/decade).  No amplification factors are applied in the upper half of the table, thus they compare different quantities.  Land, Ocean and Global factors of 1.1, 1.4 and 1.2 are applied to the surface data in the lower half.  Recent results from Santer et al. 2012 indicate a global amplification factor greater than 1.3 for model LT vs. Sfc, but we use only 1.2 below.  Lower tropospheric data are from the University of Alabama in Huntsville v5.5 (UAH) and Remote Sensing Systems v3.3 (RSS), and surface data from the National Climatic Data Center (NCDC) and the Hadley Centre/Climatic Research Unit Temperature v4 (HadCRUT4).  Artificial values of “NCDC LT” and “HadCRUT4 LT” are calculated by multiplying their actual trends by the model amplification factors.

Actual
                 UAH LT    RSS LT    NCDC      HadCRUT4
Land             0.175     0.182     0.266     0.272
Ocean            0.116     0.107     0.100     0.115
Globe            0.137     0.131     0.152     0.161

Difference (No Amplification)
                 UAH-NCDC  RSS-NCDC  UAH-HadC  RSS-HadC
Land             -0.091    -0.084    -0.097    -0.090
Ocean            +0.016    +0.007    +0.001    -0.008
Globe            -0.015    -0.021    -0.024    -0.030

Hypothetical (with Amplification)
                 NCDC LT   HadCRUT4 LT
Land (1.1xSfc)   0.293     0.300
Ocean (1.4xSfc)  0.140     0.161
Globe (1.2xSfc)  0.184     0.193

Difference LT
                 UAH-NCDC LT  RSS-NCDC LT  UAH-HadC LT  RSS-HadC LT
Land             -0.118       -0.111       -0.125       -0.118
Ocean            -0.024       -0.033       -0.045       -0.054
Globe            -0.047       -0.053       -0.056       -0.062
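The arithmetic behind the lower half of the table can be reproduced directly from the observed trends in its upper half. A minimal sketch (the trend values are copied from the table; small last-digit differences against a few published entries arise because the published table was computed from unrounded trends):

```python
# Observed 1979-2012 trends (C/decade), as listed in the table.
lt = {"Land":  {"UAH": 0.175, "RSS": 0.182},
      "Ocean": {"UAH": 0.116, "RSS": 0.107},
      "Globe": {"UAH": 0.137, "RSS": 0.131}}
sfc = {"Land":  {"NCDC": 0.266, "HadCRUT4": 0.272},
       "Ocean": {"NCDC": 0.100, "HadCRUT4": 0.115},
       "Globe": {"NCDC": 0.152, "HadCRUT4": 0.161}}
amp_factor = {"Land": 1.1, "Ocean": 1.4, "Globe": 1.2}

# Amplify each surface trend by the model factor to get the hypothetical
# "surface LT" trend, then difference it against the satellite LT trends.
for region in ("Land", "Ocean", "Globe"):
    for surf_name, surf_trend in sfc[region].items():
        amplified = amp_factor[region] * surf_trend
        for sat_name, sat_trend in lt[region].items():
            diff = sat_trend - amplified
            print(f"{region}: {sat_name} - {surf_name} LT = {diff:+.3f}")
```

For example, over land the amplified NCDC trend is 1.1 × 0.266 ≈ 0.293, so UAH − NCDC LT ≈ 0.175 − 0.293 = −0.118, matching the table.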

CMIP5 versus observations
Of equal importance here are the magnitudes of the actual trends of the surface and troposphere.  The average global surface trend for 90 model simulations for 1979-2012 (Coupled Model Intercomparison Project Phase 5 or CMIP-5, used for IPCC AR5) is +0.232 °C/decade.  The average of the observations is +0.157 °C/decade.  Therefore models, on average, depict the last 34 years as warming about 1.5 times what actually occurred.  Santer et al. 2012 (for 1979-2011 model output) noted that a subset of CMIP-5 models produce warming in LT that is 1.9 times observed, and for a deeper layer of the atmosphere (mid-troposphere, surface to about 18 km) the models warm the air 2.5 times that of observations.  These are significant differences, implying the climate sensitivity of models is too high.
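The “about 1.5 times” figure is simply the ratio of the two quoted trends; a quick check using the numbers from the paragraph above:

```python
cmip5_sfc_trend = 0.232     # C/decade, average of 90 CMIP-5 simulations, 1979-2012
observed_sfc_trend = 0.157  # C/decade, average of the observational datasets

ratio = cmip5_sfc_trend / observed_sfc_trend
print(round(ratio, 2))  # 1.48, i.e. roughly 1.5 times the observed warming
```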

Signature
All of the above addresses the two issues mentioned at the beginning.  First, global climate models on average depict a relationship between the surface and upper air that is different than that observed, i.e. models depict an amplifying factor into the upper air that is greater than observed.  Secondly, the average climate model depicts the warming rate since 1979 as much higher than observed with increasing discrepancies as the altitude increases (which is consistent with the first issue).

Since this increased warming in the upper layers is a signature of greenhouse gas forcing in models, and it is not observed, this raises questions about the ability of models to represent the true vertical heat flux processes of the atmosphere and thus to represent the climate impact of the extra greenhouse gases we are putting into the atmosphere.  It is not hard to imagine that as the atmosphere is warmed by whatever means (e.g. extra greenhouse gases), existing processes which naturally expel heat from the Earth (i.e. negative feedbacks) can be more vigorously engaged and counteract the direct warming of the forcing. This result is related to the idea of climate sensitivity, i.e. how sensitive is the surface temperature to higher greenhouse forcing, for which several recent publications suggest models, on average, have been overly sensitive.

References:
Christy, J.R., B. Herman, R. Pielke Sr., P. Klotzbach, R.T. McNider, J.J. Hnilo, R.W. Spencer, T. Chase and D. Douglass, 2010: What do observational datasets say about modeled tropospheric temperature trends since 1979? Remote Sens., 2, 2138-2169, doi:10.3390/rs2092148.

Douglass, D.H., J.R. Christy, B.D. Pearson and S.F. Singer, 2007: A comparison of tropical temperature trends with model predictions. International Journal of Climatology, doi:10.1002/joc.1651.

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2009: An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841. <http://pielkeclimatesci.files.wordpress.com/2009/11/r-345.pdf>

Klotzbach, P.J., R.A. Pielke Sr., R.A. Pielke Jr., J.R. Christy, and R.T. McNider, 2010: Correction to “An alternative explanation for differential temperature trends at the surface and in the lower troposphere. J. Geophys. Res., 114, D21102, doi:10.1029/2009JD011841”. J. Geophys. Res., 115, D1, doi:10.1029/2009JD013655.

Santer, B.D. and 26 others, 2012: Identifying human influence on atmospheric temperatures. Proceedings of the National Academy of Sciences, doi:10.1073/pnas.1210514109.


47 thoughts on “Klotzbach et al revisited, a reply by John Christy”

  1. Dr. Christy

    In your opinion what single or combination of variables in the models is producing this overly sensitive amplification factor?

    Has anyone tweaked these variable(s) in the models to more closely follow observed temperatures and projected future temperatures based upon a model that is more realistic?

  2. Well, if the models disagree with the data it must be time to adjust the data …. again :)

    Are some of the past adjustments of land data coming into play?

  3. John Christy: Great post. I’ve been trying to find something that climate models simulate properly and I can’t find it. In addition to the problems you’ve presented, they can’t simulate sea surface temperatures. See here:

    http://bobtisdale.wordpress.com/2012/10/08/model-data-comparison-sea-surface-temperature-anomalies-november-1981-through-september-2012/

    Also, the models indicate Marine Air Temperature should warm faster than sea surface temperatures:

    But in the real world it’s the other way around:

    Those graphs are from this post:

    http://bobtisdale.wordpress.com/2011/11/04/an-initial-look-at-the-hindcasts-of-the-ncar-ccsm4-coupled-climate-model/

    And they can’t model satellite-era precipitation:

    http://bobtisdale.wordpress.com/2012/12/27/model-data-precipitation-comparison-cmip5-ipcc-ar5-model-simulations-versus-satellite-era-observations/

  4. The assertion that the impact of AGW will be “in the deep atmosphere” is interesting. It would be appreciated if this could be explained further. That is the first use of the term I am aware of. Until now, the main impacts were in increased/decreased rain/drought, hot/cold, snow/non-snow, storms/calm. As it is, this term sounds like an ad hoc excuse to distract from yet another failure in the predicted disaster so many AGW promoters have made their career from.

  5. denniswingo:
    I can’t speak for Dr. Christy, but I think it is the sensitivity to CO2 that is driving the models higher. The models are missing something important and trying to lump it into the CO2 sensitivity number, which is rather odd, as they don’t even know if what they are missing even closely approaches linear. There is something missing in the modeled physics as is rather distinctly pointed out in the tables Dr. Christy provided. What is missing? I haven’t a clue, but something is definitely missing. Hopefully someone more studied in the subject than I will find it.

  6. So our atmosphere is more robust and responsive to heat balance than the alarmists want us to believe…

    Why does this not surprise me… Tell me again that our earth was not intelligently designed… using basic laws of chance we should not be here at all…

  7. Richard M says:
    February 22, 2013 at 8:07 am

    Well, if the models disagree with the data it must be time to adjust the data …. again :)

    Are some of the past adjustments of land data coming into play?

    =========================================

    Data manipulations biting those who did them in the butt?

    LOL

    Never thought of that one but it well could be… too much TWEAKING is a bad thing… :)

  8. Considering all the apparent disagreements about source data,perhaps it is time we have a new group established.
    Their sole job would be the monitoring of the quality of data.
    Because as it stands, it would seem everyone who can’t get reality to agree with their theories simply changes the numbers of source data.

  9. “Has anyone tweaked these variable(s) in the models to more closely follow observed temperatures and projected future temperatures based upon a model that is more realistic?”

    The main knob in the climate models is the treatment of aerosols where there are large uncertainties. If you choose different values you get sensitivities that range from 2.1 to 4.4.

    The missing piece of the puzzle is what happens to this analysis if you select models that have sensitivity below 3.

    Put another way, looking at this data you can see that the ocean is a relatively good match:
    .02 to .03 in trend. The land is where the problem is.

    Possible causes:

    1. In both UAH and RSS, temperature retrievals are done differently over land and SST.
    That is a source of potential error. Note I said potential.
    2. Model amplification: the model amplification could be as much as 20% too high.
    Put another way, you have to reduce amplification by 20% to get a match, but then
    the “fingerprint” gets smaller and harder to detect.
    3. UHI. Since the ocean matches fairly well and the difference is the land, the balance
    could be due to UHI and/or microsite. This puts an upper limit on the magnitude of UHI & microsite of around .1C per decade. That figure is consistent with a wide variety of studies;
    even the Berkeley UHI study allows for a UHI effect that could be as high as .1C per
    decade, when you consider the full range of uncertainty (+- .2C).

    That last point should not go unnoticed by folks. Looking at the difference between land trends and UAH or RSS (with amplification) you see that UHI & microsite is bounded from
    above at around .1C per decade. The problem? FINDING a difference this small with appropriate statistical analysis. Heck, the error bars from the Berkeley study were +- .2C. When the effect is small and the variance is high, the power of your test is dramatically affected. Another decade of data might narrow it (have to do the math) but anyone should be able to see that this area is open to various interpretations: bias in UAH/RSS, bias in models, bias in the land record.
    It also means that different people will look at the same data and draw conclusions to their bias.
    A disinterested look at the underlying data recognizes multiple explanations. One obvious test is to restrict the models to those that have a sensitivity less than 3 and see what the answer is.
    If your thesis is that the sensitivity is too high for many of the models, it’s easy to test.
    Another test would be to limit the land record to rural stations. That could leave microsite bias
    as a residual explanation. Simply, microsite might be bigger than UHI, but the combination of the two seems to be limited from above to ~.1C per decade over the period in question.

  10. Steven Mosher says:
    February 22, 2013 at 9:57 am

    “The main knob in the climate models is the treatment of aerosols where there are large uncertainties”
    ++++++++++++++++

    Notice SMs failure to recognize the elephant in the room, namely that CO2 at near saturation levels is not even a bit player in running our current climate. No amount of knob twisting of the models will change that..

  11. Steven Mosher says: February 22, 2013 at 9:57 am

    “The missing piece of the puzzle is what happens to this analysis if you select models that have sensitivity below 3.”

    Wouldn’t it be better to select models with a TCR under a certain value? For example, GFDL-CM2.1 wouldn’t make your cut as it has an ECS of 3.4. Its TCR, however, is a lowly 1.5, so there’s a good chance it could match observations.

  12. The point is there is NO troposphere amplification, no troposphere hotspot or tropical troposphere hotspot. A key feature of the theory and the climate models.

    It has nothing to do with aerosols as Mosher stated above.

    RealClimate had many of its followers confused for a long time (as Mosher seems to be or is trying to do) by claiming that even solar warming could cause a tropospheric hotspot so it is not a fingerprint of global warming theory. Well, yeah, it is a fingerprint of CO2-caused warming theory (despite the misdirection about solar warming).

    The only prediction of the theory which seems to be close so far is the Arctic sea ice melt trend (but then they missed by a mile on the Antarctic sea ice trend/increase). So 1 fluke out of 20 (just like election polling error margins) still makes the theory 100% wrong.

  13. “These are significant differences, implying the climate sensitivity of models is too high.”

    Well, that is what it is all about, isn’t it!

    As the current Draft of AR5 fudges over the issue of the extra warmth being completely missing (i.e. no hot spot) this paper is timely, to say the least.

    So how many times must it be shown that models including a CO2-induced hot spot are incorrectly predicting a Much Warmer Future™? Would 100 be enough? How about only 10? How much falsification is required?

    The prevarication from CAGW aficionados is about how the Hot Spot is not really supposed to be there – it is Polar Amplification™ that exists in theory and in evidence, except for that pesky South Pole. Yet the Hot Spot, the core of GHG beliefs, is still stuck stock still in the models – no pole-amps for them! When the mob comes for the modellers with pitchforks and rakes demanding their money back, the modellers will no doubt have thought up a new excuse. But come they will. The anger is growing.

  14. Bill Illis says:
    February 22, 2013 at 11:08 am
    RealClimate had many of its followers confused for a long time (as Mosher seems to be or is trying to do) by claiming that even solar warming could cause a tropospheric hotspot so it is not a fingerprint of global warming theory. Well, yeah, it is a fingerprint of CO2-caused warming theory (despite the misdirection about solar warming).
    Exactly. Its existence would not be a confirmation, as it could be caused by the sun; its absence is a refutation, as it exists in all models but not in reality.

  15. It’s worse than we thought. Global warming also causes climate models to fail! (Why not? It gets blamed for everything else.)

  16. The RSS satellite dataset shows no warming for 23 years. Is that because the increasing UHI effect from earlier times around the temp sensors has now ceased to have any further effect?
    Has there been any significant increase in land temperatures other than the UHI effect?

  17. Bill Illis wrote:
    “The point is there is NO troposphere amplification, no troposphere hotspot or tropical troposphere hotspot. A key feature of the theory and the climate models.”

    That is exactly right. CAGW can only be considered settled science when proponents have made multiple, non-trivial predictions and then proven them correct beyond reasonable doubt. Simply predicting that it will get warmer is 50-50, which is trivial, and even that is not doing so well. Polar amplification and the mid-troposphere hotspot are the key non-trivial predictions, and as Bill correctly summarized, both are far from being considered proven.

  18. However, in model results (i.e. according to theory) the surface and tropospheric trends should NOT agree because in models the troposphere warms faster than the surface.

    The GHG warming theory predicts warming by a specific mechanism and that mechanism results in the troposphere warming faster than the surface. If this isn’t happening, the theory is falsified, irrespective of how much the surface does or doesn’t warm.

    All the talk about sensitivity, aerosols, possible errors in the satellite data, etc. is just attempts to increase the uncertainty such that advocates of GHG warming can still claim the theory might still be correct, despite the data showing the theory is false.

    From the wikipedia entry on Kuhn’s The Structure of Scientific Revolutions, and Mosher take note,

    As a paradigm is stretched to its limits, anomalies — failures of the current paradigm to take into account observed phenomena — accumulate. Their significance is judged by the practitioners of the discipline. Some anomalies may be dismissed as errors in observation, others as merely requiring small adjustments to the current paradigm that will be clarified in due course. Some anomalies resolve themselves spontaneously, having increased the available depth of insight along the way. But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing scientists will not lose faith in the established paradigm for as long as no credible alternative is available; to lose faith in the solubility of the problems would in effect mean ceasing to be a scientist.

  19. @Louis

    “Global warming also causes climate models to fail! (Why not? It gets blamed for everything else.)”

    Hold that thought! But you should use the contemporary terms: Climate disruption causes climate models to fail! Our “CO2 pollution” has so disrupted the normal flow of events that the disequilibrated atmosphere is causing climate models, all of them, to fail! It is a complete disaster! We shall have to compensate the modellers with lots of money because it is we skeptics who have broken their models.

    @Philip Bradley

    “The GHG warming theory predicts warming by a specific mechanism and that mechanism results in the troposphere warming faster than the surface.”

    Not only that, it predicts warming by a specific rate. The absence of the extra warming has simultaneously broken the back of the ‘rate’ claim. The rate is zero, or 1 divided by zero, can’t remember, doesn’t matter. It is simply not there.

    Antigonish
    Yesterday upon the stair
    I met a man who wasn’t there.
    He wasn’t there again today!
    I wish, I wish he’d go away.
    – Hughes Mearns

    The Modellers Lament
    Yesterday up in the air
    I found a trend that wasn’t there
    It wasn’t there again today!
    I wish that trend would go away.

    My mind is now confused and slow;
    There is no incandescent glow.
    I really hoped that trend was there
    I see my model’s just hot air.
    – Crispin in Waterloo

  20. I’d highlight the existential problem faced by climate modelers by a slight rephrasing of the description of Kuhn’s work.

    But no matter how great or numerous the anomalies that persist, Kuhn observes, the practicing climate modelers will not lose faith in the established models for as long as no credible alternative model is available; to lose faith in the solubility of the model’s problems would in effect mean ceasing to be a climate modeler.

  21. I’ve said it before; the climate is chaotic. Why waste time even thinking about modeling it? Let alone countless man hours and trillions of dollars…

  22. Mosher states that the sea temperatures are a better match to the models than the land data. It is far easier to ‘adjust’ the sea temperatures to get the right result than it is on land. The whole leather bucket, manifold temperature, modern manifold temperature adjustment renders the historic SST a guesstimate at best. It is even more dodgy than the TOB adjustment.

  23. We know that CO2’s ability to create heat is logarithmic and that the models do not factor in negative feedbacks correctly, if at all.
    Therefore trying to look for a certainty when key elements are incorrect or missing is asking for trouble.
    Other than that, the lack of warming for the last 16 years confirmed by Pachauri is another admission of the AGW shambles. The usual AGW answer is that this is “cherry picking”…well, so is the late 20th century warming and the last 200 years.
    The Holocene we are in had its “Climatic Optimum” 10,000 years ago. That’s not “cherry picking”, it is what is called empirical data. And it should be noted that the temperature variability per century ever since has been +/- 2.5C and the warming we experienced late 70′s to late 90′s did not cross the line up or down.
    We have therefore been cooling for 10,000 years with nothing unusual to report…unlike the Younger Dryas period, when a 10C increase in temp over 3 years in the Arctic reverted over 1000 years before doing the same rise in 60 years. Not a plane in the sky or a car on the road.
    What is more, CO2 levels were as much as 15 times today’s levels 500 million years ago.
    Did we burn up…or green up?
    Well, we are here, are we not?

  24. Robertv says:
    February 22, 2013 at 3:42 pm
    I suppose there is no cooling in these models.

    Not in the ones that are angling for more funding. :)

  25. Bill Illis says (February 22, 2013 at 11:08 am): “The point is there is NO troposphere amplification, no troposphere hotspot or tropical troposphere hotspot.”

    OK, but as Steve Mosher mentioned, there could be amplification if the surface trend is exaggerated. If the model amplification is correct, then how much must the surface trend have been overestimated/overadjusted?

  26. Dr. Christy,

    I would be interested in your thoughts on Lindzen’s thoughts in his recent European Physical Journal Plus paper -

    3 Some other important physical concepts: The moist adiabat and the Rossby radius
    Any picture of the thermal structure of the atmosphere clearly displays two important features: 1) Temperatures are nearly horizontally homogeneous in the tropics, and 2) the vertical profile of temperature in the tropics closely follows the moist adiabat.
    3.1 The moist adiabat
    The moist-adiabatic lapse rate (or saturated-adiabatic lapse rate) is the rate of decrease of temperature with height along a moist adiabat. …
    The response is characterized by the so-called hot spot (i.e., the response in the tropical upper troposphere is from 2–3 times larger than the surface response). The models are likely correct in this respect since the hot spot is simply a consequence of the fact that tropical temperatures approximately follow the moist adiabat. This is essentially a consequence of the dominant role of moist convection in the tropics.
    However, we see in fig. 9 that the temperature trends obtained from observations fail to show the hot spot.
    In point of fact, it seems likely that some of the recent temperature data must be wrong!
    The resolution of the discrepancy demands that either the upper troposphere measurements are wrong, the surface measurements are wrong or both. If it is the surface measurements, then the surface trend must be reduced from “a” to “b”. Although, it is generally ill-advised to estimate climate sensitivity from observed changes in temperature (simply because of ignorance of all the relevant processes including especially natural internal variability on time scales of centuries or less), it would be very difficult to simulate the trend at “b” with models having their current sensitivity.
    Given how small the trends are, and how large the uncertainties in the analysis, such errors are hardly out of the question. In fact there are excellent reasons to suppose that the error resides in the surface measurements. To understand this requires an awareness of the Rossby Radius of Deformation.
    3.2 Rossby Radius of Deformation
    In dynamic meteorology, there is something called the Rossby Radius. It is the distance over which variables like temperature are smoothed out. This distance is inversely proportional to the Coriolis Parameter (twice the vertical component of the earth’s rotation), and this parameter approaches zero as one approaches the tropics so that temperature is smoothed over thousands of kilometers (detailed formulas can again be found on Wikipedia).
    However, this smoothing is only effective where turbulent diffusion is small. Below about 2 km, we have the turbulent trade wind boundary layer, where such smoothing is much less effective so that there is appreciable local variability of temperature. In practice, this means that for the sparsely sampled tropics, sampling problems above 2km are much less important than at the surface (Lindzen and Nigam [5]). Thus, errors are more likely at the surface.
    An important philosophical point to this little exercise is that neither ambiguous data nor numerical model outputs should automatically be assumed to be right or wrong. Both should be judged by basic, relatively fundamental theory—where such theory is available.

    5. R.S. Lindzen, S. Nigam, J. Atmos. Sci. 44, 2418 (1987).

    http://eaps.mit.edu/faculty/lindzen/ssurftgrad.pdf

  27. The only fingerprints of warming in the observations and the models are those of the climate catastrophics … cut off the hands of the climate catastrophics and the fingerprints will disappear.

  28. Gary Hladik says:
    February 22, 2013 at 5:13 pm
    OK, but as Steve Mosher mentioned, there could be amplification if the surface trend is exaggerated. If the model amplification is correct, then how much must the surface trend have been overestimated/overadjusted?
    And here the circle closes: one cannot do good science if the data collection and data preparation lack quality.

  29. We can also check to see if the Tropics are warming faster than other latitudes because this is also a component of the troposphere warming prediction – that it will mainly be concentrated from 40N to 40S with the Tropics having the highest rate overall.

    Is this evident in the data?

    Nope, we actually have a Tropical Coolspot instead, with the Tropics troposphere warming at about 60% of the rate of the other latitudes (south polar might be lower, however).

    So, again, the theory is just wrong, and no misdirection about solar warming, aerosols or poor surface records can take away from this failed prediction.

  30. “The average of the observations is +0.157 °C/decade. Therefore models, on average, depict the last 34 years as warming about 1.5 times what actually occurred.”

    henry says

    I notice with some pride that my observation for the same period (last 38 years) was 0.014 °C/year, which is +0.14 °C/decade. Look at my table for means:

    http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

    Pretty good, if I may say so myself. So we know from experience that my sample is not bad. But now take some time to study all of my tables. I suspected that the speed of warming might not be constant, and therefore decided to cut the 38 years into 4 periods. Notice that for all three (maxima, means and minima) we turned from warming to cooling some 15 years ago. If you do some plotting with the 4 speeds for each of the maxima, means and minima, you can get very high correlations on binomials (parabolas), and sooner or later you will figure out exactly when we turned from warming to cooling.

    Clearly it is this insistence to deny the fact that earth is cooling that has caused “model failure”.

    OTOH we can only hope that those binomials with high correlation are actually wrong and that the speed of warming follows an A-C (sine-wave) curve, looking more or less like this:

    http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/

    If not, I don’t know where we will be heading.
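The "binomial" (parabolic) fitting described in the comment above can be made concrete: a parabola through three equally spaced rate estimates has a closed-form peak. The decadal rates below are purely illustrative numbers, not henry's data:

```python
def quadratic_vertex(t0, h, rates):
    """Fit y = a*(t - t1)**2 + b*(t - t1) + c exactly through three
    equally spaced points (t0, t0 + h, t0 + 2*h) and return the
    vertex time, i.e. where the fitted warming rate peaks."""
    y0, y1, y2 = rates
    t1 = t0 + h                          # centre point
    a = (y0 - 2.0 * y1 + y2) / (2.0 * h * h)
    b = (y2 - y0) / (2.0 * h)
    return t1 - b / (2.0 * a)

# Hypothetical warming rates (deg C/decade) centred on 1985, 1995
# and 2005; the fitted rate peaks near 1997.5:
print(round(quadratic_vertex(1985, 10, (0.10, 0.16, 0.14)), 2))
```

Extrapolating the same parabola to where the rate crosses zero would then date a warming-to-cooling turn, which is the kind of exercise the comment proposes; whether such an extrapolation is meaningful is a separate question.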

  31. Slightly OT but there’s something that’s been bugging me for a while about the whole idea of climate sensitivity, as used in the models. It seems to be accepted that, all other things being equal, a doubling of CO2 will cause a certain temperature rise of about 1 deg C. We’re told this is a matter of physics.

    But surely there MUST be limits to that – outside certain bounds, that simple relationship can’t logically hold true. If, as an extreme example, the earth had an atmosphere with a single molecule of CO2 and we then “doubled” that concentration to two molecules, it would be absurd to suggest that single extra molecule would raise the entire planet’s temperature by one degree. In fact, if we had an atmosphere with no CO2 and someone appeared and took their first ever breath, the relationship as used should lead to immediate runaway temperatures because of the infinite increase in CO2! There may well also be a breakdown of the relationship with high concentrations, but that’s a little harder to illustrate with such a simple thought experiment.

    I’m happy to accept that the physics as used will operate ok as an approximation around the current values, but has any work been done on exactly what range of concentrations the relationship (as modelled) is valid over? Leaving aside any other criticisms of climate modelling, if a model is only valid within limits then isn’t it kind of essential to know what those limits are?

  32. This brings a question. If the higher troposphere via modeling warms faster than the surface, that would reduce the lapse rate, and slow convection. That should decrease precipitation in tropical areas. I thought models predicted the opposite. Of course, the simplest answer is that models are just wrong.

  33. Joe says:
    February 23, 2013 at 5:53 am
    ————

    There is a simple formula for the theory.

    TempCAnom = 3/2Ln(2)*Ln(CO2ppm/280) = 4.328 * Ln(CO2ppm) – 24.39

    It's just easier to use, and you can put any CO2 number in there in parts per million and it will pump out what the temperature change is supposed to be at the theory's 3.0 C per doubling.

    As you get to really low levels of CO2, say below 0.1 ppm, the formula falls apart because now you have exceeded the total greenhouse effect of 33 C (actually it is 21 C, but that is for another day). Because strange things happen at lower CO2 levels, the climate scientists say the theory only holds for values above 100 ppm and probably not more than 500,000 ppm.

    I always think any theory should work across all possible values or it should just be thrown out.

    If we change it to 1.5 C per doubling: TempCAnom = 2.16 * Ln(CO2ppm) – 12.19 (which is what the actual climate seems to be telling us), then all the biggest boundary problems disappear.

    We can have extremely low levels of CO2 without violating the greenhouse effect value, and we can go to extremely high CO2 values and still have temperatures within the range that existed on historical Planet Earth (highest temp at +15.0 C in the Cambrian). Problem solved. But it still can't get down to 1 or 2 or even a trillion CO2 molecules; those values don't work.
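The boundary behaviour this comment describes can be checked directly. A sketch taking the relation at face value, using the corrected 3/Ln(2) coefficient given in a later comment; the 33 C total-greenhouse figure is the commenter's claim, not an established result:

```python
import math

def temp_anom(co2_ppm, per_doubling=3.0):
    """Anomaly (deg C) relative to 280 ppm for a fixed sensitivity
    per CO2 doubling: per_doubling / ln(2) * ln(C / 280)."""
    return per_doubling / math.log(2.0) * math.log(co2_ppm / 280.0)

print(temp_anom(560.0))     # one doubling -> 3.0 by construction
print(temp_anom(0.1))       # below -33, exceeding the quoted greenhouse total
print(temp_anom(0.1, 1.5))  # at 1.5 C/doubling the same level stays above -33
```

This reproduces the comment's arithmetic: at 3.0 C per doubling the formula implies more cooling at 0.1 ppm than the quoted 33 C greenhouse effect allows, while at 1.5 C per doubling it does not.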

  34. Bill Illis says:
    February 23, 2013 at 4:23 am
    We can also check to see if the Tropics are warming faster than other latitudes because this is also a component of the troposphere warming prediction – that it will mainly be concentrated from 40N to 40S with the Tropics having the highest rate overall.

    Is this evident in the data?

    Nope, we actually have a Tropical Coolspot instead, with the Tropics troposphere warming at about 60% of the rate of the other latitudes (south polar might be lower, however).

    So, again, the theory is just wrong, and no misdirection about solar warming, aerosols or poor surface records can take away from this failed prediction.

    If one takes into account the cooling effect of CO2, then this situation makes perfect sense. Since the CO2 at the tropics at any given altitude is radiating at a higher temperature, we should see more energy loss there than in places where it radiates at a cooler temperature. IOW, the cooling effect is also stronger at exactly the same places where the warming effect (GHE) is stronger.

    The lack of a hot spot is perfectly explained by accounting for ALL of the physics.

  35. Joe says
    I’m happy to accept that the physics as used will operate ok as an approximation around the current values, but has any work been done on exactly what range of concentrations the relationship (as modelled) is valid over?
    Henry says
    No. Try and understand this post here.

    http://blogs.24.com/henryp/2011/08/11/the-greenhouse-effect-and-the-principle-of-re-radiation-11-aug-2011/

    I also query this “doubling up” theory.
    In fact, I found out we do not even know whether the net effect of more CO2 is warming, cooling or simply (close to) zero. You cannot simply “calculate” something that has never been tested.

    My previous post

    http://wattsupwiththat.com/2013/02/22/klotzbach-et-al-revisited-a-reply-by-john-christy/#comment-1231086

    clearly shows declining temperatures (cooling), so I would not worry about the CO2 anymore. Rather, prepare for the coming cold. (Look at the weather stations in your area and calculate the trend.)

  36. beng says
    That should decrease precipitation in tropical areas.
    Henry says
    in a cooling period you get more precipitation at lower latitudes and less at the higher latitudes
    in a warming period it is reversed

  37. Thanks Bill, so the theory derived “from physics” becomes unreliable, according to climate scientists, at less than 2 “halvings” from their magic pre-industrial levels.

    That formula also raises a problem for me. That (CO2ppm/280) looks suspiciously like it contains a term for the “pre-industrial” level, which screams “curve fit” to me, because physics really shouldn't care what the (relatively modern) pre-industrial level happened to be.
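One clarifying check on the curve-fit worry above: since Ln(CO2ppm/280) = Ln(CO2ppm) - Ln(280), the 280 only sets where the anomaly reads zero; the predicted *change* between any two concentrations is independent of the reference. A quick sketch, where the 100 ppm alternative reference is an arbitrary choice for illustration:

```python
import math

def anom(c_ppm, ref_ppm, per_doubling=3.0):
    """Anomaly relative to an arbitrary reference concentration."""
    return per_doubling / math.log(2.0) * math.log(c_ppm / ref_ppm)

# Warming from 280 -> 400 ppm, computed against two different references:
d_ref280 = anom(400.0, 280.0) - anom(280.0, 280.0)
d_ref100 = anom(400.0, 100.0) - anom(280.0, 100.0)
print(round(d_ref280, 6), round(d_ref100, 6))  # the reference cancels
```

So the 280 is a baseline convention rather than a fitted physical constant, although that by itself says nothing about whether the assumed sensitivity is right.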

  38. denniswingo says:
    February 22, 2013 at 8:04 am
    ++++
    The models use CO2 as the initial forcing, which is minor, but the models then assume that this will trigger more water vapor which they say will create more of a positive feedback. As well the models also say there would be increases in methane which will as well contribute more positive feedbacks.

    So – they blame it all on CO2 as the primary driver of other feedbacks which they wrongly assert as positive.

  39. Bill Illis says:
    February 23, 2013 at 9:12 am
    Joe says:
    February 23, 2013 at 5:53 am
    ————
    There is a simple formula for the theory.
    TempCAnom = 3/2Ln(2)*Ln(CO2ppm/280) = 4.328 * Ln(CO2ppm) – 24.39
    ——————————–

    Sorry, I screwed that up; there's an extra 2 in the formula.

    Should be.

    TempCAnom = 3/Ln(2)*Ln(CO2ppm/280) = 4.328 * Ln(CO2ppm) – 24.39

  40. No problem Bill. Kinda sussed that from the right-hand side, but it doesn't detract from my unease, with or without the extra 2.

    There's still the problem of an arbitrary “level”, which has no physical meaning except “how it happened to be on a particular arbitrary date in history”, apparently being used as a constant in a formula that they seem to promote as derived from first principles. As I said, that suggests a curve fit rather than a true derivation from theory.

    Which would explain why it falls apart at low values, and possibly high ones as well – a sound theoretical relationship shouldn’t!

  41. Since there is great argument about the correlation of CO2 and temperatures, I doubt the formula has much merit, except to reproduce the past at one point in time. I have to agree with HenryP on this one, but only because his point seems valid here. It seems a curve fit, and correlation is NOT causation, yet the formula seems to assume causation. Most of the observations show a severe lack of correlation at most other times in history.
