New Paper by McKitrick and Vogelsang comparing models and observations in the tropical troposphere
This is a guest post by Ross McKitrick (at Climate Audit). Tim Vogelsang and I have a new paper comparing climate models and observations over a 55-year span (1958-2012) in the tropical troposphere. Among other things we show that climate models are inconsistent with the HadAT, RICH and RAOBCORE weather balloon series. In a nutshell, the models not only predict far too much warming, but they potentially get the nature of the change wrong. The models portray a relatively smooth upward trend over the whole span, while the data exhibit a single jump in the late 1970s, with no statistically significant trend either side.
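To make the contrast concrete, here is a minimal illustrative sketch in Python. It uses synthetic numbers only, not the paper’s data or code; the break date, step size and noise level are invented. It simply contrasts the two competing descriptions: a smooth linear trend versus a single level shift with no trend on either side.

```python
# Minimal sketch on synthetic data (not the paper's code or data): compare a smooth
# linear-trend description with a "level shift, no trend" description of a
# 55-year monthly series. Break date, step size and noise level are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 55 * 12                                        # monthly data, 1958-2012 length
t = np.arange(n)
step = (t >= 20 * 12).astype(float)                # hypothetical jump in the late 1970s
y = 0.25 * step + 0.10 * rng.standard_normal(n)    # step plus noise, no true trend

trend_fit = sm.OLS(y, sm.add_constant(t)).fit()    # smooth-trend description
step_fit = sm.OLS(y, sm.add_constant(step)).fit()  # level-shift description

print("fitted trend (per decade):", round(120 * trend_fit.params[1], 3))
print("fitted step size:         ", round(step_fit.params[1], 3))
```

A series generated this way will generally show a positive fitted “trend” even though the only change is a one-time shift, which is why a comparison of models and observations has to allow for a possible level shift rather than assume a pure trend.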
Our paper is called “HAC-Robust Trend Comparisons Among Climate Series With Possible Level Shifts.” It was published in Environmetrics, and is available with Open Access thanks to financial support from CIGI/INET. Data and code are here and in the paper’s SI.
Tropical Troposphere Revisited
The issue of models-vs-observations in the troposphere over the tropics has been much-discussed, including here at CA. Briefly to recap:
- All climate models (GCMs) predict that in response to rising CO2 levels, warming will occur rapidly and with amplified strength in the troposphere over the tropics. See AR4 Figure 9.1 and accompanying discussion; also see AR4 text accompanying Figure 10.7.
- Getting the tropical troposphere right in a model matters because that is where most solar energy enters the climate system, where there is a high concentration of water vapour, and where the strongest feedbacks operate. In simplified models, in response to uniform warming with constant relative humidity, about 55% of the total warming amplification occurs in the tropical troposphere, compared to 10% in the surface layer and 35% in the troposphere outside the tropics. And within the tropics, about two-thirds of the extra warming is in the upper layer and one-third in the lower layer. (Soden & Held p. 464).
- Neither weather satellites nor radiosondes (weather balloons) have detected much, if any, warming in the tropical troposphere, especially compared to what GCMs predict. The 2006 US Climate Change Science Program report (Karl et al 2006) noted this as a “potentially serious inconsistency” (p. 11). I suggest it is now time to drop the word “potentially.”
- The missing hotspot has attracted a lot of discussion at blogs (eg http://joannenova.com.au/tag/missing-hot-spot/) and among experts (eg http://www.climatedialogue.org/the-missing-tropical-hot-spot). There are two related “hotspot” issues: amplification and sensitivity. The first refers to whether the ratio of tropospheric to surface warming is greater than 1, and the second refers to whether there is a strong tropospheric warming rate. Our analysis focused on the sensitivity issue, not the amplification one. In order to test amplification there has to have been a lot of warming aloft, which turns out not to have been the case. Sensitivity can be tested directly, which is what we do, and in any case is the more relevant question for measuring the rate of global warming.
- In 2007 Douglass et al. published a paper in the IJOC showing that models overstated warming trends at every layer of the tropical troposphere. Santer et al. (2008) replied that if you control for autocorrelation in the data the trend differences are not statistically significant. This finding was very influential. It was relied upon by the EPA when replying to critics of their climate damage projections in the Technical Support Document behind the “endangerment finding”, which was the basis for their ongoing promulgation of new GHG regulations. It was also the basis for the conclusion of the Thorne et al. (2011) survey that “there is no reasonable evidence of a fundamental disagreement between models and observations” in the tropical troposphere.
- But for some reason Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data they’d get a very different result, namely a significant overprediction by models. The IJOC would not publish our comment.
- I later redid the analysis using the full length of available data, applying a conventional panel regression method and a newer, more robust trend comparison methodology, namely the non-parametric HAC (heteroskedasticity and autocorrelation)-robust estimator developed by econometricians Tim Vogelsang and Philip Hans Franses (VF2005). I showed that over the 1979-2009 interval climate models on average predict 2-4x too much warming in the tropical lower- and mid-troposphere (LT, MT) layers and that the discrepancies were statistically significant. This paper was published as MMH2010 in Atmospheric Science Letters. (A simplified sketch of a HAC-robust trend comparison follows this list.)
- In the AR5, the IPCC is reasonably forthright on the topic (pp. 772-73). They acknowledge the findings in MMH2010 (and other papers that have since confirmed the point) and conclude that models overstated tropospheric warming over the satellite interval (post-1979). However they claim that most of the bias is due to model overestimation of sea surface warming in the tropics. It’s not clear from the text where they get this from. Since the bias varies considerably among models, it seems to me likely to have something to do with faulty parameterization of feedbacks. Also, the problem persists even in studies that constrain models to observed SST levels.
- Notwithstanding the failure of models to get the tropical troposphere right, when discussing fidelity to temperature trends the SPM of the AR5 declares Very High Confidence in climate models (p. 15). But they also declare low confidence in their handling of clouds (p. 16), which is very difficult to square with their claim of very high confidence in models overall. They seem to be largely untroubled by trend discrepancies over 10-15 year spans (p. 15). We’ll see what they say about 55-year discrepancies.
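For readers who want a feel for what a HAC-robust trend comparison involves, here is a rough sketch on synthetic data. It uses standard Newey-West (HAC) standard errors from statsmodels rather than the Vogelsang-Franses fixed-b test actually used in MMH2010 and in the new paper, and the “model” and “observed” series, slopes, noise and lag length are all invented for illustration.

```python
# Rough sketch of a HAC-robust trend comparison on synthetic monthly series.
# NOTE: plain Newey-West HAC standard errors, NOT the Vogelsang-Franses fixed-b
# test used in MMH2010 / McKitrick & Vogelsang; all series and numbers invented.
import numpy as np
import statsmodels.api as sm

def ar1_noise(n, rho, sigma, rng):
    """AR(1) noise, to mimic the serial correlation that HAC methods guard against."""
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = rho * e[i - 1] + sigma * rng.standard_normal()
    return e

rng = np.random.default_rng(1)
n = 31 * 12                                   # a 1979-2009-length monthly span
t = np.arange(n)
model_avg = 0.0020 * t + ar1_noise(n, 0.7, 0.08, rng)   # ~0.24/decade, hypothetical
observed  = 0.0008 * t + ar1_noise(n, 0.7, 0.08, rng)   # ~0.10/decade, hypothetical

# Regress the model-minus-observation difference on a time trend. If models and
# observations share the same underlying trend, the slope should be statistically
# indistinguishable from zero once autocorrelation-robust standard errors are used.
diff = model_avg - observed
fit = sm.OLS(diff, sm.add_constant(t)).fit(cov_type="HAC", cov_kwds={"maxlags": 24})

print("trend difference (per decade):", round(120 * fit.params[1], 3))
print("HAC-robust p-value:           ", round(fit.pvalues[1], 4))
```

The fixed-b approach used in the papers treats the HAC bandwidth differently in the limit, which gives better-sized tests under strong autocorrelation, but the basic idea of testing a model-minus-observation trend with autocorrelation-robust inference is the same.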
Conclusion –
Over the 55-years from 1958 to 2012, climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed.
…
Read the entire story here: http://climateaudit.org/2014/07/24/new-paper-by-mckitrick-and-vogelsang-comparing-models-and-observations-in-the-tropical-troposphere/
“Gradually reality will overcome the fiction of the computer models.”
They reject your reality and substitute their own….
Darren:
I’ve read the McKitrick and Vogelsang paper but have not read Ross’s previous work. The McKitrick and Vogelsang paper cannot falsify the models used in making EPA regulations because these models were not falsifiable. A model is falsified when the observed relative frequencies of the outcomes of events fail to match the model-computed relative frequencies, but for these models there were no events or relative frequencies.
You are making your data and code available, what are you some sort of heretic?
Well….this paper has been sitting on my desk for a few days……I keep waiting on someone smarter than I am to tear into it…..cause I can’t seem to find anything wrong with it
It’s one of those “it is what it is” papers…..no magic….and a total game changer if it’s right
http://hockeyschtick.blogspot.ca/2014/07/new-paper-unexpectedly-finds-diverging.html
Paul Homewood mentions a step change in the PDO in 1976. I believe that this took place in July 1976, and was followed by similar (related?) changes in the NW Pacific area, such as Alaska – eg Fairbanks a few months later. I spotted this many years ago, and wonder why others seem to have missed it. Would like to post some plots, but don’t know how :-((
My impression is that pointing out glaring disparities between GCM simulations and observations has to be supplemented by detailed critical scrutiny of these models’ foundations. There are all sorts of excuses on offer for empirical failure, particularly as regards timescale and statistical (ensemble) interpretation. Although on the last point, I see the IPCC quoted as effectively arguing that just one simulation has to be accurate at any one time for GCM vindication (a line possibly echoed in Risbey et al). This empirical elusiveness by believers makes it important to tackle the claim that since fundamental conservation principles are incorporated into these models, they must be correct because these physical principles have been comprehensively (empirically) verified. Since the underlying dynamics generated by these conservation principles is derived from numerical approximations to the poorly understood Navier-Stokes equations, this argument does not impress much. Not to mention all the fixes and tuning added to these models in order to try to reproduce various observed major climate features. The physical credentials of some parameterizations are apparently less than convincing (just ‘brute force’ fixes?).
It needs people with the right technical background to provide an ongoing detailed critique of such model fundamentals, along with others highlighting discrepancies with reality.
Eliza says: July 24, 2014 at 9:52 am
Latitude: so basically human-produced CO2 has NO effect on global temperatures. In fact negative feedbacks are probably greater, and huge amounts of CO2 in the atmosphere, natural or otherwise, in the past seem to be related with ice ages, not warm periods. Is this correct?
I don’t get from Latitude’s post, nor the link to the paper therein, where the bold section of your question arises. I only ask because I find it an amusing perspective in the ice core proxies of CO2 vs Temperature, to ask why, ‘if CO2 is driving temperature, do we re-glaciate every time CO2 reaches a maximum’.
Quinn the Eskimo says “As for the remaining line of evidence, temperature records, it is perfectly obvious that current temps are well within natural variability. Therefore, all three of the lines of evidence on which the Endangerment Finding rests are completely busted.”
If Steven Goddard is correct then it is only a matter of time before GISS and NCDC have adjusted the past suitably down and the present up to make the models compare well with GISS/NCDC temperature graphs. That is all the EPA requires to justify its actions. This adjustment appears to be a work in progress and may take a little longer.
Quinn the Eskimo: Agreed. The Endangerment Finding is based largely on the IPCC models and their “conclusions”, not to mention the “Summary for Policy Makers” – which itself, as a made-for-prime-time press release, actually deviated in its conclusions from the science presented in other sections of the IPCC reports (purposefully?). Showing that the EPA’s Endangerment Finding is in fact based upon incomplete, if not flawed, sources would provide evidence that in all probability there was no Endangerment to Find in the first place – and then all of the bluster to control CO2 and rid the USA of inexpensive electrical power would be based upon….hot air??? I am of the opinion that this line of debunking the EPA Endangerment Finding would best serve the USA and its power generation future. Also, does anyone remember Willis’s “Thermostat Hypothesis”?
Reblogged this on Centinel2012 and commented:
Actually this should be no surprise; it results from overestimating CO2 forcings and underestimating natural processes. By properly modeling the heat/energy transfers from the tropics toward the poles, and combining that with a reasonable factor for carbon dioxide with a sensitivity under 1 degree C per doubling, a model can be constructed that will generate global temperatures in line with NASA-GISS global temperatures significantly better than any IPCC climate models.
“Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data they’d get a very different result, namely a significant overprediction by models. The IJOC would not publish our comment”
Says it all really.
@Robin Edwards.
IIRC, either Bob Tisdale or Bill Illis (or both) have plotted this 1976 ‘step-change’ or ‘climate-shift’ previously. You may have to do some digging to find the graphs, or if we ask them nicely they may be able to reproduce it here.
Hint…. 🙂
So how do the tropical oceans warm then, specifically from CO2 going from 280 ppm to 400 ppm? ….. and how is this heat transported to the deep oceans?
I think a better question is “By how many orders of magnitude off is Travesty Trenberth ?” Is it single figures, or could he be greater than 10 orders of magnitude off with his infantile thinking?
I guess Santer learned that truncating data can get tremendous results when he tried it for his (then unpublished) paper used in Chapter 8 of the IPCC SAR:
http://enthusiasmscepticismscience.wordpress.com/2012/07/01/madrid-1995-and-the-quest-for-the-mirror-in-the-sky-part-ii/michaelsknappenbergercontrasanternature12dec96/
and this better image from Michaels’ original post when the paper finally came out:
http://www.eskimo.com/~bpentium/articles/pm110697b.gif
Santer’s chapter was a real lesson in just what a difference start and end points can make. Another example:
http://enthusiasmscepticismscience.wordpress.com/2012/07/01/madrid-1995-and-the-quest-for-the-mirror-in-the-sky-part-ii/santer_patterncorrelationcomparison-2/
Bernie, I didn’t know that Santer had used exactly the same trick previously. Amazing. Around the time that we submitted this earlier comment, Real Climate had published a tirade against Courtillot for truncating data, more or less accusing him of misconduct. However, they apparently were unoffended by Santer’s data truncation. Santer et al 2007 was relied upon in the EPA Endangerment Finding. It was criticized in some comments. The EPA rejected these comments on the grounds that there had been enough time to publish a reply (this was shortly before Ross managed to publish MMH 2010). Concern about the assessment reports then underway appears to have been a motivation for keeping our comment on Santer et al out of the peer-reviewed literature, though the criticism was valid.
Endangerment finding Will Robinson, endangerment finding.
============================
My guess is that David Evans is correct, which is confirmed by other studies. The solar impulse does not need to be very visible; it is enough that it falls below the average minimum (a critical point).
http://pl.tinypic.com/view.php?pic=2rqyf4n&s=8#.U9HlwlV_suo
Please look carefully at the graph of the TSI. You can see that up to 2006 TSI still remained normal, followed by a decline below the minimum of previous cycles.
If we treat the strong solar minimum in 2008 as the solar signal and take into account the length of the previous cycle (12 years), the effect of this solar minimum will be seen around 2020. Of course, the temperature drop will be uneven, depending on the thermohaline circulation.
http://www.sciencedirect.com/science/article/pii/S1364682612000417
Steve,
See summary of Santer’s defense of data selection here:
http://www.realclimate.org/index.php/archives/2010/02/close-encounters-of-the-absurd-kind/
Scroll to: The “research irregularities” allegation
@Steve McIntyre
“Bernie, I didn’t know that Santer had used exactly the same trick previously. Amazing.”
It’s worth keeping in mind that Santer has since “recanted”. 😉
“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf
“the data exhibit a single jump in the late 1970s, with no statistically significant trend either side.”
A word of caution: if you look at a sine wave on a rising trend plus noise over a single cycle, you might see it as a step-change within a trendless sequence.
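A quick illustration of this caution, using invented numbers only (nothing from the paper): a genuine linear trend plus one full sine cycle plus noise produces a smooth S-shaped series, and a “single step, no trend” description can capture much of it.

```python
# Synthetic illustration: a true linear trend plus ONE sine cycle plus noise.
# Compare how much of the variation a pure linear trend and a pure step each explain.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 660                                        # ~55 years of monthly values
x = np.arange(n) / n
y = 0.6 * x - (0.6 / (2 * np.pi)) * np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

step = (x >= 0.5).astype(float)                # candidate break at mid-sample
r2_trend = sm.OLS(y, sm.add_constant(x)).fit().rsquared
r2_step = sm.OLS(y, sm.add_constant(step)).fit().rsquared
print(f"R^2 with linear trend only: {r2_trend:.2f}")
print(f"R^2 with level shift only:  {r2_step:.2f}")
```

Which description a formal test prefers depends on the noise and the autocorrelation, which is presumably why the paper’s framework allows for both a trend and a possible level shift rather than assuming one or the other.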
Isn’t the evidence starting to align with a new conclusion – increased CO2 is a good thing?
The models portray a relatively smooth upward trend over the whole span, while the data exhibit a single jump in the late 1970s, with no statistically significant trend either side.
Gradual divergence between land and N. Atlantic SST started in the late 1960s, leading to a sudden drop in the SST in the early 1970s which was not reflected in the land temperatures; to the contrary, land temperatures were grossly enhanced by the ex-Soviet Union’s abnormal +3C anomaly.
Surprisingly, the divergence between Land and Land & Ocean did not take place until the late 1970s (see the lower right-hand inset). The graph was done in 2011 and has been shown a couple of times on WUWT.
Will Nitschke:
The start of your post at July 25, 2014 at 1:20 am says
The “amazing” thing is that Steve McIntyre says he was not aware of the matter because it was an important part of the ‘Chapter 8 scandal’.
Santer did not “recant” until after the alterations to Chapter 8 had been published as part of an IPCC so-called “Scientific Report”; i.e. the Second Assessment Report SAR. He “recanted” of his “trick” (i.e. a flagrant scientific falsehood) after it had fulfilled its purpose.
Seitz and Singer did much to publicise the ‘Chapter 8 scandal’ and I demolished a claim of IPCC probity by citing it in an IPCC side-meeting organised by Fred Singer. Reminding people of the matter is still important because the political nature of the IPCC is still denied by some.
And over the years on WUWT I have been repeatedly citing the false claim Santer made that he had discovered a ‘fingerprint’ of anthropogenic (i.e. man-made) global warming (AGW). Most recently, I did it yesterday on another thread here where I wrote
Richard
More heat at the surface of the tropics means more clouds and more rain. Not higher temperatures. (Temperature IS NOT HEAT! Repeat it daily…)
The tops of clouds dump the heat, via condensation of rain / snow / hail, as IR that goes into the stratosphere. In the stratosphere, CO2 radiates that heat to space. More CO2 means more radiated heat from the stratosphere. More surface heat means more water radiating heat. In all cases, it is more heat transport up, up and away… There is no ‘trapped’ heat.
Latitude says:
July 24, 2014 at 1:15 pm
Latitude–At your suggestion, I read the Allan et al paper linked by Hockeyschtick. The description of the various datasets employed to try to extend the CERES measurements back to 1985 is an amazing saga of errors, assumptions, etc. that leads me to believe we do not have a handle on the radiation imbalance. As far as I can tell, their estimates have such wide error bands that they are not significantly different from zero. Note their 90% CIs in the abstract.
“Over the 1985-1999 period mean N (0.34 ± 0.67 W m−2) is lower than for the 2000-2012 period (0.62 ± 0.43 W m−2, uncertainties at 90% confidence level) despite the slower rate of surface temperature rise since 2000.”
Nonetheless, they apply their nonfindings to support the Trenberth “deep ocean” explanation of the pause. (Trenberth is acknowledged as a contributor to the paper.)