New Paper by McKitrick and Vogelsang comparing models and observations in the tropical troposphere
This is a guest post by Ross McKitrick (at Climate Audit). Tim Vogelsang and I have a new paper comparing climate models and observations over a 55-year span (1958-2012) in the tropical troposphere. Among other things we show that climate models are inconsistent with the HadAT, RICH and RAOBCORE weather balloon series. In a nutshell, the models not only predict far too much warming, but they potentially get the nature of the change wrong. The models portray a relatively smooth upward trend over the whole span, while the data exhibit a single jump in the late 1970s, with no statistically significant trend either side.
Our paper is called “HAC-Robust Trend Comparisons Among Climate Series With Possible Level Shifts.” It was published in Environmetrics, and is available with Open Access thanks to financial support from CIGI/INET. Data and code are here and in the paper’s SI.
Tropical Troposphere Revisited
The issue of models-vs-observations in the troposphere over the tropics has been much-discussed, including here at CA. Briefly to recap:
- All climate models (GCMs) predict that in response to rising CO2 levels, warming will occur rapidly and with amplified strength in the troposphere over the tropics. See AR4 Figure 9.1 and accompanying discussion; also see AR4 text accompanying Figure 10.7.
- Getting the tropical troposphere right in a model matters because that is where most solar energy enters the climate system, where there is a high concentration of water vapour, and where the strongest feedbacks operate. In simplified models, in response to uniform warming with constant relative humidity, about 55% of the total warming amplification occurs in the tropical troposphere, compared to 10% in the surface layer and 35% in the troposphere outside the tropics. And within the tropics, about two-thirds of the extra warming is in the upper layer and one-third in the lower layer. (Soden & Held p. 464).
- Neither weather satellites nor radiosondes (weather balloons) have detected much, if any, warming in the tropical troposphere, especially compared to what GCMs predict. The 2006 US Climate Change Science Program report (Karl et al 2006) noted this as a “potentially serious inconsistency” (p. 11). I suggest it is now time to drop the word “potentially.”
- The missing hotspot has attracted a lot of discussion at blogs (eg http://joannenova.com.au/tag/missing-hot-spot/) and among experts (eg http://www.climatedialogue.org/the-missing-tropical-hot-spot). There are two related “hotspot” issues: amplification and sensitivity. The first refers to whether the ratio of tropospheric to surface warming is greater than 1, and the second refers to whether there is a strong tropospheric warming rate. Our analysis focused on the sensitivity issue, not the amplification one. In order to test amplification there has to have been a lot of warming aloft, which turns out not to have been the case. Sensitivity can be tested directly, which is what we do, and in any case it is the more relevant question for measuring the rate of global warming.
- In 2007 Douglass et al. published a paper in the IJOC showing that models overstated warming trends at every layer of the tropical troposphere. Santer et al. (2008) replied that if you control for autocorrelation in the data the trend differences are not statistically significant. This finding was very influential. It was relied upon by the EPA when replying to critics of their climate damage projections in the Technical Support Document behind the “endangerment finding”, which was the basis for their ongoing promulgation of new GHG regulations. It was also the basis for the conclusion of the Thorne et al. (2011) survey that “there is no reasonable evidence of a fundamental disagreement between models and observations” in the tropical troposphere.
- But for some reason Santer et al truncated their data at 1999, just at the end of a strong El Nino. Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data they’d get a very different result, namely a significant overprediction by models. The IJOC would not publish our comment.
- I later redid the analysis using the full length of available data, applying a conventional panel regression method and a newer, more robust trend comparison methodology, namely the non-parametric HAC (heteroskedasticity and autocorrelation)-robust estimator developed by econometricians Tim Vogelsang and Philip Hans Franses (VF2005). I showed that over the 1979-2009 interval climate models on average predict 2-4x too much warming in the tropical lower- and mid-troposphere (LT, MT) layers and the discrepancies were statistically significant. This paper was published as MMH2010 in Atmospheric Science Letters.
- In the AR5, the IPCC is reasonably forthright on the topic (pp. 772-73). They acknowledge the findings in MMH2010 (and other papers that have since confirmed the point) and conclude that models overstated tropospheric warming over the satellite interval (post-1979). However they claim that most of the bias is due to model overestimation of sea surface warming in the tropics. It’s not clear from the text where they get this from. Since the bias varies considerably among models, it seems to me likely to be something to do with faulty parameterization of feedbacks. Also the problem persists even in studies that constrain models to observed SST levels.
- Notwithstanding the failure of models to get the tropical troposphere right, when discussing fidelity to temperature trends the SPM of the AR5 declares Very High Confidence in climate models (p. 15). But they also declare low confidence in their handling of clouds (p. 16), which is very difficult to square with their claim of very high confidence in models overall. They seem to be largely untroubled by trend discrepancies over 10-15 year spans (p. 15). We’ll see what they say about 55-year discrepancies.
Over the 55-years from 1958 to 2012, climate models not only significantly over-predict observed warming in the tropical troposphere, but they represent it in a fundamentally different way than is observed.
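The contrast between a smooth trend and a single level shift can be sketched on synthetic data. This is a toy illustration only, not the paper's HAC-robust test; the break date, shift size, and noise level are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic toy series (not the paper's data): 55 years of monthly
# anomalies that are flat, shift once in the late 1970s, then flat again.
t = 1958 + np.arange(55 * 12) / 12.0
shift = (t >= 1977).astype(float)              # hypothetical break date
y = 0.3 * shift + rng.normal(0.0, 0.15, t.size)

# Model 1: smooth linear trend, y = a + b*t
X_trend = np.column_stack([np.ones_like(t), t - t.mean()])
b_trend, *_ = np.linalg.lstsq(X_trend, y, rcond=None)
rss_trend = np.sum((y - X_trend @ b_trend) ** 2)

# Model 2: level shift only, y = a + c*shift (no trend on either side)
X_step = np.column_stack([np.ones_like(t), shift])
b_step, *_ = np.linalg.lstsq(X_step, y, rcond=None)
rss_step = np.sum((y - X_step @ b_step) ** 2)

print(rss_trend, rss_step)  # the step model fits this toy series better
```

On data generated with a genuine one-time jump, the shift-only model leaves a smaller residual sum of squares than the smooth trend, which is the shape of the distinction the paper tests formally.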
Read the entire story here: http://climateaudit.org/2014/07/24/new-paper-by-mckitrick-and-vogelsang-comparing-models-and-observations-in-the-tropical-troposphere/
Facts mean nothing to a religious zealot ..
Ross, putting your new paper together with this other new paper…..it doesn’t look like a stretch when people have been saying that CO2 effects are not logarithmic…..people might have been right about the absorption bands being saturated after all, or right about negative feedbacks
Sunday, July 20, 2014
New paper unexpectedly finds diverging trends in global temperature & radiative imbalance from greenhouse gases
A new paper published in Geophysical Research Letters finds that the radiative imbalance from greenhouse gases at the top of the atmosphere has increased over the past 27 years while the rate of global warming has unexpectedly decreased or ‘paused’ over the past 15+ years.
This finding contradicts expectations from AGW theory of increased ‘heat trapping’ from increased greenhouse gases. However, the finding is consistent with radiosonde observations showing that outgoing longwave radiation to space from greenhouse gases has unexpectedly increased rather than decreased over the past 62 years, inconsistent with more heat being “trapped” in the mid-upper troposphere.
Not surprising to those paying attention.
Latitude says at July 24, 2014 at 8:30 am
As the feedbacks include increased water vapour in the atmosphere – which absorbs at the same wavelength as CO2 – your “or” could well be an “and“.
Yes, I’ve just argued that increased water vapour is not a positive feedback.
Instead of empirically measuring overall climate sensitivity to dioxide of carbon forcing, they merely assumed water vapor torqued it up to possible infinity, dude, and if you complain, you’re just rude. The end of the world is nigh so you’re the one who’s high, dammit. To recalibrate models to downgrade sensitivity is silly when any fanatic can tell that the lull is just compressing yet another explosive spring of heat down full fathom five.
“Nothing of him that doth fade,
But doth suffer a sea-change
Into something rich and strange.”
My computer models say the planet is warming.
Actual data says it is not
Gradually reality will overcome the fiction of the computer models.
This glowball warming thingy is over, but the international greenie industry, who thought they had their Killer App and hitched their star to what seemed like a wonderful idea to achieve their political goals, are living in deep denial.
Meanwhile… NOAA long range predicts another polar air “blob” dropping down:
July 29 – Aug 2 ( 6 – 10 day outlook) Map
July 31 – Aug 6 (8 – 14 day outlook) Map
The paper offers readers the results of an IPCC-style “evaluation.” In an evaluation, observed temperature time series are plotted on temperature-time coordinates together with the associated computed time series. This comparison provides for visualization of the error.
It does not, however, provide for the validation or falsification of any of the models. For this purpose, the predicted relative frequencies of the outcomes of the events must be compared to the associated observed relative frequencies. If there is a match, the model is validated; otherwise it is falsified. This comparison cannot be made because events are not a concept in the methodology of research of this kind.
Latitude: so basically human produced CO2 has NO effect on global temperatures. In fact negative feedbacks are probably greater, and huge amounts of CO2 in the atmosphere, natural or otherwise, in the past seem to be related with ice ages not warm periods. Is this correct?
Reality is the ultimate peer review – but it is slower. The models are being falsified. But the facts are the real victim of the deception.
The reason that no ‘hot spot’ is observed at altitude is that the lapse rate (adiabatic temperature profile) controls temperature throughout the troposphere. Warming resulting from an increase in atmospheric moisture will cause the lapse rate to decrease. Surface temperature is actually determined at the effective emission height (eeh), which can be considered to be at the Planck temperature -18 C. The question is at what altitude the Planck temperature is reached, and this is the eeh. Then the Planck temperature, the lapse rate, and the altitude at eeh together determine the surface temperature.
This is in contrast to radiative models that consider surface temperatures to result from a radiation balance at the surface. If these models show a surface temperature increase from CO2 resulting in increased moisture and a decreased lapse rate, the result will be that temperatures at altitude will move closer to surface temperatures, hence the upper tropospheric ‘hot spot.’ However, surface temperatures are actually determined in the mid-troposphere at the altitude of the eeh. Then the temperature at eeh (-18 C), the altitude at eeh, and the lapse rate specify the surface temperature, so that a decrease in lapse rate will cause the surface temperature to move closer to -18 C, a cooling effect. CO2 may actually cause some warming (increase in eeh) but this will be offset by the decreased lapse rate as far as surface temperatures are concerned.
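The arithmetic in this comment can be illustrated with made-up but typical numbers. The 5 km emission height and 6.5 K/km lapse rate below are assumptions for the sketch, not values from the comment:

```python
# Illustrative numbers only: the 5 km emission height and 6.5 K/km lapse
# rate are assumptions for this sketch, not values from the comment.
T_eeh = -18.0   # degrees C, Planck (effective emission) temperature
h_eeh = 5.0     # km, assumed effective emission height
lapse = 6.5     # K/km, typical environmental lapse rate

T_surface = T_eeh + lapse * h_eeh
print(T_surface)  # 14.5 C, close to the observed global mean

# Moister atmosphere, smaller lapse rate, eeh held fixed:
T_surface_moist = T_eeh + 6.0 * h_eeh
print(T_surface_moist)  # 12.0 C: the lapse-rate decrease cools the surface
```

With the emission height held fixed, lowering the lapse rate lowers the computed surface temperature, which is the cooling offset the comment describes.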
Step change in the 1970s?
Now, where is Mosher to tell us this must be an error in the algorithm?
Pochas, if the surface temperature (thermal skin layer) is determined by the eeh, does that change with clouds?
The reason I ask is that I have been measuring the surface skin temperature and find that it is largely unchanging, at least for the tropics and subtropics.
I also find that the atmospheric radiation varies by up to 130 watts depending on cloud coverage or lack of cloud coverage, while I measure no temperature change in the thermal skin layer.
July 24, 2014 at 10:22 am
“Pochas, if the surface temperature (thermal skin layer) is determined by the eeh, does that change with clouds?”
I think perhaps not much, for low-level clouds. This is because cloud formation is a reversible process and therefore can cause no change in the heat balance at cloud level. Above the cloud, shortwave replaces longwave; below it, longwave from the surface is reflected downward. This pretty much echoes Lindzen, but I believe he says high altitude cirrus does have a warming effect. You might want to read his Iris Effect paper.
The hotspot was needed to put a cap on thunderstorms.
Without the hot spot a small increase in thunderstorms is sufficient to eliminate most of the extra energy being captured by CO2.
May I remind you of rule 1 of climate science: “if the models and reality differ in value, it’s reality which is in error.” See, no more problem!
M Courtney says:
July 24, 2014 at 8:55 am
Latitude says at July 24, 2014 at 8:30 am
people might have been right about the absorption bands being saturated after all, or right about negative feedbacks.
As the feedbacks include increased water vapour in the atmosphere – which absorbs at the same wavelength as CO2 – your “or” could well be an “and“.
Yes, I’ve just argued that increased water vapour is not a positive feedback.
Water vapour is not a positive feedback because it convects heat upward as latent heat. A volume of humid air is lighter than a volume of dry air, and will therefore rise. Heating will increase the convection of the humid air. The water vapour does not warm the atmosphere as the heat is latent until it is released higher in the atmosphere on a water state change, condensation or freezing. This release is as IR not as a ‘warming’ collision with Nitrogen or Oxygen.
Water or ice droplets falling to earth as rain / snow / hail will absorb heat from the air by collision or by absorbing IR, and thus cool the volume of air they are dropping through and eventually the surface. They eventually evaporate by taking up sufficient latent heat, and the hydrological cycle cooling process repeats.
The bundling in of water vapour as ‘just another GHG’ shows a fundamental misunderstanding of basic physics. Of course it is a negative feedback.
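The buoyancy claim above (humid air is lighter than dry air at the same temperature and pressure) follows from the lower molar mass of H2O. A minimal ideal-gas sketch, with an illustrative mole fraction:

```python
# Ideal-gas sketch: replacing dry air with lighter H2O molecules lowers
# the parcel's mean molar mass and hence its density at fixed T and P.
# The 2% mole fraction is an illustrative assumption.
R = 8.314        # J/(mol K), gas constant
P = 101325.0     # Pa, surface pressure
T = 300.0        # K, parcel temperature
M_DRY, M_H2O = 0.02897, 0.01802   # kg/mol

def density(vapour_mole_fraction):
    M = (1.0 - vapour_mole_fraction) * M_DRY + vapour_mole_fraction * M_H2O
    return P * M / (R * T)

print(density(0.00))  # dry parcel, ~1.18 kg/m^3
print(density(0.02))  # humid parcel: slightly less dense, so it rises
```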
July 24, 2014 at 8:30 am
My take is that the formula showing a logarithmic effect works fine in a laboratory where you are looking at changing [CO2]. The problem is taking that out into the atmosphere, where we know only some of the variables involved, including [CO2], while being ignorant of many others.
New excuse for the ‘pause’: Negative phase of the natural Interdecadal Pacific Oscillation
“Matthew ‘say anything’ England is back with a new paper which offers yet another excuse for the 17+ year pause in global warming: the negative phase of the natural Interdecadal Pacific Oscillation (IPO) [excuse #14 by my count].”
OK, if you have computer models of a _phenomenally_ complex system that don’t manage to produce projections that match measured reality, and 14 excuses are then put forward to explain why they don’t, each one claiming to completely explain away the problem, then it’s rather obvious to anyone that those attempting to model that complex system really don’t have an adequate handle on the factors involved in it. Isn’t it?
Ian W says at July 24, 2014 at 11:30 am
I tend to agree, as the evidence seems to show that to be so. But this makes us extreme sceptics. Without water vapour amplification the AGW hypothesis is unable to become newsworthy.
More than that: I propose that water vapour is a feedback that provides the stability in the climate system. That the effect of water vapour adjusts according to everything else in order to maintain the temperature of the planet at about the same level.
CO2 absorption is saturated and if it isn’t then the temperature would rise until the water vapour saturates it.
And potential water vapour reservoirs are in great excess – we have oceans of it. So the water vapour rise would be rapid. This is where we are, where we always are…
Thus I suggest that CO2 has a negligible effect on our planet.
This is another nail in the coffin for at least 2 of the 3 lines of evidence on which the EPA relied in its Endangerment Finding.
1. Physical Understanding of Climate – One of EPA’s lines of evidence is its claimed physical understanding of climate. EPA contends that its understanding of the physics of climate is sufficient to know that AGW is occurring and should be restrained by limiting GHG emissions. This paper, along with several others and the reams of empirical data on which they rest, refutes that. The claimed understanding predicts the hot spot. 55 years of observations show it doesn’t exist. Therefore, the claimed understanding is wrong, period.
2. Models. Another of EPA’s lines of evidence is models. EPA relies heavily on GCM’s to both attribute warming to man and to predict the future apocalypse. But those models are based on an invalidated physical understanding (see 1 above) and are themselves invalidated by these observations, among many others that could be offered, such as global average surface temperature, relative humidity, precipitation, etc.
As for the remaining line of evidence, temperature records, it is perfectly obvious that current temps are well within natural variability. Therefore, all three of the lines of evidence on which the Endangerment Finding rests are completely busted. Which is ironic considering that the Endangerment Finding is the premise for EPA’s regulatory rampage against GHG emissions.
First time I have commented on any Climate related Blog. But I have been a keen observer for many years and Anthony, I appreciate all of the work you have done over the years as well as your contributors!
One question that keeps coming to mind in light of a Peer reviewed paper like this from McKitrick and Vogelsang.
1. How many of the Peer reviewed studies which agree with CO2-based CAGW are based on the Models themselves as opposed to the Observations?
Given a paper like Cook et al. 2013, which we know has some major questions already about its validity, how many of those studies they view as confirming CAGW were based on the Models?
If the paper from McKitrick and Vogelsang and Ross’s previous work invalidate the models in use, then are those studies by default invalidated? Does this not make sense? Recent EPA regulations would be invalidated as well.
What do I know, I am just a Crazy Canuck up here in Canada pondering the science.
Ross McKitrick says: ” it seems to me likely to be something to do with faulty parameterization of feedbacks. Also the problem persists even in studies that constrain models to observed SST levels.”
Here I looked at ERBE TOA radiation budget in relation to the effects Mt Pinatubo.
I found that earlier estimations of Lacis et al 1992 ( Hansen was a co-author ) for the scaling of AOD to W/m2 radiative forcing, based on physical analysis, were in good agreement with the ERBE data (though they were derived mainly from El Chichon observations).
However, work by essentially the same group just a few years later reduced the earlier value of 33 to 21, thus _reducing_ the radiative forcing attributed to volcanoes.
This was done on the basis that the lower value worked better with climate models. The implied but unstated condition being: worked better with climate models without reducing the models’ high sensitivity.
Clearly if they had followed the data, instead of effectively changing the data to fit the models, they would find the models needed a lower sensitivity to match the data.
“Gradually reality will overcome the fiction of the computer models.”
They reject your reality and substitute their own….
I’ve read the McKitrick and Vogelsang paper but have not read Ross’s previous work. The McKitrick and Vogelsang paper cannot falsify models used in making EPA regulations because these models were not falsifiable. A model is falsified when the observed relative frequencies of the outcomes of events fail to match the model-computed relative frequencies but for these models there were no events or relative frequencies.
You are making your data and code available, what are you some sort of heretic?
Well….this paper has been sitting on my desk for a few days……I keep waiting on someone smarter than I am to tear into it…..cause I can’t seem to find anything wrong with it
It’s one of those “it is what it is” papers…..no magic….and a total game changer if it’s right
Paul Homewood mentions a step change in the PDO in 1976. I believe that this took place in July 1976, and was followed by similar (related?) changes in the NW Pacific area, such as Alaska – eg Fairbanks a few months later. I spotted this many years ago, and wonder why others seem to have missed it. Would like to post some plots, but don’t know how :-((
My impression is that pointing out glaring disparities between GCM simulations and observations has to be supplemented by detailed critical scrutiny of these models’ foundations. There are all sorts of excuses on offer for empirical failure, particularly as regards timescale and statistical (ensemble) interpretation. Although on the last point, I see the IPCC quoted as effectively arguing that just one simulation has to be accurate at any one time for GCM vindication (a line possibly echoed in Risbey et al). This empirical elusiveness by believers makes it important to tackle the claim that since fundamental conservation principles are incorporated into these models, they must be correct because these physical principles have been comprehensively (empirically) verified. Since the underlying dynamics generated by these conservation principles is derived from numerical approximations to the poorly understood Navier-Stokes equations, this argument does not impress much. Not to mention all the fixes and tuning added to these models in order to try to reproduce various observed major climate features. The physical credentials of some parameterizations are apparently less than convincing (just ‘brute force’ fixes?).
It needs people with the right technical background to provide an ongoing detailed critique of such model fundamentals, along with others highlighting discrepancies with reality.
Eliza says: July 24, 2014 at 9:52 am
Latitude: so basically human produced CO2 has NO effect on global temperatures. In fact negative feedbacks are probably greater, and huge amounts of CO2 in the atmosphere, natural or otherwise, in the past seem to be related with ice ages not warm periods. Is this correct?
I don’t get from Latitude’s post, nor the link to the paper therein, where the bold section of your question arises. I only ask because I find it an amusing perspective in the ice core proxies of CO2 vs Temperature, to ask why, ‘if CO2 is driving temperature, do we re-glaciate every time CO2 reaches a maximum’.
Quinn the Eskimo says “As for the remaining line of evidence, temperature records, it is perfectly obvious that current temps are well within natural variability. Therefore, all three of the lines of evidence on which the Endangerment Finding rests are completely busted.”
If Steven Goddard is correct then it is a matter of time before GISS and NCDC have adjusted the past suitably down and the current up to make the models compare well with GISS/NCDC temperature graphs. That is all the EPA require to justify their actions. This adjustment appears to be work in progress and may take a little longer.
Quinn the Eskimo: Agreed. The Endangerment Finding is based largely on the IPCC models and their “conclusions”, not to mention the “Summary for Policy Makers” – which itself, as a made-for-prime-time-press-release that in its conclusions actually deviated from the science presented in other parts of the various sections of IPCC reports (purposefully?). To show that the EPA’s Endangerment Finding is in fact based upon if not flawed then incomplete sources, would serve to provide evidence that in fact there was in all probability no Endangerment to Find in the first place – then all of the bluster to control CO2, rid the USA of inexpensive electrical power would be based upon….hot air??? I am of the opinion this line of debunking the EPA Endangerment Finding would best serve the USA and its power generation future to the positive. Also, does any one remember Willis’s “Thermostat Hypothesis”?
Reblogged this on Centinel2012 and commented:
Actually this should be no surprise; it results from overestimating CO2 forcings and underestimating natural processes. By properly modeling the heat/energy transfers from the tropics toward the poles, and combining that with a reasonable factor for carbon dioxide with sensitivity under 1 degree C per doubling, a model can be constructed that will generate global temperatures in line with NASA-GISS global temperatures significantly better than any IPCC climate models.
“Steve and I sent a comment to IJOC pointing out that if they had applied their method on the full length of then-available data they’d get a very different result, namely a significant overprediction by models. The IJOC would not publish our comment”
Says it all really.
@ Robin Edwards.
IIRC, either Bob Tisdale or Bill Illis (or both) have plotted this 1976 ‘step-change’ or ‘climate-shift’ previously. You may have to do some digging to find the graphs, or if we ask them nicely they may be able to reproduce it here.
So how do the tropical oceans warm then, specifically from CO2 going from 280 ppm to 400 ppm? ….. and how is this heat transported to the deep oceans?
I think a better question is “By how many orders of magnitude is Travesty Trenberth off?” Is it single figures, or could he be greater than 10 orders of magnitude off with his infantile thinking?
I guess Santer learned that truncating data can get tremendous results when he tried it for his (unpublished) paper used in Chapter 8 of IPCC SAR:
and this better image from Michael’s original post when the paper finally came out:
Santer’s chapter was a real lesson in just what a difference start and end points can make. Another example:
Bernie, I didn’t know that Santer had used exactly the same trick previously. Amazing. Around the time that we submitted this earlier comment, Real Climate had published a tirade against Courtillot for truncating data, more or less accusing him of misconduct. However, they apparently were unoffended by Santer’s data truncation. Santer et al 2007 was relied upon in the EPA Endangerment Finding. It was criticized in some comments. The EPA rejected these comments on the grounds that there had been enough time to publish a reply (this was shortly before Ross managed to publish MMH 2010). Concern about the assessment reports then underway appears to have been a motivation for keeping our comment on Santer et al out of peer reviewed literature though the criticism was valid.
Endangerment finding Will Robinson, endangerment finding.
My guess is that David Evans is correct, which is confirmed by other studies. The solar impulse does not need to be very large; it is enough that it falls below the average minimum (critical point).
Look closely at the graph of the TSI. You can see that up to 2006 TSI still remained normal, followed by a decline below the minima of previous cycles.
If we treat the strong solar minimum in 2008 as the solar signal, and take into account the 12-year length of the previous cycle, the effect of this solar minimum will be seen in 2020. Of course, the temperature drop will be uneven, depending on the thermohaline circulation.
See summary of Santer’s defense of data selection here:
Scroll to: The “research irregularities” allegation
“Bernie, I didn’t know that Santer had used exactly the same trick previously. Amazing.”
It’s worth keeping in mind that Santer has since “recanted”. 😉
“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
“the data exhibit a single jump in the late 1970s, with no statistically significant trend either side“.
A word of caution: if you look at a sine wave on a rising trend plus noise over a single cycle, you might see it as a step-change within a trendless sequence.
Isn’t the evidence starting to align with a new conclusion – increased CO2 is a good thing?
The models portray a relatively smooth upward trend over the whole span, while the data exhibit a single jump in the late 1970s, with no statistically significant trend either side.
Gradual divergence between land and N. Atlantic SST started in the late 1960s, leading to a sudden drop in the SST in the early 1970s, which was not reflected in the land temperatures; to the contrary, land temperatures were grossly enhanced by the ex Soviet Union’s abnormal +3C anomaly.
Surprisingly, divergence between Land and Land & Ocean did not take place until the late 1970s ( see lower right hand side inset ). The graph was done in 2011 and has been shown a couple of times on WUWT.
The start of your post at July 25, 2014 at 1:20 am says
The “amazing” thing is that Steve McIntyre says he was not aware of the matter, because it was an important part of the ‘Chapter 8 scandal’.
Santer did not “recant” until after the alterations to Chapter 8 had been published as part of an IPCC so-called “Scientific Report”; i.e. the Second Assessment Report SAR. He “recanted” of his “trick” (i.e. a flagrant scientific falsehood) after it had fulfilled its purpose.
Seitz and Singer did much to publicise the ‘Chapter 8 scandal’ and I demolished a claim of IPCC probity by citing it in an IPCC side-meeting organised by Fred Singer. Reminding people of the matter is still important because the political nature of the IPCC is still denied by some.
And over the years on WUWT I have been repeatedly citing the false claim Santer made that he had discovered a ‘fingerprint’ of anthropogenic (i.e. man-made) global warming (AGW). Most recently, I did it yesterday on another thread here where I wrote
More heat at the surface of the tropics means more clouds and more rain. Not higher temperatures. (Temperature IS NOT HEAT! Repeat it daily…)
The tops of clouds dump the heat via condensation of rain / snow / hail as IR that goes into the stratosphere. In the stratosphere, CO2 radiates that heat to space. More CO2 means more radiated heat from the stratosphere. More surface heat means more water radiating heat. In all cases, it is more heat transport up up and away… There is no ‘trapped’ heat.
July 24, 2014 at 1:15 pm
Latitude–At your suggestion, I read the Allan et al paper linked by Hockeyschtick. The description of the various datasets employed to try to extend the CERES measurements back to 1985 is an amazing saga of errors, assumptions, etc. that led me to believe we do not have a handle on the radiation imbalance. As far as I can tell, their estimates have such wide error bands they are nonsignificantly different from zero. Note their 90% CIs in the abstract.
“Over the 1985-1999 period mean N (0.34 ± 0.67 W m–2) is lower than for the 2000-2012 period (0.62 ± 0.43 W m–2, uncertainties at 90% confidence level) despite the slower rate of surface temperature rise since 2000.”
Nonetheless, they apply their nonfindings to support the Trenberth “deep ocean” explanation of the pause. (Trenberth is acknowledged as a contributor to the paper.)
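The quoted intervals can be checked directly. Assuming the two periods' 90% uncertainties are independent (an assumption, not something stated in the paper), the increase between the periods is smaller than its combined uncertainty:

```python
import math

# Rough significance check, assuming the two periods' 90% uncertainties
# are independent (an assumption, not stated in the paper).
mean_a, u_a = 0.34, 0.67   # 1985-1999 mean imbalance N, W/m^2
mean_b, u_b = 0.62, 0.43   # 2000-2012

diff = mean_b - mean_a
u_diff = math.hypot(u_a, u_b)  # combined uncertainty of the difference
print(diff, u_diff)  # ~0.28 vs ~0.80: the increase is within the noise
```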
Saying that they are “logarithmic” does not help you determine the coefficient in front of the log term, and log functions, like all other smooth functions, tend to be linear in the short run because of Our Friend the Taylor Series. In fact:

ln(1 + x) ≈ x − x²/2 + …

for |x| < 1. So saying that they are logarithmic is like saying that they are linear for anything like small changes. Even if x = 1/3 (the relative change from 300 to 400 ppm) the quadratic term represents only around a 5 or 6% nonlinear correction. So the only thing that matters is the constant in front of the log term.
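The size of that correction is easy to verify numerically; a minimal sketch comparing the exact log against its Taylor terms for the 300 → 400 ppm change:

```python
import math

# ln(1 + x) ~= x - x**2/2 for |x| < 1; the quadratic term is the
# leading nonlinear correction.  For 300 -> 400 ppm, x = 100/300.
x = 100.0 / 300.0
exact = math.log(1.0 + x)      # ln(4/3)
linear = x                     # first-order (linear) approximation
quadratic_term = x**2 / 2.0    # the "5 or 6%" nonlinear correction

print(round(exact, 4))           # 0.2877
print(round(linear, 4))          # 0.3333
print(round(quadratic_term, 4))  # 0.0556
```

The quadratic term (≈ 0.056) is indeed small next to the linear term (≈ 0.333), which is the point: over changes of this size the "logarithmic" response is nearly linear.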
I do have issues with the overall claims for CO_2 at saturation, however. One claim, for instance, is that part of the warming expected arises from additional pressure broadening as one adds more CO_2. However, pressure broadening arises from collisions between molecules and is sensitive to the absolute pressure, not the partial pressure of CO_2. Indeed, at the partial pressures at issue — less than 0.1% of the total atmosphere at a projected 600 ppm — any “variation” in pressure broadening due to alteration of CO_2 concentration is dwarfed by the real-time, substantial variation in pressure broadening due to gross atmospheric pressure changes. This is not a trivial effect, and it has long been studied by the telecommunications industry because it essentially alters the attenuation rate of various frequencies of electromagnetic radiation as air pressure changes with the weather. Any alteration of the pressure-broadened absorptive spectrum with CO_2 concentration would be utterly undetectable noise against this general background of rapid, substantial variation.
The actual expression for the expected log variation of transmittivity is the Beer–Lambert law:

T = I/I_0 = exp(−z/λ), with λ = 1/(σn)

where σ is the cross-section of the attenuator, n is the attenuator concentration, and λ is the mean free path of photons in the medium. The cross-section will not change in any measurable way with CO_2 variations of the order expected. The mean free path scales inversely with the concentration — doubling the concentration halves the mean free path.
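That scaling is easy to check numerically. The sketch below uses purely illustrative values for the cross-section, concentration, and path length (not measured CO_2 numbers), and shows that doubling the concentration halves the mean free path and squares the transmittivity:

```python
import math

# Beer-Lambert sketch: transmittivity over a path of length z is
#   T = exp(-z / lam),   with mean free path lam = 1 / (sigma * n)
# for absorber cross-section sigma and number density n.
def transmittivity(z, sigma, n):
    lam = 1.0 / (sigma * n)   # mean free path of photons
    return math.exp(-z / lam)

# Illustrative values only (chosen so z/lam = 0.1):
sigma = 1.0e-22   # cm^2, assumed cross-section
n = 1.0e16        # cm^-3, assumed absorber density
z = 1.0e5         # cm, a 1 km path

t1 = transmittivity(z, sigma, n)        # ~0.905
t2 = transmittivity(z, sigma, 2 * n)    # doubled concentration: ~0.819
print(t1, t2)                           # note t2 == t1**2 exactly
```

Doubling n doubles the optical depth z/λ, so the transmitted fraction is squared rather than halved — the exponential form of the law is what makes the forcing "logarithmic" in concentration in the first place.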
The really interesting question is whether or not the variation in transmittivity/absorptivity with CO_2 concentration is visible at all against the general background of:
a) Gross variations of air pressure with weather, which actually do cause direct variations of the pressure-broadened cross-section σ for all the molecular species that contribute to Beer–Lambert attenuation (where CO_2 is just one of many contributing species).
b) Gross variations of GHG concentrations with weather, particularly water vapor.
c) A wide range of nonlinear feedback mechanisms (such as the cloud/albedo link or nonlinear transport of latent heat vertically through the greenhouse layer or the variation of absorptivity with surface height or the seasonal variation of surface albedo or…).
Those processes could completely erase any expected Beer–Lambert increase with the blink of a feedback eye, drown it in noise to where it is irrelevant to the actual time evolution of climate on anything like a century time scale, amplify it beyond recognition to where we cook in our own juices, or anything in between — and we cannot even speak to one outcome being more probable than another at this point. The data, such as it is, suggests that natural variability vastly exceeds the response and that negative feedback, not positive feedback, dominates the climate response to any sort of additional forcing. But we have so little useful global data taken with adequate instrumentation and care that even the data is not, so far, a very trustworthy guide to the future.
Terry Oldberg says:
“A model is falsified when the observed relative frequencies of the outcomes of events fail to match the model-computed relative frequencies but for these models there were no events or relative frequencies.”
If this is true, then the models are of no value. Or said another way, they do not predict anything that can be calibrated against observations. So, if you are correct, the models are not proof of anything we can regulate. If you are wrong, they are invalid based on this paper. This paper puts the nail in the coffin.
Thanks for taking the time to respond.
It’s true that models of the type that are featured in the McKitrick and Vogelsang paper are of no scientific value in regulating the climate. This is quite significant, for models of this type are the basis for all of the regulations now on the books, including the EPA’s key “endangerment” finding. The endangerment finding looks to me as though it is illegal under the Daubert standard governing the admissibility of scientific testimony in federal legal proceedings, given the lack of falsifiability of the claims made by models of this type.
One should not generalize to the conclusion that all climate models are of no value, for in AR5, Chapter 11 of the Working Group 1 report describes (for the first time, I believe, in an IPCC assessment report) a model for which the underlying events exist. That chapter provides a comparison between the predicted and the observed relative frequencies of events. At this point Working Group 1 drops the scientific ball, for while this comparison provides a potential basis for falsifying or validating the model, they fail to inform their readers of whether the model has been falsified or validated. Also, their comparison seems to be based upon vastly more independent observed events than can possibly be extracted from the available global temperature time series.
It looks as though something is wrong with the argument made by Working Group 1 in Chapter 11. When I’ve got the time, I’ll drive over to the library of the closest research university and read some of the source material in the hope of determining what’s going on.
Great Terry… looks like you’re on to something.