By Christopher Monckton of Brenchley
The Christmas pantomime here in Paris is well into its two-week run. The Druids who had hoped that their gibbering incantations might begin to shorten the Pause during the United Necromancers’ pre-solstice prayer-group have been disappointed. Gaia has not heeded them. She continues to show no sign of the “fever” long promised by the Prophet Gore. The robust Pause continues to resist the gathering el Niño. It remains at last month’s record-setting 18 years 9 months (Fig. 1).
Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere temperature anomaly dataset continues to show no global warming for 18 years 9 months since March 1997, though one-third of all anthropogenic forcings have occurred during the period of the Pause.
The modelers ought to be surprised by the persistence of the Pause. NOAA, with rare honesty, said in its 2008 State of the Climate report that 15 years or more without warming would demonstrate a discrepancy between prediction and observation. One reason for NOAA’s statement is that there is supposed to be a sharp and significant instantaneous response to a radiative forcing such as adding CO2 to the air.
The steepness of this predicted response can be seen in Fig. 2, which is based on a paper on temperature feedbacks by Professor Richard Lindzen’s former student Professor Gerard Roe in 2009. The graph of Roe’s model output shows that the initial expected response to a forcing is supposed to be an immediate and rapid warming. But, despite the very substantial forcings in the 18 years 9 months since March 1997, not a flicker of warming has resulted.
Figure 2. Models predict rapid initial warming in response to a forcing. Instead, no warming at all is occurring. Based on Roe (2009).
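For readers who would like to see how such a response curve arises, here is a minimal sketch of a one-box (single time-constant) energy-balance response to a step forcing. It is not Roe’s model; the sensitivity and time constant below are illustrative assumptions only.

```python
import numpy as np

# Minimal one-box energy-balance sketch (NOT Roe's 2009 model): the solution of
# dT/dt = (lambda_eq * dF - T) / tau rises fastest immediately after the step
# forcing is applied and then flattens towards equilibrium.
lam_eq = 0.8     # illustrative equilibrium sensitivity, K per (W/m^2) -- assumed
tau    = 8.0     # illustrative response time constant, years -- assumed
dF     = 1.0     # step forcing, W/m^2

t  = np.linspace(0.0, 50.0, 501)               # years after the step
dT = lam_eq * dF * (1.0 - np.exp(-t / tau))    # closed-form solution of the one-box ODE

# The warming rate is greatest at t = 0 and decays exponentially thereafter:
# this is the "rapid initial response" referred to in the text.
for yr in (1.0, 10.0, 50.0):
    print(f"warming after {yr:4.0f} yr: {np.interp(yr, t, dT):.3f} K")
```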
The current el Niño, as Bob Tisdale’s distinguished series of reports here demonstrates, is at least as big as the Great el Niño of 1998. The RSS temperature record is beginning to reflect its magnitude.
Figure 3. The glaring discrepancy between IPCC’s predicted range of warming from 1990-2015 (orange zone) and the outturn (blue zone).
The sheer length of the Pause has made a mockery of the exaggerated prediction made by IPCC in 1990 to the effect that there should have been 0.72 [0.50, 1.08] degrees’ global warming by now. The observed real-world warming since 1990, on all five leading global datasets, is 0.24-0.44 degrees, or one-third to three-fifths of IPCC’s central prediction and well below its least prediction (Fig. 3).
The Pause will probably shorten dramatically in the coming months and may disappear altogether for a time. However, if there is a following la Niña, as there often is, the Pause may return at some time from the end of next year onward.
The hiatus period of 18 years 9 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate.
The start-date for the Pause has been inching forward, though just a little more slowly than the end-date, which is why the Pause continues on average to lengthen.
So long a stasis in global temperature is simply inconsistent with the extremist predictions of the computer models. It raises legitimate questions about whether they overstate the value of the radiative forcing in response to a proportionate change in CO2 concentration.
The UAH dataset shows a Pause almost as long as the RSS dataset. However, the much-altered surface tamperature datasets show a small warming rate (Fig. 4).
Figure 4. The least-squares linear-regression trend on the mean of the GISS, HadCRUT4 and NCDC terrestrial monthly global mean surface temperature anomaly datasets shows global warming at a rate equivalent to 1.1 C° per century during the period of the Pause from January 1997 to September 2015.
Bearing in mind that one-third of the 2.4 W m⁻² radiative forcing from all manmade sources since 1750 has occurred during the period of the Pause, a warming rate equivalent to little more than 1 C°/century is not exactly alarming.
As always, a note of caution. Merely because there has been little or no warming in recent decades, one may not draw the conclusion that warming has ended forever. The trend lines measure what has occurred: they do not predict what will occur.
The Pause – politically useful though it may be to all who wish that the “official” scientific community would remember its duty of skepticism – is far less important than the growing discrepancy between the predictions of the general-circulation models and observed reality.
The divergence between the models’ predictions in 1990 and the observed outturn continues to widen. If the Pause lengthens just a little more, the rate of warming in the quarter-century since the IPCC’s First Assessment Report in 1990, taken as the mean of the RSS and UAH data, will fall below 1 C°/century equivalent (Fig. 5).
Figure 5: The mean of the RSS and UAH satellite data for the 311 months January 1990 to November 2015. The warming rate is equivalent to just 1.04 C° per century.
Roy Spencer, at drroyspencer.com, says 2015 will probably be the third-warmest year in the satellite record since 1979 on his UAH dataset, but thinks it likely that, since the second year of an el Niño is usually warmer than the first, 2016 may prove to be the warmest year in the satellite record, beating 1998 by 0.02-0.03 degrees.
The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. In a rational scientific discourse, those who had advocated extreme measures to prevent global warming would now be withdrawing and calmly rethinking their hypotheses.
Key facts about global temperature
• The RSS satellite dataset shows no global warming at all for 225 months from March 1997 to November 2015 – more than half the 443-month RSS record.
• There has been no warming even though one-third of all anthropogenic forcings since 1750 have occurred since the Pause began in March 1997.
• The entire UAH dataset for the 444 months December 1978 to November 2015 shows global warming at an unalarming rate equivalent to just 1.14 Cº per century.
• Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
• The global warming trend since 1900 is equivalent to 0.75 Cº per century. This is well within natural variability and may not have much to do with us.
• The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.
• Compare the warming on the Central England temperature dataset in the 40 years 1694-1733, well before the Industrial Revolution, equivalent to 4.33 C°/century.
• In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.
• The warming trend since 1990, when the IPCC wrote its first report, is equivalent to 1 Cº per century. The IPCC had predicted close to thrice as much.
• To meet the IPCC’s central prediction of 1 C° warming from 1990-2025, in the next decade a warming of 0.75 C°, equivalent to 7.5 C°/century, would have to occur.
• Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100.
• The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.
• The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.
• The oceans, according to the 3600+ ARGO buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century, or 1 C° in 430 years.
• Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.
Technical note
Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
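A minimal sketch of that calculation is below. It assumes the monthly anomalies have already been read into an array (the real RSS text file has its own header and layout, which are not reproduced here); the search simply walks the candidate start months forward until the least-squares trend from that month to the latest month first becomes zero or negative.

```python
import numpy as np

def trend_per_decade(anoms):
    """Least-squares slope of a monthly anomaly series, in degrees per decade."""
    x = np.arange(len(anoms)) / 120.0            # month index expressed in decades
    slope, _intercept = np.polyfit(x, anoms, 1)
    return slope

def earliest_zero_trend_start(anoms, min_months=24):
    """Earliest month index from which the trend to the latest month is <= 0."""
    for start in range(len(anoms) - min_months):
        if trend_per_decade(anoms[start:]) <= 0.0:
            return start
    return None                                   # even the most recent data warm

# Illustrative run on synthetic data (a warming segment followed by a flat one);
# with the real series, 'start' would correspond to March 1997.
rng = np.random.default_rng(0)
series = np.concatenate([np.linspace(-0.4, 0.3, 220),
                         0.3 + 0.1 * rng.standard_normal(225)])
start = earliest_zero_trend_start(series)
if start is None:
    print("no zero-trend period found")
else:
    print(f"zero trend over the most recent {len(series) - start} of {len(series)} months")
```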
The fact of a long Pause is an indication of the widening discrepancy between prediction and reality in the temperature record.
The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than the rest to capture such fluctuations without artificially filtering them out.
Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.
The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file and plots them automatically, using a routine that adjusts the aspect ratio of the data window on both axes so as to show the data at maximum scale, for clarity.
The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.
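The slope and y-intercept referred to above are the standard ordinary-least-squares estimates. A minimal sketch of the closed-form calculation (equivalent to numpy.polyfit with degree 1):

```python
import numpy as np

def ols_line(x, y):
    """Ordinary least-squares fit y = a + b*x; returns (intercept a, slope b)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    b = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)   # slope
    a = ybar - b * xbar                                             # y-intercept
    return a, b

# A noiseless test line is recovered exactly (up to rounding):
x = np.arange(12.0)                  # e.g. 12 months
y = 0.50 + 0.02 * x                  # intercept 0.50, slope 0.02 per month
print(ols_line(x, y))                # -> approximately (0.5, 0.02)
```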
The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend.
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures.
Dr Mears’ results are summarized in Fig. T1:
Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of El Chichón (1982) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998.
Dr Mears writes:
“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation. This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”
Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph:
“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades. Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site. Is this really your data?’ While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate. … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”
In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. The headline graph in these monthly reports begins in 1997 because that is as far back as one can go in the data and still obtain a zero trend.
Fig. T1a. Graphs for RSS and GISS temperatures starting both in 1997 and in 2001. For each dataset the trend-lines are near-identical, showing conclusively that the argument that the Pause was caused by the 1998 el Niño is false (Werner Brozek and Professor Brown worked out this neat demonstration).
Curiously, Dr Mears prefers the terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite data to calibrate its own terrestrial record.
The length of the Pause, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed.
Sources of the IPCC projections in Figs. 2 and 3
IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:
“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”
That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted.
In 1990, the IPCC said this:
“Based on current models we predict:
“under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii).
Later, the IPCC said:
“The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv).
The orange region in Fig. 3 represents the IPCC’s medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025.
The IPCC’s predicted global warming over the 25 years from 1990 to the present differs little from a straight line (Fig. T2).
Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).
Because the difference between a straight line and the slight uptick in the warming rate that the IPCC predicted for 1990-2025 is so small, one can look at the prediction another way. To reach the central estimate of 1 K of warming from 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.
But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).
Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).
Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date.
True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless.
The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn.
Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality.
To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.27 Cº, equivalent to little more than 1 Cº/century. The IPCC’s central estimate of 0.72 Cº, equivalent to 2.8 Cº/century, predicted with “substantial confidence” for Scenario A in IPCC (1990), was nearly three times too big. In fact, the outturn is visibly well below even the least estimate.
In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. 5 shows, even that is proving to be a substantial exaggeration.
Is the ocean warming?
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.
Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in some 200,000 cubic kilometres of ocean – a box roughly 316 km square and 2 km deep. Plainly, the results on the basis of a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.
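The volume-per-buoy figure can be checked in a few lines; the upper-ocean volume used below is a rough assumed round number (ocean surface area times the ~2 km depth the floats profile), not a measured value.

```python
import math

# Rough check of the per-buoy sampling volume quoted above.
ocean_area_km2     = 3.6e8                            # approximate global ocean surface area
argo_depth_km      = 2.0                              # ARGO floats profile roughly the top 2 km
upper_ocean_volume = ocean_area_km2 * argo_depth_km   # ~7.2e8 km^3 (ignores shallow seas)
n_buoys            = 3600

volume_per_buoy = upper_ocean_volume / n_buoys        # ~2.0e5 km^3
box_area        = volume_per_buoy / argo_depth_km     # ~1.0e5 km^2
box_side        = math.sqrt(box_area)                 # ~316 km

print(f"volume per buoy : {volume_per_buoy:,.0f} km^3")
print(f"sampling box    : {box_side:.0f} km x {box_side:.0f} km x {argo_depth_km:.0f} km deep")
```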
Results for the 11 full years of ARGO data are plotted in Fig. T5. The ocean warming, if ARGO is right, is just 0.02 Cº decade⁻¹, equivalent to 0.2 Cº century⁻¹.
Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).
Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which makes the change seem a whole lot larger.
The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.
Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured. NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.
Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade⁻¹, equivalent to 0.5 Cº century⁻¹, or rather more than double the rate shown by ARGO.
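The conversion back from zettajoules to a mean temperature change is simply ΔT = ΔQ / (m × cp). A minimal sketch with the 260 ZJ figure quoted above; the mass of the 0-2000 m layer and the specific heat of seawater are assumed round values:

```python
# Convert an ocean heat-content change in zettajoules back to a mean temperature
# change of the layer concerned: dT = dQ / (m * cp).
dQ_J        = 260e21     # 260 ZJ, the 1970-2014 change quoted in the text
layer_mass  = 6.6e20     # kg, assumed mass of the 0-2000 m ocean layer
cp_seawater = 3990.0     # J per kg per K, typical specific heat of seawater

dT = dQ_J / (layer_mass * cp_seawater)     # ~0.1 K over the whole interval
years = 2014 - 1970
print(f"temperature change 1970-2014 : {dT:.2f} K")
print(f"equivalent centennial rate   : {dT * 100.0 / years:.2f} K/century")
```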
ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution.
What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way.
On these data, too, there is no evidence of rapid or catastrophic ocean warming.
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.
Figure T7. Near-global ocean temperatures by stratum, 0-1900 m, providing a visual reality check to show just how little the upper strata are affected by minor changes in global air surface temperature. Source: ARGO marine atlas.
Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean.
Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.
If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.
Why were the models’ predictions exaggerated?
In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T8):
Figure T8. Predicted manmade radiative forcings (IPCC, 1990).
However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990 (Fig. T9):
Figure T9: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).
Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T10):
Figure T10. Energy budget diagram for the Earth from Stephens et al. (2012)
In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.
It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC.
Professor Ray Bates gave a paper in Moscow in summer 2015 in which he concluded, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T11), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both of Lindzen & Choi and of Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.
Figure T11. Reality (center) vs. 11 models. From Lindzen & Choi (2009).
A growing body of reviewed papers find climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences. On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.
It is interesting to see how the warming rate, expressed as degrees per century equivalent, has changed since 1950 (Fig. T12).
Figure T12. Changes in the global warming rate, 1950-2005.
Finally, how long will it be before the Freedom Clock (Fig. T13) reaches 20 years without any global warming? If it does, the climate scare will become unsustainable.
Figure T13. The Freedom Clock edges ever closer to 20 years without global warming
If this is a dumb question, let me know, but – how is it that an El Niño might cause the earth to heat up. I can understand it facilitating heat redistribution across the surface of the earth but that should just lead to heating in one place and cooling in another. If there are more thermometers placed in the heating areas that would lead to a false perception of global heating.
To heat the Earth the El Niño must somehow prevent incremental heat being radiated into space. What would be the mechanism for that to happen?
What am I missing?
You miss nothing. As you say ninos just redistribute heat, perhaps like the Carbon moonies wish to redistribute wealth. Like the Carbon moonies create no wealth, ninos create no heat.
Ninos facilitate rather than retard radiation to space. This can be clearly seen from stratospheric temperature spikes from every strong nino. (Not to mention monsoons)
There is a net flow of heat from the sun to the oceans, and the heat has to come back out. Some is radiated back out to space, and some is transferred from the ocean to the atmosphere, from which it radiates back out to space. However, the transfer from the ocean to the atmosphere is unsteady, in part because the easterly trade winds in the Pacific are not steady in their pushing of warm equatorial water westward. During an El Nino, the easterly trade winds weaken, and the warm water in the western equatorial Pacific spreads eastward. This spread of warm water increases transfer of heat from the ocean to the atmosphere. During a La Nina, stronger easterly winds push the warm equatorial Pacific water more westward and into a more confined area, and transfer of heat from the ocean to the atmosphere drops.
I was an engineer for 35 years. Back then if you gave more credence to a model prediction than to actual data it would have been career limiting. I guess the so called climate experts operate under a different set of rules.
Gaia hates ecoloons and warmists.
No analogy is perfect, however I will try to take a stab at this. Let us presume that the 18 years and 9 months is one huge sine curve that starts at 0 degrees and ends at 360 degrees. To draw the longest line of zero slope would be a line from 0 to 360. Now let us suppose that a new point is added, namely at 361 degrees. The longest straight line is as long as before, but it now goes from 1 degree to 361 degrees. So the starting point moved ahead by 1 degree.
Now let us jump to 450 degrees. The longest straight line is now from 90 to 450, so it is just as long as before, but advanced by 90 degrees.
But suppose we jump to 460 degrees where it is now going down. We can now get a straight line by starting at 100 degrees or 80 degrees. However from 80 degrees to 460 is 380 degrees so that is longer than before.
What does this have to do with the length of the pause? Going from 360 to 450 is like having new anomalies above 0.24 where the start date may go up by a month or more. However having new anomalies below 0.24 is where the start date may go earlier by a month or more.
Since May, anomalies have been above 0.24. So depending on how high the higher anomaly is, and on how low the previous start date was, one of two things will happen when future anomalies are above 0.24. Either the start date is unchanged, but the new negative slope is less than before, or the start date will advance. Right now, the start is March 1997. Should it reach the spike of December 1997, the pause as we know it is over.
I think you may be confusing GISS with RSS. RSS is not changing anything, but if they did, then a cooler 1997 would make the pause last longer which is the last thing Dr. Mears would want.
However GISS wants a cooler 1998 to eliminate the pause and make the last 17 years warm up more.
I often wander, lonely as a cloud, amongst the reference pages and, in the context of this post, I noticed something in the atmosphere pages section that puzzled me, ignorant as I am of basic meteorology.
The UAH surface temperature plot is given above from 1980 to present. To me it looks like a set of two trends: 1980-2002, a slight increase and very noisy; 2002-present, basically flat.
But now look at the reference plots for the atmosphere , 1980 – present :
ftp://ftp.ssmi.com/msu/graphics/tlt/plots/rss_ts_channel_tlt_global_land_and_sea_v03_3.png (lower troposphere)
ftp://ftp.ssmi.com/msu/graphics/tmt/plots/rss_ts_channel_tmt_global_land_and_sea_v03_3.png (middle troposphere)
ftp://ftp.ssmi.com/msu/graphics/tts/plots/rss_ts_channel_tts_global_land_and_sea_v03_3.png (troposphere/stratosphere)
ftp://ftp.ssmi.com/msu/graphics/tls/plots/rss_ts_channel_tls_global_land_and_sea_v03_3.png (lower stratosphere)
In these plots it seems to me that the 2002-present section is consistently flat, but that the 1980-2000 section shows a trend going from gradual warming to gradual cooling as the height of the sensed region increases.
Is this in agreement with conventional global warming theory?
Final question: The North Atlantic jet stream is at present being blamed in the media for the constant series of low-pressure regions hitting the NW of England, with associated high winds. The implication is that this abnormally wet, stormy and warm weather is directly attributable to the ferocity of the jet stream. But if the latter is the result of the temperature difference between the tropical and polar regions at the 9-20 km height, and the RSS record shows minimal change at that height over the past 35 years, how can the present unpleasant weather be attributed to advanced global warming acting through the medium of the jet stream?
This is a result of the AMO moving into its cool phase; similar weather patterns occurred when this happened in the past.
An increase of greenhouse gases is predicted to warm the surface and lower troposphere, and cool the upper troposphere and stratosphere. The top level of the atmosphere is cooled because more greenhouse gases increase its ability to radiate heat. The lowest levels are warmed because thermal radiation is absorbed and reradiated more times on the way out, and has more instances of being temporarily turned back towards the surface.
As for the jet stream and the Pacific NW storms: What’s happening now does not look unusual to me in terms of intensity of the jet stream and the storms. The intensity of the jet stream has to do with horizontal temperature gradient at and below the altitude of the jet stream. Since the Arctic has been warming more than the tropics, this weakens the northern hemisphere jet stream. Supposedly that is a problem, because a weaker jet stream is supposed to kink up more and cause stagnant weather patterns that cause droughts and floods. Over the decades worldwide, I don’t see the jet stream actually being kinked up more than it used to be. And there are signs that windstorms other than tropical cyclones have gotten very slightly milder, especially USA tornadoes of strength F2/EF2 and stronger.
Thank you Donald and “bit chilly”. I would like to explore the trends in the atmospheric RSS results again (without messing up the links) sometime, because there does seem to be a discontinuity in the long-term trends at the 2000-2002 period. Before 2002 the trend appears to be a decrease over time with altitude, whereas after 2002 the trend is essentially flat.
I did not realise that a weaker jet stream could be a factor in the present unpleasant, but overall mild, weather here in England. Not quite the picture being presented on some of the BBC programmes. The BBC could be such a useful medium for education of the general public if only it could be trusted to be unbiased.
There is no need for flat/falling global temp trends to disconfirm the CAGW hypothesis.
All that’s required for disconfirmation is for CAGW hypothetical trend projections to exceed reality by 2+ standard deviations for a statistically significant duration.
CAGW alarmists realize CAGW is on the cusp of disconfirmation, which is why the bogus KARL2015 paper was so essential to keep projections within 2 standard deviations of revised GISTEMP/HADCRUT4 datasets, and why alarmists try to pretend RSS and UAH datasets don’t exist…
Without the KARL2015 paper, official disconfirmation under the Scientific Method would have been inevitable in around 5 years.
The gigantic disparities between RSS/UAH satellite datasets vs. GISTEMP/HADCRUT4 are already untenable, and will continue to diverge as CAGW alarmists fiddle with raw temp data to avoid CAGW’s inevitable disconfirmation.
Regarding: “The current el Niño, as Bob Tisdale’s distinguished series of reports here demonstrates, is at least as big as the Great el Niño of 1998”:
The current El Nino is as great as that of 1998 only when counting the Nino 3.4 region. The Nino regions outside 3.4 overall, and notably especially east of 3.4, have significantly lower temperature anomaly than the 3.4 region does according to Bob Tisdale in http://wattsupwiththat.com/2015/11/17/is-the-current-el-nino-stronger-than-the-one-in-199798/
” SAMURAI
December 4, 2015 at 5:11 pm
There is no need for flat/falling global temp trends to disconfirm the CAGW hypothesis.”
+++++++++++++++++++++++++++++++++++++++++++++++++
Actually I don’t care if it IS warming. What I want to know is what the CAUSE is. Where is the proof that CO2 is the control knob? Until that is shown, colour me skeptical.
I have noticed recently that a lot of people here are members or about to be members of the ROFC so we have seen cycle upon cycle and one more is of little concern. I remember several brown Christmases of the past. Haven’t had one in some time. Has the regional climate gotten colder? 😉 Just kidding.
Sir Christopher has long touted the RSS series as the “Received data set” because of its apparent sensitivity to the ’97/’98 El Nino – that is, a spike larger than the other data sets. This is convenient if you want to use statistical sleight of hand to give the illusion of a pause.
With this present El Nino, the RSS’ lack of sensitivity (so far) is becoming noticeable. The other major data sets are well ahead of RSS on this:
http://www.climate4you.com/
The question is not how long is the RSS ‘pause’, but: Is RSS broken?
Village Idiot:
1) If temperatures were increasing at an increasing rate, sea level would be increasing at an increasing rate. It isn’t.
2) Your selective moral outrage is laughable. RSS broken? Have you looked at the ground measurement methodology? Saul Alinsky: “Accuse others of what you are guilty.”
3) Atmospheric CO2 can’t cause record day time temperatures or the oceans to warm.
4) Atmospheric CO2 can’t increase before temperatures to drive the earth out of an ice age, nor can it decrease to drive the earth back into an ice age. You can’t even explain the basics of the geologic record.
Your “science” is a joke.
https://youtu.be/QowL2BiGK7o
This is really bad news for the alarmists. A trend is defined as a series of higher highs and higher lows, or a series of lower highs and lower lows. The temperature charts are forming at least half of a down-trend. We have established a series of lower highs. If El Niños can’t drive temperatures higher, even at much higher CO2 levels, the Alarmists are running out of evidence to support their nonsense. But then again, we already knew that. This documentary pretty much predicted all the nonsense you are seeing today.
https://youtu.be/QowL2BiGK7o
News Flash!!! IR between 13 and 18 microns doesn’t warm water. Visible light warms the oceans. CO2 is transparent to visible light. What is warming the oceans is also warming the atmosphere, just like a burner warms the air above it.
So with the warmies on Hijra to Paris, here’s a question for them:
99.5% (the new consensus ratio announced by Obama) believe in the “official” position which I’m told is 4C in this century.
The IPCC has been making forecasts since 1990. That’s 25 years.
Given this is long enough to accommodate natural variation we should now have unequivocal proof of a full 1C of warming. Where is it?
In that entire treatise I didn’t see one mention of NOAA’s most recent report in Science magazine. I believe it was in the July 2015 issue. If they are correct, one can throw all of the above in the trash.
See my separate posting on Karl et Al and their paper that is now under Congressional investigation.
I believe that that particular “witch hunt” (Congressional investigation) has gone the way of most CIs, down the trash chute. The particular paper was peer reviewed before it was published in Science. That’s more than one can say for your work and for “witch hunts”.
Although I share Lord Monckton’s belief that the sensitivity of temperature to carbon-dioxide concentration is quite low, I caution that statements such as the following need to be taken with a grain of salt:
To contend that Monckton et al. “found” the sensitivity to be “in the region of 1 C° per CO2 doubling” is highly misleading. It would be more accurate to say that the authors merely guessed that value.
Once you’ve slogged through the logorrhea in which they camouflaged the fact, you find that the basis for their “finding” is nothing more than their §8.5 postulate that “temperature feedbacks are at most weakly net positive, with loop gain g on [-0.5, +0.1] as Fig. 5 and 810,000 years of thermostasis suggest.” For an open-loop-gain value of 1/3.2 K per W/m^2 and forcing value of 3.7 W/m^2 for doubled CO2 concentration (the values favored by the IPCC, according to their paper) the average (-0.2) of those loop-gain values does indeed result in about 1 C° of temperature increase. But that is the extent of their reasoning.
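For readers following the arithmetic, the relation being used is the standard zero-dimensional feedback equation ΔT = λ₀ΔF / (1 − g). A minimal sketch with the numbers quoted above (they are the values stated in the paragraph, not an independent estimate):

```python
# Zero-dimensional feedback relation: dT = lambda0 * dF / (1 - g),
# evaluated with the numbers quoted above.
lambda0  = 1.0 / 3.2     # K per (W/m^2), open-loop (no-feedback) gain
dF_2xCO2 = 3.7           # W/m^2, forcing from a doubling of CO2

for g in (-0.5, -0.2, 0.0, +0.1):
    dT = lambda0 * dF_2xCO2 / (1.0 - g)
    print(f"loop gain g = {g:+.1f} -> equilibrium warming ~ {dT:.2f} K per doubling")
# g = -0.2, the midpoint of [-0.5, +0.1], gives ~0.96 K, i.e. "about 1 K per doubling".
```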
Yes, the historical “thermostasis” does make it seem implausible that feedback is very positive, but where did Monckton et al.’s “[-0.5, +0.1]” come from? Why not [-1.5, -0.5]? Or [+0.1, +0.5]? After all, if you apply the average of that last range to the forcing trend of the last 63 years, for which Monckton et al.’s Fig. 6 displays the 0.11 K/decade HadCRUT4 global-average-surface-temperature trend, the value you get (0.10 K/decade) is closer to that observed 0.11 K/decade than the value (0.07 K/decade) obtained by applying Monckton et al.’s range average. (And it implies a temperature sensitivity of over 2 C° if you go by the Roe curves.)
(Incidentally, Monckton et al. didn’t get the 0.9 K/decade value labeled “simple model” in their evidence-of-skill Fig. 6 by applying that “simple model”—i.e., the result of using their -0.2-loop-gain guess in the feedback equation—to the corresponding 63-year forcing trend. Instead, they applied their “simple model” to a forcing trend half again as large as the IPCC-suggested forcings for the interval over which that temperature trend actually occurred: they applied it to one of the RCP forcing projections for the rest of the century. With Lord Monckton it’s important to keep your eye on the pea.)
Now, there are undoubtedly circles in which the mere fact that Christopher Monckton guessed a loop gain of –0.2 passes for adequate reason to conclude that temperature sensitivity is around 1 C°. If you want to be taken seriously by people who actually understand the disciplines on which Monckton et al. purportedly based their paper, though, you would be well advised to avoid mentioning that paper.
Joe Born:
You say
NO!
Your falsehoods about guesses have been completely refuted several times and by several people including Lord Monckton here.
It would be acceptable if you were to claim,
“It would be more accurate to say that the authors calculated that value”.
Your deliberate falsehoods are merely more of your trolling.
Richard
(Please don’t assume another poster is being dishonest just because they have a differnt point of view. Discussion usually is sufficient to resolve most differences. -mod)
Thank you for proving once again that in some circles a proposition is accepted merely because Lord Monckton has uttered it.
But for those who can think for themselves and have some knowledge of linear systems, feedback, and circuit analysis, i.e., of the disciplines on which the authors purported to base their paper’s conclusions, please do follow richardscourtney’s link and determine for yourselves whether the “refutation” set forth in that thread comes even within shouting distance of addressing the factual points that I raised.
You will find all bluster, no substance.
Joe Born:
You say
NO! I have demonstrated that I don’t accept a falsehood merely because you have posted it.
I linked to one of several refutations of your falsehoods. Your internet stalking of Lord Monckton is offensive behaviour with as little merit as your falsehoods.
Richard
-mod:
Please refer to my link. I assumed nothing.
Richard
Well, a voice of reason in the wilderness. Thanks
‘Da Bear
Great article. But I suggest you amend your signature chart to include the corresponding CO2 trend, like this.
http://tallrite.com/weblog/blogimages/refs2015/CO2&TempTrends1997-Oct2015.jpg
Notable that the UK BBC Trust has forced the BBC to remove Quentin Letts’s radio (R4) piece on “What’s the point of the Met Office?”, broadcast last month. It’s now taken off the BBC iPlayer, so you cannot at any point judge for yourself.
Doesn’t meet BBC broadcasting guidelines, although it contained two MPs from the last UK Parliament Energy and Climate Change Committee (Tory & Lab). Neither are extremists of the alarmist camp, which rather explains the BBC’s actions, I think.
Anyway, there is little point in the UK Met Office at the cost and size that it is.
” 2016 may prove to be the warmest year in the satellite record, beating 1998 by 0.02-0.03 degrees”
0.02 – 0.03 degrees warmer than 1998?
We.
Are.
DOOMED!
You might like, for even further effect, to include the huge rise in CO2 in your chart of the non-rise in global temperatures for the past 18+ years. Like, for example, this …
http://tallrite.com/weblog/blogimages/refs2015/CO2&TempTrends1997-Oct2015.jpg
The way I view the temperature record chart the pause does not begin until 2002. The abrupt temperature rise in 1998-99 masks the continuation of more or less steady temperature rise until 2002. If 2002 is used as the start of the pause then the temperature trend line should be shifted to show this and the slight warming disappears and the pause is more pronounced. The start of the pause in 2002 is also more in line with the lengths of the earlier periods of pause and temperature rise recorded since 1880. I believe this pause-rise pattern is important and should receive more attention. If the pattern continues then there will be only forty years of warming in this century.
Hello! Your friendly neighbourhood environmental scientist here. There are two things that really bother me about this kind of post, and they make me (and my fellow environmental scientists following this blog) actually less inclined to take seriously any legitimate concerns that may be raised.
1) 1998. Always 1998. If you would use a different reference point (say, ~1900) it would be of more interest. If you are concerned about uncertainties in older datasets, just show an upper and lower range as well.
2) Why would you keep comparing with climate models from 1990? Those models are completely different from modern climate models. It’s like comparing a modern electric car to a 1960 diesel and claiming that electric cars are therefore much more environmentally friendly. My impression is that modern climate models are quite competent. You want to convince me otherwise, fine, but please use the relevant data 🙂
Cheers,
Ben
Ben, you really don’t understand the “Pause” and what it means, do you? Try re-reading the post then, this time applying your reading comprehension skills. I assume you have them.
Reading comprehension is fundamental.
benben:
No, the length of the ‘pause’ is computed from now and back in time.
And that is why as the above essay reports
If as you suggest the ‘reference point’ were 1998 then the length of the ‘pause’ would have increased by one month since last month. It remains the same because the ‘reference point’ is now and so the ‘reference point’ has moved from where it was a month ago.
I have pointed this out for onlookers who may have been misled by your post. However, I acknowledge that the concept of ‘now’ not being fixed in time must be difficult to understand by somebody with your self-proclaimed inability at reading comprehension. Indeed, such a concept must be difficult for any “neighbourhood environmental scientist”, so you have my sympathy for your failure of comprehension.
Also, you ask
I am surprised that any “neighbourhood environmental scientist” would not know the answer to your question that I answer for the benefit of any interested onlookers.
We only have data that enables assessment of the CMIP5 models cited by e.g. the UN Intergovernmental Panel on Climate Change (IPCC). There is no reason to suppose other climate models have any predictive skill.
In 2008 the US Government’s National Oceanic and Atmospheric Administration (NOAA) reported in its climate report
Ref. NOAA, ‘The State of the Climate’, 2008
(Declaration of possible personal interest by RSC: NOAA nominated me as an Expert Reviewer of the IPCC Fourth Assessment Report and I accepted the nomination so conducted peer review of that Report).
However, in 2012 when warming had ceased for seemingly 15 years, Phil Jones of the Climate Research Unit (CRU) insisted that “15 or 16 years is not a significant period: pauses of such length had always been expected”. This was a flagrant falsehood because in 2009 (when the ‘pause’ was already becoming apparent and being discussed by scientists) he had written an email (leaked as part of ‘Climategate’) in which he said of model projections,
Clearly, as recently as 2008 both NOAA in the US and the CRU in the UK agreed that “observed absence of warming” for 15 or more years would “create a discrepancy with the expected present-day warming rate” indicated by climate models. And this was a decade into the ‘pause’ which has now existed for probably more than 18 years.
Richard
And if they think that starting before 1998 is cherry picking, the pause can also be shown to start from September 2000 which is 15 years and 3 months:
http://www.woodfortrees.org/plot/rss/from:1997.1/plot/rss/from:1997.1/trend/plot/rss/from:2000.6/trend
benben says:
1998. Always 1998…
Climatologist Phil Jones designated 1997-98 as the beginning year to determine if the ‘pause’ was statistically valid.
Dr. Jones is a central figure in the Climategate emails, and he is considered an arch-Warmist. Therefore, those who use his own year as the beginning of the ‘pause’ are using the alarmist crowd’s own words.
But as usual, they are moving the goal posts because the planet is busy falsifying their beliefs. Jones was probably very confident when he made his statement that global warming would soon resume.
It hasn’t. In any other field of science, that kind of failed prediction would be cause to admit that a conjecture has been falsified. But after more than 18 years, they refuse to admit that they were wrong. Now they want to change Dr. Jones’ starting year. Ain’t gonna happen, benben.
BenBen: With regard to your
– how do you, a (self-proclaimed) scientist, come up with that claim? Was it a peer-reviewed conclusion? We sceptics are so often told we must only use peer-reviewed data to oppose such claims. Perhaps what you ought to do is read up on how this chart was created (so that the start date was not cherry-picked) and then come back and tell us how that was so wrong. Otherwise, your hypothesis is soooooooooo wrong.
Peer review is crap!!!
“The Great Betrayal – Fraud in Science” Horace Freeland Judson
Nicholas Schroeder:
Peer review exists solely as a protection for journal Editors and has no other purpose.
Please see my recent explanation of this on WUWT.
Richard
According to Werner
“And if they think that starting before 1998 is cherry picking, the pause can also be shown to start from September 2000 which is 15 years and 3 months:”
That only applies to the RSS data. For UAH, you must start somewhere between June 1997 and February 1998, and that range of possible start-dates for the “Pause” is going to be squeezed from both ends, until only December 1997 remains, which will start to look a bit like cherry-picking.
For RSS, the Pause is robust enough to last well into next year.
It all depends on how fast it jumps up and how long it stays there. For example, if the anomaly stays at 0.43, the pause will last another 6 months. But if it spikes to 0.90 for two months, the pause is over. And then there is everything in between.
Exactly Werner
Well – let’s hope for the best, because, if nothing else, these “Pause” articles give rise to some enthusiastic discussion each month
Ha, such acerbic responses. Look, I’m not here to just uselessly argue, I’m just genuinely curious about this. Modern climate models (e.g. the last three or four years) have become much, much more accurate. I totally support your aim of being skeptical towards the mindless following of models, but at least you should compare with the current state of science, not that of 25 years ago, right? That is not a strange request I hope.
The reason why recent models are much more accurate is that only now have computers become powerful enough to couple various models that look in detail at sub-systems of the climate. This is a major difference from older models. So I am really interested to see whether these new models stack up better. They certainly seem to. See for example this picture comparing modern models to the temperature record (sorry, I don’t know how to embed pictures, perhaps someone can explain?)
http://cdn.phys.org/newman/gfx/news/hires/2015/1-globalwarmin.jpg
from this article: Jochem Marotzke & Piers M. Forster, Forcing, feedback and internal variability in global temperature trends, Nature, 29 January 2015; DOI: 10.1038/nature14117
I’m honestly curious to hear your thoughts on modern climate models.
Kind regards,
Ben
1. There is no evidence of the bidirectional EM energy transfer shown in Figure T10**.
2. The whole ‘forcing’ argument is bunkum.
3. In reality, the Earth keeps mean net surface IR emission in self-absorbed GHG bands at zero, thereby minimising radiation entropy production rate in the thermodynamic system: the Enhanced GHE does not exist.
4. There would in the absence of the water cycle be ~0.85 K CO2 climate sensitivity but the water cycle (and biofeedback) reduces it to near zero.
5. The Sagan and Pollock aerosol optical physics used to purport ‘global dimming’ supposedly hiding CO2-AGW, is wrong; the real sign of the effect is reversed and there has been global brightening, the real AGW, as Asia pushed out loads of extra aerosols during industrialisation.
**Climate Alchemists claim Pyrgeometers measure those energy flows but this instrument really measures radiant exitance, the potential energy flux from the emitter in its view angle, in a vacuum, to a perfect radiation sink at absolute zero. Net unidirectional radiant energy flux is the vector sum of exitances.
Bias Alert: Daily Mail closed down comments on 3 stories today, dealing with climate change. One after 22 comments and another after a single comment. These are generally hot topics especially with the Paris convention/party ongoing at the moment. The only article that Daily Mail left open was the one that says Chicago has no snow for the first time this year. They shut down the story about how cold and snowy the western USA has been of late. FYI.
Christopher Monckton of Brenchley, I have one question:
Who has/will be presenting this data and information in Paris? Surely there was a microphone… somewhere? GK
Once again the climate “sophists” can’t see the forest for the trees. The Oceans have stopped warming, and so has the atmosphere. Given heat rises in our atmosphere, that would be expected. I’m pretty sure Ocean and atmosphere temperatures are highly correlated. We should be looking at what is controlling the temperatures of the oceans if we want to understand what is controlling the temperature of the atmosphere. Hint: It ain’t CO2.
Decades ago I earned a BSME which requires demonstrating, among many other sciencey type fields, a working knowledge of heat transfer and thermodynamics. Much of my 35 year career involved measuring energy flows in a wide variety of power generation equipment and systems. One fundamental concept is that gozintaz must equal gozoutaz. Energy doesn’t mysteriously appear or disappear and it doesn’t go round and round in some kind of self-perpetuating loop.
Search for “climate or global heat balance diagram” and Bing images will return a plethora of various versions. One would think that with a consensus there would be just one. Among them will be the one in this thread. Hover over an image and some brief origin information will appear, e.g. StephenSchneider2012, et al.
Now many of these graphics are rather straight forward, the numbers all work out like balancing your checkbook or credit card statement. The Schneider graphic presented in this thread gives me trouble. A link to the original paper would help. BTW a watt is a power unit, energy over time, not energy per se. To determine from W/m^2 how much energy, Btu or kJ, is delivered and heats air, water, earth, requires 1) a surface area, ToA, ocean/land surface, disc, sphere and 2) a period of time, e.g. one hour, 24 hours, 8,760 hours.
………………………………………W/m^2………..+/-
Incoming Solar……………………340.0………..0.1
Atmospheric absorption…………..75.0……….10.0 (!!)
Surface shortwave absorption.…165.0………..6.0
Sub Total absorbed………………240.0
Sub Total reflected……………….100.0………2.0
Check total………………………340.0
O.K., so far so good.
Now what the earth’s surface absorbs is all that it can emit neglecting geothermal sources. The net back radiation loop (GHE?) is 398 – 345.6. Where does all this power come from? Does it start at the surface or in the sky? It’s a loop so all that counts is the net.
Sensible heating……………………..….24.0………7.0
Latent heating…………………………….88.0…….10.0
Surface emission……………………….398.0………5.0
Back radiation………………………….-345.6………9.0
Total……………………………..……….164.4
Surface shortwave absorption…………165.0………6.0
Surface imbalance………………………0.6
So this looks OK, too, even the 0.6 +/- 17 imbalance. BTW note the magnitude of some of these uncertainty ranges bearing in mind the 1750 to 2011 RF of additional 112 ppm CO2 is 2.0 +/-? W/m^2, i.e. basically lost in the uncertainties.
Now what to do about some of these other numbers.
Clear-sky emission 266.4 +/- 3.3. What is this? Not typical of other versions. Ignore this.
All-sky longwave absorption -187.9 +/- 12.5 (!!) Not typical of other versions. Where does this cooling originate? Ignore this.
Longwave cloud effect is this 26.7 or 3 +/- 5? Not clear.
So what to do.
Sensible heating……………………………..24.0…….7.0
Latent heating………………………………..88.0…..10.0
Surface emission to sky……………………398.0……5.0
Back emission from clouds……………..….-26.6……5.0
Clear-sky back emission to surface………-319.0……9.0
All-sky back emission to surface………….-345.6……9.0
Net surface…………………………………….52.4
Sub total – Yes, it works!…………………………164.4
Outgoing longwave radiation………………..239.7…..3.3
Yet to be located……………………………….75.3
All-sky atmospheric window…………..………20.0…..4.0
Longwave cloud effect(?)………………..…….26.7…..4.0
Missing…………………………………….……28.6
So what am I missing here? Somebody embezzle that 28.6 W/m^2?
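A quick way to re-add the figures listed above (taken at face value from the comment’s reading of the Stephens et al. 2012 diagram, not independently checked) and see where the 28.6 W/m^2 residual comes from:

```python
# Re-adding the numbers exactly as listed above (all in W/m^2).
incoming_solar      = 340.0
atm_absorption      = 75.0
surface_sw_absorbed = 165.0
reflected           = 100.0

sensible            = 24.0
latent              = 88.0
surface_emission    = 398.0
back_radiation      = 345.6

olr                 = 239.7
atm_window          = 20.0
lw_cloud_effect     = 26.7

# Top of atmosphere: absorbed plus reflected should equal incoming solar.
print("TOA check         :", atm_absorption + surface_sw_absorbed + reflected, "vs", incoming_solar)

# Surface: non-radiative plus net longwave losses versus shortwave absorbed.
surface_losses = sensible + latent + surface_emission - back_radiation     # 164.4
print("surface imbalance :", round(surface_sw_absorbed - surface_losses, 1), "W/m^2")

# The "yet to be located" and "missing" figures in the comment.
yet_to_locate = olr - surface_losses                        # 239.7 - 164.4 = 75.3
missing = yet_to_locate - atm_window - lw_cloud_effect      # 75.3 - 20.0 - 26.7 = 28.6
print("yet to be located :", round(yet_to_locate, 1), "  missing:", round(missing, 1))
```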