By Christopher Monckton of Brenchley
The sharp el Niño spike is just about to abolish the long Pause in global temperatures – at least for now. This column has long foretold that the present el Niño would be substantial, and that it might at least shorten if not extinguish the Pause. After all, theory requires that some global warming ought to occur.
This month, though, the Pause clings on. Though January 2016 was the warmest January in the RSS satellite record since 1979, the el Niño spike has not yet lasted long enough to end the Pause. That will happen by next month’s report. The RSS data still show no global warming for 18 years 8 months, notwithstanding record increases in CO2 concentration over the period.
Dr Roy Spencer’s UAH v.6 satellite lower-troposphere dataset shows the Pause has already (just) disappeared. For 18 years 2 months there has been barely any warming; to two decimal places the trend is zero:
The believers say there was never a Pause in the first place. After many unconvincing alterations to all of the principal global surface tamperature datasets over the two years leading up to the Paris climate conference, the Pause all the datasets once showed had been erased.
Significantly, the two satellite datasets continued to show a steadily-lengthening Pause till last month, but over the past year or two, long before the present el Niño set in, the three terrestrial datasets had already succeeded in ingeniously airbrushing it away.
The not necessarily reliable Tom Karl of NOAA and the relentlessly campaigning Gavin Schmidt of NASA held a joint press conference to celebrate the grants their rent-seeking organizations can milk out of their assertion that 2015 was the warmest year since 1880. But they carefully omitted the trend-line from their graph, so I have added it back. It shows the world warming since 1880 at an unexciting two-thirds of a degree per century:
NOAA’s much-altered global surface temperature record, showing a 0.9 Cº global warming trend since 1880, equivalent to just two-thirds of a degree per century.
So here’s the Houston problem, the 13th chime, the dog that didn’t bark in the night-time, the fly in the ointment, the poop in the puree, the jumbo in the Jacuzzi – the $64,000 question that would once have alerted true scientists to the possibility that somewhere their pet theory might have gone more than somewhat agley.
The Jumbo in the Jacuzzi
Since the satellites of both UAH and RSS show there has been very little global warming of the lower troposphere over the past decade or two, perhaps Schmidt and Karl would care to answer the following key question, which I have highlighted in red:
Schmidt and Karl, like the Met Office this side of the pond, say there has been rapid surface warming over the past 19 years. If so, where on Earth did it come from? The laws of thermodynamics are not up for repeal. The official theory is that CO2 warms the atmosphere and the atmosphere warms the surface. But for almost 19 years the satellites show that the lower atmosphere has barely warmed. Even if there had been CO2-driven warming higher up, for the official theory says we should expect a faster warming rate in the mid-troposphere than at the surface, how could that higher-altitude warming have magically reached the surface through a lower troposphere that has not warmed at all?
IPCC had predicted in 2007, on the basis of a single bad paper by Ben Santer of Lawrence Livermore National Laboratory, that the tropical mid-troposphere should warm twice or even thrice as fast as the tropical surface. However, as the revealing final slide shown by Schmidt and Karl at their press conference demonstrates, the predicted tropical mid-troposphere hot spot (I had the honor to name it) is in reality absent. Lower and mid-troposphere anomalies are almost identical:
One clue to the source of the warming reported by the surface datasets but not by the satellite datasets over the past 19 years is to be found in another revealing diagram presented by Schmidt and Karl at their presser.
About five-sixths of the areas of “record” surface warming shown in the NOAA diagram are areas of ocean, the el Niño-driven warming of the eastern equatorial Pacific being particularly pronounced.
Aside from the ocean warming, the land-based warming was prominent over Siberia and northern China, Europe and Central America, inferentially owing much to urban heat-island effects.
In short, the warming of both land and oceans shows a pattern strongly confirming the satellite record to the extent that the warming – insofar as it is not a mere artefact of the surface-temperature tampering over the past couple of years – displays a pattern suggesting that it originates not from above in the atmosphere, where it would have originated if CO2 had been the cause, but at or below the surface.
On any view, the significant warming that the terrestrial datasets claim over the past two decades cannot have come from the atmosphere, and accordingly cannot have been caused by our enrichment of that atmosphere with greenhouse gases – if, that is, the satellites are correct that the lower troposphere has not been warming.
When the first temperature-monitoring satellites began to deliver data, NASA said the satellite temperature record would be more reliable than the surface record because the coverage was more complete, the method of measurement was standardized, and the coverage-bias uncertainties that plague the terrestrial record were absent.
Now that the satellites of both UAH and RSS have been showing so little warming for so long, expect that story to begin to change. If the satellite data are broadly correct, then either the terrestrial data are wrong owing to unjustifiable tampering or they are detecting genuine warming that may be from urban heat-island influences or from deep-ocean warming but cannot be from the atmosphere and is not caused by our sins of emission.
One way to prop up the specious, crumbling credibility of the terrestrial temperature datasets and of the CO2 panic at the same time is to attack the satellite datasets and pretend that the measurement method that NASA itself had once said was the best available is somehow subject to uncertainties even greater than those to which the terrestrial datasets are prone.
I am not the only one to sense that Dr Mears, the keeper of the RSS satellite dataset, who labels all who ask questions about the Party Line as “denialists” and in early 2016 took shameful part in a gravely prejudiced video about global temperature change, may be about to revise his dataset sharply to ensure that the remarkable absence of predicted warming that it demonstrates is sent down the memory hole.
What of ocean warming? The ARGO bathythermographs show little warming at the surface from 2004 until the current el Niño began. What is more, ARGO stratigraphy shows that the warming is generally greater with depth. The warming of the ocean, then, appears to be coming not from above, as it would if CO2 were the driver, but from below.
I should have liked to show graphs to establish that the warming is greater in the lower than in the upper strata of the 1.25-mile slab that ARGO measures. But the ARGO marine atlas is clunky and does not seem to be as compatible with PCs as it should be. So I have been unable to extract the relevant data. If anyone is able to produce complete stratum-by-stratum anomaly-and-trend plots of the ARGO data for its 12 full years in operation from January 2004 till December 2015, please let me know as soon as the December 2015 ARGO data become available. The latest monthly update is very late, as the ARGO data often are:
If the eventual data confirm what I have some reason to suspect, then a further killer question must be faced by the tamperers:
Though the Pause is gone, the problem it poses for the Thermageddonites remains. For their own theory dictates that, all other things being equal, an initial direct warming should occur instantaneously in response to radiative forcings such as that from CO2. However, for almost 19 years there was not a flicker of response from global temperatures, casting serious doubt upon the magnitude of the warming to be expected from anthropogenic influences.
To the believers, therefore, it was important that the Pause should not merely cease, for Nature is, as expected, gradually taking care of that, but vanish altogether. The need to abolish the Pause became still more urgent when at a hearing in December 2015 Senator Ted Cruz, to the great discomfiture of the “Democrats”, displayed the RSS graph showing no global warming for 18 years 9 months.
So to another killer question that Schmidt and Karl ducked at their presser, and must now face (for if they do not answer it Senator Cruz can be expected to go on asking it till he gets an answer):
The now-glaring discrepancies between prediction and reality, and between the satellite and terrestrial datasets, are plainly evident from all datasets even after the tampering. Yet until now there has been no systematic analysis to show just how large the discrepancies have become. So here goes.
In 1990, at page xxiv of the First Assessment Report, IPCC predicted near-linear global warming of 1.0 [0.7, 1.5] K over the 36 years to 2025, a rate equivalent to 2.78 [1.94, 4.17] K/century. However, in the 26 years since 1990 the reported warming rates are equivalent to only [1.59, 1.73] K/century from the terrestrial datasets (blue needles) and [1.00, 1.11] K/century from the satellites (green needles). IPCC’s 1990 central prediction, the red needle, accordingly shows almost double the warming reported by the terrestrial datasets and at least two and a half times that reported by the satellite datasets.
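The centennial-equivalent rates used throughout this report are simple proportion; a minimal sketch reproducing the 1990 figures:

```python
# Centennial-equivalent rate: warming over a period, scaled to 100 years.
def centennial(warming_k: float, years: float) -> float:
    """Return warming_k over `years` as an equivalent K/century rate."""
    return warming_k / years * 100.0

# IPCC (1990): 1.0 [0.7, 1.5] K over the 36 years to 2025
print(round(centennial(1.0, 36), 2))  # 2.78 K/century (central)
print(round(centennial(0.7, 36), 2))  # 1.94 K/century (low)
print(round(centennial(1.5, 36), 2))  # 4.17 K/century (high)
```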
Somehow, the flagrant over-prediction that the discrepancy graphs of temperatures from 1990, 1995 and 2001 to today illustrate did not get a mention in the colourful material circulated to the media by the SchmidtKarlPropagandaAmt.
The models’ extravagant over-prediction becomes still more self-evident when one looks at IPCC’s next excitable prediction. In fig. 6.13 of the 1995 Second Assessment Report, IPCC predicted a medium-term warming rate of 0.38 K over 21 years, equivalent to 1.8 K per century, assuming the subsequently-observed 0.5%-per-year increase in atmospheric CO2 concentration.
Here, at least, IPCC’s prediction is within shouting distance of the terrestrial temperature data, though still extravagantly above the satellite temperature data. But IPCC’s 1990 least prediction was well above its own central prediction made just five years later. IPCC’s 1990 central prediction was 50% above its 1995 prediction, and its 1990 high-end prediction was 130% above its 1995 prediction.
The reliability of IPCC’s predictions deteriorated still further in 2001. On page 8 of the Summary for Policymakers, it predicted that in the 36 years 1990-2025 the world would warm by [0.4, 1.1] K, equivalent to [1.11, 3.05] K/century, again a significant downshift compared with the interval of medium-term predictions it had made in 1990, and implying a central estimate equivalent to about 2.08 K/century (the red needle on the following temperature clock) over the 25-year period:
Three points are startlingly evident in these graphs. First, IPCC has inexorably and very substantially cut its predictions of medium-term warming since the exaggerated predictions in its First Assessment Report got the climate scam going in 1990.
Secondly, even its revised predictions are substantial exaggerations compared with observed, reported reality.
Thirdly – and this is very odd – the most basic measure of the uncertainties in temperature measurement in any time-series, which is the interval between the least and greatest reported trends on that series, has widened when most indications are that it should be narrowing.
As one would expect with all those taxpayer dollars being flung at temperature measurement, error-bars ought to be narrowing. And the HadCRUT4 dataset – which to Professor Jones’ great credit publishes the error-bars as well as the central estimate of observed temperature change – does show a considerable narrowing of the uncertainty interval over time, as methods of measurement become less unreliable:
The very reverse of what the HadCRUT4 dataset shows should be happening is happening. As Table 1 shows, the discrepancy between the least (yellow background) and the greatest (purple background) reported temperature change over successive periods is growing, not narrowing:
| Start date | GISS | HadCRUT4 | NCEI | RSS | UAH | Uncertainty |
|------------|------|----------|------|-----|-----|-------------|
| Sat: 1979 | 0.60 | 0.61 | 0.37 | 0.45 | 0.42 | 0.51 K/century |
| K/century | 1.63 | 1.65 | 1.55 | 1.23 | 1.14 | |
| AR1: 1990 | 0.45 | 0.41 | 0.43 | 0.29 | 0.26 | 0.73 K/century |
| K/century | 1.73 | 1.59 | 1.66 | 1.11 | 1.00 | |
| AR2: 1995 | 0.33 | 0.28 | 0.32 | 0.09 | 0.09 | 1.14 K/century |
| K/century | 1.55 | 1.31 | 1.53 | 0.42 | 0.41 | |
| AR3: 2001 | 0.18 | 0.13 | 0.20 | –0.02 | 0.03 | 1.46 K/century |
| K/century | 1.22 | 0.85 | 1.35 | –0.11 | 0.19 | |
Table 1: Reported (dark blue) and centennial-equivalent (dark green) temperature trends on the three terrestrial (pale green background) and two satellite (blue background) monthly temperature anomaly datasets for periods starting respectively in January of 1979, 1990, 1995 and 2001 and all ending in December 2015.
Note how, on all datasets, the warming rate declines the closer to the present one begins. This, too, is contrary to official theory, which says that the warming rate should at least remain constant given the ever-increasing anthropogenic forcings acting on the climate. It is also contrary to one of the most mendacious graphs in the IPCC reports:
The official storyline, derived from the bogus statistical technique illustrated in the above IPCC graph, is that the rate of global warming is itself accelerating, and that we are to blame. The Swiss Bureau de l’Escroquerie is investigating this and, no doubt, many other outright frauds in IPCC documents.
However, note how rapidly the measurement uncertainty, here defined as the difference between the least (yellow) and greatest (pink) reported centennial-equivalent temperature trend in Table 1, widens even as the start-date of the period under consideration comes closer to the present, when by rights it should narrow. Another killer question for the believers to answer, therefore:
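The uncertainty defined above is simply the spread between the least and greatest centennial-equivalent trends in each row of Table 1; a sketch using the Table 1 values:

```python
# Table 1's uncertainty column: the spread between the least and
# greatest centennial-equivalent trends (K/century) for each start date.
trends = {  # GISS, HadCRUT4, NCEI, RSS, UAH
    "Sat: 1979": [1.63, 1.65, 1.55, 1.23, 1.14],
    "AR1: 1990": [1.73, 1.59, 1.66, 1.11, 1.00],
    "AR2: 1995": [1.55, 1.31, 1.53, 0.42, 0.41],
    "AR3: 2001": [1.22, 0.85, 1.35, -0.11, 0.19],
}
for period, rates in trends.items():
    spread = round(max(rates) - min(rates), 2)
    print(period, spread)   # 0.51, 0.73, 1.14, 1.46 K/century
```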
If one excludes the data after October 2015, which are temporarily influenced by the current el Niño spike in global temperatures, the warming rate since 1950 is lower now than at any previous date since that year.
This widening of the divergence between the terrestrial and satellite datasets is clear evidence that the effect of the tampering with all three terrestrial datasets in the two years preceding the Paris climate summit has been what one would, alas, expect of the tamperers: artificially to increase the apparent warming rate ever more rapidly as the present approaches.
A legitimate inference from this observation is that the tampering, however superficially plausible the numerous excuses for it, was in truth intended and calculated to overwhelm and extinguish the Pause that all the datasets had previously shown, precisely so that those driving and profiting from the climate scam could declare, as they have throughout the Marxstream news media, that there was never any Pause in the first place.
Let us hope that Professor Terence Kealy, former Vice Chancellor of Buckingham University, takes a very close look at this posting as he conducts his own review of the tamperings with the various terrestrial datasets.
The current el Niño, as Bob Tisdale’s distinguished series of reports here demonstrates, is at least as big as the Great el Niño of 1998. The RSS temperature record is now beginning to reflect its magnitude. If past events of this kind are a guide, there will be several months’ further warming before the downturn in the spike begins.
However, if there is a following la Niña, as there often is, the Pause may return at some time from the end of this year onward. Perhaps Bob could address the likelihood of a la Niña in the next of his series of posts on the ENSO phenomenon.
The hiatus period of 18 years 8 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate. And yes, the start-date for the Pause has been inching forward, though just a little more slowly than the end-date, which is why the Pause has continued on average to lengthen.
The warming rate taken as the mean of the RSS and UAH datasets since they began in 1979 is equivalent to 1.2 degrees/century:
However, the much-altered surface tamperature datasets show a 35% greater warming rate, equivalent to 1.6 degrees/century:
Bearing in mind that one-third of the 2.4 W m–2 radiative forcing from all manmade sources since 1750 has occurred during the period of the Pause, a warming rate equivalent to little more than 1 C°/century is not cause for concern.
As always, a note of caution. Merely because there has been little or no warming in recent decades, one may not draw the conclusion that warming has ended forever. Trend lines measure what has occurred: they do not predict what will occur.
The Technical Note explains the sources of the IPCC’s predictions in 1990, 1995 and 2001, and also demonstrates that according to the ARGO bathythermograph data the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. In a rational scientific discourse, those who had advocated extreme measures to prevent global warming would now be withdrawing and calmly rethinking their hypotheses. However, this is not a rational scientific discourse.
Key facts about global temperature
These facts should be shown to anyone who persists in believing that, in the words of Mr Obama’s Twitteratus, “global warming is real, manmade and dangerous”.
Ø The RSS satellite dataset shows no global warming at all for 224 months from May 1997 to December 2015 – more than half the 444-month satellite record.
Ø There has been no warming even though one-third of all anthropogenic forcings since 1750 have occurred since 1997.
Ø Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
Ø The HadCRUT4 global warming trend since 1900 is equivalent to 0.77 Cº per century. This is well within natural variability and may not have much to do with us.
Ø The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.
Ø Compare the warming on the Central England temperature dataset in the 40 years 1694-1733, well before the Industrial Revolution, equivalent to 4.33 C°/century.
Ø In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.
Ø The warming trend since 1990, when the IPCC wrote its first report, is equivalent to little more than 1 Cº per century. The IPCC had predicted close to thrice as much.
Ø To meet the IPCC’s original central prediction of 1 C° warming from 1990-2025, in the next decade a warming of 0.75 C°, equivalent to 7.5 C°/century, would have to occur.
Ø Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100.
Ø The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.
Ø The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.
Ø The oceans, according to the 3600+ ARGO buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century, or 1 C° in 430 years.
Ø Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.
Technical note
Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
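The start-date calculation can be sketched as follows; `anomalies` is assumed to be the RSS monthly global lower-troposphere anomaly series in date order (here just a list of floats), and the search finds the earliest month from which the least-squares trend to the most recent month is zero or negative:

```python
import numpy as np

def longest_zero_trend_start(anomalies):
    """Return the index of the earliest start month giving a
    non-positive least-squares trend to the end, or None if the
    trend is positive from every candidate start."""
    y = np.asarray(anomalies, dtype=float)
    n = len(y)
    for start in range(n - 2):          # need >= 3 points for a fit
        x = np.arange(n - start)
        slope = np.polyfit(x, y[start:], 1)[0]
        if slope <= 0:
            return start                # earliest month with a zero trend
    return None
```

Because the scan runs from the oldest month forward, the first qualifying start date is by construction the one that maximizes the length of the zero-trend period – which is why the start date is calculated, not cherry-picked.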
The fact of a long Pause is an indication of the widening discrepancy between prediction and reality in the temperature record.
The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than the rest to capture such fluctuations without artificially filtering them out.
Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe as 13.82 billion years.
The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file and plots them automatically using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity.
The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.
The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter temperatures in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend.
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures.
Dr Mears’ results are summarized in Fig. T1:
Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of El Chichón (1982) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998.
Dr Mears writes:
“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation. This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”
Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph:
“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades. Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site. Is this really your data?’ While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate. … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”
In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. The headline graph in these monthly reports begins in 1997 because that is as far back as one can go in the data and still obtain a zero trend.
Fig. T1a. Graphs for RSS and GISS temperatures starting both in 1997 and in 2001. For each dataset the trend-lines are near-identical, showing conclusively that the argument that the Pause was caused by the 1998 el Niño is false (Werner Brozek and Professor Brown worked out this neat demonstration).
Curiously, Dr Mears prefers the terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite data to calibrate its own terrestrial record.
The length of the Pause, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed.
Sources of the IPCC predictions
IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:
“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”
That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted.
In 1990, the IPCC said this:
“Based on current models we predict:
“under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii).
Later, the IPCC said:
“The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030 [compared with pre-industrial temperatures]. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv).
The orange region in Fig. 2 represents the IPCC’s medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K (compared with 1990) by 2025.
The IPCC’s predicted global warming over the 25 years from 1990 to the present differs little from a straight line (Fig. T2).
Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).
Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.
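The arithmetic behind that comparison can be checked directly, using the figures in this report (a 1 K central prediction for 1990-2025 and roughly 0.28 K of observed warming over 1990-2015):

```python
# How fast must warming run from now to 2025 to meet the IPCC's 1990
# central estimate of 1 K warming over 1990-2025, given the ~0.28 K
# observed over 1990-2015 (figures from the text above)?
predicted_total = 1.00   # K, 1990-2025 central estimate (IPCC, 1990)
observed_so_far = 0.28   # K, 1990-2015, mean of RSS and UAH trends
years_left = 10

remaining = predicted_total - observed_so_far      # ~0.72 K still to come
required_rate = remaining / years_left * 100       # ~7.2 K/century equivalent
ratio = remaining / observed_so_far                # ~2.6x the past 25 years

print(round(remaining, 2), round(required_rate, 1), round(ratio, 1))
```

On these figures, well over twice the warming of the past 25 years would have to occur in the next ten – hence "not likely".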
But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).
Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).
Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date.
True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless.
The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn.
Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality.
To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.28 Cº, equivalent to little more than 1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, predicted for Scenario A in IPCC (1990) with “substantial confidence”, is approaching three times the observed outturn. In fact, the outturn is visibly well below even the least estimate.
In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was equivalent to 2.8 Cº/century. Now it is equivalent to just 1.7 Cº/century – and even that is proving to be a substantial exaggeration.
In 1995 the IPCC offered a prediction of the warming rates to be expected in response to various rates of increase in CO2 concentration:
Figure T4a. IPCC (1995) predicted various warming rates. The prediction based on the actual rate of change in CO2 concentration since 1995 is highlighted.
The actual increase in CO2 concentration in the two decades since 1995 has been 0.5% per year. So IPCC’s effective central prediction in 1995 was that there should have been 0.36 C° warming since then, equivalent to 1.8 C°/century.
In the 2001 Third Assessment Report, IPCC, at page 8 of the Summary for Policymakers, says: “For the periods 1990-2025 and 1990 to 2050, the projected increases are 0.4-1.1 C° and 0.8-2.6 C° respectively.”
Is the ocean warming?
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.
Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000-square-kilometre box more than 316 km square and 2 km deep. Plainly, results obtained at a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.
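The sampling arithmetic can be verified with round numbers (the ocean area and depth here are assumed approximations, not measured values):

```python
import math

# ~3.6e8 km^2 of ocean surface sampled to 2 km depth by 3600 floats:
# each float is responsible for ~200,000 km^3, i.e. a box of about
# 100,000 km^2 in area and roughly 316 km on a side.
ocean_area_km2 = 3.6e8
depth_km = 2.0
n_floats = 3600

volume_per_float = ocean_area_km2 * depth_km / n_floats  # km^3
box_side_km = math.sqrt(volume_per_float / depth_km)     # km

print(round(volume_per_float))  # 200000
print(round(box_side_km))       # 316
```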
Unfortunately ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is just 0.02 Cº decade–1, equivalent to 0.2 Cº century–1.
Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).
Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which make the change seem a whole lot larger.
The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.
Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured. NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.
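The conversion the caption describes can be sketched with assumed round figures for the 0-2000 m layer (a mass of roughly 6.6e20 kg and a seawater specific heat of about 4000 J per kg per kelvin; both values are approximations chosen for illustration):

```python
# 260 ZJ spread through the 0-2000 m ocean layer over 44 years
# corresponds to only about 0.1 K of warming, or ~0.22 K/century.
heat_zj = 260
heat_j = heat_zj * 1e21          # 1 ZJ = 1e21 J
ocean_mass_kg = 6.6e20           # assumed mass of 0-2000 m layer
c_p = 4000                       # J/(kg K), approximate for seawater

delta_t = heat_j / (ocean_mass_kg * c_p)   # K over 1970-2014
per_century = delta_t * 100 / 44

print(round(delta_t, 2))      # ~0.1
print(round(per_century, 2))  # ~0.22
```

The zettajoule figure and the temperature change are thus the same datum in different units; only the impression they leave differs.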
Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004 to 2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, or rather more than double the rate shown by ARGO.
ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution.
What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way.
On these data, too, there is no evidence of rapid or catastrophic ocean warming.
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.
Figure T7. Near-global ocean temperatures by stratum, 0-1900 m, providing a visual reality check to show just how little the upper strata are affected by minor changes in global air surface temperature. Source: ARGO marine atlas.
Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean.
Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.
If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.
In early October 2015 Steven Goddard added some very interesting graphs to his website. The graphs show the extent to which sea levels have been tampered with to make it look as though there has been sea-level rise when it is arguable that in fact there has been little or none.
Why were the models’ predictions exaggerated?
In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T8):
Figure T8. Predicted manmade radiative forcings (IPCC, 1990).
However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990 (Fig. T9):
Figure T9: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).
Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T10):
Figure T10. Energy budget diagram for the Earth from Stephens et al. (2012)
In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.
It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC.
Professor Ray Bates gave a paper in Moscow in summer 2015 in which he concluded, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T11), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both of Lindzen & Choi and of Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.
Figure T11. Reality (center) vs. 11 models. From Lindzen & Choi (2009).
A growing body of reviewed papers finds climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences, and is still the IPCC’s best estimate today.
On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.
Finally, how long will it be before the Freedom Clock (Fig. T12) reaches 20 years without any global warming? If it does, the climate scare will become unsustainable.
Figure T12. The Freedom Clock approaches 20 years without global warming
At last! Christopher asks the same question I have been asking repeatedly for the past year or so. The theory says the atmosphere warms the surface. Therefore from basic thermodynamics, the atmosphere must lead the warming. Now either:
The fiddling of the surface data sets produced a spurious warming ahead of the atmospheric (non)warming, in which case there has been no warming and the theory is wrong, or:
The direction of warming contradicts the theory, in which case the theory is wrong.
Either way the theory is now disproved.
Lord Monckton, if the end of the pause signals the end of your series, I must say I will miss it. Your series has been a very flashy way of drawing in the indoctrinated to an alternate viewpoint, hopefully getting them to want to educate themselves before blindly endorsing a dogma.
Climate science puts me in a strange position. If the warming of the last 100 years continues for the next hundred or two, I would celebrate man’s good fortune. On the other hand, I pray for a cooling climate to put an end to the nonsense. But the latter is a greedy and foolish thought. Greedy because it would be better for people to see the nonsense not by force but by intellect, regardless of a warming climate. And foolish because the nonsense will not end… man would have to evolve into a new species before that would come to fruition. And what a boring species we would be at that point.
Interesting… The only way to save science from non-science is to have something bad happen (cooling). If a truly wonderful thing happens (warming) the weakest and poorest will be crushed by policies that make the rich richer and destroy any hope for avoiding a superstitious, dark age.
In reply to Mr Murphy, I am hoping to continue the monthly series even if there is no Pause, because – as this column has often said – the growing discrepancy between prediction and reality is the devastating fact that will, in the end, replace hard-Left politics with scientific sanity and reason.
Maybe it is time to start using data corrected for ENSO (and possibly volcanoes) to determine the pause. This would eliminate the coming problem with El Nino temporarily leading to an upward trend. It would also end the propaganda claims of cherry picking that the useful idiots continue to repeat. I’ve seen examples like the one presented above by John Bills. Maybe the good Lord could base his next assessment on a corrected set of data????
ENSO-corrected data are what NOAA used in their modeling in 1998, by which they determined that periods of 15 years or more without warming indicated a discrepancy between prediction and observation. What they also said is that after allowing for ENSO the discrepancy between prediction and observation is even larger than without allowing for ENSO.
“What they also said is that after allowing for ENSO the discrepancy between prediction and observation is even larger than without allowing for ENSO.”
ENSO is an oscillatory effect which variably augments or diminishes the trend. It’s not what they are interested in, so they subtract it. What they said in the report for 2008 was that in the specific period 1999-2008, the ENSO component of the trend was positive. That is not a rule for all periods.
I’m not sure how Christopher Monckton can be so sure the Pause will end next month. By my calculation February could be a record breaking 0.9C and he could still claim a pause starting in December 1997.
Yes, I think end March is more likely. But 0.9C for Feb is not impossible. That’s a rise of 0.24°C from Jan, very little more than the corresponding rise in 1998.
Yet more signs of AGW…
http://fox6now.com/2016/02/06/breaking-several-cars-fall-through-ice-at-lake-genevas-winterfest/
During the second half of February, we are going to see BELOW ZERO F weather. December, in classic el Nino fashion, was unusually balmy and nice. Then as the el Nino fades, in comes the backlash not next winter, which will be wickedly cold, but NOW.
And this is the entire problem with the global warming thingie: if it is warm briefly some time or other, we are told this is permanent and we will all die but then it departs fairly quickly and we get month after month of brutal cold which we are told is temporary.
I am sick and tired of this stupid whinging.
I guess I’ve become a little weary of both Tisdale’s and Monckton’s work in terms of where it is going, as long as they feel obliged to use data that is being shaped to fit the shapers’ purposes. Their work, once exciting and interesting, is losing its appeal because they work with the ever-more-fiddled temperatures. Ultimately, the point they are trying to make will disappear. I see some commenters on this thread have picked up on the real meaning of the 10-minute video from the clime syndicate in which Mears has been denigrating the very RSS satellite series he is paid to put together. When this video came out, I recognized a poker player’s ‘tell’ and commented that, following upon the karlelizing of the pause, this is 100% certain to foreshadow an abrupt adjustment to the satellite data on the basis of “drift”, some just-discovered heating bias in the guts of the satellite, or some such thing to justify a swing upwards. The hatchet job done in the video on Spencer and Christy is the clincher. If the two sets diverge now, then everyone can understand that S and C are deniers. Also, they have been given encouragement by how easily Karl held his nose and just did it like Nike says, and as a little time goes by the pause becomes an egregious trick by the UAH guys in Alabama. Oh, there will be papers on it and conferences. The news is on board no matter what the destination.
Don’t get me wrong here. Their work has had huge impact. I’d venture to say Monckton’s work, including standing up at the clime chimes in Doha to announce the pause four or five years ago and his constant pushing of the ‘pause’ into their faces has certainly urged them to do something to get rid of this thing that threatens to bring down the CAGW industry. I just hate to see C.M. heading for a silence on this topic as these goons stand by grinning.
Even the 1998 heat was manufactured by chopping the even warmer 1934 temperature down to make it happen by GISS under Hansen. We haven’t really had any warming for 82 years!! Nothing is hanging on by its fingernails.
http://i44.tinypic.com/29dwsj7.gif
http://wattsupwiththat.com/2009/06/28/nasa-giss-adjustments-galore-rewriting-climate-history/
Gary Pearse
February 6, 2016 at 7:31 pm wrote: “Even the 1998 heat was manufactured by chopping the even warmer 1934 temperature down to make it happen by GISS under Hansen. We haven’t really had any warming for 82 years!!”
Excellent point, Gary, and one everyone should keep in mind.
Lord Monckton:
While there still are liquid thermometers requiring people to read them, the newer stations use a thermistor or electrical resistance method of measurements.
Some notes from NOAA documents:
“United States Climate Reference Network (USCRN)
Functional Requirements Document”
Air Temperature
Each CRN field site shall provide air temperature measurements. Each air temperature sensor and its supporting apparatus shall be configured to accurately reflect the ambient air temperature at the site. Provisions to eliminate exposure to precipitation and minimize measurement biases caused by solar heat loading are required.
Air temperature measurements shall meet the following requirements:
Minimum ± 0.3° C over the range −50° to +50° C
Accuracy ± 0.6° C over the ranges −50° to −60° C and +50° to +60° C
Resolution 0.01° C for the raw data
0.1° C for the computed five-minute averages
Reported Values for Each Sensor
– Maximum hourly value (largest 5-minute average)
– Minimum hourly value (lowest 5-minute average)
– Average of each hour’s twelve 5-minute averages
– Average temperature of the hour’s last five-minute period”
“US Climate Reference Network (USCRN)
Handbook for Manual Quality Monitoring”
“Ambient Temperature
4.2.1 Primary
Inter-comparison of the 3 temperature sensors: Sensors should be within 0.3° C of one another. An hourly flag message is generated for any departure greater than 0.30° C (i.e., 0.301° C and greater).
4.2.2 Secondary
1. Comparison with the IR temperature sensor. A basic sanity check is for the ambient max temp not to be higher/greater than the IR and the ambient min not to be less than the IR. Note: When the ground is covered by snow, sleet, or hail, the IR temperature will not rise above 0° C even when the ambient temperatures rise to levels above 0° C.
2. Comparison”
“COOPERATIVE STATION INSPECTION”
“2) Max & Min Thermometers (MXMN)
A. Clean dust and dirt from the thermometers.
B. Remove any wasp or other insect nests.
C. Check and remove separations in the minimum thermometer.
D. Verify that maximum and minimum thermometers agree within 1 degree.
E. Clean and lube the Townsend support.
3) Maximum/Minimum Temperature System (MMTS)
A. Check all connections. Repair or replace if worn or loose.
B. Compare reading from quality thermometer with displayed temperature. Replace faulty component if not reasonably close to same value.
C. Check all installed lightning protection. Replace any damaged or burned components.
D. Clean the sensor unit. Be certain to remove any wasp or insect nests from inside the sensor.
E. Discuss normal operations and explain the “HI,” “LO,” and “HELP” readings on display. Also review the meaning of “last digit flashing.”
“What’s in that MMTS Beehive Anyway?”
“here in northeast Florida and southeast Georgia, we regularly find various critters making their home inside the beehive. At the Jacksonville, FL, NWS office, we usually replace the beehive on our annual visits. After getting the dirty beehive back to the office, and before carefully taking it apart for cleaning, we leave it in a secure outside area for a day to let any “residents” inside vacate, then we dunk it in a bucket of water to flush out any reluctant squatters.
Red Wasps
Our most common uninvited guest is the red wasp. These wasps enjoy the shelter, security and height of the beehive. They usually build their nest toward the top of the unit. We have found all size nests, from small ones with only four or five holes/cells to large nests that cover an entire louver. From personal experience, I have learned to be careful in transporting the dirty beehives. At a rural site about 2 hours away from Jacksonville, I removed a beehive from its post and set it on the ground while I put a clean beehive in its place. I rolled the dirty beehive on the grass, then shook it. Nothing came out or buzzed, so I placed it in the back of the Coop van. About 10 minutes after leaving the Coop site, I noticed a couple of wasps on the back window. A few minutes later there were about 5 to 10 wasps on the back window. A few more minutes and there were more wasps–and they were making their way forward! Driving with the windows down, I finally found a good place to pull over so I could remove the beehive and air out the van…”
Each individual USCRN temperature station starts with a minimum 0.3° C error.
There are no procedures for testing individual stations to determine their exact field error range. No matter how long that station is left alone, the error assumed is the original specification error.
Bluntly put; temperature land stations are assumed functioning at peak accuracy. No effort is made to record and maintain an accurate list of station error bounds.
Nor is it possible to average together hundreds to thousands of temperature stations, each with a minimum error when new, and end up with a claimed error of 0.6° C for their anomaly. Even worse, they claim that global temperature is spiking 0.6° C when the smallest error bounds are 0.3° C for each station!
Nor can electrical temperature connections left in the field exposed to critters and weather be trusted to operate as if under pristine conditions. Otherwise, the old Triumph Spitfires would never have suffered from Lucas, Prince of Darkness electrical faults as they got older.
ATheoK’s central point is eminently sound. It is a scandal – and a revealing one – that very little of the trillions being spent on making non-existent global warming go away has been spent on establishing climate-reference networks of ideally-sited, standardized stations such as that of the United States.
Inferentially, the necessary expenditure (not particularly large compared with the various boondoggles by which the rich enrich themselves at the expense of the rest of us) is not being made because the usual suspects are terrified that their belief may be exposed as false.
Thank you, Lord Monckton, for another excellent paper.
The reason that warmists can produce opposing graphs showing temperature rising is that we are looking at a total increase of less than 1 degree C since 1880. If temperature was graphed in the normal fashion this rise would not be visible so it is graphed as an anomaly with an enlarged scale to make it seem significant.
The most important thing is that the rate of temperature increase over the past 18 years and 8 months bears no relationship to the rate of increase in manmade CO2 emissions over the same period.
It is amazing and a little sad that eminent organisations such as The Royal Society are prepared to go along with this farce.
Littleoil has grasped the main point firmly: the mismatch between rapidly rising CO2 concentration and non-rising temperature, over as long a period as almost 19 years, is not something that should be shrugged off lightly.
As for the Royal Society, it is a disgrace and should be defunded.
The El Niño event is an impulse, not part of a trend and should be ignored as should all cyclic impulse events. They are of no long term consequence.
dp is right: the usual suspects have taken shameless advantage of a big el Niño, trying to leave the impression that the sudden recent increase in global temperature is down to global warming, when most of it is part of a natural synoptic cycle.
A more shameless, hypocritical comment is not possible.
A more shameless, hypocritical comment is not possible
Spot on. Monckton has been using the 1998 El Niño spike (and mathturbation) to create the illusion of a pause for years. I give it 5 years before he’s doing the same thing with 2016.
John & Jim, setting ENSO completely aside for the moment, i.e., assuming, arguendo, that the 1997/98 ENSO event either never happened or had no great impact on land surface temperature, this fact remains:
CO2 UP. WARMING STOPPED.
For over 18 years.
Apparently, so far as we know at this time, the small short-term spikes in surface temp. (smaller than in 2003 and in 2010, btw) have not ended the stop in warming.
Janice,

There is zero evidence that “warming stopped for over 18 years”. Monckton’s pause is an artefact of cherry picking. Just because he cherry picks from “all available” data doesn’t make it less of a cherry pick. What makes it a cherry pick is that he throws out the data prior to his ‘pause’, which makes the estimated trend incorrect. This is how Monckton sees the RSS data:
This cannot be justified physically or statistically. Ask Monckton how and when temperatures got to the level at which they allegedly paused. Was it a magic jump all in one month? A trend by definition should be a continuous function of time. If you fit a continuous “broken stick” (or piece-wise) model to RSS, with a change in trend 18 years and 2 months ago, this is what you get:
No honest analysis of any global temperature data set shows an 18-year pause in global warming.
That is a fact.
Unfortunately the image links above are broken:
Here is Monckton’s view
https://www.flickr.com/photos/138870749@N04/shares/GESZk1
And here is the continuous fit:
https://www.flickr.com/photos/138870749@N04/shares/cw906u
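For anyone who wants to reproduce this at home, a minimal sketch of such a continuous “broken stick” fit, using synthetic data rather than the actual RSS series (the breakpoint and slopes here are arbitrary illustrations):

```python
import numpy as np

# Piecewise-linear ("broken stick") least-squares fit with a known
# breakpoint. The hinge term max(t - t_break, 0) adds extra slope only
# after the break, so continuity at the break is built into the model.
t = np.arange(200, dtype=float)
y = np.where(t < 120, 0.01 * t, 1.2 + 0.002 * (t - 120))  # synthetic

t_break = 120.0
hinge = np.maximum(t - t_break, 0.0)
X = np.column_stack([np.ones_like(t), t, hinge])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

slope_before = coef[1]
slope_after = coef[1] + coef[2]
print(round(slope_before, 4), round(slope_after, 4))  # recovers ~0.01, ~0.002
```

Unlike fitting a separate line to the post-break segment alone, this constrained fit cannot produce a jump at the breakpoint, which is the statistical point at issue in the comment above.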
Who’re you going to believe? Johnny and Jimmy, or your lying eyes?
Excuse me while I suffer a minor mirthquake.
@Janice Moore
February 7, 2016 at 12:44 pm
I have a secret crush on you, Janet, but you know the 18+ year pause nonsense ends in just a couple of months. Concentrate on the actual message conveyed – it couldn’t be more accurate.
John@EF says:
…the 18+ year pause nonsense…
Less than one year ago the alarmist crowd was tripping over their feet, trying to explain why the so-called “pause” was happening.
But now they’ve just decided to lie and say it’s “nonsense”.
You shouldn’t lie about it now, John, because the internet doesn’t forget:
http://americandigest.org/a_the_30.jpg
Janice, You’d think that if I had a crush on you I’d at least get your name right. Hahaha. Sorry.
@ur momisugly dbstealey February 7, 2016 at 7:52 pm
Predictably fragile 18+ years, dbs – meaning, smoke. I feel sorry for MoB – his monthly posts just won’t have the same sizzle … and he knows it.
John@EF, you’re making no more sense than ‘Jim’.
My point stands, which is why you’re emitting nonsense comments now. Less than a year ago everyone on all sides of the ‘climate’ debate accepted the plain fact that global warming had been stopped for many years.
Now you’re lying about it. That’s what the Narrative requires the lemmings to do, so that’s what you’re doing.
Garbage. There is a world of difference between discussing reduced rate of warming for a short period and accepting that global warming “stopped for many years”. It never stopped, and no climate scientist ever said it did. Some said it may have temporarily slowed, others said there was no real evidence of even that. All said the long term trend was inevitably up. Sadly, they will be proven right.
“jim” sez:
“Garbage”.
Jim’s comment is ‘garbage’, because it’s no more than a baseless assertion.
Poor ‘Jim’ is unable to refute the plain fact that global warming has stopped. So he’s lying, like the other climate alarmists who lie, because the real world is debunking their failed belief system.
Global warming has been stopped for many years now. Satellite data — the gold standard of temperature measurements — verifies that fact, and it is corroborated by more than 17,000 radiosonde balloon measurements. Those are facts, as opposed to the baseless assertions of the alarmists.
But rather than accept the verdict of an impartial Planet Earth, the alarmist crowd has chosen to lie. But their lies don’t change reality. All they do is reflect on the lack of ethics that people like ‘jim’ and ‘John@EF’ display.
It all started with Schmidt and Karl, Schmidt and Karl again (I even thought for a moment I was reading Schmidt und Karl, the way my grandmother used to talk about her experiences in the Austro-Hungarian Empire), and then it was SchmidtKarlPropagandaAmt. That’s when I realised that the good old Lord was, well, sort of, drifting away.
More wisdom from Richard Betts: ‘It’s a bit complicated’
http://www.climatechangenews.com/2016/02/04/a-climate-scientist-speaks-its-a-bit-complicated/
From the “complicated” link:
“I’m one of those who thinks global surface temperatures did show a pause or ‘hiatus’… surface temperatures did slow a bit in the last decade, and now they’re speeding up,” he says.
A blip from El Nino is not “speeding up”.
“It it [sic] doesn’t affect the long run at all… but you shouldn’t ignore the fact that this did happen.”
This is begging the question. There is no assurance at all that the El Nino blip will not be the final peak before a sudden decline.
“It was’t [sic] specifically predicted but it was in the range of the models in terms of variability.”
I.e., it wasn’t predicted. Full stop. The models are all over the place. Just about any possible eventuality could have been said to be “in the range” of the variability. And, if they couldn’t predict that, how much confidence should we have in the claim that “it doesn’t affect the long run”?
Here is correct link
http://www.climatechangenews.com/2016/02/04/a-climate-scientist-speaks-its-a-bit-complicated/
“After all, theory requires that some global warming ought to occur.”
And what theory is that?
You cannot keep ignoring the surface temperature record. You cannot keep ignoring the ratio of broken warm records to broken cold records. Saying “manipulation” all the time is not very convincing.
http://www.ncdc.noaa.gov/cdo-web/datatools/records
As usual, Mr Twitotter fails to read the head posting before attacking it. The rates of warming shown by not one, not two, but three surface temperature datasets are shown in the head posting, not once, not twice, but in three separate graphs.
That the world has warmed compared with the Little Ice Age seems both undeniable and unsurprising. But, as the head posting makes quite clear, on all datasets, including all the surface datasets, there has been around one-third to one-half of the warming that had been predicted. That, on any view, is a serious discrepancy, which indicates to the impartial mind that the models may have profitably exaggerated the warming effect of CO2.
Monckton of Brenchley.
“As usual, Mr Twitotter fails to read the head posting before attacking it.”
Oh look the “Lord” has insulted me, isn’t he a clever little sausage.
I think I shall refer to him as a Sith Lord from now on, it is fitting.
What a jerk.
I will leave it to others to tear apart your faulty reasoning.
Ah, the jerk looked into the projection mirror, and has seen himself.
Lord Monckton is correct, so Harry deserves the insult for trying to run with the big dogs. Back in your kennel, Harry. Chihuahuas can’t keep up here.
I am puzzled by one of Lord Monckton’s claims:
Aside from the ocean warming, the land-based warming was prominent over Siberia and northern China, Europe and central America, inferentially owing much to urban heat-island effects.
Siberia is very thinly populated. Therefore I would expect it to be one of the last places where weather stations would be affected by the urban heat-island effect. Is there any evidence of changes in the areas where those stations are located such as increases in population, growth in transport, or a tendency for inhabitants, as they become richer, to make themselves more comfortable by turning up the heating during the Siberian winter?
I am not asking these questions in order to make a debating point. It just struck me that warmists would challenge the comments on the urban heat-island effect and ask what the evidence is.
There are now some large centers both of population and of industry in Siberia. Or the supposed warming could simply be mismeasurement: the Russian data series are not noted for their continuity.
https://www.youtube.com/watch?v=fL5-9ZxamXc What Is All The Fuss About? Professor Tim Noakes is a respected South African sports scientist who has been championing a high-fat, moderate-protein diet. The reason I post this video is because he exposes the dangers of consensus science and also champions a return to open-minded universities. If science is to continue progressing, it must always be open to addressing the “unconventional”. http://www.biznews.com/health/2015/06/18/tim-noakes-in-his-own-words-why-i-choose-to-go-on-trial/
Lord Christopher Monckton of Brenchley, this is what you are up against.
Source: Courtesy of the Noakes Foundation
However, during last week’s hearings, it emerged that the HPCSA have been procuring secret reports, and that the charge against Noakes may now be ambiguous and prejudiced. It is speculated that there’s an organised campaign to discredit him by Big Food and Big Pharma to protect the ‘medical and dietetic dogma’ and their vested interests. It’s reported that, “What the hearing has made increasingly clear is that Noakes and the science behind LCHF are threatening careers, reputations, livelihoods, businesses and profit margins”
“Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata”
Here you go, Lord Monckton.
Some parts of the ocean are not affected by surface currents. The water there is static. The blistering heat of Global Warming evaporates a lot of the water. The remaining water is hot, but also super-saline. Thus it is a lot denser than the surrounding cooler water, and forms a descending column of hot water which reaches the bottom of the ocean. It carries the heat down.
“or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.”
And I thought we all knew the answer to that. The heat will wake up Godzilla, he will rise to the surface, and scorch everything (Tokyo first, of course) with his fiery, radioactive, breath.
Now, when do I get my Nobel Prize and research grant?
Some time ago, science, with its settled view of how the universe worked, found on doing some sums that 80% of it was missing. So they invented dark matter and dark energy. They were wrong, and those of old who imagined an aether were closer to the truth. The missing part is de-spun photons: they are the energy that cycles through the entire universe. It stirs our sun into cycles; it stirs our Earth into staying molten in the middle after billions of years; it keeps us warm. The sun in its moods allows more or less to escape, thus it is, ultimately, the sun, stupid. That gives us warm periods and cold periods, including ice ages.
Thank you another very interesting article, m’Lud. I’ve made a note of your Killer Questions in order to suitably enrage the global warming fanatics.
I was particularly interested to read about something I’ve also noticed recently: A seemingly orchestrated attempt to undermine the apparent accuracy/reliability of the satellite datasets (RSS & UAH6).
The CAGW fanatics used to deny existence of the pause, then they tried to explain it away with a few dozen crackpot theories and now they just attack the satellite data. It would be good to hear what others think about this claim of theirs.
In answer to Dreadnought, the definitive statement on the relative reliabilities of the satellite and surface temperature datasets is Dr John Christy’s superb recent testimony to the House Space, Science and Competitiveness Committee. Dr Christy, of course, designed and manages the UAH temperature datasets, together with Dr Roy Spencer. His conclusion is that the satellites are better than the thermometers.
Dreadnought’s account of the slow, stepwise retreat of the climate extremists is excellent. First they hid the Pause; then, when I drew attention to it during a speech to the UN climate conference in Qatar, they sneered that I didn’t know what I was talking about; then they admitted there was a Pause; then they came up with dozens of mutually incompatible excuses for it; then they altered the land surface and sea surface temperature datasets to airbrush it away; then they realized the satellite datasets could no longer be ignored, what with Senator Cruz showing one of my monthly RSS pause-graphs on the floor of the Senate and all; then they began attacking the satellite datasets.
Their retreat is as long and painful as that of Napoleon from Moscow.
El Niño is weakening.
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/enso_update/sstweek_c.gif
https://www.longpaddock.qld.gov.au/seasonalclimateoutlook/southernoscillationindex/30daysoivalues/
Looking forward to viewing the next few months constructions of Sir Chris’s ‘Freedom Clock’ 🙂
Who knows, we could enjoy a little ‘Freedom’ ourselves 😉
The Freedom Clock will perhaps not advance for a time: but, if the current el Nino is followed by a la Nina, and perhaps a big one, then we’ll be heading for 20 years without any global warming at all, according to the two satellite datasets.
The climate scam would not be able to survive so long a period without any global warming.
Thanks, Christopher, Lord Monckton. This is a superb article.
According to Ben Santer of Lawrence Livermore National Laboratory, “the tropical mid-troposphere should warm twice or even thrice as fast as the tropical surface.”
Well, then, the climate models are wrong. But there’s more, and that makes them very wrong.
There is no justification for using the climate models to assert anything at all.
Monckton of Brenchley,
A few years back you made several references to the “endpoint fallacy”, where you said:
Could you explain how your pause, defined as
is not an example of the endpoint fallacy?
Bellman appears poorly schooled in logic. The very sentence he cites explains that the start-point of my monthly Pause graphs is calculated: it is not plucked arbitrarily from the ether.
The central point is very simple. One-third of all anthropogenic forcings since 1750 have arisen during the period of the Pause. And that period is now of sufficient length to pass the NOAA test: i.e., 15 years or more without any global warming indicates a discrepancy between the predictions and the observations.
What is more, NOAA were talking of ENSO-adjusted data. After adjustment for ENSO, the discrepancy between prediction and observation is actually wider than before making the adjustment.
There is, therefore, a problem with the official global warming theory. The problem is that the warming is either not happening at all, as the satellites suggest, or not happening at anything like the predicted rate, as the terrestrial tamperature datasets suggest.
Given that the head posting considers two satellite and three terrestrial datasets over periods commencing with three distinct starting dates, each chosen because it was the year in which IPCC had made another set of predictions against which the observed trend could be tested, the suggestion that the RSS graph in the head posting is an instance of the endpoint fallacy is feeble-minded.
Jim, the comment checks out. It is from Monckton. Somehow his affiliation URL has gotten transposed with his name in the comment form and appears to be saved that way as some default.
You may be correct about my schooling, as I fail to see how calculating an end point to get the desired trend is different to manually selecting the end point.
Not really the question I was asking, but isn’t the NOAA 15 years referring to surface temperatures?
But I was asking about the choice of the start date for the Great Pause, nothing to do with the dates of IPCC predictions. If I wasn’t so feeble-minded I might wonder if you were trying to change the subject.
Bellman
We (the skeptical community of science observers and realists) do NOT “pick” a start date for this trend. Granted, “you” have been schooled (propagandized) into “believing” the start date is “picked”, but that is exactly the opposite of the process, and your choice of believing their lie does fit YOUR mindset and prejudices. (I will grant your prejudices appear to control your thought processes, but do not (yet) accept your conjecture that you are feeble-minded. Firm-minded, prejudiced, incapable of independent thought based on the facts presented? True. But feeble-minded? Probably not.)
What actually happens each month is the following, and it was described in the original text as well. The global average satellite temperature is read from those instruments, and a “flat line” is projected BACKWARDS as far as that trend line remains indistinguishable from zero. We DO NOT “pick a point” and draw a line; we pick this month’s global average temperature (whatever it actually is) and go BACKWARDS until the trend line rises. Whatever date that point is, we report. Same process every month, regardless of whether the results confirm or contradict your assumptions. The process is neutral and breathtakingly honest. Indeed, the essence of this month’s report is that February’s report may actually show a near-continuous rise!
It is the CAGW climastrologists and catastrophe-seekers who cherry pick temperatures needed to make their propaganda more effective, who ignore inconvenient truths and measurements, and who extrapolate linear trends 200-400 and 800 years into the future incorrectly.
Now, I do understand WHY they (you) do that; and I do understand why YOU (they) believe you have to do so. Thus, I do understand why a firm-minded inflexible zealot believes that his (her?) opponents are as dishonest and as devious as one’s own associates need to be to promote their cause. But your firm-mindedness and your incorrect assumptions are grounded in nothing but your own experience of your own deceptions and lack of honesty.
“Not really the question I was asking, but isn’t the NOAA 15 years referring to surface temperatures?”
Indeed so, and furthermore after ENSO adjustment. If they had done a similar calculation for troposphere, they would probably have found a longer period, because the variation σ is about twice as high as for surface.
RACookPE1978
I think this illustrates the confusion that dcpetterson describes. That is most definitely not how Monckton calculates the pause length. If you are just going back from the present to find the first month where the trend is positive, you wouldn’t have to go back very far. If by “indistinguishable from zero” you mean go back to find a point where the warming trend is statistically significant, then you could go back much further than June of 1997 – probably as far back as 1990.
What Monckton does is go back to find the earliest point where the trend from that point to the present is not positive. If that’s not what he’s doing then I think his algorithms need some work. I didn’t notice any mention of statistical significance in its definition.
Bellman:
Your latest missive convinces me that you are deliberately obfuscating.
You ask me
I have repeatedly stated that!
The question being addressed in the ‘Pause’ calculation is
How long has there been a period of sub-zero trend in the data until now?
The nearest datum to “now” is the latest datum and, therefore, it is the start point. With the effluxion of time the data set includes a new latest datum and, therefore, the question needs to be recalculated for the new data set (which includes the newer latest datum).
In other words – as I have repeatedly told you – the data set defines the datum which must be used as the start date and the researcher has no choice in the matter.
Then, as I told you in my first post addressed to you in this thread
It seems you are making repetitive posts to disrupt the thread and are ignoring the answers you get.
Not content with that, you write this nonsense
I recognise that this is difficult for a supporter of climastrology to understand but I will ‘spell it out’ despite your difficulty.
The “desired result” is the result the researcher hopes to obtain.
(i.e. The pseudoscientist uses the data to generate an answer which promotes an idea: i.e. the ‘analyst’ desires to promote an idea).
The calculated result is the result the data set provides.
(i.e. The researcher obtains the answer from the data which adds to knowledge: i.e. the analyst desires to reduce ignorance).
You provide both nonsense and irrelevance when you write
YES! The method of the IPCC graph is plain wrong: that is what I said. And the method being wrong means it gives wrong indications whatever data set it is used on.
And you return to hypothetical twaddle when you write
THE IPCC METHOD IS PLAIN WRONG. I REFUSE TO CONSIDER WHETHER OTHER END POINTS WOULD MAKE IT MORE OR LESS WRONG.
In conclusion, in the unlikely event that you truly are as confused about the ‘endpoint fallacy’ as you claim then I yet again commend you to try to assuage your confusion by considering the straightforward case of Santer et al. (1996).
Richard
dcpetterson:
I don’t need to “check” anything because in this thread I have repeatedly refuted your bollocks; e.g. here. But either you can’t read or you won’t read.
GLOBAL WARMING HAS STOPPED: IT IS AN EX-PARROT.
Try saying out loud “Global warming has stopped” 100 times and the truth may register inside your skull.
Richard
richardscourtney:
So, if I were to claim that according to the RSS data, temperatures had been rising at over 2C per century since May 2007, (8 years and 9 months), would you see that as a valid calculated result or as an example of the endpoint fallacy?
Or when Monckton says that temperatures were cooling between 2001 and 2009 at the rate of 1C per century, was that an example of a statistical lie or a valid claim?
Bellman:
The answer to your question is NO!
The ‘endpoint fallacy’ is when a researcher chooses the ends of an analysed period to indicate a desired result.
The analysis of ‘Pause’ length does not involve choosing an analysed period. In this case, the start point is now (i.e. the most recent month for which data is available). Each previous successive month is then assessed as being the other end of a time-series to determine the longest period back from ‘now’ that shows a “sub-zero trend”.
In other words, the “endpoints” are determined by the data – they are NOT chosen by the researcher – and, therefore, the ‘endpoint fallacy’ is not possible in this case.
Richard
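The backward-looking calculation described above can be sketched in a few lines of Python. This is only a hypothetical illustration of the procedure as described in this thread, not Lord Monckton’s actual code: it tries every candidate start month, earliest first, and reports the longest window ending at the latest datum whose least-squares trend is zero or negative.

```python
import numpy as np

def pause_length(anomalies):
    """Length (in months) of the longest window ending at the latest
    datum whose ordinary-least-squares trend is zero or negative.
    Returns 0 if every such window has a positive trend."""
    n = len(anomalies)
    for start in range(n - 1):  # a trend needs at least two points
        window = np.asarray(anomalies[start:], dtype=float)
        months = np.arange(len(window))
        slope = np.polyfit(months, window, 1)[0]  # OLS slope per month
        if slope <= 0:
            return len(window)  # earliest qualifying start = longest window
    return 0

# Toy series: rising, then a spike, then flat at a higher level.
print(pause_length([0.0, 1.0, 2.0, 3.0, 10.0, 4.0, 4.0, 4.0]))  # → 5
```

Note that on the toy series the start point falls at the pre-spike month: the researcher supplies no dates, only the data series, which is the point being argued here.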
So if the IPCC had produced a graph with a trend line starting at a month that was calculated to give the fastest warming rate, that would not be an example of the endpoint fallacy?
Bellman:
Who is “changing the subject” now?
What the IPCC may have done – but did not do – is not relevant.
However, the IPCC did publish a spurious graph which did provide a variation of the ‘endpoint fallacy’, and Viscount Monckton copies it in his above essay, saying it is “one of the most mendacious graphs in the IPCC reports”. That graph was not seen by IPCC peer reviewers: it was added after we had reviewed the AR4 WG1 report but before its publication.
A clearer example of use of the ‘endpoint fallacy’ is the paper published by Santer et al. in Nature in 1996. That paper, its use of the ‘endpoint fallacy’ and its exposure were clearly and succinctly reported by the late John Daly here.
Richard
I wasn’t changing the subject, I was offering a reductio ad absurdum.
Exactly. This was the graph Monckton was referring to in the quote about the endpoint fallacy. His objection to it was that by carefully selecting the starting points they had got the trend lines they wanted. I’m asking how that is different to calculating the start point that gives the longest possible negative trend line, when a negative trend line is what was required.
So far the only difference offered is that one end point was calculated and the other was plucked arbitrarily from the ether. To my simple mind calculating a point to get the desired result would be more indicative of a fallacy than arbitrarily choosing one.
This may be misleading. You (and Monckton) appear to be implying that we can go back from the current month, start a trendline at any month between now and June of 1997, and we’ll get a flat trendline. Please correct me if I’m wrong. You appear to be saying that in order to get something other than a flat trendline, we have to go back farther than June of 1997.
If that is what you’re saying, be aware that it is not true. If you start a trendline before or after the point Monckton starts, you will not get a zero trend. You will get a positive trend. (There are a couple of exceptions; a small period after June of 1997 in which there is a spurious negative trend, and another such period in late 2001). There are exactly six months — six out of the 240 months in the last twenty years — in which you can get a zero trend. Nearly all the rest of the start points will give you a positive trend.
June of 1997 was specifically picked as one of the very few months that gives a zero trendline. In a noisy dataset it is not surprising that a few specifically-chosen datapoints can yield spurious or anomalous trendlines from time to time. That should not be taken to imply that some specific and physically real Event happened in June of 1997 to change the speed at which the climate is warming.
Bellman:
I agree.
The way any calculation of any trend should be done is to fix the start point PERMANENTLY and go forwards.
The way Monckton does it is the exact reverse.
And it is disingenuous to say the data “chooses” the end-points, and is therefore not cherry-picking, because you start at a moving endpoint (the present) – which is really the start – and end it when the data no longer agrees with the trend you want. BOTH ends are moving, not just the chronological end.
If you also use a feasibly unphysical “spike” in the GMT data (which is what the Sat sensing of the 97/98 Nino is) – you could run for quite some time with an equally unfeasible zero “trend”.
I would venture to suggest that if the “spike” were an equally unfeasible dip in GMT it would be ruled obvious.
BTW: If you say the trend following that Nino (blue) is robust and reliable then you should also accept that the trend before it (red) was likewise. Yes?
In that case it would only be a “pause” of the long-term trend from the start (purple) of the whole series IF the current end (Monckton’s start) “pause” (blue) trend were BELOW that prior trend extended to the present……
http://woodfortrees.org/plot/rss/from:1978/to:1997/trend/plot/rss/plot/rss/from:1997/to:2015.8/trend/plot/rss/trend
And it isn’t.
Actually it won’t be until ~2025, even if the “pause” remains.
The “pause” therefore is logically a STEP-UP in GMT that has still not leveled-off long enough to fall back to the original trend.
It is of course NOT, just as it is not a “pause”.
The only reasonable trend that can be taken from the series is the purple one.
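Toneb’s three-trend comparison can be illustrated numerically. The series below is purely hypothetical – a rising segment, a one-month spike, then a flat segment at a higher level – and is not real RSS data; it simply shows how a flat recent trend can coexist with a positive full-series trend when there is a step up.

```python
import numpy as np

def segment_slope(values):
    """Ordinary-least-squares slope (units per step) of a 1-D series."""
    values = np.asarray(values, dtype=float)
    return np.polyfit(np.arange(len(values)), values, 1)[0]

# Hypothetical anomaly series: rise, spike, then flat at a higher level.
series = [0.0, 0.1, 0.2, 0.3, 1.5, 0.8, 0.8, 0.8, 0.8, 0.8]

before = segment_slope(series[:4])  # trend before the spike: positive
after = segment_slope(series[5:])   # trend after the spike: flat
whole = segment_slope(series)       # full-series trend: still positive
```

Here `after` is zero while `whole` remains positive, which is the step-change situation being argued: a level post-spike segment does not by itself pull the whole-series trend down to zero.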
Bellman:
I admit to wondering if you are deliberately being obtuse.
You suggested Viscount Monckton was “changing the subject” when he used an IPCC action as illustration. And you have now objected to my suggesting you were “changing the subject” when you suggested a hypothetical action that the IPCC may have adopted but did not!
I pointed out
and you have replied
NO!
There are three differences.
First, and most important, the data defines the start-point of the ‘Pause’ but the IPCC chose the lengths of its time period.
Second, the datum which is the start-point of the Pause is the calculated result which comes from the data and is NOT “the desired result”. However, the arbitrary start-points of the IPCC graph are chosen to provide desired results; i.e. an indication of accelerating warming.
If you cannot understand the profound difference between the calculated result of the Pause and the IPCC choices of start points, then consider repeating those trend determinations from the other end of the IPCC graph (i.e. from 1850 or 1860) instead of 2005. There would be no Pause (and no present Pause because the present would not be included) but the indication of the IPCC’s arbitrary start points would be decelerating warming; i.e. the opposite of the ‘indication’ the IPCC purported to be presenting.
Third, choosing an end point enables intentional or unintentional bias of the researcher but calculating it from the data does not enable that bias.
I again suggest you consider the less complicated example of Santer et al. if you honestly are failing to understand the matter.
Richard
Toneb
You assert
Nonsense!
The question addressed by the ‘Pause’ calculation is,
How long has there been a period of sub-zero trend in the data until now?
The method you asserted would not answer the question being addressed.
And ‘now’ changes with the effluxion of time, so it is a new – and different – question each month.
Richard
dcpetterson:
I am “implying” nothing. I am making clear and unambiguous statements.
Richard
richardscourtney
Could you explain why the data defining the start-point does not mean choosing the start point to get the result we want, and why choosing an arbitrary length for a time period makes it easier to get the result you want?
Just to be clear I’m not arguing for that IPCC graph. I agree you cannot demonstrate accelerated warming by choosing periods of different length. But your claim (I assume) is that they deliberately chose the most recent start point to show maximum warming, though they did this by choosing an arbitrary 25 year period.
By contrast the start-point of the Great Pause is the calculated date that will give the longest period with a negative trend. I find it difficult to understand how this is not the “desired result” – in this case the desire being to show the longest possible pause.
As I said, I’m not defending the IPCC graph. But try the same logic on the RSS data. Start at the beginning and see how far you can go before the trend becomes positive.
Calculating an end point most definitely can enable bias. Consider my “changing the subject” comment before – assume the IPCC hadn’t chosen an arbitrary 25 year period for their graph, but had calculated the start point that would give them the fastest growth rate over a reasonable period. Would you consider that to be more or less biased?
Very good, Richard. I’ll give some details about the RSS record, then ask you to unambiguously agree with some further unambiguous statements.
Examining the RSS temperature record from 1979, there are about 440 months in the total data record. Six of those months, or about 1.4% of the total, will give you a zero trendline from that month to the present. About 430 of the months in the RSS data set, or roughly 98% of the total, will give you a positive trendline. The others – all centered around three recent and very strong El Niños – will give a negative trendline, because they are recent and start at very, very warm points.
Now, given that, I will ask you to unambiguously agree with the following unambiguous statements.
1) Monckton went looking for the months from which there is a zero trendline. He consciously and intentionally picked one of those specific months to be the start point for a “pause”, i.e., the one farthest in the past.
2) This is an example of “cherry picking” and of an endpoint fallacy, because he went looking for that specific handful of months to satisfy a desired result, and then chose the one he liked best.
3) Further, this particular example of “cherry picking” produces a result that is at odds with the result one gets from choosing nearly any one of the other 440 or so months in the RSS record. Only about 1.4% of the total data series agrees with Monckton’s position. The other 98.6% of the data does not.
4) Choosing one out of six examples, which together amount to less than one and a half percent of the total data, and using that one specifically chosen example to make a point that matches a pre-determined desired result – a result which runs counter to 98.6% of the data – that is a perfect example of “cherry picking” and of an endpoint fallacy. This is clearly and unambiguously true.
Are we agreed?
Bellman:
My latest reply to you has appeared in the wrong place. It is here.
Richard
richardscourtney:
So you agree there has been a step change UP in GMT IF you go with Monckton’s “analysis”?
dcpetterson:
You raised a ‘strawman’ that you constructed from your untrue assertion that I was “implying” something. I blew away your ‘strawman’ by pointing out that I had implied nothing.
You have responded by stating a load of irrelevant and untrue twaddle then asking me
I think we agree that you are a time-wasting troll but not much else.
This is because having failed to put words in my mouth you now provide 4 spurious points each of which I have already repeatedly refuted in this sub-thread.
Firstly, you assert
NO! ABSOLUTELY NOT! EXACTLY THE OPPOSITE!
He consciously and intentionally used the most recent month to be the start point for calculating the ‘Pause’, i.e., the one nearest to now.
You follow that falsehood with this untrue assertion
NO! THAT IS COMPLETELY UNTRUE!
The question being addressed in the ‘Pause’ calculation is
How long has there been a period of sub-zero trend in the data until now?
The nearest datum to “now” is the latest datum and, therefore, it is the start point. With the effluxion of time the data set includes a new latest datum and, therefore, the question needs to be recalculated for the new data set (which includes the newer latest datum).
In other words – as I have repeatedly said in this sub-thread – the data set defines the datum which must be used as the start date and the researcher has no choice in the matter. And – as I have also repeatedly said in this sub-thread – each previous successive month is then assessed as being the other end of a time-series to determine the longest period back from ‘now’ that shows a “sub-zero trend”.
In other words, the “endpoints” are determined by the data – they are NOT chosen by the researcher – and, therefore, ‘cherry picking’ and the ‘endpoint fallacy’ are each not possible in this case.
And you follow that with this insanity
There is NO “cherry picking”: there is only a determination of how long the ‘Pause’ has existed until now. And, of course, the remainder of the data set does not indicate the ‘Pause’ because the ‘Pause’ is indicated to have only existed in the determined period of the ‘Pause’.
But that madness was merely you ‘warming up’ because you follow it with this crescendo of grotesque insanity
Your claim that the analysis is other than it is demonstrates you are spouting complete bollocks. This is clearly and unambiguously true.
Richard
@Bellman and dcpetterson,
Guys, it’s really not that complicated. In order to AVOID the endpoint fallacy or accusations of cherry picking, what Monckton has done is ask “how far back can you go before you get a non-zero trend?” – as R. Courtney has pointed out.
So you take the most recent data (not cherry picking, since it is implied in the question) and look BACK through time. Calculating a trend is insensitive to the direction in which you calculate it: data either goes up or down, and we just want to know if there is any kind of trend.
As a consequence, the length of time of no trend may vary depending on the variability of the new data as it comes in. Sometimes the length shortens, but in general it has lengthened.
It is simply a way of illustrating the point that, despite unprecedented emissions, there has been no corresponding increase in temperatures, and doing it in a way that is logically sound and objective, since the question obviates the need to choose dates.
If you wanted to be critical you could say “yeah, but the period also covers the 1998 anomalous El Niño, which is going to skew results”, but then you would have to say the same thing about the following La Niña.
Monckton’s approach is an exceedingly good one because it absolutely avoids the need to choose end points.
Toneb:
I agree to ignore irrelevant questions provided as ‘red herrings’.
Richard
I’ll show this plot again, because it explains exactly what is happening. It shows all possible trends ending at end-2015. The start month is on the x-axis. I show three surface datasets and two troposphere datasets. The troposphere curves are at the bottom, and RSS is the lower of the two.
http://www.moyhu.org.s3.amazonaws.com/2016/1/trend0.png
So that question
“How long has there been a period of sub-zero trend in the data until now?”
isn’t right. It is asking where is the first crossing of the x-axis. For RSS I’ve marked that with a red ring.
But as you go back, in fact starting at most months gives you a positive trend. There are very few exceptions, and of course, the surface measures come nowhere close.
So it isn’t true that there is no endpoint selection – there is scientific endpoint selection.
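The trend-versus-start-month curve described here can be reproduced in outline. The sketch below is a simplified reconstruction, not Mr Stokes’s actual code: it computes the least-squares trend from every candidate start month to the final datum, using a toy series rather than real RSS data; the “Pause” start is simply the earliest point where this curve crosses into non-positive territory.

```python
import numpy as np

def trends_to_present(anomalies):
    """For each possible start index, the OLS trend (per step) of the
    series from that index to the final datum."""
    data = np.asarray(anomalies, dtype=float)
    return [np.polyfit(np.arange(len(data) - s), data[s:], 1)[0]
            for s in range(len(data) - 1)]

# Toy series: rising, a spike, then flat. Most start months give a
# positive trend; only starts at or after the run-up to the spike do not.
slopes = trends_to_present([0.0, 1.0, 2.0, 3.0, 10.0, 4.0, 4.0, 4.0])
```

Plotting `slopes` against start month gives the kind of curve linked above, and makes the argument on both sides explicit: the earliest zero-crossing is determined by the data, but it is also only one point on a curve that is positive almost everywhere else.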
Nick S.:
The “endpoints” are determined by the data and they are NOT chosen by the researcher so ‘cherry picking’ and the ‘endpoint fallacy’ are not possible when conducting the calculation of how long has there been a period of sub-zero trend in the data until now.
But you say
Nick, that is an example of sophistry which should make even you blush!
“Scientific endpoint selection” is the result of the calculation of how long has there been a period of sub-zero trend in the data until now.
Richard
Nick Stokes,
Thanks for posting that graph. As you can see, the RSS trendlines cross the zero point six times. Those are the six months in the 440+ month-long RSS satellite history where there is a zero trendline from that month to January of 2016.
Monckton picked one of those six months to be the start of his “pause”, and he picked one of them because he went looking for a zero trendline. None of the other months in the 440+ month-long RSS dataset gives a zero trendline. None of them.
Consciously and intentionally choosing one endpoint in order to create a desired result is an example of an endpoint fallacy.
If you plot a trendline from, say, December of 2015, or January of 2015, or January of 2008 to January of 2016, you will get a positive trendline. Go to woodfortrees and try it.
richardscourtney:
By no means “an irrelevant question” my friend – a fundamental one and one that is a direct consequence of accepting Monckton’s “pause” as valid.
The trend prior to the cherry-picked one MUST be equally valid.
That trend remains below the extended “pause” zero trend until at least 2025.
So the pause HAD to happen, as it is a leveling-off after an (unphysical) step-jump in global average temperature at the time of the 97/98 Niño.
The “pause” is not valid.
And the step-jump is not valid.
They are NOT mutually exclusive.
Yesterday, richardscourtney asked:
The answer is, “zero months”.
Go to woodfortrees.org and plot a trendline in the RSS data from December 2015 to January of 2016. You will see the trendline is up. One cannot go even one month into the past from January of 2016 and get a “sub-zero trend.”
The only way to get anything other than a positive trend is to cherry-pick a month near a massive El Nino year. All other months show upward trends.
And even then, you can’t get anything other than a positive trend if you start before June of 1997.
Monckton did not keep checking each month going backwards until he found a “sub-zero trend.” He did the opposite. He kept checking each month going backwards until he found a zero trend. He did that because he wanted to find a zero trend, and he kept looking until he found one.
That is cherry-picking, and it is an example of endpoint fallacy.
dcpetterson:
You write:
NO! Read the above essay and you will learn
Richard
Toneb:
In an attempt to justify your irrelevant ‘red herring’ you fallaciously assert ‘cherry picking’. I can only assume that reading comprehension is beyond your limited intellectual ability.
Richard
Richard Courtney,
The gaggle of climate alarmists here who are trying to claim ‘cherry picking’ don’t seem to have a clue about why the start year was picked, or by whom.
In an interview Dr. Phil Jones was asked if global warming had stopped. He replied, “Yes, but only just.” He added that fifteen years would need to pass before it could be stated with greater than a 95% statistical certainty that global warming had stopped.
That was Dr. Jones’ designated starting period. No doubt Jones felt very confident that global warming would resume within his 15-year period. He was wrong. It has now been longer than 18 years since 1997.
Since Jones is an arch-Warmist (see the Climategate emails), his designated starting period wasn’t something invented by skeptics of the “dangerous AGW” hoax. It was stated by one of the alarmist clique’s main players. Skeptics have referred to it ever since.
Now they’re stuck with Dr. Jones’ definition of when the so-called “pause” began. These partisan pseudo-scientists need to go ask Jones about it. He’s still around. Maybe he can tap-dance his way out of what he said, but we’re just going by his own definition.
As stated upthread, less than a year ago everyone on all sides of the ‘climate change’ debate was in agreement: global warming had stopped. They were busy trying to find reasons to explain it:
But now, 18+ years after global warming stopped, the alarmist crowd has changed course. Since the real world is still debunking their scare, they have decided to lie outright. The new Narrative is “Global warming never stopped!” So the lemmings all jump aboard and repeat the lie, because that’s what lemmings do.
They can lie all they want, but the internet doesn’t forget. And when they lie, their credibility evaporates. That’s part of the reason the public is turning on them. No one likes liars.
richardscourtney,
Yes, Monckton did claim,
The problem is that this claim is untrue. The RSS data show a big spike in RSS temperatures toward the beginning of that 18 year 8 month period, which skews a trendline that starts exactly at the beginning of that period. You simply cannot get a flat trend line if you start anywhere else.
Monckton’s defenders here are claiming that Monckton checked each month, going backwards in time from January of 2016, until he found a month that did not have a “sub-zero trend”.
If he had done that, he would have stopped at December of 2015, because the trend from December of 2015 to January of 2016 is not a “sub-zero trend”. Do you disagree?
What he did instead is to check every month, going backwards from January of 2016, until he found one with a zero trend.
There are six such months in the RSS satellite record. He picked the earliest of those months for the start date of his “pause”.
You, and others, seem to be arguing that starting a trendline at any month after June of 1997 will give you a flat trend. It won’t. Please, try it for yourself and see. There really are only a very few months in the entire RSS record since 1979 that give a flat trendline, and Monckton carefully chose one of them.
You don’t have to believe me or to trust me. Please, try it yourself. I invite you to prove me wrong, which should be very easy to do if I am wrong.
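For readers who want to try exactly that, here is a minimal sketch of the backward-scan idea on purely synthetic data (not the RSS series); the function names and the shape of the fake series are illustrative assumptions, with the “pause” start defined as the earliest month from which the trend to the present is zero or negative:

```python
import numpy as np

def slope(y):
    """OLS slope of y against equally spaced time steps."""
    x = np.arange(len(y))
    return np.polyfit(x, y, 1)[0]

def pause_start(series):
    """Earliest start index from which the trend to the end of the
    record is zero or negative; None if every start gives warming."""
    for start in range(len(series) - 1):
        if slope(series[start:]) <= 0:
            return start
    return None

# Synthetic illustration (NOT the RSS data): a warming ramp, then a
# flat noisy stretch that opens with an el Nino-like warm spike.
rng = np.random.default_rng(0)
ramp = np.linspace(-0.3, 0.2, 120)
flat = 0.2 + 0.05 * rng.standard_normal(224)
flat[:12] += 0.3  # warm spike at the start of the flat stretch
series = np.concatenate([ramp, flat])

start = pause_start(series)  # the unique earliest "pause" start month
```

With data shaped like this, the scan stops at a single month near the warm spike: starting any earlier (inside the ramp) gives a positive trend, which is the uniqueness both sides of the thread are arguing about.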
dcpetterson:
My reply to your most recent pollution of this thread is in the wrong place: it is here.
Richard
dbstealey:
Yes. As far as I can see this latest “gaggle” of warmunists lacks a clue about anything. They even lack ability to read.
Richard
::: sigh. :::
Okay, you’re unwilling to examine the data. I understand. The data is available if you ever want to look at it.
Thanks for the conversation. I hope if we converse again you would be a little more polite, but I understand that as well. Passion can sometimes substitute for reason when one does not consider the data.
Thanks again.
dcpetterson,
I’ve posted quite a few data-based charts at the end of this comment thread. Go there, and you will see that many of your assertions and beliefs are wrong.
I understand that you were probably misinformed because you got your misinformation from propaganda blogs like neo-Nazi John Cook’s ‘skepticalscience’ blog. But you won’t find the truth there, all you will find are attempts to lead you in their desired direction.
If you read this site you will find that it doesn’t censor views just because they’re wrong, or not favorable to the site owner. All points of view are welcomed and encouraged. That is not the case at most alarmist blogs, as many commenters here can tell you.
Your arguments stand or fall based primarily on their credibility. If you say extreme weather events are rising, or that the natural sea level rise is now accelerating, you need evidence, not partisan blogs written by someone who calls himself the ‘Scribbler’.
Pick a subject and we’ll hash it out. Best evidence and data wins. And always keep one thing in mind: skeptics of a hypothesis have nothing to prove.
dcpetterson:
That is NOT acceptable!
I have considered the data and YOU have misrepresented it as I have explained to you in this thread e.g. here.
Your refusal to accept reality is your delusion, nothing more.
Richard
dbstealey February 8, 2016 at 9:34 am
Richard Courtney,
The gaggle of climate alarmists here who are trying to claim ‘cherry picking’ don’t seem to have a clue about why the start year was picked, and by whom.
As indicated below, neither do you, Richard; your memory is letting you down.
In an interview Dr. Phil Jones was asked if global warming had stopped. He replied, “Yes, but only just.” He added that fifteen years would need to pass before it could be stated with greater than a 95% statistical certainty that global warming had stopped.
In February of 2010, Phil Jones was asked some questions in an interview with the BBC. The distorted view of his reply is what you are referring to.
The question was:
“Do you agree that from 1995 to the present there has been no statistically-significant global warming?”
To which Jones replied:
“Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.” (The trend was actually significant at the 93% level).
Why was 1995 chosen by the interviewer as the starting point for this question? Prior to that interview, Lindzen and Motl had both pointed out that 1995 was the closest year for which the answer to this question is “yes”.
Dr. Jones’ designated starting period was 1997-98. No doubt Jones felt very confident that global warming would resume within that 15-year period. He was wrong. It has now been longer than 18 years.
So the designated starting point was 1995 and it was designated by the interviewer, not Jones!
In fact in another interview with the BBC about a year later Jones notes that the HadCRUT warming trend since 1995 was by then statistically significant:
“Basically what’s changed is one more year [of data]. That period 1995-2009 was just 15 years – and because of the uncertainty in estimating trends over short periods, an extra year has made that trend significant at the 95% level which is the traditional threshold that statisticians have used for many years.
“It just shows the difficulty of achieving significance with a short time series, and that’s why longer series – 20 or 30 years – would be a much better way of estimating trends and getting significance on a consistent basis.”
As a follow-up, are Monckton’s trends significant at the 95% level?
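As a rough illustration of what that significance question involves, here is a naive ordinary-least-squares check on a synthetic monthly series (not any real dataset); it assumes independent residuals, which understates the uncertainty for autocorrelated temperature data, so a real answer would need a wider interval:

```python
import numpy as np

def trend_with_ci(y, per_decade=120):
    """OLS slope in units per decade, with a naive 95% half-width.

    Assumes independent residuals; monthly temperature anomalies are
    autocorrelated, so the true interval is wider than this.
    """
    n = len(y)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    s2 = resid @ resid / (n - 2)                     # residual variance
    se = np.sqrt(s2 / np.sum((x - x.mean()) ** 2))   # std error of slope
    return slope * per_decade, 1.96 * se * per_decade

# Synthetic 15-year monthly series (NOT real data): a 0.12 C/decade
# trend, i.e. 0.001 C/month, buried in 0.1 C white noise.
rng = np.random.default_rng(1)
months = 180
y = 0.001 * np.arange(months) + 0.1 * rng.standard_normal(months)

trend, half_width = trend_with_ci(y)
# "Significant at 95%" in this naive sense means the interval excludes zero.
significant = abs(trend) > half_width
```

This is the same arithmetic behind Jones’s “significant at the 93% level” remark: whether a given trend clears the bar depends on its size relative to the noise and the length of the record, not on the trend alone.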
Phil.:
You really are a funny guy!
dbstealey wrote
Please note, dbstealey wrote that, not me.
I replied to him saying
And you have proved my reply is right: in your response you quoted what dbstealey had written and attributed it to me, saying in response
Laugh? I couldn’t make this stuff up.
Anyway, thank you for so clearly demonstrating that my reply to dbstealey is right; i.e. you warmunists even lack ability to read.
Richard
richardscourtney February 8, 2016 at 12:37 pm
And you have proved my reply is right: in your response you quoted what dbstealey had written and attributed it to me, saying in response
I should have realized that there was too much content in that post to be from you!
It would help if stealey used a consistent method of distinguishing between his own material and quotes from others, like block quotes, or italics or quotation marks.
Sorry for attributing stealey’s error to you.
Phil.:
What I wrote is right.
Your response started with an error which demonstrated what I wrote is right.
I replied pointing out that your error demonstrated what I wrote is right.
You have responded to that by claiming quantity equates to quality, and by admitting your error but failing to acknowledge or apologise for it while pretending you have not misrepresented what Phil Jones said.
Your error was a demonstration that you cannot read what people write and you misrepresent what they say. Your pretending you understood what Phil Jones said is further demonstration of that because dbstealey was right about what Phil Jones said and you are wrong (as usual).
Richard
richardscourtney
Instead of assuming I’m obfuscating, why not assume I’m really stupid and that I genuinely don’t understand the point you are making?
But what do you mean when you say “the data set defines the datum” and “the researcher has no choice in the matter”? The researcher has chosen to define the Pause using a specific criterion – namely, the longest period leading up to the present which will produce a negative trend. Now, having chosen this definition, there will be only one month that meets that criterion. You seem to be implying that because this starting month is unique for any given data set, that means it is defined by the data and not chosen by the researcher. Am I understanding this correctly?
My point, however, is that the definition the researcher has chosen simply defines a cherry-picked start point – that is, a starting month that produces the desired result, namely the longest possible negative trend. The end point fallacy is implicit in this definition.
It’s possible I’m not understanding what Monckton means by the endpoint fallacy, or that you are misunderstanding it. Maybe you could explain what you think it means. I must admit that although Monckton describes it as a well known fallacy I can find little reference to any fallacy by that name. But I assumed that what he means by it is choosing the end points in a trend calculation in order to get the best result. This is certainly a problem in statistical analysis as any confidence you can attach to such data will be overstated.
I think I get what you are saying here. You are defining “desired result” in moral terms. You are saying only pseudoscientists would stoop to looking for results they desire, whereas Monckton is an honourable man, so any result he obtains cannot be the result of a fallacy.
The trouble is that statistics doesn’t care about morality. If it’s fallacious for someone to choose endpoints to get a result you don’t like, it’s just as fallacious to do the same thing to get a result you do like.
agnostic2015
I’m not disagreeing at all with your explanation of how Monckton calculates the start point for the pause; I’m just not sure I understand how his procedure avoids the end point fallacy or cherry picking when it’s implicit in the method that it will choose the end point that gives the longest possible period of negative trend.
dbstealey:
I don’t know if that was actually what Jones said, but if he did he was mistaken. You can’t specify a number of years required to establish a statistically significant result. It will depend on the data.
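Bellman’s point here can be put in numbers. A back-of-envelope sketch under a white-noise assumption (the function name and figures are hypothetical, not drawn from any real anomaly series) shows how the record length needed for a trend to clear the 95% bar scales with the noise level:

```python
import math

def months_needed(trend_per_decade, noise_sd):
    """Rough months of monthly data needed before a linear trend of the
    given size stands out from zero at the 95% level, assuming white
    noise of the given standard deviation. Real temperature anomalies
    are autocorrelated, so actual waits are longer; the point is only
    that the answer depends on the data, not on a fixed span of years.
    """
    b = trend_per_decade / 120.0  # trend per month
    # The OLS slope standard error over n months is roughly
    # sd * sqrt(12 / n**3), so significance needs
    # |b| >= 1.96 * sd * sqrt(12 / n**3); solve for n.
    n = (1.96 * noise_sd * math.sqrt(12.0) / abs(b)) ** (2.0 / 3.0)
    return math.ceil(n)

# Hypothetical numbers: a 0.12 C/decade trend in two noise regimes.
quiet = months_needed(0.12, 0.10)  # lower-noise record
noisy = months_needed(0.12, 0.20)  # doubling the noise
```

Doubling the noise level roughly doubles the wait, which is why no single “15 years” rule can apply to every dataset.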
dcpetterson
Because this nest of replies is rather overlong, I’ve replied and started anew way down below.
Bellman says:
I don’t know if that was actually what Jones said, but if he did he was mistaken.
I know that’s exactly what Phil Jones said, verbatim. I’ve been following the corruption exposed in the Climategate emails since they were made public. That’s why I was able to quote him from memory.
You may be right about Jones’s inability WRT statistics. He’s about as up to speed there as Michael Mann.
(At this time I’m joining the much shorter thread below.)
Bellman:
You ask me
I was trying to be kind.
And you say
Morality is not relevant. The issue is ETHICS.
Scientific conduct consists of research to expand knowledge.
Proper scientific conduct forbids misusing data to promote an idea.
If you had read what I wrote then you would have known that I explained to you
And I again repeat: if you don’t understand what is meant by a “pseudoscientist uses the data to generate an answer which promotes an idea” then please read this example.
I repeat, it would be good if you were to read the answers I provide to you and to discuss them instead of repeating yourself ad nauseam.
Richard
PS I do know that trying to discuss ethics with a warmunist is like trying to discuss the colour green with someone who has always been blind.
It’s fun watching this discussion of whether the end-points for the “Pause” are cherry-picked. It’s been repeated ad nauseam (almost literally) how the maths is done. I guess the main focus of the argument is whether that is a valid way of describing temperature trends.
It’s on the cards that by March, there will be only one possible start date (December 1997) for the negative trend. This will make accusations of cherry-picking even less defensible.
I think one of the commenters would feel at home in one of those less-than-democratic parliaments which one occasionally sees on TV, where instead of having a reasoned discussion, they have a punch-up. I guess this is the on-line equivalent.
Richard Barraclough.
A punch-up? Please someone video that, it will go viral.
In a sense the “discussion” is academic anyway, because any trends (or lack of them) would not be statistically significant at the 2-sigma level.
For the record, searching for a trend you like IS cherry-picking. But the nay sayers will always argue against science no matter how many times it is explained to them.
Harry Twinotter
You say
For the record, I agree with you that the “nay sayers” of the IPCC, the self-styled Team, and their supporters do cherry pick their claims of rises in global temperature since ~1880, and they “will always argue against science”.
Richard
richardscourtney:
You are probably correct to say this discussion will just keep repeating, but before we put it to sleep I’ll try another tack.
With reference to the two statements quoted above, could you explain exactly what statistical approach you would take to allow us to see for a given claim whether it is the result of trying to obtain the “desired result”, or a “calculated result”? By a statistical approach I mean something that could be used objectively, without knowing if you agreed with the result or not.
Bellman:
You say to me
The pertinent “given claim” in this thread is an analysis to address the question;
How long has there been a period of sub-zero trend in the data until now?
And I have repeatedly explained exactly what statistical approach you would take to allow us to see for that claim whether it is the result of trying to obtain the “desired result”, or a “calculated result”.
The required “statistical approach” is the method adopted by Viscount Monckton.
As agnostic2015 also explained to you here where he wrote
If the end points cannot be chosen by the analyst then the analyst cannot choose end points to obtain a desired result.
This has been told to you repeatedly.
Richard
Empirical demonstration that adjusted historic temperature station measurements are correct, because they match the pristine reference network: Evaluating the Impact of Historical Climate Network Homogenization Using the Climate Reference Network, 2016, Hausfather, Cowtan, Menne, and Williams. http://www-users.york.ac.uk/~kdc3/papers/crn2016/background.html
“Empirical demonstration that adjusted historic temperature station measurements are correct…”
This is not a scientific statement. It is a statement of absolute certainty, in the face of manipulations which considerably expand the uncertainty.
“Empirical demonstration that adjusted historic temperature station measurements are correct, because they match the pristine reference network”
No, nothing can be proved using only a couple of decades of data. The title is misleading by any standard, except of course for the standard of ‘climate science’.
Anthony Watts on the other hand, takes a much longer look and finds that there are major discrepancies between “pristine” and “adjusted”.