By Christopher Monckton of Brenchley
The sharp el Niño spike is just about to abolish the long Pause in global temperatures – at least for now. This column has long foretold that the present el Niño would be substantial, and that it might at least shorten if not extinguish the Pause. After all, theory requires that some global warming ought to occur.
This month, though, the Pause clings on. Though January 2016 was the warmest January in the RSS satellite record since 1979, the El Niño spike has not yet lasted long enough to end the Pause. That will happen by next month’s report. The RSS data still show no global warming for 18 years 8 months, notwithstanding record increases in CO2 concentration over the period.
Dr Roy Spencer’s UAH v.6 satellite lower-troposphere dataset shows the Pause has already (just) disappeared. For 18 years 2 months there has been barely any warming; indeed, to two decimal places the trend is zero:
The believers say there was never a Pause in the first place. After many unconvincing alterations to all of the principal global surface tamperature datasets over the two years leading up to the Paris climate conference, the Pause all the datasets once showed had been erased.
Significantly, the two satellite datasets continued to show a steadily-lengthening Pause till last month, but over the past year or two, long before the present el Niño set in, the three terrestrial datasets had already succeeded in ingeniously airbrushing it away.
The not necessarily reliable Tom Karl of NOAA and the relentlessly campaigning Gavin Schmidt of NASA held a joint press conference to celebrate the grants their rent-seeking organizations can milk out of their assertion that 2015 was the warmest year since 1880. But they carefully omitted the trend-line from their graph, so I have added it back. It shows the world warming since 1880 at an unexciting two-thirds of a degree per century:
NOAA’s much-altered global surface temperature record, showing a 0.9 Cº global warming trend since 1880, equivalent to just two-thirds of a degree per century.
So here’s the Houston problem, the 13th chime, the dog that didn’t bark in the night-time, the fly in the ointment, the poop in the puree, the jumbo in the Jacuzzi – the $64,000 question that would once have alerted true scientists to the possibility that somewhere their pet theory might have gone more than somewhat agley.
The Jumbo in the Jacuzzi
Since the satellites of both UAH and RSS show there has been very little global warming of the lower troposphere over the past decade or two, perhaps Schmidt and Karl would care to answer the following key question, which I have highlighted in red:
Schmidt and Karl, like the Met Office this side of the pond, say there has been rapid surface warming over the past 19 years. If so, where on Earth did it come from? The laws of thermodynamics are not up for repeal. The official theory is that CO2 warms the atmosphere and the atmosphere warms the surface. But for almost 19 years the satellites show that the lower atmosphere has barely warmed. Even if there had been CO2-driven warming higher up, for the official theory says we should expect a faster warming rate in the mid-troposphere than at the surface, how could that higher-altitude warming have magically reached the surface through a lower troposphere that has not warmed at all?
IPCC had predicted in 2007, on the basis of a single bad paper by Ben Santer of Lawrence Livermore National Laboratory, that the tropical mid-troposphere should warm twice or even thrice as fast as the tropical surface. However, as the revealing final slide shown by Schmidt and Karl at their press conference demonstrates, the predicted tropical mid-troposphere hot spot (I had the honor to name it) is in reality absent. Lower and mid-troposphere anomalies are almost identical:
One clue to the source of the warming reported by the surface datasets but not by the satellite datasets over the past 19 years is to be found in another revealing diagram presented by Schmidt and Karl at their presser.
About five-sixths of the areas of “record” surface warming shown in the NOAA diagram are areas of ocean, the el Niño-driven warming of the eastern equatorial Pacific being particularly pronounced.
Aside from the ocean warming, the land-based warming was prominent over Siberia and northern China, Europe and central America, inferentially owing much to urban heat-island effects.
In short, the pattern of warming over both land and ocean tends to confirm the satellite record: insofar as the warming is not a mere artefact of the surface-temperature tampering of the past couple of years, it appears to originate not from above in the atmosphere, where it would have originated if CO2 had been the cause, but at or below the surface.
On any view, the significant warming that the terrestrial datasets claim over the past two decades cannot have come from the atmosphere, and accordingly cannot have been caused by our enrichment of that atmosphere with greenhouse gases – if, that is, the satellites are correct that the lower troposphere has not been warming.
When the first temperature-monitoring satellites began to deliver data, NASA said the satellite temperature record would be more reliable than the surface record because the coverage was more complete, the method of measurement was standardized, and the coverage-bias uncertainties that plague the terrestrial record were absent.
Now that the satellites of both UAH and RSS have been showing so little warming for so long, expect that story to begin to change. If the satellite data are broadly correct, then either the terrestrial data are wrong owing to unjustifiable tampering or they are detecting genuine warming that may be from urban heat-island influences or from deep-ocean warming but cannot be from the atmosphere and is not caused by our sins of emission.
One way to prop up the specious, crumbling credibility of the terrestrial temperature datasets and of the CO2 panic at the same time is to attack the satellite datasets and pretend that the measurement method that NASA itself had once said was the best available is somehow subject to uncertainties even greater than those to which the terrestrial datasets are prone.
I am not the only one to sense that Dr Mears, the keeper of the RSS satellite dataset, who labels all who ask questions about the Party Line as “denialists” and in early 2016 took shameful part in a gravely prejudiced video about global temperature change, may be about to revise his dataset sharply to ensure that the remarkable absence of predicted warming that it demonstrates is sent down the memory hole.
What of ocean warming? The ARGO bathythermographs show little warming at the surface from 2004 until the current el Niño began. What is more, ARGO stratigraphy shows that the warming is generally greater with depth. The warming of the ocean, then, appears to be coming not from above, as it would if CO2 were the driver, but from below.
I should have liked to show graphs to establish that the warming is greater in the lower than in the upper strata of the 1.25-mile slab that ARGO measures. But the ARGO marine atlas is clunky and does not seem to be as compatible with PCs as it should be. So I have been unable to extract the relevant data. If anyone is able to produce complete stratum-by-stratum anomaly-and-trend plots of the ARGO data for its 12 full years in operation from January 2004 till December 2015, please let me know as soon as the December 2015 ARGO data become available. The latest monthly update is very late, as the ARGO data often are:
If the eventual data confirm what I have some reason to suspect, then a further killer question must be faced by the tamperers:
Though the Pause is gone, the problem it poses for the Thermageddonites remains. For their own theory dictates that, all other things being equal, an initial direct warming should occur instantaneously in response to radiative forcings such as that from CO2. However, for almost 19 years there was not a flicker of response from global temperatures, casting serious doubt upon the magnitude of the warming to be expected from anthropogenic influences.
To the believers, therefore, it was important that the Pause should not merely cease, for Nature is, as expected, gradually taking care of that, but vanish altogether. The need to abolish the Pause became still more urgent when at a hearing in December 2015 Senator Ted Cruz, to the great discomfiture of the “Democrats”, displayed the RSS graph showing no global warming for 18 years 9 months.
So to another killer question that Schmidt and Karl ducked at their presser, and must now face (for if they do not answer it Senator Cruz can be expected to go on asking it till he gets an answer):
The now-glaring discrepancies between prediction and reality, and between the satellite and terrestrial datasets, are plainly evident from all datasets even after the tampering. Yet until now there has been no systematic analysis to show just how large the discrepancies have become. So here goes.
In 1990, at page xxiv of the First Assessment Report, IPCC predicted near-linear global warming of 1.0 [0.7, 1.5] K over the 36 years to 2025, a rate equivalent to 2.78 [1.94, 4.17] K/century. However, in the 26 years since 1990 the reported warming rates are equivalent to only [1.59, 1.73] K/century from the terrestrial datasets (blue needles) and [1.14, 1.23] K/century from the satellites (green needles). IPCC’s 1990 central prediction, the red needle, accordingly shows almost double the warming reported by the terrestrial datasets and at least two and a half times that reported by the satellite datasets.
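For readers who wish to check the arithmetic, the centennial-equivalent rates quoted here are simply the warming over a period divided by the length of that period and scaled to 100 years. A minimal Python sketch, using the IPCC (1990) figures from the paragraph above:

```python
def centennial_rate(warming_k, years):
    """Convert a warming over a given period into a K/century equivalent."""
    return warming_k / years * 100.0

# IPCC (1990): 1.0 [0.7, 1.5] K predicted over the 36 years 1990-2025
for label, warming in [("low", 0.7), ("central", 1.0), ("high", 1.5)]:
    print(label, round(centennial_rate(warming, 36), 2), "K/century")
# -> low 1.94, central 2.78, high 4.17 K/century, as quoted above
```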
Somehow, the flagrant over-prediction that the discrepancy graphs of temperatures from 1990, 1995 and 2001 to today illustrate did not get a mention in the colourful material circulated to the media by the SchmidtKarlPropagandaAmt.
The models’ extravagant over-prediction becomes still more self-evident when one looks at IPCC’s next excitable prediction. In fig. 6.13 of the 1995 Second Assessment Report, IPCC predicted a medium-term warming rate of 0.38 K over 21 years, equivalent to 1.8 K per century, assuming the subsequently-observed 0.5%-per-year increase in atmospheric CO2 concentration.
Here, at least, IPCC’s prediction is within shouting distance of the terrestrial temperature data, though still extravagantly above the satellite temperature data. But IPCC’s 1990 least prediction was well above its own central prediction made just five years later. IPCC’s 1990 central prediction was 50% above its 1995 prediction, and its 1990 high-end prediction was 130% above its 1995 prediction.
The reliability of IPCC’s predictions deteriorated still further in 2001. On page 8 of the Summary for Policymakers, it predicted that in the 36 years 1990-2025 the world would warm by [0.4, 1.1] K, equivalent to [1.11, 3.05] K/century, again a significant downshift compared with the interval of medium-term predictions it had made in 1990, and implying a central estimate equivalent to about 2.08 K/century (the red needle on the following temperature clock) over the 25-year period:
Three points are startlingly evident in these graphs. First, IPCC has inexorably and very substantially cut its predictions of medium-term warming since the exaggerated predictions in its First Assessment Report got the climate scam going in 1990.
Secondly, even its revised predictions are substantial exaggerations compared with observed, reported reality.
Thirdly – and this is very odd – the most basic measure of the uncertainty in temperature measurement over any given period, namely the interval between the least and the greatest trends reported by the various datasets for that period, has widened when most indications are that it should be narrowing.
That the error-bars on temperature measurement ought to be narrowing in response to all those taxpayer dollars being flung at it is illustrated by the HadCRUT4 dataset – which, to Professor Jones’ great credit, publishes the error-bars as well as the central estimate of observed temperature change – and which shows a considerable narrowing of the uncertainty interval over time, as methods of measurement become less unreliable:
The very reverse of what the HadCRUT4 dataset shows should be happening is happening. As Table 1 shows, the discrepancy between the least (yellow background) and the greatest (purple background) reported temperature change over successive periods is growing, not narrowing:
| Start date | GISS | HadCRUT4 | NCEI | RSS | UAH | Uncertainty (K/century) |
|---|---|---|---|---|---|---|
| Sat: 1979 (K) | 0.60 | 0.61 | 0.37 | 0.45 | 0.42 | 0.51 |
| (K/century) | 1.63 | 1.65 | 1.55 | 1.23 | 1.14 | |
| AR1: 1990 (K) | 0.45 | 0.41 | 0.43 | 0.29 | 0.26 | 0.73 |
| (K/century) | 1.73 | 1.59 | 1.66 | 1.11 | 1.00 | |
| AR2: 1995 (K) | 0.33 | 0.28 | 0.32 | 0.09 | 0.09 | 1.14 |
| (K/century) | 1.55 | 1.31 | 1.53 | 0.42 | 0.41 | |
| AR3: 2001 (K) | 0.18 | 0.13 | 0.20 | –0.02 | 0.03 | 1.46 |
| (K/century) | 1.22 | 0.85 | 1.35 | –0.11 | 0.19 | |
Table 1: Reported (dark blue) and centennial-equivalent (dark green) temperature trends on the three terrestrial (pale green background) and two satellite (blue background) monthly temperature anomaly datasets for periods starting respectively in January of 1979, 1990, 1995 and 2001 and all ending in December 2015.
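The “Uncertainty” column of Table 1 is simply the spread between the greatest and the least centennial-equivalent trend for each start date. A short sketch using the Table 1 figures:

```python
# Centennial-equivalent trends (K/century) from Table 1.
# Column order: GISS, HadCRUT4, NCEI, RSS, UAH.
trends = {
    1979: [1.63, 1.65, 1.55, 1.23, 1.14],
    1990: [1.73, 1.59, 1.66, 1.11, 1.00],
    1995: [1.55, 1.31, 1.53, 0.42, 0.41],
    2001: [1.22, 0.85, 1.35, -0.11, 0.19],
}

for start, rates in trends.items():
    spread = max(rates) - min(rates)
    print(f"From {start}: spread = {spread:.2f} K/century")
# -> 0.51, 0.73, 1.14, 1.46 K/century: the spread widens as the
#    start date approaches the present, the point made in the text.
```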
Note how, on all datasets, the warming rate declines the closer to the present one begins. This, too, is contrary to official theory, which says that the warming rate should at least remain constant given the ever-increasing anthropogenic forcings acting on the climate. It is also contrary to one of the most mendacious graphs in the IPCC reports:
The official storyline, derived from the bogus statistical technique illustrated in the above IPCC graph, is that the rate of global warming is itself accelerating, and that we are to blame. The Swiss Bureau de l’Escroquerie is investigating this and, no doubt, many other outright frauds in IPCC documents.
However, note how rapidly the measurement uncertainty, here defined as the difference between the least (yellow) and greatest (pink) reported centennial-equivalent temperature trend in Table 1, widens even as the start-date of the period under consideration comes closer to the present, when by rights it should narrow. Another killer question for the believers to answer, therefore:
If one excludes the data after October 2015, which are temporarily influenced by the current el Niño spike in global temperatures, the warming rate since 1950 is lower now than it has been at any time since that year.
This widening of the divergence between the terrestrial and satellite datasets is clear evidence that the effect of the tampering with all three terrestrial datasets in the two years preceding the Paris climate summit has been what one would, alas, expect of the tamperers: artificially to increase the apparent warming rate ever more rapidly as the present approaches.
A legitimate inference from this observation is that the tampering, however superficially plausible the numerous excuses for it, was in truth intended and calculated to overwhelm and extinguish the Pause that all the datasets had previously shown, precisely so that those driving and profiting from the climate scam could declare, as they have throughout the Marxstream news media, that there was never any Pause in the first place.
Let us hope that Professor Terence Kealy, former Vice Chancellor of Buckingham University, takes a very close look at this posting as he conducts his own review of the tamperings with the various terrestrial datasets.
The current el Niño, as Bob Tisdale’s distinguished series of reports here demonstrates, is at least as big as the Great el Niño of 1998. The RSS temperature record is now beginning to reflect its magnitude. If past events of this kind are a guide, there will be several months’ further warming before the downturn in the spike begins.
However, if there is a following la Niña, as there often is, the Pause may return at some time from the end of this year onward. Perhaps Bob could address the likelihood of a la Niña in the next of his series of posts on the ENSO phenomenon.
The hiatus period of 18 years 8 months is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend. The start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate. And yes, the start-date for the Pause has been inching forward, though just a little more slowly than the end-date, which is why the Pause has continued on average to lengthen.
The warming rate taken as the mean of the RSS and UAH datasets since they began in 1979 is equivalent to 1.2 degrees/century:
However, the much-altered surface tamperature datasets show a 35% greater warming rate, equivalent to 1.6 degrees/century:
Bearing in mind that one-third of the 2.4 W m⁻² radiative forcing from all manmade sources since 1750 has occurred during the period of the Pause, a warming rate equivalent to little more than 1 C°/century is not cause for concern.
As always, a note of caution. Merely because there has been little or no warming in recent decades, one may not draw the conclusion that warming has ended forever. Trend lines measure what has occurred: they do not predict what will occur.
The Technical Note explains the sources of the IPCC’s predictions in 1990, 1995 and 2001, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. In a rational scientific discourse, those who had advocated extreme measures to prevent global warming would now be withdrawing and calmly rethinking their hypotheses. However, this is not a rational scientific discourse.
Key facts about global temperature
These facts should be shown to anyone who persists in believing that, in the words of Mr Obama’s Twitteratus, “global warming is real, manmade and dangerous”.
Ø The RSS satellite dataset shows no global warming at all for 224 months from May 1997 to December 2015 – more than half the 445-month satellite record.
Ø There has been no warming even though one-third of all anthropogenic forcings since 1750 have occurred since 1997.
Ø Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
Ø The HadCRUT4 global warming trend since 1900 is equivalent to 0.77 Cº per century. This is well within natural variability and may not have much to do with us.
Ø The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.
Ø Compare the warming on the Central England temperature dataset in the 40 years 1694-1733, well before the Industrial Revolution, equivalent to 4.33 C°/century.
Ø In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.
Ø The warming trend since 1990, when the IPCC wrote its first report, is equivalent to little more than 1 Cº per century. The IPCC had predicted close to thrice as much.
Ø To meet the IPCC’s original central prediction of 1 C° warming from 1990-2025, in the next decade a warming of 0.75 C°, equivalent to 7.5 C°/century, would have to occur.
Ø Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100.
Ø The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.
Ø The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.
Ø The oceans, according to the 3600+ ARGO buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century, or 1 C° in 430 years.
Ø Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.
Technical note
Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
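For anyone wishing to reproduce that calculation, the sketch below shows the idea: find the earliest start month from which the least-squares trend to the latest month is not positive. It assumes the RSS monthly anomalies have already been saved to a plain-text file of (year, month, anomaly) rows; the file name and column layout are illustrative assumptions, not RSS’s actual format.

```python
import numpy as np

def trend_k_per_decade(anoms):
    """Least-squares linear trend of a monthly anomaly series, in K/decade."""
    months = np.arange(len(anoms))
    slope, _intercept = np.polyfit(months, anoms, 1)   # slope in K per month
    return slope * 120.0

def earliest_zero_trend_start(anoms, min_months=24):
    """Earliest start index from which the trend to the end is <= 0, or None."""
    for start in range(len(anoms) - min_months):
        if trend_k_per_decade(anoms[start:]) <= 0.0:
            return start
    return None

# Illustrative usage with a hypothetical anomaly file:
# anoms = np.loadtxt("rss_tlt_monthly.txt", usecols=2)
# print("Zero trend begins at month index", earliest_zero_trend_start(anoms))
```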
The fact of a long Pause is an indication of the widening discrepancy between prediction and reality in the temperature record.
The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great el Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than the rest to capture such fluctuations without artificially filtering them out.
Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe as 13.82 billion years.
The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file and plots them automatically using an advanced routine that automatically adjusts the aspect ratio of the data window at both axes so as to show the data at maximum scale, for clarity.
The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.
The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are compensated by winter in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend.
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures.
Dr Mears’ results are summarized in Fig. T1:
Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of Chichón (1983) and Pinatubo (1991) are shown, as is the spike in warming caused by the great el Niño of 1998.
Dr Mears writes:
“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation. This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”
Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph:
“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades. Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site. Is this really your data?’ While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate. … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”
In fact, the spike in temperatures caused by the Great el Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Great Pause itself. The headline graph in these monthly reports begins in 1997 because that is as far back as one can go in the data and still obtain a zero trend.
Fig. T1a. Graphs for RSS and GISS temperatures starting both in 1997 and in 2001. For each dataset the trend-lines are near-identical, showing conclusively that the argument that the Pause was caused by the 1998 el Niño is false (Werner Brozek and Professor Brown worked out this neat demonstration).
Curiously, Dr Mears prefers the terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite data to calibrate its own terrestrial record.
The length of the Pause, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed.
Sources of the IPCC predictions
IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:
“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”
That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted.
In 1990, the IPCC said this:
“Based on current models we predict:
“under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii).
Later, the IPCC said:
“The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030 [compared with pre-industrial temperatures]. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv).
The orange region in Fig. 2 represents the IPCC’s medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K (compared with 1990) by 2025.
The IPCC’s predicted global warming over the 25 years from 1990 to the present differs little from a straight line (Fig. T2).
Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).
Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.
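As a rough check on that statement, take the roughly 0.28 K of warming observed since 1990 (the figure quoted later in this note) and assume about ten years remain to 2025; both figures are approximations:

```python
observed_since_1990 = 0.28   # K, mean RSS/UAH trend since 1990 (quoted below)
ipcc_1990_central   = 1.00   # K, predicted warming over 1990-2025
years_remaining     = 10     # approximate, from early 2016 to 2025

still_needed = ipcc_1990_central - observed_since_1990
print(f"{still_needed:.2f} K still to come")                                   # ~0.72 K
print(f"{still_needed / observed_since_1990:.1f}x the warming of the past 25 years")
print(f"{still_needed / years_remaining * 100:.1f} K/century equivalent")      # ~7 K/century
```

Depending on the exact observed figure used, the remaining warming needed is roughly two to three times what has occurred since 1990, equivalent to some 7 K/century over the coming decade.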
But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).
Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).
Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date.
True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless.
The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn.
Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality.
To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.28 Cº, equivalent to little more than 1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, predicted with “substantial confidence” for Scenario A in IPCC (1990), was approaching three times too big. In fact, the outturn is visibly well below even the least estimate.
In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was equivalent to 2.8 Cº/century. Now it is equivalent to just 1.7 Cº/century – and even that is proving to be a substantial exaggeration.
In 1995 the IPCC offered a prediction of the warming rates to be expected in response to various rates of increase in CO2 concentration:
Figure T4a. IPCC (1995) predicted various warming rates. The prediction based on the actual rate of change in CO2 concentration since 1995 is highlighted.
The actual increase in CO2 concentration in the two decades since 1995 has been 0.5% per year. So IPCC’s effective central prediction in 1995 was that there should have been 0.36 C° of warming since then, equivalent to 1.8 C°/century.
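As a check on the 0.5%-per-year figure, a compound-growth calculation using approximate Mauna Loa annual means (about 361 ppm in 1995 and about 401 ppm in 2015; both treated here as round numbers rather than official values) gives:

```python
c_1995, c_2015, years = 361.0, 401.0, 20
annual_growth = (c_2015 / c_1995) ** (1.0 / years) - 1.0
print(f"{annual_growth * 100:.2f}% per year")   # ~0.53%/year, close to the 0.5% used above
```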
In the 2001 Third Assessment Report, IPCC, at page 8 of the Summary for Policymakers, says: “For the periods 1990-2025 and 1990 to 2050, the projected increases are 0.4-1.1 C° and 0.8-2.6 C° respectively.”
Is the ocean warming?
One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.
Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000 km² column of water some 316 km on a side and 2 km deep. Plainly, results obtained at a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be a lot better than guesswork.
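The 200,000 km³ figure can be reproduced from round numbers: roughly 3.6 × 10⁸ km² of ocean surface, sampled to about 2 km depth, shared among about 3,600 floats. A minimal sketch, using those assumed round figures:

```python
import math

ocean_area_km2 = 3.6e8   # approximate global ocean surface area
depth_km       = 2.0     # approximate depth profiled by an ARGO float
n_floats       = 3600    # approximate number of active floats

volume_per_float_km3 = ocean_area_km2 * depth_km / n_floats
side_km = math.sqrt(ocean_area_km2 / n_floats)   # side of the equivalent square column

print(f"{volume_per_float_km3:,.0f} km^3 of ocean per float")                # ~200,000 km^3
print(f"a column ~{side_km:.0f} km on a side and {depth_km:.0f} km deep")    # ~316 km
```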
Unfortunately ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº/decade, or 0.2 Cº/century.
Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).
Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which make the change seem a whole lot larger.
The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.
Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured. NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.
Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004-2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, or rather more than double the rate shown by ARGO.
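The conversion between a heat-content change in zettajoules and a mean temperature change is straightforward: ΔT = E / (m × c), where m is the mass of the ocean layer considered and c is the specific heat of seawater (roughly 4,000 J per kg per K). The sketch below uses an assumed round figure of about 7 × 10²⁰ kg for the 0-2000 m layer; it is a back-of-envelope check, not NOAA’s calculation:

```python
E_joules      = 260.0e21   # heat-content change of 260 ZJ (1 ZJ = 1e21 J)
specific_heat = 4.0e3      # J per kg per K, approximate for seawater
layer_mass_kg = 7.0e20     # assumed mass of the 0-2000 m ocean layer
period_years  = 44         # roughly 1970-2014

delta_T = E_joules / (layer_mass_kg * specific_heat)
print(f"{delta_T:.2f} K over the whole period")           # ~0.09 K
print(f"{delta_T / period_years * 100:.2f} K/century")    # ~0.2 K/century
```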
ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution.
What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way.
On these data, too, there is no evidence of rapid or catastrophic ocean warming.
Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has yet successfully specified mechanistically either how the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere has reached the deep ocean without much altering the heat content of the intervening near-surface strata or how the heat from the bottom of the ocean may eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.
Figure T7. Near-global ocean temperatures by stratum, 0-1900 m, providing a visual reality check to show just how little the upper strata are affected by minor changes in global air surface temperature. Source: ARGO marine atlas.
Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean.
Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.
If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.
In early October 2015 Steven Goddard added some very interesting graphs to his website. The graphs show the extent to which sea levels have been tampered with to make it look as though there has been sea-level rise when it is arguable that in fact there has been little or none.
Why were the models’ predictions exaggerated?
In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution till the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T8):
Figure T8. Predicted manmade radiative forcings (IPCC, 1990).
However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990 (Fig. T9):
Figure T9: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).
Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T10):
Figure T10. Energy budget diagram for the Earth from Stephens et al. (2012)
In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.
It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC.
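For context, the zero-feedback arithmetic behind such estimates can be sketched from two standard figures: the CO2 forcing formula ΔF = 5.35 ln(C/C0) W m⁻² (Myhre et al., 1998) and a Planck response of roughly 0.31 K per W m⁻². The sketch shows only that textbook arithmetic; it is not the calculation in Monckton of Brenchley et al. (2015):

```python
import math

forcing_2x   = 5.35 * math.log(2.0)   # ~3.7 W/m^2 per CO2 doubling (Myhre et al., 1998)
planck_param = 0.31                   # K per (W/m^2), approximate zero-feedback response

print(f"{forcing_2x * planck_param:.2f} K per CO2 doubling before feedbacks")  # ~1.15 K
# Feedbacks then scale this figure up or down; the paper cited above argues that
# the net amplification is far smaller than the models assume.
```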
Professor Ray Bates gave a paper in Moscow in summer 2015 in which he concluded, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T11), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both of Lindzen & Choi and of Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.
Figure T11. Reality (center) vs. 11 models. From Lindzen & Choi (2009).
A growing body of reviewed papers find climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences, and is still the IPCC’s best estimate today.
On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.
Finally, how long will it be before the Freedom Clock (Fig. T12) reaches 20 years without any global warming? If it does, the climate scare will become unsustainable.
Figure T12. The Freedom Clock approaches 20 years without global warming
Rising CO2 levels causing fish to get drunk ??? ROTFLMAO
University of NSW
http://www.theweathernetwork.com/news/articles/rising-carbon-dioxide-in-water-could-lead-to-drunk-fish/63230/
Marcus
February 6, 2016 at 12:26 pm
“Rising CO2 levels causing fish to get drunk ??? ROTFLMAO”
I can see the headline: with global warming drunken fish will be unable to do a beeline for fishing nets. Mitigation of this problem for fishermen will require new technology and training in making rapid random swings in their fishing boats to net these zig-zagging fish.
“Dr Mears concedes the growing discrepancy between the RSS data and the models…
And Mears goes on to say;
“… The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”
I think one of the bigger questions is: where did that huge 1997-98 ENSO event go in the surface records? Amongst other things it originates at the surface?
If we were going to cherry pick the start of the Pause, surely we’d have chosen 1998, not 1997. Mears is projecting, attributing his own motives and methods to skeptics.
bobfj.
“I think one of the bigger questions is: where did that huge 1997-98 ENSO event go in the surface records? Amongst other things it originates at the surface?”
That is an easy one. It has not disappeared, it is clearly in the data set. Look at the annual data, not the 5 year filter.
http://climate.nasa.gov/vital-signs/global-temperature/
The satellite measurements respond more to El Nino events than do the surface records. It also appears the surface measurements respond more to Arctic warming than the satellite measurements. This explains some of the differences in the estimates.
The current El Nino is as strong as the 1997/98 El Nino (ENSO3.4 is 2.3C). Keep watching the satellite measurements as the year progresses. It will show a large spike as well (it is already higher than the 2010 El Nino); it takes a while for the ocean surface warming to couple with the lower troposphere.
http://images.remss.com/msu/msu_time_series.html
Hairy,
I’m using the word ‘disappear’ figuratively to describe the fact that in older records, 1998 which was popularly described as a “Super El Nino” year stuck out like dog’s balls. Here for instance is the discontinued HadCrut3 to 2011:
http://www.metoffice.gov.uk/hadobs/hadcrut3/diagnostics/global/nh+sh/annual_bar.png
And here to 2014.33 (WoodForTrees)
http://woodfortrees.org/plot/hadcrut3gl/from:1979/to:2015
Gistemp is arguably most guilty of disappearing the former “Super El Nino” and elevating more recent El Ninos:
http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.A2.gif
Bobfj.
I linked to good info, and you use “hairy” instead of Harry. You can stick your childish insults up your bottom.
The 1997/98 spike is clearly visible in the annual data. So either get your eyes checked, or just stop making things up.
Harry,
I think I replied out of sequence
Harry,
Yes, sorry, I should not have called you Hairy (despite it being with past affection) because it seems to have distracted you from reading what I had to say.
If you are not prepared to admit that the older records in the discontinued HadCrut3, (one example to 2011 and the other to 2014.33) are very different to the current Gistemp, I’m sure that most readers can. Recent records have greatly diminished the prominence of what was popularly known as the “1998 Super El Nino”.
Perhaps you could stop playing semantics and note what I originally wondered:
“I think one of the bigger questions is: where did that huge 1997-98 ENSO event [these are Mears’ words] go in the surface records? Amongst other things it originates at the surface?”
Older surface records do in fact show 1997-98 as a huge event but recent records do not. (Old Gistemp is not immediately available to me but I recall that it too showed a huge 1998 when back then it was eagerly greeted by “the community” before awareness of The Pause later crept in.)
“Old Gistemp is not immediately available to me”
You can look on their news listing; there is usually an annual report in January; Here is the Jan 2011 one:
http://www.giss.nasa.gov/research/news/20110112/509804main_GISS226X170.jpg
It’s very similar to the one you showed, if you stop after 2010.
Thanks for that Nick and for correcting my recollection of older Gistemp. Strange how Gistemp is so different to HadCruT and the two satellite time-series.
“it was important that the Pause should not merely cease, for Nature is, as expected, gradually taking care of that, but vanish altogether.”
Can you explain, given the definition of the pause, why there is any distinction between ceasing and vanishing altogether? It is either there or it is not there. If it is not there, it has vanished and ceased. Or have I got this wrong?
Next year it will probably return because of La Nina = ceasing then starting again !
My interpretation is that Lord M left off the /sarc indication there.
To make it cease would halt it and maybe even reverse it but,
To vanish it would mean it never existed, much like they tried with the MWP.
It ain’t even a “Pause” until it starts to go up again, right? See me in another five or ten years. If the temperature continues to climb, it was a pause. If it stabilizes again then we really still won’t know what it is. If it starts going down, then it was a peak.
Semantics.
Since 1979?
My 32 month old grandson just experienced the deepest snowfall he had ever seen in his life. Had he been talking well enough the year before that, he could have said the same about the winter of 2014/15…same for the one before that. The winter before that he had not yet been born.
Yes. We are all children in terms of the characteristically long time frames of climate evolution. Which is probably why, every other generation or so, we have a panic about the Earth unnaturally heating/cooling.
@Bartemis, 1:02 pm Feb 6: To me? It’s called “history repeating itself”, which in my opinion is the saddest mark of humanity. We just do not seem to learn from past mistakes.
Any warming since the entering our current interglacial has been nothing but beneficial.
Man is doing terrific these past 100 years.
And even more so these past 1000 years.
In fact, ever since the Holocene Optimum.
Man has been around, in various guises, for a long time, but substantial and significant advance has taken place only since the Holocene Optimum. The planet had been far too cold before then, such that man’s time and energy were devoted simply to survival. It is amazing how these shackles can be lifted when there is some beneficial warming.
Tom in Florida. Interesting Chilli you were eating ‘eating hot bowl of chili dressed in long sleeves, long pants and socks’ And what were you wearing?
The beauty of the English language is that many sentences can be interpreted in different ways and why it is confusing to those trying to learn it.
My eldest wants to know if Tom in Florida coughed up a hairball.
If you are implying that I am as arrogant, indifferent and independent as most cats then I take that as a compliment. 🙂
Tom in Florida says:
February 7, 2016 at 7:48 am
LOL
The RSS anomaly for January ’16 is 0.6628. Using the ’98 el nino it was 0.5498 in Jan98 but went to 0.736 in Feb98 and the pause didn’t disappear until March or April (Posted by me in Monckton’s last month update). It is fairly certain it will disappear though but whether it comes back by early 2018 is anyone’s guess at this point.
http://s19.postimg.org/wxd23ujfn/Pause_length_2016_18_3.jpg
Lord Monckton, who has done so much to defend real science, should defer in this case to Tony Heller aka Steven Goddard, who has archived all the fraud committed by the AGW establishment for years! This person will be remembered forever as the guy who brought AGW down. Even Ted Cruz is using his graphs of fraud in Congressional hearings. Pandering to lukewarming is now very passé. LOL
One year from now I think you will get 19 years+ (Pause) because of the Super el Nino as the start…
“The Pause hangs on by its fingernails”
Not necessarily. Global temperature pauses come and go; this one has another 20-30 years of life left.
http://www.vukcevic.talktalk.net/CT4-pause.gif
Another excellent article by Christopher Monckton that helps us keep our eye on the movement of the pea under the Warmista thimbles. The lack of correlation between levels of CO2 and global temperature would normally be sufficient to invalidate the AGW hypothesis, but it continues as CAGW is politics and not science.
Where are the tiny variations in heat actually coming from? Is it the Sun? What are the ‘adjustments’ really for? These are questions of science, not politics.
You go on a hike. Before long, the trail heads up a fairly steep slope through a dense forest of fir trees. After half an hour or so, you think, “Phew! This is steep! I must be heading up a really, really, high mountain. I don’t know if I can make it.” Another half hour goes by, the trees thin until there are none and the trail levels off and there is nothing to see to the horizon but level, flat, sand and rock.
“So!” you exclaim, cheerfully, taking off your back pack to get a drink of water, “it was only a plateau! All this time, I’ve been hiking up the side of a giant plateau.” On you walk. And on. And on. And on. Nothing much to see. Flat as a pancake — for miles. Then, in the far distance, a low, gray, mound appears. Being a highly imaginative person, you think, “Hm. Wonder if an elephant is taking a nap on the path. Wonder how an elephant ended up here?”
Drawing closer, you see that there is a 10-ton elephant-sized pile of topsoil blocking your way (you cannot walk off the path, for there is a quick-sand-filled swamp on either side). As you scramble up over the top of it, you think, “Wow! This is the highest place anyone has ever been! Wait till I — (check altimeter) — tell all the folks back in town!” As you start down the other side of the sandpile, you notice a large helicopter with an empty bucket dangling beneath, rapidly thundering away at 3 o’clock. Squinting into the sun, you’re pretty sure you can read, “ENSO, INC.” on the side.
“Well, waddaya know. It was just one of ENSO’s excess fill dumps. I wouldn’t exactly count that as a genuine elevation gain. Nope. Won’t tell that to the folks — they’d just laugh.”
**********************************************************************
The STOP in warming IS.
Until La Niña has had her say, we won’t know whether or not we are on a true plateau or on a broad shelf and on the way up a mountain (to a much better place for plants and people and wildlife, by the way, if so). If it is a plateau, we will be heading down, one of these months…
We — just — don’t — know.
That is why,
at this point in time,
there is no “pause” (in warming) — yet.
All we know is: CO2 UP. WARMING STOPPED.
(cue Latitude… 🙂 )
Curious that Lord Monckton is so happy to embrace complex computer models for determining climate parameters, when he has historically been so dismissive of such models. I refer firstly to the models needed to deduce temperature from the actual measurements made by his favored satellites, which are of atmospheric brightness and NOT of temperature. Secondly, the RSS temperature series relies on a “diurnal correction” determined from, horror of horrors, CMIP5 computer climate models.
A few years ago WUWT was sounding off about the IPCC’s supposedly scandalous use of “gray literature” references. Yet in repeatedly citing “UAH version 6”, the Noble Lord Monckton is demonstrating a very heavy reliance on “gray literature”. UAH v.6 has not passed peer review, and Dr Spencer has released only sketchy information on his methods. Furthermore, he has failed to release his code, so maybe Steve McIntyre should be launching a FOI demand to UAH.
Such intellectual contortions do remind me of the lengths people go to in order to defend, say, intelligent design. Indeed I would go so far as to propose that there now exists a “Religion of The Pause”.
“I refer firstly to the models needed to deduce temperature from the actual measurements made by his favored satellites, which are of atmospheric brightness and NOT of temperature.”
This is the dumbest of all the trumped up charges against the satellite data. Thermometers do not measure temperature either. They measure thermal expansion of a thermally sensitive medium, or thermal increase in electrical resistance, or some other well calibrated phenomenon associated with temperature.
“Secondly, the RSS temperature series relies on a “diurnal correction” determined from, horror of horrors, CMIP5 computer climate models.”
Mmmm, no. The diurnal correction is for incredibly well known and well understood drift in the ascending node of the orbit. The correction is on the order of hundredths of degC/decade.
You have been duped by a concerted campaign to discredit the satellite data, the best, most spatially extensive, most uniform, and most objective data that we have at the current time.
No. A thermometer doesn’t measure thermal expansion. Think about it: if you had a mercury thermometer, how would you be able to work out the thermal expansion of mercury just by looking at the scale on the thermometer? It USES the very simple relation between thermal expansion and temperature rise. You need no complex computer models to determine temperature, just a simple formula.
As for your comments on the Diurnal correction being “incredibly well known” with, can you provide references? Mears and Wentz of RSS determined it in a 2005 paper in Science, http://images.remss.com/papers/rsspubs/Mears_Science_2005_Diurnal.pdf :
“In our work on MSU2, we used a different approach to evaluate the diurnal cycle. We used 5 years of hourly output from a climate model as input to a microwave radiative transfer model to estimate the seasonally varying diurnal cycle in measured temperature for each satellite view angle at each point on the globe (7).”
Rather than calling me “dumb”, Bartemis, can I suggest you find some evidence to support your contentions?
Do you even read what you write?
A thermometer uses the thermal expansion of the contained fluid within a measured narrow space. The marks establishing the column as a thermometer are based on the thermal expansion of the fluid for a given temperature range.
At no point does a thermometer actually measure temperature, the measurement lines are drawn to expected thermal expansion points within a specified range.
Go, Theo! +1!
(and yes, we can see where the blockquote was supposed to end, no problem)
Bartemis,
“or some other well calibrated phenomenon associated with temperature”
Yes, and microwave emission is one such. But with resistance etc, you know exactly where you are measuring that property. With brightness there is a very difficult inverse to work out where, in an atmosphere with a large temperature gradient, the brightness signal was generated.
“The diurnal correction is for incredibly well known and well understood drift”
For UAH, the correction was introduced this year, with ver 6. Here is what Roy Spencer says about it:
” For example, years ago we could use certain AMSU-carrying satellites which minimized the effect of diurnal drift, which we did not explicitly correct for. That is no longer possible, and an explicit correction for diurnal drift is now necessary. The correction for diurnal drift is difficult to do well, and we have been committed to it being empirically–based, partly to provide an alternative to the RSS satellite dataset which uses a climate model for the diurnal drift adjustment.”
Nonsense.
Thanks for that well reasoned rebuttal of my arguments. Just what I would expect from a Sweet person.
They and all here, especially the warmistas, keep forgetting the other FOUR radiosonde balloon datasets that support the THREE satellite datasets (one not shown below). So it’s a TOTAL OF SEVEN datasets against all the failed models. http://www.globalwarming.org/wp-content/uploads/2013/06/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png
Watch how there will be no reply from Stokes etc.
Dear Bill H,
It sounds like some reading might help you. Try using some of the words in your questions and assertions as search terms on Bing or Google and see if you can learn on your own. Also, the WUWT search box (upper right margin) is useful for this.
Here is a bit to get you started in Learning About Satellite Temperature Data:
ShrNfr:
“As a guy who did his PhD thesis on how to tease temperatures out of the brightness temperature to get temperature at the standard levels in the 1970s, it is almost impossible to fudge the data other than by outright fabrication. As the horn rotates around, one of its views is of a calibration load with a known temperature. Altitude will affect the weighting functions a tad, but those are an evolving process over time and the altitude of the satellite is well known and so the weighting function can be evaluated on the basis of the physics of the oxygen molecular spectrum. I suppose it is remotely possible that the observation frequency could change substantially, but I, for one, have never encountered that. Compared to the “adjustments” that are made to the surface temperature network, there is almost zero wiggle room in the microwave sounders. *** ”
(http://wattsupwiththat.com/2016/01/15/friday-funny-or-not-so-funny-satellite-deniers/#comment-2120541 )
IOW: satellites measure temperature (and you will find this is so throughout the literature about satellite data).
I don’t think you were quarreling with the point that satellites do not measure temperature directly but, in case you do not understand that point, here are three WUWT commenters to help you:
1) simple tourist:
“Now RSS is non-PC, because it isn’t a direct measure of temp (what? how can you directly measure energy content?), but I remember the time when the alarmists were parroting the big spike of the year 98, with RSS.
These guys have no face.”
2) Tom T:
“Correct there is no such thing as a direct measure of temperature. Liquid thermometers measure the thermal expansion of a liquid. Prop thermometers measure voltage drop due to resistance (this is how MMTS sensors work).”
3) Steven F:
“There are two electronic devices used to measure temperature: thermistors and thermocouples. Thermistors use a temperature-sensitive material, typically a semiconductor, and measure its resistance.
Thermocouples have two different metals joined together. When exposed to heat, a small voltage is generated by the thermocouple.
As is typically the case, you get what you pay for. The most expensive devices are typically very accurate. If you spend even more you get a very accurate sensor that has had a calibration check done.
It is my understanding that the satellites UAH and RSS use have platinum-based thermocouples which are some of the most accurate temperature sensors available.”
(All 3 comments nested from simple-touriste’s comment: http://wattsupwiththat.com/2016/01/15/the-climateers-new-pause-excuse-born-of-desperation-the-satellites-are-lying/#comment-2120491 )
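Since the point of all three quotes is that no instrument measures temperature “directly”, here is a concrete sketch of the conversion step for one common sensor (using the standard IEC 60751 coefficients for a PT100 platinum resistance thermometer; I am not asserting that the satellite calibration targets use exactly this device): the instrument reports a resistance, and temperature is recovered from a calibration formula.

```python
import math

# Callendar-Van Dusen relation for platinum (IEC 60751), valid for T >= 0 degC:
#   R(T) = R0 * (1 + A*T + B*T^2)
R0 = 100.0           # nominal PT100 resistance at 0 degC, ohms
A = 3.9083e-3
B = -5.775e-7

def temperature_from_resistance(R):
    """Invert R(T) = R0*(1 + A*T + B*T^2) for T in degC, taking the physical root."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - R / R0))) / (2.0 * B)

R_measured = 138.5   # ohms, an example reading (corresponds to roughly 100 degC)
print("%.1f ohm -> %.2f degC" % (R_measured, temperature_from_resistance(R_measured)))
```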
***********************************************************************************
Re: Your snarl at Bartemis: per ShrNfr, since at least the 1970s they have known about diurnal correction. I’d say that makes it, by 2016, “incredibly well known.”
Best wishes in your science learning!
Your ally for science truth,
Janice
Janice, you never fail to astound me !
Marcus!! Hi.
Heh. Veeery cleverly worded, Marcus, however, I am going to take “astound” as a compliment, so,
THANK YOU, VEDDY MUCH! 🙂
Hope all is well. You’re being prayed for. Just want to.
Your WUWT pal,
Janice
“Janice Moore February 6, 2016 at 9:06 pm”
Guilty as charged!
One could state that I am HTML closures challenged… No excuses, I am guilty.
And, thank you, Christopher Monckton, for all your excellent work on behalf of freedom!
Freedom. That is the bottom line, here.
One word stands out in all this. TAMPERATURE
Spot on. lol
I’ve just noticed a real howler by my Noble Viscount.
He says “Since the satellites of both UAH and RSS show there has been very little global warming”.
Actually UAH and RSS are both analysing data from the same third party satellites to produce their temperature series. They have both published peer-reviewed temperature series which don’t agree very well. However, Spencer of UAH has now, apparently, disowned his earlier peer-reviewed work in favour of some gray literature that he has produced, which lacks any clear explanation of his methods.
They agree quite well.
http://woodfortrees.org/plot/uah/plot/rss
Bartemis, before making such claims you should do some statistical analysis. Over the period of the “pause”, 1996-2015, the data provided by the peer-reviewed method of Spencer and Christy (UAH v5) show a warming trend of 0.2 degrees per decade (using the data on Wood for Trees). No pause there.
Indeed, if you look WITH CARE at the graph you will notice that over this period the UAH data are consistently lower than the RSS data till about 2008.
Apologies for the typo: the trend per decade from UAH v5 is 0.12 degrees Celsius, not 0.2. However, the conclusion is the same: no pause over this period.
Try your plot with annual smoothing to take out the monthly noise
http://woodfortrees.org/plot/uah/mean:12/plot/rss/mean:12
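For anyone who would rather reproduce that “mean:12” step offline than on woodfortrees.org, here is a minimal sketch (synthetic numbers only; the toy trend and noise level are invented, not UAH or RSS values) of a 12-month running mean applied to a monthly anomaly series.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)                                          # 20 years of monthly data
anomaly = 0.01 * months / 12.0 + rng.normal(0.0, 0.15, 240)      # toy series: 0.1 K/decade trend plus noise

smoothed = np.convolve(anomaly, np.ones(12) / 12.0, mode="valid")  # 12-month running mean, like mean:12
print(len(anomaly), "monthly points ->", len(smoothed), "smoothed points")
```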
Bill H and Nick S, please stop catering for the kiddies and portraying this as people too incompetent to read the thermometer sticking out of Earth’s rectum.
For the last 15 years, the trend in RSS is less than 0.03°C/century while it’s less than 0.1°C/century in UAH v5. The correction brings it closer to RSS, so, peer-reviewed or not, you would assume that it was a better estimate rather than a worse one.
Both take measurements of the microwave emission of O2 from the whole atmosphere and then need to calculate what the lower troposphere alone would give in the absence of the rest of the atmosphere, and the differences at issue are equivalent to the temperature change from walking up a few flights of stairs.
Nick Stokes February 6, 2016 at 3:57 pm
Meh.
http://woodfortrees.org/plot/uah/mean:12/offset:0.075/plot/rss/mean:12
Robert B February 6, 2016 at 5:27 pm
For the last 15 years, the trend in RSS is less than 0.03°C/century while it’s less than 0.1°C/century in UAH v5. The correction brings it closer to RSS, so, peer-reviewed or not, you would assume that it was a better estimate rather than a worse one.
Both take measurements of the microwave emission of O2 from the whole atmosphere and then need to calculate what the lower troposphere alone would give in the absence of the rest of the atmosphere, and the differences at issue are equivalent to the temperature change from walking up a few flights of stairs.
The ‘correction’ to produce version 6 is in fact a different product, which covers a different part of the atmosphere than either RSS TLT or UAH 5.6: it includes a greater contribution from higher in the atmosphere. It’s more similar to TMT, which is perhaps why Christy is now focussing on TMT?
So, it appears you believe all the surface data sets are junk because they continue to replace their older data with different data which “doesn’t agree very well”.
Bill H:
Howl away, Bill H!
RSS uses these satellites:
TIROS-N
NOAA-06
NOAA-07
NOAA-08
NOAA-09
NOAA-10
NOAA-11
NOAA-12
NOAA-14
NOAA-15
NOAA-16
NOAA-17
NOAA-18
METOP-A
AQUA
NOAA-19
UAH uses these satellites:
the European METOP-A,
the NOAA-19 polar orbiter,
and other NOAA and NASA satellites.
And it is likely that both RSS and UAH are using, or researching the use of, the NOAA Suomi NPP satellite and the forthcoming JPSS satellites.
Each group may be concurrently developing global temperature tracking, but each builds upon its own research and verifies its temperatures against balloon radiosondes.
Now, about those ‘third party’ satellites: can you identify which ‘third party’ satellites they’re using, and cite valid research identifying specific errors with those satellites?
I’m sure that when you launch your own line of ‘third party’ BillH rent-a-junker satellites, NOAA, UAH and RSS will consider purchasing data feeds from your visual, infrared and microwave sensors; i.e. if your data is trustworthy…
To control people you need either a crisis or a manufactured crisis!
“…Mr Obama’s Twitteratus…”
Should it be ‘twitteratus?’ Or ‘twitteraster?’
“As Table 1 shows, the discrepancy between the least (yellow background) and the greatest (purple background) reported temperature change over successive periods is growing, not narrowing:”
Table 1 is absurd. It compares quite different things (surface temperature vs troposphere temperature) and then claims the difference between them is an “uncertainty”. Uncertainty of what? In every case the difference shown is actually between a surface measure and a troposphere measure. That is not an uncertainty; it just means they are different.
It also has at least one error. The first slope for NCEI should be 1, not 1.55. That actually affects the “uncertainty”. But it may be that the NCEI top value should be 0.57, not 0.37.
But even if they really were uncertainties, they would be expected to increase as the period diminishes. The uncertainty of an OLS trend goes up as (from memory) n^-1.5, where n is the number of points (the duration). So scaling the .51 value accordingly, the expected uncertainties are
.51, .87, 1.19, 1.98
i.e. increasing more rapidly than Table 1.
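That remembered n^-1.5 scaling is easy to check with a small Monte Carlo sketch (pure white noise, so it ignores the autocorrelation present in real temperature series): the standard deviation of an OLS slope fitted to equally spaced noise shrinks roughly as n^(-3/2), which is why it grows so quickly as the period shortens.

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_sd(n, trials=4000):
    """Standard deviation of OLS slopes fitted to `trials` white-noise series of length n."""
    t = np.arange(n)
    return np.std([np.polyfit(t, rng.normal(0.0, 1.0, n), 1)[0] for _ in range(trials)])

sd_240 = slope_sd(240)
for n in (240, 120, 60, 30):
    print(n, round(slope_sd(n), 5), "expected from n^-1.5 scaling:", round(sd_240 * (240.0 / n) ** 1.5, 5))
```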
Mr Stokes, in his increasing desperation, now suggests that my Table 1 compares surface temperature with tropospheric temperature. No: it compares surface temperature ANOMALIES with LOWER troposphere temperature ANOMALIES: and it concerns itself less with the fact that there is a difference between the surface and lower-troposphere anomalies than with the fact that the difference between them is widening when it should be narrowing.
Although variability and hence uncertainty increase as the period under review decreases, this relatively small increase ought to have been more than outweighed by the increasing reliability of measurements, what with all the billions thrown at them. Instead, the satellites, supported by the radiosondes, show little or no warming of either the mid or the lower troposphere, but the surface tamperature datasets – which, like the satellite datasets, showed the Pause until a couple of years ago – managed to airbrush it away.
“No: it compares surface temperature ANOMALIES with LOWER troposphere temperature ANOMALIES:”
It is introduced saying
“As Table 1 shows, the discrepancy between the least (yellow background) and the greatest (purple background) reported temperature change over successive periods is growing, not narrowing”
In fact, the change in anomalies should also be the change in temperatures. But the fact remains: the emphasised differences are differences in behaviour between surface and troposphere. They aren’t “uncertainties”.
“the difference between them is widening when it should be narrowing”
Who said it should be narrowing? They aren’t measures of the same thing, separated by measurement error. They are measurement of different places.
In the last year I have totally lost interest in any official data on global weather/climate trends. This is because it simply cannot be believed. I began ignoring oceanic data earlier than that, after the entire climate community allowed a single PhD student, Josh Willis, to change the Argo buoy data from showing cooling to showing warming simply by editing out the cold tail of the data, with no justification other than political mandate.
It’s an Alice-in-Wonderland world of surreal adjustments that are increasing exponentially in intensity and now dwarf any remaining original signal. Anthony’s collection of official data here at WUWT is very laudable, and in an earlier generation, with even one honest official in ten, it would tell us at least something about the world’s climate. But it no longer does. All the datasets are in the hands of vetted activists and produce only Salvador Dali-esque psychedelic artwork. Not climate data – understanding of what the word “data” even means has been lost, politically crushed.
For real climate information one must read between the lines of sea ice and glaciation reports, weather anomalies, farm animal cold-deaths, fisheries data, unusual wildlife sightings and the like.
All we have to do is watch the ice in Hudson Bay. If it melts in summer, we are safe. If it doesn’t, we are doomed by another Ice Age. Right now, it is totally iced every winter by December and doesn’t melt until end of May so I doubt the planet is all that hot.
I am glad to see someone else make this point.
If this ‘science’ was properly conducted, a random sample of the ARGO buoys taken from those showing the largest cooling trend, and a random sample of the ARGO buoys taken from those showing the greatest warming trend would have been returned to the laboratory for instrument and calibration testing.
Any genuine ‘scientist’ would have tested to see whether there was or was not some genuine instrument error before deciding that there must be a problem, that the problem is the buoys showing cooling, and that this ‘problem’ can be corrected simply by deleting the ‘offending’ buoys from the data set.
What sort of science is that?
Whenever ARGO comes up for discussion, this initialisation problem/incident should always be mentioned and the data should then be viewed with an appropriate caveat in mind.
As you suggest none of the underlying data or time series extrapolations on temperature/temperature anomaly are fit for purpose. The land based data has been so severely bastardised that it is incapable of genuine scientific study. Pre ARGO (and note the caveat above) there is no reliable data on ocean temperature, and even with ARGO the data series is way too short and there is insufficient sampling/coverage.
It is a joke. Unfortunately, given the wasted billions, just not a funny one.
First the ARGO data was “corrected” in 2007, but it still showed cooling. So then it was “fixed” in 2011.
Tallbloke has a post on this…
https://tallbloke.wordpress.com/2012/02/27/argo-the-mystery-of-global-warmings-missing-heat/
I found this comment to the post to be even more relevant. The idea that any atmospheric process could sequester heat in the ocean is laughable. There is nothing man made that can have any impact on ocean temperature.
Considering LWIR is fully absorbed in the first few microns, alarmists need to calculate how warm a thimble of water would need to be to change the temperature of an Olympic-size swimming pool. Further, the idea that LWIR somehow slows cooling is equally laughable, since a 2 mph wind changes the heat loss from evaporation at a greater rate than the supposed CO2 forcing slows it.
“MostlyHarmless says:
February 27, 2012 at 7:41 pm
Few people appreciate just how small the heat capacity of the atmosphere is, when compared with the oceans below. Several of my first blog posts were on scale in the climate system. Here’s what I said about atmosphere & oceans (if anyone spots an error, let me know; unlike certain climate scientists, I don’t mind my errors being “outed”; I want to be correct in my arguments.):
Normal atmospheric pressure at the Earth’s surface can balance a column of mercury (which is a very dense liquid metal) 760 mm high (just over 3/4 metre); this equates to a column of water 10.33 metres high. Another way to envisage this is to consider that the atmosphere exerts a pressure equal to the weight of the column of air above a given area. The pressure is 1.03325 kilograms per square centimetre; the height of a column of water weighing 1.033 kg, and therefore exerting the same pressure on 1 sq.cm is 1033 cm or 10.33 metres. The mass of the atmosphere is equivalent to a depth of just 10.33 metres of sea water. When the ocean area is taken into account (71% of the earth’s surface), this equates to 14.5 metres depth of ocean.
When heat content or capacity is considered, the disparity is even larger. The specific heat (amount of heat needed to heat one gram of a substance one degree Celsius) of sea water is 3.93, the specific heat of dry air is 1.006.
So what does all this mean? It means that the heat capacity of the atmosphere is equivalent to just 14.5 x 1.006/3.93 or just 3.7 metres of ocean depth. The ocean’s heat capacity is hundreds of times greater than that of the atmosphere.
The idea that the atmosphere “drives” the oceans is risible; the reverse must be the case.”
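The arithmetic in that quoted comment is easy to verify; here is a short check using only the figures given in the quote itself (the 0.71 ocean fraction and the two specific heats are taken as stated, not independently audited).

```python
# Back-of-the-envelope check of the quoted numbers
atm_as_water_depth_m = 10.33   # atmosphere's mass equals about 10.33 m of sea water over the whole globe
ocean_fraction = 0.71          # fraction of Earth's surface covered by ocean
cp_seawater = 3.93             # specific heat of sea water, as given in the quote
cp_dry_air = 1.006             # specific heat of dry air, as given in the quote

over_ocean_m = atm_as_water_depth_m / ocean_fraction              # ~14.5 m when spread over ocean area only
heat_equivalent_depth_m = over_ocean_m * cp_dry_air / cp_seawater
print("%.1f m of ocean by mass, ~%.1f m of ocean by heat capacity" % (over_ocean_m, heat_equivalent_depth_m))
```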
In answer to Belousov, what remains interesting is that, even after the tampering by the believers, the terrestrial datasets are at one with the satellite datasets in not showing anything like the predicted rates of global warming.
Well said, Belousov.
All I have to say is: tell me again in 5 years that we are still warming ... then in 10 years tell me why we are cooling ...
And if I make it to 85 and see us warm again (35 years from now), I can say it’s all happened before ...
I lived through a 30 year cool cycle and a 20 year warm cycle. Now I have a 30 year cool cycle to look forward to? It appears as if I’ll get the short end of the climate stick. Maybe I should have been a hockey player.
The Watts study showed that the US surface data contain at least 50% too much warming because of the UHI effect. If true, and if a similar result held for the rest of the planet, we would have less than 1 C/century of warming since 1979 in the surface data. That’s close to the satellite data since 1979. Roy Spencer also stated that the Watts study showed about 60% too much warming in the present US surface data.
I would have thought that if the heat were entering the ocean, then the dangerous water vapor positive feedback effect would be short-circuited, and all that would be left would be the direct radiative effect of the extra CO2, which isn’t all that much.
PeterG is broadly correct. The ocean acts as an enormous heat sink, and its very large heat capacity is one of the major reasons why the Earth’s surface temperature appears to have varied by little more than 3 K either side of the 800,000-year mean over the past 800,000 years. It is also one of the major reasons why very large and very rapid global warming as a result of our sins of emission is not at all likely.
For Australians, Bass Strait is about the area sampled by one Argo buoy but averages only 63 m deep. The surface temperature at the moment varies from 19-21°C. Here is a loop of the week’s forecast from the BOM:
http://www.bom.gov.au/oceanography/forecasts/idyoc15.shtml?region=15&forecast=1#
As you can see, three measurements in the one spot for the month cannot get you the monthly SST average to better than ±1°C. It’s irrelevant how precise the thermometers on the buoys are, and the LLN (law of large numbers) can’t be applied willy-nilly.
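Robert B’s sampling point can be illustrated with a rough Monte Carlo sketch (the within-month spread below is an assumed figure chosen to be consistent with a swing of roughly 2°C, not a BOM statistic): however precise each individual reading, the average of only three of them inherits the variability of the water being sampled.

```python
import numpy as np

rng = np.random.default_rng(2)
true_monthly_mean = 20.0   # degC, roughly the middle of the 19-21 degC range mentioned above
within_month_sd = 0.7      # assumed spread of instantaneous SST about the monthly mean, degC

three_sample_means = rng.normal(true_monthly_mean, within_month_sd, size=(20000, 3)).mean(axis=1)
print("1-sigma sampling error of a 3-reading monthly mean: +/- %.2f degC" % three_sample_means.std())
```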
Robert B provides excellent confirmation of the point that measurements of ocean temperature are prone to very large and, at present, substantially unconstrainable uncertainties. Unfortunately, this allows the usual suspects to make up any old nonsense they like, because no one is in a position to prove them wrong. By the same token, they can’t prove they’re right, of course.
The warming of the ocean, then, appears to be coming not from above, as it would if CO2 were the driver, but from below.
This sounds reasonable. I’m going to attempt a back-of-the-envelope calculation; please help me if I get it wrong.
1 – This link gives heat energy coming from Earth’s interior as 0.03% of Earth’s total energy budget.
2 – Figure T10 (above) gives the TOA heat input as 340 W/m2.
3 – It gives the imbalance as 0.6 ± 0.4 W/m2.
4 – That means the imbalance could be 0.2 W/m2.
5 – That would be 100 x 0.2 / 340 = 0.06%
6 – That means half the TOA imbalance could be accounted for by heat coming from the Earth’s interior.
amiright?
Disclaimer – I’m using numbers from Figure T10 for sake of argument. I refuse to believe they can do the planet’s energy budget as accurately as they say they can.
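For what it is worth, the arithmetic in the numbered steps above checks out; here is a short sketch using the same figures (the 0.03% geothermal share and the Figure T10 numbers are taken from the comment as given, not independently verified).

```python
toa_input = 340.0          # W/m^2, top-of-atmosphere input as cited from Figure T10
geo_fraction = 0.0003      # geothermal heat given as 0.03% of the energy budget
imbalance_low = 0.6 - 0.4  # W/m^2, lower end of the quoted 0.6 +/- 0.4 imbalance

geo_flux = geo_fraction * toa_input   # ~0.10 W/m^2, i.e. ~0.03% of 340
print("geothermal: %.2f W/m^2 (%.2f%% of input)" % (geo_flux, 100.0 * geo_flux / toa_input))
print("share of the low-end imbalance: %.0f%%" % (100.0 * geo_flux / imbalance_low))
```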
The assumption is that the amount coming from the Earth’s interior doesn’t vary by 100%. More likely the variation is large but affects currents rather than adding to the energy budget.