El Niño has not yet paused the Pause

Global temperature update: no warming for 18 years 5 months

By Christopher Monckton of Brenchley

Since December 1996 there has been no global warming at all (Fig. 1). This month’s RSS temperature – still unaffected by the most persistent El Niño conditions of the current weak cycle – shows a new record length for the Pause: 18 years 5 months.

The result, as always, comes with a warning: the temperature increase that usually accompanies an El Niño may come through only after a lag of four or five months. If, on the other hand, La Niña conditions begin to cool the oceans in time, the Pause could lengthen just in time for the Paris world-government summit in December 2015.


Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere temperature anomaly dataset shows no global warming for 18 years 5 months since December 1996.

The hiatus period of 18 years 5 months, or 221 months, is the farthest back one can go in the RSS satellite temperature record and still show a zero (or slightly negative) trend.

The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, also continues to widen.


Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 303 months January 1990 to March 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.4 K/century equivalent, taken as the mean of the RSS and UAH v. 5.6 satellite monthly mean lower-troposphere temperature anomalies.


Figure 3. Predicted temperature change, January 2005 to March 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v. 5.6 satellite lower-troposphere temperature anomalies.

The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century. There are also details of the long-awaited beta-test version 6.0 of the University of Alabama in Huntsville’s satellite lower-troposphere dataset, which now shows a pause very nearly as long as the RSS dataset’s. However, the data are not yet in a form compatible with the earlier version, so v. 6.0 will not be used here until the beta testing is complete.

Key facts about global temperature

Ø The RSS satellite dataset shows no global warming at all for 221 months from December 1996 to April 2015 – more than half the 436-month satellite record.

Ø The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us.

Ø Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.

Ø The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.

Ø In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.

Ø The global warming trend since 1990, when the IPCC wrote its first report, is equivalent to below 1.4 Cº per century – half of what the IPCC had then predicted.

Ø Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business-as-usual centennial prediction of 4.8 Cº of warming by 2100.

Ø The IPCC’s predicted 4.8 Cº warming by 2100 – equivalent to roughly 5.6 Cº/century from now – is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.

Ø The IPCC’s 4.8 Cº-by-2100 prediction is almost four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.

Ø The oceans, according to the 3600+ ARGO bathythermograph buoys, are warming at a rate equivalent to just 0.02 Cº per decade, or 0.23 Cº per century.

Ø Recent extreme weather cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.


Technical note

Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
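The search procedure described above can be sketched in a few lines (a schematic, not the author’s actual routine; the start index returned depends entirely on the dataset supplied):

```python
import numpy as np

def longest_zero_trend(anoms):
    """Given a 1-D array of monthly temperature anomalies (oldest first),
    return the index of the earliest start month from which the
    least-squares linear trend through the final month is <= 0,
    or None if no such start exists."""
    n = len(anoms)
    months = np.arange(n)
    for start in range(n - 2):  # need at least 3 points for a trend
        # polyfit with degree 1 returns [slope, intercept]
        slope = np.polyfit(months[start:], anoms[start:], 1)[0]
        if slope <= 0:
            return start  # scanning oldest-first, the first hit is the earliest
    return None
```

Because the scan runs from the oldest month forward, the first start date yielding a non-positive slope is by construction the longest such period, which is the sense in which the text argues the start date is calculated rather than chosen.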

The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great El Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing independent verification that the satellite datasets are better able than others to capture such fluctuations without artificially filtering them out.

Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which calibrate the microwave sounding instruments and provide an independent check by viewing, via spaceward mirrors, the known temperature of the cosmic background radiation, which is about 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that NASA’s Wilkinson Microwave Anisotropy Probe determined the age of the Universe: 13.82 billion years.

The RSS graph (Fig. 1) is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them down from the text file, takes their mean and plots them automatically, using a routine that adjusts the aspect ratio of the data window on both axes so as to show the data at maximum scale, for clarity.

The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.

The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are offset by winter temperatures in the other. An AR(n) model therefore generates results little different from a least-squares trend.

Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.

RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures.

Dr Mears’ results are summarized in Fig. T1:


Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of El Chichón (1982) and Pinatubo (1991) are shown, as is the spike in warming caused by the great El Niño of 1998.

Dr Mears writes:

“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”

Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph:

“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”

In fact, the spike in temperatures caused by the Great El Niño of 1998 is largely offset in the linear-trend calculation by two factors: the not-dissimilar spike of the 2010 El Niño, and the sheer length of the Great Pause itself.

Curiously, Dr Mears prefers the much-altered terrestrial datasets to the satellite datasets. However, over the entire length of the RSS and UAH series since 1979, the trends on the mean of the terrestrial datasets and on the mean of the satellite datasets are near-identical. Indeed, the UK Met Office uses the satellite record to calibrate its own terrestrial record.

The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. It remains possible that El Niño-like conditions may prevail this year, reducing the length of the Great Pause. However, the discrepancy between prediction and observation continues to widen.

Sources of the IPCC projections in Figs. 2 and 3

IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:

“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”

That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted.

In 1990, the IPCC said this:

“Based on current models we predict:

“under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii).

Later, the IPCC said:

“The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv).

The orange region in Fig. 2 represents the IPCC’s less extreme medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025, rather than its more extreme Scenario-A estimate, i.e. 1.8 [1.3, 3.7] K by 2030.

Some try to say the IPCC did not predict the straight-line global warming rate that is shown in Figs. 2-3. In fact, however, the IPCC’s predicted global warming over so short a term as the 25 years from 1990 to the present is little different from a straight line (Fig. T2).


Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).

The difference between a straight line and the slight uptick in the warming rate that the IPCC predicted for 1990-2025 is so small that one can look at it another way: to reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.

Likewise, to reach 1.8 K by 2030, there would have to be four or five times as much warming in the next 15 years as there was in the last 25 years. That is still less likely.
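The arithmetic behind these two catch-up claims can be checked directly, taking the roughly 0.35 K of observed warming since 1990 that is cited later in this article (the figures used here are the article’s own, not independently verified):

```python
observed = 0.35      # K of warming observed 1990-2015 (per the text)
target_2025 = 1.0    # K above 1990 predicted by 2025 (IPCC 1990, central estimate)
target_2030 = 1.8    # K above 1990 predicted by 2030 (IPCC 1990, higher estimate)

# Warming still required in the remaining years, as a multiple of the
# warming actually seen in the 25 years so far.
ratio_2025 = (target_2025 - observed) / observed  # ~1.9, i.e. "twice as much"
ratio_2030 = (target_2030 - observed) / observed  # ~4.1, i.e. "four or five times"
```

The first ratio comes out at about 1.9 and the second at about 4.1, matching the "twice" and "four or five times" figures in the text.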

But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).


Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).

Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case: for all the talk about CO2 emissions reduction, the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions achieved in the West to date.

True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted, and the predictions were extravagantly baseless.

The overall picture is clear. Scenario A is the emissions scenario from 1990 closest to the observed emissions outturn, and yet there has been only a third of a degree of global warming since 1990 – about half of what the IPCC had then predicted with what it called “substantial confidence”.


Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the lowest prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display so plain a comparison between absurdly exaggerated predictions and unexciting reality.

To be precise, a quarter-century after 1990 the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.35 Cº, equivalent to just 1.4 Cº/century, or a little below half of the central estimate of 0.70 Cº (equivalent to 2.8 Cº/century) that was predicted for Scenario A in IPCC (1990). The outturn is visibly well below even the least estimate.
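The conversion from a trend over a 303-month window to a per-century equivalent is mechanical, and can be checked with the article’s own figures:

```python
months = 303       # January 1990 to March 2015
trend = 0.35       # K, least-squares trend over those months (per the text)
predicted = 2.8    # K/century, IPCC 1990 Scenario-A central estimate

# Scale the observed change to a 100-year equivalent rate.
per_century = trend / (months / 12.0) * 100.0  # ~1.39 K/century
fraction_of_prediction = per_century / predicted  # ~0.5, "a little below half"
```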

In 1990, the IPCC’s central prediction of the near-term warming rate was higher by two-thirds than its prediction is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. 3 shows, even that is proving to be a substantial exaggeration.

Is the ocean warming?

One frequently discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since the near-surface strata globally show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.

Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys somehow has to cover 200,000 cubic kilometres of ocean – a 100,000-square-kilometre box more than 316 km on a side and 2 km deep. Plainly, results at a resolution that sparse (approximately the equivalent, as Willis Eschenbach puts it, of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be much better than guesswork.
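The per-buoy coverage figure can be reproduced from round numbers; the global ocean surface area of about 361 million km² used here is an assumption, not a figure from the text:

```python
ocean_area_km2 = 361e6  # approximate global ocean surface area (assumption)
depth_km = 2.0          # ARGO profiling depth
n_buoys = 3600

area_per_buoy = ocean_area_km2 / n_buoys    # ~100,000 km^2 per buoy
volume_per_buoy = area_per_buoy * depth_km  # ~200,000 km^3 per buoy
box_side_km = area_per_buoy ** 0.5          # ~316 km on a side
```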

Fortunately, a long-standing bug in the ARGO data delivery system has now been fixed, so I am able to get the monthly global mean ocean temperature data – though ARGO seems not to have updated the dataset since December 2014. However, that gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº decade–1, or 0.2 Cº century–1 equivalent.


Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).

Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat-content change, which makes the change seem a whole lot larger.

The terrifying-sounding heat-content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of ocean warming. All those “Hiroshima bombs of heat” are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.
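The zettajoules-to-kelvin conversion can be sketched as follows, assuming an upper-2000 m ocean volume of roughly 722 million km³ and a seawater specific heat of about 4000 J/(kg K); both are assumptions of this sketch, and the result scales with them:

```python
heat_zj = 260.0  # ZJ of ocean heat gain, 1970-2014 (per the text)
years = 44.0

# Assumed upper-2000 m ocean: ~722 million km^3 at ~1e12 kg per km^3.
mass_kg = 722e6 * 1e12
c_p = 4000.0     # specific heat of seawater, J/(kg K), approximate

# delta_T = Q / (m * c_p); 1 ZJ = 1e21 J.
delta_t = heat_zj * 1e21 / (mass_kg * c_p)  # ~0.09 K over the whole period
rate_per_century = delta_t / years * 100.0  # ~0.2 K/century
```

On these assumptions the 260 ZJ corresponds to roughly 0.09 K over 44 years, or about 0.2 K/century, consistent with the figure in the text.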


Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in fractions of a Kelvin that were originally measured. NOAA’s conversion of the minuscule temperature change data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.

Converting the ocean heat-content change back to temperature change reveals an interesting discrepancy between NOAA’s data and those of the ARGO system. Over the period of the ARGO data, from 2004 to 2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, or rather more than double the rate shown by ARGO.

ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low, one should treat all these results with caution. What one can say is that, on such evidence as these datasets can provide, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming.

Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has successfully specified a mechanism by which the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere could reach the deep ocean without much altering the heat content of the intervening near-surface strata, or by which heat from the bottom of the ocean could eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.

Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean. Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.

If the “deep heat” explanation for the hiatus in global warming were correct (and it is merely one among dozens that have been offered), then the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.

The UAH v. 6.0 dataset

The long-awaited new version of the UAH dataset is here at last. The headline change is that the warming trend has fallen from 0.14 to 0.11 C° per decade since 1979. The UAH and RSS datasets are now very close to one another, and there is a clear difference between the warming rates shown by the satellite and terrestrial datasets.

Roy Spencer’s website, drroyspencer.com, has an interesting explanation of the reasons for the change in the dataset. When I mentioned to him that the usual suspects would challenge the alterations that have been made to the dataset, he replied: “It is what it is.” In that one short sentence, true science is encapsulated.

Below, Fig. T7 shows the two versions of the UAH dataset superimposed on one another. Fig. T8 plots the differences between the two versions.


Fig. T7. The two UAH versions superimposed on one another.


Fig. T8. Difference between UAH v. 6 and v. 5.6.

132 Comments
Willis Eschenbach (Editor)
May 4, 2015 5:50 pm

Dr Norman Page May 4, 2015 at 3:32 pm

There is good evidence that we have entered the descending ie cooling phase of the millennial cycle and that phase will last until about 2650.

Given that 2650 is over six hundred years from now, I have to conclude that your definition of “good evidence” and my definition of “good evidence” are diagonally parking in parallel universes …
w.

SkepticGoneWild
Reply to  Willis Eschenbach
May 4, 2015 6:48 pm

No. No. No! Dr. Page is completely wrong regarding the year 2650. The millennial cycle will be one of warming and it will end in the year………..wait for it…………… 2525!

Notice the CO2 induced fog swirling around the floor.

SkepticGoneWild
Reply to  SkepticGoneWild
May 4, 2015 7:09 pm

I strongly suggest the IPCC adopt the above as their theme song; dramatic score with a rising crescendo of bat crazy alarmism.

Dr Norman Page
Reply to  Willis Eschenbach
May 4, 2015 9:19 pm

Willis, for the evidence that we passed the peak of the millennial solar driver cycle at about 1991, see Figs. 14 and 13 at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
It is a reasonable working hypothesis that Figs. 9 and 5 suggest we are just coming to, just at, or just past a millennial temperature peak, and that Figs. 13 and 14 suggest it has just passed, given about a 12-year lag between the driver peak and the RSS temperature peak. We can see the start of the cooling trend at about 2003 at
http://www.woodfortrees.org/plot/rss/from:1980.1/plot/rss/from:1980.1/to:2003.6/trend/plot/rss/from:2003.6/trend
The 12-year cooling trend is not long enough to prove anything, but it is certainly consistent with having just passed the peak.
As to the 2650 date: the simplest and most conservative hypothesis is that in general the trends during the next millennial cycle will be an approximate repeat of the last one – probably slightly cooler overall, as we will be another thousand years closer to the next big Ice Age by 3000 AD. The peak-to-trough of the last one was about 650 years (1000-1650), with a faster 350-year climb from the Maunder minimum to the current peak (Fig. 9). We won’t have good evidence for this until about 2120, but we will be able to check the trend along the way to strengthen the hypothesis.

Willis Eschenbach
Reply to  Dr Norman Page
May 4, 2015 11:27 pm

Dr Norman Page May 4, 2015 at 9:19 pm

As to the 2650 date: the simplest and most conservative hypothesis is that in general the trends during the next millennial cycle will be an approximate repeat of the last one – probably slightly cooler overall, as we will be another thousand years closer to the next big Ice Age by 3000 AD.

As my brother used to say, “It’s easy to predict the future … as long as it looks exactly like the past”. All you’ve done is claimed that the next millennium will be a repeat of the last, a most simplistic claim in my book. Look at the centuries as an example. The “simplest and most conservative hypothesis” is NOT that this century will look just like the last century.
In any case, you claimed that you had “good evidence” about 2650 … but now we find out it’s just your guess.
Dr. Page, do you see why folks don’t pay you much mind? Your exaggerations come back to bite you on the fundamental orifice. You don’t have “good evidence” of what will happen in 2650; it’s just a boastful fantasy. Nobody knows what the climate will be like in 2650 – that’s just your ego hitting warp speed.
If you restrict yourself to believable claims and avoid over-egging your pudding, you’ll get a lot more folks to try to follow your ideas.
w.

Dr Norman Page
Reply to  Willis Eschenbach
May 5, 2015 6:24 am

Willis, I agree that my statement
“There is good evidence that we have entered the descending ie cooling phase of the millennial cycle and that phase will last until about 2650.”
is confusing. To clarify: I think the evidence that we have entered the cooling phase of the millennial cycle is good (Figs. 5, 9, 13, 14).
That the cooling phase of the millennial cycle will last for 650 years, as it did in the last millennial cycle, is a perfectly reasonable suggestion. I certainly do not expect that the shorter-term centennial and decadal modulations within those 650 years will track the 1000-1650 trends exactly. In complex natural systems everything happens only once, because other things – i.e. the state of the system as a whole – are never equal.

Aicha Wallaby
May 4, 2015 7:44 pm

Shorter Dr. Mears: “I call these people ‘denialists’ for noticing what I myself have also noticed.”

Boulder Skeptic
May 4, 2015 9:17 pm

Lord Monckton,
I’d like to suggest enhancing the following point from your “Key facts” summary…
“Ø The oceans, according to the 3600+ ARGO bathythermograph buoys, are warming at a rate equivalent to just 0.02 Cº per decade, or 0.23 Cº per century.”
It seems that this point is badly in need of a follow-on statement about the error bars on this number, which, if applied, would show that statistically you have almost as much confidence of ocean cooling as of ocean warming. Sorry, but I’m just not impressed or convinced that the 0.02 C-per-decade result actually shows warming, given the measurement technique, the sensor calibration accuracy, the measurement timing, the total duration of the data set and the measurements’ spatial resolution, among other things.
I apologize in advance if I missed a statement about Argo error analysis in the technical notes (I did see the technical details of the system) or if it’s already addressed in this comments thread.
Bruce

Monckton of Brenchley
Reply to  Boulder Skeptic
May 5, 2015 2:25 am

Boulder Skeptic raises a most interesting point. I suspect that one approach might be to apply Kriging to the ARGO measurements: it is a well-established method of interpolation. But the uncertainty is certainly formidable.

Larry Wirth
May 4, 2015 11:51 pm

Point of order: The currently correct “term of art” is “warmunist.”

richardscourtney
Reply to  Larry Wirth
May 6, 2015 8:57 am

Larry Wirth
As the inventor of the word “warmunist”, I write to say it is only one of several correct “terms of art”. I am sure you can devise a good one, too.
Richard

goldminor
Reply to  ren
May 5, 2015 9:28 am

It looks like the ENSO conditions have peaked in the different regions, especially in regions 3 and 4. Region 1-2 may hold the higher positive levels for a bit longer, but otherwise it sure looks like it is over to me. If this holds true, then I will have successfully forecast the last 3 peaks and 2 valleys on the MEI. Also, notice the change to conditions in the area of the Blob. That area will look entirely different by the end of July.

ren
May 5, 2015 6:02 am

The temperature at a height of 1500 m shows the most-cooled areas. The blue color indicates the temperature below 0 C.
http://oi57.tinypic.com/iy0oie.jpg

Lars P.
May 5, 2015 8:34 am

El Niño has not paused the Pause yet, but some are working hard at it:
pause? what pause?
notrickszone.com/2015/05/02/151-degrees-of-fudging-energy-physicist-unveils-noaas-massive-rewrite-of-maine-climate-history/
“Over the last months I have discovered that between 2013 and 2015 some government bureaucrats have rewritten Maine climate history (and New England’s, and the U.S.’s). This statement is not based on my opinion, but on facts drawn from NOAA 2013 climate data vs. NOAA 2015 climate data, after they rewrote it.”

John Craig
May 5, 2015 2:06 pm

Is it the moon? Metonic cycle?

Brian
May 5, 2015 6:21 pm

Are you sure the satellites use platinum RTDs? I thought they used microwave receivers, which detect 60 GHz radio waves emitted by excited oxygen molecules.
http://www.remss.com/missions/amsu
“MICROWAVE SOUNDING UNIT (MSU)
The Microwave Sounding Units (MSU) operating on NOAA polar-orbiting satellite platforms were the principal sources of satellite temperature profiles from late 1978 to the early 2000s. The MSUs were cross-track scanners that made measurements of microwave radiance in four channels ranging from 50.3 to 57.95 GHz on the lower shoulder of the oxygen absorption band. These four channels measured the atmospheric temperature in four thick layers spanning the surface through the stratosphere. There were 9 MSUs in total. The last MSU instrument, NOAA-14, ceased reliable operation in 2005.
ADVANCED MICROWAVE SOUNDING UNIT (AMSU)
A series of follow-on instruments, the Advanced Microwave Sounding Units (AMSUs), began operation in 1998. The AMSU instruments are composed of 2 sub-units, AMSU-A and AMSU-B. AMSU-B is a humidity sounder (not discussed further), and AMSU-A is a 15-channel temperature sounder similar to MSU. Of the 15 channels, 11 (Channels 4 through 14) are located in the 60 GHz absorption complex and thus are most closely related to atmospheric temperatures at various heights above the surface. The increased number of channels relative to MSU means that AMSU-A samples the temperature of the atmosphere in a larger number of layers. The AMSU measurement footprints are also smaller than those for MSU, leading to higher spatial resolution. 3 AMSU channels (Channels 5, 7, and 9) are closely matched to MSU channels 2, 3 and 4. By using these channels, we have extended our climate-quality dataset to the present.”

KevinK
May 5, 2015 6:55 pm

Jan wrote;
“This is very naïve and it is of course wrong. Do you seriously mean that if we have 3599 independent measurements with accuracy +/- 0.1C and one with accuracy +/- 10 C, the accuracy of the whole set is +/- 10C?”
YES, if you have no prior knowledge of which sensor is only accurate to +/- 10C then the entire data set is only good to +/- 10 C for any traceable analysis purposes. And the Argo system does not know the current accuracy of the individual sensors. Short of hauling them all back on board and recalibrating them we do not know the current accuracy of the system. It might be as good as when it was designed, it very well might not be. Nobody really knows.
I have been responsible for calibrating the focal plane sensors to NIST traceable accuracies for the most modern commercial imaging satellites currently orbiting the Earth. I am quite familiar with the proper ways to account for error propagation through a system. I consider the ARGO system to be a waste of money.
I do not believe that anybody knows the true temperature of the oceans to better than a few degrees C.
Does not matter how many smart people worked on it; there is no calibration traceability after they are dropped in the water, and nobody knows what the true temperature is. They really, really want to believe that with lots of measurements the accuracy must be better, but it simply is not.
Sorry, but that is my well informed opinion, Cheers, KevinK.

KevinK
May 5, 2015 7:57 pm

Just for fun I will expand on my previous comment, again Jan wrote:
“This is very naïve and it is of course wrong. Do you seriously mean that if we have 3599 independent measurements with accuracy +/- 0.1C and one with accuracy +/- 10 C, the accuracy of the whole set is +/- 10C?”
OK here is an example of how accuracy works in the engineering world; I have a factory that manufactures engines for automobiles. These engines have many parts all of which have to be accurate (with respect to their physical dimensions) to plus or minus one thousandth of an inch or else the engine cannot be assembled or function (for very long).
So one approach would be to buy 5000 micrometers (a precision length/width/diameter measuring instrument/tool). Then we calibrate them all once against one standard instrument. Then we send them out all over the factory floor: to the folks that set up the machining tools, to the quality-control persons that check the parts, to the operators that run the machines. What happens? Well, some of these micrometers get dropped onto a concrete floor occasionally (nobody is perfect), which distorts them. Some get dust and dirt inside, which degrades their performance; some even get “adjusted” by the user because they believe they are “out of spec” or “acting funny”. Heck, one of them even got run over by a forklift (just that one time, honest). After a while, the accuracy of our original “fleet” of micrometers has drifted away from its original state, and there is one micrometer (precision measuring instrument) that has been dropped, driven over, and manhandled so much that it is no longer sufficiently accurate.
IF that micrometer is off by ten thousandths of an inch then none of the engine parts fit together properly anymore and the whole factory grinds to a halt until somebody finds that micrometer and re-calibrates it.
Of course nobody knows which micrometer it is because there is no process to periodically recalibrate (i.e. reestablish the accuracy of) the micrometers.
This is why in precision manufacturing environments there is always a strict re-calibration process in place. EVERY measuring instrument gets re-calibrated against a NIST traceable standard at least once a year.
I have had products ready to ship to a customer, but the quality-control folks would not “sign off” because one tool (out of hundreds) used to manufacture the product was “not in calibration”. This is a desired outcome: it ensures that a multimillion-dollar satellite does not get shipped to a customer with any doubts about the provenance of the entire manufacturing process.
The ARGO system exhibits NONE of the attributes of a modern manufacturing environment, and hence produces a substandard product that actual paying customers (forking over their own dollars willingly, without the force of government taxation) would never purchase.
The whole ARGO system was proposed by those that believe they understand the climate and can control it (given unlimited power over all the other folks currently residing on the Earth). They got the funding, and the system basically shows NO DISCERNIBLE CHANGE IN OCEAN TEMPERATURES beyond properly accounted-for calibration errors. Now they want to dismiss the calibration issues and claim we must trust them when this system allegedly shows hundredths of degrees of warming. Yeah, and I saw a unicorn run across the road in front of me on the way to my office yesterday morning, really I did, honest.
Cheers, KevinK

Jan Kjetil Andersen
Reply to  KevinK
May 5, 2015 8:30 pm

KevinK, your analogy to a factory does not hold. We are interested in the average ocean temperature, not the precise temperature in every cubic kilometer.
A scientific project works differently from a factory floor, but that does not mean the science is all wrong. It does not work that way, and it does not help to write your very wrongheaded assumptions in capital letters.
/Jan

KevinK
Reply to  Jan Kjetil Andersen
May 5, 2015 9:00 pm

Jan, sorry, but a factory floor is all about knowledge, knowing exactly what the statistical distribution of part sizes is so the parts will work together.
The average temperature of the ocean is a meaningless bit of information. When doing a proper energy analysis of a system, the important quantity is the total heat content, and this cannot be determined by measuring the average temperature alone. The temperature is a result of the total heat content and the thermal capacity of the material; it is a function of those two pieces of information. Without carefully measuring the thermal capacity of the oceans (dependent on pressure, temperature, salinity, etc., and never a uniform property over the vast spatial extent of the oceans) and the temperature where that specific thermal capacity exists, it is impossible to discern the heat content of the oceans.
After all the expenditures of money to measure the temperature of the oceans we know nothing more than when we started. There may be more total energy, there may be less or most likely the energy content probably goes up and down due to causes nobody really understands at this point in time.
We, all of us, do not know the current, past or future energy content of the oceans, and probably never will to better than a few percent, if that. Asserting that a scientific project knows the energy content of the oceans with the kind of accuracy associated with temperature differences of 0.1 degree C is pure unadulterated hubris.
Again, sorry if my opinions about the current state of our knowledge of the energy content of the oceans are offensive, but there is no evidence that anybody can defend their claimed knowledge to the accuracy commonly present in the modern engineering world.
You just don’t know.
Cheers, KevinK
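KevinK's point that heat content is not recoverable from temperature alone can be illustrated numerically. The sketch below uses entirely made-up layer values (the densities, specific heats, temperatures, and volumes are hypothetical, not real ocean data) to show that heat content is a volume sum of density × specific heat × temperature, computed layer by layer, while a mean temperature discards the weighting:

```python
# Q = sum_i rho_i * cp_i * T_i * V_i  -- heat content as a layered sum.
# All layer values below are illustrative placeholders, not measurements.
layers = [
    # (density kg/m^3, specific heat J/(kg K), temperature K, volume m^3)
    (1025.0, 3990.0, 291.0, 3.0e16),   # warm surface layer
    (1027.0, 3985.0, 277.0, 3.0e17),   # thermocline
    (1028.0, 3900.0, 275.0, 1.0e18),   # deep ocean
]

heat_content = sum(rho * cp * t * v for rho, cp, t, v in layers)
total_volume = sum(v for *_, v in layers)
mean_temperature = sum(t * v for _, _, t, v in layers) / total_volume

print(f"heat content:     {heat_content:.3e} J")
print(f"mean temperature: {mean_temperature:.2f} K")
```

Note that two oceans with the same volume-weighted mean temperature but different specific-heat distributions would carry different heat contents, which is the crux of the comment above.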

Donald L. Klipstein
Reply to  KevinK
May 5, 2015 9:10 pm

Manufacturing an automobile engine is different from generating a temperature dataset. As KevinK explained, when parts have to fit and work together, one badly drifted measuring tool can effectively throw a wrench into the works. But for a temperature dataset, one thermometer out of 3600 going wrong by 10 degrees C does not throw the whole dataset off by 10 degrees C, not even if it is not known which of the 3600 thermometers went so badly wrong. A somewhat-global distribution of thermometers does not interact the way the parts of an automobile engine do.
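The arithmetic behind this reply can be made explicit. The sketch below (an illustration assuming independent, zero-mean errors — the very assumption KevinK disputes) propagates the stated per-sensor uncertainties through a simple average of 3600 sensors, and separately bounds the worst-case bias from the single bad sensor:

```python
import math

# Standard propagation for the mean of independent, unbiased errors:
#   sigma_mean = sqrt(sum(sigma_i^2)) / N
sigmas = [0.1] * 3599 + [10.0]          # 3599 good sensors, one bad one
n = len(sigmas)
sigma_mean = math.sqrt(sum(s * s for s in sigmas)) / n
print(f"uncertainty of the mean: {sigma_mean:.4f} C")

# Worst-case bias view: even if the bad sensor reads a full 10 C off,
# it can shift the 3600-sensor average by at most 10/3600 C.
max_bias = 10.0 / n
print(f"max bias from one sensor: {max_bias:.4f} C")
```

Both numbers come out on the order of thousandths of a degree, which is the sense in which one rogue thermometer does not wreck the average — provided, as noted, that the remaining errors really are independent and unbiased.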

Monckton of Brenchley
Reply to  Donald L. Klipstein
May 6, 2015 3:17 pm

The problem with the terrestrial datasets is the persistent unidirectional tampering with large numbers of station records from all over the world. The satellite data are less subjective. It is excellent news that Terry Kealey of the U. of Buckingham is leading an investigation into the corruption of the terrestrial datasets.

Monckton of Brenchley
Reply to  Donald L. Klipstein
May 6, 2015 3:20 pm

KevinK is of course correct about the uncertainty surrounding our ill-resolved attempts to measure ocean temperature, with each bathythermograph buoy covering some 200,000 cu. km of ocean. Even with some ingenious Kriging to interpolate, it is not particularly likely that we are measuring the change in ocean temperature correctly. Nevertheless, the ARGO dataset is the least ill-resolved we have, and it shows warming of the ocean at a rate equivalent to only 0.2 K/century, which is scarcely alarming.