El Niño strengthens: the Pause lengthens

Global temperature update: no warming for 18 years 6 months

By Christopher Monckton of Brenchley

For 222 months, since December 1996, there has been no global warming at all (Fig. 1). This month’s RSS temperature – still unaffected by a slowly strengthening El Niño, which will eventually cause temporary warming – passes another six-month milestone and establishes a new record length for the Pause: 18 years 6 months.

What is more, the IPCC’s centrally-predicted warming rate since its First Assessment Report in 1990 is now more than two and a half times the measured rate. On any view, the predictions on which the entire climate scare was based were extreme exaggerations.

However, it is becoming ever more likely that the temperature increase that usually accompanies an El Niño will come through after a lag of four or five months. The Pause may yet shorten somewhat, just in time for the Paris climate summit, though a subsequent La Niña would be likely to bring about a resumption of the Pause.


Figure 1. The least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere temperature anomaly dataset shows no global warming for 18 years 6 months since December 1996.

The hiatus period of 18 years 6 months is the farthest back one can go in the RSS satellite temperature record and still show a zero trend. Note that the start date is not cherry-picked: it is calculated. And the graph does not mean there is no such thing as global warming. Going back further shows a small warming rate.

The divergence between the models’ predictions in 1990 (Fig. 2) and 2005 (Fig. 3), on the one hand, and the observed outturn, on the other, continues to widen. For the time being, these two graphs will be based on RSS alone, since the text file for the new UAH v.6 dataset is not yet being updated monthly. However, the recent UAH adjustments – exceptional in that they are the only such adjustments I can recall that reduce the previous trend rather than steepening it – bring the UAH dataset very close to that of RSS. There is now a clear distinction between the satellite and terrestrial datasets, particularly since the latter were subjected to adjustments over the past year or two that steepened the apparent rate of warming.


Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), for the 305 months January 1990 to May 2015 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.1 K/century equivalent, taken as the mean of the RSS and UAH v. 5.6 satellite monthly mean lower-troposphere temperature anomalies.


Figure 3. Predicted temperature change, January 2005 to May 2015, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the near-zero observed anomalies (dark blue) and real-world trend (bright blue), taken as the mean of the RSS and UAH v. 5.6 satellite lower-troposphere temperature anomalies.

The Technical Note explains the sources of the IPCC’s predictions in 1990 and in 2005, and also demonstrates that, according to the ARGO bathythermograph data, the oceans are warming at a rate equivalent to less than a quarter of a Celsius degree per century.

Key facts about global temperature

• The RSS satellite dataset shows no global warming at all for the 222 months from December 1996 to May 2015 – more than half the 437-month satellite record.

• The entire RSS dataset from January 1979 to date shows global warming at an unalarming rate equivalent to just 1.2 Cº per century.

• Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.

• The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us.

• The fastest warming rate lasting 15 years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.

• In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, two-thirds higher than its current prediction of 1.7 Cº/century.

• The warming trend since 1990, when the IPCC wrote its first report, is equivalent to 1.1 Cº per century. The IPCC had predicted two and a half times as much.

• Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business-as-usual centennial prediction of 4.8 Cº of warming by 2100.

• The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than 15 years that has been measured since 1950.

• The IPCC’s 4.8 Cº-by-2100 prediction is four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.

• The oceans, according to the 3600+ ARGO bathythermograph buoys, are warming at a rate of just 0.02 Cº per decade, equivalent to 0.23 Cº per century.

• Recent extreme-weather events cannot be blamed on global warming, because there has not been any global warming to speak of. It is as simple as that.



Technical note

Our latest topical graph shows the least-squares linear-regression trend on the RSS satellite monthly global mean lower-troposphere dataset for as far back as it is possible to go and still find a zero trend. The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.
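The start-date calculation described above can be sketched in a few lines. This is a minimal illustration of the idea, not the author’s actual routine: for each candidate start month, fit a least-squares trend from that month to the end of the series, and report the earliest start whose slope is non-positive.

```python
import numpy as np

def longest_zero_trend_start(anomalies, min_months=24):
    """Earliest start index from which the least-squares linear trend
    of the remainder of the series is non-positive; None if no start
    qualifies. A minimum window length guards against trivially short
    trend periods."""
    y = np.asarray(anomalies, dtype=float)
    x = np.arange(len(y), dtype=float)
    for start in range(len(y) - min_months):
        slope = np.polyfit(x[start:], y[start:], 1)[0]
        if slope <= 0:
            return start          # scanning oldest-first gives the longest period
    return None
```

Applied to a monthly anomaly series such as the RSS text file, the index returned would mark the beginning of the zero-trend period; the “farthest back” property falls out automatically because the scan runs from the oldest month forward.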

The satellite datasets are arguably less unreliable than other datasets in that they show the 1998 Great El Niño more clearly than all other datasets. The Great El Niño, like its two predecessors in the past 300 years, caused widespread global coral bleaching, providing an independent verification that the satellite datasets are better able than other datasets to capture such fluctuations without artificially filtering them out.

Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates below those that are published. The satellite datasets are based on reference measurements made by the most accurate thermometers available – platinum resistance thermometers, which provide an independent verification of the temperature measurements by checking via spaceward mirrors the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.

The RSS graph (Fig. 1) is accurate. The data are taken monthly straight from the RSS website. A computer algorithm reads them from the text file and plots them automatically, adjusting the aspect ratio of the data window on both axes so as to show the data at maximum scale, for clarity.

The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line.

The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression, since summer temperatures in one hemisphere are offset by winter temperatures in the other. Therefore, an AR(n) model would generate results little different from a least-squares trend.
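The claim that an AR(n) fit would differ little from ordinary least squares can be checked with a standard diagnostic: estimate the lag-1 autocorrelation of the detrended residuals and inflate the trend’s standard error by the usual effective-sample-size factor. A hedged sketch of that diagnostic (not the method used for the figures in this post):

```python
import numpy as np

def trend_with_ar1_check(y):
    """Least-squares trend per step, plus a standard error adjusted for
    lag-1 autocorrelation of the residuals via the effective sample size
    n_eff = n * (1 - r1) / (1 + r1)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    x = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]     # lag-1 autocorrelation
    # Ordinary least-squares standard error of the slope
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    se_adj = se * np.sqrt((1 + r1) / (1 - r1))        # = se * sqrt(n / n_eff)
    return slope, se, se_adj, r1
```

If r1 is near zero, se_adj is close to se and the plain regression trend is adequate, which is the point being made above.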

Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.

RSS itself is now taking a serious interest in the length of the Great Pause. Dr Carl Mears, the senior research scientist at RSS, discusses it at remss.com/blog/recent-slowing-rise-global-temperatures.

Dr Mears’ results are summarized in Fig. T1:


Figure T1. Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014. The transient coolings caused by the volcanic eruptions of El Chichón (1982) and Pinatubo (1991) are shown, as is the spike in warming caused by the Great El Niño of 1998.

Dr Mears writes:

“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”

Dr Mears concedes the growing discrepancy between the RSS data and the models, but he alleges “cherry-picking” of the start-date for the global-temperature graph:

“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”

In fact, the spike in temperatures caused by the Great El Niño of 1998 is almost entirely offset in the linear-trend calculation by two factors: the not dissimilar spike of the 2010 El Niño, and the sheer length of the Great Pause itself.

Curiously, Dr Mears prefers the much-altered terrestrial datasets to the satellite datasets. The UK Met Office, however, uses the satellite record to calibrate its own terrestrial record.

The length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed. It remains possible that El Niño-like conditions may prevail this year, reducing the length of the Great Pause. However, the discrepancy between prediction and observation continues to widen.

Sources of the IPCC projections in Figs. 2 and 3

IPCC’s First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:

“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”

That “substantial confidence” was substantial over-confidence. For the rate of global warming since 1990 – the most important of the “broad-scale features of climate change” that the models were supposed to predict – is now below half what the IPCC had then predicted.

In 1990, the IPCC said this:

“Based on current models we predict:

“under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3 Cº per decade (with an uncertainty range of 0.2 Cº to 0.5 Cº per decade), this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1 Cº above the present value by 2025 and 3 Cº before the end of the next century. The rise will not be steady because of the influence of other factors” (p. xii).

Later, the IPCC said:

“The numbers given below are based on high-resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8 Cº by 2030. For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate” (p. xxiv).

The orange region in Fig. 2 represents the IPCC’s less extreme medium-term Scenario-A estimate of near-term warming, i.e. 1.0 [0.7, 1.5] K by 2025, rather than its more extreme Scenario-A estimate, i.e. 1.8 [1.3, 3.7] K by 2030.

It has been suggested that the IPCC did not predict the straight-line global warming rate that is shown in Figs. 2-3. In fact, however, its predicted global warming over so short a term as the 25 years from 1990 to the present differs little from a straight line (Fig. T2).


Figure T2. Historical warming from 1850-1990, and predicted warming from 1990-2100 on the IPCC’s “business-as-usual” Scenario A (IPCC, 1990, p. xxii).

Because this difference between a straight line and the slight uptick in the warming rate the IPCC predicted over the period 1990-2025 is so small, one can look at it another way. To reach the 1 K central estimate of warming since 1990 by 2025, there would have to be twice as much warming in the next ten years as there was in the last 25 years. That is not likely.

Likewise, to reach 1.8 K by 2030, there would have to be four or five times as much warming in the next 15 years as there was in the last 25 years. That is still less likely.
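The arithmetic behind the two paragraphs above can be made explicit using the article’s own figures (the 0.27 K observed warming over 1990-2015 comes from the Technical Note’s later calculation). On those numbers, the warming still required works out to roughly 2.7 times the last 25 years’ warming to reach 1 K by 2025, and roughly 5.7 times to reach 1.8 K by 2030 – at least the multiples the text states:

```python
# Figures as stated in the article: 0.27 K observed warming 1990-2015,
# vs. central predictions of 1.0 K by 2025 and 1.8 K by 2030.
OBSERVED_1990_2015 = 0.27   # K

def warming_still_needed_ratio(target_k, observed_k=OBSERVED_1990_2015):
    """Ratio of the warming still required to the warming already seen."""
    return (target_k - observed_k) / observed_k

r_2025 = warming_still_needed_ratio(1.0)   # ~2.7x, in only 10 years
r_2030 = warming_still_needed_ratio(1.8)   # ~5.7x, in only 15 years
```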

But is the Pause perhaps caused by the fact that CO2 emissions have not been rising anything like as fast as the IPCC’s “business-as-usual” Scenario A prediction in 1990? No: CO2 emissions have risen rather above the Scenario-A prediction (Fig. T3).


Figure T3. CO2 emissions from fossil fuels, etc., in 2012, from Le Quéré et al. (2014), plotted against the chart of “man-made carbon dioxide emissions”, in billions of tonnes of carbon per year, from IPCC (1990).

Plainly, therefore, CO2 emissions since 1990 have proven to be closer to Scenario A than to any other case, because for all the talk about CO2 emissions reduction the fact is that the rate of expansion of fossil-fuel burning in China, India, Indonesia, Brazil, etc., far outstrips the paltry reductions we have achieved in the West to date.

True, methane concentration has not risen as predicted in 1990 (Fig. T4), for methane emissions, though largely uncontrolled, are simply not rising as the models had predicted. Here, too, all of the predictions were extravagantly baseless.

The overall picture is clear. Scenario A is the emissions scenario from 1990 that is closest to the observed CO2 emissions outturn.


Figure T4. Methane concentration as predicted in four IPCC Assessment Reports, together with (in black) the observed outturn, which is running along the bottom of the least prediction. This graph appeared in the pre-final draft of IPCC (2013), but had mysteriously been deleted from the final, published version, inferentially because the IPCC did not want to display such a plain comparison between absurdly exaggerated predictions and unexciting reality.

To be precise, a quarter-century after 1990, the global-warming outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.27 Cº, equivalent to less than 1.1 Cº/century. The IPCC’s central estimate of 0.71 Cº, equivalent to 2.8 Cº/century, predicted for Scenario A in IPCC (1990) with “substantial confidence”, was two and a half times too big. In fact, the outturn is visibly well below even the least estimate.

In 1990, the IPCC’s central prediction of the near-term warming rate was two-thirds higher than its prediction is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. T5 shows, even that is proving to be a substantial exaggeration.

Is the ocean warming?

One frequently-discussed explanation for the Great Pause is that the coupled ocean-atmosphere system has continued to accumulate heat at approximately the rate predicted by the models, but that in recent decades the heat has been removed from the atmosphere by the ocean. Since globally the near-surface strata show far less warming than the models had predicted, it is hypothesized that what is called the “missing heat” has traveled to the little-measured abyssal strata below 2000 m, whence it may emerge at some future date.

Actually, it is not known whether the ocean is warming: each of the 3600 automated ARGO bathythermograph buoys takes just three measurements a month in 200,000 cubic kilometres of ocean – roughly a 100,000-square-kilometre box more than 316 km square and 2 km deep. Plainly, results obtained at a resolution that sparse (which, as Willis Eschenbach puts it, is approximately the equivalent of taking a single temperature and salinity profile at a single point in Lake Superior less than once a year) are not going to be much better than guesswork.
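The sampling arithmetic in that paragraph checks out on round numbers. The figures below (ocean area of roughly 361 million km², 2 km profiling depth) are standard approximations assumed for the check, not values taken from ARGO documentation:

```python
OCEAN_AREA_KM2 = 361e6   # approximate global ocean surface area
DEPTH_KM = 2.0           # ARGO floats profile roughly the top 2 km
FLOATS = 3600

# Ocean volume sampled per float, and the side of the equivalent box
volume_per_float_km3 = OCEAN_AREA_KM2 * DEPTH_KM / FLOATS  # ~200,000 km^3
box_side_km = (OCEAN_AREA_KM2 / FLOATS) ** 0.5             # ~317 km on a side
```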

Unfortunately, ARGO seems not to have updated the ocean dataset since December 2014. However, what we have gives us 11 full years of data. Results are plotted in Fig. T5. The ocean warming, if ARGO is right, is equivalent to just 0.02 Cº decade–1, or 0.2 Cº century–1.


Figure T5. The entire near-global ARGO 2 km ocean temperature dataset from January 2004 to December 2014 (black spline-curve), with the least-squares linear-regression trend calculated from the data by the author (green arrow).

Finally, though the ARGO buoys measure ocean temperature change directly, before publication NOAA craftily converts the temperature change into zettajoules of ocean heat content change, which makes the change seem a whole lot larger.

The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.
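The conversion from zettajoules back to kelvin is straightforward. A back-of-envelope sketch, assuming the heat resides in the 0-2000 m layer and using round-number values for seawater density (~1025 kg/m³) and specific heat (~3990 J/(kg·K)) – assumptions for illustration, not NOAA’s exact constants:

```python
# 0-2000 m layer of an ocean of ~361 million km^2
volume_m3 = 361e6 * 1e6 * 2000           # km^2 -> m^2, then 2000 m deep
mass_kg = volume_m3 * 1025               # approximate seawater density
heat_capacity_j_per_k = mass_kg * 3990   # approximate specific heat

delta_t_k = 260e21 / heat_capacity_j_per_k   # 260 ZJ -> ~0.09 K over 44 years
rate_per_century = delta_t_k / 44 * 100      # ~0.2 K/century, as stated above
```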


Figure T6. Ocean heat content change, 1957-2013, in Zettajoules from NOAA’s NODC Ocean Climate Lab: http://www.nodc.noaa.gov/OC5/3M_HEAT_CONTENT, with the heat content values converted back to the ocean temperature changes in Kelvin that were originally measured. NOAA’s conversion of the minuscule warming data to Zettajoules, combined with the exaggerated vertical aspect of the graph, has the effect of making a very small change in ocean temperature seem considerably more significant than it is.

Converting the ocean heat content change back to temperature change reveals an interesting discrepancy between NOAA’s data and that of the ARGO system. Over the period of ARGO data, from 2004 to 2014, the NOAA data imply that the oceans are warming at 0.05 Cº decade–1, equivalent to 0.5 Cº century–1, rather more than double the rate shown by ARGO.

ARGO has the better-resolved dataset, but since the resolutions of all ocean datasets are very low one should treat all these results with caution. What one can say is that, on such evidence as these datasets are capable of providing, the difference between the underlying warming rate of the ocean and that of the atmosphere is not statistically significant, suggesting that if the “missing heat” is hiding in the oceans it has magically found its way into the abyssal strata without managing to warm the upper strata on the way. On these data, too, there is no evidence of rapid or catastrophic ocean warming.

Furthermore, to date no empirical, theoretical or numerical method, complex or simple, has successfully specified a mechanism by which the heat generated by anthropogenic greenhouse-gas enrichment of the atmosphere could reach the deep ocean without much altering the heat content of the intervening near-surface strata, or by which heat from the bottom of the ocean might eventually re-emerge to perturb the near-surface climate conditions relevant to land-based life on Earth.

Most ocean models used in performing coupled general-circulation model sensitivity runs simply cannot resolve most of the physical processes relevant for capturing heat uptake by the deep ocean. Ultimately, the second law of thermodynamics requires that any heat which may have accumulated in the deep ocean will dissipate via various diffusive processes. It is not plausible that any heat taken up by the deep ocean will suddenly warm the upper ocean and, via the upper ocean, the atmosphere.

If the “deep heat” explanation for the Pause were correct (and it is merely one among dozens that have been offered), the complex models have failed to account for it correctly: otherwise, the growing discrepancy between the predicted and observed atmospheric warming rates would not have become as significant as it has.

Why were the models’ predictions exaggerated?

In 1990 the IPCC predicted – on its business-as-usual Scenario A – that from the Industrial Revolution to the present there would have been 4 Watts per square meter of radiative forcing caused by Man (Fig. T7):


Figure T7. Predicted manmade radiative forcings (IPCC, 1990).

However, from 1995 onward the IPCC decided to assume, on rather slender evidence, that anthropogenic particulate aerosols – mostly soot from combustion – were shading the Earth from the Sun to a large enough extent to cause a strong negative forcing. It has also now belatedly realized that its projected increases in methane concentration were wild exaggerations. As a result of these and other changes, it now estimates that the net anthropogenic forcing of the industrial era is just 2.3 Watts per square meter, or little more than half its prediction in 1990:


Figure T8: Net anthropogenic forcings, 1750 to 1950, 1980 and 2012 (IPCC, 2013).

Even this, however, may be a considerable exaggeration. For the best estimate of the actual current top-of-atmosphere radiative imbalance (total natural and anthropogenic net forcing) is only 0.6 Watts per square meter (Fig. T9):


Figure T9. Energy budget diagram for the Earth from Stephens et al. (2012)

In short, most of the forcing predicted by the IPCC is either an exaggeration or has already resulted in whatever temperature change it was going to cause. There is little global warming in the pipeline as a result of our past and present sins of emission.

It is also possible that the IPCC and the models have relentlessly exaggerated climate sensitivity. One recent paper on this question is Monckton of Brenchley et al. (2015), which found climate sensitivity to be in the region of 1 Cº per CO2 doubling (go to scibull.com and click “Most Read Articles”). The paper identified errors in the models’ treatment of temperature feedbacks and their amplification, which account for two-thirds of the equilibrium warming predicted by the IPCC.

Professor Ray Bates will shortly give a paper in Moscow in which he will conclude, based on the analysis by Lindzen & Choi (2009, 2011) (Fig. T10), that temperature feedbacks are net-negative. Accordingly, he supports the conclusion both by Lindzen & Choi and by Spencer & Braswell (2010, 2011) that climate sensitivity is below – and perhaps considerably below – 1 Cº per CO2 doubling.


Figure T10. Reality (center) vs. 11 models. From Lindzen & Choi (2009).

A growing body of reviewed papers finds climate sensitivity considerably below the 3 [1.5, 4.5] Cº per CO2 doubling that was first put forward in the Charney Report of 1979 for the U.S. National Academy of Sciences and is still the IPCC’s best estimate today.

On the evidence to date, therefore, there is no scientific basis for taking any action at all to mitigate CO2 emissions.

187 thoughts on “El Niño strengthens: the Pause lengthens”

  1. Years ago at ClimateAudit I was ridiculed for saying that not only did we not know the magnitude of water vapor and cloud feedbacks, but also we were ignorant of the sign of it.

    • According to Fig. T9, Stephens et al. (cousins of Trenberth et al.??), the imbalance of power input-output to the earth is 0.6 +/- 17 W/m^2

      Now as I recall, when I was at UofA and my cohorts were doing polarized proton and neutron beam experiments, they considered it a success if they got the order of magnitude right. So 0.6 +/- 6 would have been cause for a beer at the bar.

      But 0.6 +/- 17 is not in the beer at the bar class.

      More like going to the bar for a game of darts !!

      And I’ve only just perused Lord Monckton’s latest essay, so I can’t wait to find out what other jewels he has exposed for us.

      So back to the reading.


      • And if the surface imbalance is essentially zero +/- 17 W/m^2, how does that “knowledge” improve to +/- 0.4 by the time it gets escaped to space ??

        I must have skipped the statistics class that tells you how to add noise to noise, and get silence.
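For what it is worth, the standard rule for combining independent uncertainties is addition in quadrature, under which the combined error can only grow; the much tighter ±0.4 W/m² figure at top-of-atmosphere reportedly rests not on summing the budget terms but on an independent constraint (the rise in ocean heat content). A minimal sketch of the quadrature rule:

```python
import math

def quadrature(*sigmas):
    """1-sigma uncertainty of a sum or difference of independent terms."""
    return math.sqrt(sum(s * s for s in sigmas))

# Summing noisy flux terms can only enlarge the uncertainty:
combined = quadrature(10.0, 10.0, 8.0)   # ~16.2 W/m^2 - never "silence"
```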

  2. “However, the effect of the recent UAH adjustments – exceptional in that they are the only such adjustments I can recall that reduce the previous trend rather than steepening it ”

    Read more:

    for UAH:

    version B, NOAA-7 correction, reduced the trend
    version C, removal of residual annual cycle increased the trend
    version D, orbital decay adjustment increased the trend
    version D, hot target temperature correction decreased the trend
    version 5 diurnal correction increased the trend
    version 5.1 tightening data screening decreased the trend
    version 5.2 diurnal drift correction increased the trend

    or look at CRU ocean or NOAA ocean: adjustments cool.

    • Those adjustments certainly cooled the 1930’s and before, eh?

      (Using my best Canadian accent.)

    • I have no idea what Stephen Mosher is trying to say with relation to what Christopher Monckton of Brenchley has to say. I read the post by Christopher Monckton of Brenchley and found his points believable and accurate.

      So come on Stephen, please explain EVERYTHING that is wrong with this post by Christopher Monckton of Brenchley and not just allude in some drive-by trolling way that something is wrong in the state of Denmark.

      • I believe his point is that anyone who has criticized the adjustments made to the surface-record data must also criticize these adjustments.
        Of course, he either just doesn’t get, or is doing his best to ignore, the fact that we aren’t criticizing adjustments per se; we are criticizing adjustments using faulty methods or, worse, undisclosed methods.

      • Monckton’s claim:

        “However, the effect of the recent UAH adjustments – exceptional in that they are the only such adjustments I can recall that reduce the previous trend rather than steepening it”

        1. He claims they are EXCEPTIONAL because they are the ONLY adjustments that REDUCE a trend.
        2. The caveat being “that he knows of”

        So I’ll help the good lord: he needs to READ MORE, because they are not exceptional.

        in fact the most important adjustments COOL THE RECORD

      • Mosher

        So I’ll help the good lord: he needs to READ MORE, because they are not exceptional.

        in fact the most important adjustments COOL THE RECORD

        And, by cooling the record (the old temperatures), that change CREATES a warming trend because the RECENT temperatures are not changed.
        You are making our point that the RESULT of the adjustments CREATES today’s Man-made global warming.

    • Are you finally admitting that you can’t actually criticize the adjustments, despite the fact that all the data and all of the methods have been made publicly available?

  3. According to the Met Office, 2014 was one of the warmest years in the CET record. However, in the last 5 months the CET has stalled a bit, hovering around its 20-year average; detailed daily min/max temps HERE

  4. By clever use of the two definitions of GW (only the man-made component versus all-inclusive) you can still say that global warming continues and the models are right, even if the measured temperature drops.
    I wonder if the models were made according to the old definition. Most work on the models began when the old definition was still in effect.

  5. The habitués of the Climate Fearosphere are hilariously famous for their knee-jerk “Oh, but…” excuses, which spread like wildfire amongst their myriad groupthink outlets. One of their funniest repeat-after-me refrains is that the pause is cherry-picked. I’m gonna just LOL at the thought of that blathering stupidity and nonsense.
    So, LOL.

  6. There is no real prospect of the El Niño strengthening. The amount of solar energy entering the system is falling.

  7. “which will eventually cause temporary warming…”
    …and they will claim that they were right, without realizing it proves they were wrong

    • I would like to hear how the magical CO2 dust creates or affects the El Niño wave. Back radiation heating water again – but controlling 380 ppm of the atmosphere against Mother Nature’s 30,000 ppm of water vapor will save us.

  8. Figure T6 – global ocean heat content change

    y-axis scale intervals

    50ZJ = 0.02K
    100ZJ = 0.04K
    150ZJ = 0.06K
    200ZJ = 0.07K??
    250ZJ = 0.09K

    Mistake or rounding?

  9. It is an interesting exercise to print out the Australian BOM SOI monthly data and simply color in the months of El Niño in red and La Niña in blue (values of -8 and more negative mark El Niño months; +8 and more positive, La Niñas). Look at the frequency and amplitudes of each during warming and cooling episodes.
    Note these events are symptoms, not the ultimate cause, of the warming and cooling.
    Compare the current trend with, say, the super El Niño of 1997-8. This one probably won’t amount to much as far as temperatures are concerned. Maybe a modest peak in about October – November.
    The current daily SOI has recently turned positive; see
    At present, the cooling trend since the RSS millennial temperature peak in 2003 continues. See
    For forecasts of the timing and amplitude of the further coming cooling, see

  10. Thank you, Lord Monckton, for your very informative and entertaining presentations that I’ve viewed on the internet.

    • That “person” used to post frequently on National Review.
      He got all bent out of shape because we didn’t afford him the level of worship that he felt a full professor such as himself was due.
      When he found himself incapable of winning an open argument he retreated to a forum in which he could win any argument because the opposition was banned.
      Much as I presume his classroom operates.

      • “When he found himself incapable of winning an open argument he retreated to a forum in which he could win any argument because the opposition was banned.
        Much as I presume his classroom operates.”

        Well, he’s banned here.

      • Nick Stokes

        Yes, such abusive and offensive trolls who persistently and deliberately disrupt serious debate should be removed from any serious blog. I don’t understand why you are permitted the leniency afforded to your behaviour.


      • @Nick Stokes

        Please explain the reasoning behind your observation. As it stands, it makes no sense.

  11. Lord Monckton fails to mention that his Monckton of Brenchley et al. (2015) article was recently refuted by a peer-reviewed article in the same journal, Richardson et al. (2015), which found that:

    “In summary, M15 fail to demonstrate that IPCC estimates of climate sensitivity are overstated. Their alternative parameterization of a commonly used simple climate model performs poorly, with a bias 350 % larger and RMSE 150 % larger than CMIP5 median during 2000–2010. Their low estimates of future warming are due to assumptions developed using a logically flawed justification narrative rather than physical analysis. The key conclusions are directly contradicted by observations and cannot be considered credible.”


    • The whole subject of sensitivity has grown from an amoeba to a half-tonne jellyfish.
      It can easily be demonstrated numerically that the sensitivity of the so-called average ‘global temperature’ to the annual change of either of two natural variables:
      1. The Earth’s rate of rotation
      2. The geomagnetic dipole
      (both of which, incidentally, are measured FAR more accurately than the estimate of the ‘GT’) is similar to, or even greater than, its sensitivity to CO2 concentration.

      • Microtesla. The tesla is the MKS unit of magnetic field strength: 1 T = 1 weber/m², or 1 T = 10,000 gauss.
        a) The geomagnetic field is by far the strongest modulator of cosmic rays.
        b) Secular change in the GMF is more often than not the result of circulation within the liquid core at or near the mantle boundary.
        The Jackson–Bloxham data for the field variability at the boundary show close correlation with the sunspot cycles.

        In that respect the current temperature pause, associated with the swing in the direction of the dipole values, may well be related to some elements of solar activity.
        The graph is obtained by extracting shorter periodicities from the Jackson–Bloxham data published in the early 1990s; to eliminate the end effect of filtering, the output is curtailed at 1980.
        It is regrettable that the authors of the data compilation have not gone back since to update it.

      • Mark Richardson, Zeke Hausfather, Dana A. Nuccitelli, Ken Rice, John P. Abraham.

        I didn’t realize that Zeke had succumbed to the dark side quite so completely.

    • Seriously?
      The list of authors suggesting that M15 cannot be considered credible comprises Mark Richardson, Zeke Hausfather, Dana A. Nuccitelli, Ken Rice and John P. Abraham.
      What a mind bending collection of honesty, integrity, openness and dedication to scientific discovery.
      Just think, with the addition of Michael Mann, Ben Santer and Stephen Lewandowsky you could have assembled 97% of climate science’s most dedicated shysters all in one place.

      • Right on! I’m hoping that an orange suit and a hopalong walk are in the future for at least some of these people, as they are escorted into Sheriff Joe’s emporium for some baloney sandwiches for quite a few years in the future.

    • “””””….. Their low estimates of future warming are due to assumptions developed using a logically flawed justification narrative rather than physical analysis. The key conclusions are directly contradicted by observations and cannot be considered credible.” …..””””

      This is my selection for the Bulwer-Lytton prize this year.

      Congratulations to its authors.


    • “Their low estimates of future warming … are directly contradicted by observations and cannot be considered credible.”
      Hop into the way-back machine, Sherman, we’re off to observe the future.

      • MRW

        Would you want to publicise it if you were associated with such tripe?


    • Since the Richardson et al. paper is paywalled, I have no idea what is meant by “Their alternative parameterization of a commonly used simple climate model.”

      What Monckton et al. did was use an equation that wildly miscalculates the responses of the Roe models the authors ostensibly assumed; their equation is the wrong one for calculating such models’ transient responses. The systems described by the Roe models (as abstracted in Monckton et al.’s Table 2) are all time-invariant, and all but the first one impose delay: they have memory. But Monckton et al.’s equation treats those models as though they all described time-variant, memoryless systems. If you were to follow the logic of their equation, for example, you would conclude that all the water in a leaky bathtub vanishes the instant the faucet is closed, no matter how much water was in it previously or how small the leak is. Consequently, the calculations in their Tables 4 and 6 are meaningless.

      Their projection for the rest of this century doesn’t depend on that feature, though. It’s essentially just the result of assuming a model that’s memoryless and exhibits a feedback level at the center of a range the authors divined from gazing at the last 800,000 years, chanting “thermostasis,” throwing in some eye of newt, and dubbing the range’s high end the “process engineer’s design limit.” For that model their equation is okay; unlike the Roe models depicted in Monckton et al.’s Fig. 4, that model is memoryless.

      So an explanation of the Richardson et al. paper’s rationale would be helpful.
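The leaky-bathtub point above can be illustrated numerically. Below is a minimal sketch of a first-order system with memory (a single drain time constant; all parameters are arbitrary illustrations, not values from either paper): after the input stops, the stored quantity decays exponentially rather than vanishing instantly, which is what a memoryless treatment would imply.

```python
# "Leaky bathtub": dV/dt = inflow - V/tau, integrated with forward Euler.
tau = 10.0      # drain time constant (arbitrary units)
dt = 0.01       # integration step
inflow = 1.0    # faucet rate while open

V = 0.0
t = 0.0
# Faucet open for 50 time units: V approaches steady state inflow * tau
while t < 50.0:
    V += dt * (inflow - V / tau)
    t += dt

V_at_shutoff = V   # close to 10 (= inflow * tau)

# Faucet closed: a system with memory drains as exp(-t/tau);
# it does not lose all its water the instant the faucet closes.
while t < 60.0:
    V += dt * (0.0 - V / tau)
    t += dt

# One time constant after shutoff, roughly 37% of the water remains
print(round(V / V_at_shutoff, 2))
```

The contrast is exactly the one drawn in the comment: a memoryless equation would set V to zero at shutoff, losing the dependence on how full the tub was and how small the leak is.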

      • Monckton of Brenchley: “Mr Born’s points have been raised and answered before.”

        The mark of the charlatan: first evade, then contend the question has already been answered. From the Clinton school of public discourse.

        Lord Monckton’s “answer” was to say that the error was minor and to accuse me of dishonesty for not going through this non-unity-transience fraction examples. Had he gone through those examples properly as I have to compare the correct model output with the ones the Monckton et al. equation produces, he would have found some of his equation’s results are off by more than a factor of three.

        Lord Monckton’s “answers” are all hat, no cattle. I can back up everything I say. He can’t.

    • Thanks, Zeke. I’ll add to my reading list.

      Note also that the whole case above against the IPCC 1990 projections is based on cherry-picking only one of four Scenarios, Scenario A, on the contention that it most closely matches emissions. But emissions are not concentrations, nor forcings. Scenarios B, C and D have been disappeared. The High-Low-Best chart is based on varying climate sensitivity, not forcings, all under a single forcing – Scenario A. However, Monckton himself removes the basis of his house of cards as he correctly states present-day net anthropogenic forcings as 2.3 W/m², which is far closer to IPCC Scenarios B and C (IPCC 1990, Table 2-4).

      Monckton states the global temperature trend since 1990 as 1.06C / century. IPCC 1990 projected ‘just over 0.1C per decade’ under Scenario B. Thus ‘Under Scenario B the IPCC models projected the actual temperature trend accurately’ is a true statement, certainly containing more truth than the post above, because Scenario B actually came to pass, Scenario A decidedly did not.

      • Phil Clarke

        Note also that the whole case above against the IPCC 1990 projections is based on cherry-picking only one of four Scenarios, Scenario A, on the contention that it most closely matches emissions.

        So, you are saying it is a “cherry pick” to compare with reality the scenario that “most closely matches emissions”! And you say the resulting observation that the model predictions are wrong should be ignored because they are based on that “cherry pick”!

        Phil Clarke, in all seriousness, please say who outside of an insane asylum would be willing to accept your nonsensical assertions.


      • No, Richard, I said that was the basis of his Lordship’s selection, that CO2 emissions in 2014 were closer to Scenario A. But CO2 is not the only GHG and 2015 is only one year. Scenario A had CO2 concentrations well in excess of current values and net forcings – from CO2 and all other manmade forcings ditto.

        In 1990, the IPCC did not know how forcings would develop, and so they made a range of forecasts based on a range of possible future forcings and concentrations. A glance at the report, Table 2.4, is enough to see that Scenarios B and C were a lot closer to how forcings increased in reality. Anyone still using Scenario A is the person more suited to the asylum ;-)

      • Phil Clarke

        I quoted what you did say and commented on that. Then I asked you

        Phil Clarke, in all seriousness, please say who outside of an insane asylum would be willing to accept your nonsensical assertions.

        You have replied saying

        No, Richard, I said that was the basis of his Lordship’s selection, that CO2 emissions in 2014 were closer to Scenario A. But CO2 is not the only GHG and 2015 is only one year. Scenario A had CO2 concentrations well in excess of current values and net forcings – from CO2 and all other manmade forcings ditto.

        I am assuming you know your revision of your original bollocks is also bollocks.

        The IPCC projected CO2 equivalence (n.b. NOT only CO2).
        Scenario A was nearest to what happened in terms of emissions.

        If as you claim “Scenario A had CO2 concentrations well in excess of current values and net forcings” then that is – of itself – a failure of the model when Scenario A closely matched the emissions.

        And the temperature has NOT risen as the models projected for Scenario A, instead we have had the ‘pause’.

        I repeat,
        Phil Clarke, in all seriousness, please say who outside of an insane asylum would be willing to accept your nonsensical assertions.


    • “Lord Monckton fails to mention that his Monckton of Brenchley et al. (2015) article was recently refuted by a peer-reviewed article in the same journal, Richardson et al. (2015)…”

      Ah, so it’s been “refuted” by all the “Usual Suspects”, in fact.

      Yeah, right…

  12. When you start to consider these changes, the claimed precision is often beyond the actual ability of the instruments used to measure them, given error factors; you can see how much panic has been created on the back of such poor reality-based data. Which may explain why climate ‘science’ much prefers models, where it can ‘create’ all the changes it needs.

  13. This chart shows the RSS data before the start of Lord Monckton’s pause, with the linear trend extrapolated forward to 2015. So if warming stopped in 1997, current temperatures must be below this line, right?


    Not so:


    There’s apparently been a slightly higher rate of warming since the “pause” started.
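The test this comment describes can be reproduced in code. The sketch below uses a synthetic anomaly series (invented slope and noise, not the actual RSS record): fit a least-squares trend to the pre-1997 portion, extrapolate it forward, and count how many later points lie above the extrapolated line.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2016, dtype=float)
# Synthetic anomaly series: modest warming plus noise (illustrative only)
anoms = 0.012 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

# Least-squares fit on the pre-1997 data only, then extrapolate forward
pre = years < 1997
slope, intercept = np.polyfit(years[pre], anoms[pre], 1)
extrapolated = slope * years + intercept

# Fraction of post-1996 observations lying above the extrapolated line
post = ~pre
frac_above = np.mean(anoms[post] > extrapolated[post])
print(f"pre-1997 trend: {slope:.4f} C/yr, fraction above line: {frac_above:.2f}")
```

With the real RSS monthly file in place of the synthetic series, the same few lines reproduce the comparison the commenter's charts show.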

  14. we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site. Is this really your data?’

    I take this as a hopeful development. Not too hopeful, I’ve learned not to expect too much from people, but still somewhat encouraging.

  15. Why have average temps settled down closer to the long-term mean over the last 3 years? Is it actual or an artefact of the calculation methods? Could it be that there is some sort of cycle playing out, such that there’s a few years of big swings followed by a few that are more settled?

    • When you get over the hill, you start picking up speed.

      Same goes for warm-stop-cool hills!


    • Who knows? Maybe CO2 is ironing out the extremes. That would be the worst of all possible worlds for alarmism. An unbearable boringness.

  16. Carl Mears, ( http://www.remss.com/about/profiles/carl-mears) the senior research scientists mentioned by Monckton in his article made the following statement.

    ” A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!) ”

    This is in the link provided by Monckton. http://www.remss.com/blog/recent-slowing-rise-global-temperatures under the section titled “Measurement Errors”

    Why does the man responsible for his own product consider surface temperature datasets to be more reliable than those coming from satellites?

      • Richard M,

        That’s exactly right. Mears is a wannabe. Just another self-serving rent seeker IMHO. He’s riding the climate gravy train for sure.

      • They no longer conform to his bias. Like those people who were sure that aliens were coming on Halley’s Comet. Although instead of pushing the date to somewhere in the future, they just offed themselves to avoid the embarrassment.

      • He’s a VP of RSS.

        When someone in that position labels millions of honest people as “deniers” and “denialists”, Mears is just protecting his gravy train. That is not professional language; that is attacking people for simply having a different scientific point of view. He calls people names because he does not have the credible science necessary to support his beliefs/rent seeking. I rest my case.

    • Joel D. Jackson

      Why does the man responsible for his own product consider surface temperature datasets to be more reliable than those coming from satellites?

      When was the quote made, under what circumstances is he claimed to have made the statement, and which satellite and surface-station temperature data sets was he comparing? The information in, and reliability of, data sets have changed considerably over time.

    • That’s an old quote. With the recent calibration update to UAH there is very little variance from RSS. And UAH agrees with the radiosonde data. What measures are taken to ensure the accuracy of the land surface data? I’m not aware of any.

    • “Why does the man responsible for his own product consider surface temperature datasets to be more reliable than those coming from satellites?”

      They measure different things. If you want to know the temperature of the lower troposphere, RSS is best. If you want to know the surface, that’s the place to measure.

      • The problem, of course, is that there are no surface weather stations covering approximately 75% of the Earth’s surface (oceans and uninhabited areas).

      • Well, if you want to know the surface, the best place to measure is at the surface, where you measure.

        It’s no good for any other place on the surface, except the place you measure.


      • OK. So when one takes stations acknowledged to be good (siting, equipage, etc.) they show cooling. When one takes stations acknowledged to be poor for whatever reasons, they show warming. So, why do you want to include the sheep with the goats, and then somehow smear the goat shit over massive areas of the globe where there are no stations at all, then bray like an ass that the globe is warming (“at an unprecedented rate”?). Are you an idiot?

    • I wonder how many of the skeptics here know that RSS



      A GCM !!!

      [so prove it instead of blathering about it – Anthony]

  17. Why are anomalies used, again? It seems that if the global temp avg was, say, 13C in 1885 and the same in 2005, the placement of the zero for the anomaly could make 1885 cooler than the later date, even though it clearly would not be.
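On the question above: subtracting a common baseline cannot reorder years, because every year-to-year difference is unchanged no matter where the anomaly "zero" is placed. A toy demonstration (all temperatures and baselines here are hypothetical numbers, not real data):

```python
# Two years with the same absolute global mean temperature (degrees C)
temps = {1885: 13.0, 2005: 13.0}

for baseline in (13.5, 12.8):   # two arbitrary reference-period means
    anoms = {yr: t - baseline for yr, t in temps.items()}
    # The 1885-to-2005 difference is zero under every baseline, so no
    # placement of the anomaly zero can make 1885 appear cooler than 2005.
    assert anoms[2005] - anoms[1885] == 0.0

print("the baseline choice never changes the difference between years")
```

The usual stated reason for anomalies is different anyway: stations at different absolute temperatures correlate much better in their departures from their own means than in their absolute readings, which makes spatial averaging more robust.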

  18. Mr Monckton, you left out the graph from the latest IPCC report which compared model ocean heat projections vs observations. There is no missing heat in the deep oceans, as the models did not underpredict ocean heat content.

  19. What are the chances the temps have been slowly trending downward since 2000 or so? There are some independent data sets that show this, correct?

  20. Personally I look at the graphs from 1979 to today and go “look, it’s the same temperature today as it was in 1982!” Just imagine if global mean temperatures hit 1979 levels for a few months or more. Not a big difference.

    • You’re right, it’s not coincidental. The PDO is a measure of North Pacific temperature, and AMO of Atlantic temperature. What it shows is that temperature correlates with temperature.

      • Nick Stokes June 3, 2015 at 6:55 pm
        … “ The PDO is a measure of North Pacific temperature,”

        You might want to have another go at this.

      • PDO as defined by Dr. Mantua

        Positive PDO
        The SST anomaly cool in the interior of the North Pacific and warm along the Pacific Coast
        + the sea level pressure is below average over the North Pacific.

        Negative PDO
        Reverse of the above

    • Nick Stokes says:

      What it shows is that temperature correlates with temperature.

      Nick, I always knew you were super-duper smart! That statement proves it.

  21. The terrifying-sounding heat content change of 260 ZJ from 1970 to 2014 (Fig. T6) is equivalent to just 0.2 K/century of global warming. All those “Hiroshima bombs of heat” of which the climate-extremist websites speak are a barely discernible pinprick. The ocean and its heat capacity are a lot bigger than some may realize.

    4.1342 x 10^17 joules / km³. What does that number represent? That is the energy it takes to convert one cubic kilometer of continental ice from -30 °C to water at 4 °C.
    Useful information:
    heat of fusion of water = 334 J/g
    heat of vaporization of water = 2257 J/g
    specific heat of ice = 2.09 J/g•°C
    specific heat of water = 4.18 J/g•°C

    Step 1: Heat required to raise the temperature of ice from -30 °C to 0 °C (for temp see average profile temp Antarctica) http://www.pnas.org/content/99/12/7844.full
    Use the formula q = mcΔT Per Kg 1000 x 2.09 x 30 = 62700 Joules

    Step 2: Heat required to convert 0 °C ice to 0 °C water
    q = m•ΔHf Per Kg 1000 x 334 = 334000 Joules

    Step 3: Heat required to raise the temperature of 0 °C water to 100 °C water
    q = mcΔT per Kg 1000 x 4.18 x 4 = 16720 Joules

    Total -30 oC ice to +4 oC water per Kg = 413420 Joules / KG
    q = heat energy
    m = mass
    c = specific heat
    ΔT = change in temperature
    ΔHf = heat of fusion

    One metric tonne of water has a volume of one cubic meter (1 tonne of water = 1,000 kg = 1 m³).
    One gigatonne of water has a volume of one billion cubic meters, or one cubic kilometer (1 Gt water = 1 km³). Of course, one gigatonne of ice has a greater volume than one gigatonne of water. But it will still have a volume of 1 km³ when it melts.
    413420 Joules/KG x 1000 KG/t x 1,000,000,000 t/KM^3 = 4.1342E+17 Joules / KM^3

    But you say ‘DD’ how does this compare to the well known ‘Hiroshima bomb’ measurement.
    By today’s standards the two bombs dropped on Japan were small — equivalent to 15,000 tons of TNT in the case of the Hiroshima bomb and 20,000 tons in the case of the Nagasaki bomb. (Encyclopedia Americana. Danbury, CT: Grolier, 1995: 532.)
    In international standard units (SI), one ton of TNT is equal to 4.184E+09 joule (J)

    Hiroshima bomb TNT 15000 x TNT to Joules 4.18E+09 = Joules total 6.276E+13 =>
    or 1 KM^3 of ice melt (4.1342E+17 / 6.276E+13) = # HiroBmb per Km^3 = 6587
    That is correct. Place Hiroshima bombs in a three-dimensional grid, about 54 meters apart, to melt one cubic kilometer of that ice.

    How about all that ‘Ocean Heat Content Change’? Ocean heat content has increased by your noted 2.60 x 10^23 joules since 1970.

    So 2.60 x 10^23 joules / 4.1342 x 10^17 joules/km³ = 628,930 km³

    Well, that sounds like a lot of ice, but Antarctica has between 26 and 30 million of those km³ and Greenland has 2.5 million, so in reality it works out to 628,930 / 30,000,000 = 2.1% of the total.


    Since the heat cannot both melt the ice and heat the water, please tell me: to what accuracy, in percentage terms, has the volume of ice been measured since 1970?
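The arithmetic in this comment can be checked mechanically. The script below repeats the comment's own inputs (specific heats, heat of fusion, the 15 kt Hiroshima yield, the quoted 2.60 x 10^23 J ocean heat rise) and reproduces the headline numbers.

```python
# Energy to take 1 kg of ice from -30 C to water at 4 C, then scale up.
C_ICE = 2.09e3        # specific heat of ice, J/(kg K)
C_WATER = 4.18e3      # specific heat of water, J/(kg K)
H_FUSION = 334e3      # heat of fusion of water, J/kg

per_kg = C_ICE * 30 + H_FUSION + C_WATER * 4   # 62,700 + 334,000 + 16,720 J/kg
per_km3 = per_kg * 1e3 * 1e9                   # x kg/tonne x tonnes/km^3

HIROSHIMA_J = 15_000 * 4.184e9                 # 15 kt of TNT in joules
bombs_per_km3 = per_km3 / HIROSHIMA_J          # ~6587 bombs per km^3 of ice

OHC_RISE = 2.60e23                             # J since 1970, as quoted above
ice_equiv_km3 = OHC_RISE / per_km3             # ~6.29e5 km^3 of ice
fraction = ice_equiv_km3 / 30e6                # vs ~30 million km^3 in Antarctica

print(round(per_kg), round(bombs_per_km3), round(ice_equiv_km3),
      round(fraction * 100, 1))
```

The totals come out as the comment states: 413,420 J/kg, roughly 6,587 Hiroshima bombs per cubic kilometer, about 629,000 km³ of ice-melt equivalent, and about 2.1% of the Antarctic inventory.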

    • DD More’s quantitative analysis, reminding us that one must heat ice to turn it to water and then set about heating the water, is interesting. I shall do my best to verify it and may add it, with attribution, to the Technical Note in future monthly temperature updates.

      If more quantitative checking of this sort had been done, there would never have been a climate scare in the first place.

      • I think when it comes to El Nino we need qualitative checking too :

        Canberra set for a dry and warm winter
        Sydney Morning Herald
        May 28, 2014
        ” the outlook for the July-August period will mean Canberra’s recent dry spell will continue in the coming months with temperatures remaining relatively warm throughout the day and night.
        A key influence of the warmer, drier conditions is the El Nino weather pattern now forming in the Pacific.”
        [my bold]

        Canberra’s extreme run of overnight lows ‘only seen every 10 to 15 years’
        ABC News
        “Canberra’s latest string of frosty mornings has hit a new benchmark for extreme overnight temperatures.
        Since early Monday morning Canberrans have spent more hours in temperatures below zero than above.

        Overnight, Canberrans shivered though a low of -7 degrees, with mercury only climbing above zero after 9:30am.

        This follows Monday’s cold temperatures which saw 20 centimetres of snow fall on the New South Wales ski resorts and Canberra region, the -7C lows experienced on Tuesday morning- which matched temperatures in the Snowy Mountains, and Wednesday’s -5C lows.

        Bureau of Meteorology forecaster Sean Carson said temperature lows seen this week, along with last August’s extreme run of lows only occurred once a decade.

        “August last year saw a pretty impressive run of overnight lows, where we saw eight consecutives nights below -5C, with the lowest night in that sequence getting all the way down to -8C,” he said.

        “But prior to that it is probably something we only see every 10 to 15 years where we get such cold minimums.”

        Mr Carson said this “impressive” run of temperatures was more common in the 1970s.

        “It used to be fairly regular before the 1970s, before we saw a lot of development across Canberra, which did give Canberra a little bit of heat coming from industry,” he said. [..]

        Mr Carson said frosty conditions were set to continue with the El Nino taking effect in Australia.

        With the El Nino conditions across the Australian region it tends to lead to dry conditions, clearer skies and I’d say more of these frosty mornings are likely to continue throughout winter,” he said.”
        [my bold]

        So there you have it. El Nino brings warm weather to south-eastern Australia, but if it’s exceptionally cold then that’s El Nino too.

        Perhaps a bit more chilling, though, is the reference to the 1970s. There was a PDO cooling phase then, and there’s another one underway now.

        It was nice, however, to see the urban heat effect being recognised.

      • Jeez Chris,
        I’ve made this point dozens of times and you have only just now gotten it?

        For example, the supposed melting of West Antarctic ice would cost something between 5 and 25 watts per square meter.

        On rainfall, for example, if you do the calcs working out the latent heat of vaporization plus the potential energy to lift the water at least 3 km high, you can show that at best the 0.6 W/m² energy imbalance could increase hydrological cycling by 0.8%. But of course the IPCC claim 5%, and I’ve even read someone claim 20% for a doubling (3.4 W/m²), which is energetically impossible.

        In about 90% of climate papers, the researchers attempt to repeal the law of conservation of energy, claiming effects that use more energy than CO2 supplies. Every time climate science claims an EFFECT there is an energy penalty – a NEGATIVE feedback. More photosynthesis – energy penalty; ice melting – energy penalty; more/stronger storms – energy penalty; more evaporation/rain – energy penalty; warmer oceans – energy penalty; and on it goes. Every time an effect happens it extracts some of that thermal energy from the atmosphere, and there is only so much CO2-derived energy to go around: add up the energy penalties of all the supposed hundreds of effects and check against conservation of energy. Then you have to remember that if the supposed effect is to happen, the driving force must be there: if a warm atmosphere is melting ice, then that measly amount of energy must be split between the warm atmosphere and the effect for it to be causal. Once all the energy has gone into melting the ice, or whatever, the atmosphere is no longer warm and the effect can no longer be sustained.

        We must bear in mind that 0.6 W per square meter reaching the lower troposphere is the equivalent of a Christmas tree light in a column of air of 10,000 cubic metres; it’s not very much energy! Most of these thermogeddon effects require A LOT OF ENERGY…
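The ~0.8% figure in the rainfall argument above can be sanity-checked with a back-of-envelope script. The inputs are assumptions for illustration: a global mean precipitation of about 1 m/yr, a latent heat of vaporization of 2.26e6 J/kg, and a ~3 km condensation height, as in the comment.

```python
L_VAP = 2.26e6              # latent heat of vaporization, J/kg (assumed)
G = 9.81                    # gravitational acceleration, m/s^2
LIFT_HEIGHT = 3000.0        # assumed condensation height, m
RAIN_KG_PER_M2_YR = 1000.0  # ~1 m/yr of rain over each square meter
SECONDS_PER_YEAR = 3.156e7

# Energy flux currently driving the hydrological cycle per square meter:
# evaporating the water plus lifting it to condensation height
evap_flux = RAIN_KG_PER_M2_YR * L_VAP / SECONDS_PER_YEAR            # ~72 W/m^2
lift_flux = RAIN_KG_PER_M2_YR * G * LIFT_HEIGHT / SECONDS_PER_YEAR  # ~0.9 W/m^2
total_flux = evap_flux + lift_flux

# Upper bound on extra cycling a 0.6 W/m^2 imbalance could support
extra_fraction = 0.6 / total_flux
print(f"{total_flux:.1f} W/m^2 drives the cycle; "
      f"max extra cycling {extra_fraction * 100:.1f}%")
```

Under these assumptions the cycle consumes roughly 72 W/m² and 0.6 W/m² could add at most about 0.8% to it, matching the comment's figure.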

    • Error in sentence:
      Step 3: Heat required to raise the temperature of 0 °C water to 100 °C water
      q = mcΔT per Kg 1000 x 4.18 x 4 = 16720 Joules
      4 deg, not 100 deg.
      Haven’t read far enough ahead to see if anyone else caught this. Doesn’t change the results though, as the author intended 4 deg.

      • Thanks Victor, I just caught that too. Had copied the Steps from an example of ice thru water to steam. Have updated my saved copy.

  22. Why so many models? If you go target shooting and the goal is to hit the bulls-eye, you don’t take along your shotgun, yet that is what the climate models’ output looks like: a shotgun blast into the future. It’s a bit like going to a horse race, betting on all the horses, and at the race’s end telling everyone that you won and how clever you were.
    And how many models are there – 50-60+? Why are these people allowed to play this ridiculous game? The models are, every minute of every day, getting more and more ludicrously inaccurate compared to the real world. The case can easily be made that they should be ignored and abandoned, saving taxpayers millions of dollars.
    Why not try writing a computer model that exactly matches the “real” climate as it is? Would that not give you a much better grasp of what is going on in the climate than “wishful thinking”?

  23. However, it is becoming ever more likely that the temperature increase that usually accompanies an el Niño may come through after a lag of four or five months.

    That would be an unusual and mysterious lag.

    Interannual Variations And The Enso Phenomenon: Insights Via Singular Spectra Analysis
    [Dickey & Keppenne, 1997]

    […] ENSO events occur when both the QB and LF components add constructively, with positive LOD and MSOI anomalies indicative of an El Niño (warm event—in which an increase in atmospheric angular momentum results in high LOD values), while a decrease in LOD and MSOI reflects cold (La Niña) events. No discernible lags or leads between the two series are observed. ‘It is the sum of these components, LF plus QB—the full ENSO variability—that is the most coherent, indicating the robust link between the ENSO phenomena and interannual LOD variations.

    ENSO usually brings instant karma.

    • ‘That would be an unusual and mysterious lag.’

      It would indeed, thanks for the clarification.

      • The non-radiative transports in response to El Niño take some time – from 2 to 6 months. Also, the current El Niño continues to strengthen. It has not reached its peak yet. So one should be prepared for a shortening of the Pause, and the propaganda onslaught that will accompany it.

  24. The only relevant data set is between 1950 and 1974.
    All other data in the history of the planet is just 4.5Bn years of cherry picking.

  25. Christopher

    You may remember that I suggested a year or so ago that around this time we might be able to say that this decade (i.e. the last ten years) was cooler than the last decade (the ten years before that). Seeing as one of the graphs above shows cooling of 0.02 degrees for just over ten years till now and we know temperatures were flat for 18.5 years, we must surely be very close to that point.

    To be able to say simply, “This decade is cooler than the last decade” and to say it before Paris would be quite a coup.

    • Mr Scute raises an interesting point. But, of course, since the Pause goes back only 1.85 decades, this decade is a little warmer than its predecessor – by about a fortieth of a degree.

      One should really avoid decades comparisons, which the IPCC only used in its 2013 headline graph to conceal the true length of the Pause.

      • FWIW, I always wondered why nature would adopt our calendar, whether daily/weekly/monthly/yearly/decadely/centuryly or even milleniumly, it seems a bias.

      • Check your message settings and try turning off predictive… pardon the pun. Sorry. I ended up here reading comments after looking for graphic evidence of increasing Hawaiian volcanic activity (thus CO2 in the air) that would explain/parallel Mauna Loa’s decadal CO2 graphs. This graph shows 46 active volcanoes in 1995, increasing to 70 in 2009 – a 52% rise in that time. http://www.preparingforthegreatshift.org/Volcanic%20Activity%201950%20to%202008.jpg I’ve always found it funny that a laboratory built to measure CO2, to track and warn of possible increases in volcanic activity, fails to flag such a rise. If that wasn’t its purpose, why was it built to measure CO2 in a volcanically active area like Hawaii?

      • SIA, while I admire the skepticism and independent thought in your line of inquiry, that path has been thoroughly wrung out. The measurements at Mauna Loa have been shown to agree with measurements taken around the world.

        Worth looking into though is the brand new satellite data showing regional CO2 differences. Interestingly, the areas of relatively higher CO2 do not primarily coincide with industrial centers or volcanism. Nature is the lead emitter.

  26. Many thanks Sir, another great post as always.

    It concerns me that Dr Mears uses “denialists” and “denier”.

    The graphic showing the 18 yrs 6 months (or the latest version at the time) of no warming should be on billboards wherever there is a climate meeting/junket, such as Paris in December. This would be expensive, and they would be vandalised of course, but worth making the point nevertheless. El Niño might create a small problem by then, of course.

    • How about an art piece, the opposite to the ‘Denialists’ tombstone’ that Lord Monckton’s name was inscribed on. The Royal Academy in Piccadilly has some really impressive, giant, 3D sculptures that sit in its courtyard for weeks while thousands throng past to the galleries. So I suggest a 3-dimensional graph 150 feet long and 10 feet high stretching diagonally across the courtyard. You could walk round it, through it, sit your kids on it. All done in shiny metal with a big blue trend line running through the middle. All we need is an unindoctrinated artist who wishes to express his artistic freedom and give inspiration to the masses.

    • Here’s a subtle but very pointed poster that should be put on 1000 billboards across the world:

      Image—A hockey stick with its shaft slanting upwards & to the right and with its blade flat and pointing to the right. It’s transparently overlaid on a graph of the running mean of GASTA, averaged from five sources, which aligns with the shape of the hockey stick.

      Caption—“Who’s in Denial Now?”

      Make that 10,000 billboards.

    • DB… So does using “warmist” completely discredit people too? It should. Anyone who uses a term that is clearly intended to demean or belittle is, in my opinion, saying more about the person doing the saying than the receiver, irrespective of the side they speak for.

      • simon

        Nice try but no coconut. You are asserting a false equivalence.

        Warmist has no associated connotations and is useful shorthand for those who believe in catastrophic man-made global warming.

        Denier is associated with holocaust denial and is used to demean people who accept whatever empirical data indicates; i.e. those who support science.


      • Well Richard, I say if you really want to debate the science rather than just play in the mud, you should be able to do so without resorting to using these terms. And that goes for both sides.

      • simon: “Richard…And by the way, knock off the racist coconut stuff.”

        “Racist”? What the eff is racist about “Nice try but no coconut”?

        You have never been on a fairground and had a go on the coconut shy, have you?

        What a thin-skinned little fellow you are!

      • catweazle666

        Sorry, but I don’t agree simon is “thin skinned”.

        Simon is merely typical of learning-deficient warmists, and I think you should have more sympathy with the mentally disabled.


  27. Every time I see this I am amazed that the climate models were so skillful in predicting El Chichón and Pinatubo. /sarc

  28. still unaffected by a slowly strengthening el Niño, which will eventually cause temporary warming

    Temporary? I thought the strong 1998 El Nino caused a not so temporary step change (upwards) in the global temperature trend.

      • Mr Schoneveld is right that the 1998 el Nino marked the culmination of the Singer Event – the sudden jump in surface temperatures over just four or five years, on either side of which there was little or no warming. However, coral bleaching and other data show that great el Ninos are rare (there have only been two previous ones in the past 350 years). On that batting average, though another great el Nino remains possible, I do not think it particularly likely. If I am right in thus assessing the probability, it is likely that the el Nino warming will be followed by a compensatory cooling, restoring and even lengthening the Pause.

        But I do not know. And nor, I suspect, does anyone else, however expensive his model is to the taxpayer.

  29. I wish somebody could explain to me how only a third of the energy received at the earth’s surface comes from the sun (Fig T9). What is the source of the other two-thirds of the energy we receive?

  30. Re: SIA @ June 3, 2015 at 9:21 pm

    The prevailing winds at the height of the observatory on the big island are usually from the W and descending, capping an inversion present over much of the semitropical eastern north Pacific ocean. The inversion impedes circulation of marine air and VOG (volcanic smog) in the boundary layer. The trade winds there generally flow from the NE. In short, although VOG is visible and noxious at lower elevations at times, the air at the observatory is probably as good as you’ll get – and there are facilities already there.
    You do well, however, to question whether their observations are at times contaminated, and how they account for it.

  31. Just a few thoughts on the oft-cited suggestion that the choice of 1997/1998 is “cherry picking”.
    This is an odd application of such an accusation.
    If, for example, I wished to explain that I had not used my car for some period of time, then I might state that, “my car has been idle on the drive since January”.
    Some may claim that I have just “cherry picked” the start date to maximise the period of time over which I claim to have not used my car.
    What I have actually done, is fairly describe the period of time in which my car has not moved.
    Similarly, when choosing the beginning of the so-called “hiatus”, it is reasonable to look back and ask, “how long have global average temps not moved?”
    Answering this question takes us back to a start date of 1997/1998.
    This is not cherry picking. It is answering an important question.
    But it also must be remembered that global temps have been upwardly trending since 1750 and so we should not be surprised EVEN IF that trend had continued since 1997/1998.
    That the climate decided not to continue warming during the period in which mankind emitted 1/3 of total CO2 emissions is really remarkable.
    I don’t know how BEST is getting a “perfect fit” with CO2.
    It don’t look like a perfect fit, to me.
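The “not cherry picking, but answering a question” point above is easy to make concrete: scan every candidate start month and keep the earliest one whose least-squares trend to the end of the record is zero or below. A minimal Python sketch on synthetic data, not the actual RSS record (the function name and the series are illustrative assumptions):

```python
import numpy as np

def pause_start(anoms):
    # Earliest start index i such that the least-squares linear trend
    # of anoms[i:] is zero or negative; None if every candidate start
    # yields a positive trend. This mirrors the procedure the comment
    # defends: the start date is found by calculation, not by eye.
    n = len(anoms)
    t = np.arange(n)
    for i in range(n - 2):                 # need at least 3 points
        slope = np.polyfit(t[i:], anoms[i:], 1)[0]
        if slope <= 0:
            return i
    return None

# Synthetic illustration (not real data): 120 months of warming,
# then a flat, noisy 222-month 'pause'.
rng = np.random.default_rng(0)
series = np.concatenate([0.002 * np.arange(120),
                         0.238 + rng.normal(0.0, 0.01, 222)])
print(pause_start(series))   # an index at or shortly after month 120
```

On a series that warms throughout, the function returns None; on a flat series it returns the very first month, which is the sense in which the start date is calculated rather than chosen.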

  32. To quote Dr Mears from Sir Christopher’s link in the above post:

    “…..surface temperature datasets, which I consider to be more reliable than satellite datasets ….”

    • One fine day, when Village Idiot gets his nanny to read him the head posting, he will learn that Dr Mears’ view about the relative reliability of satellite and terrestrial datasets is therein stated. Repetition is, therefore, otiose.

      • Stands out a mile that you’re obviously worried about too many Villagers hearing that inconvenient little truth

        [it also stands out that you are too timid to stand behind your own words and put your name to them, as Monckton does, but instead you hide behind a moniker. I guess you are worried that too many villagers might realize who the idiot behind “village idiot” actually is – Anthony]

      • No one of any consequence, Anthony. While Mr M was at Cambridge punting on the River Cam and sipping pink champagne, I was working in the fields in the Fen not an hour’s drive from there.

        Monckton needs to use his name because he needs the oxygen of publicity to feather his nest. I don’t.

    • Most grateful for this link. One day, when I get some time, I shall apply Krige’s technique to the ARGO record, for the oceans are not at all unsuitable for kriging, and see whether that makes much difference. Kriging – if honestly applied – is quite a good technique for filling in the gaps in sparse data. But it may be that the resolution of the dataset is altogether insufficient, and we are merely guessing.

      My own method of estimating changes in the upper 2 km of the ocean is to watch the surface air temperature, for the ocean is two or three orders of magnitude denser than the air, and is intimately connected with it via tropical afternoon convection. Since the air is not warming at present, my best guess (back of the envelope) is that the upper ocean is not warming either.
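For readers curious what “filling in the gaps” by kriging involves, here is a minimal one-point ordinary-kriging sketch in 1-D. The exponential variogram and its parameters are illustrative assumptions, not values fitted to ARGO or any real dataset:

```python
import numpy as np

def ordinary_krige(x, z, x0, sill=1.0, vrange=5.0):
    # One-point ordinary kriging in 1-D with an exponential variogram.
    # sill and vrange are illustrative assumptions, not fitted values.
    gamma = lambda h: sill * (1.0 - np.exp(-np.abs(h) / vrange))
    n = len(x)
    K = np.ones((n + 1, n + 1))            # kriging system matrix
    K[:n, :n] = gamma(np.subtract.outer(x, x))
    K[n, n] = 0.0                          # Lagrange-multiplier corner
    k = np.append(gamma(x - x0), 1.0)      # right-hand side
    w = np.linalg.solve(K, k)              # weights plus multiplier
    return float(w[:n] @ z)                # weighted sum of samples

# Fill a gap at x0 = 2.5 from four sparse samples:
x = np.array([0.0, 1.0, 4.0, 6.0])
z = np.array([10.0, 11.0, 12.0, 12.5])
print(round(ordinary_krige(x, z, 2.5), 2))
```

A useful sanity check on any kriging code is exactness: asked for an estimate at a sampled location, it should return that sample’s value.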

  33. Nit pick: I wonder what units our noble friend is using when he uses coulomb masculine ordinal (Cº). I assume that’s a typo for degree Celsius (°C). No wonder some think his material is charged.

    • A physical convention that I try to follow, simply for clarity, is that if one is talking of an absolute temperature expressed in Celsius one uses C followed by the degree-mark, but if one is talking of temperature changes or anomalies one precedes the C with the degree-mark.

      • In that case, you’re not using the correct SI units. The letter C is used for coulomb, a measure of electrical charge. Degrees Celsius is a digraph that you can’t split or reorder without making it meaningless. In some fonts it’s a single character, ℃. To invent a convention that no-one else uses will cause confusion to your readers, cause editors to reach for the blue pencil, and give fuel to those who wish to criticise you. If you want to highlight a value as a difference or change, perhaps add a delta somewhere, e.g. ΔT = x.xx °C

        Pennies are used both for a count of how much money one may have (an absolute value) and also for comparing two prices (a relative value). The notation does not change. On the other hand, if you go overdrawn the colour of the ink might.

    • Celsius is not an SI unit, so one may do with it as one pleases. I did not invent the convention that repositions the degree-sign to indicate an anomaly: it is widely used in physics. Perhaps those complaining are not physicists. Or, if they are, perhaps they have not come across this convention. But they should not have assumed that I had invented it.

      And if they are incapable of distinguishing between Coulombs and Celsius from the context, they should really go back to kindergarten and start again. Bottom line: don’t whine.

      • The degree Celsius is an ‘SI coherent derived unit’ and as such is part of the SI system, and one is not free to do with it as one wishes. See section 4 of the NIST guide to the SI that I referred to above: the only correct version is °C.

  34. The thing that kills me about the Stevens et. al. toy energy flow diagrams like the one you reproduce above is that a child can see the problem with it — but because people are so damn ignorant about statistical error, they don’t. This figure is quite lovely in that it includes the error bars. But look at the error bars! If one single process in this flow is imprecise at the level of a whole watt, the money numbers, the ones that are supposed to show the clearly resolved energy imbalance of magnitude less than one whole watt (per square meter) are bullshit!

    And what do we see? Uncertainties like ±10 W/m^2 internally, and look at reflected solar! Its uncertainty is 2 W/m^2. Look at outgoing LWIR radiation! Its uncertainty is 3.3 W/m^2. And yet (in a true statistical miracle) the final answer of 0.6 W/m^2 is supposedly certain to 0.4 W/m^2.

    What do they teach in universities these days? How could referees not catch this? The minimum uncertainty, according to the simplified rules I learned once upon a time, ignoring the inside and looking only at the TOA line, is 0.1 + 2 + 3.3 = 5.4 W/m^2. (Not precisely, so call it 5 W/m^2.)

    That is, the correct answer that should have been displayed on the top line is (wait for it):

    0.6 ± 5 Watts per square meter!

    Which is sort of like saying we have no bloody idea what the energy imbalance (if any) of the planet is.

    And the beauty of it is, they can show this to any congressperson or government official you like and nobody will ever notice, they’ll just take your 0.6 and ignore even the uncertainty you claimed (with no possible justification that I can see) and run with it. Heck, even scientists just glance at it and (apparently) say nothing.


    • I had spotted what Professor Brown had spotted – that the sum of the individual uncertainties in the components of the net tropopausal radiative imbalance exceeds the stated uncertainty for that quantity by an order of magnitude.

      However, on further investigation I found that the total incoming and total outgoing radiation are comparatively well constrained, being measured directly by satellites. The individual components, however, depend on more than a little guesswork. We know the totals in and out quite well (hence the small uncertainty there), but we know the components less well (hence the large uncertainties there).

      • And how is visible light reflected from the Earth’s surface not also measured by satellites? I suspect that that would be the claim, but I am enormously skeptical that the uncertainties in SWV and LWIR are so large, but they somehow know the total of the two accurately. It simply isn’t that difficult to measure the visible light brightness of the planet (for example) and the outgoing LWIR is what they are primarily measuring when the measure integrated outgoing radiation.

        I repeat, there is something very sketchy about a claim that we know the outgoing radiation to four significant digits but only know visible and LWIR separately to two.


    • RGB perhaps you should have read the paper before criticizing it? A relevant paragraph is:
      “For the decade considered, the average imbalance is 0.6 = 340.2 − 239.7 − 99.9 Wm−2 when these TOA fluxes are constrained to the best estimate ocean heat content (OHC) observations since 2005 (refs 13,14). This small imbalance is over two orders of magnitude smaller than the individual components that define it and smaller than the error of each individual flux. The combined uncertainty on the net TOA flux determined from CERES is ±4 Wm−2 (95% confidence) due largely to instrument calibration errors (12, 15). Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change (11). Despite this limitation, changes in the CERES net flux have been shown to track the changes in OHC data (16, 17). This suggests that the intrinsic precision of CERES is able to resolve the small imbalances on inter annual timescales (12, 16), thus providing a basis for constraining the balance of the measured radiation fluxes to time-varying changes in OHC (Supplementary Information). The average annual excess of net TOA radiation constrained by OHC is 0.6±0.4 Wm−2 (90% confidence) since 2005 when Argo data (14) became available, before which the OHC data are much more uncertain (14). The uncertainty on this estimated imbalance is based on the combination of both the Argo OHC and CERES net flux data (16).”

      By the way, rather than the crude method you used for combination of errors, I always taught my students the RMS approach; consequently I got:

      √(2^2 + 3.3^2) ≈ 3.9, which rounds up to ±4 and agrees with their estimate.
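The two ways of combining uncertainties argued over in this sub-thread, straight addition (a worst-case bound) versus root-sum-of-squares for independent errors, can be checked in a couple of lines:

```python
import math

# Figures quoted in the thread: +/-2 W/m^2 (reflected solar) and
# +/-3.3 W/m^2 (outgoing LWIR). Linear addition is the worst case;
# quadrature (RMS) is standard for independent errors.
linear = 2.0 + 3.3                      # worst-case bound
rms = math.sqrt(2.0**2 + 3.3**2)        # root-sum-of-squares
print(round(linear, 1), round(rms, 2))  # -> 5.3 3.86
```

The quadrature figure rounds to the ±4 W/m^2 the paper itself quotes for the CERES net flux, which is the point of the reply above.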

  35. Groundhog ENSO

    Checking through the WUWT ENSO page I noticed a remarkable thing – there is an animation of Pacific SSTs labeled as 3-month, but actually it gives the whole year:

    But that’s not the remarkable thing. What is, is that I found it very hard to notice the jump from June 2015 back to June 2014. They are almost the same.

    It looks like we are stuck in a Groundhog year as far as ENSO is concerned. The yearly pattern is repeating almost exactly. And that goes for the accompanying Greek chorus of el Nino expectation – that also peaks around now before fading toward the end of the year.

    If you want to know what will happen for the next month or two – or the whole next year – just watch the video.

  36. “The start-date is not “cherry-picked” so as to coincide with the temperature spike caused by the 1998 el Niño. Instead, it is calculated so as to find the longest period with a zero trend.”
    ~ Monckton ~

    Of course that’s a distinction without a difference. Plotting from a few months on either side of the super-EN time period yields a positive trend. The next major El Nino evaporates the “value” of M’s monthly RSS nonsense and further like monthly posts will also evaporate – he’ll be left to seek another PR device. But if that major EN creates another “Tisdale stair-step”, who knows, Monckton’s RSS greatest hits could reprise on WUWT … from a higher base temp.

    The tactics of a charlatan.

    • How They wriggle as Their predictions are demonstrated to have been exaggerated. John@EF As the head posting states, the influence of the 1998 el Nino outlier on the least-squares linear-regression trend is almost entirely offset by that of the 2010 el Nino outlier. If John@EF thinks that all is well with the IPCC’s predictions, he should contact the IPCC rather than wasting his time here: for the IPCC, accepting advice from expert reviewers such as me, has greatly reduced (indeed, all but halved) its near-term warming predictions.

      And, whether “John@EF” or any other true-believer in the New Religion likes it or not, I do no harm by simply reporting what the temperature data actually show. What “John@EF” is really complaining about is the failure of the data to behave in accordance with the Party Line. To those of us who do science empirically, if the empirical results conflict with the Party Line there is no point in whining about the empirical results. Instead, one must abandon the Party Line. It is wrong. Get over it.

      • “As the head posting states, the influence of the 1998 el Nino outlier on the least-squares linear-regression trend is almost entirely offset by that of the 2010 el Nino outlier.”

        … and the 2008 and 2011 la Niñas, what influence did they offset? Again, the next major eN will leave you searching for another PR device. Meanwhile, I’ll track your stellar and warmly embraced paper on retractionwatch.com

      • John@EF,

        Your ‘retractionwatch’ drive-by snark made no sense to me. You asked a question, then ended with a complete non sequitur.

        I didn’t understand what you meant, so I searched the retractionwatch site. There was no retraction of the Monckton-Soon paper. There was only one hit for “Monckton”, which linked to that swivel-eyed lunatic blog Motherboard. Are you part of their insane rabble of mouth-breathing, unthinking parrots, head-nodding along with the government/media’s incessant “carbon” scare propaganda? You have the same attitude here.

        However that may be, there was no retraction of the paper, as you deceptively implied. There was only lots of ignorant criticism by people who don’t know squat about the subject. Most of the comments employed the obligatory pejoratives “denialist” and “deniers” (like you have before), and the head article was no better.

        Anyone who uses such mindless, meaningless insults as “denialist”, etc., has clearly lost the scientific debate; if the facts and evidence were on your side, you would have no need to call people stupid names like those. Facts and observations would be more than sufficient to win the science debate.

        As a matter of fact, you have decisively lost the science debate: exactly none of the alarming predictions you people have ever made over the past thirty years have come true; from decimated Polar bears, to South Sea islands, and Manhattan, and Florida being inundated, to ‘children won’t know what snow is’, to ocean ‘acidification’, to sea level rise accelerating — to the big prediction that started it all: runaway global warming and climate catastrophe. None of it happened.

        Global warming stopped almost twenty years ago! All your scary predictions were wrong. No exceptions. But you will not man-up and admit it. The ‘carbon’ scare has become your True Blue Green Religion.

        When your climbdown position requires you to label those who simply have a different scientific point of view as “denialists”, then you are either basically an evil person, or you are mentally unbalanced. Maybe both.

      • @dbstealey,
        You really must relax a bit – getting that agitated causes you to jet off on tangents and delete posts that shouldn’t be deleted. I don’t believe I’ve ever used the D word, which renders most of your response to the trash pile. Nor did I imply a paper retraction was underway – that’s another of your fabrications to argue against.

        The reference to the retraction-watch site was simply an expression of the personal level of respect I hold for M’s weakling “IPCC reviewer” & empirical research comments. I’m certainly interested in whether or not the Chinese pay-for-publish journal eventually votes that solid science trumps ready cash. You may say they have … I would disagree.

      • “Although PDO phase seems to be an important influence on spring temperatures in the northwestern United States, eastern temperature regimes in annual, winter, summer, and fall temperatures are more coincident with cool and warm phase AMO regimes. Annual AMO values also correlate significantly with summer temperatures along the Eastern Seaboard and fall temperatures in the U.S. Southwest. Given evidence of the abrupt onset of cold winter temperatures in the eastern United States during 1957/58, possible climate mechanisms associated with the cause and duration of the eastern U.S. warming hole period—identified here as a cool temperature regime occurring between the late 1950s and late 1980s—are discussed.”

      • John@EF,

        I don’t buy your defensiveness, because you have a long history here of being fixated on Lord Monckton. It was clear you tried to imply that retractionwatch had retracted his submission. Not nice, John@EF (or should I say Jack Greer?)

        Want to cut to the chase? Probably not, but here’s the bottom line:

        Global warming stopped many years ago.

        Every scary prediction your side made was a complete failure. But you keep digging in your heels against all the contrary evidence, and insist that global warming is still going on as always.

        I don’t say this as someone who rejects the warming effect of CO2. It’s crystal clear to me that carbon dioxide causes global warming. I’ve never said anything different. I suppose that makes me a lukewarmer.

        But it is also clear that almost all the warming effect happened in the first 20 – 30 or so ppm. At current concentrations (≈400 ppm) there is no measurable warming effect. The CO2 concentration could go up by 20%, 30%, or 40%, and any warming would still be too small to measure. That is made clear in charts like this (I have several others that show the same thing, if you don’t like this one):

        Look at that chart, and tell us how much warming would result from a 25% rise in CO2 from current levels. All the info you need is there.
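The diminishing-returns point rests on the logarithmic form of CO2 forcing. A standard simplified expression (Myhre et al., 1998) is ΔF = 5.35 ln(C/C0) W/m^2; the no-feedback (Planck) response of roughly 0.3 K per W/m^2 used below is an illustrative assumption, and this sketch is not the commenter’s chart:

```python
import math

# Simplified CO2 forcing: dF = 5.35 * ln(C_new / C_old) W/m^2.
# planck ~ 0.3 K per W/m^2 is the no-feedback response (assumption).
def co2_warming(c_new, c_old, planck=0.3):
    return planck * 5.35 * math.log(c_new / c_old)

# A 25% rise from ~400 ppm to 500 ppm:
print(round(co2_warming(500, 400), 2))  # -> 0.36 (K, no feedbacks)
```

On this standard relation the no-feedback warming from a 25% rise is about a third of a degree; whether that is “too small to measure” and what the feedbacks add is exactly what the two sides of this thread dispute.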

        THAT is why the climate alarmist crowd guessed wrong when they incessantly predicted runaway global warming (now changed Orwell-style to “climate change”). That is what Mr. Monckton is saying, too, from all I’ve read. But you are fixated on him for some unknown reason. Maybe because he’s been right and you’ve been wrong?

        Speaking for myself, the difference between skeptics and your side of the fence is that if observations, facts, and evidence showed us that skeptics of “dangerous man-made global warming” were wrong, we would look for the reasons, study the matter, and if human CO2 emissions were causing problems, we would admit it and try to help work out the best way to deal with the situation.

        But as it has turned out, there is no identifiable damage or harm from the rise in CO2. The only difference it has made has been entirely beneficial. More CO2 would be even better. And a 2° rise in global T would be better on net balance, too. Not that it’s going to happen. See the chart again.

        So skeptics will admit it if we’re shown to be wrong. But alarmists won’t. What’s up with that?

      • I certainly don’t know when, exactly, but do you honestly believe we’ve seen the last of very strong el Ninos?

      • The drop in temperature in the north of the Atlantic since 2010 is already about 0.2 degree C.

    • “I don’t buy your defensiveness, because you have a long hiustory here of being fixated on Lord Monckton. It was clear you tried to imply that retractionwatch had retracted his submission. Not nice, John@EF (or should I say Jack Greer?)”

      I don’t care what you buy, or what you should say, but your fabricated retraction bit is simply nonsense. Sorry.

  37. Hmmm. This looks like lots of hair splitting over fractions of a degree. Can anyone tell me when it’s going to finally start warming up at my house? It seems unusually cool for June, and with the current state of the sun, will the El Niño really have much of an effect on North American temperatures?

    • “This looks like lots of hair splitting over fractions of a degree. “

      I call it “False Precision Syndrome”, and without it, “climate science” wouldn’t exist.

      I’ve never come across a “climate scientist” yet who would recognise an error bar if you beat them on the kneecaps with it, and the more dodgy the ground upon which their fantasies are based, the more arrogant, patronising, supercilious and disingenuous their bluster.

  38. As I see in:


    If the end of November 1996 is chosen as a breakpoint between two linear trend segments, there is a big step from one linear trend to the other, and the sloped one has a lower slope than that of the linear trend of the whole RSS record. This suggests that the step and its cause, the 1997-1998 El Nino, are part of the warming period rather than the non-warming period afterwards.

    What I see as the earliest arguable time for a breakpoint is the middle of 1997, when the step coincides with the leading edge of the El Nino spike. My eyeball estimate is that this minimizes the standard deviation from the two segments, even though that leaves the cause of the step being considered part of the pause rather than part of the warming.

    Another arguable start time for the beginning of the pause is the oldest beginning of a continuing non-warming trend after the great El Nino. That is the end of February 2000.
    Yet another one is when the temperature changes from mostly being cooler than the flat trend line, to mostly around the flat trend line. That is around March 2001.
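The “eyeball estimate that minimizes the standard deviation from the two segments” can be made explicit: fit each candidate pair of segments by least squares and keep the breakpoint with the smallest pooled residual standard deviation. A sketch on synthetic data, not the RSS record:

```python
import numpy as np

def best_breakpoint(y, min_seg=12):
    # Scan candidate breakpoints and return the one minimising the
    # pooled residual standard deviation of two independently fitted
    # least-squares line segments.
    t = np.arange(len(y))
    best, best_sd = None, np.inf
    for b in range(min_seg, len(y) - min_seg):
        resid = []
        for lo, hi in ((0, b), (b, len(y))):
            c = np.polyfit(t[lo:hi], y[lo:hi], 1)
            resid.append(y[lo:hi] - np.polyval(c, t[lo:hi]))
        sd = np.concatenate(resid).std()
        if sd < best_sd:
            best, best_sd = b, sd
    return best

# Synthetic series with a step at month 100 (not real data):
rng = np.random.default_rng(1)
y = np.where(np.arange(240) < 100, 0.0, 0.3) + rng.normal(0, 0.05, 240)
print(best_breakpoint(y))  # a breakpoint near month 100
```

The min_seg guard simply keeps each segment long enough for a meaningful fit; the choice of 12 months is an assumption, not anything from the comment.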

  39. As for the top-of-atmosphere imbalance of 0.6 W/m^2: assuming no feedbacks other than the Planck one, which means the reciprocal of climate sensitivity is 3.3 W/m^2 per K, the world is nearly 0.2 degree C cooler than equilibrium. This is roughly how much the world will warm in the future if the increase of greenhouse gases stops now, assuming no positive or negative feedbacks or natural variations.
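The arithmetic behind the “nearly 0.2 degree” figure is a one-liner: divide the quoted imbalance by the Planck feedback parameter.

```python
# Committed warming = imbalance / Planck feedback parameter,
# feedbacks and natural variation set aside (as the comment assumes).
imbalance = 0.6   # W/m^2, the Stephens et al. figure as quoted
planck = 3.3      # W/m^2 per kelvin
print(round(imbalance / planck, 2))  # -> 0.18, i.e. "nearly 0.2 C"
```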

    • Even if Mr Klipstein’s estimate of warming in the pipeline is correct (and the error bars are so large that there arguably isn’t an “energy imbalance” anyway), his estimate of just 0.2 K in the pipeline is one-third of the IPCC’s estimate in its Holy Books. Piece by piece, the edifice of extreme prediction crumbles away.

  40. It’s always worth checking on the Peruvian anchovies to get their perspective on the current status of ENSO and possible hints of the future.

    Here is the latest report on the Peruvian anchovy fishery from Undercurrent News:


    So buyers are placing orders “hand-to-mouth” due to falling prices. Why are prices falling?
    It’s due to the pervasiveness of warmist hype, which is harming everything that it touches.
    There is this universal pressure to talk up the prospects of El Nino since this is politically correct mood music of global warming alarmism. So even outside of climate sciences, professions such as fishing feel the baleful effect of this propaganda. Here’s what has happened. The talk of el Nino has been so hysterical that anchovy buyers (believing – reasonably but falsely – in the skill of ENSO scientists) expected the el Nino to actually happen and to reduce anchovy catches – as a real el Nino always does. The industry was fed bad information due to endless politically motivated screeching of “el Nino el Nino”. So the industry did the normal thing and increased the anchovy price in response to this anticipation of falling anchovy catches due to el Nino.

    But anchovy catches have not fallen because there is no el Nino. The current el Nino is a political fiction, in the eastern Pacific it’s not happening. Climate scientists don’t know this since they long ago traded curiosity and honesty about the real world for political expediency and secure careers. Thus anchovy prices are now falling fast, putting buyers in a difficult position. So they are buying “hand-to-mouth”, buying only the minimum needed week by week as prices fall. Even the Undercurrent News article is infected with this el Nino hype and instinctively talks up el Nino, even though the sustained strong anchovy catches show that a real el nino has yet to begin. The word “projection” of an el Nino even appears in the article, indicating further the pervasiveness of the corrosive warmist-speak leaking out of climate science to distort and damage wider spheres of human activity.

    But the anchovy, in contrast to climate scientists, has both the intelligence and the honest insight to know what is really happening with ENSO. And what is really happening is climatology: just a normal annual cycling in the Pacific with some reasonably healthy Kelvin waves washing up on the coast of Peru but, as yet, still no El Nino.

  41. Tiny average temperature variations, with margins of error not shown, over short periods of time, tell us almost nothing useful.

    They do not tell us the long-term trend.

    They do not tell us what average temperature is “normal”, if such a condition existed.

    They do not tell us what parts of Earth were most affected by the climate change, and least affected.

    They do not tell us if anyone was harmed by the climate change (not likely).

    They do not tell us if any animals and/or plants were harmed by the climate change.

    In my opinion, the charts display meaningless random variations of the average temperature, based on inaccurate measurements, covering a mere 20 to 40 years of Earth’s 4.5 billion year history!

    These short-term average temperature anomaly charts are no more useful for predicting the future climate than tea leaves on the bottom of one’s cup of tea (worthless).

    The charts are useful in demonstrating that Earth’s climate varies, but we already knew that … and showing that computer game climate forecasts are nearly worthless, but we already knew that too.

    Climate talking points for non-scientists: