Global Temperature Update – No global warming for 17 years 11 months

… or 19 years, according to a key statistical paper.

By Christopher Monckton of Brenchley |

The Great Pause has now persisted for 17 years 11 months. Indeed, to three decimal places on a per-decade basis, there has been no global warming for 18 full years. Professor Ross McKitrick, however, has upped the ante with a new statistical paper to say there has been no global warming for 19 years.

Whichever value one adopts, it is becoming harder and harder to maintain that we face a “climate crisis” caused by our past and present sins of emission.

Taking the least-squares linear-regression trend on Remote Sensing Systems’ satellite-based monthly global mean lower-troposphere temperature dataset, there has been no global warming – none at all – for at least 215 months.

This is the longest continuous period without any warming in the global instrumental temperature record since the satellites first watched in 1979. It has endured for half the satellite temperature record. Yet the Great Pause coincides with a continuing, rapid increase in atmospheric CO2 concentration.


Figure 1. RSS monthly global mean lower-troposphere temperature anomalies (dark blue) and trend (thick bright blue line), October 1996 to August 2014, showing no trend for 17 years 11 months.

The hiatus period of 17 years 11 months, or 215 months, is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend.

Yet the length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed.

The First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:

“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”

That “substantial confidence” was substantial over-confidence. A quarter-century after 1990, the outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.34 Cº, equivalent to just 1.4 Cº/century, or exactly half of the central estimate in IPCC (1990) and well below even the least estimate (Fig. 2).
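The per-century conversions used throughout can be reproduced in a few lines. This is a sketch: the 0.34 Cº figure and the January 1990 to August 2014 window are taken from the text above.

```python
# Convert a warming amount over a period into the "per century
# equivalent" rate quoted throughout the article.

def per_century(delta_c, years):
    """Warming of delta_c (Celsius) over `years` years, as C/century."""
    return delta_c / years * 100.0

# January 1990 to August 2014 is 24 years 7 months.
years = 24 + 7 / 12.0
rate = per_century(0.34, years)  # 0.34 C observed over the period
print(round(rate, 2))            # ~1.38, i.e. "below 1.4 C/century"
```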


Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), January 1990 to August 2014 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.4 K/century equivalent, taken as the mean of the RSS and UAH satellite monthly mean lower-troposphere temperature anomalies.

The Great Pause is a growing embarrassment to those who had told us with “substantial confidence” that the science was settled and the debate over. Nature had other ideas. Though more than two dozen more or less implausible excuses for the Pause are appearing in nervously-reviewed journals, the possibility that the Pause is occurring because the computer models are simply wrong about the sensitivity of temperature to manmade greenhouse gases can no longer be dismissed.

Remarkably, even the IPCC’s latest and much reduced near-term global-warming projections are also excessive (Fig. 3).


Figure 3. Predicted temperature change, January 2005 to August 2014, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the observed anomalies (dark blue) and zero real-world trend (bright blue), taken as the average of the RSS and UAH satellite lower-troposphere temperature anomalies.

In 1990, the IPCC’s central estimate of near-term warming was higher by two-thirds than it is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. 3 shows, even that is proving to be a substantial exaggeration.

On the RSS satellite data, there has been no global warming statistically distinguishable from zero for more than 26 years. None of the models predicted that, in effect, there would be no global warming for a quarter of a century.

The Great Pause may well come to an end by this winter. An el Niño event is underway and would normally peak during the northern-hemisphere winter. There is too little information to say how much temporary warming it will cause, but a new wave of warm water has emerged in recent days, so one should not yet write off this el Niño as a non-event. The temperature spikes caused by the el Niños of 1998, 2007, and 2010 are clearly visible in Figs. 1-3.

El Niños occur about every three or four years, though no one is entirely sure what triggers them. They cause a temporary spike in temperature, often followed by a sharp drop during the la Niña phase, as can be seen in 1999, 2008, and 2011-2012, where there was a “double-dip” la Niña that is one of the excuses for the Pause.

The ratio of el Niños to la Niñas tends to fall during the 30-year negative or cooling phases of the Pacific Decadal Oscillation, the latest of which began in late 2001. So, though the Pause may pause or even shorten for a few months at the turn of the year, it may well resume late in 2015. Either way, it is ever clearer that global warming has not been happening at anything like the rate predicted by the climate models, and is not at all likely to occur even at the much-reduced rate now predicted. There could be as little as 1 Cº global warming this century, not the 3-4 Cº predicted by the IPCC.

Key facts about global temperature

  • The RSS satellite dataset shows no global warming at all for 215 months from October 1996 to August 2014. That is more than half the 428-month satellite record.
  • The fastest measured centennial warming rate was in Central England from 1663-1762, at 0.9 Cº/century – before the industrial revolution. It was not our fault.
  • The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us.
  • The fastest measured warming trend lasting ten years or more occurred over the 40 years from 1694-1733 in Central England. It was equivalent to 4.3 Cº per century.
  • Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
  • The fastest warming rate lasting ten years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.
  • In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.
  • The global warming trend since 1990, when the IPCC wrote its first report, is equivalent to below 1.4 Cº per century – half of what the IPCC had then predicted.
  • Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100.
  • The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than ten years that has been measured since 1950.
  • The IPCC’s 4.8 Cº-by-2100 prediction is almost four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.
  • From 1 April 2001 to 1 July 2014, the warming trend on the mean of the 5 global-temperature datasets is nil. No warming for 13 years 4 months.
  • Recent extreme weather cannot be blamed on global warming, because there has not been any global warming. It is as simple as that.

Technical note

Our latest topical graph shows the RSS dataset for the 215 months October 1996 to August 2014 – just over half the 428-month satellite record.

Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates appreciably below those that are published. The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers. These not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring, via spaceward mirrors, the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that NASA’s anisotropy probe determined the age of the Universe: 13.82 billion years.

The graph is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them from the text file, takes their mean, and plots them automatically, using an advanced routine that adjusts the aspect ratio of the data window on both axes so as to show the data at maximum scale, for clarity.

The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line via two well-established and functionally identical equations that are compared with one another to ensure no discrepancy between them. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression.
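The cross-check described here – computing the trend by two functionally identical routes and comparing them – can be sketched as follows. This is illustrative code, not the article's actual algorithm; a flat synthetic series stands in for the RSS data.

```python
# The slope of a least-squares trend line computed two functionally
# identical ways and compared, as the technical note describes.
import numpy as np

def slope_cov(x, y):
    # Textbook formula: slope = cov(x, y) / var(x)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()

def slope_polyfit(x, y):
    # Same slope from the least-squares polynomial fit
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(0)
x = np.arange(215) / 12.0             # 215 months, in years
y = 0.2 + rng.normal(0.0, 0.1, 215)   # flat anomaly series plus noise

# The two routes agree to floating-point precision.
assert abs(slope_cov(x, y) - slope_polyfit(x, y)) < 1e-10
```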

Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.

Other statistical methods might be used. A paper by Professor Ross McKitrick of the University of Guelph, Canada, published at the end of August 2014, estimated that at that date there had been 19 years without any global warming.

150 thoughts on “Global Temperature Update – No global warming for 17 years 11 months”

    • In the future children just won’t know what global warming looks like.

      Like Tim Yeo, head of the UK parliamentary committee on technology and science, they will think “warming” means “it is still warmer than it used to be”.

  1. If one attempts to predict the likelihood of an El Niño or La Niña occurring within 4 months of the anomaly passing +0.5 or -0.5 respectively, we have had only 1 El Niño and 3 La Niñas in roughly the last 20 such instances – a ratio of 20%, not the 80% the BOM is predicting.
    Figures a bit tough.

  2. Also to be considered is that to three decimal places on a per-decade basis, there has been no global cooling for 18 full years.

    • Oh my! You just triggered a lost memory! My band opened for Screaming Lord Sutch over 40 years ago in Milwaukee! Thanks for the tweak!

    • I saw Screaming Lord Sutch in Weymouth in 1962. Cans of blazing gasoline on the stage during “Great Balls of Fire”… The loudest PA system in the world…. Amazing.

  3. When doing real science, the “r^2 = 0.000” of Figures 1 and 3 means the trend lines are complete crud and no responsible researcher would dare claim those lines have any meaning except perhaps to show a linear fit is complete crud, while the “r^2 = 0.245” of Figure 2 still means the exact same thing. When you get r^2 up into at least the low 0.9x range, then there may be something worth noting and worthy of further study.

    So why should we act like these lines have any significance? Because it’s Climate Science (TM), not real science?

    • Wow. You have no idea what’s going on here. Please educate yourself before making such ignorant comments. r^2=0 is exactly the point! The linear regression (“trend line”) has ZERO explanatory power because THERE IS NO TREND IN GLOBAL TEMPERATURES over the last 18 years.

      • Wow. You have no idea what’s going on here. Please educate yourself before making such ignorant comments. The r^2 test shows how well the regression line fits the data. The post specifically addresses there has been no positive trend to warming for nearly 18 years. An r^2 of zero shows the linear regression is a terrible fit to the data and no conclusions should be drawn from the linear fit.

        And why would you say ‘there is no trend in global temperatures’ when a linear fit is referenced? The trend would be positive, neutral, or negative; a linear fit will always show a trend with one of those three qualities – it will not show that there is “no” trend.

      • kadaka:

        That’s right, no trend, but there is supposed to be a trend due to increased CO2 according to the consensus theory. So, which is it in your mind. Consensus theory is wrong, or natural variability is much more than previously thought?

      • If the slope of the line is zero, the equation the model should produce would be something like y = B0 + 0(x), and if the data were linear, the r-squared would be really high.
        The fact that it isn’t very high tells you that while there is no slope to the regression line (temps are flat), the data aren’t really linear. That’s all.
        Really simple.

      • Exactly David. r^2 = 0 implies there is 0 correlation between y and x. In other words, 0 correlation between global temperature anomaly (y) and time (x). Man-made CO2 has increased steadily over the past 20 years. If it was a significant driver of global temperature, then the correlation between global temperature and time, would not be 0 over the past 20 years.
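The point at issue in this exchange – that a trendless series yields both a near-zero slope and a near-zero r², because time explains none of the variance – is easy to demonstrate. The flat-plus-noise series below is synthetic and illustrative, not the actual RSS record.

```python
# A series with no trend but plenty of noise: the least-squares slope
# is ~0 and so is r^2.
import numpy as np

rng = np.random.default_rng(42)
x = np.arange(215, dtype=float)           # month index
y = 0.2 + rng.normal(0.0, 0.1, x.size)    # flat anomaly plus noise

slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
ss_res = ((y - y_hat) ** 2).sum()         # residual sum of squares
ss_tot = ((y - y.mean()) ** 2).sum()      # total sum of squares
r2 = 1.0 - ss_res / ss_tot

print(round(slope, 4), round(r2, 3))      # both effectively zero
```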

  4. Linear regression is valid. Natural gas consumption follows a perfectly linear trend against falling temperatures (heating), curving only at high temperatures, where it levels off at a residual value (all other purposes).

  5. With this August anomaly, the average is 0.258 over 8 months. This would rank in sixth place if it stayed this way. To set a record in 2014, the average anomaly over the next 4 months needs to be 1.134. The highest ever anomaly for RSS was in April of 1998 when it was 0.857.

    For UAH version 5.6, the anomaly would have to jump from 0.199 to 0.768 and stay there for the next four months to break a record. The highest ever anomaly on version 5.6 was set in April of 1998 when it reached 0.663. Version 5.6 would come in fourth if the anomaly average stayed where it is after 8 months.

    There is no way that any satellite data will come in first or even second for 2014. So the 1998 records are safe this year.
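The 1.134 figure can be reproduced from the numbers given, assuming the record annual mean to beat is about 0.55 Cº (the 1998 RSS annual mean). That record value is my assumption, since the comment states only the result.

```python
# Reproduce the comment's arithmetic: with the first 8 months of the
# year averaging 0.258, what must the final 4 months average for the
# annual mean to beat the record?

def needed_final_mean(mean_so_far, months_so_far, record_annual_mean):
    months_left = 12 - months_so_far
    return (record_annual_mean * 12 - mean_so_far * months_so_far) / months_left

print(round(needed_final_mean(0.258, 8, 0.55), 3))  # -> 1.134
```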

  6. I agree with others who have stated we need to stop calling this the “Pause.” It suggests a certainty that temps are headed up once the Pause is “done.” So far, internal natural climate dynamics between the ocean and atmosphere are the best explanation for the Pause, and maybe for most of the 80s-90s warmup. So it may also just get colder for a decade or so. Then the Pause would really just be The Plateau.

    • Was just about to post something similar.

      It is a halt in warming at this point and can only be called a “pause” if warming resumes.

      If cooling begins, then it will be called either the end of the LIA recovery or the beginning of the decline to LIA2.

      At least, that’s my take.

      • As you may have seen, I have made great efforts on this forum to stop it being referred to as a ‘pause’, since to do so implies that you know future events. However, Monckton is another one who still wants it named as such. Odd, given his anti-AGW belief, that he keeps referring to it in the sense that warming will resume. There comes a point when you say that warming has stopped. Personally, I think it has passed. Any new warming could even be seen as a new ‘block’ of warming! But with the AMO about to fall, temps are only heading downward for 25-30 years.

  7. A Modeler was arguing with an Observer about the existence of CAGW and failing to get his point across. They had been arguing for two hours and finally the Modeler in frustration sat down.

    “Listen,” said the Modeler, “You are like a man in a dark room, with no lights and windows, wearing a blindfold looking for a black cat that isn’t there. What do you say to that?”

    The Observer thought for a moment.

    “Yes, you are probably right,” he said, “but you are also like a man in a dark room, with no lights and windows, wearing a blindfold looking for a black cat that isn’t there. The only difference is, you have found the cat.”

    With all due respect to Dave Allen circa 1970

  8. I see the abbreviation FAR on one of your charts. It might be a good idea to adopt a uniform enumeration of these reports, i.e., AR1, AR2, etc., rather than FAR, SAR, …, AR4, AR5. Any adult is able to figure it out either way, but let’s make it a bit less cryptic – for the children!

    • Your idea makes sense, which is never a good idea when dealing with ClimateScience!. Unfortunately, we now have decades of references to FAR and SAR all through the literature. So now it seems we are stuck with it. It was only when the Fourth Report was being prepared that the problem was noticed. The Fourth AND Fifth reports would get called FAR, same as the first report. Worse, the sixth and seventh reports would get SAR, same as the second report. As I recall, there was a bit of chaos while it was all sorted out. I have always been amused that an organization which makes a living by predicting the future, never saw this one coming.

    • Seconded! Thirded and Fourthed, as well. In fact, let’s call your suggestion Sensibly Rename Assessment Reports (SRAR). Then your original can be SRAR1, and my agreement can be SRAR2. SRAR3 anyone? We definitely don’t want FSRAR, SSRAR, TSRAR, etc…


      (Seriously, I agree and have been trying to implement this quietly on my own, but it would be good to get the meme out there before S(ixth)AR.)


  9. Of course if one uses data from, say, UAH, GISS, Hadcrut4 or Cowtan and Way, then, on applying linear least squares regression, one observes that there is a warming trend, or more precisely an atmospheric warming trend. Cynics might accuse the Noble Viscount of Cherry Picking in his exclusive use of RSS data, but now I’m sounding like Bishop Hill.

    As for GLOBAL warming, the evidence from ARGO floats is very strongly in favour of a pretty relentless warming trend during the period often described as the “pause”. Willis Eschenbach (WUWT, June 2013) had to go to the rather extreme lengths of taking the 2nd derivative of this trend (which gives not the rate of warming, but the rate of increase of the rate of warming) in an attempt to demonstrate its “insignificance”.

    • Bill H.,

      The ARGO submersible buoys show no warming, which contradicts the models.

      Global warming has stopped. Even the IPCC admits that, when they use the weasel word “pause”. They are not the only ones. Just about every organization involved in global temperature recording now uses the same two Orwellian words: “Pause”, and/or “Hiatus”.

      Both words mean the same thing: global warming has stopped. Whether it has stopped for ten years, or fifteen years, or twenty years does not matter. What matters is the fact that every climate alarmist and alarmist organization was flat wrong, when they endlessly predicted that global warming would accelerate. Instead, it stopped.

      When skeptics are shown to be wrong if new facts appear, we admit it and re-assess the situation. That is entirely different from the climate alarmist crowd, which refuses to admit that global warming has stopped.

      That looks like what you are doing. The rest of us can see that global warming has stopped. Be a stand-up guy, and admit it. No one will hold it against you. In fact, it will generate admiration — whereas claiming that global warming is still chugging along as usual brings ridicule.

    • The IPCC scenarios Sir Christopher shows are surface temperatures; he then proceeds to compare them with satellite data (which require ‘adjusting’ before release). Comparing apples with pears is a necessary part of this particular illusion. Making a splash at the AGW-skeptic trough draws attention to one’s self.

      • Village Idiot

        The Third Viscount Monckton of Brenchley is a Peer of the Realm and not merely a knight, so you insult him by addressing him as “Sir Christopher”. Lord Monckton or Viscount Monckton would be proper.

        In his above article he compares different data sets and demonstrates that each of the data sets indicates global warming has stopped.

        I agree with you that there is an “illusion”, and that “illusion” is that global average surface temperature anomaly (GASTA) is a real metric: please read Appendix B of this.

        The ridiculous ‘surface temperature’ data is “adjusted” almost every month with astonishing results. Each of these data sets changes from ‘apples’ to become ‘pears’ most months; e.g. see here.

        Your ‘belly-flop’ has made a “splash” which has drawn attention to yourself, but only a Village Idiot could fail to be embarrassed by that.

        I don’t know of an “AGW skeptic trough”: is it related to the mythical “oil money”?


      • HadCrut4 shows a warming of 1.38 ±0.92 °C/century (2σ) over the same period (1990-2014) – essentially the same as RSS+UAH’s 1.37 °C/century.

      • If I may add Village Idiot, according to HadCrut4 there has been ZERO WARMING thus far this century; 2001-2014: -0.09 ±1.75 °C/century

      • All temperature data, whether satellite or terrestrial, are adjusted for various factors before the final monthly anomalies are determined. The surface temperatures measured by terrestrial weather stations and the lower-troposphere temperature measured by satellites track one another very closely for obvious reasons. In both instances, temperatures are being measured – i.e. apples are being compared with apples. And the mean of the satellite datasets over recent decades is very, very close to the mean of the terrestrial datasets.

  10. wasp nests here at my place (central maine usa) are all underground, first time in yrs I have not had to kill a nest in outbuildings.
    others (I am told not personally verified) nearby have seen some 20 feet up in trees.
    going to be long winter.
    I wish there were warming…

    • I have noticed that there have been no wasps around our place (So. Calif.) for the past 6-9 months. These buggers usually like to nest up under the eaves where it is warm and dry. It doesn’t seem much cooler, but that is subjective anyway.
      BTW, WD-40 kills them almost instantly; I always keep a spray can handy.

  11. Excerpted from IPCC AR5 TS.6 Key Uncertainties:
    “Paleoclimate reconstructions and Earth System Models indicate that there is a positive feedback between climate and the carbon cycle, but confidence remains low in the strength of this feedback, particularly for the land. {6.4}”

    TS.6 is a page and a half at the end of the technical section – a summary of what the scientists don’t know or have doubts about: the uncertainties. The authors of this section apparently did not compare notes with the authors of the summary; the tones of confidence and certainty could not be more contradictory. Other uncertainties include clouds, ice sheets, sea levels, and more. Recommended reading. Of particular interest is this comment’s opening excerpt. By “…remains low…” are they suggesting that IPCC AR4 had low confidence in the magnitude and that low confidence continues with AR5? Let’s take a look at this CO2 feedback loop.

    As I understand it, the CO2 feedback loop works like this: CO2 absorbs outgoing infrared radiation at specific wavelengths, exciting the molecule, which then re-emits radiation at similar wavelengths in all directions. (This is absorption and re-emission of infrared by molecular vibration – not, strictly, the photoelectric effect, which concerns electrons ejected from a surface and whose explanation garnered Einstein his Nobel prize.) The re-emitted radiation excites water vapor molecules, which heat up just as in your microwave. The heated water molecules heat the air, which heats the oceans, which release CO2, since CO2 is less soluble in warm liquid than in cold – compare the crisp spritz of opening a cold beer with the geyser from opening one that has been in the trunk all day. This is known as a positive feedback loop. It feeds on itself, like feedback between a microphone and a PA system. If the magnitude is large enough it rapidly escalates, like a chain reaction. The rapid increase in global warming predicted by assorted GCMs is due to the magnitude selected for this feedback loop. So how much heat do the air in this loop and the allegedly rising global temperatures transfer into the ocean?
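The "feeds on itself" behaviour described above is the textbook feedback-gain picture: each pass around the loop returns a fraction f of the previous response, so the total converges to direct/(1 - f) when f < 1 and runs away when f ≥ 1. A sketch with illustrative numbers, not measured climate values:

```python
# Textbook feedback amplification: an initial forcing produces a
# direct response, a fraction f of which is fed back, and so on.
# For f < 1 the geometric series converges to direct / (1 - f);
# for f >= 1 it runs away - the "chain reaction" described above.

def feedback_total(direct, f, rounds=1000):
    total, term = 0.0, direct
    for _ in range(rounds):
        total += term
        term *= f          # each pass feeds back a fraction f
    return total

print(round(feedback_total(1.0, 0.5), 6))  # -> 2.0, i.e. 1 / (1 - 0.5)
print(round(feedback_total(1.0, 0.8), 6))  # -> 5.0, i.e. 1 / (1 - 0.8)
```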

    Here’s the science section:
    First let’s define the properties. The heat capacity of water is 1 Btu/lb-°F. The heat capacity of air is 0.24 Btu/lb-°F. The density of water is 62.4 lb/cu ft. The density of air is 0.0763 lb/cu ft. The latent heat of water’s evaporation or condensation is about 950 Btu/lb.

    Sensible heat transport – it takes 4.2 lb of air to heat 1 lb of water:

              Heat capacity      Hot (°F)   Cold (°F)   Btu
      Air     0.24 Btu/lb-°F     80         50          7.2
      Water   1.0 Btu/lb-°F      80         50          30     (30 / 7.2 = 4.2)

    Latent heat transport, 1 lb of air:

              Dry bulb   Rel. humidity   Water (grains)   Water (lb)   Heat content (Btu)
      Air     90 °F      0%              0.0              0            21.6
      Air     90 °F      100%            218.4            0.0312       56.0
      (saturated air: 1,101 Btu/lb)

    Latent heat of evaporation, 1 lb of water: 950 Btu/lb
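The table's arithmetic can be checked directly from the listed properties. A sketch using only the figures given in the comment:

```python
# Check the table's arithmetic: how many pounds of 80 F air, cooling
# to 50 F, carry the heat needed to warm one pound of water from
# 50 F to 80 F? Properties as listed in the comment.

CP_AIR = 0.24     # Btu/lb-F, heat capacity of air
CP_WATER = 1.0    # Btu/lb-F, heat capacity of water
LATENT = 950.0    # Btu/lb, latent heat of evaporation of water

dT = 80 - 50
btu_per_lb_air = CP_AIR * dT       # 7.2 Btu given up per lb of air
btu_per_lb_water = CP_WATER * dT   # 30 Btu needed per lb of water
ratio = btu_per_lb_water / btu_per_lb_air
print(round(ratio, 1))             # -> 4.2 lb of air per lb of water

# Latent vs sensible: evaporating one pound of water moves 950 Btu,
# against 7.2 Btu sensible per pound of air over the same 30 F span.
print(round(LATENT / btu_per_lb_air))  # -> 132
```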

    Premise 1: Water’s latent heat of evaporation moves a lot more energy – by a factor as large as 100 – from the ocean to the atmosphere than the sensible heat of the temperature difference moves energy from the air to the ocean.

    Premise 2: Water evaporates into the air not because the air is warm, but because the air is dry.
    Here are a few thought exercises to grasp the concepts.

    A therapeutic swimming pool in Phoenix is heated to 80 °F. The warm water soothes arthritic joints. The pool is covered with a canopy so there is no solar gain. The canopy sides are open to the ambient 105 °F air. The heater fails. What happens to the pool’s water temperature? Thermodynamics says that heat will flow from the hot source to the cold sink, from the 105 °F air to the 80 °F water. This is sensible heat, transferred by contact, convection, and conduction. So why is it necessary to heat the pool at all? Because air is a terrible heat-transfer medium. It is stagnant air trapped in the walls of your house that keeps you warm or cool. But there is also evaporation from the pool’s surface: just like your evaporative cooler, evaporating water cools itself. In fact, the pool’s water temperature at the water/air interface will approach the ambient wet-bulb temperature.

    Fill a plastic gallon milk jug with water and install the cap. Place it in 105 °F shade together with a shallow tub holding a gallon of water about 1” to 2” deep. After several hours, open the jug and pour a little water onto your cupped hands. Place a hand in the tub of water. What did you observe? The water in the closed jug is quite warm. The water in the tub is cool. What’s the difference? The open tub allowed the water to evaporate, transferring energy into the air and keeping the water cool. Repeat the experiment, but this time pour the contents of the warmed jug into another shallow pan. How long does it take for the warm water to cool to the same temperature as the tub? There’s an idea for your next school science project.

    The water/steam/Rankine cycle has been used for over a hundred years in, among many applications, the production of electricity. The steam that exhausts from the turbine must be condensed back into water so it can be pumped back through the boiler. This condensation is accomplished by pumping cold water through a shell and tube heat exchanger, aka the steam surface condenser. Thousand horsepower pumps move hundreds of thousands of gallons per minute through the tubes where the water absorbs the latent heat of condensation, by coincidence, about 950 Btu/lb. The water is frequently pumped to a wet cooling tower where the water sprays and cascades through an air stream. The air and water droplets form surface contact layers where the latent heat of evaporation transfers the condensed steam’s energy to the air stream. In the process, the air’s sensible heat or dry bulb temperature actually increases only a few degrees.

    The crust on the ocean’s floor is relatively thin in many spots, as little as a few thousand feet. The weight of a gazillion tons of water keeps the earth’s molten core from breaking through – most of the time. However, the extreme heat from the earth’s core warms the water at the bottom of the ocean, a heat source similar to the steam surface condenser mentioned earlier. Instead of pumps, the warm water rises and circulates to the surface, where it evaporates the geothermal heat-flux energy into the air, cools, and then sinks: natural circulation.

    Over the past couple of decades the CO2 concentration at Mauna Loa has steadily increased, but assorted atmospheric temperatures have essentially flat-lined. The embarrassing missing heat was first supposedly “found” in the Pacific and then later “found” in the Atlantic. Considering the previous observations, the chances that the newly discovered heat came from the atmosphere are rather slim. The heat most likely comes from the geothermal heat flux through the ocean floor. IPCC AR5 TS.6 doesn’t know what the ocean is doing below 2,000 meters and has low confidence above that. The average depth of the ocean is about 4,000 meters, which makes the bottom half a big unknown.

    Excerpted from IPCC AR5 TS.6:
    “Observational coverage of the ocean deeper than 2000 m is still limited and hampers more robust estimates of changes in global ocean heat content and carbon content. This also limits the quantification of the contribution of deep ocean warming to sea level rise. {3.2, 3.7, 3.8; Box 3.1}”

    Why is the magnitude of the CO2 feedback loop important, why does it even matter? Quite frankly, the magnitude of the CO2 feedback loop is all that matters. That magnitude determines how quickly the atmosphere warms, how soon the ice caps melt, the sea levels rise, all of the dire projections of the IPCC AR5 summary and GCMs. If the magnitude of the feedback loop is small compared to other drivers of heating and cooling, such as the latent heat of evaporation (and that is rather obvious), then all of the dire projections, handwringing, and calls to action are naught but tales of sound and fury, signifying nothing, told by you know whom.

    Premise 3: The magnitude of the CO2 feedback loop is irrelevant since the role that loop plays in warming the atmosphere is insignificant.

    • I think this supports your premise #3. Adding more CO2 to an atmosphere with 400 ppm causes no measurable increase in temperature. That all happened in the first couple dozen ppm.

    • CO2 feedback is only one of many feedbacks that may operate on the climate system. However, it is not the most important feedback – theory would lead us to expect that the water-vapor feedback might be more important, though there are many uncertainties. Also, the magnitude of the CO2 feedback is unknown: the IPCC’s Fourth Assessment Report, for instance, puts it at 25-225 ppmv per Kelvin of global warming, a remarkably wide interval.

    • I like your extensive review, Nick,
      And of course we have a few sceptics.
      Just one simple point.
      Water vapor is 99.9 percent of the GREENHOUSE GAS EVERYONE IS FREAKING OUT ABOUT.
      So please leave the 0.1 percent alone; it is stupid to get crazy about 0.1 percent of nothing.
      CO2 is feeding you and your family.
      Be happy about that and shut up.
      My God,
      Did you not hear the story of Chicken Little in first grade?
      The world goes through cycles.
      If you can’t handle that then get out, get out fast.
      Do you think even if CO2 was a problem (it is not) that China would slow down for a second?
      Give me and the smart people on this forum a break.
      Dave H

    • No, the theory is not in doubt. But the amount of global warming that might actually occur in our complex ocean-atmosphere system is turning out to be much harder to predict than the usual suspects had over-confidently thought. All other things being equal, some warming is to be expected from our adding greenhouse gases to the atmosphere. But we do not know whether all other things are equal.

  12. I don’t understand why a least-squares fit is best. Two days at +1 degree should count the same as one day at +2 degrees. Prove otherwise.

    • I agree. It always seemed to me that a least-squares system gave too much weight to outliers, and that it would be more sensible to use say a “least abs” or “sqrt abs” method. I ran a number of tests using various other methods on various datasets, both real-world and contrived, and found that the methods made very little difference other than with very small contrived datasets. So unless someone comes up with a better study than mine, I accept least-squares as a reasonable standard mechanism.

      • If you are dealing with say random measurement errors, even if not normally distributed, then least squares applies. But what is random here?

      • Theoretically, least squares can be shown to be the “best linear unbiased estimator”, assuming that the variability that has nothing to do with the real physical relationship is normally distributed, i.e. assuming random measurement errors or other purely random variation in the physical quantities being measured.

        Outliers do have a large effect, but if the errors are normally distributed they will be rare enough not to prevent least squares from being the best estimate of a linear slope.

        It also assumes negligible error in the x coordinate, which is applicable here.

        However, in order to try to fit a linear model, you need a reason to suppose that a linear model is appropriate. If the r^2 statistic is close to zero the result is telling you that you were wrong. This is the point KDK was trying to make.

        There is nothing about climate data on any scale that suggests a linear model is a useful or appropriate model. Climate change is NOT linear.

        However, Lord Monckton is making a political point in a political debate: he is deliberately adopting the alarmists’ metrics to show that they are wrong.

        Likewise the terminology. I like his “Great Pause”. I can see this going down in the history books: “the Great Pause of the 21st century, which preceded the ……”

    • I think one of the reasons a least squares fit is used is the math is much simpler than trying to use abs(δT). That would make it not so much “the best” but “the most convenient.” While perhaps not so much a problem in this case given the discrete data, modeling things as a continuous function like a polynomial lets you integrate the function more readily than you can a discontinuous function.

    • “Prove otherwise” is kind of pushy when you don’t elaborate your point. It’s tempting to reply that you’ll just have to keep on not understanding…..
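The claim above, that least-squares and “least-abs” fits give nearly identical slopes on reasonably behaved data, is easy to test. Below is a minimal Python sketch on synthetic data (not the RSS series): an ordinary least-squares fit alongside a least-absolute-deviations fit computed by iteratively reweighted least squares, a standard way to approximate the L1 fit. All names here are my own illustration.

```python
import numpy as np

def ols_slope(x, y):
    """Ordinary least-squares slope (an intercept is fitted implicitly)."""
    A = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

def lad_slope(x, y, iters=100, eps=1e-8):
    """Least-absolute-deviations slope via iteratively reweighted least
    squares: repeatedly solve a weighted LS problem, downweighting points
    in proportion to their current absolute residual."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef = np.linalg.lstsq(A, y, rcond=None)[0]        # start from OLS
    for _ in range(iters):
        r = np.maximum(np.abs(y - A @ coef), eps)      # floor avoids 1/0
        w = np.sqrt(1.0 / r)
        coef = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    return coef[0]

# Synthetic "monthly anomaly" series: a tiny trend plus normal noise.
rng = np.random.default_rng(42)
months = np.arange(215.0)
anoms = 1e-4 * months + rng.normal(0.0, 0.1, months.size)

print(ols_slope(months, anoms), lad_slope(months, anoms))
```

On heavy-tailed data with genuine outliers the two slopes diverge more; on near-normal noise, as here, they agree closely, which matches the experiment described in the reply above.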

  13. I doubt any of this matters. The big problem with global warming, as evidenced by the recent new study (debunking the previous one) that shows CO2 and solar forcing operating in tandem, is that advocates will find evidence. 97% of them will find the evidence, you know. So only 3% are looking for anti-evidence.

    In a quantum-in-the-noise instance of global warming, who’s going to win out, no matter the truth?

    The pause could last 30 years, and advocates will be saying the same thing.

  14. I realize we’ve been using a negative trend as the cut-off but it is interesting that the trend for 18 years is only 2.67567e-05 per year. For all intents and purposes that is zero. So, I don’t think anyone is being too wild in saying the pause/hiatus/plateau is now 18 years.

    • You raise a good point. And interestingly enough, the negative slope for 215 months is “slope = -5.54517e-05 per year”. Since the magnitude of that negative slope over 215 months exceeds the tiny positive slope over 18 years, one could say that, to the nearest month, the absence of warming is indeed 18 years.
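The “farthest back with a sub-zero trend” calculation being discussed can be sketched in a few lines. This is an illustrative Python version with synthetic data; the function names are my own, and the actual posts use the real RSS monthly anomalies rather than this toy series.

```python
import numpy as np

def trend_per_year(anoms):
    """Least-squares slope of a monthly anomaly series, in degrees per year."""
    t = np.arange(anoms.size) / 12.0      # months -> years
    return np.polyfit(t, anoms, 1)[0]

def pause_length_months(anoms, min_months=24):
    """Earliest start month from which the least-squares trend to the end
    of the series is <= 0; returns that stretch's length in months."""
    for start in range(anoms.size - min_months):
        if trend_per_year(anoms[start:]) <= 0.0:
            return anoms.size - start
    return 0

# Synthetic illustration: 100 months of warming, then 150 flat-to-declining.
series = np.concatenate([np.linspace(0.0, 0.5, 100),
                         0.5 - 1e-4 * np.arange(150)])
print(pause_length_months(series))  # at least the 150 flat months
```

Scanning start months from the earliest is what makes the reported pause the *longest* such stretch ending at the latest data point.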

  15. What I find somewhat incredible is that during the period of no global temperature increase there has been substantial growth in sea ice. The formation of ice releases significant heat somewhere: either to outer space, into the atmosphere, or into the oceans. It is just like a refrigerator: to cool the inside of the box or make ice, the heat is ejected into the room.
    Has anyone calculated the amount of energy released associated with ice growth? If so I would appreciate a reference.
    Is it possible that this heat has gone into the ocean thus explaining the reported rise in ocean temperatures?
    Any thoughts?

    • Yes, when ice forms heat is released – – but it is a reduction in surrounding heat that causes the ice to form in the first place, and the released heat only compensates for a part of that missing heat. So all that happens is that the surroundings cool a bit less than they otherwise would have. And vice versa on melting, of course. So there isn’t actually any excess heat to be disposed of, just a negative feedback slowing the system down a bit. That’s my take, anyway.

      • Thanks for the comment.
        Looking at it from the other direction: if the surroundings begin to heat up and the ice starts melting, the melting takes heat energy away from the surroundings (air or water) and mitigates the extent of the heating. Ice formation or melting is therefore potentially a significant heat sink that reduces the effect of warming. While the Arctic ice mass was melting it effectively reduced the extent of global warming; similarly, as the ice mass grows it essentially reduces the amount of global cooling.
        For me the question is how significant this process is relative to other factors; that of course depends on the mass of ice formed or melted, which has been significant over the last several years.
        What am I missing?

      • There is no ‘missing heat’ in the oceans, yet more bad physics from the Trenberth School trying to keep the IPCC climate scam going. Ocean warming is in the upper ocean; it is balanced by cooling of the deeps.

        The Earth operates as a heat engine to ensure thermalised SW = OLR. There is no heat trapping by GHGs. The Arctic melt-freeze cycle, 50 to 70 years, is all about the accumulation in ice of materials which reduce cloud albedo, accelerating ice melt. When most of the old ice has disappeared, the process reverses. The same mechanism, over a much longer time scale, provides the amplification of TSI change at the end of ice ages….

      • I looked at this the other day where Bob Tisdale posted SST data showing notable temperature “anomalies” in the Bering Sea and around Greenland.

        A rough calculation (using physical constants for pure water, not sea water) shows that freezing 1 kg of water releases enough energy to heat 100 kg of water by the “anomaly” of 0.8 kelvin shown in those areas.

        The increasing ice volume is causing localised warming. Apparently this is large enough to cause an increase in the globally averaged “anomaly”, since there are no significant warming or cooling anomalies elsewhere.

        So there you have it, freezing in the Arctic is causing global warming. !!

      • Greg,
        I agree with your comment. The interesting point for me is that we chase hundredths of a degree of change in global temperature while ignoring the fact that this measurement does not consider the energy released or absorbed at our poles. If that energy is significant relative to hundredths of a degree, why is it apparently ignored?
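The back-of-envelope number quoted earlier in this thread (freezing 1 kg of water releases enough heat to warm roughly 100 kg of water by about 0.8 K) checks out with standard pure-water constants:

```python
# Check of the back-of-envelope calculation, using pure-water constants
# (as the comment does), not seawater values.
L_FUSION = 334e3     # J/kg, latent heat of fusion of water
C_WATER = 4186.0     # J/(kg K), specific heat of liquid water

mass_frozen = 1.0    # kg of water freezing
mass_warmed = 100.0  # kg of surrounding water

delta_T = (mass_frozen * L_FUSION) / (mass_warmed * C_WATER)
print(f"{delta_T:.2f} K")  # ~0.80 K, matching the quoted anomaly
```

Seawater constants would shift the answer slightly, but not enough to change the order-of-magnitude point being made.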

  16. Take the Doomer Geographers and place them into a lead-lined titanium 1 m thick pressure sphere.

    Line the exterior of the sphere with 100 tons of TNT.


    Would we finally get the Princeton Tokamak to ignite beautiful fusion?

    A worthwhile experiment: use the worthless bodies of Geographers to supply the needed power to the New World Order.

    Ha ha

  17. SIGINT EX September 4, 2014 at 8:02 pm

    Your fascination with death scenes has gotten you snipped on multiple occasions. Frankly, it isn’t “ha ha” at all. You need help.

  18. At the present state of the Earth’s evolution, the atmosphere self controls to make the average warming from all well-mixed GHGs exactly zero. As for the IPCC’s claims about radiative and IR physics, it is easy to show that it is wrong from the operation of night vision equipment.

    The detector at the same temperature as the surroundings shows an image that shimmers, alternating light and dark at any position. What is really being detected is the thermal incoherence about zero mean flux, thus proving net energy flux is the vector sum of opposing emittances.

    ‘Back radiation’ does not exist. There is no ‘positive feedback’. The effect of GHGs on OLR is exactly compensated by lower atmosphere processes. IPCC science is absurdly wrong.


  19. This analysis is based on the clear outlier, RSS. Adding RSS to UAH in Figures 2 and 3 is adding sour milk to fresh.

    Note that WoodForTrees uses the UAH v5.5 dataset; at the full page you can click on “Raw data” and see the source. For the post above, while the RSS source is given on Figure 1 and discussed in the text, no mention is made of the UAH version and source.

      • It has to be 5.6, since it goes to August and 5.5 is not out yet for August.
        Year Mo Globe Land Ocean NH Land Ocean SH Land Ocean Trpcs Land Ocean NoExt Land Ocean SoExt Land Ocean NoPol Land Ocean SoPol Land Ocean USA48 USA49 AUST
        2014 6 0.31 0.19 0.37 0.32 0.23 0.40 0.29 0.12 0.35 0.51 0.53 0.50 0.18 0.13 0.25 0.21 -0.22 0.33 0.31 0.36 0.24 -1.21 -1.32 -1.12 -0.11 -0.02 0.44
        2014 7 0.30 0.13 0.40 0.29 0.20 0.38 0.31 -0.01 0.42 0.46 0.42 0.49 0.17 0.10 0.25 0.27 -0.27 0.42 0.21 0.41 -0.11 -0.30 -1.09 0.33 -0.28 -0.09 0.42

        August v5.6 isn’t there either.

        Latest “Last Modified” date is 14-Aug-2014. If he used August data, I don’t know where he got it.

        The “Technical note” says a “computer algorithm” scrapes the RSS data monthly and does the graphing. Perhaps likewise there is also another that gathers the UAH data, and they were averaged for another one or two that produced the other graphs.

        The August RSS anomaly is 0.193. Zoom in on Figure 2, “RSS + UAH” is just under 0.2, could be about 0.19. Same in Figure 3.

        July RSS was 0.351. July UAH was 0.221. It is unlikely August UAH v5.6 would be close enough to August RSS to allow the average to be just about the same as August RSS alone.

        If August UAH was not available, a smart “computer algorithm” for averaging might simply average with the available data instead of a null or “N/A” value, like with spreadsheets. A smarter one though should not give an average without all the data. It is possible what was done to generate the graphs did not flag August UAH as missing but did label them as extending to August because it did have August RSS data.

        Conclusion: August UAH was not used in Figures 2 and 3.

      • From Werner Brozek on September 5, 2014 at 8:28 am:

        Exactly right! It was 0.199 according to:

        Ah, that’s right! Thank you, my friend. I had forgotten about the UAH update.

        It does seem strange to make such a big issue in the “Technical note” section of how the RSS numbers are accurate due to using a “computer algorithm”, and then conveniently not mention the last UAH number used in 2/3 of your graphs was added by hand, but it does make it possible for August UAH to have been included.

        As the possibility is there, and the value is close enough to average with August RSS as shown in the two graphs, I admit my conclusion may be in error and withdraw it.

        But the use of RSS still reeks of cherrypicking, when it’s clearly at variance with even the other satellite dataset. Mixing it with UAH is like mixing a rotten banana with fresh strawberries then offering the “freshly made” smoothie for sale.

        With everything else showing the reality of the “hiatus”, do we really need to promote the outlier to also gloat over a few extra months?

      • it does make it possible for August UAH to have been included

        Of course this is possible with everything but GISS. GISS can have a latest value that would make you think the plateau will go up or down, but then the opposite happens due to all other adjustments.
        I also wish UAH and RSS were closer. We were told that would be the case with version 6. But who knows when that will come out?
        You talk about “mixing a rotten banana with fresh strawberries”. If you were to categorize GISS as one of these, which one would it be?

      • Ah, GISS for this century is not bad, look at recent trend lines, too many eyes watching for them to get away with much. It’s the past they keep screwing with, before the satellites were watching, those numbers you can’t trust. For now they’re fresh strawberries, although somewhat tart with a residual taste of fertilizer.

      • But there is still a rather large difference between GISS and Hadcrut4. With GISS, the slope is flat for 9 years and 11 months. With Hadcrut4, the slope is flat for 13 years and 6 months.

        GISS, 12mo moving average, 1995.92 to 2014.5 yields 0.110 +/- 0.110 °C/Decade (2 sigma). No statistically significant warming from November 1995 to July 2014, 18 yrs and 8 mo. Isn’t that better than saying there’s a flat slope for 9 yrs and 11 mo?

        HadCRUT4, 1994.67 to 2014.5, 0.097 +/- 0.097 °C/Decade. September 1994 to July 2014, 19 yrs and 10 mo. Not that much difference between HadCRUT4 and GISS that way, eh?

        HadCRUT4 is practically two decades without statistically significant warming? Why aren’t we mentioning that instead of mucking around with outlier RSS and flat slopes?

      • The site that you mention has not been updated since January. There is no difference if you put in 2014.5 or 2014.08. Besides, I cover statistically significant warming as well in my own posts. See my next one with July data, either today or tomorrow. In section 1, I give the time for no warming, and in section 2, I give the time for no statistically significant warming. They are really two different things.
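The “0.110 +/- 0.110 °C/decade (2 sigma)” figures quoted above come from fitting an OLS trend and attaching a two-standard-error band to the slope; “no statistically significant warming” just means that band straddles zero. A bare-bones Python sketch (note: the calculators referenced in the thread also correct for autocorrelation, which this deliberately omits, so real intervals are wider):

```python
import numpy as np

def trend_with_2sigma(t, y):
    """OLS slope per unit of t, plus a plain two-standard-error band.
    (No autocorrelation correction: for real monthly temperature data
    the true uncertainty would be wider than this.)"""
    n = t.size
    A = np.vstack([t, np.ones(n)]).T
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    resid = y - A @ coef
    s2 = resid @ resid / (n - 2)                     # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))   # std. error of slope
    return coef[0], 2.0 * se

# Illustration: ~18 years of synthetic monthly data, small trend + noise.
rng = np.random.default_rng(1)
t = np.arange(215.0) / 12.0                          # time in years
y = 0.005 * t + rng.normal(0.0, 0.15, t.size)
slope, ci = trend_with_2sigma(t, y)
print(f"trend = {slope:+.4f} +/- {ci:.4f} degC/yr")
```

This is why “flat slope for N years” and “no statistically significant warming for M years” give different answers, as the commenters note: the first asks where the slope itself crosses zero, the second where zero leaves the slope's confidence band.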

    • HadCRUT is bullshit: you cannot take the “average” of the temperature of two media whose heat capacities differ by orders of magnitude.

      HadSST is water, CRUTem is air.

      Even if they want to argue that SST is a ‘proxy’ for near surface marine air temperature, land based air temps have a variability that is roughly twice that of SST. It’s trying to average apples and oranges.

      I agree that there is no reason to prefer RSS to UAH, so that is a cherry pick. It would be better to do the same processing on both. The result is not much different. IIRC it’s >15 yr for UAH.

      • In response to “Greg”, there are several reasons to prefer RSS to UAH. It usually reports first; I have now been providing the RSS trends for several years, giving an interesting insight into the ever-lengthening Pause; and at present the compilers of the UAH dataset have realized it is running hot and are about to bring it more closely into line with the mean of the three terrestrial datasets via a significant revision.

        And – though I am open to correction on this – the CRUTem data do not measure the temperature of the land surface: they measure the temperature of the air immediately above the land surface. By the same token, the HadSST data do not measure the temperature of the sea surface: they measure the temperature of the air immediately above the sea surface.

        Though it is true that temperatures above the land are more volatile than those above the sea, there is no particular reason why one should not take the mean of the temperature readings from both datasets as the basis for providing some indication of whether the Earth as a whole is warming or cooling. Whether over the land or over the sea, it is air temperatures that are being averaged – so, apples compared with apples, not apples with oranges.

      • The UAH trend from January 1999 to August 2014 is 0.143 degrees C / decade, a bit higher than the overall trend and a 27% increase from where the overall trend stood in 1999. The mean temperature anomaly in 1999 (i.e. the overall mean from 1/1979 to 1/1999) was -0.0971. This has increased to 0.148 as of August 2014 (1/1979 to 8/2014). So during the 15-year hiatus, the UAH record shows: 1) the 15-year linear trend increased, 2) average temperatures increased over the previous 15 years, 3) the overall linear trend increased, and 4) the overall average temperatures increased.

        That’s one heck of a hiatus.

  20. And now we have additional SO2 finding its way around the northern reaches … (broken NASA Worldview link showing OMI SO2 and MODIS reflectance layers for 2014-09-04).

    That could get interesting.

  21. “The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers, which not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring via spaceward mirrors the known temperature of the cosmic background radiation.” Please can you explain this method of temperature measurement in more detail or give some reference? I don’t understand how you measure the radiation temperature with a Pt thermometer. It seems to act as a bolometer, and there must be some optical filters between the source and the thermometer.

  22. Mr Knoebel says that an r^2 of 0.000 indicates that the linear trend is a poor fit to the data. However, it is in the nature of the algorithm that determines r^2 that if the fitted trend is zero, even small departures from the trend line will produce an r^2 of zero. It is self-evident from looking at the data curve that the data are stochastic; nevertheless, it is also self-evident that the rate of warming is zero, or as near zero as makes little difference, and the r^2 of 0.000 is one indication of this.

    Mr Knoebel says one should not talk of “no trend”. He prefers “neutral trend”. If the trend is neither a positive nor a negative trend but is a zero trend, it is often referred to as “no trend”.

    Mr Knoebel says adding the RSS and UAH satellite datasets and taking their mean is inappropriate. However, this exercise produces a trend very close to the trend if one takes the mean of the three terrestrial datasets. RSS tends to run cold: UAH (in its current version, at any rate) tends to run hot. Averaging the two happens to cancel out the cold and hot running. UAH, however, is soon to move to version 6, which will bring it more closely into line with RSS.

    Mr Knoebel says the August data for UAH were not used. They were used, for the monthly anomaly is published here by Roy Spencer. However, the data table available on the internet tends not to be updated till the middle of the month.

    Mr Knoebel says I have cherry-picked the RSS dataset. However, the zero trend shown on that dataset is within the error margins of all the other datasets.

    Bottom line: not one of the models predicted as its central estimate that there would be no global warming for approaching two decades notwithstanding the considerable growth in CO2 concentration over the period. No amount of statistical nit-picking will alter that fact. Likewise, there is a growing discrepancy between the predicted and actual warming rates. This, too, is undeniable. IPCC has all but halved its short-term predictions of global warming, demonstrating that it has – albeit with reluctance – accepted the fact of the Great Pause. It is time for Mr Knoebel to do the same.
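On the r^2 point: for a simple linear fit, r^2 is just the squared correlation between time and temperature, so a near-zero fitted slope forces a near-zero r^2 no matter how small the scatter. A quick synthetic demonstration (my own toy series, not the RSS data):

```python
import numpy as np

def r_squared(t, y):
    """r^2 of a simple linear fit: the squared correlation of t and y."""
    return float(np.corrcoef(t, y)[0, 1] ** 2)

rng = np.random.default_rng(0)
t = np.arange(215.0)
noise = rng.normal(0.0, 0.1, t.size)

flat = noise                    # zero underlying trend
warming = 0.002 * t + noise     # same noise, plus a clear trend

print(r_squared(t, flat), r_squared(t, warming))
```

Identical noise in both series, yet the flat one scores an r^2 near zero and the trending one does not: r^2 here measures whether a linear relationship exists, not whether the data are well-behaved.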

    • “RSS tends to run Cold”

      Looked at the answers in the back of the book, Sir Christopher? You seem to have access to divine knowledge of the true temperature.

      “UAH, however, is soon to move to version 6” More ‘adjustments’ eh?

      When are you going to start comparing like with like? IPCC’s scenarios are for surface temperature. Comparison with an average of the surface temperature data sets, for example, would be far more meaningful. Though, of course, it wouldn’t serve your agenda.

      • Village Idiot

        You correctly say that doing as you suggest would not “serve” the “agenda” of Lord Monckton whose “agenda” is to proclaim the truth.

        Please say why you think anybody would prefer the unstated agenda of an anonymous internet troll who admits to being an Idiot.


      • In answer to “Village Idiot”, comparison of the IPCC’s exaggerated predictions with the mean of the three principal terrestrial datasets would look very similar to comparison of its predictions with the mean of the two satellite datasets: for the terrestrial and satellite means are very close to one another. The sole advantage of using the satellite datasets is that they report sooner each month than the terrestrial datasets. However, from time to time I provide additional reports examining all of the principal datasets, terrestrial as well as satellite.

    • Mr McCulloch is correct. Since January 2001, for instance, the RSS dataset shows global temperature falling at a rate equivalent to 0.5 K/century. However, most datasets show little or no trend (and if anything a minuscule uptrend) since that date.

  23. “It has been roughly two decades since there was a trend in temperature significantly different from zero. The burst of warming that preceded the millennium lasted about 20 years and was preceded by 30 years of slight cooling after 1940.”

    1940s-70s: 30 years of cooling.
    1970s-90s: 20 years of warming.
    1990s-2014: 20 years of neither cooling nor warming.

    Natural cycles, then. When do the fraud trials begin?

  24. If there is anything that we have learned from 135 years of ocean SST history, it is that global climate seems to be driven by the cycles of our oceans. These 60-70 year pole-to-pole Pacific and Atlantic ocean cycles point to a cooling climate for the next 20-30 years. This cooler cycle may not trough until 2035/2045, so this so-called pause will not end anytime soon. The coldest period could be 2030-2050. This is similar to the 1880-1910 downturn, when the coldest period was 1900-1920.

    • “Herkimer” is right that the ocean oscillations [particularly the Pacific decadal oscillation] have a strong influence on temperatures. However, as India, China and eventually even poor Africa industrialize and provide universal electricity (chiefly from fossil fuels) as the fastest, surest way to lift their peoples out of poverty and hence to stabilize their populations, CO2 emissions and concentrations are bound to increase.

      All other things being equal, therefore, I should expect some warming to occur between now and 2050. However, the Sun is likely to be less active over the next 40 years than over the past 40. That small but persistent reduction in solar activity may perhaps hold temperatures down. If so, the predictions of the useless models will look even sillier than they already do.

  25. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate, the claim goes, because global temperature records exhibit little autocorrelation.

    Sigh. I just posted this link to a wonderfully illuminating William Briggs comment on the McKitrick thread, but it is almost certainly a good idea to post it here as well: How To Cheat, Or Fool Yourself, With Time Series: Climate Example.

    To summarize here:

    It is true that you can look at the data and ponder a “null hypothesis” of “no change” and then fit a model to kill off this straw man. But why? If the model you fit is any good, it will be able to skillfully predict new data (see point (1)). And if it’s a bad model, why clutter up the picture with spurious, misleading lines?

    Why should you trust any statistical model (by “any” I mean “any”) unless it can skillfully predict new data?

    Again, if you want to claim that the data has gone up, down, did a swirl, or any other damn thing, just look at it!

    (Emphasis his.)

    The point being that it is bad to fit linear trends to global temperature, not good, because it is almost always a bad idea to fit a linear trend to a hand-picked segment of a timeseries and enormously risky and misleading to fit a trend even to the entire data set. Phil Jones is simply mistaken, either through ignorance (not unlikely) or because he wishes to convince himself that particular linear trends in some of the many, many data chords one can select in the many, many climate timeseries are meaningful. I strongly suggest that you read Briggs’ article (if you haven’t already) and take it to heart, because this is one of the most abused notions in the history of misapplied statistical reasoning.

    As Briggs (and I independently, in other threads, because I’ve co-founded two companies based on predictive modelling for money so far and have a bit of expertise here and in AI and pattern recognition) has often pointed out, there is only one good reason to build a linear model of a timeseries, or a logistic model, or a Fourier model, or a quadratic model, or an exponential model, or a neural-network-based model (my personal favorite for high-dimensional problems), or a model based on the textual writings of Nostradamus. That is to predict the future. Not the present. Not even (really) the past, although see below.

    This use is admirable, even though it is only marginally science — perhaps a first step towards science, because fitting an unmotivated or poorly motivated linear (or whatever) model to data is in fact a logical and statistical fallacy, little better than using Nostradamus unless and until it is backed up by a functional model and works!

    Let me state that last bit once again, even more strongly: and works! A predictive model of any kind is useful precisely to the extent that it shows skill in prediction. Period. For as long as this desirable state of affairs lasts, which is regrettably often not very long when one fits a linear model, especially to a manifestly non-stationary time series of data! That is, it is particularly dumb to fit straight lines to data that ain’t on a straight line the minute one gets outside of one’s fit interval and where one expects the underlying (invisible) causal parameters that influence the data to be doing lots of wild and exciting and non-stationary things.

    This is where Jones (and you, by inheritance) makes another capital mistake. Why in the world would anybody fit a linear trend to climate data when a glance at any of the extant series on pretty much any quantity suffices to demonstrate that almost none of the data can be fit by a linear trend for a time longer than ten to thirty years? Take a peek at HADCRUT4. A linear trend (drawn) sucks at fitting the data, which is nonlinear, non-stationary and incredibly poorly fit by a linear model. Yes, one can look at this and think, “Hey, maybe I can fit this with a linear trend plus a Fourier component.” Or perhaps with an offset exponential trend or a logarithmic trend plus a Fourier component. Or I look at it and think, “Damn, I could easily build a NN to fit that timeseries.” But if we did any of these things (and we could make all of them work, to at least substantially improve on the linear trend), would that fit have any predictive value whatsoever?

    Doubtful. If you look back 2000 years, you see that your model utterly fails to hindcast past temperatures. If you use common sense to think about the future, you realize that your model implies that 2000 years from now the temperature will be well above the boiling point of water. It might work for the HADCRUT4 data; it might even extrapolate for some way into the future (exhibit some skill at prediction); but you know that the model, any of these models, will fail in the future, most of them quite rapidly, because they didn’t even work in the past outside of the interval you happened to fit.

    Here is where unmotivated or poorly motivated function fitting is a most dangerous approach to predictive modelling. You could quite possibly fit HADCRUT4 pretty well with some of the models I’ve proposed. I’ve tinkered a bit with linear plus Fourier, and it was a vast improvement on linear, and I’ll bet I could do even better with either an exponential or a logarithm plus a Fourier term (or two or three) to better catch the slight gain over the long-term linear trend near the end. But do I expect any of those models to have any predictive skill? No, of course not. Why would they? They would laughably fail if I went back a mere 50 more years, and they would fail so badly on the 2000-year data, even if I re-fit them to the 2000-year data, that one wouldn’t even try in the first place. It is obviously an accident that the last 150 years can be fit in this way. The term “accident” here doesn’t mean that there may not be reasons for it; it means that those reasons themselves are accidents in the grand scheme of climate dynamics; they are not stationary.

    Here’s a radical idea, so dumb and yet so functional. Perhaps the best fit to the data is chorded linear trends like the ones Briggs describes. Or one can spline the data, which basically means fitting e.g. cubics to segments to get an interpolating line. The former is a sort of “punctuated trend” model, where a “punctuated equilibrium” model would insist on fitting flat segments wherever possible, joined by comparatively sudden steps. The latter would actually work decently well on HADCRUT4 if the segments were perhaps 10 to 40 years (varying) wide. Obviously, punctuated trend would work even better. And a spline, like fitting it with a full Legendre polynomial series or Fourier series or any other complete functional basis on a finite line segment, would obviously do as well as you want it to; you can fit it all the way down to the noise if you like.

    The point of this is hopefully obvious. After we fit all of those trend segments together in some way that pleases us, what do those segments actually mean? Not a damn thing. After all, sometimes they trend up, sometimes down, sometimes flat. Sometimes the up trend lasts 20 years. Sometimes only 5 or 10. Sometimes a flat segment lasts 30 years. Sometimes a down trend lasts 20 years. We have no idea why a single one of these lines has the parameters that best fit the data. We haven’t a clue as to why the climate changes to a different trend wherever we with our vastly experienced eye and the seat of our pants decided that a trend had changed and started to fit a different linear trend to the following data up to another equally arbitrary point. And God help you if you think this sort of constructive process is extrapolable — really, any of these constructions. Wall Street is paved with the bones of brokers who thought they’d detected a reliable trend in market timeseries — bones driven into the pavement when the broker in question eventually jumped out of a high window onto it. And trust me, one day “climate science” is going to have its own boneyard.
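The danger of extrapolating a chorded linear trend is easy to demonstrate in a few lines: fit a straight line to one chord of a smooth oscillation and project it into the next chord. A synthetic, purely illustrative sketch, not climate data:

```python
import numpy as np

# One slow "cycle" of period 60 (think of a multidecadal oscillation).
t = np.linspace(0.0, 120.0, 1201)
y = np.sin(2.0 * np.pi * t / 60.0)

fit = (t >= 0.0) & (t < 20.0)            # a mostly-rising chord
coef = np.polyfit(t[fit], y[fit], 1)     # linear trend fitted to that chord

in_err = np.abs(np.polyval(coef, t[fit]) - y[fit]).max()
out = (t >= 20.0) & (t < 45.0)           # the chord that follows
out_err = np.abs(np.polyval(coef, t[out]) - y[out]).max()

print(in_err, out_err)  # extrapolation error dwarfs the in-sample misfit
```

Inside the fit window the line looks respectable; a quarter-cycle later its prediction is off by several times the entire amplitude of the signal, which is exactly the "bones of brokers" failure mode described above.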

    In the meantime, we continue to live in “statistics hell”. McKitrick’s paper did not demonstrate that over 19 years of data the trend is indistinguishable from zero. It demonstrated that 19 years is the longest stretch over which one cannot reject the null hypothesis of no trend at the 95% confidence level. Those are not, actually, the same thing. Over those 19 years, the data has a very definite trend. It’s just that, by cleverly applying an abstruse and complex model, he was able to find a way that the actual data had a probability of 0.05 (subject to a raft of assumptions about the nature of excursions of the data from a truly neutral trend, all Bayesian priors and none of them capable of surviving a posterior probability correction) given an “actual” trend line (whatever that means — I think nothing at all, what do you think?) with no slope. The incredibly silly part is the choice of p = 0.05, which after all is only 1 in 20. What he’s really saying is that it is 95% likely that the data has a positive trend, but 95% isn’t likely enough to reject the possibility of no slope — and see what Briggs has to say about “confidence intervals” in linear trends fit to selected data chords. It’s a confidence game.
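    The distinction drawn above can be made concrete with a small, fully deterministic sketch. The numbers are synthetic (a faint trend plus a loud alternating wiggle, standing in for noisy monthly anomalies), and 1.96 is the usual normal approximation to the 95% critical value; nothing here reproduces McKitrick's actual method.

```python
# A span of data can have a definite positive OLS slope and still fail to
# reject the zero-trend null at 95%. Synthetic data, purely illustrative.
import math

n = 228  # roughly 19 "years" of monthly values
x = list(range(n))
y = [0.0002 * t + 0.15 * (-1) ** t for t in x]  # faint trend + loud wiggle

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((t - xbar) ** 2 for t in x)
slope = sum((t - xbar) * (v - ybar) for t, v in zip(x, y)) / sxx
resid_var = sum((v - (ybar + slope * (t - xbar))) ** 2
                for t, v in zip(x, y)) / (n - 2)
t_stat = slope / math.sqrt(resid_var / sxx)

assert slope > 0           # the data do have a definite upward trend...
assert abs(t_stat) < 1.96  # ...yet the zero-trend null survives at 95%
```

    The slope is genuinely positive; the test simply cannot rule out zero. "No significant trend" and "no trend" are different claims.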

    I repeat: please read Briggs’ post very, very carefully and try to actually learn from it. It makes the point that I think you wish to make, but it makes it in a statistically defensible way. Forget this trend, or that trend. Don’t draw trend lines at all — the data speaks for itself, and drawing a trend line through it is part of a complex lie, or rather an attractive fantasy that after doing so you can keep the trend line and ignore the data thereafter, because the trend line you so laboriously extract between carefully chosen endpoints means something. Keep trend lines only to the extent that you (the creator of the trend line) are willing to wager your professional reputation and all hope of future financial support for your work on the gamble that the trend line is extrapolable, that is, will exhibit actual predictive skill into the future. And for the love of God, Montresor, if you take nothing else from Briggs, take this:

    Notice that we stated specifics of the line in terms of the “trend”, i.e. the unobservable parameter of the model. The confidence interval was also for this parameter. It most certainly was not a confidence interval on the actual anomalies we expect to see.

    If we use the confidence interval to supply a guess of the certainty in future values, we will be about 5 to 10 times too sure of ourselves. That is, the actual, real, should-be-used confidence interval should be the interval on the anomalies themselves, not the parameter.

    In statistical parlance, we say that the parameter(s) should be “integrated out.” So when you see a line fit to time series, and words about the confidence interval, the results will be too certain. This is an inescapable fact.

    Again, emphasis his. But mine as well. The point is, Christopher (if I may take the liberty of calling you Christopher, as Mr. Monckton sounds dreadfully formal and, while you are no doubt a Lord, I’m an American and you aren’t my Lord ;-) that all of the confidence intervals you are asserting in these periodic postings of yours are sheer piffle, as are the trend lines themselves. So are the trend lines fit by many a well-intentioned climate scientist and less-well-intentioned politician, but just because they are statistical idiots isn’t any good reason to emulate them. Confidence in what, exactly? The assertion that the climate will continue to evolve following the fit trend line? Don’t make me laugh.
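    Briggs' parameter-versus-anomaly distinction quoted above can be sketched numerically. The formulas are the standard ones for simple linear regression (confidence interval on the fitted mean versus prediction interval on a new observation); the series itself is invented, and 1.96 is the normal approximation to the critical value. Note that the ratio of the two interval widths lands squarely in his "5 to 10 times too sure" range.

```python
# Confidence interval on the *trend parameter* vs the interval on the
# *anomalies themselves*, at the most recent month. Synthetic data.
import math

n = 216  # 18 "years" of monthly anomalies -- not any real dataset
x = list(range(n))
y = [0.0005 * t + 0.1 * (-1) ** t for t in x]  # toy anomalies

xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((t - xbar) ** 2 for t in x)
slope = sum((t - xbar) * (v - ybar) for t, v in zip(x, y)) / sxx
resid = [v - (ybar + slope * (t - xbar)) for t, v in zip(x, y)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))  # residual standard error

x0 = n - 1  # the most recent month
ci_mean = 1.96 * s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)     # CI on the line
pi_obs = 1.96 * s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)  # PI on an anomaly

assert pi_obs > ci_mean
print(round(pi_obs / ci_mean, 1))  # ~7.4 here: within Briggs' 5-to-10 range
```

    Quoting the narrow parameter interval as if it bounded future anomalies is precisely the "too sure of ourselves" error Briggs describes.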

    I agree completely that it is worthwhile pointing out that the predictive models in CMIP5 are actively, dynamically failing — as history suggests that any monotonic model will fail to predict the climate for nearly all of the time because the climate is non-stationary, no set of predictive parameters in a “fit” is likely to persist for as long as 30 years, depending on a whole raft of Bayesian assumptions that I, like Briggs, will quietly ignore for the purposes of this discussion and that are, probably, not true. The climate models that are being judged were deemed worthy of consideration in the first place based solely on their success at fitting the reference interval, which is cosmically stupid in predictive modelling — anybody can fit the training data, especially when it is nearly monotonic. Skilled modellers would hold out a trial set and only train on part of the data, and very skilled modellers would insist that any model intended to predict nonlinear phenomena be able to track key non-monotonic features — data that goes up and down. A single glance at figure 9.8a in AR5 is sufficient to give any competent modeller the willies — none of the models in CMIP5 come anywhere near the hindcast data (which will have to do as a trial set past the monotonic training set) and of course, as you point out, there is substantial and increasing deviation of the models singly and collectively from reality for the “future” of the training set up to the present. The models, in other words, have no skill.

    That’s really the only thing one has to point out. Whatever the skill of the modellers, the models they have built have no skill. When assessed individually they’d be failing hypothesis tests with p less than 0.05 right and left, especially if those tests were extended to include hindcasts of the data. If individual runs are compared structurally to the actual climate, the failure would be even worse, as they would generally fail to reproduce any of the gross statistical features of the actual climate — the right temperature variance, the autocorrelation times, the frequency of droughts and floods, the violence of storms, the distribution of atmospheric warming. Failing on any of these would be worrisome — failing on all of them can only be fixed in one of two ways.

    First, the simplest thing to do would be to just acknowledge that the models are not working well, that they have no skill, and should not be relied on. This would be the simplest thing because it is the truth, and because frankly, if they did work, it would actually be surprising given what we know about the computational complexity and difficulty of solving highly nonlinear PDEs for very long times into the future at a spatiotemporal granularity that is well over 20 orders of magnitude larger than what we might expect to need to do a good job of simulating the climate. Not to mention our incredible ignorance of the actual initial state and sensitivity of the dynamics to initial state and the approximated physics and the possibly unknown physics, but I’ll stop there.

    The second way that would work just as well would be to say, well, maybe these models are actually working to some approximation, but when we examine the spread of results they predict, their probable error is so large that they are still useless as a predictive tool. This is, sure, another way of saying they have no skill, but it avoids the humiliation of failing a p-test. It isn’t that they fail a p-test, it is that the standard error in their predictions is so large that we can’t take them seriously in the first place: no physically reasonable time evolution of the climate is excluded or out of bounds of the envelope of their perturbed parameter predictions.

    Note that either way, the conclusion is the same. The more complex models, like the simpler linear trends that are a lot easier for humans to digest and a lot easier to turn into statistical lies for political and economic gain, have no real skill, and the human species should view them with the same jaded eye we would view a racetrack tout who promised us a perfect “system” for predicting the outcome of horse races.


    • I agree – and took a pounding for saying so recently from the bobble heads who lap this stuff up. It is interesting in the way that tossing a deck of playing cards in the air is interesting if all the kings land face up, or in the way that clouds can look like sheep or Godzilla or both.

      It is a better political tool than a scientific finding and I’m ok with that. I do wonder what it would look like if a person were to plot as a time series the average temperature from each of Monckton of Brenchley’s plots over the time he’s been creating them. That would produce a trend of something, and I don’t know what it would tell us except that no two plots produce the same average temperature over the series.

      • I am of course delighted that “dp” joins Professor Brown in questioning the use of statistical trends by the IPCC and by the climatological community. However, since linear trends are used with great frequency by climatologists, I use them here to provide a highly visible and very clear comparison between what was predicted and what has occurred.

        Perhaps “dp” would like to produce a plot of the temperature anomaly for each of my successive monthly trend-lines. Since the Pause is lengthening, I should not expect to find a significant change. However, “dp”, in doing any such analysis, would of course be applying a statistical technique to a statistical technique with which he disagrees – hardly a valuable exercise.

    • Sigh! It ought to be self-evident by now to such regular readers of this column as Professor Brown that a) I have not stated or implied that a least-squares linear-regression trend has any predictive skill whatsoever, and have frequently made it explicit that it has none; b) I have not asserted or implied that a least-squares regression is the best method of determining a trend – merely that it is what the IPCC and most climatologists use; c) I do not assert that any statistical process is the best method of determining a trend.

      Whether Professor Brown likes it or not, the IPCC and the world of climatology usually use least-squares linear-regression trends. So I use them too: for I am trained in logic, and am content to argue on ground of my opponents’ choosing. If Professor Brown does not like this or any other of the statistical processes applied to data series in climatology, then there is really not the slightest point in addressing that complaint to me: he should instead address it to the Secretariat of the IPCC, or to Professor Jones, or to James Hansen, or to all the numerous climatologists who regularly use least-squares trends. While they use such trends, I shall use them too, for the sake of determining the extent to which the trends they had predicted are not in fact occurring.

      One virtue of displaying the trend line is that it provides the very clearest visual indication that global warming has not been happening over the past decade or two. For this reason, my graphs often appear on national television programs, where they are effective because ordinary people can understand that a horizontal line that represents the data over a chosen period indicates that there has been no warming (or, for that matter, cooling) over the period in question. That fact runs counter to what they are being told daily in the news media (about which Professor Brown seems to make no complaint at all).

      So the question is this. Are the news media and the scallywags driving the climate scare correct in saying that global warming is continuing at the predicted rate? If they are correct, then Professor Brown’s argument against my graphs should not be that he does not like me using this or any statistical process to discern the rate at which the temperature is or is not changing: it should be that my conclusion that there has been no warming for a decade or two is simply incorrect.

      If, on the other hand, the media are not correct, then why, o why, is Professor Brown whining at me for providing a quite widely circulated visual demonstration that they are not correct, instead of whining at them for being incorrect? He is aiming, somewhat futilely, at the wrong target.

      The Professor also takes me to task for daring to show in graphical form the IPCC’s prediction intervals, rather than simply its central estimates. He says those intervals are meaningless. I know that: but they are, for better or worse, the IPCC’s intervals, and it is legitimate for me to draw those intervals and also to draw the trends on the real-world, observed data, thereby showing that the trend lines do not fall on – or even particularly close to – the predicted intervals. Once again, there is really no point in his whining at me when he should be addressing his complaint about the meaninglessness of the IPCC’s intervals to the IPCC secretariat.

      One of the difficulties in being a layman and having no piece of paper to say that I have received the appropriate Socialist training in these matters is that, with unbecoming frequency, I am somewhat arrogantly lectured because I use the methodologies that the IPCC and the world of climatology use. I use these methodologies not because I approve of them but because they are the language that the IPCC and the climatologists are familiar with.

      Why, for instance, is it all right for the IPCC to select four very carefully chosen least-squares linear-regression trend lines, apply them simultaneously to a single dataset, and apply a fraudulent statistical dodge to pretend, quite falsely, that the rate of global warming is accelerating and that we are to blame – all without a murmur of dissent from Professor Brown or from the numerous me-too trolls on this thread who whine at my perfectly reasonable use of linear-regression trends? Why does Professor Brown not do as I have done, and write to the IPCC to make it clear that their wilful misconduct in resorting to flagrant and mendacious abuses of statistical process such as this one is not acceptable? That would be a far more constructive use of his time. If only I were not a lone complainer, the IPCC might start having to do proper science.

      In fact, Professor Brown is coming quite close to saying that there is no value in applying any form of statistical trend to any dataset. That is a perfectly respectable point of view. However, we can either sit idly by and watch the media and the IPCC falsely claiming that global warming is occurring at the predicted and accelerating rate, or find some visually clear and academically precedented method of indicating that global warming is not occurring at the predicted rate. I prefer not to sit on my hands and whine, but to do something about this and several other of the falsehoods being perpetrated for political expediency and financial profit, at great cost not only in treasure but in lives. I invite Professor Brown to raise his game, and address to the IPCC the complaints he has pointlessly addressed to me.

      • “One virtue of displaying the trend line is that it provides the very clearest visual indication that global warming has not been happening over the past decade or two.”
        Likewise, the trend line also shows that global cooling has not been happening over the past decade or two.

      • As an impartial observer with the greatest respect for both you and Dr. Brown, it seems that you may have taken offense where none was intended.
        I always enjoy reading your posts and ripostes.

  26. Only one quibble with this otherwise great article:

    “the possibility that the Pause is occurring because the computer models are simply wrong about the sensitivity of . . . ”

    Sounds like the models are actually the cause of the pause . . . even this proudly skeptical non-scientist wouldn’t go that far. . .

    Cause of the Pause, though – has a nice ring to it : )

  27. Christopher Monckton,

    Consider the intellectually dishonest terminology used to describe the behavior in recent decades of GASTA time series data and the RSS / UAH satellite time series data. Please see my comment just posted at another WUWT thread { } .

    Here is that comment.

    John Whitman
    September 5, 2014 at 10:20 am


    Well, terminology is the really funny thing with respect to describing the behavior in recent decades of the GASTA time series data and the satellite time series data.

    hiatus of AGW

    pause of AGW

    plateau of AGW

    leveling off of AGW

    suspension of AGW

    flattening of AGW

    Any reference to AGW in the description of the behavior is not just biased evaluation; it is irrelevant and unnecessary evaluation.

    The accurate description of the behavior in recent decades of the GASTA time series data and the satellite time series data is no statistically significant change in the temperature.


    Also, on a different thought, I was recently very interested in and convinced by a recent comment by rgbatduke here on this thread and on another recent WUWT thread about the fundamental intellectual error committed by those who apply linear trends to either GASTA time series data or the RSS / UAH satellite time series data for the purpose of explaining a portion of a time series.


    • I agree with Mr Whitman that those who apply linear trends to surface or lower-troposphere global temperature trends for the purpose of explaining any part of a time series commit a fundamental intellectual error: for such trend-lines do not “explain” anything (still less do they predict anything): they merely provide one method of visualizing what has actually occurred.

      Finding the reasons why there has been no global warming for a decade or two is one of the current hot topics in climate research, which is why more than 50 mutually inconsistent explanations have been conjured into being. Very nearly all of these explanations suffer from a fundamental intellectual error: they are untestable guesswork and are not, stricto sensu, science at all.

      However, I disagree with his implication that it is better to describe the recent Great Pause as not “statistically significant” rather than as non-existent. For statistical significance is a notoriously slippery concept, whereas my finding the longest period each month during which there has been no global warming at all, using the statistical method that is more common than any other in the analysis of temperature change, has the merit of specificity, clarity, and self-consistency. It has been interesting to note how the Great Pause has been inexorably lengthening, and how this behavior is in ever starker contrast to the behavior that had been predicted with “substantial” – but misconceived – “confidence”.

  28. To those who complain that my temperature graph is somehow unfair, I reply that the rules of the game are clearly enough stated. I simply determine the earliest month in the recent record since when the temperature data show a zero least-squares linear-regression trend. The IPCC had originally predicted global warming of about 0.3 K/decade over the near term. The fact that the trend has been zero for approaching two decades indicates clearly that the IPCC was wrong. It is as simple as that: and no amount of wriggling will alter the fact that the predictions of the models are manifestly and flagrantly wrong, as my graphs reveal with a clarity that is, no doubt, painful to some.

  29. Mr Tracton’s response to my reply to Professor Brown is that a zero trend-line indicates not only lack of warming but also lack of cooling. But I had already made that point explicit in my reply to the Professor. Now he says I should use the UAH as well as RSS datasets. It may be that he has not read the head posting, in which two of the three graphs feature the UAH as well as the RSS dataset.

    Also, I provide detailed updates based on all five principal datasets every few months. But the monthly report on the RSS dataset, which usually provides its monthly value before any of the others, has become a regular feature here, and – to answer a point by an earlier commenter – successive updates show that the Great Pause has been gradually lengthening, though I expect that to change as the current el Niño begins to bite.

    Besides, UAH is currently undergoing a revision that will remove its hot running over recent decades and bring it closer to the mean of the three terrestrial datasets, which show no global warming for well over 13 years.

    Whichever way one slices and dices the datasets, the result is the same: the world has not been warming since 1990 at anything like the rate that was then predicted, and has not been warming recently at all.

    Since the greenhouse theory indicates that, even with strongly negative temperature feedbacks, there should have been some global warming over recent decades, and since the climate, being a chaotic object, is deterministic, there must be reasons for the Great Pause that are manifestly under-represented in current climate models, raising legitimate questions about whether some of the comparatively rapid warming from 1976-2001 may have been attributable not to Man but to Nature, and about whether climate sensitivity – in the short to medium term, at any rate – is anything like as high as the IPCC profits by persuading the feeble-minded to believe.

  30. Mr Ross asks whether I have taken offense at Professor Brown’s comment. Not in the least. I much admire and enjoy his vigorous comments and occasional rants here (admire the former and enjoy the latter). I should very much like the Professor to try to influence the IPCC to stop using fraudulent statistical techniques as a way to sex up the global warming dossier. And if, at the same time, he wants to point out that the IPCC is using silly global-warming intervals and is leaning too heavily on statistical methods rather than simply eyeballing the data and coming to the common-sense conclusion that the predicted rate of warming is not occurring, so much the better.

  31. Lord Monckton,


    “The fastest measured centennial warming rate was in Central England from 1663-1762, at 0.9 Cº/century – before the industrial revolution. It was not our fault.”

    You may wish to check this. According to CET annual data, the period 1663-1762 warmed at 0.86C/century. The periods 1908-2007 and 1909-2008 both warmed at 0.87C/century.

    Although all three periods warmed at 0.9C/century to one decimal place, the fastest measured centennial warming rate using the annual averages was, by a tiny fraction, that of the period 1909-2008.

    • Further to the above, should CET for 2014 average 9.8C or greater, then the period 1915-2014 will set a new fastest measured centennial warming rate for the series.

    • I have checked the points raised by DavidR and can report as follows:

      1. One should use monthly data, not annual data, wherever possible. It is the monthly records that we analyze in these columns. The monthly data for the Central England Temperature Record show warming of 0.90 K over the century 1663-1762. The monthly data for 1909-2008, taken as the mean of the HadCRUT4, GISS, and NCDC terrestrial datasets, showed warming of 0.79 K over the century. However, the monthly data on the Central England Temperature Record for 1909-2008 show warming of 0.91 K.

  32. Lord Monckton,

    Thank you for the above. Using the monthly data I find that the periods 1908-2007 and 1909-2008 both have a warming rate of 0.91 K over the century. Also, the period 1907-2006, although 0.90 K for the century, is also fractionally faster than the period 1663-1762, which I have in 4th place.

    Should Central England Temperature (CET) values for the remaining four months of 2014 (Sept-Dec) remain anywhere close to their respective 1961-90 averages then the warming over the century 1915 to 2014 should easily set a new fastest centennial warming rate record.

    • The central point remains, however, that the rate of warming during a period with high CO2 concentrations is just about the same as the rate during a period with low CO2 concentrations. One should not lose sight of the main point.

  33. “On the RSS satellite data, there has been no global warming statistically distinguishable from zero for more than 26 years.”

    How bizarre! Every global temperature series, and even CET, shows that most of the warming was from 1988 onwards.

    “The ratio of el Niños to la Niñas tends to fall during the 30-year negative or cooling phases of the Pacific Decadal Oscillation, the latest of which began in late 2001.”

    El Nino frequency and intensity increases sharply through the coldest parts of solar minima, there is no chance of a 30 year negative PDO mode with that happening soon.

    • The statement that there has been no significant trend in RSS data over the past 26 years is from the recently published paper by McKitrick. There is no reason I can see for believing it is incorrect. If you find it ‘bizarre’, your problem is with standard statistical hypothesis testing. As rgbatduke points out, it just means the probability that the past 26 years of temperatures are consistent with no trend exceeds 0.05 (just).

  34. Ulric Lyons,

    What, exactly, is “bizarre”? No one disagrees with the fact that there has been global warming. It is the planet’s natural recovery from the LIA. But it stopped around 1997. The UAH satellite data shows the same thing: no global warming after 1997.

    Global warming has stopped, Ulric. For many years now. It may resume at some future time. Or not. Or, the planet may begin to cool. At this point, no one knows. All we know is that global warming stopped a long time ago.

    • Mr. Stealey,

      You attempted further up this thread to justify your claim that “global warming stopped at least 18 years ago” by using a graph of oceanic heat content covering 8 years. Yes, only 8 years. Why?? You can find graphs of OHC covering more than 18 years on WUWT, let alone elsewhere on the Web, as I pointed out to you in response to your claim. These show no sign of such a halt, yet you keep on with your dogmatic assertion.

      At least Eschenbach has the intelligence to recognise the problem that the rise in OHC is causing to AGW-gainsayers, though his attempt to show that this rise is insignificant was, frankly, risible.

  35. Ulric Lyons,

    “No one disagrees with the fact that there has been global warming. It is the planet’s natural recovery from the LIA.”

    What physical process has driven “the planet’s natural recovery from the LIA”?

    • David R says:

      What physical process has driven “the planet’s natural recovery from the LIA”?

      Good question. What physical process caused the LIA in the first place?


      Bill H says:

      …only 8 years. Why??

      Because that is an ARGO chart, and that’s when ARGO started.

      Here are some charts of ocean heat content [OHC] and SST:

      chart 1 [10 year chart]

      chart 2 Notice the “adjustments”. They constantly do this, when the data does not show what they want: global warming. That is dishonest, unless they show a step-by-step methodology, from the raw data to the ‘adjusted’ [non]data. Since they don’t show how or why they did the adjustments, the final result should be disregarded.

      chart 3 Another “adjustment”, made without any methodological explanation. The funding for these agencies is dependent upon keeping the global warming scare alive. They have a vested self-interest in making adjustments that show more warming than there really is. A true scientific skeptic questions all but raw data.

      You make assertions, Bill H, but that is all they are. I post data. If you disagree, do the same.


      Ulric Lyons, can you provide a provenance for this chart you posted? Thanks. It looks like an anomaly chart, which would not show a trend.

      Also, you still say:

      Bizarre to claim statistically zero warming for the last 26 years, when most of the warming was from 1988 onwards.

      As I stated above, I do not disagree that there has been warming since 1988. But that warming stopped around 1997. If you don’t like RSS data, then global warming still stopped many years ago. Every major organization including the IPCC admits that now. Please argue with them if you disagree.

  36. Gentlemen, it is interesting to peruse your discussion. I am an engineer who has always doubted anthropogenic global warming. I am also a politician (State Representative). My suspicion is that water vapor, by virtue of its concentration, plays a much greater role than CO2 in the greenhouse effect, and would therefore have a generally stabilizing effect. The sun would have a much greater effect on deviations in water vapor. The way I explain it to my colleagues is: sun very very big; earth big; mankind very very small.

    I also do a fair amount of computational fluid dynamics modeling of furnaces. Same types of models, different application. Since we model the furnace, a chaotic system, as though it were at steady state, and since we calibrate the model results to averages taken over extended time periods – hours, in my case – I always maintain that the one sure thing we know about our modeling results is that they are wrong, because the conditions modeled never actually exist in fact.
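    The furnace point – that a steady-state model is calibrated against time averages of states that never actually occur – can be illustrated with the simplest chaotic system going. The logistic map here is a hypothetical stand-in for a furnace, nothing more: its long-run time average sits nowhere near its steady-state (fixed-point) solution.

```python
# In a chaotic system the long-run time average need not be anywhere near
# the steady-state solution. The logistic map is a toy stand-in.
x = 0.2
total = 0.0
n = 100_000
for _ in range(n):
    x = 4.0 * x * (1.0 - x)  # chaotic logistic map, r = 4
    total += x

time_average = total / n  # hovers near 0.5 for the r=4 invariant density
fixed_point = 0.75        # the "steady state": solves x = 4x(1-x)

assert abs(time_average - fixed_point) > 0.2
```

    Calibrating to the 0.5-ish average tells you nothing about the 0.75 steady state, and vice versa – which is the engineer's point about modeling a chaotic furnace as if it sat at equilibrium.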

    • Daniel,
      You are 100% correct. I also am an engineer, and while I am more familiar with solid-mechanics computer models (FEA), I have had a lot of exposure to the results of CFD studies for systems that are orders of magnitude smaller and less chaotic than climate models. CFD is useful for comparing configurations; however, the divergence of such models from real-world data gives rise to skepticism about using them without validation against real-world data.
      No experienced engineer would trust CFD results alone for an important design of complex systems.

  37. Nice one, again, Christopher. 215 months, no warming. The IPCC still estimates +4.8 Cº to 2100, which seems more and more impossible and surrealistic. CO2 rising, temperature at a standstill; the IPCC thinks the warming hides in the oceans, but the warming is only to be found in their increasingly reddening faces. Let them hide the decline, hide the pause, hide reality, and keep believing in their PlayStation 7 and their garbage-in, garbage-out £1,000 billion models. Let freedom ring.

  38. Hi, I would like to add a point: it is insignificant whether the temperature rises or falls over the next few years. I expect the one or the other. It changes nothing about whether, and by how much, climate change is influenced by man.
    Only when we are able to distinguish unambiguously between the one and the other can we really tell.
