Temperature tampering temper tantrums

By Christopher Monckton of Brenchley

Commenters on my recent threads explaining the gaping error my team has found in official climatology’s definition of “temperature feedback” have asked whether I will update my series pointing out the discrepancy between the overblown predictions in IPCC’s First Assessment Report of 1990, on which the climate scam was based, and the far less exciting reality. The series also revealed some of the dodgy tricks used by the keepers of the principal global-temperature datasets to make global warming look worse than they had originally reported.

I used to use the RSS satellite dataset as my chief source, because it was the first to publish its monthly data. However, in November 2015, when that dataset had shown no global warming for 18 years 9 months, Senator Ted Cruz displayed our graph of RSS data demonstrating the length of the Pause during a U.S. Senate hearing and visibly discomfited the “Democrats”, who wheeled out an Admiral, no less, to try – unsuccessfully – to rebut it. I predicted in this column that Carl Mears, the keeper of that dataset, would in due course copy all three of the longest-standing terrestrial datasets – GISS, NOAA and HadCRUT4 – in revising his dataset in a fashion calculated to eradicate the long Pause by showing a great deal more global warming in recent decades than the original, published data had shown.


[Fig 1.] The least-squares linear-regression trend on the pre-revision RSS satellite monthly global mean lower-troposphere temperature anomaly dataset showed no global warming for 18 years 9 months from February 1997 to October 2015, though one-third of all anthropogenic forcings had occurred during the period of the Pause. Ted Cruz baited Senate “Democrats” with this graph in November 2015.
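The “least-squares linear-regression trend” used throughout this series is nothing more exotic than an ordinary straight-line fit to the monthly anomalies. A minimal sketch in Python (the anomaly series here is a made-up ramp, not the actual RSS data):

```python
import numpy as np

def trend_k_per_century(anomalies):
    """Least-squares linear trend of a monthly anomaly series, in K/century."""
    months = np.arange(len(anomalies))
    slope_per_month = np.polyfit(months, anomalies, 1)[0]
    return slope_per_month * 12 * 100  # per month -> per century

# A series rising 0.01 K per month fits to exactly 12 K/century:
ramp = 0.01 * np.arange(120)
print(round(trend_k_per_century(ramp), 2))  # -> 12.0
```

A flat series, such as the RSS anomalies during the Pause, fits to a slope near zero however noisy the individual months may be.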

Sure enough, the very next month Dr Mears (who uses the RSS website as a bully-pulpit to describe global-warming skeptics as “denialists”) brought his dataset kicking and screaming into the Adjustocene by duly tampering with the RSS dataset to airbrush out the Pause. He had no doubt been pestered by his fellow climate extremists to do something to stop the skeptics pointing out the striking absence of any global warming whatsoever during a period when one-third of Man’s influence on climate had arisen. And lo, the Pause was gone –


[Fig 2.] Welcome to the Adjustocene: RSS adds 1 K/century to what had been the Pause

As things turned out, Dr sMear need not have bothered to wipe out the Pause. A large el Niño event did that anyway. However, an interesting analysis by Professor Fritz Vahrenholt and Dr Sebastian Lüning (at diekaltesonne.de/schwerer-klimadopingverdacht-gegen-rss-satellitentemperaturen-nachtraglich-um-anderthalb-grad-angehoben) concludes that the dataset, having been thus tampered with, can no longer be considered reliable. The analysis sheds light on how the RSS dataset was massaged. The two scientists conclude that the ex-post-facto post-processing of the satellite data by RSS was insufficiently justified –


[Fig 3.] RSS monthly global mean lower-troposphere temperature anomalies, January 1979 to June 2018. The untampered version is in red; the tampered version is in blue. Thick spline-curves represent the simple 37-month moving averages. Graph by Professor Ole Humlum from his fine website at www.climate4you.com.

RSS racked up the previously-measured temperatures from 2000 on, increasing the overall warming since 1979 by 0.15 K, or about a quarter, from 0.62 K to its present 0.77 K –
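As a quick arithmetic check on the figures just quoted (the 0.62 K and 0.77 K are the article’s numbers; the snippet merely verifies the “about a quarter”):

```python
# Overall warming since 1979 before and after the RSS revision, in K.
old_warming, new_warming = 0.62, 0.77
increase = new_warming - old_warming
print(round(increase, 2), round(increase / old_warming, 2))  # -> 0.15 0.24
```

An increase of 24% on the pre-revision figure: about a quarter, as stated.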


[Fig 4.]

You couldn’t make it up, but Lüning and Vahrenholt find that RSS did.

The year before the RSS data were Mannipulated, RSS had begun to take a serious interest in the length of the Pause. Dr Mears discussed it in his blog at remss.com/blog/recent-slowing-rise-global-temperatures. His then results are summarized below –


[Fig 5.]  (Orig Figure T1) Output of 33 IPCC models (turquoise) compared with measured RSS global temperature change (black), 1979-2014.

Dr Mears had a temperature tantrum and wrote:

“The denialists like to assume that the cause for the model/observation discrepancy is some kind of problem with the fundamental model physics, and they pooh-pooh any other sort of explanation.  This leads them to conclude, very likely erroneously, that the long-term sensitivity of the climate is much less than is currently thought.”

Dr Mears conceded the growing discrepancy between the RSS data and the models, but he alleged we had “cherry-picked” the start-date for the global-temperature graph:

“Recently, a number of articles in the mainstream press have pointed out that there appears to have been little or no change in globally averaged temperature over the last two decades.  Because of this, we are getting a lot of questions along the lines of ‘I saw this plot on a denialist web site.  Is this really your data?’  While some of these reports have ‘cherry-picked’ their end points to make their evidence seem even stronger, there is not much doubt that the rate of warming since the late 1990s is less than that predicted by most of the IPCC AR5 simulations of historical climate.  … The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope.”

In fact, the spike caused by the el Niño of 1998 was almost entirely offset by two factors: the not dissimilar spike of the 2010 el Niño, and the sheer length of the Pause itself.


[Fig 6.] Graphs by Werner Brozek and Professor Brown for RSS and GISS temperatures starting both in 1997 and in 2000. For each dataset the trend-lines are near-identical. Thus, the notion that the Pause was caused by the 1998 el Niño is false.

The above graph demonstrates that the trends in global temperatures shown on the pre-tampering RSS dataset and on the GISS dataset were near-identical whether the trend period began before or after the 1998 el Niño, showing that the length of the Pause was enough to nullify its imagined influence.
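The point about record length can be illustrated with synthetic data: plant the same 12-month, 0.5 K spike near the start of a flat series and watch its tilt on the fitted trend shrink as the record lengthens. This is a toy sketch, not the real RSS or GISS series:

```python
import numpy as np

def trend_k_per_century(series):
    """Least-squares trend of monthly values, in K/century."""
    x = np.arange(len(series))
    return np.polyfit(x, series, 1)[0] * 12 * 100

for n_months in (60, 120, 240):        # 5, 10 and 20 years of record
    series = np.zeros(n_months)
    series[12:24] = 0.5                # an ENSO-like spike in year 2
    print(n_months, round(trend_k_per_century(series), 2))
# -> 60 -4.8
# -> 120 -2.1
# -> 240 -0.64
```

The early spike always drags the fitted slope downward, but by the time the record spans two decades the effect has shrunk to a fraction of what it was on a five-year record, which is why the 1997 and 2000 start-dates give near-identical trends.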

It is worth comparing the warming since 1990, taken as the mean of the four Adjustocene datasets (RSS, GISS, NCEI and HadCRUT4: first graph below), with the UAH dataset that Lüning and Vahrenholt commend as reliable (second graph below) –


[Fig 7.] Mean of the RSS, GISS, NCEI and HadCRUT4 monthly global mean surface or lower-troposphere temperature anomalies, January 1990 to June 2018 (dark blue spline-curve), with the least-squares linear-regression trend on the mean (bright blue line), compared with the lesser of two IPCC medium-term prediction intervals (orange zone).


[Fig 8.] RSS lower-troposphere anomalies and trend for January 1990 to June 2018

It will be seen that the warming trend in the Adjustocene datasets is almost 50% greater over the period than that in the RSS dataset that Lüning and Vahrenholt find more reliable.

After the adjustments, the RSS dataset since 1990 now shows more warming than any other dataset, even the much-tampered-with GISS dataset –


[Fig 9.]  Centennial-equivalent global warming rates for January 1990 to June 2018. IPCC’s two mid-range medium-term business-as-usual predictions and our revised prediction based on correcting climatology’s error in defining temperature feedback (white lettering) are compared with observed centennial-equivalent rates (blue lettering) from the five longest-standing datasets.

Note that RSS’ warming rate since 1990 is close to double that from UAH, which had revised its global warming rate downward two or three years ago. Yet the two datasets rely upon precisely the same satellite data. The difference of almost 1 K/century in the centennial-equivalent warming rate shows just how heavily dependent the temperature datasets have become on subjective adjustment rather than objective measurement.

Should we cynically assume that these adjustments – up for RSS, GISS, NCEI and HadCRUT4, and down for UAH – reflect the political prejudices of the keepers of the datasets? Lüning and Vahrenholt can find no rational justification for the large and sudden alteration to the RSS dataset so soon after Ted Cruz had used our RSS graph of the Pause in a Senate hearing. However, they do not find the UAH data to have been incorrectly adjusted. They commend UAH as sound.

The “MofB” hindcast is based on two facts: first, that we calculate Charney sensitivity to be just 1.17 K per CO2 doubling, and secondly that in many models the predicted equilibrium warming from doubled CO2 concentration, the “Charney sensitivity”, is approximately equal to the predicted transient warming from all anthropogenic sources over the 21st century. This is, therefore, a rather rough-and-ready prediction: but it is more consistent with the UAH dataset than with the questionable Adjustocene datasets.

The extent of the tampering in some datasets is enormous. Here is another splendidly revealing graph from the tireless Professor Humlum, who publishes a vast range of charts on global warming in his publicly-available monthly report at climate4you.com –


[Fig 10.] Mann-made global warming: how GISS boosted apparent warming by more than half.

GISS, whose dataset is now so politicized as to render it valueless, sMeared the data over a period of less than eight years from March 2010 to December 2017 so greatly as to increase the apparent warming rate over the 20th century by just over half. The largest change came in March 2013, by which time my monthly columns here on the then long-running Pause had already become a standing embarrassment to official climatology. Only the previous month, the now-disgraced head of the IPCC, railroad engineer Pachauri, had been one of the first spokesmen for official climatology to admit that the Pause existed. He had done so during a speech in Melbourne that was reported by just one newspaper, The Australian, which has long been conspicuous for its willingness faithfully to reflect both sides of the climate debate.

What is fascinating is that, even after the gross data tamperings towards the end of the Pause by four of the five longest-standing datasets, and even though the trend on all datasets is also somewhat elevated by the large el Niño of a couple of years ago, IPCC’s original predictions from 1990, the predictions that got the scare going, remain egregiously excessive.

Even IPCC itself has realized how absurd its original predictions were. In its 2013 Fifth Assessment Report, it abandoned its reliance on models for the first time, substituted what it described as its “expert judgment” for their overheated outputs, and all but halved its medium-term prediction. Inconsistently, however, it carefully left its equilibrium prediction – 1.5 to 4.5 K warming per CO2 doubling – shamefully unaltered.

IPCC’s numerous unthinking apologists in the Marxstream media have developed a Party Line to explain away the abject predictive failure of IPCC’s 1990 First Assessment Report and even to try to maintain, entirely falsely, that “It’s worser than what we ever, ever thunk”.

One of their commonest excuses, trotted out with the glazed expression, the monotonous delivery and the zombie-like demeanor of the incurably brainwashed, is that thanks to the UN Framework Convention on Global Government Climate Change the reduction in global CO2 emissions has been so impressive that emissions are now well below the “business-as-usual” scenario A in IPCC (1990) and much closer to the less extremist scenario B.

Um, no. Even though official climatology’s CO2 emissions record is being hauled into the Adjustocene, in that it is now being pretended that – per impossibile – global CO2 emissions are unchanged over the past five years, the most recent annual report on CO2 emissions shows them as near-coincident with the “business-as-usual” scenario in IPCC (1990) –


[Fig 11.] Global CO2 emissions are tracking IPCC’s business-as-usual scenario A

When that mendacious pretext failed, the Party developed an interesting fall-back line to the effect that, even though emissions are not, after all, following IPCC’s Scenario B, the consequent radiative forcings are a lot less than IPCC (1990) had predicted. And so they are. However, what the Party Line is very careful not to reveal is why this is the case.

The Party realized that its estimates of the cumulative net anthropogenic radiative forcing from all sources were high enough in relation to observed warming to suggest a far lower equilibrium sensitivity to radiative forcing than originally decreed. Accordingly, by the Third Assessment Report IPCC had duly reflected the adjusted Party Line by waving its magic wand and artificially and very substantially reducing the net anthropogenic forcing by introducing what Professor Lindzen has bluntly called “the aerosol fudge-factor”. The baneful influence of this fudge-factor can be seen in IPCC’s Fifth Assessment Report –


[Fig 12.] Fudge, mudge, kludge: the aerosol fudge-factor greatly reduces the manmade radiative forcing and falsely boosts climate sensitivity (IPCC 2013, fig. SPM.5).

IPCC’s list of radiative forcings compared with the pre-industrial era shows 2.29 Watts per square meter of total anthropogenic radiative forcing relative to 1750. However, this total would have been considerably higher without the two aerosol fudge-factors, totaling 0.82 Watts per square meter. If just over two-thirds of this total is added back, as it should be, for anthropogenic aerosols are as nothing to such natural aerosols as the Saharan winds that can dump sand as far north as Scotland, the net anthropogenic forcing becomes 2.85 Watts per square meter. Here is how that makes a difference to apparent climate sensitivity –


[Fig 13.] How the aerosol fudge-factor artificially hikes the system-gain factor A.

In the left-hand panel, the reference sensitivity (the anthropogenic temperature change between 1850 and 2011 before accounting for feedback) is the product of the Planck parameter 0.3 Kelvin per Watt per square meter and IPCC’s 2.29 W m⁻² mid-range estimate of the net anthropogenic radiative forcing in the industrial era to 2011: i.e., 0.68 K.

Equilibrium sensitivity is a little more complex, because official climatology likes to imagine (probably without much justification) that not all anthropogenic warming has yet occurred. Therefore, we have allowed for the mid-range estimate in Smith (2015) of the 0.6 W m⁻² net radiative imbalance to 2009, converting the measured warming of 0.75 K from 1850 to 2011 into an equilibrium warming of 1.02 K.

The system-gain factor, using the delta-value form of the system-gain equation that is at present universal in official climatology, is the ratio of equilibrium to reference sensitivity: i.e. 1.5. Since reference sensitivity to doubled CO2, derived from CMIP5 models’ data in Andrews (2012), is 1.04 K, Charney sensitivity is 1.5 x 1.04 or 1.55 K.
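The arithmetic of the left-hand panel can be retraced in a few lines. The constants are the article’s; the imbalance-to-equilibrium conversion (scaling observed warming by ΔF/(ΔF − N)) is my assumed reading of that step, and carrying full precision through gives 1.48 and 1.54 rather than the rounded 1.5 and 1.55 quoted above:

```python
planck = 0.3       # K per (W/m^2), Planck parameter
forcing = 2.29     # W/m^2, IPCC mid-range net anthropogenic forcing to 2011
observed = 0.75    # K, measured warming 1850-2011
imbalance = 0.6    # W/m^2, Smith (2015) mid-range radiative imbalance

ref_sens = planck * forcing                              # reference sensitivity
eq_warming = observed * forcing / (forcing - imbalance)  # equilibrium warming
gain = eq_warming / ref_sens                             # delta-value system-gain factor
charney = gain * 1.04                                    # Charney sensitivity, K per doubling
print(round(ref_sens, 2), round(eq_warming, 2), round(gain, 2), round(charney, 2))
# -> 0.69 1.02 1.48 1.54
```

The small discrepancies with the 0.68, 1.5 and 1.55 in the text arise only from where one rounds the intermediate values.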

In the right-hand panel, just over two-thirds of the 0.82 W m⁻² aerosol fudge-factor has been added back into the net anthropogenic forcing, making it 2.85 W m⁻². Why add it back? Well, without giving away too many secrets, official climatology has begun to realize that the aerosol fudge factor is very much too large. It is so unrealistic that it casts doubt upon the credibility of the rest of the table of forcings in IPCC (2013, fig. SPM.5). Expect significant change by the time of the next IPCC Assessment Report in about 2020.

Using the corrected value of net anthropogenic forcing, the system-gain factor falls to 1.13, implying Charney sensitivity of 1.13 x 1.04, or 1.17 K.
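Repeating the same arithmetic with the fudge-factor partly restored (the 2.85 is the article’s corrected forcing; the equilibrium-conversion step is again my assumed reading, and it lands near, though not exactly on, the 1.13 and 1.17 quoted, the residue being a matter of rounding):

```python
planck = 0.3       # K per (W/m^2)
forcing = 2.85     # W/m^2: 2.29 plus just over two-thirds of the 0.82 fudge-factor
observed = 0.75    # K, measured warming 1850-2011
imbalance = 0.6    # W/m^2

ref_sens = planck * forcing
eq_warming = observed * forcing / (forcing - imbalance)
gain = eq_warming / ref_sens
charney = gain * 1.04
print(round(gain, 2), round(charney, 2))  # -> 1.11 1.16, close to the quoted 1.13 and 1.17
```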

Let us double-check the position using the absolute-value equation that is currently ruled out by official climatology’s erroneously restrictive definition of “temperature feedback” –


[Fig 14.] The system-gain factor for 2011: (left) without and (right) with fudge-factor correction

Here, an important advantage of using the absolute-value system-gain equation ruled out by official climatology’s defective definition becomes evident. Changes in the delta values cause large changes in the system-gain factor derived using climatology’s delta-value system-gain equation, but very little change when it is derived using the absolute-value equation. Indeed, using the absolute-value equation the system-gain factors for 1850 and for 2011 are just about identical at 1.13, indicating that under modern conditions non-linearities in feedbacks have very little impact on the system-gain factor.
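The absolute-value check can likewise be sketched. The 287.5 K surface temperature and 255.4 K emission temperature are commonly-cited round figures assumed here for illustration, and the small pre-industrial greenhouse contribution to the reference temperature is ignored; this is the shape of the argument, not the authors’ exact calculation:

```python
t_eq_1850 = 287.5    # K, approximate equilibrium surface temperature in 1850 (assumed)
t_ref_1850 = 255.4   # K, approximate emission (reference) temperature (assumed)

gain_1850 = t_eq_1850 / t_ref_1850
# For 2011, add the industrial-era equilibrium warming and reference sensitivity
# to the absolute values:
gain_2011 = (t_eq_1850 + 1.02) / (t_ref_1850 + 0.69)
print(round(gain_1850, 2), round(gain_2011, 2))  # -> 1.13 1.13
```

Note how insensitive the absolute ratio is to the industrial-era deltas: a 1 K change in the numerator moves the factor by less than half a percent, which is why the 1850 and 2011 values come out just about identical.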

Bottom line: No amount of temperature-tampering tantrums will alter the fact that, whether one uses the delta-value equation (Charney sensitivity 1.55 K) or the absolute-value equation (Charney sensitivity 1.17 K), the system-gain factor is small and, therefore, so are equilibrium sensitivities.

Finally, let us enjoy another look at Josh’s excellent cartoon on the Adjustocene –


[Fig 15.]

261 thoughts on “Temperature tampering temper tantrums”

  1. I still don’t know why people bother to do any calculations…today
    ..we won’t know what today’s temperature is for at least another 50 years

      • well…put yourself in the shoes of these people that run the climate models
        It takes a lot of effort to put all that data in there….then takes a long time to run them

        …and they will never be right

        No matter what temp history they feed into the model today….it won’t be the same tomorrow

        The climate models are total junk…

        They hyped up the temp history to show a faster rate of warming..to try and scare everyone…..and are so stupid didn’t realize that permanently screws up the models

        …and the models reflect that….by showing the same fake faster rate of warming

  2. The Viscount Monckton writes,

    “Note that RSS’ warming rate since 1990 is close to double that from UAH, which had revised its global warming rate downward two or three years ago. Yet the two datasets rely upon precisely the same satellite data. The difference of almost 1 K/century in the centennial-equivalent warming rate shows just how heavily dependent the temperature datasets have become on subjective adjustment rather than objective measurement.”

    Here is a selected excerpt, worth reading, from a published paper (Taylor & Francis) showing HOW and WHY corrections are made for RSS and UAH:

    Examination of space-based bulk atmospheric temperatures used in climate research

    This was done because these residual trend differences were very likely due to the changing impact of solar heating on the relatively rapidly drifting p.m. instruments. With the truncation of NOAA-14 data in 2001 and the trend adjustment based on simultaneous comparison with NOAA-12 and NOAA-15, the NOAA-14 trend difference in UAH data was considerably reduced. NOAA-12 was not impacted, however, as it was assumed to be stable. The fact that the US VIZ comparison indicates the relative warming of the satellites begins with NOAA-12 is a strong indication that it too was characterized by a spurious warming trend that was not accounted for in the UAH trend adjustment. In any case, this adjustment procedure is a partial explanation for the result that relative to the other satellite datasets in nearly all comparisons, UAH correlates highest, has the lowest magnitude of differences and the least difference in trends.
    On the other hand, RSS (and likely NOAA and UW in some manner) choose to retain the relatively warm trend of NOAA-14, which they termed an ‘unexplained mystery’ (Mears and Wentz 2016).

    Mears, C. A., and F. J. Wentz. 2016. “Sensitivity of Satellite-Derived Tropospheric Temperature Trends to
    the Diurnal Cycle Adjustment.” Journal of Climate 29: 3629–3646. doi:10.1175/JCLI-D-15-0744.1.

    This, combined with a likely spurious warming of NOAA-12, produces the effect of ‘lifting’ the post-NOAA-14 time series up, producing a more positive trend.


    • “This was done because these residual trend differences were … ”. I haven’t looked for the detail, but this could be dodgy. You can’t change the data to make the trend behave as you think it should, and then derive any conclusions from the data about the trend.

      • Analysis of the RSS changes by Christy and Spencer was reported by Roy Spencer in two articles on his website


        The conclusion was that, while some aspects were not revealed by RSS, 80% of the increase was due to adding data from an older, dying satellite that UAH had stopped using about a decade earlier. UAH could no longer make useful corrections to the instrument readings resulting from the satellite’s decaying orbit and resulting instrument heating by the increasing atmospheric friction. RSS seemed also to have taken the position that, since there were so many unknown factors with that satellite, they were not going to even try to correct anything, thus adding an even larger warming bias.

        The other 20% of the increase comes mainly from changes in the climate model RSS uses to adjust (calibrate) the satellite data. Unlike UAH, RSS uses only model projections, not real measured data, to verify the calculations.

        • The comments by AndyHce and Sunsettomy are most helpful in explaining just why UAH’s dataset is to be preferred to that of RSS, and how the discrepancy between the two came about.

        • Actually, Spencer and Christy don’t have a single evidence that NOAA-15 is right and NOAA-14 wrong. They just “know” what is right and pick the satellite that give the desired low trend (based on their preconceived beliefs, I presume).
          The RSS team have lengthy discussions in their method papers on this issue, they can’t find any error in either of the satellites, and keep both to split the error.
          The UAH team sneaks their significant satellite “choice” through peer-review with a subordinate clause in a figure caption.
          RSS say that they want the satellite data to be independent, so they don’t use radiosondes, reanalyses, etc. to decide which of NOAA-14 and NOAA-15 is right.

          I have looked into the matter and ALL other data; radiosondes, reanalyses, nearby AMSU-channels, water vapour, surface temps, etc, etc, suggests that NOAA-14 is (mostly) right and NOAA-15 wrong.
          Here is one example:


          UAH drops like a rock compared to the neighbour channels during the period when NOAA-15 runs alone. The drop stops when the non-drifting Aqua satellite is introduced (actually UAH’s diurnal drift correction is based on the difference between NOAA-15 and Aqua, so NOAA-15 gets drift-corrected by Aqua).
          RSS is only “half wrong” since they have split the error.

          The nonscientific UAH cherry-pick of satellites have produced a dataset with the absolutely lowest trend of all in the AMSU-era. The lower stratosphere trend has become so low so a mighty hotspot pops up all over the globe, and of course especially in the tropics. (Quite ironically since the UAH team don’t believe in hotspots)


          • Olof – I looked at your graphs and I reject your story as biased and hostile.

            The magnitudes of the differences you claim in your first graph are tiny.

            So is the magnitude of the “mighty hotspot” you allege in your graph 2.

            Your obvious bias against the good people at UAH is apparent in you choice of words – for example:

            “Actually, Spencer and Christy don’t have a single evidence that NOAA-15 is right and NOAA-14 wrong. They just “know” what is right and pick the satellite that give the desired low trend (based on their preconceived beliefs, I presume).”


            “The nonscientific UAH cherry-pick of satellites have produced a dataset with the absolutely lowest trend of all in the AMSU-era.”

            I suggest this is not science – it is slander.

          • Oh yes? So you haven’t noticed that the tone has been set by good Lord Monckton when he disingenuously accuses all honest scientists of wrongdoing, more precisely all keepers of temperature datasets except UAH. “Slander”, “bias”, and “hostile” are just the middle names…
            You have no problem with that, I suppose..

            When it comes to facts, actual data..
            The “tiny” difference shown by UAH in my first graph matches the residual difference between AMSU and MSU, shown by RSS in their method paper (Mears and Wentz 2016, fig 7c).
            Take-home-message: NOAA-14 is supported, but not NOAA-15
            (NOAA 14 vs 15 is the single largest uncertainty in the satellite series)

            About the second graph: if you can’t see the AMSU-era hotspot (in blue), i.e. that the trend in the upper troposphere is twice as high as that of the lower troposphere, then you would probably not find any hotspot anywhere, not even in model data.

            I am not saying that the UAH team are cherry-picking satellites intentionally. They may be blinded by confirmation bias. They have a long story of contending very low temperature trends against all others, long before NOAA-15 flew into orbit, and they have been forced to correct their datasets for errors (found by others) several times.

            So until anyone can show any kind of evidence supporting that NOAA-15 is right and NOAA-14 wrong, I will claim that UAH TMT and TLT are flawed datasets due to a biased pick of data, which isn’t a sign of sound scientific practice..

          • Prior to updating UAH, S and C had a much better grasp on positioning than Mears. As you know Mears employs modelling to derive his deliverables. So Mears would have no way of distinguishing drift between 14 and 15. UAH was positioned to distinguish and did inform Mears, who has not implemented the correction. As well, both RSS and UAH have accepted correction from the other. This relationship has been long and fruitful. So which dataset is most accurate? Neither is particularly accurate as both teams will attest; with error bands dwarfing ARGO.
            Something else, “RCP8.5 is not business as usual”. It’s a statement of relative certainty. Are you aware of this?

          • @ Olof,
            I remember the discussion on the satellite data. Nothing you’ve said makes any sense. On the one hand there is plenty of evidence that NOAA/NASA has altered past data sets showing the past was colder, and warming the present in a continuous moving wave.
            And contrary to what AGW believes, I haven’t forgotten that the original temperature data is rotting in a landfill. And as far as I know, still, there is no way of knowing how much the data was altered. From other sources, newspapers, magazines, and other papers, it looks like the data was altered quite a bit.
            Either the temperature record is consistent and verifiable or it’s all fiction.
            They are changing the co2 record as well…. although it looks like lately, NOAA has changed some of it back.
            After 20 years AGW is pretty much fiction whether the data is cherry picked, derivative to death, tortured data, analysis to ad infinitude…. and nobody on the street cares.

          • rishrac

            “….there is plenty of evidence that NOAA/NASA has altered past data sets showing the past was colder, and warming the present in a continuous moving wave.”

            You could say exactly the same thing, only in reverse, about UAH.

            UAH made a much bigger adjustment, in terms of its effect on trend, than any of the surface data sets ever have when it introduced its v6 to replace v5 in 2016.

            I wonder why you don’t object to that adjustment?

          • Namely I have print outs of what the temperatures were. And secondly and most importantly, as I think that co2 follows temperature, what’s the difference between 1998 and 2017 ? Man made co2 was 12 BMT (at least) more in 2017 than in 1998 and the ppm/v in 1998 was 2.93, and in 2017, 3.05 . Production in 1998 was about 18 BMT short of causing the 1.5 ppm/v that showed up. ( atmospheric co2 that’s 9 BMT ).

            Despite increasing production of co2 , the yearly ppm/v did not. The yearly ppm/v does follow temperature and has for the last 60 years.

            NOAA/NASA did have the co2 ppm/v per year going all the way back to 1890 at one point. There were no negative numbers.

  3. Marxstream Media! Perfect. They have been so truth-challenged for so long that most of us call it the lamestream media, but your term is more spot-on.

    • With all due respect Mr. Monckton, while I agree with using terms such as Marxstream Media because it is accurate, how would this sway independent minds who are genuinely interested in researching climate? I ask this because the left won’t concede, even if we descended into an ice age tomorrow. The right is generally skeptical, so the real need is for those undecided and unlearned, but who have fatigue from the narrative.
      I’m listening to Alex Epstein discuss methods for discourse, and as we all know you are quite the wordsmith, so I’m wondering whether, rather than talk in a borderline condescending tone (which is certainly justified), we ought to remove all frustration and emotion from our tone.
      Our frustration is aimed at the virtue signallers who haven’t done an iota of research, the blind faithful to the goddess GAIA, but those on the fence will see such tactics as similar to the warmists.

      Granted, I’m guilty of exactly what I’m saying we shouldn’t do when blasting such antagonists as Mosher and Chris et al., which I need to rein in, but I’m also not contributing such incredible work as yourself in essay form.

      Thank you for your continued efforts and I hope you have not taken offense to my suggestion.

      • This is a sensible request – but applied to a stupid world.

        Humans operate like sheep, so once a divide is established, the vast majority select one side or the other. Those independent minds in the middle are attacked alike by both sides, and rapidly shrink into insignificance.

        Any attempt to produce cogent but bland statements of fact will simply be presented by the other side as a collapse of support, and they will redouble their efforts to have any such statements removed from public view entirely.

        The voice of moderation is drowned by extremists. It’s always been so….

        • Agree.
          If one side in a contentious debate moderates their tone, and the other side makes no change, the result is that the ones making their points more forcefully come off as having more conviction.
          In a perfect world, calm and cool and perfectly polite would win out over brash hyperbole.
          But we are not living in that perfect world, and there are no signs we are heading in that direction.
          Besides that, the science of persuasion dictates that emotional arguments are what get people’s attention.
          If you are citing bland facts and the person you are debating is making impassioned appeals to emotion, you lose the battle of persuasion.
          The Senate hearings were an excellent example of this effect.
          The sad fact is, what we have is more like a fight, a brawl, than a discussion.
          The way you win a fight is, if someone steps on your toes, you punch them in the throat.
          If they poke you with an elbow, you throw them down the stairs.
          Just ask President Trump.

          • Lol, all is fair in love and war. Principles be damned, it’s about winning!

            You can do that if you want, but you can’t then pretend to care about principles.

      • Good point. One problem: Gaia, by water vapour, is the dominant controlling effect of our two-state climate. The oceans keep us in the narrow band between the ice-age, GHE-limited stable state and the 100Ka maximum-perturbation interglacial, limited by clouds, in a very small range of a few degrees since there were oceans to control things.

        The variation of the 340 W/m^2 solar insolation possible naturally by our smart, lagging, adaptive water-vapour atmosphere is over 140 W/m^2, very obviously. That’s the Gaia control; the few W/m^2 stuff is noise, easily mopped up in a few hundred years. What CO2? Nothing to see here, unless you stare down a microscope and believe you are looking at the big picture.

        If you believe in Gaia, you can’t believe in CO2, or any other small effect, as significant within the level of the awesomely powerful controls that power Gaia. That’s cognitive dissonance.

  4. In the “Adjustocene”, climate temperature data are adjusted every month. Of the 1,656 monthly entries in GISTEMP’s Land Ocean Temperature Index (LOTI) published in June, 460 were adjusted in the July edition. Fully 177 of those adjustments were made to data from the 19th century. This goes on every month, and it adds up:

    Comparison of 2002 and 2018 GISS LOTI and Trends:


    For the “Y” axis, GISS says to “Divide by 100 to get changes in degrees Celsius (deg-C).”
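    The divide-by-100 convention is easy to trip over when reading the table, so here is a minimal sketch of the conversion; the entries are invented placeholders, not real GISTEMP values.

    ```python
    # GISTEMP's LOTI table reports anomalies in hundredths of a degree Celsius.
    # Hypothetical example values for illustration only.
    loti_hundredths = [23, -8, 101]            # as published in the table
    anomalies_c = [v / 100 for v in loti_hundredths]
    print(anomalies_c)                         # [0.23, -0.08, 1.01]
    ```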

    • “Comparison of 2002 and 2018 GISS LOTI and Trends:”

      Looking at the chart (the third one down in the head post), it appears they didn’t start adjusting much until 2002. Did something happen in 2002 that they use as an excuse for that?

      • Latitude,

        2002 is the oldest edition (in the current format) that can be found on the Internet Archive’s Wayback Machine. The red 2018 plot and trend are only shown to 2002; they of course continue for the next 16 years.

        The data are changed every month and appear to follow a pattern. Here’s another plot that compares the 2002 edition to the changes made since then to GISTEMP’s LOTI:


        Each plot represents the average of the changes made for that year. As you can see, since the early ’70s all of the changes have increased global temperatures; prior to that date, most of the changes lowered them.

        That the changes follow a pattern is a matter of fact. Why those changes follow a pattern is a matter of opinion.
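        The year-by-year averaging described above can be sketched in a few lines; the two “editions” below are invented placeholder values, not actual GISTEMP data.

        ```python
        # Average the (new - old) change within each year between two editions
        # of a monthly anomaly series. All values are invented placeholders.
        from collections import defaultdict

        edition_2002 = {(1998, 1): 0.60, (1998, 2): 0.86, (1999, 1): 0.47, (1999, 2): 0.63}
        edition_2018 = {(1998, 1): 0.58, (1998, 2): 0.83, (1999, 1): 0.49, (1999, 2): 0.66}

        by_year = defaultdict(list)
        for key, old in edition_2002.items():
            by_year[key[0]].append(edition_2018[key] - old)

        avg_change = {yr: round(sum(d) / len(d), 3) for yr, d in by_year.items()}
        print(avg_change)  # e.g. {1998: -0.025, 1999: 0.025}
        ```

        A plot of those per-year averages is what the comparison chart above shows.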

        • I think I remember several posts about that here a while back. Don’t they run an algorithm that adjusts past temps every time they enter a new set of temps, so the past is adjusted every time?

        • These adjustments are exactly the opposite of what they scientifically should be.

          Consider a single weather station being put into service in the early 1900s. It was known then that the appropriate place to take weather measurements was an open, grass field, away from any vertical features (trees, buildings, etc.) that would have an impact on the radiational environment surrounding the site. Typically, these stations were put at airports, because airports were usually grassy fields away from human populations, with all their buildings and pavement.

          There is nothing that can cause this well-placed, brand-new weather station to produce temperatures that are too warm. Over time, however, the gradual addition of buildings, pavement and vegetation growth would lead to that exact same station showing an artificial warming trend.

          As this describes the majority of reporting stations that are at least 50 to 100 years old, there is no doubt that there is a warm bias in the temperature data. The corrections should be to adjust the latest temperatures down and leave the oldest temperatures bloody well alone.

          While UHI is not the only reason to adjust the temperatures, it is by far the greatest! Our cities are routinely 3 to 5 degrees or more warmer than the countryside that surrounds them, and from whence they came. Because UHI has always been underestimated, the surface temperature record was already too warm before they started adjusting it for political purposes to make it even warmer.

          • This ignores ‘time of observation bias’ and the fact that many if not most airport stations were added later in the series, the result of Met services trying to remove the UHI effect from stations previously located in town centres.

    • Steve,

      Yes, the later trend is bigger than the earlier one. But by how much? We must be talking hundredths of a degree C per decade at most. No person looking at either trend could deny that both are warming to a very similar degree. Why would they corrupt the data to achieve such a minimal effect? It’s not reasonable.

  5. The “adjustments” to RSS were so blatant, all the explanations are as lame as the excuses for Climategate in Wikipedia.

  6. I am still astonished by the ugly answers I received from Lord M on my last comments.
    However, for those interested here, I would recommend reading my post on this

    especially my results on minima.
    It appears global cooling has already started…?

    hence more cooling and precipitation at the lower lats and more dryness at the higher lats

    which will continue in the years to come.

    Most of the USA and Canada are still in the dark about the next “dust bowl” drought coming up soon.

    • We already have drought in Western Canada but it is unlikely that we will have a “dust bowl” as farming practices have changed for the better since the 30’s. Time will tell!

      • You make a valid point, but there has been a major reduction of fallow/rotating land due to corn-for-ethanol conversion. It has a detrimental effect on wildlife, as the tall grass in the fallow acreage offered cover and food during the winter. In the event of such a major dust bowl, these conservation plots would probably offer some capture of drifting soils, and certainly a level of salvation to wildlife.

        • Also large amounts of windbreak trees which were planted decades ago are being removed in order to increase planted acreage.
          I am certain a day will come in which the people that own these lands rue the day they cut and plowed their windbreaks.
          Just a matter of time.

      • John
        Hope you are right,
        as for the topic here, on this post
        I found an uncanny relationship between RSS, UAH, GISS and HadCRUT.
        I do know the sats were completely out a few times and had to be ‘recalibrated’
        but how?
        it seems to me they used the terrestrial sets to re-calibrate on….
        but we know we cannot trust those data……
        All my data for the past 43 years (ending 2015) show a half curve,
        which confirms the sine wave for the 87 years Gleissberg cycle

        2 more decades of cooling coming up….

        more rain and precipitation at the lower lats
        more dryness at the higher lats

  7. First, they denied the existence of the “Pause”. Then, when that didn’t work, they tried to get rid of it, not unlike their campaign against the MWP. The mendacity of the Climate Liars knows no bounds.

    • On the Guardian they call skeptics liars, and then when you give them scientific papers and articles (or even sound logic) to support your argument, negate theirs, and show that you are not lying or misinformed, they make you and your links disappear. They ‘unperson’ you! True evil. Any thoughts on whether it is better to ignore the Guardian altogether and leave the little echo chamber of toxic morons – they get very nasty – to stew in each other’s juices, until they realize that very few people are listening? Or, is it better to battle on in the hope that some might have the curiosity and intelligence to break free from the cult and investigate the issue? I do hate giving them ‘clicks’.

      • Sylvia,
        The Guardian may have prominence on WUWT but it is not particularly influential in the UK, aside from among true believers of the left, whose views are unbending. Its print circulation is a puny 150,000. OK, so many times this number read it online, but you get the point. So, it’s worth looking at to see what the AGW proponents are minded to believe, but no more than that.

      • Sylvia

        Doubtless you have encountered RockyRex if you have been on the Guardian’s Comment is Free section, which is anything but free.

        An odious character who, I understand, is an unannounced moderator. He’s an ex-Geography schoolteacher, I believe, and maintains his own little database of “facts” that he just regurgitates whenever anyone presents a reasonable argument. Of course, many posts opposing his pronouncements simply disappear, and I challenged him on it a number of times. Eventually, my account was deleted.

        My desire in reading the Guardian was to maintain as balanced a perspective as I could. I’m not interested any more, as I recognised so many lies on there even before I pitched up at WUWT expecting a rough ride for asking questions. Nothing could be further from the truth: unlike on the alarmist sites, my questions here were answered with patience and humour no matter how stupid or contentious they were.

        • Yes Ol’ Rocky Rex. There are a few regulars. Erik Friedrikson is another, who seems to have convinced commenters that he knows what he’s talking about by simply quoting large passages of IPCC reports and statements by Mann and Hansen and any alarmist who gives the most frightening predictions. He does this with authority and politeness but he seems to be completely lacking in basic scientific knowledge. He seems to place absolute faith in people in authority, and I get the sense he doesn’t know how to cope when there is peer reviewed evidence which refutes his belief system. The tactic they all take then is to dismiss it as ‘denialism’ and it must have been paid for by the FF industry! Dense, the lot of them.

  8. Monckton of Brenchley says:

    I predicted in this column that Carl Mears, the keeper of that dataset, would in due course copy all three of the longest-standing terrestrial datasets –GISS, NOAA and HadCRUT4 – in revising his dataset in a fashion calculated to eradicate the long Pause by showing a great deal more global warming in recent decades than the original, published data had shown.

    Sure enough, the very next month Dr Mears (who uses the RSS website as a bully-pulpit to describe global-warming skeptics as “denialists”) brought his dataset kicking and screaming into the Adjustocene by duly tampering with the RSS dataset to airbrush out the Pause.

    Similar predictions were made about the University of Colorado’s Sea Level Research Group. They hadn’t updated their chart for over a year, but they had published papers explaining that the rate of sea-level rise should, or was expected to, show acceleration; and sure enough, this past January, Dr. R. Steve Nerem adjusted the data from the early ’90s, which accomplished exactly that.

    By the way, their web page has been down for nearly a week now, so it will be interesting to see what they’re cooking up.

    • Ah. Hark back to Trenberth’s lament in the original Climategate emails: “the data must be wrong.” This attitude is the downfall of science and the beginning of a new Dark Age. Philosophy and religion start with unprovable postulates and proceed to make arguments for how things “ought” to be. Science starts with observation and measurement. The entire purpose of this is to limit the influence of “theory” on the scientist’s conclusions, according to Sir Francis Bacon himself. Doyle had Holmes observe that “… It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. …” Post-normal science accepts that data are theory-“laden”, that the observer has an assumption in mind to begin with. And evidently, post-normal scientists believe it is OK to adjust data because, well, they have already been “polluted.” Oddly, they never object to corrections that make the data look as they expect they should.

      • “Philosophy and religion start with unprovable postulates and proceed to make arguments for how things “ought” to be.”

        Yeah. Like how smart scientists said that the universe was eternal & without beginning, unlike those stupid Jews & Christians. Or that time was an absolute, rather than that “One day is as a thousand years” Bible nonsense. Or that life could easily come about from non-life via random processes. Or that any old universe was likely to host life, no special fine tuned initial conditions, constants, nor laws necessary. Or that life would very gradually change over time in a tree of life, not in sudden bursts.

        Of course, all this ignores the small matter that science was invented only in a philosophical & religious worldview that the universe is rational, uniform, and law-following because it was created by a rational, law-giving God.

  9. Since RSS revised temps upward at about the same time as UAH revised them downward, it is obvious that one of the datasets has been changed to suit the views of the scientists who produce it. Obviously people need to decide for themselves which adjustment is based on sound science and which isn’t. Personally, I find it highly suspicious that RSS, as well as sea-surface temps and hence the thermometer-based datasets, were adjusted upwards not long before the Paris Climate Conference. If it had been pointed out in Paris that there had been a pause of nearly 20 years, it would surely have been an inconvenient truth.

  10. CM,

    To hear you put it in their own words is classic! ….. “It’s worser than what we ever, ever thunk”.

  11. Lord Monckton, KISS. You don’t need a million charts and graphs to explain that a linear model won’t explain a curvilinear variable. The warming effect of added CO2 grows only logarithmically with concentration, so one would not expect a linear trend in temperatures. The very fact that the “adjusted” temperatures are becoming more linear actually rules out CO2 as the cause. Here is a list of simple arguments that almost anyone could understand. We have to start making arguments that the man on the street can understand.

    Comprehensive Climate Change Beatdown; Debating Points and Graphics to Defeat the Warmists

    • There was a debate on the Guardian the other day. The skeptic was saying the effect of CO2 diminishes and plateaus; that there is no evidence of the linear relationship claimed by alarmists between CO2 and temperature in the historical record or the present. The alarmist was insisting that the IPCC has always claimed a non-linear, logarithmic relationship between temperature and CO2. Huh? Is that true? This is what they wrote: “Successive doublings of CO2 cause the same amount of warming – currently thought to be about 3C. Only an exponential rise in CO2 will cause linear temperature rise”. Why are all the graphs linear, then? What is runaway warming? Are they saying the point at which the levelling in the graph would start is at 3C? Or are they talking total rubbish (that’s my guess!)?

      • Thanks for the up votes, but…were they talking total rubbish? Can somebody explain the “successive doubling” and logarithmic argument SUPPORTING alarmist claims, or is it nonsensical?

        • They are talking rubbish in a practical sense. If and when the atmospheric CO2 concentration reaches 800 ppm, a couple of hundred years from now, it should be x degrees warmer than now. If and when it reaches 1600 ppm, it will be x degrees warmer again. So in a practical sense there is no call for alarm.

          Of course, that is if the hypothesis this is all based on is correct, and it could be completely overridden by natural variation one way or the other.

      • They don’t seem to understand the implications of the numbers. The skeptic is correct that the effect will diminish. If the hypothesis is correct, then most of the warming comes during the first portion of the increase in CO2 concentration. That means that most of the warming has already happened; any additional atmospheric CO2 becomes less and less significant. At about 400 ppm we are already at about 70% of the doubled concentration over the estimated pre-industrial level, and we have only seen a fraction of a degree of increase. An additional 2-plus degrees by the time we reach the doubling several decades from now cannot happen, because the additional increase in temps will be less than what we have already seen. And the equilibrium sensitivity estimate already includes the estimated impacts of feedbacks. This observation essentially proves that the ECS is not 3 degrees and is much closer to 1 degree, or even less.

        A plateau over the passage of time is relative to the early increase in temp during the rise in atmospheric concentration. There will still be an increase in temps, but it will occur at an ever slower rate as time passes.

        • “There will still be an increase in temps but that will occur at an ever slower rate as time passes.”

          Which was the skeptic’s original point. So, you’re saying in order for temperature to hit 3C in the next few decades, following a logarithmic curve, the temperature rise to date would have had to have already been far higher, right?

          I found this:

          “Therefore climate sensitivity is expressed as a certain amount of warming … for every doubling of the CO2 concentration. You get that amount of warming (probably close to/somewhat above 3 degrees) from 280 (pre-industrial CO2) to 560 – and again from 560 to 1120 ppm of CO2

          “You’d have to keep doubling the CO2 concentration for every 3 degrees of further temperature rise (please let’s not).

          True. In a sense that’s good news. You need massive emissions (or should we say runaway carbon feedbacks) to get from 3 to 6 degrees warming. But it also leads to a skewed perception of climate change, as in fact climate change is an accelerating process”

          Again, huh? Why is it an accelerating process? I thought it would, logically, be a decelerating process, if you need more of something to get an effect. Feedbacks? I don’t understand why the CO2/temperature relationship only applies to the post-industrial era. If the assertion is that you get 3 degrees of warming from each doubling of CO2, is it historically accurate that the temperature increased by that amount when CO2 doubled from 140 ppm to 280 ppm? Because that’s what his graph shows.


          • They are doing quite a dance there, trying to create the impression that it is imperative that we still decrease CO2 concentration when the math indicates that we need not. Their hypothesis that a doubling of CO2 causes x amount of warming has created a problem for sounding the alarm that it will get “worser and worser and is worser than we thunk,” because proper application of the math, backed up by observation, indicates the opposite.

            They claim that we should already be at 1.56 degrees of warming, but we are not; measured, it is less than 1 degree. They base that claim on our being about 51% of the way to a doubling, times the 3 degrees (as if the 3 degrees were a robust figure, when it’s just a made-up figure anyway). An ECS estimate based on empirical observation is far more scientific than a wild guess made by Hansen back in 1979. Additionally, they are not taking into account the diminishing effect, but give each portion of the CO2 increase from 280 ppm to 560 ppm equal effectiveness. Yes, I am saying that the temp increase to date would need to be far higher.

            Another sleight of hand is their claim that the feedbacks and the potential warming to date just have not fully kicked in yet. How do they know that? They don’t. The climate sensitivity to CO2 before feedbacks accepted by most is an insignificant 1.16 degrees. So to get a scary scenario they must claim that the feedbacks will surely be significantly positive, and that assumption is built into their models, which are always wrong in their predictions, both going forward and backward. Observation indicates that feedbacks are not significantly positive. Now they are adjusting observations instead of revisiting the hypothesis or starting over with a different one.
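            The “successive doublings” arithmetic running through this exchange can be checked directly, assuming the conventional simplified logarithmic forcing relation and the 3-degrees-per-doubling figure quoted above; both are the contested inputs here, not established facts.

            ```python
            import math

            # dF = 5.35 * ln(C/C0) is the standard simplified CO2 forcing relation;
            # 3 C per doubling is the sensitivity figure disputed in this thread.
            C0, C = 280.0, 400.0

            linear_fraction = C / (2 * C0)                 # ~0.71: "70%" in concentration terms
            log_fraction = math.log(C / C0) / math.log(2)  # ~0.51: "51%" in doubling (log) terms
            expected_warming = 3.0 * log_fraction          # ~1.54 C if the sensitivity really were 3 C

            print(round(linear_fraction, 2), round(log_fraction, 2), round(expected_warming, 2))
            ```

            The same 400 ppm thus reads as roughly 70% of a doubling in raw concentration terms but only about 51% in logarithmic terms, which is where the two figures traded above come from.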

    • It’s a pleasure! Just look at the snarling and sniveling of the climate Communist trolls here. They don’t like the facts, do they?

  12. Equilibrium sensitivity is a little more complex …

    It’s a theoretical concept that has no basis in reality.

    We have been banging in and out of glaciation for the last half million years. We see that the conditions during the latest interglacial have been extraordinarily clement. That’s nice for us however we will almost certainly bang back into glaciation.

    Show me an equilibrium anywhere in the paleo record. The Earth’s climate system is not time invariant. There is no equilibrium. Period. End of story.

    • I hope he doesn’t. Ted Cruz will understand that this kind of work is an effort to enshrine CO2 warmism that flagrantly violates physics.

      • Don’t be silly. If the furtively pseudonymous “coolclimateinfo” can make a proper scientific case against radiative forcing from greenhouse gases, let it submit that case for peer review at a respectable journal.

  13. Any climate system gain over unity is a free-energy device, a perpetual motion machine.

    No energy in the system exists without being put there by the sun first, outside of volcanism.

    The law of conservation of energy must be observed. Over-unity systems are in violation of it.

    You can’t get something from nothing. CO2 isn’t capable of producing heat, and if it stores any, it was heat put there by the sun. Climate change is reducible to daily 1 AU TSI/insolation.

    Therefore all system gain estimates over unity including the lowest are impossible fantasies that ignore the laws of physics.

    The idea of climate over-unity system gain is Orwellian doublethink; its promotion is doublespeak.

    Furthermore there is no such thing as an equilibrium temperature in the climate.

    So radiative forcings are a big nothingburger.

    • Are you saying that an atmosphere and changes in its composition make no difference?
      For example:
      The mean surface temperature of the Earth is 15C (288K).
      The mean surface temperature of the Moon is -23C (250K).
      (I am not a scientist).

      • My solar climate work is about solar supersensitivity of the ocean and climate.

        Energy in the atmosphere first came from the sun; climate change follows solar changes.

        In 2014-15 I did a study that assumed only a variable solar input over a 26-year tuning period, in which I found empirically derived solar-input decadal warming/cooling equivalent thresholds of 94 (v2 SIDC SSN), 120 sfu (DRAO F10.7 cm radio flux), and 1361.25 W/m^2 (LASP SORCE TSI). I then tested it from 2016 to now with real-time solar climate data by making successful predictions, finding that sea surface temperatures after the tuning period responded precisely as predicted, with only ongoing solar forcing, to date.


        The sun and not CO2 caused the warming of the 20th century.

        The OLR (outgoing longwave radiation) emitted from the ocean, detectable in the air, originates from variable incoming sunshine having been absorbed at depth, which then upwells and emerges at the surface. Solar energy produces tropical evaporation with its latent heat, which adds more energy to the atmosphere.

        Non-condensing gases can only pass around this heat before it finally escapes upward.

        It’s a mistake in TOA (top of atmosphere) calculations to attribute that OLR to radiant warming from higher concentrations of CO2.

        There’s a time element: the air’s OLR and WV latent heat are delayed solar responses.

      • Are you saying that an atmosphere and changes in its composition make no difference?
        The lapse rate enhances surface temps by 33C while reducing upper-tropospheric temps. Nowhere does CO2 appear in the equation for the lapse rate. Water does appear, via condensation, which moderates the lapse rate. Atmospheric composition affects the energy required to compress air, so it has some effect, but nothing like what is put forward under GHG theory.
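        The lapse-rate point is easy to illustrate: the dry adiabatic lapse rate is just gravity divided by the specific heat of dry air, with no radiative property of any gas in the expression (standard textbook constants assumed).

        ```python
        # Dry adiabatic lapse rate: Gamma = g / c_p. No radiative property of
        # any gas appears in this expression. Textbook constants assumed.
        g = 9.81         # gravitational acceleration, m/s^2
        c_p = 1004.0     # specific heat of dry air at constant pressure, J/(kg K)
        gamma_per_km = g / c_p * 1000.0
        print(round(gamma_per_km, 1))   # ~9.8 K/km
        ```

        Condensing water vapour then moderates this toward the smaller moist rate, which is the role water plays in the comment above.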

      • True, but try staying out in a coat, below freezing, for a week without food (energy).

        In a week you’re not even going to be room temperature, no matter the coat…

        • …well, no, but if the earth didn’t have a gaseous coat on, we wouldn’t be here discussing it.
          All I was trying to do was help with an analogy. Stabilising at a higher temperature by putting on a heavier coat is not “a perpetual motion machine”.

          The next stumbling block might go something like:
          but how can a cooler object (the sky) heat up a warmer one (the ground)? That’s against the…?

          • Stabilising at a higher temperature by putting on a heavier coat is not “a perpetual motion machine”.

            That’s not what I said.

            There’s no stabilizing at a higher temperature by CO2.

            The next stumbling block…

            I think you are still enamored by a false idea, CO2 “warming”.

            Here’s my real world analogy:

            If CO2 warming is a real thing, then why did temperatures drop from the 1940s into the 1970s when CO2 was rising?

            If CO2 warming is a real thing, then why have both atmospheric and ocean temperatures fallen since the 2016 El Nino peak, with such ‘extreme’ CO2?

          • coolclimateinfo

            “If CO2 warming is a real thing, then why did temperatures drop from the 1940s into the 1970s when CO2 was rising?”

            Perhaps because CO2 warming is not the only forcing on climate. Did anyone claim that it was?

            “If CO2 warming is a real thing, then why have both atmospheric and ocean temperatures fallen since the 2016 El Nino peak, with such ‘extreme’ CO2?”

            Perhaps because there is always an expected dip in global temperatures after an El Nino. It was pretty widely expected that temperatures would fall back a bit from the 2016 peak, wasn’t it?

        • coolclimateinfo makes the same mistake as official climatology: it neglects to take into account the fact that the Sun is shining continuously, wherefore its analogy breaks down before it gets its boots on.

          • RyanS

            You’re alright. It’s a contentious issue. The heart of the debate.

            I will say this about the blanket and coat analogies: CO2 ‘warming’ hasn’t appeared to prevent any cold extremes during the winters, nor crop losses, nor hail, nor winter-storm damage. It seems powerless.

            The fact is, the idea of CO2 climate control has been drilled so deeply into the psyche of the average person that any deviation from it produces extreme dissonance, even in scientists, and some skeptics…

            This ‘work’ is ‘unreal’.

          • CO2 doesn’t control the climate, but it (along with the other GHGs) does play its part in providing us with a livable temperature on this little mudball.

          • I make no such mistake. The sun shines but the energy is variable, every day.

            We live within this variation. What you say about solar is based on ignorance.

            No barbs from you are going to change the absurdity of over unity CO2 gain.

          • Here’s a little experiment for you to try. All you need is a sink (bathroom or kitchen will do), a faucet, a mesh stopper/drain cover, and some kind of gunk (putty, clay, or the like) that can be used to clog the mesh.

            Turn the faucet on without a stopper in the sink. Water rushes in from the faucet and just as quickly rushes out the hole at the bottom. The sink will never fill up, as the inflow and outflow are equal (the system gain, i.e. the amount of water in the sink, is unity).

            Now put a clean mesh stopper in without changing the flow from the faucet. Immediately you will notice the sink retains a little water, as the mesh lets the water through at a slower rate than before, causing the flow to back up that little bit.

            Now use the gunk to cover some of the holes in the mesh, and notice the sink retains even more water as the clogged mesh impedes more of the outgoing water. You’ve achieved a system gain (the amount of water in the sink) “over unity” even though the flow of water into the sink remains the same as it always was.
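            For what it’s worth, the sink analogy can be put into a toy simulation: hold the inflow constant and make the outflow proportional to the water level divided by a drain “resistance”. Clogging the mesh (raising the resistance) raises the steady-state level even though the inflow never changes. All numbers are illustrative.

            ```python
            # Toy sink model: dL/dt = inflow - L/resistance. The steady-state
            # level is inflow * resistance, so impeding the drain raises the
            # level even though the inflow stays fixed. Illustrative numbers.
            def steady_level(inflow, resistance, steps=10000, dt=0.01):
                level = 0.0
                for _ in range(steps):
                    outflow = level / resistance
                    level += (inflow - outflow) * dt
                return level

            clean_mesh = steady_level(inflow=1.0, resistance=2.0)    # settles near 2.0
            clogged_mesh = steady_level(inflow=1.0, resistance=5.0)  # settles near 5.0
            print(round(clean_mesh, 2), round(clogged_mesh, 2))
            ```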

          • That is an incorrect statement as far as the Earth’s surface is concerned.
            The Sun shines all the time, but it only shines on the surface and the atmosphere during daylight hours, which is not “all the time”.

          • It continuously shines on the surface 24 hours a day; there isn’t a single hour of the day when the sun isn’t shining on the surface of the Earth somewhere. It’s just not shining on the same spot continuously.

        • The source of energy for the earth is the sun. (A tiny bit of geothermal, but far enough below rounding error to be ignored for most practical applications.)

          • MarkW,
            There are not enough observational data to support the conjecture as to how much geothermal energy is and has been pumped into the oceans, which cover 70% of the earth’s surface to an average depth of over 12,000 ft and are much more efficient at storing it than the land surface or atmosphere. It may be (also a conjecture) a much more important factor than it is considered to be. The real answer is that we do not know.

          • But official climatology leaves out the emission temperature when considering feedback. It forgets the sunshine.

    • Any climate system gain over unity is a free-energy device, a perpetual motion machine.

      Could a small change in one parameter, like unobtainium concentration, lead to a large change in another parameter, like temperature? Sure, for the sake of argument.

      The thing is that it isn’t precisely like an amplifier where a change in voltage or current on the input leads to a change in current or voltage on the output. If you change the CO2 concentration, and that’s your system input, how can you say that you have any particular gain if your output is temperature?

      I really don’t think the feedback amplifier analogy is valid for the climate. It’s just Jim Hansen’s concoction to try and increase the effect of CO2.

      • I really don’t think the feedback amplifier analogy is valid for the climate.

        This has been my main point. It’s an open system. In a true Bode analogy, the sun is the power supply and the input, while CO2 responds like everything else to the output, the heat moving out of the ocean from the sun in the form of OLR and water vapor.

        CO2 doesn’t generate feedback, positive or negative, to the input, the power supply, the sun.

        CO2 doesn’t have the thermal capacity of the ocean, which is the real store of solar energy.

        CO2 enhances life but doesn’t warm or cause climate change or any weather events.

      • “how can you say that you have any particular gain if your output is temperature”
        You can if you analyse it properly. The correct method is as a two-port network, as I expand on here. There are two inputs to an amplifier (current and voltage) and two outputs, and the device model makes two linear equations connecting them. It can easily happen that a mostly-voltage input, say, produces a mostly-current output, as with a triode valve. You can then convert that to a voltage output with a load resistor.

        The key is the impedance change. If the temperature increase can then, by say evaporating water, create a larger flux increase, without the increase being totally quenched, then you have a loop gain.

        Feedback isn’t a Hansen “concoction”. He commented that Bode theory gives a way of looking at the linearised equations relating flux and temperature. Those equations exist independently of Bode analysis. If you think Bode helps, fine, but it isn’t needed.
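        In its simplest textbook form, the linear feedback bookkeeping being argued over reduces to dT = dT0 / (1 - f) for a no-feedback response dT0 and a feedback fraction f < 1. A sketch with illustrative numbers: the 1.16 C no-feedback figure is the one quoted elsewhere in this thread, and the feedback fraction 0.61 is purely hypothetical.

        ```python
        # Equilibrium response under linear feedback: dT = dT0 / (1 - f), f < 1.
        # dT0 and f are illustrative inputs, not endorsed values.
        def equilibrium_response(dT0, f):
            if f >= 1.0:
                raise ValueError("loop gain >= 1: no stable equilibrium")
            return dT0 / (1.0 - f)

        print(round(equilibrium_response(1.16, 0.0), 2))   # 1.16 with no feedback
        print(round(equilibrium_response(1.16, 0.61), 2))  # ~2.97 with strong positive feedback
        ```

        Whether f is large, small, or negative for the real climate is exactly what is being disputed here; the formula itself is just linear algebra.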

        • Without Hansen’s postulated positive feedback, there is no reason to believe that enhanced atmospheric CO2 will have more than a small beneficial effect.

          In that regard, Hansen’s analysis is no different than Mann’s analysis that sought to eliminate the very inconvenient Medieval Warm Period. It is a hallmark of activists that they indulge in motivated reasoning rather than the dispassionate search for truth.

          • “In that regard, Hansen’s analysis is no different than Mann’s analysis that sought to eliminate the very inconvenient Medieval Warm Period.”
            Hansen’s analysis of atmospheric physics has nothing to do with Mann’s statistical analysis of paleoclimate proxies.

          • commieBob

            “It’s pretty obvious that they are both examples of motivated reasoning.”

            Whereas Lord M’s previous continued referencing of the RSS TLT v3 data set, in the teeth of its producers’ long term warning that it contained an obvious cool bias, is what exactly?

          • Mr Rice, obviously a partisan rather than a dispassionate observer, should really direct to Dr sMear the question why he continued to issue global temperature data that he knew to be incorrect. On several occasions I reported that Dr sMear had stated that he preferred the terrestrial datasets to his own dataset. He could and should have made corrections sooner. Don’t blame me for using his data, coupled with his own statement that he preferred other datasets and with the data from those other datasets. In short, don’t be prejudiced. Try to look at these things with an open mind.

          • “On several occasions I reported that dr sMear had stated that he preferred the terrestrial datasets to his own dataset. He could and should have made corrections sooner. “
            This is silly. Terrestrial datasets measure surface temperature, reliably. Microwave sounders measure tropospheric temperature, less reliably. There is no reason why RSS should not continue doing their best to measure TLT, even if surface temperature can be assessed more accurately.

          • Mr Stokes is silly. If Dr Mears thought the terrestrial datasets better than his own dataset, he should have made corrections sooner.

          • “He could and should have made corrections sooner.”

            My understanding was that Dr Mears waited until the new version of RSS had passed peer-review before announcing the results, whereas Dr Spencer announced version 6 of UAH as a beta a long time before it had been published.

            “In short, don’t be prejudiced. Try to look at these things with an open mind.”

            Always good advice. Difficult to see how to do it if you continually use names like “Dr sMear”.

    • Coolclimateinfo is wandering off topic. The presence of greenhouse gases inhibits the escape of solar radiation that is absorbed at the Earth’s surface and re-emitted, displaced to the near-infrared. There is no violation of the second law of thermodynamics in the greenhouse theory.

      From 1850 to 1930 the trend in global mean surface temperature was zero. That is known in physics as a “local equilibrium”.

      Radiative forcings are changes in the net down-minus-up radiation at the Earth’s emission altitude. To some extent, these changes can be measured from space, and their causes deduced. Since the changes are in fact measured, there is no point in trying to deny that they exist unless one can explain what the satellites have done wrong.

      Let us try to keep on topic and not make wild, unsupported, anti-scientific statements.

      • there is no point in trying to deny that they exist

        This is a strawman argument, as I mentioned OLR many times before, and the true relationship of OLR to solar forcing.

        Radiative forcing is not the root cause or the original source of the energy driving temperature change, as is implied. It is a nothingburger because the energy in the atmosphere derives from the sun via the ocean, with a time delay.

        If you need proof of that statement, UAH data show that global temperature correlates 97% with ocean temperature, while land temperature shows a 76% correlation with the ocean.

        From 1850 to 1930 the trend in global mean surface temperature was zero. That is known in physics as a “local equilibrium”.

        …is an invention of the climate establishment. A local equilibrium “point” should be a year or less, not 80 years. What a joke.

        The physics is not there for a positive system gain, over-unity, perpetual motion atmospheric temperature increase from CO2. The structure of your system is wrong. There is no positive feedback amplifying incoming solar energy.

        All that is entirely on topic.

        • Coolclimateinfo is not approaching these questions scientifically. Sub specie aeternitatis, or “in the light of geological time”, an 80-year equilibrium is indeed a local equilibrium, but it is an equilibrium.

      • If you and Nick are so keen on electronic analogies, the oceans are an enormous capacitor. These also contribute to amplification. Perhaps coolclimateinfo overplays the case a bit, but the paleo correlation of CO2 and temperature is sooo poor, if you relied on this to build your stereo, you would be listening to white noise.

        • So where does the lack of warming from 1850 to 1930 fit in with “recovery from the Little Ice Age”, and why wasn’t this lack of warming replicated in the period since the solar peak in the early 1960s? Clearly something other than solar influence is affecting the climate.

    • “CO2 isn’t capable of producing heat, ”

      Correct: it doesn’t produce heat; the sun does that. What it does do is slow the loss of that sun-produced heat by a little bit.

      Here’s another way to look at it. Imagine a coin machine. You put in coins at one end and the other end spits those coins out of multiple slots depending on the type/size of the coin. Inside the machine, where the coins are sorted, is a chamber that can hold thousands of coins if need be. Further imagine that the rate you put the coins in matches the rate it spits out the coins: unity.

      Now, let’s imagine that the machine’s inner workings have a flaw. The coins bounce around in the inner chamber while being sorted such that sometimes a coin misses its slot and continues to bounce in the machine a bit longer than its fellow coins. Say this equates to putting coins in at the rate of 100 per unit of time while the machine is spitting them out at a rate of 99 per unit of time.

      Time Unit 1: you are feeding the machine 100 coins, 100 coins are in the machine, 99 coins come out of the machine leaving 1 in the machine

      Time Unit 2: you are feeding the machine 100 coins, 101 coins are in the machine (100 you just fed it plus the 1 left over from the previous Time Unit), 99 coins come out of the machine leaving 2 in the machine

      Time Unit 3: you are feeding the machine 100 coins, 102 coins are in the machine (100 you just fed it plus the 2 left over from the previous Time Unit), 99 coins come out of the machine leaving 3 in the machine.

      And so on. Why, the machine is creating money in violation of the conservation of money; it’s a perpetual money-making machine, by your logic! But no, it isn’t. It’s simply that some of the money is held back while additional money is being added.
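The three time units above can be run as a toy simulation (illustrative only, with the 100-in/99-out rates from the analogy):

```python
# Toy simulation of the coin-machine analogy: the input rate exceeds
# the output rate by 1 coin per time unit, so the stock held inside
# the machine grows without any money being "created".

def run_machine(steps, rate_in=100, rate_out=99):
    """Return the number of coins held inside the machine after each step."""
    held = 0
    history = []
    for _ in range(steps):
        held += rate_in   # coins fed in this time unit
        held -= rate_out  # coins spat out this time unit
        history.append(held)
    return history

print(run_machine(3))  # [1, 2, 3], matching Time Units 1-3 above
```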

      • What it does do, is slow the loss of that sun-produced heat by a little bit.

        How can anyone tell if or by how much CO2 slowed the escape of heat? The concentration has changed so little overall since 1850, i.e., from a low level to a slightly less low level. That brings me back to the following questions, in the context of exactly how long CO2 holds on to heat:

        If CO2 warming is a real thing, then why did temperatures drop from the 1940s into the 1970s when CO2 was rising?

        If CO2 warming is a real thing, then why have both atmospheric and ocean temperatures fallen since the 2016 El Nino peak, with such ‘extreme’ CO2?

        How long did it really hold onto the heat while the ocean was cooling?

        The next question is how much of the 0.04% CO2 in the atmosphere is really in play, is readily available? How can it be all of it when the plants are consuming it year-round? So how much CO2 is really available to do all this magnificent heat transfer into the ocean while simultaneously keeping up the air temperature?

        • CO2 does not “hold on to heat”. It absorbs and re-emits it. It does this continuously for as long as it’s in the atmosphere. As long as the sun shines, CO2 will act as a warming influence on the atmosphere. It’s not the only forcing on climate; other forcings can surpass its effect over shorter terms, but it’s a long-lived forcing due to its atmospheric residence time.

          So we should expect to see things like ENSO dominate climatic conditions over shorter periods and things like enhanced greenhouse gases to dominate over longer periods. The pattern should be one of gradual temperature increase set against ups and downs caused by ENSO fluctuations. That’s exactly what’s observed. It’s quite simple, really.
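The claim that short-period ups and downs can mask a long-term trend can be illustrated with a toy series (synthetic data, not observations; the trend and oscillation amplitudes are arbitrary assumptions):

```python
# Toy illustration: a steady warming trend with an ENSO-like oscillation
# superimposed can show flat or negative trends over short windows even
# though the long-term trend is positive.
import math

def trend_slope(series):
    """Ordinary least-squares slope of a series against its index."""
    n = len(series)
    mx = sum(range(n)) / n
    my = sum(series) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), series))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# assumed 0.015 K/yr trend plus a 0.15 K oscillation with a 5-year period
temps = [0.015 * t + 0.15 * math.sin(2 * math.pi * t / 5) for t in range(50)]

long_term = trend_slope(temps)          # close to the underlying 0.015 K/yr
short_term = trend_slope(temps[20:25])  # a window dominated by the oscillation
print(long_term > 0, short_term < 0)    # True True
```

Neither window proves anything about causes; it only shows that trend estimates over short spans are dominated by the oscillation.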

          • Not so simple at all. The rate of warming is about half what IPCC had originally predicted in 1990. Perhaps the chief reason for IPCC’s over-predictions is now clear: it had misdefined “temperature feedback”.

  14. Have Christopher and his team bothered to submit this to any reputable body?
    He clings to the cherry-picked pause (please look at the long-term rise) and ignores inconvenient facts.

    • As a matter of fact, all the datasets showed the Pause until, one by one, they were altered so as to airbrush the Pause away. Perhaps WTF disagrees with railway engineer Pachauri, formerly of IPCC, who admitted in a speech in Melbourne that the Pause existed and that it raised legitimate questions about IPCC’s predictions. Perhaps WTF also disagrees with NOAA’s State of the Climate report for 2008, which said that a period of 15 years or more without global warming would indicate a discrepancy between prediction and reality. The Pause was 18 years 8 months long in the UAH dataset and 18 years 9 months long in the RSS dataset (before it was tampered with, that is). Those, like it or not, are the inconvenient facts.

    • You write . . .
      “He clings to the cherry picked pause (please look at the long term rise ) and ignores inconvenient facts”.
      Ok thanks WTF, which period would you prefer for a non-cherry-picked, factually correct picture of global warming caused by man?
      Also would you be kind enough to confirm the following which would allow us to understand your understanding of global warming:
      1. Do you believe additional CO2 causes additional global warming?
      2. Do you believe man is responsible for most of the additional CO2 in recent history?
      3. If so, between which two dates has man primarily been responsible for emitting the additional CO2?
      Thanks in advance.

    • There was never anything cherry picked about the pause.
      If you look at the long term rise, over the last 200 years, and compare it to CO2 levels, you will see that there is no correlation.
      It’s only when you cherry pick the last 30 years that an apparent correlation appears.
      PS: According to your “scientists”, a pause was supposed to be impossible while CO2 levels were rising.

    • If by this you mean one of the mainstream climate journals then I fear you seriously misinterpret what reputable means.

  15. Centennial-equivalent global warming rates for January 1990 to June 2018. IPCC’s two mid-range medium-term business-as-usual predictions…

    Why do I get a feeling of déjà vu?

    Monckton quotes two IPCC mid-range medium-term business-as-usual predictions, which he compares with data from 1990 to 2018.

    He doesn’t state which of the 5 IPCC reports these come from, but going by previous example he’s probably taking both from the first IPCC report from 1990. Neither of the figures he quotes to two decimal places appears in the report. In fact as I pointed out in his previous article, the graph from that report predicts only slightly more warming up to 2018 than the HadCRUT data set shows.


      • And I disagreed with all your “comprehensive” answers in a previous thread. Hence my déjà vu. For the record, this is Monckton’s answer from a previous thread:

        Bellman nicely shows the absurdity of IPCC’s forecast. For a start, the 1990 starting-point of his red line is 1 K above the pre-industrial level, when HadCRUT4 shows only 0.45 K. Correct for that error in IPCC’s graph, and IPCC’s prediction is for a near-straight-line warming of around 3.8 K to 2100, against the HadCRUT4 warming rate of 1.72 K.

        The simple answer is it doesn’t matter how much warming there has been since pre-industrial. The warming IPCC predicted since 1990 is starting at the point where 1990 is on the graph.

        If you want to say that their prediction of how much warming there will have been in 2025 since 1765 is wrong you would have a point. But how much warming they expected since 1990 is independent of the starting temperature.

        • The furtively pseudonymous Bellman is, as usual, incorrect. IPCC had incorrectly represented the actual, observed warming rate from 1850-1990. One must correct for that before deriving IPCC’s predictions.

          • Why must one correct for differences prior to 1990? We are not talking about what their prediction was since 1765, we are talking about how much warming they expected since 1990.

            Moreover, if one must correct for this, why didn’t you do that when you claimed they predicted warming at 2.78°C / century? I assume this is derived by taking their approximate claim of 1°C of warming from 1990 to 2025 and dividing it by 36. But if you want to be consistent you should have divided 1.55°C by 36 to get a predicted warming rate of 4.3°C / century.
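The rate arithmetic in the comment above can be checked directly. This sketch assumes, per the thread, a 36-year span from 1990 to 2025 and the 1 °C and 1.55 °C warming figures:

```python
# Convert a warming amount over a span of years into a centennial rate.
# The 36-year span and the 1.0 / 1.55 degree figures come from the thread.

def centennial_rate(delta_t, years):
    """Temperature change over a span of years, expressed per century."""
    return delta_t / years * 100.0

print(round(centennial_rate(1.0, 36), 2))   # 2.78 degC/century
print(round(centennial_rate(1.55, 36), 1))  # 4.3 degC/century
```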

          • The furtively pseudonymous Bellman again tries to find petty fault, and again fails. Bellman should read, very carefully, the two separate IPCC medium-term predictions. It will find that one is compared with the pre-industrial era, and the other is compared with the then present, i.e., 1990.

          • Nope. Both of the predictions you cite are compared with the pre-industrial era.

            Page xxii

            Under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, the average rate of increase of global mean temperature during the next century is estimated to be about 0.3°C per decade (with an uncertainty range of 0.2°C to 0.5°C)

            This will result in a likely increase in global mean temperature of about 1°C above the present value (about 2°C above that in the pre-industrial period) by 2025 and 3°C above today’s (about 4°C above pre-industrial) before the end of the next century.

            My emphasis.

            It goes on to say that the projections are shown in Figure 8, the graph I used above, and that “Because of other factors which influence climate, we would not expect the rise to be a steady one.”

            Figure 8 is described as

            Simulation of the increase in global mean temperature from 1850-1990 due to observed increases in greenhouse gases, and predictions of the rise between 1990 and 2100 resulting from the Business-as-Usual emissions

            As I noted before, the best estimate line of that graph shows 0.8°C warming between 1990 and 2025, suggesting that the 1°C figure they quote is only meant to be a rough estimate.

            They then go on to discuss patterns of change by the year 2030, and summarize regional variation in a box on page xxiv, where they state

            The numbers given below are based on high resolution models, scaled to be consistent with our best estimate of global mean warming of 1.8°C by 2030.

            For values consistent with other estimates of global temperature rise, the numbers below should be reduced by 30% for the low estimate or increased by 50% for the high estimate.

            This figure is puzzling as it doesn’t agree either with the value given two pages previously, or the graph. The graph suggests 1.8°C warming above pre-industrial by 2025 and more like 2.0°C by 2030.

            But as you say there are probably many inconsistencies in the report and I doubt they expected 28 years later people would be trying to calculate exact rates of change based on uncertain estimates.

  16. I’m writing to the hospital I was born in and asking them to correct my birth temperature 1 degree less!

  17. However, an interesting analysis by Professor Fritz Vahrenholt and Dr Sebastian Lüning (at diekaltesonne.de/schwerer-klimadopingverdacht-gegen-rss-satellitentemperaturen-nachtraglich-um-anderthalb-grad-angehoben) concludes that his dataset, having been thus tampered with, can no longer be considered reliable

    Is that the right link? I don’t see any analysis in the news article, just a lot of unsubstantiated claims of data massaging and fraud.

      • Am I the only one amused at the irony of being called lazy for asking for a reference?

        I can imagine how that would have gone with any of my professors, if I told them they were lazy when they expected me to provide proper references.

        • Phillip – No, you are not. Omitting sources is a common Monckton M.O. – it makes it laborious to fact-check.

          As of now, he has four up votes for his comment (down from 5 a moment ago). I’m not sure I’d call that amusing. Rather sad, really.

    • And so the good Lord uses a work of fiction by two Energy executives with no Physics/climate expertise, whereby they enumerate all the usual climate myths, as supporting evidence of his repetition of said climate naysayer myths by dint of his lack of physics and climate expertise.
      The only relevant connection/reasoning between the good Lord and these authors is the overarching ideologically driven need to push ABCD “science”, because of course we can’t be told that we, humanity, may just be polluting the planet in any way at all, and that there is no need to stop. As if hundreds of years of enlightenment and scientific advance can be boiled down to a conspiracy theory born of abject hatred of some imaginary project by “Lefties”.
      There may be just the odd person who actually does not buy Monckton’s rabid snake-oil selling, and as such the link below effectively rebuts the bollocks that Monckton links to, as though validation for his own badge of biased ignorance.


      Oh, and well done Christopher – you managed to reply to the “furtively pseudonymous” Bellman without using ad hom.
      Learning some manners are we? (sarc)

      • Anthony Banton,

        That link to criticism of Vahrenholt and Lüning is from 2012. It’s irrelevant to any of their supposed analysis of the RSS and UAH data sets.

        It is though worth pointing out that neither of them has any expertise in satellite data. It’s also ironic that Christopher Monckton is quoting the work of a Social Democrat. This would usually result in him being dismissed as a communist totalitarian if his views on global warming were different.

        I’d also like to echo your last paragraph, without the sarcasm.

  18. BREAKING NEWS: Magnitude 7.3 earthquake shakes the northern coast of Venezuela; widespread destruction, but nothing of monetary value destroyed. 😉

    • Many thanks to Willis for his kind comment. I seem to have made the true-believers angry again …

  19. Comparison of UAH satellite lower troposphere temperature and Mauna Loa CO2 concentration data shows that temperature change occurs independently of the CO2 change. However, comparison of the temperature with the annual rate of change of CO2 concentration shows a statistically significant positive correlation. This means that maxima in the temperature correspond to maxima in the annual rate of change of CO2 concentration which in turn must precede the maxima in the CO2 concentration. That is, temperature change precedes CO2 change so it is impossible for the latter CO2 change to cause the earlier temperature change. This proves beyond doubt that the Great CO2 induced Global Warming proposition by the UN IPCC is a fraud.

    For detail see : https://www.climateauditor.com

    The conclusion is obvious from taking the trouble to view a visual presentation of the data and apply a bit of common sense, woefully lacking in our World Leaders and politicians.

    • Hi Bevan,

      You are essentially correct, but nobody wants to talk about this – it spoils their party.

      Regards, Allan 🙂

      References and notes:

      By Allan MacRae, January 2008

      This is the discovery paper that proved that dCO2/dt changes ~contemporaneously with global temperature, and thus CO2 trends lag temperature trends by ~9 months in the modern data record. This figure is the proof:


      A similar lag of CO2 trends after temperature trends was observed by Humlum et al in their 2013 paper.
      – Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
      – Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
      – Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.
      – Changes in ocean temperatures explain a substantial part of the observed changes in atmospheric CO2 since January 1980.
      – Changes in atmospheric CO2 are not tracking changes in human emissions.

      This figure summarized their conclusions:


      Ole Humlum, Kjell Stordahl, Jan-Erik Solheim
      Global and Planetary Change, Volume 100, January 2013, Pages 51-69

      The climate science community still does not want to acknowledge this lag of CO2 trends after temperature trends because:
      – even if temperature drives CO2 AND CO2 drives temperature, the former clearly exceeds the latter and climate sensitivity to atm. CO2 (TCS etc.) must be very low.
      – since temperature primarily leads CO2 rather than lags CO2, the future (CO2) cannot primarily drive the past (temperature).
      – this observation should effectively end the scientific debate about the multi-trillion-dollar global warming/green energy scam.


      The reason the lag of CO2 trends after temperature trends is approx. 9 months is explained here – 9 months is one-quarter of an approx 36 month natural cycle, it is basic calculus:
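The quarter-cycle claim is indeed basic calculus: if dCO2/dt is in phase with a sinusoidal temperature signal, its integral (CO2 itself) lags the temperature by one quarter of the period. A toy check, taking only the ~36-month cycle length from the comment (the sinusoid is illustrative, not real data):

```python
# If dCO2/dt = sin(omega*t) tracks temperature, then CO2 follows the
# antiderivative (1 - cos(omega*t))/omega, whose peak sits a quarter
# period after the temperature peak. For a 36-month cycle that is 9 months.
import math

PERIOD = 36.0  # months, as asserted in the comment
omega = 2 * math.pi / PERIOD

ts = [i / 10 for i in range(int(PERIOD * 10))]         # 0.1-month grid, one cycle
temp = [math.sin(omega * t) for t in ts]               # temperature anomaly
co2 = [(1 - math.cos(omega * t)) / omega for t in ts]  # its time-integral

peak_temp = ts[temp.index(max(temp))]  # month of temperature maximum
peak_co2 = ts[co2.index(max(co2))]     # month of CO2 maximum
print(peak_co2 - peak_temp)            # 9.0, one quarter of the cycle
```

This only demonstrates the phase relationship for a pure sinusoid; it says nothing by itself about which variable is causally driving the other.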


      • Notes for Bevan:

        Yes, I agree with you Bevan – global warming alarmism is a deliberate fraud, in fact it is the greatest fraud, in dollar terms, in the history of humanity.

        Excerpt from below:
        “Properly deployed, these squandered tens of trillions of dollars could have:
        – put clean water and sanitation systems into every village in the world, saving the lives of about 2 million under-five kids PER YEAR;
        – reduced or even eradicated malaria – also a killer of millions of infants and children;
        – gone a long way to eliminating world hunger.”

        Regards, Allan


        Thank you David for your comments on increasing atmospheric CO2. Let us assume for clarity and simplicity that your comments, effectively endorsing the Mass Balance Argument, are correct.

        However David you have not responded to my primary question, repeated below.

        Some more background info:
        1. Let us assume that atmospheric CO2 started to accelerate strongly after about (“~”) 1940, and continues to accelerate today, due to increasing fossil fuel combustion.
        2. However, global temperature declined from ~1940 to ~1977, then increased ~1977 to ~1997, and has remained ~flat since about then, with some major El Nino spikes that have mostly or completely reversed.

        So there is a correlation of increasing CO2 with global temperature that is negative, positive and near-zero – certainly NOT at all convincing that CO2 plays a significant role in driving global temperature.

        Then there is this “elephant in the room” that nobody wants to discuss, that CO2 LAGS global temperature at all measured time scales, from ~~800 years in the ice core record to ~9 months in the modern data record.

        The key relationship in modern data is that dCO2/dt changes ~contemporaneously with global temperature, and its integral CO2 (delta CO2 above the “base CO2 increase” of ~2ppm/year) lags temperature by ~9 months. Therefore I conclude that temperature drives CO2 more than CO2 drives temperature, and both magnitudes are quite small and not dangerous.

        I wrote the paper that reached this conclusion ten years ago (January 2008) on Joe d’Aleo’s icecap.us. The initial response was that I was just wrong – that it was “spurious correlation” – which was false nonsense. Then somebody actually checked the math and deemed it correct, but because they KNEW that CO2 was the primary driver of global temperature, they insisted it MUST BE a feedback effect (more false nonsense).

        Since then, the main response has been to ignore this huge inconsistency in the global warming mantra, because it disproves the hypothesis that dangerous global warming will result from increasing atmospheric CO2. In the last ten-years, tens of trillions of dollars of scarce global resources have been squandered on false global warming alarmism, and millions of lives have been sacrificed due to misallocation of these resources.

        Properly deployed, these squandered tens of trillions of dollars could have:
        – put clean water and sanitation systems into every village in the world, saving the lives of about 2 million under-five kids PER YEAR;
        – reduced or even eradicated malaria – also a killer of millions of infants and children;
        – gone a long way to eliminating world hunger.

        Repeating what I wrote above:

        All good so far, EXCEPT for this observation:
        The velocity dCO2/dt changes ~contemporaneously with global temperature, and its integral CO2 also varies with global temperature but LAGS global temperature by about 9 months.


        I suggest that the correct relationship of temperature and CO2 is as follows:
        [A] There is a “base increase” of atmospheric CO2 of about 2 ppm per year, generally assumed to be from man-made causes.
        [B] There is a clear signal on top of [A] that the velocity dCO2/dt changes ~contemporaneously with global temperature, and its integral CO2 also varies with global temperature but LAGS global temperature by about 9 months.
        [C] The sensitivity of CO2 to temperature must be greater than the sensitivity of temperature to CO2, or the clear signal described in [B] would not exist; also, the magnitudes of both sensitivities are small and not dangerous to humanity or the environment.

        Best regards, Allan

        • Thank you Allan for confirming my statement. It seems as though the world leaders, politicians and media think that the UN sits at the right hand of God and cannot be criticised. They overlook the UN’s socialist (Marxist?) ambition of One World Government led by them (of course). Maurice Strong aimed to cripple the economies of the First World nations by setting up the UN IPCC. With the introduction of unreliable, expensive renewables replacing coal-fired power stations, this is well under way.
          Their other lie is the Greenhouse Effect. If it were the cause of the Earth’s surface temperature being greater than the theoretical model predicts, the temperature would be the same at all locations along a given latitude – same soil and rock, under the same atmosphere containing the same CO2 concentration and receiving the same radiation from the Sun. That is, if there was snow on the mountain tops at a given latitude, there would be snow everywhere else at that latitude, which is clearly not the case.
          Regards, Bevan

      • “The climate science community still does not want to acknowledge this lag of CO2 trends after temperature trends because” the research has been debunked; it has multiple errors.

        • Kristi do you mean that after 40+ years of collecting, processing and analysing geophysical data I cannot plot two time series on the same graph? Furthermore, how can one get ‘multiple errors’ from such a simple process?

        • Kristi,

          You cannot credibly make such a statement without providing some supporting reference or argument – I assume you are no longer a child in the sand box, where throwing sand forms the sole basis for your argument.

          I doubt that you even read the references before you wrote your screed.

          Furthermore, your statement is contrary to the evidence.

          The irrefutable evidence is provided in this stunning relationship between two variables, dCO2/dt and global temperature (see below). The rest is mathematics.

          As a result, CO2 trends LAG global temperature trends by approx. 9 months in the modern data record.

          In the ten years since publication in January 2008, this relationship has never been credibly challenged, let alone debunked. It has repeatedly been mis-stated to support bogus arguments, but never seriously challenged.

          The best argument put forth by warmist advocates is that “It MUST BE a feedback effect”. As currently presented, this not a scientific argument, it is a religious one, based on unquestioned faith by warmist minions in global warming dogma. “We KNOW that CO2 is the control knob that drives global temperature, therefore it MUST BE a feedback effect.”

          A similar religious argument would be “ASSUME that frogs have wings; therefore, they no longer have to bump around on their asses.” 🙂

            • Allan,
              I was referring to Humlum’s paper when I said it had been refuted. That was a published, peer-reviewed paper. Your article is a bunch of graphs on a blog, so it is not surprising to me that it wasn’t given much attention by the scientific community. Personally, I don’t even see the relationship you say is so obvious. Nor do I make much of the “irrefutable” evidence of the Wood for Trees graph.

            A graph is a visual aid to illustrate a relationship. Scientists do not make graphs and say, “See! That proves it!” They use statistical analyses to interpret the data behind the graphs.

            You suggest that “The Sun (with cosmic rays – ref. Svensmark et al) primarily drives Earth’s water cycle, climate, biosphere and atmospheric CO2” but this has been studied extensively, and variation in solar output cannot explain the modern climate record, much less atmospheric CO2 levels.

            I honestly do not understand what you’re thinking. Do you reject the theory of GHG effect on temperature? If so, why? What part of the fundamental explanation from physics is wrong?

          • Kristi

            Please provide the reference to the alleged refutation of Humlum et al – and it better be good. I have no patience with BSers, liars or imbeciles.

            You seem to think that arm-waving is a substitute for scientific argument – it is not – it is just nonsense.

          • Kristi, you should know by now that peer-review is a corrupted process whereby journal editors have ensured that papers not acceptable to the UN IPCC were debunked.

            As for proof of a proposition, the concept of correlation long predates the development of modern statistical methods. If significant correlation exists, it is obvious in a graphical presentation of the data. Thus the fact that major peaks in temperature coincide with major peaks in the rate of change of CO2 concentration proves that there is a significant correlation between the two variables. This applies because it is empirical data; it is what has actually occurred in the past. Statistical analysis simply puts a measure to the degree of correlation and assigns a confidence level to that measure.

            In the case of the two aforementioned variables, http://www.climateauditor.com shows that the correlation was 0.26 with 460 degrees of freedom and a t-statistic of 6.28 implying an infinitesimally small probability that the correlation was equal to zero.

            As the maximum rate of change of temperature precedes the maximum temperature, which in turn coincides with the maximum rate of change of CO2 concentration, temperature change precedes CO2 change and so cannot logically be caused by the latter. That is, CO2 has not caused global warming.

            To confirm that conclusion, the correlation between temperature and CO2 concentration was 0.036 with 466 degrees of freedom and a t-statistic of 0.77 giving a probability of 0.44 that the correlation was zero.

            This was for the UAH satellite lower troposphere Tropics-Land temperature relative to the Mauna Loa Observatory CO2 concentration monthly data. Results from other locations confirmed these conclusions.
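            For readers who want to check figures like these, the standard test is easy to reproduce. The sketch below is a minimal illustration using only the Python standard library (the function names are mine): it applies the usual formula t = r·sqrt(df/(1 − r²)), with a normal approximation to the p-value, and recovers the t ≈ 0.77 and p ≈ 0.44 quoted above for temperature versus CO2 concentration.

```python
import math

def corr_t_stat(r, df):
    """t-statistic for the null hypothesis that the true correlation
    is zero, from a sample correlation r and degrees of freedom df."""
    return r * math.sqrt(df / (1.0 - r * r))

def two_sided_p(t):
    """Normal approximation to the two-sided p-value
    (adequate when df is in the hundreds, as here)."""
    return math.erfc(abs(t) / math.sqrt(2.0))

t = corr_t_stat(0.036, 466)   # temperature vs CO2 concentration
p = two_sided_p(t)            # t ~ 0.78, p ~ 0.44
```

            The same functions applied to r = 0.26 with 460 degrees of freedom give a t-statistic large enough that the probability of zero correlation is negligible.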

  20. If somebody said to you that they had a type of graph which showed clearly how much global warming had occurred in the past, and how much global warming was occurring now, would you dismiss it as rubbish without even looking at it?

    Global warming is a very emotional subject. Many people “know” that it is a serious problem, and they will not even look at evidence which they think might suggest otherwise. I don’t think that this is a very “scientific” attitude.

    Global warming contour maps clearly show that global warming is happening. But how fast is global warming happening, and is it getting worse?

    Perhaps I should have called my global warming contour map a “rate of change” graph. A “rate of change” graph can be made from the data of any time series. If temperature is used, then the graph shows how fast the temperature is changing, for every possible date range.

    A global warming contour map is made from a temperature series, like GISTEMP or UAH or weather balloon data. I don’t create the temperature series myself, scientists do. I perform a mathematical procedure on the temperature series, which is based on linear regressions (lots of linear regressions, normally between 150,000 and 350,000 linear regressions). The results are colour coded, and plotted on a graph.
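    (The exact procedure isn’t spelled out here, but the core idea – one least-squares trend for every possible start/end pair, which is then colour-coded – can be sketched in a few lines. This is a hypothetical illustration, not Sheldon’s actual code; the function name and the minimum segment length are my own choices.)

```python
import numpy as np

def warming_rate_grid(years, temps, min_span=2):
    """Least-squares warming rate (degrees per year) for every
    (start, end) pair of dates; colour-coding this grid gives
    the contour map."""
    n = len(years)
    grid = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(i + min_span, n):
            slope, _ = np.polyfit(years[i:j + 1], temps[i:j + 1], 1)
            grid[i, j] = slope
    return grid
```

    For n data points this is on the order of n²/2 regressions – a few hundred thousand for several decades of monthly data, consistent with the counts mentioned above – which is why the procedure has to be automated.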

    I am a computer programmer. The procedure has to be automated, because it would take several lifetimes to do it manually.

    If you are willing to “risk” learning something new, then you should check out my introduction to contour maps. The introduction uses Robot-Train trips, and makes contour maps based on Robot-Train’s speeds (the “rate of change” of distance).


    Robot-Train contour maps are easier to understand than global warming contour maps, but they are based on exactly the same mathematical principles. Speed is the “rate of change” of distance, and the warming rate is the “rate of change” of temperature.

    One of the first steps in investigating any scientific issue is to accurately measure what is happening. The data then needs to be organised accurately and logically, so that it can be understood. This is especially important when there is a large volume of data. A global warming contour map does these tasks efficiently and effectively. The human eye is designed to detect colour and shape. A global warming contour map turns warming rate changes into colours and shapes.

    A global warming contour map is not biased towards alarmism or denial. It is as unbiased as a line graph (actually, you can bias a line graph much more easily than a contour map).

    There are many more advanced global warming contour maps on my website.


    I am happy to answer any questions that you have.


    Sheldon Walker

    • So what if the earth is warming?
      The question is, how much, if any, of that warming is being caused by CO2.

      • Indeed, the Earth has been warming since the end of the little ice age. No-one denies that (except alarmists who try to eliminate the MWP and LIA from history). That “the Earth is warming” is not the same thing as “it’s man’s fault the Earth is warming”. And it’s deception pure and simple to equate the two without any verifiable evidence of the latter.

        • John Endicott

          “Indeed, the Earth has been warming since the end of the little ice age.”

          No it hasn’t. The first part of the HadCRUT4 instrument record, from 1850 right up until 1930, shows a slight cooling trend.


          Some folks here talk about a ‘pause’ in warming from (variously) 1998, 2000 or 2003 until about 2013. What on earth do they make of the great eighty-year ‘pause’ from 1850 to 1930!?

          Where does that leave the ‘warming since the end of the LIA’ theorists?

          • Yes it has.

            You are assuming that temps must only move monotonically in one direction, they don’t, as you can see in that graph – they bounce up and down, back and forth (it’s called natural variability), but the overall trend is one of warming from LIA to present. Yes there have been periods of slight cooling (and even a “pause”) but the trend over the *entire* period is one of warming. That’s something that both skeptics and alarmists agree with.


            1910: WE’RE ALL GONNA FREEZE!
            1940: WE’RE ALL GONNA BURN!
            1975: WE’RE ALL GONNA FREEZE!
            2015: WE’RE ALL GONNA BURN!
            2035: WE’RE ALL GONNA FREEZE!


  21. I can only echo what is being said about the adjustments in this post. I recently did some work on my own. Some may have seen it before but this time I can post pictures.

    I think it is easier to see the changes when a 6-Year moving average is used.


    Needless to say, what I have here is controversial. I have found cyclical fits for the data you will see here. At one time I could explain these datasets without any contribution from CO2. Later I found a way to accommodate it as best I could.

    I have a very low ECS value. I think there is justification for that and that is at the end.


    The figure below is my attempt to replicate the Spencer figure. I even digitized the line he has in his chart. I call it the Spencer.

    Perhaps I should have mentioned it before, but I have a very precise fit for the Mauna Loa CO2 values.
    You will also note that in these types of figures I included the new ECS value identified by Lewis and Curry.
    In reviewing that work I remember Dr. Spencer saying that the value of 1.66 would be even lower if natural variability was included. I like to think that this is what I have done.



    What you see below comes from the recently adjusted data. I don’t analyze the RSS data anymore because I think it is contaminated.


    When you look at how the RSS data track the models, I would almost accuse them of knowing the answer in the back of the book. The IPCC best estimate doesn’t look so bad now.


    Here you see it all together. Notice that there is an RSS curve dated 06/03/2017. Later that month the RSS data were altered. Note that the UAH and RSS data parallel each other and that you can clearly see that the RSS data were modified from the pivot point outward.


    I offer the following two figures in defense of my low ECS value. If humans are only responsible for 10 to 20 percent of the warming, then it aligns with my ECS value.


  22. Wow, this is quite a revelation! Scientists adjusted the data? Sheesh, that’s amazing news.

    Seems to me, though, it could be relevant to find out why they did so. Monckton repeats the common refrain: fraud, done for the sake of perpetuating a myth. Well, you could say that, sure, but you could also say that the oceans will boil by 2065. That’s the great thing about assumptions: you don’t have to know anything to make them. It’s all the easier if one resists actually educating oneself. Once you do look into it, then you’re confronted with something you have to think about, and who has time for that?

    However, without doing so, you’re left with a weak argument from a position of complete ignorance. It seems to me the very best way to show fraud would be to show that the reasons for it are not valid (which requires knowing them in the first place), that it’s done using methods that are amenable to tampering, that it makes an appreciable difference to the overall picture (otherwise, why do it?), and that it’s plausible that other groups that independently come up with the same or very similar trends are colluding. Then if you find evidence that someone has done something improperly, you have to make sure it wasn’t an error. Talk to the researchers involved! After all that, if things still don’t add up, THEN is the time to make accusations.

    It’s not enough to simply show that data were adjusted. Duh! No one is hiding the fact – it’s documented, for all to see. It would be completely negligent to NOT adjust the data, since there are obvious effects to account for (e.g. satellite drift, changes in procedures and instruments, UHI, reading errors…) that can only be dealt with statistically after the fact.

    Cherry-picked graphs covering a couple decades are an insult to the intelligence of WUWT readers. But then, it’s Monckton, after all – insults are his stock-in-trade.

    P.S. Someone may say, Show your evidence! But I’m not trying to rebut his argument, I’m showing that he doesn’t have one. Besides, anyone who has not taken the trouble to look at the ample, easily available evidence by now is not interested in truth, so it would be a wasted effort.

    • Kristi you need to read this:
      The primary experts that can review Mears/Wentz are Spencer/Christy.
      You come across as barking mad above so just stop it and retain some dignity.
      Anyway you’ll see the answer to your psychosis summarised here:
      “Judith Curry reflections:
      The climate models project strong warming in the tropical mid troposphere, which have not been borne out by the observations. The new RSS data set reduces the discrepancies with the climate model simulations.
      Roy Spencer’s comments substantially reduce the credibility of the new data set. Their dismissal of the calibration problems with the NOAA-14 MSU is just astonishing. Presumably Christy’s review of the original submission to JGR included this critique, so they are unlikely to be unaware of this issue. The AMS journals have one of the best review processes out there; I am not sure why Christy/Spencer weren’t asked to review. I have in the past successfully argued at AMS not to have as reviewers individuals that have made negative public statements about me (not sure if this is the case with Mears/Wentz vs Spencer/Christy)”.

      • “The primary experts that can review Mears/Wentz are Spencer/Christy” I don’t see why, but that is beside the point.

        Where in Monckton’s monologue is there reference to this, or any other plausible argument demonstrating the record was improperly changed? That’s my point. Monckton accuses scientists of willfully tampering with the temperature data to pursue an agenda: scientific misconduct. Even if Mears was in error, that doesn’t necessarily mean his motives were nefarious; the timing is not evidence.

        Scientific misconduct is a very serious accusation, yet it’s made around here often, and with little or no evidence. That to me demonstrates a desire to believe scientists lack integrity. I am not going to accept that just because a few researchers disagree with other researchers. Researchers have refuted Humlum – does that automatically mean he has no integrity, or has committed fraud? Where does it end? Debate is a good and necessary part of science.

        • If Ms Silber were not so relentlessly partisan, she would be a little more open-minded and a little less ready to shriek. Consider the RSS tampering. As very clearly shown on Professor Humlum’s telling graph, the data were left more or less unchanged until 2000, early in the Pause, and were then sMeared to introduce a significant warming that had not been present in the original, published data. In short, sMear, while sMearing climate skeptics as “denialists”, Adjustocened his data only for the period of the Pause.

          And what is one to make of the ludicrous “Tom” Karl of NOAA, who Adjustocened the ARGO dataset, the least bad ocean-temperature system we have, to make it fit the less inconvenient but also far less reliable temperatures measured at ships’ engine intakes or by buckets slung out from the deck?

          The truth is that no reliance can be placed on datasets tampered with by those who, like sMear and Karl, have publicly adopted an overtly hostile political stance towards the opponents of the thermo-totalitarianism to which Ms Silber unthinkingly subscribes.

          And it remains the fact that in some datasets, notably that of NASA GISS, first run by the appalling Hansen and now run by the dreadful Schmidt, a very large fraction of the total warming comes from adjustments, and not from the originally-reported measurements. The shrieking that arises from the climate-Communist camp every time I point this out shows that even the totalitarians are becoming embarrassed by the interminable and often unjustifiable tampering.

          • What justification would clipe give for Mr Karl’s Mannipulation of the ARGO bathythermograph record to wrench it into conformity with the far less accurate earlier shipboard measurements when ARGO – even after considerable data tampering – had still failed to show the desired rate of ocean warming?

    • Only in climatology would modern, state-of-the-art, designed for purpose, buoy systems be adjusted to ad hoc engine room water intake temperatures!

    • Keeping it simple for Kristi because, despite her writing ability, she is a simpleton. There are references with the details that you seek for why it’s done poorly. When it’s obviously done poorly but the researchers refuse to fix it because the results are what the researchers want, you have your evidence of fraud. No emails required, although there are plenty of those from ten years ago.

    • “Cherry-picked graphs covering a couple decades are an insult to the intelligence of WUWT readers. But then, it’s Monckton, after all – insults are his stock-in-trade.”

      That sounds like an insult.

    • That’s just it, real scientists don’t adjust the data. They adjust their theories to match the data.

      PS: As to cherry picked graphs that cover just a few decades, that’s all your side has Kristi.
      PPS: You haven’t shown that he doesn’t have an argument, all you’ve shown is that you don’t like the data he has presented.

      • That’s just it, real scientists don’t adjust the data. They adjust their theories to match the data.

        In this case the analysis is adjusted to account for calibration issues with the data, which is something scientists have to do on a regular basis.
        For example, that’s what Spencer and Christy had to do in 1998 when it was pointed out to them (by Wentz & Schabel of RSS) that they had failed to correct for the orbital decay of the satellites they were using.
        Prior to doing so they were showing a cooling of 0.05 K per decade; after the correction they showed an increase of 0.07 K per decade.
        No one is adjusting the data (the microwaves); they’re changing their analysis of that data based on changes in the performance of the satellite/sensor.
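        The arithmetic of that sign flip is easy to illustrate. The toy calculation below uses invented numbers (a hypothetical true warming of +0.007 K/yr masked by a spurious instrumental drift of −0.012 K/yr); the real correction was of course derived from the satellites’ actual orbital behaviour, not assumed.

```python
import numpy as np

years = np.arange(1979, 1999, dtype=float)
true_warming = 0.007 * (years - 1979)   # hypothetical real signal, K
drift = -0.012 * (years - 1979)         # hypothetical uncorrected drift, K
measured = true_warming + drift

# Trend of the raw analysis vs the drift-corrected analysis:
raw_trend = np.polyfit(years, measured, 1)[0]                 # negative
corrected_trend = np.polyfit(years, measured - drift, 1)[0]   # positive
```

        The underlying measurements are identical in both fits; only the analysis changes, and with it the sign of the trend.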

        • Spencer and Christy were not sceptics in 1998. You have the glaringly obvious fault in the recent adjustments by Mears, who wrote about his zeal for there being no pause – which is enough, if you can’t critique their methods, to know whom to trust.

    • You really are an obnoxious piece of work aren’t you? Consistently all the time for months in every post you make you’ve exhibited a shocking lack of honesty, knowledge, decency and just plain common sense. You should be proud looking at yourself in the mirror daily, I guess.

      • Not sure who Venter’s comment is addressed to, but it would be best if he were specific about the instances of “shocking lack of honesty” etc.

        • Sorry Lord Monckton, not against you. It was addressed in reply to Kristi Silber’s post. I’m in full agreement with your post and my response was to Kristi Silber’s fact free diatribe against you . I also realise that my post is a bit over the top and would be fine if the moderators choose to delete it.

  23. The chart fiddlers need to be careful. Imagine that the past temperatures are continually cooled and the present temperatures are continually warmed. The temperature charts will eventually show that the present has reached the dreaded 2 degrees above pre-industrial times. When no major adverse impacts are felt in the real world, we’ll all need to find something else to worry about.

  24. Christopher,

    I find your post to be another excellent expose’ of the intellectual dishonesty and unscientific bias of the climate “industry”.

    Thank you,


    • Many thanks to JimG1 and to Warren (below) for their very kind comments. I am not sure how much of the temperature tampering is dishonesty, but at the very least one can say that a large fraction of the imagined global warming of recent decades is a consequence of that tampering.

  25. Anthony please keep Temperature-tampering-temper-tantrums at the top for at least another 48 hours!
    It’s too good to be missed by anyone that visits WUWT.
    Thanks in advance . . .

  26. The caption to what I believe is Fig. 8 refers to RSS when I think UAH is intended. The same transposition occurs in the immediately following para. Splendid in all other respects.

    • Moderators, please fix these silly errors on my part in the head posting, and many thanks to Mr Forbes-Laird for pointing them out.

      [Figures 1-15 are now numbered, but it is not clear what needs to be corrected. .mod]

      • Most grateful to the moderators. As Mr Forbes-Laird has rightly pointed out, the caption to Fig. 8 refers to RSS when it should refer to UAH. The same error is made in the immediately following two-line paragraph. Please fix these two errors by replacing “RSS” with “UAH”. Thank you very much.

  27. Between ERBE and water’s inability to absorb long wave, it is clear that sensitivity is low, feedbacks are negative and most of the warming is due to an increase in insolation.

  28. Here is an article that supports Christopher Monckton’s take on “the aerosol fudge-factor”:

    Abstract: “Revised global model simulations predict a 35% reduction in the calculated global mean cloud albedo forcing over the Industrial Era (1750–2000 CE) compared to estimates using emissions data from the Sixth Coupled Model Intercomparison Project. An estimated upper limit to pre-industrial fire emissions results in a much greater (91%) reduction in forcing.”

    “It has been widely assumed in global climate models that aerosol emissions from fires in the Pre-Industrial were lower than in the Present Day, based on a misconception that total fire emissions have increased with human population density. Globally, most fire ignitions are caused by humans, which makes a positive scaling of total burned area, and hence total fire emissions, with human population density logical at first. However, recent analysis of global fire occurrence shows that, at a global scale, burned area declines with increasing population density”

    “The inclusion of more realistic Pre-Industrial fire emissions in climate and Earth system models is likely to cause a general reduction in the magnitude of the aerosol radiative forcings that they simulate, although limitations to cloud droplet concentrations that are imposed in some models will influence how they respond. Any subsequent adjustment to climate model processes through tuning, while still maintaining agreement with historical global mean temperature changes, will affect the climate sensitivity of the models, and hence future climate projections.”


    • Most grateful to Science or Fiction for his reference to a paper indicating that the aerosol fudge-factor has been overdone. This may well be of use to us as we wrestle with reviewers.

  29. I am on a whole new approach to this, but I subscribe to looking at the absolute scale of change and controls rather than digging around in the insignificant noise and confining discussion to the noise level by design.

    Lamar Alexander’s “Going to War in Sailboats” or Indiana Jones “They’re digging in the wrong place!”

    The Earth’s climate has maintained a clearly well-controlled equilibrium for the last 1 million years of 100Ka ice ages, and a 41Ka ice-age equilibrium before that, switching between two limiting states in a narrow band of a few degrees, say 8K on average. The switching impulse events are synchronous with the solar orbital cycles of the Earth.

    The tiny range of a few degrees, in the roughly 300K above the absolute zero of the space we travel through, is tightly controlled by water vapour evaporation from the oceans, which can both warm and cool the planet, depending on absolute temperature and relative humidity. The point I make is that this is a very strongly fed-back system with a massive range of control between narrow limits, achieved by modifying solar insolation, conductive cooling and increasing water vapour GHE at the lower limits.

    Hence the few W/m^2 debated here are largely noise, any significant amount of which oceanic evaporation takes care of, with some delay, as part of the adaptive atmospheric feedback that depends upon this oceanic response. And nothing about global climate is obvious on periodicities of less than lifetimes, and certainly careers.

    The system oscillates between two limiting states, caused by a 7Ka interglacial warming perturbation synchronous with the 100Ka Milankovitch cycle, from which it then cools gradually back to its preferred and stable ice-age state, ready for the next 7Ka warming event. I suggest this warming effect is not stopped at the interglacial peak, but is rather contained by cloud formation that imposes the higher limit of the range, which the clouds maintain until the effect has dissipated and cloud cover reduces, effectively entering the next neoglacial.

    The lower stable ice-age state is warmed by the sun plus atmospheric warming of the surface through the insulating effect of the atmosphere, plus some H2O GHE that varies dynamically with temperature; I don’t have a figure for this low-end part of the water vapour control.

    The upper interglacial limit is established by strong cooling feedback caused by increasing cloud albedo as temperatures rise. The scale of this effect is dominant: 50W/m^2 of albedo currently. To this we add the 90W/m^2 of evaporation/transpiration cooling of the oceans at interglacial temperatures (now), releasing the water vapour that carries much more heat to the absolute zero of space as it warms than the gaseous air can.

    Point? Natural controls create a stable range for cooling and warming extremes, and impose strong negative feedback in a constantly changing balance within the dynamic system “GAIA”.

    The other effects of trace gasses, volcanoes, etc are a trivial few W/m^2, and the dominant feedbacks need change only slightly to balance the system at a new equilibrium state, with water vapour in the appropriate form.

    In particular, during interglacial warming, the warming effect keeps coming, at a whole 0.001K pa, but the clouds simply shut it down when it gets close to the natural upper limit (precipitation evidence supports this). This occurs while CO2 is still increasing due to the lagging release from warming oceans. But this has no runaway effect, as the overall warming is firmly shut down at the interglacial peak by overwhelming cloud control. The clouds have it. Runaway is simply a denial of the big picture of planetary climate control by introverted convention-bubble climate scientists whose world is concerned only with proving bad things, in theoretical computer models, about the trace gasses we produce, to support CO2 taxes and bogus renewable industries – not with how the real big-picture climate system looks after us.

    Further, investigating perturbations on a par with noise within the natural range of such a system – where the natural controls are so much more powerful than the tiny perturbations that they are easily balanced out by the control system, and it all happens on timescales of lifetimes – seems utterly pointless. The Earth is not a static system that simply warms or cools as heat is added or subtracted. It has very smart active lagging, with added water constantly at work creating strong limiting responses to variation, with a control range of around half the total solar insolation of 340W/m^2. Not a few W/m^2.

    It can look after itself, thanks, and doesn’t need our half-baked climate pseudo-scientists suggesting ways to mess with it for their pointless grants, scrutinising insignificant changes in their CO2 blinkers, unaware of the massive scale of the dominant global control system, or deliberately concealing it.

    Having said that, if we could heat the oceans steadily with enough water-cooled power stations to deliver 150TW to the oceans, as well as 75TW of incremental useful energy, that could equate to the natural interglacial-event energy supply rate to the oceans and keep us out of the next ice age.

    I estimate today’s global generation at 2.5TW, mainly American; Europe uses HALF the energy per capita. 400M Americans, 6% of the world’s population, profligately consume 40% of all electrical energy from their 1TW of generation; the developing world uses MUCH less. So getting everyone up to US levels today would require 17TW. For 11B people at the US energy-use level, 26TW.

    We could even deliberately over-generate at three times this level to maintain the interglacial steady state, and cut it back to what we need at the next interglacial, in 80Ka.

    In other units, a steady extra 0.5W/m^2 direct forcing of the liquid oceans would easily maintain the interglacial by replacing the fading interglacial pulse, not in the atmosphere, which would simply cancel such direct atmospheric heating out by insolation reduction.
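    (As a unit-conversion check on that figure – using the standard ocean surface area of roughly 3.6 × 10^14 m², which is my own input, not the commenter’s – 0.5 W/m² over the oceans corresponds to a continuous power of about 180 TW, the same order as the 150 TW mentioned earlier.)

```python
OCEAN_AREA_M2 = 3.61e14      # approximate global ocean surface area, m^2
FORCING_W_PER_M2 = 0.5       # the steady forcing suggested above

# Convert a per-square-metre forcing over the oceans into terawatts:
total_tw = FORCING_W_PER_M2 * OCEAN_AREA_M2 / 1e12   # about 180 TW
```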

    The problem then is how we stop using it when the natural effect returns in 80Ka or so – if we are still around in 80Ka, 80 times our current tenure as Homo sapiens?

    Or would even more clouds form to manage the problem and stop this relatively small effect in the rebalanced system, which now prefers its cloud-controlled upper-limit state to the lower, GHE-limited ice-age state?

    Either way we will all be long dead, and future scientists will be mocking the CO2 and renewable-energy climate-change protection racket of the early 21st Century, justified by academic priests for their own gain and for the massive profit of a fast-subsidy-buck renewable energy industry. A transparent fraud, exploiting a primitive, fearful public, kept ignorant of scientific reality by their educational institutions, and using the confidence tricks of snake-oil government promoted by dishonest international elites for their own profit. Global legalised crime. CEng, CPhys, MBA.

    • It seems, since I’ve been around for a while, that the predictions of the past were exaggerated, and no one has apologised for being a bad guesser. Wrapping stuff up with formulae and equations cannot hide the reality of our own experience. It’s got slightly warmer than it was before the 1970s in the northern hemisphere. Whether this is down to CO2 is unproven. Politicians jumped onto the bandwagon, as “green” energy provides a rich seam of feel-good taxation while saving the planet. Everyone will look stupid if the emperor’s new clothes were in fact imaginary, so the groupthink of the believers has to go on… until reality sets in and all the lemming scientists agree with a new theory and tell everyone they never really believed CO2 was the driver of global warming and knew it was just a theory, not actual scientific fact. I hope I’ll live that long!

      Drawing graphs to show an upward trend continuing at the same rate is pointless, as all the temperature graphs of the past few centuries were up and downy and no particular trend can be established without fiddling the start date and fiddling the number of stations used to get a world average. If we all agree a start date of 1900 or some such, and the number of stations is a constant, and all of the stations are away from buildings, airports, factories emitting smoke etc., then we can compare apples with apples and pears with pears. Satellite data – the new accurate data – is proving not so accurate as orbits decay, and ocean buoys and Stevenson screens mysteriously disappear, so only the ones left can be compared… the lost ones cannot count in “corrected” data… it’s a muddle, and muddles, however dressed up mathematically, are still garbage-in-garbage-out muddles.

      What we see is a lot of people moving goalposts, peeling apples and comparing them with pears, and shuffling datasets together for reasons that are simply inexcusable other than they got the “wrong” result if they didn’t use data that wasn’t adjusted or shuffled about.

      It’s the sun, stupid, that governs the planet’s climate, always has and always will.

    • Brian’s comment is fascinating. He is of course right that the Earth’s climate is near-perfectly thermostatic, and that in the past the solar cycles had more of an impact on global temperature than anything else. His suggestion that we should generate very large amounts of power to stave off the next Ice Age is intriguing, but is he sure that a mere 75 TW would be enough?

  30. In accounting you go to jail for adjusting the past.

    And accounting was invented to avoid taxes. Climatology is hard at work to increase taxes.

    Could this explain why accounting fraud gets you jail time while the same practice in climatology gets you the Nobel?

    • Ferdberple,
      When I was in the corporate world my CFO and I argued regularly over the accounting numbers. As an engineer I looked at the numbers as if they were factual or not. He assured me that accounting is an art, not a science. I am sure he is now retired, but I don’t think in jail.

  31. Here’s a pertinent link:



    “A recent comparison (1) of temperature readings from two major climate monitoring systems – microwave sounding units on satellites and thermometers suspended below helium balloons – found a “remarkable” level of agreement between the two.
    To verify the accuracy of temperature data collected by microwave sounding units, John Christy compared temperature readings recorded by “radiosonde” thermometers to temperatures reported by the satellites as they orbited over the balloon launch sites.

    He found a 97 percent correlation over the 16-year period of the study. The overall composite temperature trends at those sites agreed to within 0.03 degrees Celsius (about 0.054° Fahrenheit) per decade. The same results were found when considering only stations in the polar or arctic regions.”

    end excerpt

    The balloon data verified the accuracy of the satellite data before, so since controversial changes have been made to both satellite datasets, we should do another comparison of the balloon data to the satellite data. Let’s see which one comes closer to the balloon data.

    At the time of the comparison, the satellite charts looked nothing like the bogus, bastardized surface temperature charts: both satellite charts showed 1998 as the second warmest year in the satellite record, behind only 2016 (0.1C warmer), whereas none of the bogus, bastardized Hockey Stick charts show that, instead showing 1998 as a much cooler “also-ran”.

    If the balloon data confirms the satellite data then that means it does *not* confirm the bogus, bastardized Hockey Stick charts because they look nothing like the satellite charts (at least not during the time of the test).

    We need another balloon data comparison test to shut up the naysayers.
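    For what it’s worth, the kind of comparison Christy ran is straightforward to reproduce in outline. Here is a minimal sketch with made-up series (the numbers are illustrative only, not real radiosonde or satellite data): correlate two monthly anomaly records and compare their least-squares decadal trends.

```python
import numpy as np

# Illustrative only: two noisy series sharing one underlying trend stand in
# for the satellite and radiosonde records; none of these numbers are real.
rng = np.random.default_rng(0)
months = np.arange(192)                 # a 16-year monthly record, as in the study
trend = 0.001 * months                  # underlying trend: 0.12 C/decade
sat = trend + 0.05 * rng.standard_normal(192)      # "satellite" series
balloon = trend + 0.05 * rng.standard_normal(192)  # "radiosonde" series

r = np.corrcoef(sat, balloon)[0, 1]     # Pearson correlation of the two series
slope_sat = np.polyfit(months, sat, 1)[0] * 120      # C per decade
slope_bal = np.polyfit(months, balloon, 1)[0] * 120
print(f"correlation {r:.2f}, trend difference {abs(slope_sat - slope_bal):.3f} C/decade")
```

    The published figures (97 percent correlation, trends within 0.03 C/decade) would correspond to the two numbers printed here, computed over the real records.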

  32. Isn’t there a published paper that shows almost every discovery made “proving” AGW occurred only after data adjustments?

  33. The largest change came in March 2013, by which time my monthly columns here on the then long-running Pause had already become a standing embarrassment to official climatology.

    Sorry to play the pedant again, but as far as I can tell the first of Lord Monckton’s monthly columns on the Pause came a year later, in March 2014.


    Though he used similar graphs back in September 2013.

    Before March 2013 the claim was that there had been no warming in the RSS data for 22 years, so he was clearly not talking about the same definition of a pause.

    • Lord M continued to use the RSS TLT data (v3) for a good while even after Carl Mears of RSS said publicly that it contained a known cooling bias. Now Lord M acts as if the introduction of RSS v4 came as a bolt from the blue. AFAIK Lord M never once made mention of Mears’s caveat during his RSS ‘pause’ series.

      • Mr Rice is incorrect. I noted on several occasions that Dr Mears preferred the terrestrial datasets to his own.

  34. Should we cynically assume that these adjustments – up for RSS, GISS, NCEI and HadCRUT4, and down for UAH – reflect the political prejudices of the keepers of the datasets? Lüning and Vahrenholt can find no rational justification for the large and sudden alteration to the RSS dataset so soon after Ted Cruz had used our RSS graph of the Pause in a Senate hearing.

    A good reason for their failure would be that the change to the RSS method was made before Ted Cruz used the graph in the Senate hearing.
    The paper was received by the journal in October 2015.
    Also Lüning and Vahrenholt appear to be unaware that the database is made up from a series of different satellites with different sensors and variation of orbits over time.

    • See upthread, where there is a good discussion of the characteristics of the different satellites.

      • Really, where? That doesn’t address the point that L & V don’t discuss the nature of the satellite record, but treat it as if it were a continuous record with no changes of calibration, etc.
        The Mears and Wentz paper gives a very good discussion of the satellite issues.

  35. “The denialists really like to fit trends starting in 1997, so that the huge 1997-98 ENSO event is at the start of their time series, resulting in a linear fit with the smallest possible slope”


    Perhaps the “denialists” learned the trick from the “warmists”, who seem to use 1979 as a starting point for every temperature trend they observe, ending it with the huge 1997-98 ENSO event, resulting in a linear fit with the largest possible slope.

    • See the head posting, and in particular the graph showing that the temperature trends on two datasets starting before and after the 1997 el Nino follow straight lines, proving that the Pause had endured long enough for the influence of the 1997 el Nino on the trend to have become negligible.

      • I’d agree with that. What caused the appearance of a pause has as much to do with a string of about 5 or 6 very warm years from 2001 onward, combined with a couple of cold La Niña years later.

        But wherever you place it, the pause will always be a cherry-picked artifact until you can demonstrate a statistically significant reason for starting a trend at a given point.

        • Bellman is on to a loser, as usual. The work of Professor Brown and Werner Brozek, shown in the head posting, demonstrates that whether one included or excluded the Great El Nino of 1997-8 the temperature trend in the subsequent couple of decades was the same.

          The NOAA State of the Climate report of 2008 plainly stated that if temperatures were unchanged for 15 years or more a discrepancy between prediction and reality would have occurred. Nearly all of the datasets, before the tamperings that occurred towards the end of the Pause, showed approximately 15 years without warming. RSS showed 18 years 9 months without warming before another el Nino brought the Pause to an end. UAH showed 18 years 8 months without warming. Given that one-third of Man’s entire influence on temperature had occurred in that period, that was a remarkable discrepancy between prediction and reality.

          In most of my pieces on the Pause, it was explicitly stated that the start date was derived as the earliest date from which a zero trend could be obtained.

          The significance of a long Pause is that it dampens the longer-term warming rate.

          • Bellman is on to a loser, as usual. The work of Professor Brown and Werner Brozek…whether one included or excluded the Great El Nino of 1997-8 the temperature trend in the subsequent couple of decades was the same

            I was agreeing with you on that point.

            The significance of a long Pause is that it dampens the longer-term warming rate.

            No it doesn’t. Even in the UAH data the trend has increased over the pause period.



          • Bellman is, as usual, so desperate to find fault that it is wrong again. The effect of the 1998 el Nino, as Fred Singer has pointed out, was to lift the entire temperature regime to a higher baseline. If the warming rate had continued from that baseline as before the uplift, it would have led to a far higher temperature than has occurred; the Pause, therefore, dampens the long-run warming rate.

          • The effect of the 1998 el Nino, as Fred Singer has pointed out, was to lift the entire temperature regime to a higher baseline…

            This makes no sense to me. How could a single el Niño cause the next 20 years to be warmer, especially as all the warmth had disappeared within the following two years?

            However, if this is acknowledging that the 21st century was on the whole somewhat warmer than the late 20th century, I’d agree and say that’s why only showing the trend of the warmer period and calling it a pause is at best disingenuous.

            if the warming rate had continued as before the uplift, would have led to a far higher temperature than has occurred owing to the Pause, which, therefore, dampens the long-run warming rate.

            I think what you are trying to say here is that underlying warming should include the warming of the 1998 spike. This would mean that you were wrong to claim that there had been an 18-year pause starting in 1997, as temperatures were continuing to rise until after the el Niño.

            But even if you allowed that, the central problem remains, you haven’t demonstrated any statistically significant change in the underlying rate of warming caused by the pause. UAH 6 shows around 1.5°C / century warming to 2000, up from less than 1°C before the El Niño. The rate of warming to present is around 1.3°C / century. This difference is not statistically significant.
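            The significance comparison alluded to here can be sketched as follows (synthetic data, not UAH; the plain OLS standard error shown ignores autocorrelation, which would widen the uncertainty further):

```python
import numpy as np

def trend_with_se(anoms):
    """OLS trend of a monthly series in C/century, with its standard error.
    (Plain OLS: ignores autocorrelation, which would widen the error bars.)"""
    n = len(anoms)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, anoms, 1)
    resid = anoms - (slope * x + intercept)
    sxx = np.sum((x - x.mean()) ** 2)
    se = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)
    return slope * 1200, se * 1200      # per month -> per century

# Synthetic 20-year series with a built-in 1.3 C/century trend (not UAH data).
rng = np.random.default_rng(1)
x = np.arange(240)
series = (1.3 / 1200) * x + 0.15 * rng.standard_normal(240)
slope, se = trend_with_se(series)
print(f"{slope:.2f} +/- {se:.2f} C/century")
```

            A difference such as 1.5 versus 1.3 C/century is "not statistically significant" when it falls well inside error bars of this size.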

          • If Bellman thinks I should not have mentioned the Pause, why has he not asked IPCC to correct railroad engineer Pachauri’s statement in February 2013 that the Pause existed?

          • As always a direct quote and source would be helpful. What did Dr Pachauri actually say about the pause? Was he talking about the same pause as you?

            I did try googling this, but all I found were claims based on a claim reported in an Australian newspaper, with no direct quotation.

            If he was playing the same statistical games as you, looking back as far as possible to find a negative trend, then yes, it’s just as meaningless as your monthly posts. I’m not sure why you think I have any say in what goes on in the IPCC though.

            And just for the record, I’ve never suggested that you shouldn’t have brought up the pause; I would just like some evidence that it is a real thing, or that it has any impact on the overall rate of warming.

  36. Have Lord Monckton and company ever considered the work of Dr. Nic Nikolov? He and another scientist recently published a study demonstrating how CO2 is a very minor force in terrestrial climate. He’s easily found on Twitter.

    An interesting anomaly appeared from my work, which attempts to extract the climate signal from global temperature. My approach was to look at the years/months which show the least amount of ENSO/volcanic/AMO/PDO effects. In doing so I found that RSS 4.0 shows .27 C of warming and UAH 6.0 shows .28 C of warming between 1980 and 2018.

    The difference between the two satellite datasets completely disappears. Here are the years/months I used, but the only important ones are the first and the last.

    RSS 4.0:
    April-August 1980-81 14.5 C (58.1F) .03C
    April-August 1990….. 14.6 C (58.1F) .05C
    April-August 1995-96 14.6 C (58.2F) .10C
    April-August 2001-02 14.8 C (58.6F) .31C
    April-August 2007….. 14.7 C (58.5F) .24C
    April-August 2014….. 14.8 C (58.6F) .30C
    April-August 2018….. 14.8 C (58.6F) .30C

    UAH 6.0:
    April-August 1980-81 14.4 C (58.0F) -.06C
    April-August 1990….. 14.5 C (58.1F) .02C
    April-August 1995-96 14.6 C (58.2F) .09C
    April-August 2001-02 14.7 C (58.4F) .19C
    April-August 2007….. 14.7 C (58.4F) .18C
    April-August 2014….. 14.7 C (58.4F) .17C
    April-August 2018….. 14.7 C (58.5F) .22C
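    For the record, the quoted totals follow from the first and last entries of each list (taking the first list as RSS 4.0 and the second as UAH 6.0, matching the stated totals):

```python
# First list taken as RSS 4.0, second as UAH 6.0 (matching the stated totals).
rss_first, rss_last = 0.03, 0.30
uah_first, uah_last = -0.06, 0.22
print(round(rss_last - rss_first, 2))   # 0.27 C of RSS warming, 1980-2018
print(round(uah_last - uah_first, 2))   # 0.28 C of UAH warming, 1980-2018
```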

    • Most interesting. It would be excellent to get these conclusions into a reviewed journal of standing.

  38. Lord Monckton continues to assert “Using the corrected value of net anthropogenic forcing, the system-gain factor falls to 1.13, implying Charney sensitivity of 1.13 x 1.04, or 1.17 K”, despite the fact that at https://wattsupwiththat.com/2018/08/15/climatologys-startling-error-of-physics-answers-to-comments/#comment-2433180 I demonstrated the error in that calculation, which was to ignore any change in feedback between 1850 and the time of CO2 doubling (say 2100). Here is my proof again. (LM made a promise to respond to this criticism, but it sure is a long time a’coming. How long, oh Lord, how long?)

    Regarding notation, I shall use the equation E = R/(1-f) at the head of this thread, rather than the notation in his document error-summ.pdf (“the PDF”), though I do take the values of these variables exclusively from that document. So R is the “reference temperature”, including GHG forcing but no feedbacks, f is the feedback ratio, and E is the equilibrium temperature after feedbacks have applied and settled down. The PDF also uses the variable A = 1/(1-f). I use the qualifier ‘1’ to correspond to Monckton’s “Equilibrium 1” date of 1850. Thus,

    E1 = R1/(1-f1)

    where R1 = 254.8K (called T_{r1} in the PDF), E1 = 287.55K (called T_{q1}), f1 = 1-R1/E1 = 0.1139.

    I then use the qualifier ‘2’ to correspond to the “Equilibrium 2” date of 2011. Thus,

    E2 = R2/(1-f2)

    where R2 = 254.8+0.68 = 255.48K, E2 = 287.55+1.02 = 288.57K, f2 = 1-R2/E2 = 0.1147.

    Now, my “dissection” to arrive at E2-E1 was:

    E2/E1 = (R2/(1-f2)) / (R1/(1-f1))

    E2-E1 = E1[ (R2-R1)/R1 + (f2-f1)/(1-f2) + [(R2-R1)(f2-f1)]/[R1(1-f2)] ]

    I can spell that out in easier steps if that is required. I then drop the last term because it is the product of two small first-order (but important) quantities:

    E2-E1 = E1[ (R2-R1)/R1 + (f2-f1)/(1-f2) ] (*)

    This equation (*), which is new as far as I can tell, establishes that a change in equilibrium temperature E arises from two sources, an ‘R’ part and an ‘f’ part. From the figures above we have the two parts being

    E1(R2-R1)/R1 = 0.77K
    E1(f2-f1)/(1-f2) = 0.26K
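    As a quick numerical check, using the rounded values quoted above (f1 = 0.1139, f2 = 0.1147):

```python
# Values as quoted in the comment above.
R1, E1 = 254.8, 287.55
R2 = R1 + 0.68
f1, f2 = 0.1139, 0.1147

r_part = E1 * (R2 - R1) / R1          # the 'R' contribution
f_part = E1 * (f2 - f1) / (1 - f2)    # the 'f' contribution
print(round(r_part, 2), round(f_part, 2), round(r_part + f_part, 2))
# prints: 0.77 0.26 1.03
```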

    Thus E2-E1 = 1.03K which agrees with the PDF’s 1.02K to within rounding error, and corroborates the algebraic manipulations which led to it. I then move on to consideration of “Equilibrium 3”, for a doubling of CO2 from 1850 values. In previous comments I reused the qualifier ‘2’ for this, following the lead of the PDF itself, but it will be clearer if I use ‘3’ instead here.

    Thus R3 = R1 + 1.04 = 254.8+1.04 = 255.84K.

    Now, the said doubling has not happened yet, so we don’t know what f3 and E3 will be. But if we accept the Monckton et al view of feedback, we know that E3 = R3/(1-f3), and therefore by (*) with ‘2’ replaced by ‘3’,

    E3-E1 = E1[ (R3-R1)/R1 + (f3-f1)/(1-f3) ] (**)

    In Monckton’s preceding comments at 4.16am and 4.34 am, he made the assertions “the system-gain factors for 1850 and 2011 are so near-identical that one may safely use that value in deriving Charney sensitivity” and “he would rather continue relying on the error-prone delta-value system-gain equation exclusively used in climatology” respectively.

    The first assertion is an understandable error, because the values f1 = 0.1139 and f2 = 0.1147 do look very close. But a consequence of Monckton’s desire _not_ to use delta values is that when one multiplies the small difference f2-f1 (divided by 1-f2 in (*)) by the largeish number E1 = 287.55K, one gets a not insignificant number, 0.26K, which provides a contribution of one quarter of the total E2-E1.

    The second assertion is just a misunderstanding of my mathematics, which I hope is cleared up by the present elaboration.

    Now, the nub of the matter is the value of S = E3-E1, the total equilibrium warming from a doubling of CO2. The PDF uses the value

    1.17K = 287.55(255.84-254.8)/254.8 = E1(R1-R3)/R1

    Thus, comparing with (**), it has been assumed that f3 = f1, precisely in line with Monckton’s first assertion above. Yet, if it were the case that f2 = f1, then E2-E1 would be only 0.77K, which is quite a large discrepancy from the PDF’s quoted 1.02K, and one which could not be overlooked. Since f2-f1 is not zero, but rather the small amount 0.0008 which has these significant consequences, it seems unwise to assume that f3-f1 is zero.

    In my earlier comment I argued that f3-f1 should be at least f2-f1, and more likely twice as much because it applies to a whole doubling of CO2 rather than a half doubling. Hence I would add 0.52K to that 1.17K to get 1.69K.

    And some climate scientists may find arguments for f3-f1 > 2(f2-f1), while others may find arguments for it being smaller. But, if one accepts this Monckton et al view of sensitivity (and I have expressed reservations), then the argument is all about the value

    f3 – f1

    • In response to Rich, the reference and equilibrium temperatures in 1850 were 254.8 K and 287.55 K respectively, giving a system-gain factor 1.1285 (the ratio of equilibrium to reference temperature).
      The reference and equilibrium sensitivities from 1850-2011 were 0.68 K and 1.02 K respectively, assuming that the very uncertain official mid-range estimates are correct, giving a system-gain factor (using the delta-value equation) of 1.1295.
      These deltas should in fact be adjusted to cancel out the aerosol fudge factor, giving 0.85 K and 0.95 K respectively, for a system-gain factor 1.12.
      If, however, one were to assume that there is a real growth in the system-gain factor, and that that growth should be allowed for, giving a system-gain factor 1.1305, the Charney sensitivity would be the product of the enhanced system-gain factor 1.1305 and the reference sensitivity 1.0363 K to doubled CO2, giving Charney sensitivity of 1.17 K, as before.
      For system-gain factors 1.1315, 1.1325, 1.1335 respectively, Charney sensitivities would be 1.17 K, 1.17 K and 1.17 K respectively.

      • I thank Lord Monckton for finally replying. I am about to go on a short holiday, after which I shall study LM’s argument. I note, however, that he has not referred to my mathematics above, which is incontrovertible. I am not sure if this is due to a lack of understanding of it, or a recognition that it encompasses an inconvenient truth for him and his co-authors.

        • Rich whines that I had “made a promise to respond” to his “criticism, but it sure is a long time a’coming. How long, oh Lord, how long?”, and yet, rather than replying at once to my response, he wanders off on “a short holiday”. This double standard is regrettable. Rich should understand that, like him, I do other things than climate, and I am under no obligation to reply immediately or at all to what he calls his “criticism”.

          Stripped of a lot of unnecessary and ludicrously roundabout math, which still contains several errors, and on which I do not propose to waste any time, Rich is saying that there is a difference of 0.001 between the system-gain factors A(1) for 1850 and A(2) for 2011. If he had read the original head posting to which he was responding, I had taken explicit account of this difference by pointing out that, using official climatology’s delta-value system gain equation, A(2) would be 1.5 rather than the 1.13 we had derived using the absolute-value equation. This would give Charney sensitivity 1.55 K (not far off his 1.7 K) rather than 1.17 K.

          If Rich had read the present head posting, he would realize that official climatology’s mid-range estimate of the net anthropogenic forcing is just that – an estimate – and that that estimate had been artificially reduced by the introduction of an over-large negative aerosol forcing, which Professor Lindzen has justifiably called a “fudge-factor”. Upthread here, he will find a reference to a recent peer-reviewed paper discussing this problem: and IPCC’s forthcoming report on the hydrosphere and cryosphere may be revisiting it as well. Removing just two-thirds of that fudge-factor, as the head posting makes clear, gives A(2) = 1.13 even using the delta-value system-gain equation, implying 1.17 K Charney sensitivity.

          Among the many mistakes made by Rich in his mathematical trip round the houses is that he has attempted to apply the system-gain factor for 2011 to a doubling of CO2 compared with 1850, mixing and matching his variables to suit what appears to be a preconceived but erroneous notion. This gets him into the following mess, for instance:

          1.17K = 287.55(255.84-254.8)/254.8 = E1(R1-R3)/R1

          What I think he means is

          1.17K = 287.55(255.84-254.8)/254.8 = E1(R2-R1)/R1

          The central point he has missed is that far greater uncertainty arises if one uses the delta-value equation than if one uses the absolute-value equation, where even quite large absolute variances in the values of reference and equilibrium temperature make very little difference to the value of the system-gain factor.

          Finally, even if Rich’s 1.7 K Charney sensitivity were correct, that would still be only half of the CMIP5 models’ current mid-range Charney sensitivity 3.4 K.

          • OK, I’m back after a delay, and now ready to tap in the last nail.

            The executive summary is that Monckton’s quoted sensitivity of 1.17K in the presence of increasing feedback does not satisfy, in a material way, his own basic equations.

            But first let us review the bidding across papers and blog threads. Monckton et al’s draft paper proposes that Earth’s mean temperature can be modelled by E = R/(1-f) where E is equilibrium temperature including feedbacks, R is “reference” temperature including greenhouse gases but no feedbacks, and f is the feedback ratio. They also use A for the value 1/(1-f) to represent the system gain factor. In the thread https://wattsupwiththat.com/2018/08/15/climatologys-startling-error-of-physics-answers-to-comments , at a comment dated August 19 2:45pm, I objected to the way that Lord Monckton was calculating the sensitivity to a doubling of CO2, E3-E1, where ‘1’ refers to the year 1850 and ‘3’ refers to the future date at which 1850’s CO2 has been doubled. Since my comment was unanswered, I repeated it at https://wattsupwiththat.com/2018/08/21/temperature-tampering-temper-tantrums/#comment-2436801 on this thread, and Monckton did then answer at August 23 3:31pm and at August 24 7:38pm, to which I am responding in reverse order.

            Response to Monckton’s August 24 7:38pm:

            I am sorry that Lord Monckton thinks I am applying double standards regarding speed of reply. The difference is that I did reply at once to LM’s response, just not in the fullest terms, whilst giving an explanation for the delay. I wish to make it clear that I am not using “official climatology’s delta value system gain equation”, but Monckton’s own equation. As to talk of aerosols, whilst interesting, it is not germane to the matter at hand, which is the purely mathematical question of the correct calculation of sensitivity = E3-E1 given R1, R3, f1, f3. I do, though, agree with Monckton’s sentiment that a value of 1.7K is only half of the CMIP5 mid-range, and in my view does not appear alarming.

            Response to Monckton’s August 23 3:31pm:

            Despite his having previously asked me to supply variables, equations and values in a concise form, Monckton’s own reply contained only values. To help the reader, at the bottom of this comment I have reproduced Monckton’s reply with variable names too, thereby showing that his

            1.17K is derived from (R3-R1)/(1-f3) = A3(R3-R1)

            However, as noted in my earlier detailed comment, that expression is incorrect for E3-E1. For,

            E1 = R1/(1-f1)
            E3 = R3/(1-f3) = A3 R3

            So it is true that if f1 = f3 then E3-E1 = (R3-R1)/(1-f3) as claimed, but not otherwise – the equation (**) I gave supplies the correct value. Without using (**) we can calculate E3-E1 directly. E1 has been established as 287.55K, R3 is R1+1.04 = 254.8+1.04 = 255.84, A3 is (putatively) 1.1305 so

            E3 = 255.84(1.1305) = 289.23K
            E3-E1 = 1.68K

            Notice that 1.68K, the sensitivity = equilibrium temperature increase after doubling CO2, is a rather larger number than the 1.17K which Monckton wrote down four times in his reply.

            Here is (**) again, but converted to a new (***) to take advantage of the equation E1/R1 = 1/(1-f1):

            E3-E1 = (R3-R1)/(1-f1) + E1(f3-f1)/(1-f3) (***)

            Like (**), Equation (***) is incorrect by a tiny missing term. Now, we can calculate the second term here, given that E1 = 287.55K, f1 = 1-1/A1 = 1-1/1.1285 = 0.11387, f3 = 1-1/1.1305 = 0.11544,

            E1(f3-f1)/(1-f3) = 287.55(0.00157)/(1-0.11544) = 0.51K

            and this agrees with the discrepancy between the correct value 1.68K above and Monckton’s 1.17K.

            QED, as they say.

            (Note that for those preferring A’s to f’s (***) can be rewritten as E3-E1 = A1(R3-R1) + (A3-A1)R1.)
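            These figures are easy to check numerically from the quoted inputs A1 = 1.1285 and A3 = 1.1305:

```python
# Inputs as quoted: A1 and A3 are the 1850 and doubled-CO2 system-gain factors.
R1, E1 = 254.8, 287.55
R3 = R1 + 1.04
A1, A3 = 1.1285, 1.1305

E3 = A3 * R3                        # equilibrium temperature after doubling
sensitivity = E3 - E1               # ~1.68 K
pdf_value = A1 * (R3 - R1)          # ~1.17 K, i.e. assuming f3 = f1
correction = (A3 - A1) * R1         # ~0.51 K, the disputed extra term
print(round(sensitivity, 2), round(pdf_value, 2), round(correction, 2))
# prints: 1.68 1.17 0.51
```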

            Here is a request for readers – if you agree with my analysis then please hit the +1 button. Not that mathematics is done by democracy of course 🙂

            So here is my annotated version of Monckton’s reply; I have used [] for new text, and deleted the bit about aerosols which, whilst of some interest, is not germane to the discrepancy at hand:

            //In response to Rich, the reference and equilibrium temperatures in 1850 were [R1=] 254.8 K and [E1=] 287.55 K respectively, giving a system-gain factor [A1=] 1.1285 (the ratio of equilibrium to reference temperature).

            The reference and equilibrium sensitivities from 1850-2011 were [R2-R1=] 0.68 K and [E2-E1=] 1.02 K respectively, assuming that the very uncertain official mid-range estimates are correct, giving a system-gain factor (using the delta-value equation) of [A2=] 1.1295.

            If, however, one were to assume that there is a real growth in the system-gain factor, and that that growth should be allowed for, giving a system-gain factor [A3=] 1.1305, the Charney sensitivity would be the product of the enhanced system-gain factor [A3=] 1.1305 and the reference sensitivity [R3-R1=] 1.0363 K to doubled CO2, giving Charney sensitivity of [A3(R3-R1)=] 1.17 K, as before.

            For system-gain factors [A3=] 1.1315, 1.1325, 1.1335 respectively, Charney sensitivities would be 1.17 K, 1.17 K and 1.17 K respectively.//

          • Well, again it is nigh on four score and seven hours without a reply from Lord Monckton. (Alas I have to go to work rather than waiting for that hour.)

            But at least I now know that there are good reasons for this. For example, “Rich should understand that, like him, I do other things than climate”. Well, that is very good, and I hope Lord Monckton is having an enjoyable time with them. Also, “I am under no obligation to reply immediately or at all”. That is true, though given the dynamics of the situation, in which Monckton is effectively publicly defending a “thesis”, one would have thought that he would wish to respond to non-frivolous comments such as mine. Before my latest comment, Monckton’s attitude was that I had provided “a lot of unnecessary and ludicrously roundabout math, which still contains several errors, and on which I do not propose to waste any time”. I hope that my latest comment has eliminated errors and simplified the mathematics a bit, making it clear why E3-E1 is 1.68K rather than 1.17K given the putative input variables and Monckton’s own equations, with which we have been working, and that Monckton might therefore take a more positive view.

            On the other hand, I am happy to admit that when finding a new theorem or equation, one does not always find the simplest proof first time. And in fact here is a simpler one; by directly looking at the difference between the desired value of E3-E1 and the Moncktonian expression

            (R3-R1)/(1-f1) (@)

            one obtains

            E3 – E1 – (R3-R1)/(1-f1)
            = R3/(1-f3) – R1/(1-f1) – (R3-R1)/(1-f1)
            = R3(1/(1-f3) – 1/(1-f1))
            = R3(f3-f1)/((1-f1)(1-f3))
            = E3(f3-f1)/(1-f1) (****)

            and this is exactly correct whereas (***) misses a small error term. In fact the correction term E1(f3-f1)/(1-f3) in (***) is exactly correct if Monckton were to replace f1 by f3 in (@), since in similar fashion

            E3 – E1 – (R3-R1)/(1-f3)
            = E1(f3-f1)/(1-f3)
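            The exactness of (****) is easy to confirm numerically for the quoted inputs:

```python
# Any positive inputs would do; these are the quoted ones.
R1, R3 = 254.8, 255.84
f1 = 1 - 1 / 1.1285
f3 = 1 - 1 / 1.1305

E1 = R1 / (1 - f1)
E3 = R3 / (1 - f3)

lhs = E3 - E1 - (R3 - R1) / (1 - f1)      # left side of (****)
rhs = E3 * (f3 - f1) / (1 - f1)           # right side of (****)
print(abs(lhs - rhs) < 1e-9)              # True: the identity is exact
```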

            Now that I have established a simpler derivation of the correction term which gives the value 0.51K in the said circumstances, I am happy to wait for Lord Monckton to mull this over and then issue an acknowledgement (and I hope that perhaps his eminent co-authors will have some influence in this). Whilst waiting, I may of course write an occasional reminder on WUWT, and go on the occasional holiday 🙂

  39. MOB writes: “Note that RSS’ warming rate since 1990 is close to double that from UAH, which had revised its global warming rate downward two or three years ago. Yet the two datasets rely upon precisely the same satellite data. The difference of almost 1 K/century in the centennial-equivalent warming rate shows just how heavily dependent the temperature datasets have become on subjective adjustment rather than objective measurement.”

    Neither RSS nor UAH have “subjective adjustments”. Their datasets have been compiled using data from satellites whose slowly drifting paths didn’t cross over the same location at the same time every day. Until recently, RSS and UAH processed the satellite data without correcting for this drift, and they were in reasonable agreement with each other about warming. Now, both groups are exploring different methods for NON-SUBJECTIVELY correcting for satellite drift. Different methods are producing different answers, and no amateurs have the slightest idea (except prejudice) which method, if any, will turn out to be best.

    The bottom line? MOB’s over-publicized Figure 1 showing a long pause was constructed with data that we now know is badly flawed.

    MOB also tells us: “As things turned out, [Dr Mears] need not have bothered to wipe out the Pause. A large el Niño Southern Oscillation did that anyway.”

    A strong El Nino lasts for about one year (or six months if one looks only at the central portion that clearly rises above the background change). Unlike after the 97/98 El Nino, current temperature hasn’t fallen to pre-El Nino levels. Evidence is growing for a new average about 0.2 K higher than the plateau that existed during the Pause. We certainly are nowhere near returning to Pause levels.

    • Frank,

      Is it true that the same satellite data is used in both cases, or are there two satellites?

      Is it true that when different methods produce different results, a choice between the two is essentially subjective, until one or the other method can be proven to be the better one?


  40. Good points, at least from within the international discussion. But the discussion itself was started off on the wrong foot. Global warming in an Ice Age is always a good thing. Always! All change creates problems, but global cooling in an Ice Age is far, far worse. And we need far more CO2 — and have needed it for 30+ million years — ever since CO2 starvation shocked plants worldwide into evolving C4 species.

  41. When has Monckton ever apologized for his slur on the honor of America for calling the atomic bombings of Japan an “atrocity”?


    Hundreds of thousands of American, and even some British lives, and millions if not tens of millions of Japanese lives were saved by Truman’s decision.

    Why is such an enemy of America allowed to post here? Let the cowardly Limey bastard educate himself:


    Being of southern Italian ancestry, none of my ancestors had the great pleasure and honor of shooting down his Redcoat ancestors and relatives like the dogs they were, but at least some of my grandkids can claim that honor and pleasure.

    What a bug-eyed, pusillanimous pussy, disgusting, revolting, shameless dishonorable, moist, stinking splat of subhuman excrement!

    • One British division was scheduled for Operation Coronet, the invasion of Honshu after the invasion of Kyushu in Operation Olympic in 1945, in which I would have participated.

      (SNIPPED) mod

      • Mod,

        You don’t think it’s relevant that Monckton’s own dad might have been saved by what the worm calls an “atrocity”?

        His dad, who by the way avoided a lot of combat by going to staff school in the US. While my comrades and I were fighting and dying in the Pacific.

        Fine. I’m outa here, as the kids say. You can keep your precious pet megalomaniacal, anti-American, second generation aristocrat.

    • We bagged this one!


      Was Bomber Command’s raid on Dresden an atrocity too, Limey liar?

      I’m in moderation for daring to point out that Monckton is scum. The slimeball claims that the action which saved millions if not tens of millions, including probably mine, for sure many of my comrades’ and maybe his dad’s, and kept half of Japan from going Commie, to be like North Korea today, was an “atrocity”.

      I have nothing but utter, complete and total contempt for this worm. And that goes double for this site for giving this slimeball ink.

      • Moderators, please delete all comments from “sgt”. They offend against site policy.

        [Noted, forwarded. .mod]

Comments are closed.