Correlation Between NCEP/NCAR Re-analysis and Other Global Temperature Anomaly Data

Guest Post By Walter Dnes

On his website, Nick Stokes has set up an interesting global temperature anomaly visualization, based on NCEP/NCAR re-analysis data. The dry technical details are listed in this blog post. Based on his explanation, I’ve been able to closely duplicate his work to within +/-0.003 Kelvin degree, using different tools.

This post concentrates on using the NCEP/NCAR results to predict the prior month’s temperature anomalies for various global data sets, before they’re released. Here is a graph comparing NCEP/NCAR reanalysis anomalies with GISS, NCEI, UAHv6, RSS, and HadCRUT4 anomalies, from April 2015 to March 2016:

NCEP/NCAR Data comparison – Image Credit: Walter Dnes

There seem to be two separate populations, exhibiting different behaviours. They are reminiscent of the two portions of a broken hockey stick. For the purposes of this post, the cut-off between the two populations will be set at an NCEP/NCAR anomaly of +0.45. This is an arbitrary point in the middle of a large gap, and may be changed in future months as more data accumulate. A table of monthly values for NCEP/NCAR and the major global temperature data sets follows, along with slope, y-intercept, and extrapolated April values. I’d prefer to use 12 months of data for the extrapolation, but there are only 6 months (October 2015 to March 2016) in the population with an NCEP/NCAR anomaly above +0.45, so that is what is used for this post.

Month         NCEP/NCAR  HadCRUT4  GISS    UAHv6   RSS     NOAA/NCEI
2015/10       +0.567     +0.820    +1.06   +0.412  +0.457  +0.9817
2015/11       +0.513     +0.810    +1.02   +0.327  +0.435  +0.9657
2015/12       +0.621     +1.010    +1.10   +0.451  +0.546  +1.1224
2016/01       +0.665     +0.908    +1.13   +0.541  +0.665  +1.0424
2016/02       +0.840     +1.061    +1.34   +0.833  +0.978  +1.1913
2016/03       +0.783     +1.063    +1.28   +0.734  +0.842  +1.2181
Function
=slope()                 +0.810    +1.002  +1.553  +1.720  +0.769
=intercept()             +0.407    +0.489  -0.483  -0.490  +0.575
Extrapolation
2016/04       +0.645     +0.929    +1.14   +0.519  +0.620  +1.0717

WordPress does not natively support embedded spreadsheets without plugins, so we’ll have to pretend that the above table represents a spreadsheet, with the word “Month” in the upper left corner, i.e. cell A1. In that case, the slope and y-intercept of the HadCRUT4 data would be calculated as “=slope(C2:C7,$B2:$B7)” and “=intercept(C2:C7,$B2:$B7)” respectively. The GISS slope and y-intercept would be “=slope(D2:D7,$B2:$B7)” and “=intercept(D2:D7,$B2:$B7)”, and so on. Using the “$” prefix allows the formulas to be typed into cells C9 and C10, and then copied horizontally for the other 4 data sets.
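For readers without a spreadsheet handy, =slope() and =intercept() are just ordinary least squares; a minimal pure-Python sketch using the NCEP/NCAR and HadCRUT4 columns from the table above (variable names are mine):

```python
# Ordinary least-squares fit, equivalent to the spreadsheet's
# =slope() and =intercept(), for HadCRUT4 versus NCEP/NCAR.
x = [0.567, 0.513, 0.621, 0.665, 0.840, 0.783]  # NCEP/NCAR, Oct 2015 - Mar 2016
y = [0.820, 0.810, 1.010, 0.908, 1.061, 1.063]  # HadCRUT4, same months

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n
# slope = covariance(x, y) / variance(x); intercept follows from the means
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x
# slope ~ +0.810, intercept ~ +0.407, matching the table's fitted values
```

Swapping in the GISS, UAHv6, RSS, or NCEI values for y reproduces the other columns of the Function rows.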

Given the slope and y-intercept, we can use the classic high-school linear equation:

y = mx + b

  • “y” is the forecast anomaly for a temperature data set
  • “m” is the value of slope()
  • “x” is the value we supply, i.e. the NCEP/NCAR index
  • “b” is the value of intercept()

Continuing with the spreadsheet model, cell B12 would hold the NCEP/NCAR value for the month we’re forecasting. Cell C12 would have the formula “=C9*$B12 + C10”. This cell can then be copied horizontally for the next 4 cells.
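In code rather than spreadsheet cells, the forecast step is one line; the numbers below are the table's HadCRUT4 slope, intercept, and April NCEP/NCAR value (the function name is mine):

```python
def forecast(slope, intercept, ncep_anomaly):
    # y = m*x + b: predicted anomaly for a data set, given slope m,
    # intercept b, and the NCEP/NCAR anomaly x for the target month
    return slope * ncep_anomaly + intercept

# HadCRUT4 April 2016 extrapolation from the table's fitted values
april_hadcrut4 = forecast(0.810, 0.407, 0.645)  # ~ +0.929
```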

There is one “sanity check”, assuming the correlation holds. This month’s NCEP/NCAR anomaly is +0.645, which falls between the values for December 2015 (+0.621) and January 2016 (+0.665). Consequently, one would expect the 5 other data sets to land somewhere between their values for those 2 months. This is a rule of thumb, not an absolute guarantee.
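That bracketing check is easy to automate; a small sketch (the function name and the HadCRUT4 example values are mine, taken from the table above):

```python
def within_bracket(forecast, month_a, month_b):
    # Rule-of-thumb sanity check: the forecast should fall between the
    # data set's values for the two months whose NCEP/NCAR anomalies
    # bracket the current one (Dec 2015 and Jan 2016 here)
    return min(month_a, month_b) <= forecast <= max(month_a, month_b)

# HadCRUT4: April forecast +0.929 vs Dec 2015 (+1.010) and Jan 2016 (+0.908)
ok = within_bracket(0.929, 1.010, 0.908)  # True
```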

Notes

  • Some WordPress fonts can make the dollar sign “$” difficult to distinguish from an uppercase “S”. There are no uppercase S’s in the spreadsheet cell formulas listed above; if it looks like an uppercase S, it’s actually a dollar sign.
  • The above forecasts are approximate, and anything within +/-0.1 is going to have to be considered “good enough”.
  • The NCEP/NCAR re-analysis data runs a couple of days behind real-time. As of this posting, it has only been updated to April 28th. The values for April 29th and 30th have been assumed to be the same as April 28th, and the 30-day NCEP/NCAR anomaly has been extrapolated on that assumption.
  • Based on a few months of testing, GISS seems to come in closest to the extrapolated values, with UAH and RSS a close 2nd and 3rd. NOAA/NCEI had a good correlation until March 2016, when it zigged up while everybody else zagged down. HadCRUT4 has the lowest correlation of the 5 data sets.

63 thoughts on “Correlation Between NCEP/NCAR Re-analysis and Other Global Temperature Anomaly Data”

  1. You are looking at El Nino? The last 6 mo. are in group B (greater than 0.45), all earlier data is in group A.
    There is little difference in HAD and GISS, but a large difference in UAH and RSS, between groups A and B.
    So how is this? We saw with the El Nino of 1998 that the satellite data was much more sensitive than the surface records. So again, we have an El Nino and the satellite records respond much more dramatically. Indeed, the slopes for groups A and B hardly seem different at all with the GISS and NCEI data sets.
    Seem plausible?

    • Yes, I think that is right. It’s the different responses of surface and troposphere to El Nino, and the reanalysis mainly groups with the surface (as it should). It might be interesting to construct a whole-atmosphere NCEP/NCAR average, but it would require dealing with several different level files.

      • Nick, did you use tm2, or did you combine tm2 with sst?
        And did you extract a tmin and tmax? Most of the reanalyses I’ve looked at give you 4 synoptic values for the day plus tmax; getting tmin would require some work.
        As trends in tmin and tmax may diverge, that can cause issues.
        I have monthly ECMWF if you want it.

      • Steven,
        I used sig995 rather than tm2 (and no sst). I explained in the original post why – it seemed to me that the NCEP/NCAR V1 tm2 was a join of two sets (pre and post 1996) with messiness at the join, while sig995 seemed a more consistent calculation. But anyway, I found PSD calcs for global monthly tm2 figures, which matched very well the sig995 averages.
        I didn’t deal with tmax/tmin, or 6 hourly values. One of the attractions to me of NCEP/NCAR V1, apart from prompt publication, is that they publish a daily average grid. AFAIK the reanalysis calc just updates every 3 or 6 hours, so min/max would be pretty rough.
        I don’t rely too much on reanalysis for trends, and try not to talk about long term records etc. I don’t think they claim that the data is homogeneous over time. There will be drift in the kinds of data that they assimilate over time.
        Thanks for the offer of monthly ECMWF. That could be an interesting check. I’ve used N/N in preference to ECMWF because the files are fairly small and promptly updated; I’ve been looking at GFS and CFSR because of the NOMADS facility, but haven’t found anything similar for ECMWF.

      • What is with this slope stuff?
        All that I can see is that the anomalies were sorted in ascending order and then a slope line driven through.
        Why not sort the anomalies in descending order instead? Ascending/descending; neither representation of slope is correct.

    • To use a boxing analogy, it is certainly reasonable that the NCEP/NCAR numbers punch above their weight for the satellite data sets during a strong El Nino. The extra warm ocean areas cause much more evaporation. And when this water vapor condenses, the extra heat that is released is higher up in the lower troposphere and not on a Stevenson screen two metres above the ground.

    • There is little difference in HAD and GISS, large difference in UAH and RSS. between groups A and B.

      That brings up an interesting thought. The satellite data basically demanded a split at 0.45. But is the split at 0.45 for the other three data sets real or just noise? And if it is just noise, should a straight line have been drawn from beginning to end for those three? But if the split is real, what would have caused it? Could the North Pacific blob have had anything to do with it for example?

    • “The satellite data basically demanded a split at 0.45.”
      I think it’s really an x-axis split, due to the big jump in NCEP/NCAR from 0.368°C in Sept 2015 to 0.567°C in Oct, which could be just El Nino. But I don’t think it makes that much difference to the surface data trend lines; the parts on each side of the break look pretty similar.

    • It’s most likely El Nino. But if so, there’s a lag of several weeks or even a few months to figure out. Nino34 temperatures peaked in mid/late November 2015. The satellite data (RSS and UAH), along with NCEP/NCAR peaked in February 2016. The land/sea data sets (HadCRUT4/GISS/NCEI) appear to be peaking in March 2016. A “shifted correlation” analysis could be interesting.
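A “shifted correlation” can be sketched by sliding one series against the other and keeping the lag with the highest Pearson correlation. The data below are synthetic (the 24-month cycle and the 3-month lag are invented purely for illustration):

```python
import numpy as np

def best_lag(x, y, max_lag=6):
    """Return the lag (in samples) at which y best correlates with x."""
    def corr_at(k):
        return np.corrcoef(x[:len(x) - k], y[k:])[0, 1]
    return max(range(max_lag + 1), key=corr_at)

# Synthetic example: a smooth "index" and a noisy copy delayed by 3 months
rng = np.random.default_rng(0)
months = np.arange(60)
index = np.sin(2 * np.pi * months / 24)              # a 2-year ENSO-ish cycle
response = np.roll(index, 3) + 0.05 * rng.standard_normal(60)

lag = best_lag(index, response)  # recovers the 3-month delay
```

Applied to real monthly Nino34 and anomaly series, the same scan would show whether the satellite and surface sets really trail Nino34 by a few months.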

    • “We saw with the El Nino of 1998 that the satellite data was much more sensitive than the surface records.”
      IIRC, that was not the case with the large 1982-83 El Nino.

    • Interestingly, if you have a look at the 1998 El Nino, the ocean “pre-warming” apparent in the current El Nino occurred more as ocean “post-warming”. No idea what that means.
      Certainly, it will be interesting to see where both land and sea temperatures go from here!

      • “2. Use real temps”
        I have.
        What lemon are you trying to sell this time.
        I see the link, I know the stupidity and ignorance.

      • Mr. Mosher: A recent posting by a UAH satellite data keeper (?Spencer? or ?Christy?, I don’t remember) pointed out some significant differences between the RSS and UAH datasets. It was his contention that RSS data was impacted by a faulty satellite (?AMSU14?) and use of models for RSS diurnal adjustments rather than observations as is the UAH practice. I am not aware that those concerns have been addressed. Have you run your surface vs satellite using UAH data?
        Dave Fair

  2. I must laugh though…. giving values at 3 or 4 decimal places..
    Error margin is, how much… +/- half a degree ? 😉

    • Your observation is correct. I set up my spreadsheet to consistently use the same number of digits for a data set… if a data set comes with 4 digits (e.g. NCEI), that’s how many the extrapolation uses. This is something to consider tweaking for next month.

      • This is something to consider tweaking for next month.

        I get this complaint very frequently. We all know that everything is about +/-0.1. Do they expect you (or me) to reduce all numbers to the nearest tenth? One person even suggested that we only know things to the nearest degree. I responded that to make that person happy, my whole table would be just 0s and 1s. How useful would that be? I would not go to all that trouble to tweak anything; a statement that all numbers can be +/-0.1 should be adequate.

      • “We all know that everything is about +/- 0.1. Do they expect you (or me) to reduce all numbers to the nearest tenth?”
        I don’t think you should. I actually don’t think everything is about ± 0.1. I see it asserted, but with little substantiating analysis. The fact is, the suppliers, who do know something, give the data as they do. Apart from trying to make a point, there is actually nothing lost by keeping that precision, whatever your opinion about ±0.1. And as Werner says, if you drop too much precision, you just take yourself out of the conversation.
        Precision tells you multiple things. Walter quoted an agreement to ±0.003. That tells you about the accuracy of the numerical methods. That is independent of data quality, and is important to know. But accuracy isn’t simply “± 0.1”. You need to break down sources of uncertainty. In fact the major sources (GISS, NOAA etc) will indeed say they have uncertainty approaching 0.1°C (mostly a bit less) for a monthly global average, and a bit less again for annual. That doesn’t mean the higher precision is meaningless. The uncertainty they quote relates to some model of variation. In fact most is spatial – what difference might you have got if you measured in different places. Another big one can be temporal – what might you have got if the weather had unfolded in a different way (this is the usual uncertainty applied to trend). Those don’t undermine the precision of the number you measured. The uncertainty is a what if something else happened.

      • So, for example, when the Australian BOM finds that the average of a site over a month can be above the actual max temps for the whole month, there is no problem with the homogenization program? Sounds like more than 0.003 deg error to me, but we can’t let fantasy get in the way of real science, can we!

      • Nick S says, “The uncertainty is a what if something else happened”
        ————————————————
        No, the uncertainty is a what if you measured differently, extrapolated differently, etc. This is one of many reasons the anomaly-only system has shortcomings. Apparently the absolute global mean trend over multiple decades has little correlation to the anomaly trend. Many of the models produce a GMT nowhere near reality as measured in the 70s and 80s. Both anomalies and absolutes are important, as are all the disparate methods of measuring and verifying, and that is what produces uncertainty.

      • State what you think the error margin is, then.. problem solved.
        +/- 0.1… roflmao !!!

      • “if you drop too much precision, you just take yourself out of the conversation.” indeed, that would never do 😉

    • Despite what folks think, it is still correct to give the result to the higher precision. A simple example will suffice.
      Take a scale that gives you weight ±1 lb, and weigh an object three times:
      A) 10 lbs ±1
      B) 10 lbs ±1
      C) 11 lbs ±1
      Now predict what you will record if the object is weighed on a perfect scale. Simple: 10.333 lbs. That’s the best prediction; it minimizes the expected error.
      When we create temperature “averages”, what we are doing is PREDICTING what would be measured at unmeasured locations if we had perfect instruments measuring those unmeasured locations. That is the basic definition of spatial statistics: predicting measurements at locations where you have no data. The goal is to minimize the error of prediction.
      So, in the example of weight, we are saying that 10.333 will be a BETTER PREDICTION than, say, 10.0 or 11.0. Of course 10.333 will have error bars around it, such that if you weigh the object with a perfect scale (or heck, even a better scale) 10.333 will be closer to the truth than any other number, so 10.333 will be closer than, say, 10.3.
      Not much of a mystery. You can even prove this for yourself by taking data, holding some out, and then predicting the hold-outs.
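Steven’s claim that the mean of noisy readings is the best single prediction is the standard result that the mean minimizes squared error; a minimal check with the three readings above (names are mine):

```python
readings = [10.0, 10.0, 11.0]
mean = sum(readings) / len(readings)  # 10.333...

def sum_sq_err(guess):
    # Total squared distance from a guess to the observed readings
    return sum((r - guess) ** 2 for r in readings)

# The mean beats both whole-number guesses under squared error
assert sum_sq_err(mean) < sum_sq_err(10.0) < sum_sq_err(11.0)
```

This demonstrates only the squared-error property; systematic bias and outliers, which the replies raise, are separate questions.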

      • The only way to determine what it would say on a better scale, is to get a better calibrated scale and see what it weighs.
        But in general, good equipment does not specify more digits than are meaningful. That is to say, every measurement could be high or low by 0.45 kg, so to specify the weight to better than 0.5kg falsely gives the impression there is more validity to the result than there is.
        The only way you should increase the precision, is when you’ve done careful analysis to determine that there isn’t a consistent error in one direction like urban heating, operator bias (intentionally increasing the temperature on instructions from the president of the country), etc.
        However, in order to determine that, you would need to have a basic quality control system with on site inspection and calibration of all equipment. But laughably, the quality control system for global temperature is far far worse than someone producing marmalade.

      • Steven Mosher. That logic seems perfectly sound for three measurements made in the same place at the same time. The instrument error will be minimised.
        But if there’s a shaded fence separating the probes then it would give false readings. A displacement of only 10cm will give less meaningful results.
        In global temperature measurement systems are the measurements taken less than 10cm apart?
        No.

      • This is only reasonable if your values are i.i.d. If your scales had a changing bias dependent on the weight being measured, then your results would be invalid. Systematic bias through instrumentation cannot be eliminated this way.

      • This is a very poor example. A weighing device with a ±10% reading error would be hard to find, due to it being so useless. More likely the 11 lb was a careless reading, and such outlier results are often rejected.
        To make the point even clearer, take 10 readings from a weighing device with a 1% reading error, all values in lb:
        9.9, 10.1, 10, 11, 9.9, 10.1, 10, 9.9, 10.1, 10.
        Most experimental scientists would reject the 11 lb reading as an outlier and give the weight as 10 lb ± 0.1 lb.

      • Doesn’t matter, Mosher; in the end you are offering a measurement your equipment can’t obtain.

      • You have seriously OUTED yourself, Mosh, as a mathematical illiterate…
        So funny !! 🙂

      • “Such that if you weigh the object with a perfect scale— or heck even a better scale–
        10.333 will be closer to the truth than any other number.. ”
        No, that’s not logical. “More likely to be close to the truth” is logical.

      • Personally, I would take the scale back to the vendor I bought it from and ask for my money back.

      • If your numbers were 9.9, 10.1, 10, 11, 9.9, 10.1, 10, 9.9, 10.1, 10, then 11 would be an outlier and should be rejected, but if your numbers were 10, 11, and 10, then all should be considered and 10.3 should be given as the best average, even if it is to an extra digit.

      • Any of you guys are welcome to take my challenge.
        I have the object.
        Weighed three times.
        My prediction is the true weight is 10.333.
        Choose a different number. And place your bet.
        One restriction. You have to guess whole numbers.
        The winner is the person who gets closest to the truth.
        I choose 10.333
        It’s a fun bet. But it doesn’t fully capture the problem.
        Understand. I measured this thing with a crappy scale.
        The average of those crappy measurements is 10.33.
        This doesn’t mean I know the weight to 1/100th.
        It means if I have to predict what a perfect scale will read then I will choose 10.33. If you think there is a better prediction then state the prediction and why it is better.
        What folks don’t get is that we are not really averaging the temperatures. We are using the measurements at known locations made with imperfect devices to predict the temperature at unmeasured locations. We can make predictions at any precision we choose.

      • Steven Mosher
        “Any of you guys are welcome to take my challenge.
        I have the object.
        Weighed three times.
        My prediction is the true weight is 10.333.”
        Why not 10.332? Or 10.334? Or 10.331? Or 10.342? Or 10 even? Or eleven even? Etc., etc.
        It is obviously more likely to not be 10.333 than to be 10.333 . . (unless the fix is in, O probability impaired one ; )

  3. Hmm, a bit like asking what size the unicorn population in Australia is versus the camel population, and coming up with the answer of 2.2 unicorns per acre to every toad. The toad is an introduced species and the population numbers are unknown, but the scientists are certain we need to cull the population of the camels because they fart too much CO2.

  4. Has anyone looked at the time change of temperatures in various horizontal and vertical zones to see if there is a picture of a heat locus from which the pattern spreads slowly enough to identify?
    Is there any evidence to choose between exogenous and endogenous (to the oceans) heat sources for the blip?
    Geoff.

  5. “I’ve been able to closely duplicate his work to within +/-0.003 Kelvin degree, using different tools.”
    That is really funny, but today is May 1st, not April 1st.

    • I don’t get your point. According to his post, Nick apparently used the “R” language to crunch the sig995 netcdf data set. I don’t know “R”, so I used a different approach. The fact that I got essentially the same numbers as Nick, when crunching the same data, using a different methodology, gives me confidence that I did the number-crunching correctly.
      I used freely available linux netcdf libraries and utilities to convert the netcdf files to straight text files, which I then number-crunched with bash scripts. OK, now you can laugh.
      The only non-bash-script portion was calculating the cosines of the grid-polygon midpoints. I did that in a spreadsheet, exported those numbers as text, and imported them into a bash script. bash doesn’t handle floating-point math, so I scaled numbers up by several factors of 10 and treated everything as integer math. Since Kelvin temperatures are always positive, it was easy to emulate round-off during division.
      One other consideration is that I used the built-in POSIX date function in linux. To easily handle the sig995 netcdf data, a 64-bit OS (linux/whatever) is necessary. The date function on 32-bit linux only handles the period from Fri Dec 13 20:45:52 UTC 1901 to Tue Jan 19 03:14:07 UTC 2038. The sig995 data time variable is given as “hours since 1800-01-01 00:00:0.0”, which is outside the range of 32-bit time_t. And in January 2038, 32-bit time dies anyway.
      64-bit posix time, on the other hand, works until 15:30:08 UTC on Sunday, 4 December 292,277,026,596 and goes back the same ridiculous amount (approx 292,277,022,656 BC ?).
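The grid-to-global-average arithmetic Walter describes (weighting each cell by the cosine of its latitude) can be sketched in a few lines of Python without the netcdf tooling; the tiny grid here is made up purely for illustration:

```python
import math

def global_mean(grid, lats):
    """Area-weighted mean of a lat/lon grid.

    grid: one row of cell values per latitude band
    lats: the band-centre latitudes in degrees (same length as grid)
    Each cell is weighted by cos(latitude), approximating its surface area.
    """
    total = wsum = 0.0
    for row, lat in zip(grid, lats):
        w = math.cos(math.radians(lat))
        for value in row:
            total += w * value
            wsum += w
    return total / wsum

# Tiny made-up 3x4 grid of Kelvin temperatures: a uniform field must
# average to itself regardless of the weighting
uniform = [[288.0] * 4 for _ in range(3)]
result = global_mean(uniform, [-45.0, 0.0, 45.0])  # ~ 288.0
```

The real calculation differs only in scale (a 2.5° grid read from the sig995 files) and in the choice of integration weights near the poles.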

      • +/-0.003 Kelvin degree?
        But which data has such accuracy?
        None that I know of. I doubt that any global temperature data set (in absolute or anomaly form) is accurate even to 0.1 Kelvin degree. Claiming ‘supposed’ accuracy beyond what was measured appears to me to be absurd, and that is without taking into account any error bars.
        I do not question your integrity, or integrity of your processing method, just the claim that +/-0.003 Kelvin degree of anything related to the global temperature makes any sense.
        I am happy to agree to disagree and go away.

      • ” the claim that +/-0.003 Kelvin degree of anything related to the global temperature makes any sense.”
        I think it’s a test of numerical processing, not data accuracy. NCAR/NCEP publish a grid of numbers. Then arithmetic is done to extract a global average. Walter and I did that differently, getting answers with that degree of agreement. That tells that the arithmetic was done reproducibly, whatever you think of the reliability of the original numbers.
        So you might say – why any discrepancy at all? Different integration formulae is one possibility. Here you’d normally use mid-point or trapezoidal in latitude, both good. But they are a bit different at the poles, and it’s hard to say which is right. And there is also minor fuss about leap year issues etc.

      • Mr. Stokes
        Thanks for the clarification. My problem, if you wish, is that the term ‘global temperature’, in my view, borders on meaningless (averaging extremes with a range of +/-30 or more degrees C); no reflection on the work you and others put into it. Now I’ll go away as promised.

      • “averaging extremes in range of +/- 30 or more degree C”
        That is the point of using anomalies. Yes, if you had to take account of the very large expected (for one place/time) range of temperature, then there are all sorts of problems. But when you subtract out the norm, the remainder (anomaly) has much smaller range and, more importantly, much greater spatial homogeneity.

  6. A major difference between 1998 and 2015 was mixing: both the sea surface and the atmosphere were more mixed in 2015 due to a divergent jet stream. A mixed El Nino has much less impact on a diffused atmospheric jet.

  7. Replying to “ATheoK May 1, 2016 at 6:07 pm” as a new thread because WordPress won’t allow replies nested that deep…
    I needed a slope to allow using the classic “y = mx + b” linear equation for extrapolating/forecasting April’s values for HadCRUT4/GISS/UAH/RSS/NCEI. It’s a general convention that the x-axis values increase from left to right. Those were the driving factors in how I set up the graphs.

  8. This theoretical analysis is simply a cover-up to make poorly constructed data sets look like they are based on science. Unfortunately, real temperature curves are sometimes subject to change by forces we don’t understand. Take the existence of the hiatus in the eighties and nineties, something you never heard of. It is present in satellite data, which is how I discovered it in 2008. But it has been covered up by an imaginary “late twentieth century warming” in all ground-based temperature curves. It is clear from satellite data that there simply was no warming from 1979 to 1997. These dates go from the beginning of the satellite era to the beginning of the giant super El Nino of 1998. You can see what the real curve looks like in Figure 15 of my book “What Warming?” Since no one was listening to me about this, I decided to put a warning about it into the preface of my book when it came out in 2010. I reused that figure in an article I posted on October 29th last year in WUWT. That article criticized Karl et al.’s attempt to declare the twenty-first century hiatus non-existent. Amazingly, a Bob Tisdale accused me in a comment of fabricating the data in Figure 15. He is the same man who thinks that El Ninos are warming up the world. His act is of course pure libel which he has to publicly retract and apologize for. Fortunately, I was able to get NASA’s own description of temperature pertaining to the eighties and nineties, which they had issued in 1997. This is what NASA has to say about temperature:
    “Unlike the surface-based temperatures, global temperature measurements of the Earth’s lower atmosphere obtained from satellites reveal no definitive warming trend over the past two decades. The slight trend that is in the data actually appears to be downward. The largest fluctuations in the satellite temperature data are not from any man-made activity, but from natural phenomena such as large volcanic eruptions from Mt. Pinatubo, and from El Niño. So the programs which model global warming in a computer say the temperature of the Earth’s lower atmosphere should be going up markedly, but actual measurements of the temperature of the lower atmosphere reveal no such pronounced activity.”
    Note the fact that NASA specifically rejects the validity of computer-predicted temperature rise for this period. I can see now how, despite NASA’s warning, those computer predictions became the seed for changing that section into a new “late twentieth century warming.”
    With that, they effectively erased the first hiatus we had (but not completely; it is still visible in satellite data). The second hiatus is the twenty-first century hiatus we are experiencing now. This is the one that Karl et al. were supposed to have buried. Two hiatuses gone by that – is there any meaning to it? The answer is yes, when we follow through on it.
    What happens when a hiatus arrives is that from that point on there is no increase of global temperature while atmospheric carbon dioxide just keeps increasing. Why is this a big deal? you may ask. It is a big deal because according to the Arrhenius greenhouse theory, any increase of atmospheric carbon dioxide must be accompanied by an increase of global temperature. This is the greenhouse effect at work. But what we have experienced instead for the last 18 years or so is a steady increase of atmospheric carbon dioxide with no increase of global temperature. If true, this means that Arrhenius greenhouse theory is simply not working – it predicts warming and we don’t get any. Therefore, that vaunted greenhouse effect the IPCC and 200 plus world governments are supposed to be fighting is simply not there! How can this be when the science is settled and our fate is sealed by the global greenhouse effect? The answer: there is no global greenhouse effect. And by the way, did you know that carbon dioxide is no more than three percent of global greenhouse gas total? The largest amount of global greenhouse gas is water vapor, which makes up 95 percent of total global greenhouse gas volume. And the Arrhenius greenhouse theory leaves water vapor completely out. Small wonder that its predictions are failing. But there is another greenhouse theory that does include both carbon dioxide and water vapor as its subjects. It is the Miskolczi greenhouse theory or MGT. It predicts the existence of today’s hiatus accurately and should be used in place of the Arrhenius greenhouse theory that makes false predictions about a non-existent greenhouse effect.

  9. Please see the following:
    http://www.woodfortrees.org/plot/gistemp/from:2015/offset:-0.21/plot/hadcrut4gl/from:2015/offset:-0.09/plot/hadsst3gl/from:2015/offset:0.16/plot/rss/from:2015/offset:0.236/plot/uah/from:2015/offset:0.247
    All were offset to start at the same place in January 2015. Then they diverged like an accordion during 2015. Then they were all very close again in January 2016. One would possibly think that with a certain NCEP/NCAR value, the changes would not be that extreme, except for Hadsst3. But it just goes to show that the strangest things can happen in any given month.

  10. Here is my exchange with Dr. Spencer:
    Werner Brozek says:
    May 2, 2016 at 3:07 PM
    According to:
    http://www.moyhu.blogspot.com.au/p/latest-ice-and-temperature-data.html#NCAR
    The January average was 0.665 and the April average was 0.636. Yet UAH had January at 0.54 and April at 0.71. I realize that the two are not measuring the same thing, but due to the adiabatic lapse rate, I would not have thought that the difference would be that large. Does anyone have any thoughts on why the January-April difference is so large?
    Reply
    Roy Spencer says:
    May 2, 2016 at 4:15 PM
    Werner:
    You are talking about a 0.1 to 0.2 deg. C discrepancy in surface versus deep-layer temperatures changes when the difference in absolute temperatures is several tens of degrees. So, the change in lapse rate is tiny. But even on a 1-month time scale we know that there can indeed be such changes, mainly due to intraseasonal oscillations in tropical convective activity, because it takes time for the troposphere to convectively respond to changes in stability.
    Reply
    Werner Brozek says:
    May 2, 2016 at 5:22 PM
    Thank you!

  11. Post Mortem
    ========
    April data is starting to come in…
    The UAH preliminary value is +0.71 versus my extrapolation of +0.519. The final UAH value will have 3 digits, but should be very similar.
    RSS is +0.757 versus my extrapolation of +0.620
    I had been hoping to be within +/- 0.1

    • I had been hoping to be within +/- 0.1

      To illustrate how tough that can be, I tried predicting the two by comparing UAH and RSS via the following:
      https://ghrc.nsstc.nasa.gov/amsutemps/amsutemps.pl
      2010 was closest to 2016, and just by eyeballing April for ch06, it looked to me that 2016 would be about 0.3 above 2010. That would have given 0.626 for UAH and 0.794 for RSS. So I was 0.084 too low for UAH but 0.037 too high for RSS.
      That is a relative difference of 0.12 between RSS and UAH, regardless how good or bad my eyeballing may have been. And this is even comparing apples and apples so to speak.
      I realize May is only 4 days old, but all 4 days so far for 2016 are below 2010 on ch06. The May 2010 values for UAH and RSS were 0.414 and 0.526 respectively.
      Will May make up for things? I guess we will have to wait and see.

      • That option was not available with the NCEP/NCAR data. It was the warmest April on record, so it couldn’t be “in between” 2 older values. The first 2 days of May 2016 NCEP/NCAR were warmer than in May 2010 as follows…
        20100501 287.969
        20100502 288.061
        20160501 288.083
        20160502 288.174
        May 1 was 0.114 K (or C) warmer, and May 2 was 0.113 K warmer. It’s still early, but May is on pace to break yet another monthly record. NCEP/NCAR, with data back to 1948, has already set 9 consecutive monthly records: Aug 2015 through Apr 2016.

  12. Let me repeat the highlights of my comment above:
    1. There is no temperature increase during the hiatus while atmospheric carbon dioxide keeps going up.
    2. The Arrhenius greenhouse theory, the one IPCC uses, cannot handle this fact and falsely predicts warming.
    3. A false prediction by a a scientific theory invalidates it as a scientific theory. It cannot correctly predict global temperature because it totally nignores water vapor which constitutes 95 percent of atmospheric greenjouse gas volume.
    4. The only greenhouse theory that can handle both carbon dioxide and water vapor as greenhouse gases simultaneously is Misko;lczi greenhouse theory or MGT. It correctly predicts the hiatus temperature where Arrhenius fails.
    5. Because MGT is the only theory that can co it it should nit should replace the use of the Arrhenius greenhouse theory which makes wrong predictions about global temperature.
    Note: I am well aware of the opposition to Miskolczi that goes back to 2007 when his basic theory was published. I have looked at a number of his critics whose comments amount to nothing more than paeudo-scientific arguments by mathematically illiterate individuals.

    • Oops – those typos again. I apologize. I corrected them all and am posting the corrected version below. Can you please erase the original attempt I made yesterday?
      Let me repeat here the highlights of what I posted on May 1st:
      1. There is no temperature increase during the hiatus while atmospheric carbon dioxide keeps going up.
      2. The Arrhenius greenhouse theory, the one IPCC uses, cannot handle this fact and falsely predicts warming.
      3. A false prediction by a scientific theory invalidates it as a scientific theory.The reason it cannot correctly predict global temperature is that it totally ignores water vapor. Water vapor constitutes 95 percent of atmospheric greenhouse gas by volume.
      4. The only greenhouse theory that can handle both carbon dioxide and water vapor as greenhouse gases simultaneously is Miskolczi greenhouse theory or MGT. It correctly predicts the hiatus temperature where Arrhenius fails.
      5. Because MGT is the only theory that can do it it should replace the use of the Arrhenius greenhouse theory which makes wrong predictions about global temperature.
      Note: I am well aware of the opposition to Miskolczi that goes back to 2007 when his basic theory was published. I have looked at a number of his critics/ Their comments amount to nothing more than pseudo-scientific argumentation by mathematically illiterate individuals. It is time to bring out the correct greenhouse theory.
