Examples of How the Use of Temperature ANOMALY Data Instead of Temperature Data Can Result in WRONG Answers

This post comes a couple of weeks after the post EXAMPLES OF HOW AND WHY THE USE OF A “CLIMATE MODEL MEAN” AND THE USE OF ANOMALIES CAN BE MISLEADING. (The WattsUpWithThat cross-post is here.)

INTRO

I was preparing a post using Berkeley Earth Near-Surface Land Air Temperature data that included the highest-annual TMAX temperatures (not anomalies) for China…you know, the country with the highest population here on our wonder-filled planet Earth. The graph was for the period of 1900 to 2012 (FYI, 2012 is the last full year of the local TMAX and TMIN data from Berkeley Earth). Berkeley Earth’s China data can be found here, with the China TMAX data here. In more detail: as shown in Figure 1, I was extracting the highest peak value for every year of the TMAX data for China, but I hadn’t yet plotted the results, so I had no idea what I was about to see.

Figure 1

The results are presented in Figure 2, and they were a little surprising, to say the least.

NOTE: Monthly TMAX data from Berkeley Earth are described as the “Mean of Daily High Temperature”. Conversely, their TMIN data are described as the “Mean of Daily Low Temperature”. [End note.]

Because of elevated highest-annual TMAX temperatures (not anomalies) in the early part of the 20th Century, the linear trend for that subset was basically flat at a rate of 0.006 deg C/decade, as calculated by MS EXCEL. (Yeah, I know, too many significant figures, so go ahead and read it to yourself as 0.01 deg C/decade, or 0.0 deg C/decade, if you’d prefer.)

Figure 2

Yup, that’s right. In addition to the Contiguous U.S. (Figure 3), China also had high surface temperatures in the first half of the 20th Century. (Splain that, oh true-blue believers of human-induced global warming.)

Figure 3—(It’s from an upcoming post. Stay tuned.)

THE PROBLEM WITH USING ANOMALIES

So I felt this would provide a great opportunity to present illustrations to confirm what many of us understand: The use of temperature anomalies in scientific studies can provide wrong answers…very wrong answers. That is, wrong answers to surface temperature-related questions can result from using temperature anomaly data instead of the temperature data themselves. (Or, as members of the climate science community like to call them, “absolute temperatures”, presumably to help differentiate them from anomalies. Maybe climate scientists should simply state “temperatures, not temperature anomalies” instead of “absolute temperatures”, which riles purists. Then again, “temperatures, not temperature anomalies” grows tiring when you’re reading and writing it.)

How wrong are the answers if you use anomalies, you ask? Figure 4 presents the highest annual TMAX temperature anomalies (not actual temperatures) for China, along with the annual July temperature anomalies (not actual temperatures), both for the period of 1900 to 2012. The highest annual TMAX temperature anomalies (not actual temperatures) for China show a noticeable warming rate of 0.12 deg C/decade, when, in reality, no long-term warming of the actual highest annual TMAX temperatures existed during that period.

Figure 4

Referring to the Berkeley Earth TMAX webpage for China, July shows the highest value of the monthly temperature conversion factors listed. As also shown in Figure 4, the July TMAX temperature anomalies for China give a better answer, but still not the correct one. Obviously, the highest annual TMAX temperatures for China don’t always occur in July.
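For readers who want to see the mechanics, here is a minimal numerical sketch (with made-up monthly values, not Berkeley Earth data) of why a July-only series can only understate the true highest-annual TMAX:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly TMAX values (deg C) for 5 years x 12 months.
# A seasonal cycle peaking near July, plus weather noise, so July is
# usually -- but not always -- the warmest month of the year.
seasonal = 20 + 10 * np.sin(np.pi * (np.arange(12) + 0.5) / 12)
monthly_tmax = seasonal + rng.normal(0, 1.5, size=(5, 12))

annual_highest = monthly_tmax.max(axis=1)  # highest monthly TMAX each year
july_only = monthly_tmax[:, 6]             # July (month index 6) each year

# July can only match or fall below the true annual highest value.
print(np.round(annual_highest - july_only, 2))
```

Every entry printed is zero or positive: whenever another month beats July, the July-only series misses the true annual peak.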

THE PROBLEMS CARRY OVER TO THE GLOBAL NEAR-SURFACE LAND AIR TMAX TEMPERATURE DATA

For the sake of illustration, I ran through the same process with the GLOBAL near-surface land air TMAX temperature data from Berkeley Earth. The same basic problems exist with the global highest annual TMAX anomaly data, but the July TMAX trend values are correct. See Figures 5 and 6.

Figure 5


Figure 6

AND THEN THERE’S THE BERKELEY EARTH TAVG TEMPERATURE DATA

While we’re on the subject, do not go looking for “Mean of Daily High Temperature” (TMAX) answers using average monthly (TAVG) temperature data, Berkeley Earth’s standard near-surface land air temperature anomaly dataset. The TAVG data are the wrong data to use from Berkeley Earth when looking for TMAX answers.

This warning also carries over to the standard NCDC/NCEI or CRUTEM4 near-surface land air temperature anomaly data. They’re not TMAX datasets. If you want a TMAX dataset other than the one from Berkeley Earth, see the “Monthly observations” webpage at the KNMI Climate Explorer. They have a couple. (Thanks, Geert Jan.)

That’s it for this post. It gave me the opportunity to present Figures 2 and 3 in advance of the post I’m preparing.

Enjoy yourself in the comments below, and have a great rest of your day.

STANDARD CLOSING REQUEST

Please purchase my recently published ebooks. As many of you know, this year I published 2 ebooks that are available through Amazon in Kindle format:

And please purchase Climate Change: The Facts – 2017, by Anthony Watts et al.

To those of you who have purchased them, thank you. To those of you who will purchase them, thank you, too.

Regards,

Bob Tisdale


200 thoughts on “Examples of How the Use of Temperature ANOMALY Data Instead of Temperature Data Can Result in WRONG Answers”

  1. Australia also had relatively high temperatures in the early part of the 20th century. Unfortunately, the UHI effect for major cities overwhelms the earlier warm periods.

    • You can easily manufacture a trend, including from data generated by a pure sine wave, if you simply start the data at a ‘trough’ and/or end it at a ‘crest’. Ta daaaa, fake trend from even the best data, and the headline is yours.
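This trick is easy to demonstrate; here is a minimal sketch with synthetic data that has zero trend by construction:

```python
import numpy as np

# A pure cosine wave sampled monthly for 30 "years" -- no underlying trend.
t = np.arange(0, 30, 1 / 12)
y = np.cos(2 * np.pi * t / 10)  # clean 10-year cycle: crests at t=0,10,20; troughs at t=5,15,25

# Fit a straight line over the whole record: the slope is essentially zero.
full_slope = np.polyfit(t, y, 1)[0]

# Cherry-pick: start at a trough (t = 5) and end at the next crest (t = 10).
mask = (t >= 5) & (t <= 10)
biased_slope = np.polyfit(t[mask], y[mask], 1)[0]

print(f"full-record slope:     {full_slope:+.4f} units/year")
print(f"trough-to-crest slope: {biased_slope:+.4f} units/year")
```

The full record fits to roughly zero slope; the trough-to-crest window fits to a strongly positive one, from exactly the same data.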

      • John Brignell’s “Number Watch” appears to be DOA, but previously from his excellent website, we learned of the statistical trick called “start and end date bias”.

        Used it quite often myself: you have 10 years of data but it looks bad. So you remove a few years from the start (reasoning: we weren’t as robust in collecting the data, used different methods, can’t compare apples to oranges, etc.), remove the last year (hasn’t been audited), etc.

        Repeat as needed until you can go to your boss with (defensible…somewhat) good news.

      • I use this explanation when talking to non-math individuals:
        Pendulums swing up and down, have you ever seen one on its upswing continue and fly off into space?

  2. Bob,
    If you want to claim that using anomalies gives the wrong answers to “surface temperature-related questions”
    then you need to give a list of such questions. Similarly, I am sure there are plenty of questions where not using anomalies gives the wrong answer. All you have done is shown that different measurements give different results, which is something that would be obvious to most people.

    • Mr. Jackson, you said “All you have done is shown that different measurements give different results”

      It looks to me like Bob Tisdale has shown that the same measurements, either shown in their raw form (as absolute temperatures) or converted into anomalies give different results.

      • Well no. There is no raw form. The only raw data is the temperature measured at individual stations scattered across the globe. The moment you start to process that data in any way, whether to produce an average temperature or an average anomaly in either time or space, you have processed data rather than raw data. An averaged temperature is just as much processed as an anomaly, and it has larger errors in many cases.

        • Mr. Jackson, you said “Well no. There is no raw form”.

          OK, if I understand your point, it is that both the TMAX average anomalies and the TMAX average absolute temperatures are processed data; neither is raw data, and both therefore have errors. Based on that, the difference in trends from the two sets of numbers is probably just an artifact of the way they were processed. Then the question is, which trend is closer to the correct trend? I don’t know if there is a way to determine the answer to that question.

          • Actually, if I want to know if an area (or a continent or the Earth) are trending towards a higher temperature, and at what rate, I want to use temperature data. I have never been clear why people use “anomalies” which just hides the real answer.

            Using anomalies allows the presenter to twist the statistic one more time before giving an answer.

            Temperature is either rising or it isn’t, and if it is it rises over a given period of time at a certain rate. Using TMAX at least tells me the maximum temperature is rising (or it isn’t).

            I would be much more comfortable with a set of numbers that tell me the average, the max, the min, and the variance over time. Plot those on a graph and it allows you to derive your own interpretations. I am sure there is even a better set of measurements that could be used, so this Global Average Temperature (meaningless) and plotting of anomalies is just useless to me.

          • Hi,
            The trends are different because they are measuring different things. The max of an anomaly is not the same thing as the anomaly of the max. If you think about it, the day with the highest anomaly could be in winter, but the day with the highest maximum could also have the smallest anomaly (if it was the same temperature as last year). And since they are measuring different things, it is not surprising that the trends are different.

          • RicDre and Robert of Texas and Percy Jackson,

            As I understand it, without any interpolation of temperatures where there are no stations, the difference could simply be an artifact of station siting. As more stations are added (or removed), if the coverage is different you’d get different averages. In China, for example, if most of the stations were at low latitudes/altitudes in the early part of the century, then more were added in the north or in mountains during mid-century, you’d get lower average temperatures. This is why anomalies are more appropriate – they avoid the variance due to site. Each station then has temperature data relative to the average for that specific site – that’s the anomaly.

            You could still get skewed averages with anomalies if some regions have less or more than average change over time, of course. So if the high latitudes have greater coverage than the low there could be an exaggerated warming trend, which is a good reason for processing the data to account for such differences. Raw data can be misleading.

            Does that make sense?

            The problem, Kristi, is that you are pretending your anomaly is a divergence from a scientific baseline, but it isn’t; it is just a random selection they have made. The “baseline” can actually have other signals, one of which you can clearly see: a long-term uptrend since about 1850. That is why I asked Nick Stokes, when he did his ridiculous analysis, to run the same analysis back on every ten-year period in his so-called baseline years. It will throw up massive anomalies, because his baseline isn’t a scientific baseline, far from it, and it’s clear it has at least one big signal buried in it.

            This comes back to basic science: you need controls. Anomalies in science are supposed to be taken from a scientific baseline, not some stupid statistical analysis. Feynman’s “Cargo Cult Science” speech is sort of how science is supposed to work; if you haven’t read it before, it is worth a read:

            http://calteches.library.caltech.edu/51/2/CargoCult.htm

            The reason climate science doesn’t want to follow that convention is because apparently it is an emergency and we have only some period of “x” years to save the planet. Unfortunately that itself is a lie; the planet is just under more or less pressure, and if we can do eco-friendly things we should. Destroying whole economies is as dangerous as an eco problem, and as you have seen in France, you run the very real risk of conflicts.

          • The use of anomalies artificially makes temperature data look more accurate than it really is. However, even the use of absolute temperatures can have this fault also.

            Two examples: 1) temperature drift of accuracy over time, say a decade; and 2) precision in reading a given instrument.

            Neither of these is ever included in the data sets, nor when they are input to climate models. Have you ever seen what the anomaly range is from measurement errors in a data set? Have you ever seen what a climate model generates as far as errors go, not from statistical errors but from a concurrent analysis of measurement errors of the inputs? Has anyone here ever discovered whether climate model coding has anything included to assess measurement errors and how they affect the outputs?

            Here is another story to contemplate. My boss comes in and says design a bridge to carry the weight of twenty 3,000-lb vehicles at any one time. So I put 10 supports underneath it and design each one for 6,000 lbs of weight. I put a strain gauge in the center and plot anomalies every day. Guess what: every day at 3 am the strain gauge shows a 60,000-lb load. Since that is a zero anomaly, my graph for the next five years shows a flat line of never exceeding the maximum load. Then one day my boss comes in and says the bridge has a collapsed section and asks what might have happened. I show him my anomaly graph and he says “you dumba**, you are fired”! I ask myself why.

        • “…larger errors in many cases.”

          That is one of many problems with anomalies. They can make the data look better than it actually is by hiding the variability.

          But of course the actual error has not been reduced. It is simply hidden in the difference between the anomaly and the absolute.

        • Hi,

          OK, I am confused, you said “the day with the highest maximum could also have the smallest anomaly (if it was the same temperature as last year)” but if two days a year apart have the same temperature, wouldn’t they also have the same anomalies but not necessarily the smallest anomaly?

          • RicDre,
            Imagine that the first of January always had the highest temperature of the year, which for some reason was fixed at 40C. Then the anomaly for that day would be zero. Whereas if the coldest day of the year was warmer than its long-term average by 0.2 degrees, then it could have the highest anomaly of the year despite being the coldest day of the year.
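A quick numerical sketch of this thought experiment (hypothetical values, purely for illustration):

```python
# Hypothetical two-day "climate": a hot day whose long-term mean is 40 C
# and which hits exactly 40 C this year, and a cold day whose long-term
# mean is 5 C but which runs 0.2 C warm this year.
baseline = {"hot_day": 40.0, "cold_day": 5.0}   # long-term station normals
this_year = {"hot_day": 40.0, "cold_day": 5.2}  # observed temperatures

anomalies = {day: this_year[day] - baseline[day] for day in baseline}

day_of_max_temp = max(this_year, key=this_year.get)      # highest temperature
day_of_max_anomaly = max(anomalies, key=anomalies.get)   # highest anomaly

print(day_of_max_temp, day_of_max_anomaly)  # hot_day cold_day
```

The warmest day and the day with the largest anomaly are different days, which is why the max of the anomalies and the anomaly of the max can trend differently.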

        • “Well no. There is no raw form. The only raw data is the temperature measured at individual
          stations scattered across the globe. The moment you start to process that data in any way whether to produce an average temperature or an average anomaly in either time or space you have processed data rather than raw data. An averaged temperature is just as much processed as an anomaly and it has larger errors in many cases.”

          Not so. The original data remains intact; all you’ve done is make a derived output or analysis, but the data has not changed one bit.

          So how is deriving an average of a dataset ‘processing’ new ‘processed’ data?

          • WXcycles- an average of two temperatures is not a temperature. Tavg has no physical meaning, particularly for anomalies because they’ve already been differenced from another T. T is intensive, not extensive. A change in the energy in a parcel of air changes the temperature and pressure of the air, depending on the boundary conditions.

            The temperature in St. Paul is zero. The temperature in Dallas is 80. What is the temperature half way in between?

            If you have two beakers of water with different temperatures you can mix them together but there is no way to calculate the final temperature without knowing the mass in each beaker.
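The beaker point is just the mass-weighted mean; a quick sketch:

```python
# Mixing two beakers of water: the final temperature is the mass-weighted
# mean of the two, so it cannot be computed from the temperatures alone.
def mix_temperature(m1, t1, m2, t2):
    """Final temperature after mixing (same liquid, no heat losses)."""
    return (m1 * t1 + m2 * t2) / (m1 + m2)

print(mix_temperature(1.0, 80.0, 1.0, 0.0))  # equal masses: plain average, 40.0
print(mix_temperature(3.0, 80.0, 1.0, 0.0))  # unequal masses: 60.0, not 40.0
```

Only when the masses happen to be equal does the plain average of the two temperatures give the right answer.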

          • Philo “an average of two temperatures is not a temperature.”

            That’s right, it’s an AVERAGE of the data set.

            As I said.

    • “Similarly I am sure there are plenty of questions where not using anomalies gives the wrong answer.” don’t be sure because you are wrong …

  3. ?????????
    What On Earth Is Going On Here?
    As near as I can tell, your Fig. 3 and Fig. 4 should be absolutely identical.
    Indeed, you should be able to turn your absolute temps, Fig. 3, into anomalies by just changing the labels on the Y axis.
    So what is going on here?
    (Not a rhetorical question, please.)
    Things which come to mind:
    1) You used BEST for absolute temps, fair enough.
    2) You used BEST for anomaly data, and
    3) BEST has some extra processing for the anomaly data which was not done on the absolute temp data.
    OR:
    1) They are not the same data sets *in kind*.
    That is, one is highest annual, the other is the highest monthly.
    So you are looking at two different things.

    SO:
    What happens if you take your Fig. 3, absolutes, and do your own anomaly reduction on it?
    Now that would be interesting.

    ***************************************
    As an aside, we all know the technique of “p-value mining”, so we can slice and dice a large data set until we get a plot we like. Highest June, highest July, highest Aug., highest annual; you get the idea.

    • “Indeed, you should be able to turn your absolute temps, Fig. 3, into anomalies by just changing the labels on the Y axis.”
      No, you can’t. The anomaly average is the average of the anomalies for each station, based on that station’s history.

      Suppose that most stations in the first part were in the south, and in the latter part were in the north. Then the average temperature will go down. But the north stations don’t necessarily have cooler anomalies, nor the south warmer. Average anomaly isn’t affected in the same way.
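Nick’s station-composition argument can be sketched with two hypothetical stations, neither of which warms or cools at all:

```python
import statistics

# Two hypothetical stations: a warm southern one (normal 25 C) and a cool
# northern one (normal 5 C). Neither station has any trend of its own.
south_normal, north_normal = 25.0, 5.0

# Early period: only the southern station reports.
early_temps = [25.0]
early_anoms = [25.0 - south_normal]

# Later period: the northern station starts reporting as well.
late_temps = [25.0, 5.0]
late_anoms = [25.0 - south_normal, 5.0 - north_normal]

# Average absolute temperature falls 25 -> 15: a spurious "cooling" caused
# purely by the change in station coverage.
print(statistics.mean(early_temps), statistics.mean(late_temps))
# Average anomaly stays 0 -> 0: no trend, which is the right answer here.
print(statistics.mean(early_anoms), statistics.mean(late_anoms))
```

The absolute average manufactures a 10-degree drop out of nothing but coverage change, while the anomaly average correctly shows no trend.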

      • The anomaly is calculated station by station before the data is aggregated, rather than after.
        Thanks for the reply.

      • Oh, I didn’t scroll down enough to see Nick’s reply. Obviously we were thinking along the same lines.

      • The problem is that Nick and you refuse to look at the actual problem: an anomaly calculation requires a baseline, and your baseline has massive signals buried in it (at least one is blatantly obvious).

        So let’s give the layman’s answer to your point above: in your situation, suppose there is an oscillation in the normal climate pattern such that it wobbles north and then south. As you noted, you will show an anomaly which actually isn’t an anomaly at all; it’s the oscillation. Net result: you call something climate change that is just a normal oscillation.

        That is why in science, if you do anomalies, you had better be damn sure of your baseline. Unlike you, in science we would analyze the hell out of our baseline, and we would be tougher on it than our harshest critic. Just take a look at the LHC or LIGO background measurement checks if you want some idea of how harshly it is treated.

      • Hi Nick,
        Would you agree that anomalies or absolute temperatures make no difference unless there are missing or flagged measurements? Anomalies add a smaller error than absolute temperatures when we average over missing data, because we fall back to the anomaly rather than to 0.

        Here is an example expressed in SQL. You don’t have to run it, but the code can serve as an explanation. If you could run it, you would see that the anomaly and absolute curves are 100% parallel. (I am using “.” instead of space as qcflag for readability; mid is the station id.)

        # Calculating and storing the base temperature for all stations' January readings
        DROP TABLE base;
        CREATE TABLE base AS
        SELECT mid, AVG(tavg) AS base
        FROM ghcnm_v3_tavg_qcu
        WHERE YEAR(date) BETWEEN 1950 AND 1959
        AND MONTH(date)=1
        AND tavg>-99 AND qcflag='.'
        GROUP BY 1
        # Only accepting stations with all measurements OK
        # Change to check deviation
        HAVING COUNT(*)=10;

        # Comparing the raw average with the anomaly
        SELECT YEAR(date)
        # Here we compare the two; the difference is a constant 2.23
        ,ROUND(AVG(tavg)-AVG(tavg-base),2)
        # We can plot them individually to check that they are parallel
        ,AVG(tavg), AVG(tavg-base)
        ,COUNT(*)
        FROM base b, ghcnm_v3_tavg_qcu a
        WHERE a.mid=b.mid
        AND YEAR(a.date) BETWEEN 1950 AND 1959
        AND MONTH(a.date)=1
        AND tavg>-99 AND qcflag='.'
        GROUP BY 1

        • Matz,
          Yes. Here it is mathematically. If s is station, m month, then for each month there is a set of weights w(s,m) which gives the spatial average. Without area weighting, they would be equal, and equal to 1/N, N= number of readings in the month. And of course, w=0 if there is no reading.

          so Ave T = sum_s(w(s,m)T(s,m)) = sum_s(w(s,m)L(s)) + sum_s(w(s,m)A(s,m))
          where L is the station normal (say, 30 yr average) and A the anomaly (T=L+A).

          Now if the set of stations is always the same, w will not depend on m and the first part, sum_s(w(s)L(s)), would be constant. But if w has a variable distribution of zeroes, it isn’t. In fact, a giveaway is that the Ave T is usually dominated by sum_s(w(s,m)L(s)) (because L varies a lot). But this doesn’t have any weather information at all. It just reflects the set of stations that report.
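Nick’s decomposition above can be made concrete with a tiny numerical sketch (three hypothetical stations):

```python
import numpy as np

# Ave T = sum_s w(s)*L(s) + sum_s w(s)*A(s), per the decomposition above.
L = np.array([25.0, 15.0, 5.0])   # station normals (e.g. 30-yr averages)
A = np.array([0.1, -0.2, 0.1])    # this month's anomalies (the weather)

# Month 1: all three stations report (equal weights).
w_all = np.array([1.0, 1.0, 1.0]) / 3
# Month 2: the coldest station fails to report; weights renormalized.
w_missing = np.array([0.5, 0.5, 0.0])

for w in (w_all, w_missing):
    normal_term = np.dot(w, L)   # depends only on which stations report
    anomaly_term = np.dot(w, A)  # the actual weather signal
    print(round(normal_term, 3), round(anomaly_term, 3))
```

When the cold station drops out, the normals term jumps from 15 to 20 degrees with no change in the weather at all, which is exactly why the absolute average is dominated by the reporting set rather than by climate.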

          • Thanks Nick,

            Then it becomes very important how you calculate the base temp.
            How would you best integrate these to represent the grid-cell they are in?
            http://cfys.nu/graphs/StockholmGrid_January_SM.xlsx (from GHCNMv3 TAVG_QCU)

            The problem I have is how to get the statistically most accurate base temperature for the stations.
            You will find the curves deviate a lot, because the measurement location moved from inland to ocean over the years. This example is extreme in terms of dampening, but UHI would impact in a similar way. A large number of grid cells in GHCNMv3 would be similar to this one.

            I got an interesting response from Steven M the other day and your view would be appreciated. It is for the app I told you about.
            I want to implement an “anomaly algorithm” selection before sending you the link. (And yes, it has taken a while because new/better ideas pop up all the time.)

            Thanks again Nick.

          • No opinion Nick?
            I am currently implementing the CAM method as proposed by Steven Mosher in an earlier thread.
            So far it seems to verify that my serialization method is OK. With serialization I can stick with actual temperature readings (aligned @ overlapping years) and hence track drifts caused by thermometer change/move or UHI.

            Well, I think I can 😎

      • Well thanks, Nick, and Tony L.
        I hadn’t a clue what was going on until you explained it for me. Raises a lot more questions, of course, but at least now I understand the head post!!

  4. The primary issue I have with using anomalies is that it becomes difficult to check prior years against the current year.

    If the historical record for city x is 58.x for the year 19xx, I can compare and contrast the average temp against other years. Using anomalies, I can’t tell what has been adjusted.

  5. Well, there is something that raises an obvious question, even though I’m not contesting the conclusions of the study:

    What is the reference period against which the anomalies are computed?

    • Bob Tisdale provides a link to the China TMAX data.
      From the data description at the top of that file, there is this:

      % Temperatures are in Celsius and reported as anomalies relative to the
      % Jan 1951-Dec 1980 average.

      • China is a funny one, because it has a historic 200-year warming and drying trend which has nothing to do with CAGW and everything to do with a natural process. So I would look carefully at any work with it, because there is a massive natural trend in the data.

  6. Bob,
    You aren’t showing that anomalies give wrong answers. You are showing that averaging temperatures and averaging anomalies give different answers. And of course they do. That is why people use anomalies. It is the average of absolute temperatures that is unreliable and should be discarded.

    The reason is that you don’t have the same stations over the period. If, say, cooler places were added in the later period, then that will pull the trend down, even if no individual stations are cooling. But it won’t pull the anomaly trend down. That is why anomalies are better.

    • Mr. Stokes, you said “The reason is that you don’t have the same stations over the period.”

      Questions:

      1) Do they adjust the raw temperature data for station change like the one in the example you gave?

      2) Are both the absolute temperatures and anomalies based on raw temperature data adjusted for station changes?

        • Yes, agreed, but in “global warming” the word “global” is a misnomer. The local temperatures are controlled by the Sun under the local “climate system”. Here a global average, in terms of anomaly or absolute value, has no role whatsoever. The local met data directly interact with agriculture and/or disease.

          Dr. S. Jeevananda Reddy

        • (cont.) The same is the case with carbon dioxide: it varies with season and region. The southern hemisphere presents lower values and the northern hemisphere higher values. Even in the northern hemisphere, the value goes up with increased latitude. So this is also a local and not a global factor.

          Dr. S. Jeevananda Reddy

          • That is a very important thing for the greenhouse effect: the height in the atmosphere matters. Nick Stokes probably has data, because he was messing around with that sort of analysis.
            The problem is Earth has an oblateness perturbation, which should actually be a signal you should be able to see in climate science. That is actually what I would be using for my baseline if I were playing in climate science, because it’s clean and it’s obvious what it is.

          • I should also say, if you doubt the signal would be large enough to find, do a search on “Coherence between sea level oscillations and orbital perturbations”. That is sea level changes, so you can guess how much it moves the atmosphere; they actually need to take it into account to make sure satellites don’t scrape against the atmosphere.

    • Nick, adding the anomaly of the cooler place does not solve the problem but actually hides it.

      According to the controversial hypothesis, polar regions will warm up faster than the tropics. This suggests that the statistical distribution of the anomaly of a cooler place and that of a warmer place would be quite different.

      Averaging them up is, therefore, problematic.

      Again, our blue ball does not have one temperature on its surface. No amount of statistical trickery can help to deduce it.

      • Anomalies are not perfectly homogeneous. But they are a vast improvement. If you drop a polar station and add a tropic, that might make a difference of 30-40 degrees. If you do the same with anomaly, and with polar warming, the difference might be a degree or so. The effect on the average is much less.

        • Just to clarify: if we look at Figure 2, for TMAX China, and we only looked at stations that didn’t move during the time period, and we had perfect knowledge of temperatures between stations (since they may be sparse), you would expect the slope of the trend line to be similar to the slope of the trend line computed from the same TMAX data using anomalies.
          I am not trying to put words in your mouth, just checking to see if it follows logically or not.

          • It’s not to do with stations moving. It’s about stations that weren’t reporting at all for part of the period. China would have had few stations in 1900, many in 2012. But yes, if all stations reported all the time, the anomaly average would track the average absolute, with a constant difference. Otherwise not.

          • Mr. Stokes, you said “It’s not to do with stations moving…”

            Just as a clarification and a follow-up to our previous discussion, are you saying that they do not adjust the raw temperatures for stations that are added but they do adjust the raw temperatures for stations that are moved? If so, then I understand why you said that adjustments are not relevant in the case of station moves.

        • So back when climate scientists did calculate absolute global temperatures, they’d blindly drop a polar station and add a tropical one without making any adjustments to the calculations to compensate for this change? Come on, do you really expect us to believe they were that stupid? Or do you really believe that yourself?

          And doing the same thing using anomalies is just as wrong! In fact, if all of the weighting, base period, etc, are the same, the global anomaly you calculate using some weighted average of the stations is the same as the absolute global temp from those stations relative to the absolute global temp of the base period for those stations.

      • ChrisB,

        The data always have to be processed further using weighting and interpolation to compute a global average. Converting to anomalies is the first step in that process. It’s not statistical “trickery” unless you think math is witchcraft.

        Maybe that’s one reason graphs created here don’t always agree with the published figures – people are simplifying the calculations.

        • kristi silber
          December 13, 2018 at 10:39 pm
          “The data always have to be processed further using weighting and interpolation to compute a global average. ”

          Rubbish – ridiculous rubbish even!

          If one wants a global average of a global dataset one sums all sites and divides by the number of sites. The honest averaged result of doing that is all that such highly asymmetric point data sites can supply.

          • That’s a problem when a large number of the ‘sites’ are nothing more than an interpolation of a station thousands of kilometres away.

          • Indeed, and almost all datasets above 60 deg N are only just above 60 deg N, with almost none above 70 deg. And satellites are useless above 60 deg N. BUT the Arctic ‘anomalous anomalies’ are the hottest by far; now I wonder why?

          • “That’s a problem when a large number of the ‘sites’ are nothing more than an interpolation of a station thousands of kilometres away.”

            You don’t interpolate that far, even though you could. You can in fact test how far you can interpolate by “holding out” samples of data.

          • Steven Mosher – “You can infact test how far you can interpolate by “holding out” samples of data”

            Frankly Steven, the whole process of interpolation, and the excuses for interpolation with such limited, isolated data on a terrain, is highly improper and unethical.

            That however is not to say that global area temperature mapping cannot be derived from space instruments, ground-truthed and tested by ADS-B-type aircraft reports and WX balloon flights.

            i.e. what really occurs in WX sim forecasting.

            That at least is credible as a process and ethical questions become unreasonable, as long as the assumptions and measures are constrained and explained.

            Interpolating Australian Temps from point sources is without question utterly dishonest and improper, certainly not observations any more, or science, and applied to Earth it is plainly absurd even on a limited basis.

            And yet we are to consider it as ‘data’?

            NO.

          • Steve:
            **“That’s a problem when a large number of the ‘sites’ are nothing more than an interpolation of a station thousands of kilometres away.”

            You don’t interpolate that far. Even though you could. You can in fact test how far you can interpolate by “holding out” samples of data**

            Can you or anyone tell me how far from Eureka temperatures are extrapolated?
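The “holding out” test mentioned in the exchange above can be sketched with a toy one-dimensional inverse-distance interpolator. Everything here is invented for illustration (the 0.6 C-per-degree gradient, the station positions, the held-out point); the point is only that leave-one-out error grows as the nearest remaining station gets farther away:

```python
def idw(x, pts, power=2):
    """Inverse-distance-weighted estimate at position x from (position, temp) pairs."""
    num = den = 0.0
    for xs, t in pts:
        w = (abs(x - xs) + 1e-9) ** -power
        num += w * t
        den += w
    return num / den

def truth(lat):
    """Invented 'true' field: temperature falls 0.6 C per degree of latitude."""
    return 25.0 - 0.6 * lat

errs = []
for gap in (1, 5, 20):  # degrees from the held-out point to the nearest station
    target = 60.0       # the held-out location
    rest = [(target + gap + k, truth(target + gap + k)) for k in range(5)]
    errs.append(abs(idw(target, rest) - truth(target)))
    print(f"nearest station {gap:2d} deg away -> abs error {errs[-1]:.2f} C")
```

In this toy setup the error grows from roughly 0.9 C to roughly 13 C as the nearest station moves from 1 to 20 degrees away, which is the shape of result a real hold-out test is meant to reveal.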

    • Ok, how many new stations were added over the period and where? This can be corrected for.

      If large areas of the planet have no stations, and these areas happen to be in the coldest parts of the planet; wouldn’t adding stations in these areas and bringing down whatever trend existed without them be an improvement?

    • That is why anomalies are better.

      Get real! They’re better only at covering up the fact that most temperature indices are computed not from a fixed, but from an ever-changing, set of stations. That’s simply not a scientifically rigorous way of detecting any temperature change, let alone physically meaningful stored-energy change. Moreover, permitting anomalies to be shuffled ad libitum into the computed global average is an open invitation to index manipulation. Such is the stagecraft of the Climate Follies.
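The station-mix effect both sides above are arguing about is easy to demonstrate with toy numbers. All stations and values below are invented, and the per-station baseline is a crude whole-record mean used only as a stand-in: when a cold station starts reporting partway through the record, the average of absolute temperatures jumps, while the average of per-station anomalies barely moves.

```python
# Two long-running warm stations; a cold station begins reporting in "year" 5.
# All values are invented; the trend at every station is a slow +0.01 C/yr.
warm1 = [15.0 + 0.01 * y for y in range(10)]
warm2 = [12.0 + 0.01 * y for y in range(10)]
cold = [None] * 5 + [-20.0 + 0.01 * y for y in range(5, 10)]
stations = (warm1, warm2, cold)

def absolute_mean(year):
    """Plain average of whichever stations happen to report that year."""
    vals = [s[year] for s in stations if s[year] is not None]
    return sum(vals) / len(vals)

def anomaly_mean(year):
    """Average of per-station anomalies, each against that station's own mean."""
    out = []
    for s in stations:
        known = [v for v in s if v is not None]
        base = sum(known) / len(known)  # crude whole-record baseline, per station
        if s[year] is not None:
            out.append(s[year] - base)
    return sum(out) / len(out)

print([round(absolute_mean(y), 2) for y in range(10)])  # plunges ~11 C at year 5
print([round(anomaly_mean(y), 3) for y in range(10)])   # stays near the tiny trend
```

Whether that makes anomalies “better” or merely conceals the changing station set is exactly the disagreement in this thread; the sketch only shows the mechanics.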

  7. That’s what comes from an obsession with producing ‘something’ from data which might be considered unfit for purpose in other disciplines.

    • michael hart

      As a non-scientist and a non-mathematician, that’s my contention.

      Land surface temperatures in the 1850s were considered a local issue, and the quality of data then was unreliable at best and largely confined to the ‘western’ world. The data from then until the mid-20th Century wasn’t much better, but the global significance was recognised even if a reliable network didn’t exist. Satellites in the 1970s were an ongoing experiment and largely remain so, although they are much better than they were.

      Messing about with historic data, infilling it with guesswork and then presenting it as reliable, is just dishonest.

      • “Messing about with historic data, infilling it with guesswork and then presenting it as reliable, is just dishonest.”

        You spelled “Modern Climate Science” wrong….

        As I’ve said before, if innumeracy had the social stigma that illiteracy does, we’d have more resources going into teaching and retaining math skills. People almost brag about not being able to do math. Nobody brags about not being able to read.

        And if we had less innumeracy, we’d have less panic.

        But the media (“if it bleeds, it leads”) and politicians (“never let a crisis go to waste”) don’t want numerate news consumers and voters. Bad for business.

        • I couldn’t agree more, Caligula Jones.
          While being nothing exceptional in mathematical abilities or training, my formal background in Chemistry means it is almost impossible for the media to alarm me with a “toxic chemicals/biology” story.

          But it is a general grounding in mathematics that can make people far more resilient to scares from almost any source. Being able to do some back-of-the-envelope calculations quickly allows a disinterested reader to ask questions which will frequently bring most scare stories crashing to the ground.
          If the media knew they wouldn’t get much traction with most of their scare stories then we would probably see fewer of them, and scientists speaking to the media would be more likely to return to their former role of being generally quite properly dull when speaking publicly in a professional capacity.

          • Unfortunately, the Numberwatch UK website appears to be gone, but it was a great resource for learning the nasty bits of statistical chicanery.

            People truly don’t know that “the danger is in the dose”, and most of the scares these days are due to the media (deliberately, though most journalists have no math skills anyway) hyping literally minuscule changes that can be due mainly to better measuring. See: grafting tree rings and other proxies, such as sea shells, onto modern thermometer readings…

    • And moderating temperatures are supposed to be a problem why exactly?

      Less harsh winters and warmer nights, what’s not to like?

      • @Rich Davis – Indeed. But that’s why they talk of “averages,” because they can hide all the inconveniently unalarming truths by using “averages” to make it sound like a doomsday heat wave cometh to smite us all for our carbon sins. Ditto for the largest effect occurring at the poles – like 35 below zero instead of 40 below zero is a “catastrophe.”

        All the while, of course, obfuscating the fact that NOT A SCRAP OF EMPIRICAL EVIDENCE exists to show that ANY of it has ANYTHING TO DO with CO2 levels or the MINUSCULE human contribution thereto.

  8. I’m not clear about what these “temperature anomalies” are. I’m under the impression that they are the difference between what the temperature actually was and what it was expected to be on past performance, but I’m probably wrong about that.

    Could someone explain it in simple, non-technical language, for the benefit of a philosopher whose mathematical prowess is just sufficient to cover both hands and one foot?

    • RoHa,

      It starts with selecting a base period, which is usually 30 years, but could be longer. It’s rather arbitrary. The temperatures for each of the 12 months for each station are averaged over those 30 years.

      The anomaly is the difference between the average temp of the month in a given year and that month’s 30-year average.

      So, say that between 1951 and 1980 the average November temperature (averaged across all those years) in Minneapolis was -7 C. If the average in November 2015 was -5 C, the anomaly would be 2 C. An average of -8 C in 1977 would be a -1 C anomaly.

      Does that make sense? (Average monthly temperature is the average of the daily means.)
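The recipe above amounts to a single subtraction once the baseline is computed. A minimal sketch using the Minneapolis figures from this comment (the 30-year series is a stand-in that simply averages to the stated -7 C):

```python
# Baseline: average November temperature over the 1951-1980 base period.
# Stand-in series: every year set to -7.0 so the baseline matches the comment.
nov_temps = {year: -7.0 for year in range(1951, 1981)}
baseline = sum(nov_temps.values()) / len(nov_temps)

def anomaly(monthly_mean, base=baseline):
    """Anomaly = this year's monthly mean minus the long-term mean for that month."""
    return monthly_mean - base

print(anomaly(-5.0))  # November 2015 in the example: +2.0 C
print(anomaly(-8.0))  # November 1977: -1.0 C
```

With real data the only extra work is averaging the actual 30 Novembers instead of a constant series.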

      • …so just to be clear, every station has an anomaly for every month of every year. Daily anomalies could be computed just as well – or annual ones, for that matter – but what’s important is the difference between the data in question and some average over at least 30 years.

        The important part of this is that each site has a temperature data point that is relative to an average for that site. Then you know whether a particular place is getting warmer or colder or staying the same over time. Whereas, if you averaged the temperatures for Timbuktu, Taiwan and Toronto, any change could be obscured by regional variation.

        • The problem with that analysis method is that you have no idea whether the anomaly is real or simply a signal you failed to pick up in your baseline period. Convince me your baseline has no signal already in it and I might be interested; other than that, you are wasting your time.

      • “It starts with selecting a base period, which is usually 30 years, but could be longer. It’s rather arbitrary.”

        Therein lies the problem. The amount of the anomaly is dependent on the base period. When presenting that info it is too easy to make your case by cherry picking the base period.

        • And, guess what? 1951-1980 was the coldest period in the modern record. What a coincidence that the Warmists have selected it as the base period for their Anomalies…

          • Yeah, it’s kind of a “coincidence”, like it’s a coincidence that someone’s bar had a small fire after the owners refused to pay for protection from the local “career offender cartel” (a real phrase, actually).

            Here in Toronto, we “lost” a whole degree of “heat” in that time. Yes, I’ll take warming, thank you:

            30s 7.28
            40s 7.46
            50s 7.95
            60s 7.14
            70s 6.90
            80s 7.42
            90s 8.17
            00s 8.77
            10s 9.21

  9. Bob, very nice presentation for discussion.

    Both methods are at best indications of temperature. The value of anomalies is that they tell you where the heat has gone. Neither method tells the observer where the heat originated.

    Surface 2 m heat is highly transportable; that is why the Arctic region has the higher anomaly values. The strength of Antarctic cold has a large influence on heat-transport direction at certain times of the year. Just relying on temperature values, without detailed data on the mechanisms that transport heat {wind} and block heat {clouds} etc., is futile.

    Heat transported can be measured in many places throughout its journey. Same heat, different place. Understanding wind movement and velocity is vital. Christy commented in his January 2016 report that a spot in Russia warmed anomalously by 7 C, from minus 27 to minus 20 C. My research found that the only thing that was different was the prevailing wind direction: it came from the south-west instead of the NNW. However, it had a profound effect on the outcome. We do not have the depth of absolute knowledge to be definitive about absolute temperatures.

    Thanks Bob, and seasons greetings.
    Martin

  10. That’s a problem when a large number of the ‘sites’ are nothing more than an interpolation of a station thousands of kilometres away.

    • In particular given that much of the supposed “warming” occurs at the poles, where the “interpolation stations” constitute most of the so-called “data.”

  11. The fundamental issue is that the temperature anomalies are created by assuming the raw source data captured the underlying sample distributions. They are ASSUMED to be Gaussian and hence the Central Limit Theorem is applied. That is how you can get lower uncertainties for the anomalies.

    The problem is that this means your data is hypothetical. Real measurement distributions, even from maintained and calibrated instruments, show much larger noise. When you can see a distribution it is often skewed by continuous or discontinuous drift which, if not corrected by recalibration, would produce exotic distributions, often unique to the sample.

    In order to get a useable data set that meets the criteria of studying climate (i.e. changes in temperature of 0.1 degrees C per decade) they had to make these assumptions. Which is fine for the study of hypotheticals.

    But saying that this IS the temperature is daft. It is not borne out by the quality of the instrumentation, nor was the instrumentation ever designed or maintained to achieve the level of uncertainty needed for climate studies. Even Lamb knew this.

    So people arguing here and there completely miss the point: You cannot and should not use these data sets to take real world actions. It is completely unethical.

    • You just hit the nail on the head! Look back through the comments, do you see one where measurement error is included when discussing either absolute or anomaly temperatures? I didn’t.

      The baselines that are calculated are assumed to be accurate to 0.001 degrees. What horse hockey. Neither the accuracy nor the precision of the measurements used to calculate them comes close to that; in many cases they are off by two or three orders of magnitude. Same with anomalies. If you use measurements that have a precision of ±0.5 degrees, how do you assume your averages are any better?
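The two error types at issue here behave very differently under averaging, which is worth separating. A quick simulation (all numbers invented): independent random errors really do shrink as roughly 1/sqrt(N) when averaged, which is what the CLT argument assumes, but a shared calibration bias survives averaging untouched.

```python
import random
import statistics

random.seed(1)
truth = 15.0   # the "true" temperature being measured (invented)
N = 10_000     # number of measurements averaged

# Case 1: independent random errors (std. dev. 0.5 C) around the truth.
random_only = [truth + random.gauss(0, 0.5) for _ in range(N)]

# Case 2: the same random noise plus a shared calibration bias of +0.3 C.
biased = [truth + 0.3 + random.gauss(0, 0.5) for _ in range(N)]

# Averaging crushes the random error (roughly 0.5/sqrt(N)) but not the bias.
print(statistics.mean(random_only) - truth)  # close to 0
print(statistics.mean(biased) - truth)       # stuck near +0.3
```

So the dispute is really about which kind of error dominates real station records, not about whether averaging can ever help.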

  12. There is no global warming, it is global ‘less cold’.

    TMIN increases, and this was always predicted, but not TMAX.

    In fact GH gasses reduce extremes of temperature, and make the planet more habitable.

      • Correct me if I’m wrong, but wasn’t the first “measurable” amount of “global” warming basically Siberian nights getting less cold?

  13. Science is not “the development of anomalies into a coherent theory representing reality”; it is the exclusion of anomalies and the inclusion of real, verifiable, uniform data to provide a representation of the normal system… and THEN you examine the anomalies in that framework.

    • So you are happy … so now run the same analysis on the period immediately before your baseline and on your baseline period itself. Guess what happens: your baseline becomes an anomaly, so you may want to ponder why.

      If you want the answer to that, have a look at the historical temperature graph; I would probably suggest Berkeley Earth because it is one of the more conservative. If you arrive at the correct answer you will see that the more aggressive your temperature rise from history, the more of an issue you have with your baseline period… why?

        • It should be, but isn’t. Instead, a 30-year period (which is meaningless in “climate” terms, and thus immediately suspect) is used.

          • Is this the 30 year running mean that they always refer to?
            So as time goes on, the anomaly period steps forward a year?

        • No, it doesn’t help, because what you are trying to do is extract the natural background variations. What you really need to do is find a signal within the data; if you look at carbon dating or isotope-marker science, that is generally how you approach the problem.

          I have said before there is one signal I would be surprised if you couldn’t find in the data which would be the Earth oblateness perturbation and that for example would be a good baseline.

      • Anomaly data = data - offset

        Every child knows that adding or subtracting a constant from a time series does not change the thing that matters, which is the derivative.
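That claim is easy to verify numerically: subtracting a baseline constant shifts a series up or down but leaves its least-squares slope unchanged. A minimal sketch with an invented series:

```python
def slope(ys):
    """Least-squares slope of ys against x = 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

temps = [14.0, 14.3, 13.9, 14.6, 14.8, 14.5, 15.1]  # invented absolute temps, C
baseline = sum(temps[:3]) / 3                       # an arbitrary base period
anoms = [t - baseline for t in temps]               # anomaly = data - offset

print(abs(slope(temps) - slope(anoms)) < 1e-9)  # True: identical trend
```

The choice of base period therefore moves the anomaly values themselves, but not the trend computed from them.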

  14. To pretend that China has an average temperature, either raw or anomalous, that has any meaning is an academic exercise in mental masturbation. China spans some 2,100 nm from north to south, from borderline sub-arctic to sub-tropical, with a host of different climatic zones in between, from very cold to hot and wettish via parched desert and exceedingly high mountains.
    The same point applies to any country. Is it not time to start looking at climatic zones rather than broad-brush geographical entities?

  15. The real issue here is that air temperature is the wrong metric if you are attempting to measure the amount of energy being ‘trapped’ in the atmosphere. Humidity changes the enthalpy (heat content) of the air considerably, so that 100%-humidity air at 75 F contains roughly twice the energy in kilojoules per kilogram of 0%-humidity air at 100 F. Double the energy is therefore required to raise temperatures in a humid Louisiana bayou compared with the deserts of Utah and Arizona.
    Varying levels of humidity can therefore account for the air temperature change all on their own. This is seen in the temperature plots of air cooling at night time: absolutely nothing to do with CO2 or downwelling infrared, just straight adiabatic enthalpy changes.

    The correct metric for energy content is kilojoules per kilogram. To calculate that it is necessary to know the enthalpy of the air (from the humidity) and the temperature of the air.
    https://www.engineeringtoolbox.com/enthalpy-moist-air-d_683.html

    You can do all the clever mathematics you want on the wrong metric and you will still get the wrong answer.

    However, as it appears that the incorrect answer is useful in supporting some arguments for funding in climate ‘science’ – incorrect use of temperature as a metric for atmospheric energy content will continue.
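The 75 F versus 100 F comparison above can be checked against the standard moist-air enthalpy relation, h ≈ 1.006·T + W·(2501 + 1.86·T) kJ per kg of dry air (T in deg C, W the humidity ratio), with the Magnus approximation for saturation vapour pressure. This is a rough sketch of that check, not a substitute for proper psychrometric tables:

```python
import math

def sat_vapour_pressure(T_c):
    """Saturation vapour pressure in Pa (Magnus approximation)."""
    return 610.94 * math.exp(17.625 * T_c / (T_c + 243.04))

def humidity_ratio(T_c, rh, p=101325.0):
    """kg of water vapour per kg of dry air at relative humidity rh (0-1)."""
    pw = rh * sat_vapour_pressure(T_c)
    return 0.622 * pw / (p - pw)

def enthalpy(T_c, rh):
    """Moist-air enthalpy, kJ per kg of dry air."""
    W = humidity_ratio(T_c, rh)
    return 1.006 * T_c + W * (2501.0 + 1.86 * T_c)

h_humid = enthalpy((75 - 32) * 5 / 9, 1.0)   # 75 F, 100 % RH: ~72 kJ/kg
h_dry = enthalpy((100 - 32) * 5 / 9, 0.0)    # 100 F, 0 % RH: ~38 kJ/kg
print(h_humid / h_dry)                       # roughly 1.9, close to "twice"
```

The ratio comes out near 1.9, so the “twice the energy” figure in the comment is about right for these two states.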

    • “The real issue here is that air temperature is the incorrect metric if you are attempting to measure the amount of energy being ‘trapped’ in the atmosphere.”

      1. Actually you would want to look at OHC, which we do, and OHC continues to increase.
      There was an LIA. It’s getting warmer.
      2. Ideally you would also want to measure the increase in heat in the atmosphere, but historically
      we only have temperature. It too tells us that it is getting warmer.

      Any way you slice it, any metric you use which indicates an energy imbalance tells the same story.

      1. Temperature tells the same story
      2. Sea level rise tells the same story
      3. OHC tells the same story
      4. Animal migration
      5. Ice loss
      6. Plant migration

      Every indicator we have tells the same story. It is getting warmer. Nobody denies climate change.
      There was an LIA. It is getting warmer. All evidence points to a warming planet whether you
      measure it in Kelvin, or C, or F; whether you report absolute T or anomalies; whether you report
      it to .01 C or .1 C. Every sign, every proxy, every metric. Any way you slice it, it is getting warmer.
      Whether you use 30 stations or 300 or 3000 or 30000. It is getting warmer. Climate changes.

      It’s long past time for denying that it’s getting warmer. Yes, we could have measured it more perfectly.
      But the amount of brain power skeptics waste on challenging a record that proves what they already believe – namely that climate changes and there was an LIA – is astounding.

      Not all skeptics waste their time challenging a solid record of warming. The smarter ones focus their energy and “credibility capital” on these issues.

      A) Why is it warming?
      B) Will it continue to warm?
      C) How much?
      D) And what, if anything, should we do about it?

      • Steven Mosher – would you kindly provide some acknowledgement that you understand the role of CO2 in the Carbon Cycle of Life?

      • @Nick
        To the “issues”/questions, some logical answers:

        A) Why is it warming? – Natural climate variability, just like all of the warming and cooling that has occurred over the last 4.5 billion years or so of the Earth’s existence.

        B) Will it continue to warm? – Unknown, since we lack sufficient knowledge of the climate system to reasonably predict the future.

        C) How much? – Again, unknown – too many variables, including those we have yet to identify, observe, measure, and understand, to know how much, OR if, it will warm – or cool – and when.

        D) And what, if anything, should we do about it? – We simply ADAPT to the changes that DO occur (which is all we CAN do, since we are NOT in control of the climate and do NOT have any effect on it that is measurable or significant), while simultaneously attempting to expand our knowledge of how the NATURAL forces that ACTUALLY drive the Earth’s climate work, so that we can be prepared for what comes.

        • Age. This is why I came to change my mind about CAGW, as a non-science-based citizen. I explored for years the challenges to my religion, and discovered the whole theory was based on population reduction and wealth redistribution. I knew that years ago, and now the global elected and unelected “elites”, who control the purse strings, are giving their tell openly. It’s a collectivist plot to fundamentally change society. Figueres said it, and many others.

          Your response highlights the most logical approach, because it isn’t from a misguided faith in bad science and corrupted scientists with an agenda. It’s a logical approach and, as far as I can tell, soundly grounded in rational thinking.

          My frustration with folks like Mr. Mosher is simple: very smart people have put a lot of emotional investment into an idea whose outcome, should they be correct, is extremely negative and anti-human. It’s an ideology that refuses to put emotion aside and genuinely question whether its foundation is built upon rock instead of sand. Worse yet, the attitude towards those of us who don’t share that religion comes off as smug and arrogant, two things highly detestable to me, just after dishonesty.

          I keep falling back to my mother, who, after working very diligently to put herself through nursing college while I was a child, was on the team that installed the first completely artificial heart in a human, then worked her way up to run two plastic surgery clinics at Hershey med center… She still believes the Catholic Church is the path God ordained.
          She hasn’t truly investigated the politics surrounding how the book was organized (the Council of Nicaea), the corruption of the church fathers and clergy throughout history, the break in papal lineage, the errant dogma, transubstantiation as cannibalism, the pagan traditions adopted from conquered societies, etc., and she doesn’t think the Crusades were a defensive war because she’s fallen for that bit of secular propaganda. All the while, she thinks I’m lost because I don’t take the authority of the church as valid or sacred, since it is a man-made entity.

          I see this same exact behavior with very smart people when it comes to this cagw narrative, now morphed into climate change (yet still always discussed in terms of warming).

          You can be as book smart as you want, but that doesn’t preclude one from fooling oneself. I’m ok with it, but let’s stop pretending you have the moral or righteous highroad.

      • Who denies that it has warmed since the LIA, nobody that I know.
        But we do ask other questions, and one of them is about adjustments to the past. I have asked you before (the last time on this thread https://wattsupwiththat.com/2018/12/09/urban-heat-island-influence-inadequately-considered-in-climate-research/) and will keep asking.

        Tell me about the NOAA Global Temperatures for 1995, 1997 & 1998 and how they have changed as follows
        in 1998 the Report for 1997 showed
        62.45F or 16.92C
        and also stated 1995 was
        62.30F or 16.83C
        in 1999 the Report stated that the temperature for 1998 was higher than 1997.
        Currently the value for 1997 is
        58.13F or 14.53C

        So can you explain to me how the latest calculations reduced the 1997 temperature by
        4.32F or 2.39C?

        Now don’t give me the baseline change that is on the NOAA website as the reason as they made the mistake of posting the 1997 temperature as an Actual Temperature instead of an Anomaly.

        • “So can you explain to me how the latest calculations reduced the 1997 temperature by
          4.32F or 2.39C?”

          I would like to hear that explanation myself. Steven ought to know all about it. Every nuance.

          Explain how NASA and NOAA disappeared 1998’s position in the hierarchy of hot years in the temperature record, right before our eyes. 1998 went from significant to insignificant. That was required so they could use their “hotter and hotter” narrative. It wouldn’t work if 1998 was the second hottest year in the modern record.

          That’s how bad it is: They can disappear 1998 right before our eyes, and all we can do is sit here and watch them bastardize the temperatures records one more time, and complain about it.

      • Steven,
        I agree with you that ocean heat content is the correct metric for global warming, especially as the top 6 meters or so of the ocean contain as much energy as the entire atmosphere. I also agree with you that the oceans have been warming.
        However, the anthropogenic global warming hypothesis is that CO2 emissions into the atmosphere have resulted in an increase in the percentage of CO2 molecules which absorb outgoing long-wave radiation (infrared) and either immediately re-emit the radiation, ~50% of which becomes ‘downwelling infrared’, or pass on that energy by collision with N2 and O2 molecules, raising their kinetic energy – in aggregate, their temperature. As shown above by Bob, the accepted observations are that atmospheric temperatures have risen by ~1 C in a century. In the unlikely event that all the extra energy causing that temperature increase could be transferred into the oceans, the resulting warming would be immeasurably small, far smaller than the observed rise in ocean temperature. Nevertheless, the hypothesis is that this minuscule ‘warming‘ is sufficient to increase evaporation of water into the atmosphere, and since “water vapor is the most powerful green house gas”, this will inevitably lead to a rapid increase to catastrophic global warming.
        But the hypothesis is ignoring several important issues:
        1. Yes, warm air over the ~70% of the Earth’s surface covered with water, and the land areas covered with plants (another 20%), will increase evaporation. That has three effects. First is evaporative cooling of the surface, as the latent heat of evaporation is removed from the surface and convected upward; this heat is released higher in the atmosphere as the water vapor changes state, and cold precipitation of liquid or frozen water further cools the surface (cf. Willis Eschenbach’s cooling hypothesis). Second, the clouds formed as the water vapor changes state increase albedo, reflecting solar energy back into space and reducing solar warming (cf. Lindzen’s ‘iris hypothesis’). Finally, the increased humidity increases the enthalpy of the atmosphere, allowing the carriage of significant amounts of latent heat energy without increasing temperature. [Perhaps the tropospheric hot spot is actually a latent heat spot?]
        2. ‘Downwelling’ infrared radiation does not heat the water surface as it can only penetrate as far as the first water molecule and it is absorbed increasing the vibrational energy of that molecule to a level where it may change state to vapor. The excited surface molecules then leave the surface taking the latent heat of evaporation with them. Then see 1 above.
        1 and 2 are the hydrologic cycle which is a strong negative feedback to warming of any kind. So I agree the oceans are warming but it has nothing to do with CO2 and infrared. There are only two ways to warm the oceans: geothermal/volcanic warming from underneath which is probably minor; and, short wave solar energy that penetrates several tens of meters into water. If we assume that the geothermal input is constant, then the variability of the ocean heat content must be due to variation in the short wave solar energy caused either by variations in the sun or by variations in albedo. The hydrologic cycle that causes the albedo can also be affected too if the winds alter perhaps due to inertial effects in the atmosphere caused by Length of Day changes due to gravitational effects from the Moon, Sun and possibly the planets.
        In short, a warming atmosphere will cool most of the Earth’s surface by evaporative cooling, in the way a hand drier feels cold on wet hands; it is why driers use warm air, and why people blow on coffee to cool it.

        Neither downwelling infrared nor a warm atmosphere can warm the oceans or transpiring vegetation which is ~90% of the Earth’s surface. Indeed their effect is to cool the surface. The CO2 initiated global warming hypothesis is illogical and falsified.

        • Hi Ian,

          are there any physical measurements of this nighttime downwelling LWIR?

          Totally agree with you on your comment ” There are only two ways to warm the oceans: geothermal/volcanic warming from underneath which is probably minor; and, short wave solar energy that penetrates several tens of meters into water”

          • Hi Scott

            You wrote: “Inputs plus assumptions plus calculations do not match current observations ie no hot spot and do not work with historically higher CO2 levels by thousands of ppm. therefore as it stands the entire concept is flawed. Which means one of, or a combination of inputs, assumptions or equations are wrong or missing something very important.”

            I’m writing because I would like you to understand the difference between radiation transfer calculations and “climate models” (AOGCMs) and weather forecast models. The physics of the interactions between radiation and matter is described by quantum mechanics and has been well understood for almost a century, though in increasing detail and refinement in recent decades. To calculate how much energy is transferred by radiation along a path from one location to another, one needs first to specify the composition of the atmosphere: the mixing ratios of all of the GHGs (including highly variable water vapor), and the pressure and temperature at all locations along the path. When those inputs are carefully measured, there is excellent agreement between observation and theory. (I posted two references on this subject and a third to a Wikipedia article on the Schwarzschild equation for radiation transfer.)

            For radiation transfer, you also rely upon a database of absorption cross-sections for more than 100,000 absorption lines that have been carefully studied in the laboratory. The most important lines were studied long ago and compiled for use by aeronautical and aerospace engineers needing reliable information about radiative heat transfer, long before the hysteria about climate change. To reduce computational expense, short-cuts (narrow- and broad-band methods) for dealing with many different lines at once have been created and validated against “line-by-line” methods.

            AOGCMs and weather forecasting models are totally different. They contain software that calculates radiation transfer, and also heat transfer by convection, evaporation, condensation and conduction. The model generates all of the REQUIRED INPUTS for radiation transfer calculations INTERNALLY. They contain grid cells that are far too large to properly describe all of the phenomena occurring inside each grid cell, particularly condensation, precipitation and turbulent mixing. These processes are described by adjustable parameters that must be tuned as a group to match observations. We know that the output from weather forecast programs is useful for a few days to weeks, before uncertainty in the conditions used to initialize the model takes over. (Changing the input temperatures by a few hundredths of a degree changes the forecast two weeks in the future.) AOGCMs are far more complex than weather forecast models. They need to deal with heat flux within the ocean, sea ice, and other issues. There are so many parameters in an AOGCM that there is no way to rigorously “tune” them – select an optimum set of parameters that best describes today’s climate. Studies with simplified models, where different model parameters have been systematically explored, suggest that many different parameter sets give models that reproduce observed climate equally well (or, more accurately, equally poorly). The climate sensitivity of AOGCMs changes significantly with how they are parameterized. (With 1000 simplified models, ECS ranged from 1.5-11.5 K when parameters were selected at random from within a plausible range.) Models also assume that the uncertainty associated with initialization that quickly degrades weather forecasts doesn’t prevent them from properly describing average weather – climate.

            In summary, radiative transfer calculations are well-validated science; AOGCMs are unvalidated hypotheses containing some validated science. Radiative transfer calculations say that rising GHGs will reduce radiative cooling (forcing measured in W/m2) to space and that implies an unknown amount of warming, if nothing else changes. However converting forcing to an amount of warming (in degC) is an incredibly complicated process that can’t be validated. I’m hoping you recognize the difference. Given that even the IPCC provides a 66% confidence interval for climate sensitivity of 1.5-4.5 degC and admits that even 1 degC is possible, the IPCC recognizes some of these problems. Nevertheless, they fraudulently provide projections from a collection of AOGCMs with a mean ECS of 3.3 degC without clearly explaining the limitations of the models that produce these projections.

            Both weather prediction programs and AOGCMs predict average warmer temperatures in the Sahel compared with the Congo. Only over-simplified ideas about radiation transfer fail here.

            The missing hot spot could represent a problem with AOGCMs OR with our ability to accurately measure the warming rate 10 km above the surface in the tropics. Given all the complaints about uncertainty in rising surface temperature, I’m not sure the apparent lack of enhanced warming in the upper tropical troposphere is real – even though radiosondes currently clearly say no hot spot exists. If enhanced warming high in the troposphere doesn’t develop, an important negative feedback that limits ECS (lapse rate feedback) doesn’t exist. Since I already have plenty of other evidence that AOGCMs are wrong, I hope a hot spot will eventually be discovered. I don’t know enough about this subject right now to know if there is any real hope.

          • Hi Frank,

            I don’t know why you would want a hot spot to be discovered. This suggests you are well aware there is a flaw in the whole argument. Add to this the desire to be pro-science at the TOA while ignoring it when it comes to LWIR penetrating and warming the ocean.

            The fact that the Hot Spot doesn’t exist, along with the failure to account for the past, suggests, as I have said, that regardless of how pretty the equations are, they fail to replicate what we observe and measure in the climate, and the further we walk the climate models forward, the worse they perform.

            Which comes back to my original argument: something is either wrong and/or missing in their models.

          • Hi Scott

            I already have plenty of good reasons to suspect that AOGCMs are seriously flawed (even though radiative transfer calculations and radiative forcing are reliable). I don’t need the absence of a hot spot in the upper tropical troposphere to convince me AOGCMs are wrong.

            When a parcel of air rises, it expands and cools. If the air surrounding the parcel is cooler than the risen parcel, the risen parcel will keep rising. Simple calculations show that the atmosphere should be unstable with respect to buoyancy-driven convection when the lapse rate (the drop in temperature with altitude) exceeds 9.8 K/km for dry air, and as little as 4.9 K/km for humid air in the tropics. On average, we observe a lapse rate of 6.5 K/km (higher in the drier upper troposphere and lower in the moister lower troposphere). “Everyone” believes this average lapse rate is a result of buoyancy-driven convection.
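
            As a quick sketch, the 9.8 K/km dry figure is just g divided by the heat capacity of air at constant pressure (standard textbook values assumed):

```python
# Dry adiabatic lapse rate = g / cp (standard textbook values)
g = 9.81       # gravitational acceleration, m/s^2
cp_dry = 1004  # specific heat of dry air at constant pressure, J/(kg*K)

lapse_dry = g / cp_dry * 1000  # convert K/m to K/km
print(f"Dry adiabatic lapse rate: {lapse_dry:.1f} K/km")  # ~9.8 K/km
```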

            As the planet warms, if absolute humidity rises the lapse rate should go down, because more latent heat is being released as air rises. A falling lapse rate means that 1 degC of warming at the surface could be associated with 2 degC of warming in the upper atmosphere – where most of the photons escaping to space are emitted. So we can obtain the benefits of the increased radiative cooling to space associated with 2 K of warming in the upper atmosphere at a cost of only 1 K of warming at the surface. This is called lapse rate feedback, and it decreases ECS.

            Of course, increased water vapor (a GHG) decreases radiative cooling to space. Positive water vapor feedback overwhelms negative lapse rate feedback in simple calculations. If positive water vapor feedback is operating as expected, but unknown factors are preventing enhanced warming in the upper tropical troposphere compared with the surface, climate sensitivity will be much higher than it would be otherwise. If the absence of a hot spot means that absolute humidity isn’t rising as fast as expected in the upper tropical troposphere, then both feedbacks will be weaker than expected and climate sensitivity could be much lower than expected.

            You can say the absence of a hot spot means models are wrong, but wrong could mean that their ECS is too low – or too high. We want the upper troposphere to warm more than the surface, but not if that is associated with the expected amount of water vapor feedback. The absence of a hot spot – if real – tells us we are NOT getting the benefits of negative lapse rate feedback. It doesn’t tell us whether positive water vapor feedback is developing as expected.

          • Hi Frank

            Thanks for your response. What it tells me is there are a lot of “ifs and maybes” – they just don’t know. Particularly as they don’t understand what is happening now and can’t explain what happened in the past.

            Therefore, spending $trillions to stop CO2, when they know very, very little about how it isn’t doing what it is supposed to, is just plain stupid.

            The heat has left the building; it’s not in the atmosphere and it’s not in the ocean – it’s gone. Time people did a rethink on what they are missing.

          • Hi Scott:

            I agree with this thought: “The heat has left the building; it’s not in the atmosphere and it’s not in the ocean – it’s gone. Time people did a rethink on what they are missing.”

            A more scientific way of putting this is to discuss the overall climate feedback parameter (alpha) – how much additional OLR and SWR is emitted or reflected to space per degree of warming (W/m2/K).

            If alpha is -2 W/m2/K, then the 1 degC of warming since pre-industrial means that we are emitting 2 W/m2 more radiation to space. This would nearly negate the current forcing of about 2.7 W/m2 and leaves a current imbalance of 0.7 W/m2 that is accumulating in the ocean. This is what ARGO says is happening. The heat hasn’t “left the building”; the planet is emitting it to space rather than having it accumulate in the ocean.

            If alpha is -1 W/m2/K, then the 1 degC of warming since pre-industrial means that we are emitting 1 W/m2 more radiation to space. This doesn’t negate most of the current forcing of about 2.7 W/m2 and leaves a current imbalance of 1.7 W/m2 that is accumulating in the ocean. Much less heat is leaving the planet because of the warming we have already experienced.

            Take the reciprocal of -2 W/m2/K and get 0.5 K/(W/m2). If 3.7 W/m2 = 1 doubling of CO2, we can convert W/m2 into doublings and get 1.85 K/doubling – roughly the ECS of Nic Lewis’s energy balance models. Doing the same thing for -1 W/m2/K affords 3.7 K/doubling – roughly the ECS of AOGCMs. The crucial question comes down to how much of the current forcing is escaping to space and how much is flowing into the ocean.

            Having the upper troposphere warm more than the surface (a “hot spot”) helps more heat leave the planet for a given amount of surface warming. Having more water vapor there hinders cooling.
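
            The reciprocal arithmetic can be sketched in a few lines, assuming the conventional 3.7 W/m2 per doubling of CO2:

```python
# Convert a climate feedback parameter alpha (W/m2/K, given as a
# negative number) into an ECS in K per doubling of CO2.
F_2X = 3.7  # conventional radiative forcing per CO2 doubling, W/m2

def ecs_from_alpha(alpha_w_m2_k):
    """ECS = forcing per doubling divided by |alpha|."""
    return F_2X / abs(alpha_w_m2_k)

print(ecs_from_alpha(-2.0))  # 1.85 K/doubling, like energy-balance estimates
print(ecs_from_alpha(-1.0))  # 3.7 K/doubling, like the average AOGCM
```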

        • Ian: You are correct in saying that DLR is absorbed in the top 10 um of the ocean, but that doesn’t mean that absorption of DLR doesn’t add energy to the ocean. Whether an object warms or cools doesn’t depend on a single flux (such as DLR); it depends on all fluxes. If the sum total of all fluxes into an object is greater than the sum total of all fluxes out of it, then the object warms.

          Consider the top 10 um of the ocean. It receives roughly 333 W/m2 of DLR, but loses 80 W/m2 of latent heat (plus 10 W/m2 of sensible heat) and 390 W/m2 of upward LWR. (Penetration of LWR into the ocean and emission of LWR from the ocean occur in the same 10 um layer.) To maintain a constant temperature, SWR is needed, but most of it is absorbed well below the top 10 um. Some heat is conducted into the top 10 um, but that simply cools a thicker layer at the top. Given the net outflow of energy, the top mm or cm of the ocean cools until it is denser than the water immediately below, at which point it sinks and warmer water rises to the surface by convection. When the wind is blowing hard enough, the ocean is also physically mixed. So in the long run the 333 W/m2 of DLR contributes all of its energy to the mixed layer of the ocean.
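
          Tallying the skin-layer fluxes quoted above (round numbers from this comment, not measurements):

```python
# Energy budget of the top ~10 um "skin" of the ocean. A negative net
# means the skin loses energy and must be resupplied by convection and
# conduction from the SWR-warmed water below.
dlr_in   = 333  # downwelling LWR absorbed in the skin, W/m2
lwr_out  = 390  # upward LWR emitted from the same layer, W/m2
latent   = 80   # evaporative cooling, W/m2
sensible = 10   # sensible heat loss, W/m2

net = dlr_in - (lwr_out + latent + sensible)
print(f"Skin-layer radiative+turbulent budget: {net} W/m2")  # -147 W/m2
```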

          So instead of DLR “boiling away” the top 10 um of the ocean, the top 10 um of the ocean is actually cooler than the bulk water 10-100 cm immediately below. If SWR were absorbed below the surface and its energy never made its way to the surface by convection, the surface of the ocean would freeze and the water immediately below would eventually boil. When examined closely, the idea that DLR doesn’t add energy to the ocean is pure nonsense.

        • Ian wrote: “However, the anthropogenic global warming hypothesis is that CO2 emissions into the atmosphere have resulted in an increase in the percentage of CO2 molecules which absorb outgoing long wave radiation (infrared) and either immediately re-emit the radiation ~50% of which becomes ‘downwelling infrared’, or, pass on that energy by collision with N2 and O2 molecules raising their kinetic energy – in aggregate their temperature.”

          99+% of the time a CO2 molecule absorbs a photon, the resulting excited state is relaxed by collisions with other molecules; re-emission is negligible. Near the surface, collisions occur about once a nanosecond (though not every collision results in relaxation), while the average excited state requires about 1 second to emit a photon. So essentially all photons are emitted by CO2 molecules that were excited by collisions. The fraction of molecules in an excited state is given by the Boltzmann distribution and depends on temperature. This causes the temperature dependence of emission seen in Planck’s Law, the S-B equation and the Schwarzschild equation.
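
          As a rough check, the Boltzmann factor for the 667 cm^-1 bending mode of CO2 at a near-surface temperature works out to a few percent (degeneracy is ignored here, so this is only an order-of-magnitude sketch):

```python
import math

# Physical constants
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

nu = 667 * 100          # 667 cm^-1 expressed in m^-1
E = h * c * nu          # energy of one bending-mode quantum, J
T = 288                 # typical near-surface temperature, K

frac = math.exp(-E / (k * T))  # Boltzmann factor for the excited state
print(f"Boltzmann factor at {T} K: {frac:.3f}")  # ~0.036, i.e. a few percent
```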

        • Ian wrote: “In the unlikely event that all the extra energy causing that temperature increase could be transferred into the oceans it would be immeasurably small, far smaller than the amount of ocean temperature rise.”

          Consider a thin layer of atmosphere. It emits equal amounts of energy in all directions, but only the component of flux in the +z or -z direction affects the temperature of the planet. Those z components are equal. According to Beer’s Law, absorption is proportional to the density of absorbing molecules, the distance traveled and the INTENSITY OF THE RADIATION ENTERING THE LAYER. Upward fluxes (or photons) originate at the surface or lower in the atmosphere where it is warmer, while downward fluxes originate where it is colder. So any layer absorbs more upward radiation than downward radiation. This is why the intensity of outward LWR decreases from an average of 390 W/m2 at the surface to only 240 W/m2 at the TOA, while DLR starts at zero at the TOA and increases to an average of 333 W/m2. As you increase the density of GHGs, the reduction in outward flux becomes greater. Some people call the 150 W/m2 reduction in outward flux the “GHE” and the additional reduction from rising GHGs the “enhanced GHE” or radiative forcing.

          However, GHGs in our atmosphere absorb LWR so well that the 160 W/m2 of solar radiation that reaches the surface can’t escape as fast as it arrives. (Net radiative cooling = 390 (OLR) – 333 (DLR) = 57 W/m2.) That is why we need convection (100 W/m2) to cool the surface of the planet, mostly via latent heat. However, convection doesn’t occur without an unstable lapse rate. Thus the temperature gradient of the troposphere is determined by convection, not radiation. For the temperature of the atmosphere to remain constant, the upward and downward fluxes of radiation+latent heat+sensible heat must balance at all altitudes. By the tropopause, the atmosphere has become transparent enough that upward LWR equals downward SWR and convection is no longer needed. Consequently, surface temperature is determined by the temperature at the top of the troposphere and the lapse rate from there to the surface. And the temperature of the upper troposphere needs to be hot enough to emit 240 W/m2 upward to space. When there are more GHGs in the upper atmosphere, 240 W/m2 is no longer escaping to space from the upper troposphere and it starts to warm. That warming slows convection, and the warming propagates to the surface via the lapse rate and reduced convection, and eventually into the ocean as increased DLR. The surface can warm because of reduced convection as well as increased DLR.

          The only way heat can leave our planet is by radiation. The choke point in the outward flux of heat occurs in the upper troposphere, where only radiation can carry heat to space. When GHGs slow down radiative cooling to space (the choke point), heat builds up EVERYWHERE below, but mostly in the ocean.

          A simple calculation will show you that a 1 W/m2 radiative imbalance at the TOA provides enough power to warm the atmosphere and a 50 m mixed layer of ocean at an INITIAL rate of 0.2 degK/year. (If all of that heat remained in the atmosphere, the initial warming rate would be 3 K/yr, IIRC.) In reality, all of the heat doesn’t remain in the mixed layer: some is convected below, and radiative cooling immediately starts increasing as the mixed layer warms. So a 1 W/m2 radiative imbalance can easily change temperature over a year or more, even though 1 W/m2 appears tiny compared with the hundreds of W/m2 that move up and down through the atmosphere.

          The bottom line is that some subtle misunderstandings can make it difficult for us to understand how our climate system actually functions. The radiative forcing from rising GHGs is a tiny perturbation in massive fluxes of energy.

          The 20 ppm rise in CO2 over the last decade is about 1/14 of a doubling (1.05^14 = 1.98), which is 0.25 W/m2 of radiative forcing. That is enough power to warm a 50 m mixed layer at 0.05 K/yr, or 0.5 K/decade. The observed rise isn’t that big because warming is sending more heat back to space and into the deep ocean.
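
          A sketch of that back-of-envelope arithmetic, reusing the ~0.2 K/yr-per-W/m2 initial warming rate quoted earlier in this comment:

```python
# A ~20 ppm (~5%) rise in CO2 is about 1/14 of a doubling, since
# 1.05^14 ~ 2. Convert that to forcing, then to an initial warming rate.
F_2X = 3.7                   # W/m2 per CO2 doubling
doublings = 1 / 14           # fraction of a doubling in one decade
forcing = F_2X * doublings   # decadal forcing increment, W/m2
rate = 0.201 * forcing       # K/yr, using the initial warming rate above

print(f"Forcing: {forcing:.2f} W/m2, initial warming: {rate:.3f} K/yr")
```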

          • Frank,

            Can you provide references to experimental/observational evidence that:
            * A body of water is warmed by infrared in the CO2 emission bands at 3 watts/sq meter?
            * A body of water is warmed (or cools slower) when the air above the water is warmer than the water.

            Note that this means observational evidence, not models or claims based on mathematical conjectures.

            I am afraid science is proving that bit about the choke point wrong. The more CO2, the faster the outer atmosphere cools. I know you have quoted the theory, so how does that work if it is a choke point?

          • Ian: Below is the data from the spreadsheet I used to calculate that a 1 W/m2 imbalance can warm at an initial rate of 0.2 K/year. Since heat (technically power) comes in units of W/m2, one needs to know the heat capacity of air and water per m2. For air, that comes from the weight (pressure) of air above every m2. For the ocean, there are 50 m3 of “mixed layer” below every m2 of surface absorbing radiation. (We call this the mixed layer because heat from seasonal warming in the summer is, on average, turbulently mixed into a layer this deep every year.) I call this an initial rate because – as soon as any warming begins – more heat starts being emitted through the TOA. If the planet emits and reflects 2 W/m2 more LWR and SWR per degK of surface warming (2 W/m2/K), then 0.5 K of warming (2.5 years’ worth at an initial rate of 0.2 K/yr) would restore steady state after an abrupt forcing of 1 W/m2. 2 W/m2/K implies a steady state for doubled CO2 of +1.8 K; 1 W/m2/K or 3 W/m2/K implies +3.6 K or +1.2 K. In reality, the imbalance gradually shrinks with warming, producing a negative exponential approach to steady state, and the initial rate begins slowing after around a year.

            I calculate heat capacity in terms of kJ/m2/K and power in terms of kJ/m2/yr. Dividing the latter by the former gives warming in K/yr. The rest of the calculation is fairly straightforward. About 10% of the heat capacity per m2 is atmosphere.

            Since this is my own calculation, I would greatly appreciate being informed of any errors. (I’m not interested in spreading false information.)

            Heat capacity of water:              4.186 kJ/kg-degK
            Heat capacity of air (constant P):   1.012 kJ/kg-degK

            Weight of air:                       10,318 kg/m2
            Heat capacity of air column:         10,442 kJ/m2-degK

            Density of water:                    1000 kg/m3
            50 m mixed layer x 70% of surface:   35,000 kg/m2
            Heat capacity of mixed layer:        146,493 kJ/m2-degK

            Total heat capacity:                 156,935 kJ/m2-degK

            1 W/m2 = 1 J/s/m2 = 31,556,736 J/m2 over one year ≈ 31,557 kJ/m2/yr

            Initial warming rate:                0.201 degK/yr per W/m2
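
            The same calculation in a few lines of Python (same inputs and order of operations as the spreadsheet):

```python
# Initial warming rate of atmosphere + 50 m ocean mixed layer per
# 1 W/m2 of TOA radiative imbalance.
cp_water = 4.186      # kJ/(kg*K)
cp_air   = 1.012      # kJ/(kg*K)

air_mass  = 10318.228           # kg of air above each m2 (from surface pressure)
air_heat  = air_mass * cp_air   # kJ/(m2*K)

water_mass = 50 * 1000 * 0.70      # 50 m mixed layer over 70% of the surface, kg/m2
water_heat = water_mass * cp_water # kJ/(m2*K)

total_heat = air_heat + water_heat  # kJ/(m2*K)

one_w_m2_yr = 31556736 / 1000       # 1 W/m2 for one year, kJ/m2
rate = one_w_m2_yr / total_heat     # K/yr per W/m2
print(f"Initial warming rate: {rate:.3f} K/yr per W/m2")  # ~0.201
```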

          • Hi Frank, the whole problem with the raising-the-emission-height hypothesis is that it assumes water vapour is constant, just like you did in your spreadsheet. We know that is not true, unless you are happy to say deserts have the same water column as the tropics.

            All the calculations in the world are worth nought if one of the assumptions you base your theory on is incorrect.

            This site goes through the calculations proving that water is the driver. All heat does is move to a different location and escape. We have seen the evidence of this with the El Nino heat: it’s gone, it was never retained, it left through the TOA never to be seen again.

            https://okulaer.wordpress.com/2017/04/15/the-congo-vs-sahara-sahel-once-more/

          • @ Frank December 15, 2018 at 12:31 pm

            I asked

            Can you provide references to experimental/observational evidence that:
            * A body of water is warmed by infrared in the CO2 emission bands at 3 watts/sq meter?
            * A body of water is warmed (or cools slower) when the air above the water is warmer than the water.

            Note that this means observational evidence, not models or claims based on mathematical conjectures.

            And you provide an Excel sheet based on assumptions – but to several places of decimals?

            This is the real problem with climate ‘science’: -nobody- is willing to do even a basic experiment. I would remind you that it was basic experiments that falsified the phlogiston theory.

            Scott wrote: “Hi Frank the whole problem with the raising the emission height hypothesis, is that it assumes water vapour as a constant just like you did in your spreadsheet. We know that is not true unless you are happy to say deserts have the same water column as the tropics. All the calculations in the world are worth nought, if one of the assumptions you base your theory on is incorrect.”

            If you take column total precipitable water (24? mm) and divide by the average daily precipitation rate (about 2.7? mm/day), you find that the average water molecule remains in the atmosphere for only 9 days – and only 5 days in the tropics. So the amount of water vapor in the atmosphere changes with temperature much faster than temperature can be changed by forcing from water vapor. That is why the effect of changing water vapor is treated as a feedback (something that affects radiative fluxes but varies with temperature) rather than a forcing (something that affects radiative fluxes independent of temperature). Water vapor isn’t ignored.
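
            The residence-time arithmetic is a single division (using the rough, question-marked inputs above):

```python
# Mean atmospheric residence time of water vapor = column water / precipitation rate
tpw = 24.0     # column total precipitable water, mm (rough global average)
precip = 2.7   # average precipitation rate, mm/day (rough global average)

residence_days = tpw / precip
print(f"Mean residence time of water vapor: {residence_days:.0f} days")  # ~9
```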

            To a first approximation, it doesn’t matter what GHG or GHGs are present in the atmosphere. At most altitudes, GHGs absorb more upward radiation (emitted by GHGs lower in the atmosphere or by the surface) than they emit upward. And they emit more downward radiation than they absorb from above. Since the photons that escape to space are emitted at the surface and at all altitudes (with relatively little from the first 2 kilometers, where water vapor is very opaque), there really isn’t a clearly defined characteristic emission level, so I use the concept of reduced radiative cooling to space rather than a rise in the characteristic emission level. (The average photon escaping to space is emitted from higher up when more of any GHG is present.)

            However, I was trying to explain to Ian how the energy absorbed by GHGs high in the troposphere ends up warming the surface and the ocean. That happens because convection slows down when the upper atmosphere gets warmer relative to the surface. Less convection means the surface warms.
            __________________

            Scott also writes: “This site [Okulaer link] goes through the calculations proving that water is the driver. All heat does is move to a different location and escape. We have seen the evidence of this with the El Nino heat: it’s gone, it was never retained, it left through the TOA never to be seen again.” Okulaer concludes:

            “OK. So at this point the situation at the surface looks like this:

            The Congo: Heat IN (Qin(SW)), 177.96 W/m2; radiant heat OUT (Qout(LW)), 51.08 W/m2. Missing: 126.88 W/m2.
            Sahara-Sahel: Heat IN (Qin(SW)), 178.84 W/m2; radiant heat OUT (Qout(LW)), 103.13 W/m2. Missing: 75.71 W/m2.
            Still, we know that the heat IN is balanced by the total heat OUT in both regions. The surface heat loss, after all, is not restricted to radiation. We also have conductive and evaporative losses, Qout(cond) and Qout(evap). In the Congo, these other heat loss mechanisms will have to take care of the 126.88 W/m2 that are left after we have accounted for the radiative loss. In the Sahara-Sahel region, however, they will only have to rid the surface of an additional 75.71 W/m2, about 51 W/m2 less.”

            Okulaer has focused only on radiative heat fluxes. He hasn’t taken into account the heat loss from latent heat (evaporative cooling) and a modest amount of sensible heat. Worst of all, Okulaer hasn’t considered horizontal transport of heat: the tropics export a large amount of heat toward the poles (meridional transport). These all fall into the “missing” category. All of these factors contribute to making the Congo modestly cooler than the Sahel.

            HOWEVER, when you consider the planet as a whole, horizontal transport of heat doesn’t change GMST*. And heat can only escape from the planet as OLR. (In some cases, one might consider more reflection of SWR as heat escaping from the planet.) Convection moves heat around inside our climate system (and most of it is found in the ocean). Only radiation brings heat in and out. When there is a radiative imbalance at the TOA with more heat coming in than going out, somewhere in the climate system that heat must be accumulating! That is conservation of energy. The concept of radiative forcing doesn’t tell us where that heat will accumulate (the Congo, Sahel, Arctic or ocean), only that it must be somewhere. The concept of radiative forcing doesn’t tell us how much the climate system must warm before the radiative imbalance at the TOA is reduced to zero and a new steady state is reached. Climate scientists attempt to answer both of these questions with AOGCMs, but there is no compelling reason to believe the answers they produce. (There are many ways to construct an AOGCM, no easy way to tell which is right, and they all give different answers.) We don’t need AOGCMs to understand why rising GHGs reduce radiative cooling to space and why that must result in warming somewhere.

            *Temperature doesn’t accurately track energy within the climate system. When a parcel of air rises, it expands under reduced pressure and cools. It still contains the same amount of energy, but its temperature is different. There is a concept called “potential temperature” that includes the PdV work lost by the expanding parcel and gained by another parcel of air that descended to take the place of the rising parcel. There is also the latent heat that the parcel would gain if all of its water vapor condensed; adding that gives “moist potential temperature”. Regions of the atmosphere that are well mixed by convection have the same moist potential temperature. Water and air have different heat capacities. When I said above that the heat (energy) left behind by reduced radiative cooling to space must be somewhere, technically one should look for it as moist potential temperature in the atmosphere and as ocean heat content, not simply as higher temperature. Fortunately, the mixed layer and atmosphere are well mixed, making surface temperature a reasonable proxy for the total energy in the climate system.
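
            A minimal sketch of the potential-temperature idea, assuming dry air and the standard formula theta = T * (p0/p)^(R/cp):

```python
# Potential temperature: the temperature a parcel would have if brought
# adiabatically to a reference pressure p0 (dry air assumed).
R_cp = 287.0 / 1004.0   # gas constant / heat capacity for dry air

def potential_temperature(T, p, p0=1000.0):
    """T in K, pressures in hPa."""
    return T * (p0 / p) ** R_cp

# A 500 hPa parcel at 253 K has the same potential temperature as a
# surface parcel near 308 K, even though its actual temperature is far lower:
print(f"{potential_temperature(253.0, 500.0):.0f} K")  # ~308 K
```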

          • Hi Frank,

            I am sorry, but you have made way too many assumptions and averages so your calculation will work. The real world doesn’t work to averages; otherwise we would not see deserts dropping to freezing overnight after being extremely hot during the day. You say the heat has to go somewhere – yes it does, out into space, or we would have had runaway warming long ago in the past. Remember, CO2 has been in the thousands of ppm, and no runaway warming.

            Also, I have yet to see a physical measurement of back radiation hitting the surface at night; if you have any data on this I would love to see it.

          • Ian: The values for the heat capacity of air and water, the mass/weight/pressure of air above every m2, etc. are those you can look up in Wikipedia. Roughly 70% of the Earth is covered by ocean at least 50 m deep. Traditionally, the depth of the mixed layer is the average depth to which seasonal warming penetrates in the ocean. Willis calculated it once at WUWT from ARGO data. 50 m is typical, but the longer the period, the deeper heat diffuses.

            When you copy and paste data from a spreadsheet, sometimes you get more figures than were shown in the cell. Sorry the number of significant digits isn’t appropriate.

          • Hi Scott: I’m sorry you think I am making too many assumptions. There are dramatic differences between locations. Let’s try for simple:

            1) Does the upward flux of radiation (averaging 390 W/m2 at the surface) get weaker by the time it reaches space (240 W/m2)? FWIW, I call this reduction the GHE. If there were no reduction on the way to space, the surface would have to be much colder for incoming and outgoing radiation to be equal. (How much colder depends on assumptions.)

            2) If GHGs aren’t responsible for this decrease, what is? Yes, there are clouds, but cloudy skies account for only a small fraction (25? W/m2) of this difference according to spacecraft.

            3) Do you expect more GHGs to reduce the current rate of radiative cooling to space? If not, why not? I call this the enhanced GHE, others call it radiative forcing.

            4) Doesn’t conservation of energy demand that any reduction in radiative cooling to space result in warming somewhere below the TOA – ASSUMING THAT NOTHING ELSE CHANGES?

            I can’t say where that heat will accumulate without making assumptions. Warming will produce other changes. Eventually radiative balance at the TOA will be restored. How much warming that takes is the fundamental unanswered question in climate science. Answering that question requires making assumptions.

            The physics that produces these reductions in radiative cooling to space is described in the Wikipedia article on the Schwarzschild equation. That equation has been derived from quantum mechanics in Reference 10, which was written by a global warming skeptic. If you don’t accept QM, say goodbye to everything we understand about chemistry, starting with the periodic table. So I’m assuming QM is correct to get quantitative answers and understanding. However, steps 1-4 above don’t depend on QM or on physics you may not understand.

            https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer
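
            As a sketch of the “how much colder” question in point 1, assuming blackbody emission (a Stefan-Boltzmann assumption, not part of the questions above):

```python
# Temperature of a blackbody emitting a given flux: T = (F / sigma)^(1/4)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2*K^4)

def t_from_flux(flux_w_m2):
    return (flux_w_m2 / SIGMA) ** 0.25

print(f"{t_from_flux(390):.0f} K")  # ~288 K, matching the average surface
print(f"{t_from_flux(240):.0f} K")  # ~255 K, if only 240 W/m2 left the surface
```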

          • Hi Scott: You can see validation of radiation transfer calculations in section 3 of the paper linked below. This one happens to use a detector in a plane above the tropopause looking down at OLR emitted by the ground and modified by absorption and emissions from GHGs in the atmosphere.

            https://pdfs.semanticscholar.org/b2a5/c8dd360d50cb900d39b28a5a68639df02edf.pdf

            Here is an older reference (1988) on validation of both OLR and DLR (Figure 2). The authors complain bitterly about 5% discrepancies at some altitudes, and that part of the problem is the inability to measure (observe) fluxes accurately enough – and to characterize the atmosphere through which the radiation is passing accurately enough – to know whether the discrepancies are in the observations or the calculations. Some refinements have been made over the past 30 years. Recent published research goes beyond the simple validation experiments you are looking for. (Some skeptics claim that the equipment used in some validation studies is fundamentally flawed.)

            https://journals.ametsoc.org/doi/pdf/10.1175/1520-0477-69.1.40

            There are probably dozens to hundreds of other papers on this subject; I searched Google Scholar for “radiative transfer calculations validation”. When you read, remember that accurately observing radiation (and the temperature and composition of the atmosphere input into the calculations) is challenging. Even today, the CERES spacecraft is off by about 4 W/m2 in its estimate of the planet’s radiative imbalance. Don’t expect perfection.

          • Hi Frank,

            I do commend your efforts, However

            Inputs plus assumptions plus calculations do not match current observations (i.e. no hot spot) and do not work with historical CO2 levels thousands of ppm higher. Therefore, as it stands, the entire concept is flawed.

            Which means that one of, or a combination of, the inputs, assumptions or equations is wrong or missing something very important.

            If you feel strongly enough that the Okulaer link I gave is so wrong (which by the way matches far more closely to observations than anything else proposed) feel free to post your thoughts there. He does respond to any comments posted. I will watch with interest.

            Science starts with a hypothesis, which needs to be tested against observations to be proved valid or not. If it is not, you are allowed to postulate why (like DWLWIR at the surface); however, this too needs to stand up to measurement.

            Otherwise I propose starting again with observations then working backwards instead of starting with CO2 and trying to massage the data to fit.

          • AC Osborne wrote: I am afraid Science is proving that bit about the choke point wrong.
            The more CO2 the faster the outer atmosphere cools, I know you have quoted the theory, so how does that work if it is a choke point?

            I was frustrated by this problem for a long time. Double CO2 and you double both absorption and emission. How do we know that the net result is a slowing of cooling to space? In truth, the slowdown is a small NET difference between two large changes.

            The atmosphere generally gets colder with altitude. Absorption by a layer of atmosphere is proportional to the intensity of the upward radiation, which is emitted from below where it is warmer. Emission is proportional to the local temperature. This leads to an upward LWR flux at the surface of 390 W/m2 being reduced to 240 W/m2 at the TOA – and to 236 W/m2 after a hypothetical instantaneous doubling of CO2 (according to calculations).

            In the stratosphere, where the temperature increases with altitude, upward LWR increases slightly. When GHGs rise there, the stratosphere cools and OLR increases with altitude slightly. Over Antarctica, there is little change in temperature with altitude on the average and no change in upward flux. And so far, no warming.

          • AC Osborne asked why radiative cooling to space is a choke point.

            Radiation is the only way for heat to leave our planet for space. Well below the tropopause, the atmosphere is too opaque to let enough LWR escape to balance the 240 W/m2 arriving from the sun. However, when the lapse rate is unstable, convection can move as much heat upward as needed. The surface is hot enough that convection carries upward the roughly 100 W/m2 from the surface that can’t escape as radiation; that amount gets smaller as the atmosphere gets higher and more transparent to LWR. Convection, though, requires an unstable lapse rate, which means that the temperature rise from the altitude where convection is no longer needed (the tropopause) down to the surface is under the control of the lapse rate. In the Earth’s temperate zones, the tropopause is at 11 kilometers and 216 degK, and the surface is 11*6.5 (K/km) = about 72 K warmer, or 288 K. If rising GHGs make the atmosphere more opaque, the tropopause moves higher, to where the atmosphere is less opaque, so the distance to the surface is longer and the temperature increase on the way down to the surface gets bigger.
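The temperate-zone arithmetic above can be written out explicitly. This is a minimal sketch using only the figures quoted in the comment (11 km, 216 degK, 6.5 K/km); the 0.5 km tropopause rise in the last line is an invented illustration of the mechanism, not a prediction from any source:

```python
# Surface temperature implied by a fixed lapse rate below the tropopause:
# T_surface = T_tropopause + lapse_rate * tropopause_height.

def surface_temp(t_tropopause_K, tropopause_km, lapse_K_per_km):
    """Temperature at the surface, warming downward from the tropopause."""
    return t_tropopause_K + tropopause_km * lapse_K_per_km

t_surf = surface_temp(216.0, 11.0, 6.5)
print(t_surf)  # 287.5, i.e. roughly the observed 288 K

# If a more opaque atmosphere pushed the tropopause 0.5 km higher at the
# same lapse rate and tropopause temperature, the implied surface
# temperature would rise:
print(surface_temp(216.0, 11.5, 6.5) - t_surf)  # 3.25 K warmer
```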

            Many people argue about why Venus is so hot, so the following may not help. On Venus (with roughly 90 atm of pressure and an atmosphere that is about 96% CO2), the atmosphere doesn’t become transparent enough until about 50 km above the surface, and the lapse rate runs about 10 K/km. Therefore the surface is about 500 degK warmer than the tropopause.

        • Ian asked: Can you provide references to experimental/observational evidence that:
          * A body of water is warmed by infrared in the CO2 emission bands at 3 watts/sq meter?
          * A body of water is warmed (or cools slower) when the air above the water is warmer than the water.

          What idiocy! We live on a planet where the ground emits an average of 390 W/m2 (blackbody radiation for 288 K) and the sky emits about 330 W/m2 (because it is cooler, and its emissivity is not near unity because it doesn’t emit in the atmospheric window). Everywhere we look, 24 hours a day, we are blinded by LWR with about as much power as SWR has during the daytime. Air temperature rises and falls 10 degC between day and night. Heat is also carried away by conduction and convection. And Ian wants me to demonstrate that 3 W/m2 (about 1%) of additional radiation makes things warmer out in the real world where all of this confusion is occurring.

          It is common knowledge that humid (and cloudy) nights are warmer than less humid (and clear) nights. The dramatic effect of humidity is apparent in deserts, which cool off at night, by radiative cooling to space, much more than non-deserts with similar daytime highs.

          The heat capacity of water is much higher than the heat capacity of air, so detecting a change in the nighttime cooling of water in the desert versus the non-desert would be much more challenging. Having enough insulation to prevent heat in the water from escaping by conduction into the air, which cools faster, would be nearly impossible. Roy Spencer has reported on some experiments of this type.

          I calculated for you that an imbalance of 1 W/m2 (of any non-reflected wavelength) is enough to warm 50 m of ocean at an initial rate of about 0.2 K/YEAR. That would be 10 K/yr of warming in 1 m, and 100 K/yr of warming in 10 cm, which is about 0.15 K per 12-hour night, or 0.5 K/night for your 3 W/m2. And I would need enough insulation to prevent any heat from leaving the water. Doing such experiments out in the real world, where conditions can’t be controlled, is challenging; they must be done under carefully controlled conditions in the laboratory. It makes more sense to quantify radiation with a sensitive spectrophotometer (some can detect single photons) than by waiting a day with well-insulated liquid water and a thermometer, the way people did experiments in the 18th century. Even John Tyndall detected absorption of infrared by measuring electric current, not with a thermometer. If the subject weren’t global warming, you would be happy to accept the results of careful laboratory experiments.
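The back-of-envelope rates in this comment are easy to check. With standard values for water, and a 1 W/m2 imbalance fully absorbed by the layer with no heat escaping, the 50 m figure comes out near 0.15 K/yr (the comment rounds to 0.2), and the shallower layers scale in proportion. A sketch, for illustration only:

```python
# Warming rate of a water column absorbing a steady flux with no losses.
SECONDS_PER_YEAR = 3.156e7
C_WATER = 4186.0   # specific heat of water, J/(kg K)
RHO = 1000.0       # density of water, kg/m3

def warming_rate_K_per_year(flux_W_m2, depth_m):
    joules_per_year = flux_W_m2 * SECONDS_PER_YEAR    # J per m2 per year
    heat_capacity = depth_m * RHO * C_WATER           # J per K per m2
    return joules_per_year / heat_capacity

print(warming_rate_K_per_year(1.0, 50.0))   # ~0.15 K/yr for 50 m
print(warming_rate_K_per_year(1.0, 1.0))    # ~7.5 K/yr for 1 m
print(warming_rate_K_per_year(1.0, 0.10))   # ~75 K/yr for 10 cm
```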

      • “There was an LIA. Its getting warmer.”

        It has warmed and cooled and warmed and cooled and warmed again, within a narrow range, since the Little Ice Age, and currently it is cooling, not warming. We are currently about 0.6C cooler than Feb. 2016. So are we warming or cooling? I say cooling.

        Claiming “it’s getting warmer” does not describe reality; it’s a dodge. It ignores everything that happened between the Little Ice Age and the 1970’s. And you are also ignoring everything that has happened since Feb. 2016.

        What is ignoring the facts a sign of?

        The truth is it is not any warmer today than in the 1930’s or several other periods in the past. There is no unprecedented heating in the atmosphere. The warming can all be attributed to Mother Nature. It warmed just as much in the past, before CO2 was an issue, as it has warmed today, so obviously there was already enough energy in the Earth’s atmosphere to warm to those levels, no CO2 required. And no CO2 is required for the current warming.

        When the temperatures exceed the high point of Feb. 2016, come back and see me. Otherwise, quit claiming “There was an LIA. It’s getting warmer.”

        Anyone want to guess when temperatures will get back to the high point of Feb. 2016? It took a long time (18 years) for the high point of 1998 to be reached again.

        Here’s the UAH satellite chart for reference:

        https://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_November_2018_v6.jpg

      • A) Why is it warming?
        B) Will it continue to warm?
        C) How much?
        D) And what, if anything, should we do about it?

        That’s an interesting list and one the IPCC and the AGW industry seem to have abandoned as already answered:
        A) CO2 and methane.
        B) Yes!!!
        C) Lots!!!
        D) Assign governments and the UN the right and power to run the world economy and transfer vast sums to third world dictators and governments and the UN. Right now! Because we just passed our thirtieth tipping point!

        It’s time someone acknowledged that the incredible nature and politicization of climate science in large part drive the less-than-credible and politicized responses to it.
        Were the IPCC and the AGW industry responsible, circumspect and not politicized, the response to them would be equally muted.
        Instead they pump out relentless GIGO, along with occasional sensible science, and refuse to self-correct.

      • Steve:
        **Not all skeptics waste their time challenging a solid record of warming. The smarter ones focus their energy and “credibility capital” on these issues.

        A) Why is it warming?
        B) Will it continue to warm?
        C) How much?
        D) And what, if anything, should we do about it?**

        It is not warming that skeptics are challenging – it is the cause. And nobody has conclusively PROVED that CO2 is the main cause.
        All of A, B, C and D are given questionable responses by the CAGW crowd.

        • “It is not warming that skeptics are challenging – it is the cause. And nobody has conclusively PROVED that CO2 is the main cause.
          All of A, B, C and D are given questionable responses by the CAGW crowd.”

          That is strange. If you put up a GISS chart showing warming, people will call it fraud.
          That is challenging the evidence of warming
          they will say it’s all UHI
          That is challenging the evidence of warming
          They will say global temperature makes no sense
          That is challenging the evidence of warming
          They will say you can’t interpolate
          That is challenging the evidence of warming
          They will say that using anomalies is wrong
          That is challenging the evidence of warming
          They will say your stated precision is wrong
          That is challenging the evidence of warming
          They will reject the adjustments
          That is challenging the evidence of warming

          Which is odd because they also say they believe in an LIA and believe the climate is always changing…

          yet

          They challenge the evidence of warming that you tell me they actually believe in.

    • Ian W: You are partially right. “Surface” temperature is rising due to a radiative (energy) imbalance at the TOA due to rising GHGs. However, not all of that energy is going into raising surface temperature. Most of it is going into warming the ocean. Some of it is going into warming the bulk of the troposphere. Some of it is going into increasing the absolute humidity of the atmosphere. Some of it is going into melting sea ice and ice caps.

      However, we have a fairly well-mixed climate system that distributes energy throughout these reservoirs in a relatively consistent manner. Alongside large seasonal local changes in temperature, we see large seasonal local changes in absolute humidity that keep relative humidity fairly stable. So while it is theoretically possible for temperature to go up because absolute humidity has gone down, I doubt this happens in practice. By far the largest fraction of the energy from the TOA radiative imbalance is accumulating in the ocean. Thanks to ARGO, we now know beyond any doubt that the total energy in our climate system is increasing.

      If you look into the physics of evaporation, you’ll find that the rate of evaporation depends on wind speed and undersaturation of the air. Temperature is unimportant, except as it contributes to undersaturation (1-RH). So when RH drops, evaporation increases; and when it rises, evaporation decreases – independent of temperature.

      • Temperature is unimportant, except as it contributes to undersaturation (1-RH).

        Temperature is one of the inputs into the calculation of relative humidity. So varying temperature will vary RH with all else constant.

        Hence the reason hand and hair dryers are heated.

        • Ian: Stick your wet hands outside the window of a moving car on a winter day. Do you really need warm air to speed up evaporation? No. However, be careful to avoid frostbite. Both the wind and the evaporation will quickly chill your hand.

          The following is safer. Bring a pan half full of water to nearly boiling and let the temperature stabilize. It isn’t evaporating very fast, because the air in the top half of the pan is nearly saturated with water vapor. Use your vacuum cleaner to remove the nearly saturated air from above the water and replace it with much drier air. What happens to the rate of evaporation?

      • You quote GHG theory as if it is a fact, when in the atmosphere it is not.
        It is H2O in all its forms that controls the temperatures, not CO2 at 400 ppm.

        • AC: Quantum mechanics is a “fact” – a well-tested theory that has survived repeated challenges (including from Einstein) for a century.

          The Schwarzschild equation for radiative transfer was derived from quantum mechanics and has been known for a century. Its predictions have been validated by observations in the atmosphere. It is a “fact” that it predicts about a 3.7 W/m2 reduction in radiative cooling to space from a doubling of CO2 (if nothing else changes).

          It also predicts a 1.0 W/m2 reduction in radiative cooling to space through a US Standard Atmosphere if water vapor increases by 7% everywhere. Saturation (equilibrium) vapor pressure rises about 7% per degK of warming. So climate scientists say that the warming from rising CO2 will be amplified by an expected increase in water vapor. It is not a fact that water vapor will rise 7%/K everywhere in the atmosphere, because water vapor is not in equilibrium with liquid water everywhere in the atmosphere (equilibrium = 100% relative humidity).

          Notice that we are talking about a likely DOUBLING of CO2 in the atmosphere and only a 7%/K increase in water vapor.

          There is a lot of water vapor near the surface, but it drops to 3 ppm at the tropopause. It is a fact that CO2 is the dominant GHG absorbing upward flux and emitting photons that escape to space from the upper troposphere.

          It is a fact that both water vapor and CO2 are important.

          Do these facts constitute a “GHG theory” or an “AGW or CAGW theory”? That depends on what you think these terms mean. There is a lot of sloppy thinking about what has or has not been proven or refuted by observations. The interaction between radiation and GHGs is well understood, so we can say radiative forcing is a “fact”. So far, no facts mentioned above tell us how to convert a radiative forcing into a specific amount of warming – a prediction that can be tested. No facts mentioned above say that all observed warming is due to rising GHGs.

      • The notion that “temperature is unimportant” is contradicted by the well-established Clausius-Clapeyron relation (q.v.). In fact, evaporation goes up exponentially with water temperature.

        BTW, there’s no empirical evidence that downwelling LWIR heats anything more than the topmost surface skin of the ocean. The claim that evaporative cooling of that layer creates significant downward, density-driven mixing into much lower layers is pure fantasy. One need only look at the absence of any significant diurnal cycle of subsurface temperatures observed in tropical doldrums to recognize the patent lack of acquaintance with wide-spread oceanographic conditions. Mixing is largely wind-driven and involves warm near-surface layers below the surface skin.

        • 1sky1: You are confusing the concept of equilibrium between liquid water and water vapor (determined by the C-C equation) with the RATE at which liquid water evaporates, whether or not evaporation ever produces an equilibrium. The C-C equation does define the maximum amount of water vapor air can hold at a given temperature (100% relative humidity). Much of the atmosphere is not saturated with water vapor. The air 2 meters above the surface of the ocean typically has about 80% RH, because turbulent mixing brings down air that is usually drier (and colder). Only a thin layer of air adhering to the surface of the ocean is saturated with water vapor. Vertical diffusion of water vapor molecules over distances of a meter or so is extremely slow. The rate-limiting step in evaporation is the turbulent mixing needed to move water vapor vertically in the boundary layer, and turbulent mixing is proportional to horizontal wind speed.

          Water molecules don’t stop leaving the liquid phase for the gas phase simply because the air above the water is saturated. Net evaporation stops because water vapor molecules in saturated air are returning to liquid water as fast as other molecules leave the liquid for the gas phase. At 50% RH, half as many water molecules are returning to liquid water as are leaving it. During turbulent mixing, therefore, the undersaturation (1-RH) of the air brought very close to the surface of the water is critical to the rate of evaporation. So undersaturation is the second key factor in the rate of evaporation, and temperature is important only in calculating undersaturation (as an amount, not as a percentage).

          Of course, evaporation cools the surface of the ocean, but colder water is denser and sinks, bringing warmer water to the surface.

          Rising air masses often form clouds and therefore are saturated with water vapor. When there are clouds at the surface (fog), the air is 100% saturated. But most of the air in the atmosphere is far from being in equilibrium with liquid water.

          • I never claimed anything about the highly variable RATE, per se, of evaporation. Your display of Wiki-expertise, along with the rubric “colder water is denser and sinks,” merely serves as rhetorical cover for the egregiously wrong notion that the evaporating surface skin mixes appreciable amounts of cooler water into subsurface layers in situ, “bringing warmer water to the surface.” In reality, the mass of the surface skin, which is held together under the Knudsen layer by surface tension, is relatively minuscule, and no appreciable mixing can be effected.

          • Turbulent mixing is also needed to get the colder skin layer of the ocean to sink, and the ocean is far more viscous than the atmosphere. However, conduction can bring heat very short distances into the colder skin layer. This cools a mass of denser water adhering to the surface until it detaches and sinks, bringing warmer water to the surface. Stronger winds can also overcome viscosity.

            However, these details are unimportant. The big picture is that something on the order of 100 W/m2 of SWR is deposited within a few meters of the surface. That heat must go somewhere, or the water will eventually boil. It can’t go down: conduction is too slow, and the water below is colder and denser (except in polar regions), which eliminates buoyancy-driven convection to lower depths. You are smart enough to realize that, one way or another, the heat from SWR must reach the surface, then the atmosphere, and then space.

            These old wives’ tales about “DLR not heating the ocean” are pure misdirection. SWR heats the bulk of the ocean. The energy from DLR is deposited in a thin layer and keeps that layer from freezing while heat from SWR rises to the surface.

            ARGO tells us that an average of 0.7 W/m2 of heat has been entering the ocean from above for the last decade. This is much less than 1% of the energy in DLR and SWR. The rest goes up.

          • Lacking clear and realistic conception of in situ interactions near the air-sea interface, you switch the subject from density-driven sinking of the LWIR-absorbing surface skin to turbulent mixing and buoyant rise of SW-heated subsurface layers. That doesn’t make for rational discussion.

          • 1sky1wrote: “Lacking clear and realistic conception of in situ interactions near the air-sea interface, you switch the subject from density-driven sinking of the LWIR-absorbing surface skin to turbulent mixing and buoyant rise of SW-heated subsurface layers. That doesn’t make for rational discussion.”

            Observations show that the skin temperature of the ocean (as measured by microwaves emitted from the same layer absorbing DLR) is usually cooler than the bulk ocean below, measured with thermometers. That implies that heat flow is upward. That flow occurs by conduction (which is only effective over short distances), buoyancy-driven convection, and turbulent mixing (when the wind is strong). I’m perfectly willing to admit it is a complicated situation.

            However, you totally ignored the reasons I offered above providing a SIMPLE rejection of the idea that “DLR doesn’t warm the ocean”. The heat from SWR deposited below the surface must be going upward. The skin of the ocean would freeze at night without roughly 333 W/m2 of energy from DLR, because OLR and evaporation are removing roughly 390 and 80 W/m2.

            A rational discussion would deal with these two “elephants in the room” before challenging details of the complicated process of heat flow at or near the surface. (In my experience, no one acknowledges these elephants.)

          • The skin of the ocean would freeze at night without roughly 333 W/m2 of energy from DLR, because OLR and evaporation are removing roughly 390 and 80 W/m2.

            Such ubiquitous surface freezing would take place only in the total absence of an atmosphere. There’s precious little evaporation at night and the net radiative heat transfer into the atmosphere is in the double, not the triple, digits. (There’s a failure to recognize that DLWIR is not an independent source of thermal energy, but is physically inseparable from OLR). Conduction alone is sufficient to prevent such freezing throughout most of the globe.

            In any event, the issue is not the hypothetical freezing of the surface skin but your unconvincing rejection of the idea that “DLR doesn’t warm the ocean,” insofar as the subsurface layers are clearly implied. Being far removed from the main topic of this thread, that’s not a discussion that deserves further time at yuletide.

        • 1sky1 writes: “There’s precious little evaporation at night and the net radiative heat transfer into the atmosphere is in the double, not the triple, digits. (There’s a failure to recognize that DLWIR is not an independent source of thermal energy, but is physically inseparable from OLR). Conduction alone is sufficient to prevent such freezing throughout most of the globe.”

          The rate of evaporation does not depend on sunlight, or surface temperature for that matter. Look at the equation for latent heat transfer at this website for ENGINEERS, not climate scientists. The rate of latent heat transfer (and therefore evaporation) depends on wind speed and undersaturation, not surface temperature. Even when the surface of the ocean freezes, the rate of sublimation depends on wind speed and undersaturation; it just slows slightly, because the heat of sublimation is modestly bigger than the heat of evaporation (by the heat of fusion). The rate of evaporation doesn’t give a *&*#$_$! about whether it is day or night.

          If you’ve read Willis on the tropical oceans, you might remember that SST drops about 1 degK during its daily cycle. As soon as the surface cools at night, it becomes denser than the water immediately below, which was warmed by SWR during the day. The heat from SWR can’t keep building up indefinitely: it can’t be conducted very far, and it can’t be convected into denser water below. However, it can rise to the surface when the thin skin that loses heat by evaporation and radiation gets too cold. The rate of evaporation during day and night is equal to a first approximation.

          https://www.engineeringtoolbox.com/cooling-heating-equations-d_747.html

          You are correct that OLR and DLR are not independent. The net radiative flux must be from hot to cold. However, the surface of the ocean emits thermal IR as if it were a gray body with an emissivity of nearly 1; the atmosphere does not. The atmosphere doesn’t emit (or absorb) in the atmospheric window. The DLR photons that do arrive at the surface are emitted from a variety of altitudes, and therefore from a variety of temperatures, and the rate of emission depends on temperature. DLR varies with the nature of the atmosphere above the ocean; OLR depends only on SST.

          If there is no wind, there is no turbulence to move sensible and latent heat. The difference between typical OLR and DLR represents a net loss of about 50 W/m2 for the ocean, all of which arises from the top 10 um of the ocean (before conduction). So how fast does the top 1 cm of the ocean cool (by net radiation) when there is no wind?

          The heat capacity of water is 4.2 kJ/kg/K. One m2 of water 1 cm deep is 10,000 cm^3, or 10 kg of water, so the heat capacity of that surface layer is about 42 kJ/K. A net loss of 50 W/m2 is 50 (J/m2)/s. Dividing the former by the latter gives about 840 s: an isolated 1 cm layer of water radiatively cools 1 degK, in the presence of typical OLR and DLR, in roughly 14 minutes. A 1 mm layer cools 1 degK in about 84 seconds, and the 10 um layer that is the source of all LWR would cool about 1 K/s if heat were not being conducted into it.

          The thermal conductivity of water is 0.6 W/m/K, which works out to 60 W/m2 per degK of temperature difference across 1 cm. So if there were a 1 degK difference between the skin layer of the ocean and the water 1 cm below, a flux of 60 W/m2 could be conducted across that distance and continuously support 50 W/m2 of net radiative cooling. However, if that heat needed to come from 2 cm below, it couldn’t be conducted fast enough to keep up with the demand from radiative cooling. So, in about 14 minutes with normal OLR and DLR and no wind, the water has radiated away enough energy to cool an isolated 1 cm layer by an average of 1 K, and conduction can no longer keep up. Only when the surface has become 2 K colder can 50 W/m2 be conducted over a distance of 2 cm. In the real world, the colder, denser water detaches from the surface, and convection begins in the absence of turbulence.

          The bottom line is that conduction cannot provide enough heat from below at night until it is driven by a very large temperature gradient. Assuming, of course, that the above calculation is correct.
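The arithmetic in that calculation can be checked in a few lines; the exact division gives 840 s (about 14 minutes), which the comment rounds to roughly 1000 s. The constants are standard textbook values for water, and the 1 cm layer and 50 W/m2 net loss are the comment's own assumptions:

```python
# Time for an isolated surface layer of water to radiate away 1 K, and the
# conductive flux available across 1 cm per degK of temperature difference.
C_WATER = 4200.0   # specific heat of water, J/(kg K)
K_WATER = 0.6      # thermal conductivity of water, W/(m K)
NET_LOSS = 50.0    # net radiative loss, W/m2 (OLR minus DLR)

mass_1cm = 0.01 * 1000.0          # kg of water in a 1 m2 x 1 cm layer
heat_cap = mass_1cm * C_WATER     # 42,000 J/K
t_cool = heat_cap / NET_LOSS      # seconds for that layer to lose 1 K
print(t_cool, t_cool / 60.0)      # 840 s, i.e. 14 minutes

flux_1K_over_1cm = K_WATER * 1.0 / 0.01   # W/m2 conducted per degK across 1 cm
print(flux_1K_over_1cm)                   # ~60 W/m2, enough to resupply 50 W/m2
```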

          • The rate of evaporation does not depend on sunlight or surface temperature for that matter.

            Instead of doing calculations according to the tacit premises of "engineering toolboxes" or relying upon the anecdotal "authority" of Willis, try getting acquainted with what scientific instruments deployed by oceanographers reveal about air-sea interactions in situ. There's an extensive literature on the subject that contradicts your mental picture. Have a merry Christmas.

          • 1sky1 wrote: “Instead of doing calculations according to the tacit premises of “engineering toolboxes” or relying upon the anecdotal “authority” of Willis, try getting acquainted with what scientific instruments deployed by oceanographers reveal about air-sea interactions in situ. There’s an extensive literature on the subject that contradicts your mental picture. Have a merry Christmas.”

            1sky1 is offering the usual BS: somehow DLR doesn’t heat the ocean. It doesn’t matter what anyone says; the references aren’t adequate, the physics is wrong, there is more to learn (but no references are offered), blah, blah, blah. I could have cited a blog article from Isaac Held about what factors control the rate of evaporation, and I would have been told that isn’t what engineers say. The same goes for transfer within the ocean: every night the surface cools, and convection brings the heat deposited by SWR to the surface (if turbulence hasn’t already distributed it). There is a nice series of posts at SOD on this subject, with a dozen references, at the link below. The calculated deductions in my previous comment look brilliant (in my biased opinion, of course) in light of the figures from the publications there.

            https://scienceofdoom.com/2011/01/18/the-cool-skin-of-the-ocean/

            One can be skeptical about the IPCC consensus without spreading misinformation about the basics of heat and radiation transfer. However, confirmation bias makes it difficult for humans to learn anything that conflicts with their strongly held beliefs unless they have an open mind and listen to compelling arguments from both sides. For Christmas, I’m hoping for a world where people escape from their polarized corners of the Internet, traditional media, social media, FAKE news and Russian propaganda. I’m looking for just one unambiguous victory over confirmation bias – even my own.

        • One need only google “evaporation rate temperature dependence”

          to give lumps of coal to those who “learned” their physics in the blogosphere!

          • 1sky1: Thanks to you, I have gotten my Christmas wish, and I must admit that I wasn’t completely right about the evaporation rate. It is proportional to TWO things: wind and ABSOLUTE undersaturation. Absolute undersaturation is the saturation vapor pressure * (1 - relative humidity). So one can say the rate is proportional to THREE things: wind, relative undersaturation, and saturation vapor pressure.

            I left the impression that the rates of evaporation of a 0 degC and a 30 degC ocean would be the same at 80% relative humidity and the same wind speed. This is absurd, and I should have recognized it immediately. If the 7%/K approximation were valid over this range, the evaporation rate at 0 degC would be only about 10% of that at 30 degC with the same wind speed. 10X is a non-trivial mistake.

            While my red face fits with the Christmas decorations, our goal (hopefully) is to discover what is true, not what we want to be true. Wind and undersaturation are critical, but not exactly as I described.
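The size of that mistake is easy to check numerically. The sketch below compares the crude 7%-per-degK compounding over 30 degrees with the Magnus approximation for saturation vapor pressure (a standard empirical formula, not something taken from this thread); both put the 0 degC rate at roughly 13-14% of the 30 degC rate:

```python
import math

# Crude compounding: saturation vapor pressure falls ~7% per degK of cooling.
ratio_7pct = 1.07 ** -30
print(ratio_7pct)                  # ~0.13

# Magnus approximation for saturation vapor pressure over water, in hPa.
def e_sat_magnus(t_C):
    return 6.112 * math.exp(17.62 * t_C / (243.12 + t_C))

ratio_magnus = e_sat_magnus(0.0) / e_sat_magnus(30.0)
print(ratio_magnus)                # ~0.14
```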

  16. David Chappell, let’s take your thinking a step further. It was pointed out at WUWT years ago, answering the question, “Are temperatures changing?” depends on analyzing the changes, not the temperatures themselves, nor the anomalies. After all, temperatures cannot be meaningfully averaged, but the differentials can. So, you can take the historical record at a weather station and calculate the trend to know how temperatures have risen or fallen during that history. And as Bob says, Tmaxs and Tmins are preferable to Tavg. Of course, there will be missing data at that station, which are presumed to lie on the trendline of the existing data. Of course, there will be warming and cooling cycles; the changepoints can be identified and periodic trends calculated within the overall station history.

    Suppose you want to know if temperatures are changing in your region, your state or province, your nation or continent? In this approach of Temperature Trend Analysis, you calculate the trends at all the stations having reasonable historical coverage within that geography. Then you average the trends of those stations. And, for example, you find that the Southeast US has cooled over the last 60 years.

    Somehow, climate scientists led themselves into a blind alley, seeking a mythical global mean temperature rather than respecting the fact that climates are plural, and even micro in their uniqueness. Looking at temperature trends is far better, except that it tends not to give alarming results.
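The Temperature Trend Analysis procedure described above (fit a trend at each station, then average the trends) can be sketched in a few lines. The two station series below are invented for illustration; nothing here is real data:

```python
# Per-station least-squares trend, then a simple average over stations.

def trend_per_decade(years, temps):
    """Ordinary least-squares slope of temps vs years, in degrees per decade."""
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return 10.0 * num / den

years = list(range(1960, 2020))
# Hypothetical stations: one warming 0.10 C/decade, one cooling 0.05 C/decade.
station_a = [15.0 + 0.010 * (y - 1960) for y in years]
station_b = [22.0 - 0.005 * (y - 1960) for y in years]

regional = (trend_per_decade(years, station_a) +
            trend_per_decade(years, station_b)) / 2.0
print(regional)   # ~0.025 C/decade: the average of +0.10 and -0.05
```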

  17. From a layman’s perspective isn’t this discussion really about whether one crude approximation is better than another crude approximation? (I understand they are the best we have.)

    If I wanted an accurate “average” for an individual station, wouldn’t I plot the CONTINUOUS temperature over a 24-hour period and then divide the area under the curve by the 24 hours? (Forgive me; it’s been a long time since I studied calculus.) Plots with the same min/max but different curve shapes would give different averages.

    And if we want to know whether the earth is heating up, what is the significance of “averaging” three temperatures: one over an ocean, one over a desert and one over a rain forest? If we are measuring heat, isn’t that as silly as measuring the temperature at the tip of a matchstick and the temperature of my bathwater and trying to get meaning from the average of the two? (That, of course, is an extreme example.)
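The commenter's instinct about the continuous curve is sound, and a toy example shows it: the true daily mean is the area under the temperature curve divided by the 24-hour period, and two invented days with identical min/max but different shapes give different means, while (Tmax+Tmin)/2 cannot tell them apart. The curves are made up for illustration:

```python
import math

def daily_mean(curve, n=1440):
    """Average of a temperature function sampled every minute over 24 hours."""
    return sum(curve(24.0 * i / n) for i in range(n)) / n

# Two made-up days, both swinging between 10 and 20 degC.
def smooth_day(h):   # gentle sinusoid
    return 15.0 + 5.0 * math.sin(2.0 * math.pi * (h - 9.0) / 24.0)

def spiky_day(h):    # flat cool night, sharp midday spike
    return 10.0 + 10.0 * max(0.0, math.sin(math.pi * (h - 6.0) / 12.0))

print((10.0 + 20.0) / 2.0)     # 15.0 -- the min/max "average", same for both
print(daily_mean(smooth_day))  # ~15.0
print(daily_mean(spiky_day))   # ~13.2 -- same min/max, different true mean
```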

    • Yes you would, but then you would only have data for the period since totally automatic stations have been in use.
      All of the previous data had only two actual values recorded per day: the Min and the Max.
      So to compare the old thermometers with modern electronic stations, they use only max & min. That, however, brings about a problem, especially in the max reading, as electronic devices react much more quickly to temperature changes than the old mercury thermometers did.
      This has been shown to be true by various people in Australia looking at the BOM dataset.

      I agree about it not measuring heat.

      • Yes you would, but then you would only have data for the period since totally automatic Stations have been in use.

        Agreed. Thus the consequence is that we only have “data” that is, to a certain extent, inaccurate and theoretically incorrect, with which to determine whether the planet, on a Kelvin scale, has made a very small relative change.
        The point is, how do we know those fundamental inaccuracies are not more meaningful than these Talmudic-like discussions concerning min/max methods of temperature recording?

        Of course I’m stretching the point, but sometimes we seem to get lost in the trivia or the abstract.

    • “From a layman’s perspective isn’t this discussion really about whether one crude approximation is better than another crude approximation? (I understand they are the best we have.)”

      One crude approximation shows the 1930’s to be as warm or warmer than current temperatures. This would show that today’s warmth also happened in the recent past and is not unprecedented and does not require any CO2, just as the warmth of the 1930’s did not require CO2 to reach its temperature plateau.

      I’ll go with the crude approximation that shows the 1930’s as equal in warmth to today. Then we can stop wasting our time building windmills and ruining economies, and we can get on with living our lives without the scaremongering about CO2 causing unprecedented heating of the Earth’s atmosphere. CO2 is nowhere to be seen and is unnecessary to account for current warming.

      Nothing to see here! It’s not any warmer now than in the past. “Hotter and Hotter” is BS (Bad Science).
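The continuous-average question raised earlier in this thread (integrate the 24-hour temperature curve, then divide by the period) can be sketched with invented hourly profiles. Both toy days below share the same min (10 C) and max (20 C), yet have different true means, which is the commenter's point about curve shapes:

```python
import math

def daily_mean(hourly):
    """Trapezoidal integral of an hourly series divided by the period."""
    area = sum((hourly[i] + hourly[i + 1]) / 2 for i in range(len(hourly) - 1))
    return area / (len(hourly) - 1)

hours = range(25)  # 0..24 inclusive, one sample per hour
# A smooth sinusoidal day: min 10 C at 3 a.m., max 20 C at 3 p.m.
sinusoid = [15 + 5 * math.sin(2 * math.pi * (h - 9) / 24) for h in hours]
# A mostly-cool day with a brief 20 C spike at 3 p.m.
spike = [10.0] * 25
spike[15] = 20.0

# Both days have midrange (min + max) / 2 = 15 C, but different true means
true_mean_sin = daily_mean(sinusoid)   # ~15.0 C
true_mean_spk = daily_mean(spike)      # ~10.4 C
```

The (min + max) / 2 midrange used for historical records would report 15 C for both days, even though the spiky day was far cooler on average.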

  18. This is the one thing that always bugged me about AGW, CAGW, etc.

    Why use an anomaly figure and not show the TMAX and TMIN numbers?

    The anomaly is supposed to be the divergence of the average of the daily temp from the baseline.

    Why has no one (that I know of) done a study that, instead of running the anomaly, just checks TMAX vs. TMAX “base years” and TMIN vs. TMIN “base years”?

    Would it be because we would get an answer that would confuse the heck out of people? Because if we look at the TMAX trends, they seem to be basically flat (going by what was presented here, about 0.6 C per century),
    while the TMIN trend differs from the TMAX trend.

    In an ideal, rural, undeveloped world, the TMAX trend should be the same as the TMIN trend,
    but in a developed world, due to increased impervious areas (heat sinks / urban heat “islands”), the TMIN trend would be affected more than the TMAX, because heat sinks release heat more slowly than areas without them.
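A separate-trends check along those lines might look like the sketch below. The TMAX/TMIN series are invented to mirror the flat-max / rising-min pattern the comment describes, not real station data:

```python
def trend_per_decade(years, temps):
    """Least-squares slope converted to deg C per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(temps) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, temps))
             / sum((x - mx) ** 2 for x in years))
    return slope * 10

years = list(range(1950, 2011, 10))
tmax = [30.1, 30.0, 30.2, 30.1, 30.0, 30.2, 30.1]  # essentially flat
tmin = [15.0, 15.2, 15.4, 15.7, 15.9, 16.2, 16.4]  # rising, e.g. nighttime UHI

tmax_trend = trend_per_decade(years, tmax)  # close to zero
tmin_trend = trend_per_decade(years, tmin)  # clearly positive
```

Under these assumed numbers the two trends diverge exactly as the comment predicts; a combined daily-mean anomaly would blur that distinction.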

  19. Vegematic Science: “slicing and dicing” actual thermometer readings is not science. Adding and subtracting stations farther north and south and then comparing the averages is not science. Interpolating, which is actually extrapolating when there is no thermometer near where you claim to have created new “data,” is not science.

    Once a number is recorded, changing it for any purpose would have gotten you thrown out of my engineering school…

  20. Somewhere between a blazing inferno and a large popsicle, the Earth will ultimately reach its desired temperature, regardless of how we bipeds do things. Do more people live near the Equator, or near the polar regions? Most folks I know tend to like warm better than cold.

  21. Anomalies from an arbitrary baseline are not data. I always cringe when “anomaly” and “data set” are used in the same sentence, as if either word were an appropriate title for first-order data. Anomalies are a statistical calculation with two risks: 1) Anomaly calculations from raw data are a necessary step when calculating significance, because you have to compare to some arbitrary or idealized data set. In classical scientific method, charting raw data then allows descriptive statistics such as mean, median, mode, range and trend. Anomaly calculations can lead to false positives, known as making the elephant’s trunk wiggle, due to too many parameters being imposed on the data. 2) Anomaly calculations are used when uncontrollable variables have contaminated your plots, which puts the researcher at high risk of false positives, commonly known as putting lipstick on a pig.

    With so many uncontrolled variables in temperature sensor data sets, the best we can hope for are descriptive statistics. Bob has demonstrated this in perfect fashion.

    To go beyond that level is data mining for a biased purpose. Which is what I believe CO2 AGW researchers have done when they push their anomaly calculations beyond what that calculation can accurately prove. That said, solar sourced climate change proponents and researchers make the same mistake.

    The caution then with temperature data is don’t mine it. If you do, you can find angels dancing on the head of a pin in significant numbers.

    • If an anomaly is smaller than the measurement error of the data used to calculate it, have you just found an angel?

    • “Anomalies from an arbitrary baseline is not data.”

      so if you set zero at the freezing point of water…..
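For what the baseline dependence under discussion looks like in practice, here is a minimal sketch; the yearly readings and baseline windows are invented:

```python
# Yearly absolute readings (deg C), invented for illustration
readings = {1951: 14.2, 1960: 14.1, 1970: 14.3, 1980: 14.4, 2020: 14.8}

def anomalies(data, base_start, base_end):
    """Subtract the mean of an arbitrary baseline window from each reading."""
    base = [t for y, t in data.items() if base_start <= y <= base_end]
    baseline = sum(base) / len(base)
    return {y: t - baseline for y, t in data.items()}

# The same 2020 reading yields different "anomalies" under different
# baselines, which is the sense in which an anomaly is a derived
# statistic from first-order data, not raw data itself.
a_5180 = anomalies(readings, 1951, 1980)[2020]  # baseline 14.25 -> 0.55
a_5170 = anomalies(readings, 1951, 1970)[2020]  # baseline 14.20 -> 0.60
```

The absolute reading for 2020 never changed; only the chosen reference window did.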

  22. This article is pretty useless to me, since the author does not explain why absolute temperatures and renormalized temperatures should give a different trend. Without an explanation for that, I learn only that the anomalies we are being presented do not correspond to the idea of anomalies that we typically have. Unless I am told what explains the difference, I don’t see how we can conclude that anomaly data “are more wrong” than the averages of absolute data.

  23. Seems like there’s a clear explanation for the difference between the two trends.

    The first graph for China plots the highest annual raw TMAX, so it will have its greatest value only in the summer.

    The second graph for China plots the highest annual TMAX anomaly, so it can have its greatest value during any month of the year, even in winter.

    Considering that global warming has its largest footprint in winter and a much smaller one in summer, it’s no wonder that the raw TMAX shows no trend while the TMAX anomaly shows a trend consistent with winters getting warmer over time. It’s not that anomalies are erroneous per se, as evidenced by the flat trend on the July China TMAX anomaly chart. It’s just that in this case the two methods capture certain aspects of max temperature in different ways.

    That said, I agree with the author’s overall premise over his series of posts that we shouldn’t just focus on anomalies but also look at the raw temperatures because it does give a different perspective.

    • Bob Vislocky, you began your comment with, “Seems like there’s a clear explanation for the difference between the two trends.”

      There is, and I stated it in the post: Obviously, the highest annual TMAX temperatures for China don’t always occur in July.

      I just didn’t bother to publish the graph that confirmed my statement:

      Cheers,
      Bob
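The month-selection effect discussed in this exchange can be shown with a toy example; the climatological values and the single year's readings below are invented for illustration:

```python
# Invented climatological monthly mean TMAX values and one year's readings
baseline = {"Jan": -5.0, "Jul": 30.0}
one_year = {"Jan": -1.0, "Jul": 30.5}   # a year with an unusually mild winter

# The highest raw TMAX comes from summer...
raw_max_month = max(one_year, key=one_year.get)            # "Jul"

# ...but the highest TMAX *anomaly* comes from winter,
# because January ran 4 C above its baseline vs. July's 0.5 C
anom = {m: one_year[m] - baseline[m] for m in one_year}
anom_max_month = max(anom, key=anom.get)                   # "Jan"
```

So the two "highest annual" series can track entirely different months, which is consistent with the flat raw-TMAX trend and the rising anomaly trend in the post.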

  24. The problem with absolute temperatures is that they are harder to manipulate. In 1997, the world’s warmest year ever according to NOAA at that time was 62.5 deg F, still published on their annual summary page. In the 2016 world temperature summary, which was said to be the warmest year ever, it was listed at 58.7 deg F on NOAA’s site when adding the anomaly to the base temp of 57 deg F.
    The 1951-to-1980 world average temp was listed in the New York Times in the 1990s as being 59 deg F, but it suddenly changed to 57 deg F at a later date. How can we do science with such large changes in data? Of course, now we only see anomalies since 1998, for good reason? Let me know if I am missing something.

    • Anthony,

      The reason is that different data sets are used for the different years. If you average temperatures across a set, you will get different results depending on where the bulk of the stations were located in that specific data set.
      Why they published those figures is strange, though. It probably fit the agenda at the time…

      • Utter bull.
        Either the temperature was 62.5 F or it wasn’t. If it wasn’t, why did they say it was, and why did they say that 1995 was only slightly cooler and 1998 was even warmer?
        They have cooled the past to increase the trend, full stop.

    • Anthony, I have been asking this question for years and have never, ever had a satisfactory answer.
      1995, 1997 and 1998 were all far warmer than they say current temperatures are.

  25. To me, the main benefit of anomalies over ‘raw’ data is that anomalies let you see relevant changes over time more easily. ‘Raw’ data doesn’t always do that in an effective way, in my view. By ‘relevant changes’ I mean changes that can actually affect you.

    As a simple analogy, I’ll use human body temperature expressed in degrees Celsius (C). A healthy human body temperature is in the region of 36.5 to 37.5 deg C. Go much more than 0.5 C above or below that and you will start to feel unwell.

    Let’s switch to anomalies now, and say that the average of 36.5 to 37.5 = 37; so call 37 C the new ‘average’, or the new ‘zero’. If your body temperature rises to, say, 37.3C then you are +0.3C warmer than ‘average’. No biggie. Still within the healthy range. It’s just an expression of how ‘anomalous’ your current temperature is compared to a long term ideal, or ‘average’.

    So is it easier to notice these small but possibly important changes in body temperature if we plot them as anomalies or on an ‘absolute’ scale? Let’s plot it out.

    The following charts use *exactly* the same base data. Fig. 1 shows a simulated rise in human body temperature from ‘normal’ to ‘fever’ over a period set in 20-minute segments, on the absolute Celsius scale (food poisoning, say). Fig. 2 shows the same data on an anomaly scale. Which do you think imparts the most relevant information?

    Fig. 1: https://i.postimg.cc/sDkrsy1D/Bob-Christmas-raw.png

    Fig. 2: https://i.postimg.cc/qqW9JXJZ/Bob-Christmas-anom.png

    • Sorry, does not compute; to achieve the same effect, just start the actual temperature graph scale at 36.0 C and use the same-sized scale.
      When the earth’s temperature varies +/- 50 C, a 0.1 C change is laughable.

      • A C Osborn

        Sorry, does not compute; to achieve the same effect, just start the actual temperature graph scale at 36.0 C and use the same-sized scale.

        In other words turn it into a virtual ‘anomaly’ but give it a different name? Bob usually starts his non-anomaly charts at zero C.

        When the earth’s temperature varies +/- 50C 0.1C change is laughable.

        Until recently, Earth’s long-term *global average temperature* hadn’t varied by much more than +/- 0.5 C over the past ~10,000 years. The rise above ‘pre-industrial’ is currently around 1.0 C, not 0.1 C.
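The body-temperature analogy earlier in this thread amounts to a simple re-zeroing of the same numbers, sketched below with an invented 20-minute series:

```python
NORMAL = 37.0  # the chosen 'zero' for the anomaly scale

# Simulated readings every 20 minutes, rising from normal toward fever
readings = [36.9, 37.0, 37.2, 37.6, 38.1, 38.6]

# The anomaly series carries exactly the same information, just offset
# by NORMAL, so small departures stand out without rescaling the axis
anomaly_series = [round(t - NORMAL, 1) for t in readings]
```

Whether this counts as a genuine advantage over simply narrowing the absolute axis, as the reply above argues, is exactly the disagreement in this exchange.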

  26. IMHO, anomalies are misleading in terms of global temperatures.
    IMHO, there is no real ‘average’ (which can be ‘manufactured’ from any data figures one may choose, and is used for convenience).
    Surely, to measure any real warming or temperature increase, we need, as accurately as possible, actual readings.
    For instance, in the last 30 years, has any temperature recorded anywhere exceeded any previously recorded temperature anywhere on earth, e.g. Death Valley, etc., which has some of the highest recordings?
    A couple of years ago or so, I remember reading on here a good article on averages, means, etc., what they mean, and how they can be misleading. Unfortunately I cannot remember the author’s name.

Comments are closed.