EXAMPLES OF HOW AND WHY THE USE OF A “CLIMATE MODEL MEAN” AND THE USE OF ANOMALIES CAN BE MISLEADING

THE POST HAS BEEN UPDATED.  SEE THE UPDATE AT THE END OF THE POST:

Alternate Title: An Average of Climate Models, Which Individually Give Wrong Answers, Cannot, By Averaging Them, Give the Right Answer, So A Model Mean Can Be Very Misleading. And The Use of Anomalies in Model-Data Comparisons, When Absolute Values Are Known, Can Also Be Very Misleading

INTRODUCTION

For the purposes of examples only, I’m going to initially present comparison graphs of monthly global land-ocean surface temperature data and model outputs, all of which include (roughly) 3.5 to 4 deg C annual cycles. Why am I presenting model-data comparisons with overlapping annual cycles, you ask? There’s something very unusual about a couple of ensemble members from the CMIP5 archive (the simulations with Historic & RCP8.5 Forcings) when you compare them to the Berkeley Earth data. You’ve got to see this to believe it! I couldn’t make this up. And they provide wonderful lead-ins to a discussion of one of the climate science community’s favorite presentation devices, the multi-model mean, and to a discussion of anomalies.

The climate science community regularly averages the outputs of climate model simulations of Earth’s climate for use in scientific studies. For example, they may call the average a multi-model mean or a multi-model ensemble-member mean, depending on the groups of model outputs they’re averaging. I’m providing the model-data comparisons, because they provide wonderful examples of why averaging a bunch of models that individually give the wrong answers CANNOT hope to provide the correct answer. The model mean is simply an average of models that provide the wrong answers. Or, if you like, the climate model mean is a consensus of wrong answers, with some more wrong than others. And that, after a good number of decades, is what we expect from climate science—a consensus of wrong answers—because the foundation of climate science is global politics.

THE DATA AND CLIMATE MODELS

I’m using the monthly land+ocean surface temperature data from Berkeley Earth, because Berkeley Earth, on their global land+ocean surface temperature data page for that product, provides monthly factors that can be added to their monthly anomaly data by users in need of the global mean surface temperature data in absolute, not anomaly, form. Sadly, Berkeley Earth does not specifically state the source of those absolute values. We’ll discuss that later on in the post. [See the update at the end of the post.] Regardless, the model-data presentations are only for example purposes, so let’s proceed.
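For readers who want to reproduce that step, here is a minimal sketch of how such monthly factors are applied, assuming a pandas Series of monthly anomalies indexed by date; the twelve factor values below are placeholders for illustration, not the actual Berkeley Earth numbers.

import pandas as pd
# Placeholder climatology factors (deg C), one per calendar month -- NOT the real
# Berkeley Earth values, which are published on their land+ocean data page.
monthly_factors_degC = [12.2, 12.4, 13.0, 14.0, 15.0, 15.7,
                        16.0, 15.8, 15.2, 14.2, 13.1, 12.5]
def anomalies_to_absolute(anomalies: pd.Series) -> pd.Series:
    """Add the month-appropriate factor to each monthly anomaly."""
    climatology = pd.Series([monthly_factors_degC[m - 1] for m in anomalies.index.month],
                            index=anomalies.index)
    return anomalies + climatology
# Example with made-up anomalies for the first six months of 1850:
idx = pd.date_range("1850-01", periods=6, freq="MS")
absolute = anomalies_to_absolute(pd.Series([-0.35, -0.30, -0.42, -0.28, -0.33, -0.40], index=idx))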

Climate model outputs from the CMIP5 archive, which was used by the IPCC for their 5th assessment report, are available from the KNMI Climate Explorer. For the models in this post, I’m using models with historic and RCP8.5 forcings, and I’ve selected the climate model ensemble members that provide the warmest and coolest average surface temperature during the period of 1850 to 1900, which the IPCC now uses for pre-industrial conditions. That is, of the simulations of global mean Surface Air Temperatures (TAS), from 90S-90N, from the 81 individual ensemble members, these two examples provide the warmest and coolest global mean surface temperatures during the period of 1850 to 1900. The coolest (lowest average absolute GMST for the period of 1850-1900) is identified as IPSL-CM5A-LR EM-3 at the KNMI Climate Explorer, and the warmest (highest average absolute GMST for the period of 1850-1900) is identified there as GISS-E2-H p3. The average global mean surface temperatures for the other 79 ensemble members during preindustrial times reside somewhere between the two ensemble members shown in this post. The two ensemble members are the same model outputs used in the recent post What Was Earth’s Preindustrial Global Mean Surface Temperature, In Absolute Terms Not Anomalies?. The WattsUpWithThat cross post is here.
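A minimal sketch of that selection step, assuming each ensemble member’s global mean TAS has already been downloaded from the KNMI Climate Explorer and loaded as a monthly pandas Series (the dictionary and variable names below are hypothetical):

import pandas as pd
def preindustrial_mean(tas: pd.Series) -> float:
    """Average global mean surface air temperature over the IPCC's 1850-1900 period."""
    return float(tas.loc["1850-01":"1900-12"].mean())
def warmest_and_coolest(members: dict) -> tuple:
    """Return the (warmest, coolest) member names by their 1850-1900 mean GMST."""
    means = {name: preindustrial_mean(tas) for name, tas in members.items()}
    return max(means, key=means.get), min(means, key=means.get)
# members = {"GISS-E2-H p3": giss_series, "IPSL-CM5A-LR EM-3": ipsl_series, ...}
# warmest, coolest = warmest_and_coolest(members)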

As you’ll recall from that post, for the IPCC-defined pre-industrial period of 1850 to 1900, there is a 3-deg C difference between the global mean surface temperatures of the ensemble member IPSL-CM5A-LR EM-3 (at 12.0 deg C) and the ensemble member GISS-E2-H p3 (at 15.0 deg C). You might say, they’ve got the actual global mean surface temperature surrounded.

MODEL-DATA COMPARISONS 1 – BERKELEY EARTH VERSUS GISS-E2-H p3

Figure 1 is a model-data comparison of monthly global mean surface temperatures in absolute form. With the overlaps of the model and data annual cycles, it’s pretty difficult to see what’s so unusual about the model output and the data.

Figure 1

Figures 2 and 3 present the model and data individually. It’s still difficult to see what’s so unusual.

Figure 2

# # #

Figure 3

So, to make it easier to see, in Figure 4 (click to enlarge), I’ve illustrated the two graphs (Figures 2 and 3) side by side, with the data on the left and the GISS ensemble member on the right.

Figure 4

That’s right! The global mean surface temperatures in GISS-E2-H p3 are so warm that the 1850s in the ensemble member output align with the most-recent decade (2008-2017) of the data’s global mean surface temperatures. So we can say that the GISS-E2-H p3 ensemble member is not only off in terms of global mean surface temperature (too high), it is also off in terms of time. That is, based on the time periods when the model output and data overlap, GISS-E2-H p3 is simulating surface temperatures for some time in the future with respect to the observations-based data, not the same time period as the data.

And to confirm that the 1850s in the model align with the most-recent decade (2008-2017) of the data’s global mean surface temperatures, see Figure 5. In it, I’ve compared the 10-year-average annual cycles in monthly model and data global mean surface temperatures, with 2008-2017 used for the data and 1850-1859 used for the GISS-E2-H p3 ensemble member.

Figure 5

If it weren’t for the nearly 160-year difference in time, I’d be willing to admit that there’s a reasonable agreement in the annual cycles. Unfortunately for the GISS-E2-H p3 ensemble member, that 158-year difference between the model and the data does exist.
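The 10-year-average annual cycles in Figure 5 boil down to a simple monthly grouping; a minimal sketch, assuming monthly Series in absolute form for the data and the ensemble member (the variable names are hypothetical):

import pandas as pd
def average_annual_cycle(monthly: pd.Series, start: str, end: str) -> pd.Series:
    """Mean temperature for each calendar month over the chosen decade."""
    decade = monthly.loc[start:end]
    return decade.groupby(decade.index.month).mean()   # indexed 1..12
# data_cycle  = average_annual_cycle(berkeley_absolute, "2008-01", "2017-12")
# model_cycle = average_annual_cycle(giss_e2h_p3_tas,  "1850-01", "1859-12")
# The two cycles land on top of one another despite the ~158-year offset discussed above.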

MODEL-DATA COMPARISONS 2 – BERKELEY EARTH VERSUS IPSL-CM5A-LR EM-3

Now, in Figures 6 through 10, we’ll run through a similar sequence of model-data comparison graphs, this time for the Berkeley Earth global mean surface temperature data and the output of the IPSL-CM5A-LR EM-3 simulation.

Figure 6

# # #

Figure 7

# # #

Figure 8

# # #

Figure 9

# # #

Figure 10

Yup! That’s right. As shown in Figures 9 and 10, the global mean surface temperatures of the first ten years of the data (1850-1859) align with the global mean surface temperatures of the last ten years of the IPSL-CM5A-LR EM-3 ensemble member (2008-2017). The IPSL-CM5A-LR EM-3 ensemble member could also be said to be off in terms of time, but in this case, based on the point at which the model and data align, the ensemble member is simulating conditions from almost 16 decades too early with respect to the observations-based data.

As I said in the opening, you’ve got to see this to believe it! I couldn’t make this up.

WHY PRESENTING A “MULTI-MODEL MEAN” AND THE USE OF ANOMALIES INSTEAD OF ABSOLUTE VALUES CAN BE MISLEADING, ESPECIALLY WHEN THE ABSOLUTE VALUES ARE KNOWN

Above, using two worst-case examples, we’ve seen how poorly two CMIP5-archived climate models actually simulate global mean surface temperatures as represented by data. Now it’s time for the catch. We’re assuming the values of the adjustment factors provided by Berkeley Earth for their data are correct. Berkeley Earth doesn’t cite the source of the adjustment factors. If they do somewhere and I’ve missed it, please correct me and provide a link in the comments. [See the update at the end of the post.]

We may get an idea of the source from Figure 11. In it, for the commonly used period of 1951-1980, I’ve compared the average annual cycles of the Berkeley Earth global mean surface temperature data and the average annual cycles of the two CMIP5 ensemble members (IPSL-CM5A-LR EM-3 and GISS-E2-H p3), along with the average of those two ensemble members (a.k.a. the model mean).

Figure 11

Based on how closely the average of the two extremely poor ensemble members matches the data, I suspect Berkeley Earth used the model mean of one of the groups of historic simulations associated with one of the RCP scenarios. I don’t know for sure, but I’ll look. Maybe one of the regular denizens at WattsUpWithThat who work with Berkeley Earth will provide the answer and save me some time. [See the update at the end of the post.]
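Whatever the source turns out to be, the arithmetic of Figure 11 is easy to illustrate with synthetic numbers: two annual cycles with similar shapes but offsets of opposite sign average out to something that sits near the data, even though neither member does. The cycles below are made up for illustration, not the actual model output.

import numpy as np
months = np.arange(12)
# A made-up "observed" annual cycle (deg C) standing in for the 1951-1980 data average.
data_cycle = 14.0 + 2.0 * np.sin(2.0 * np.pi * (months - 3) / 12.0)
cool_member = data_cycle - 1.8    # an IPSL-like run that is too cool
warm_member = data_cycle + 1.2    # a GISS-like run that is too warm
two_member_mean = (cool_member + warm_member) / 2.0
print(np.abs(two_member_mean - data_cycle).max())   # ~0.3 deg C: the mean looks "right"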

Regardless, the graphs in this post were provided as examples…fun examples. I could just as easily have used modeled and observed sea surface temperatures where the data are in absolute form and also furnished in anomaly form.

Figure 12 includes two model-data comparisons of global mean surface temperatures in time-series format for the period of 1850 to 2017. In the top graph, the data and ensemble member outputs are presented in absolute form, while, in the bottom graph, they’re presented in anomaly form, referenced to the often-used period of 1951-1980.

Figure 12

As noted at the bottom of Figure 12, If You Were a Climate Scientist And You Wanted to Illustrate How Well a Group of Terrible Climate Models Simulated Global Mean Surface Temperatures—Or Any Other Metric—Would You Present Them in Absolute or Anomaly Form? Also, Would You Present the Model Mean or the Scattered Individuals? Consider that the next time you read any climate science report with model-data comparisons.

Again, in the top graph, even though the individual ensemble members have provided the wrong answers when compared to the data, when we average the wrong answers, we get something close to the correct answer as represented by the data.

Let’s put that in perspective as it relates to the oft-cited climate model ensemble members that use historic forcings to simulate past climate as far back as 1850 and RCP8.5 forcings for computer-aided, crystal-ball-like prognostications of the future under an unrealistic scenario. A multi-model mean of 81 ensemble members, where the models are all giving wrong answers, some worse than others, simply provides us with a consensus of the wrong answers. And to compound that, the RCP8.5 scenario is, more and more often as days go by, being found to be unrealistic.

And in the bottom graph, the same climate model outputs and data are compared, but this time in anomaly form. Look at how much better the models appear to simulate the long-term observed global mean surface temperatures. Keep those two graphs in mind the next time someone presents a model-data comparison with the data and model outputs presented in anomaly form…and try not to laugh.

Oh, go ahead and laugh, we might as well have fun while we endure this nonsense.
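Mechanically, what the bottom graph of Figure 12 does is reference each series to its own 1951-1980 mean, so any constant offset between a model and the data disappears by construction. A minimal sketch with made-up constant series:

import pandas as pd
def to_anomaly(monthly: pd.Series, base: tuple = ("1951-01", "1980-12")) -> pd.Series:
    """Reference a monthly series to its own mean over the base period."""
    return monthly - monthly.loc[base[0]:base[1]].mean()
idx = pd.date_range("1950-01", "1985-12", freq="MS")
warm_model = pd.Series(15.0, index=idx)     # runs 3 deg C warmer than...
cool_model = warm_model - 3.0               # ...this one, yet...
print((to_anomaly(warm_model) - to_anomaly(cool_model)).abs().max())   # ...their anomalies are identical (0.0)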

Oops, almost forgot. I need to thank Berkeley Earth for publishing those monthly conversion factors. I had a lot of fun preparing this post, and I couldn’t have done it without those factors.

That’s it for this post. Have fun in the comments and enjoy the rest of your day.

And there are people who wonder why I’m a heretic of the religion of human-induced global warming/climate change. At least I’m a happy heretic. I laugh all the time about the nonsense prepared by whiny alarmists.

UPDATE

Steven Mosher of Berkeley Earth was kind enough to explain the source of Berkeley Earth’s absolute temperature adjustment factors in the comment here. Steven Mosher writes (excluding his typical rudeness to bloggers at WUWT):

The SOURCE of the absolute temperatures is the data, which comes in absolute T.

All other methods use station temperatures and then they construct station anomalies and then they combine those anomalies.

Our approach is different. We use kriging.

First.

Temperature at a location is decomposed into two elements: a climate element and a weather element

T = f(Lat, Elevation) + Weather

f(Lat, Elevation) is the climate element. It states that part of the temperature is a function of the latitude of the station and the elevation of the station. Think of this as a regression. (Short aside: there is also a seasonal component.)

If you take MONTHLY averages then you can show that over 90% of the monthly mean is explained by the latitude of the station and the elevation of the station (Willis showed something similar with satellite temps). Using this regression approach allows you to predict temperatures where you have no observations. A simple example would be you have the temperature at the base of a hill and you can predict the temperature up to the peak… with error of course, but we know what those errors look like.

This "climate" part of the temperature is then subtracted from T

W = T – f(L,E)

This is the "residual", the 10% that is not explained by latitude and elevation. We call this "weather". It’s this residual that changes over time.

After decomposing the temperature into a fixed climate (f(L,E)) and a variable weather component, we then use kriging to interpolate the weather field.

The Temp we give you is the absolute T. The source is the observations.

Thank you, Steven.
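For readers who want to see the shape of that decomposition in code, here is a minimal sketch using an ordinary least-squares fit for the climate element f(latitude, elevation); the station values are invented, and the real Berkeley Earth method kriges the residual weather field rather than stopping at the regression shown here.

import numpy as np
# Invented station data: latitude (deg), elevation (m), monthly mean temperature (deg C).
lat  = np.array([60.0, 52.0, 45.0, 40.0, 35.0, 30.0])
elev = np.array([200.0, 50.0, 800.0, 1500.0, 100.0, 20.0])
temp = np.array([-2.0, 5.5, 6.0, 4.0, 16.5, 21.0])
# Climate element: fit T ~ f(lat, elevation) as a simple linear regression.
X = np.column_stack([np.ones_like(lat), lat, elev])
coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
climate = X @ coeffs
# Weather element: the residual, W = T - f(lat, elev); this is what gets kriged.
weather = temp - climate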

STANDARD CLOSING REQUEST

Please purchase my recently published ebooks. As many of you know, this year I published 2 ebooks that are available through Amazon in Kindle format:

To those of you who have purchased them, thank you. To those of you who will purchase them, thank you, too.

Regards,

Bob

PS: Will I continue to present model-data comparisons using a multi-model mean or a multi-model ensemble-member mean? Of course I will, because I use them to show how poorly the consensus (better said, groupthink) of the climate modeling groups, as represented by the model means, simulates a given metric, usually sea surface temperatures in absolute form or 30-year trends in global mean surface temperatures. And now I have this post to link to in those future posts.

Latitude
November 29, 2018 7:25 am

Something I would like to see….at what point was the model actually run?
..how much is hindcasting…and how much is prediction?

Past temperatures have been adjusted down…to show a faster rate of warming.
..if the model is hindcast/tuned to that….it will never be able to predict the future

LdB
Reply to  Latitude
November 29, 2018 8:19 am

You can always match the hindcast because you have the results, so you can tune it, but you will never match the future for any real length of time forward of now; it’s impossible. All you can generate is probabilities, and even the probabilities are problematic because you are hoping your ensemble covers the range. It’s not a matter of computer power, etc.; the problem is not reducible in a classical sense. It is the same problem as with weather forecasts: they are useless beyond 3-5 days.

The models, like a weather forecast for tomorrow, say something which may or may not be of value, depending on what you need to know and what your allowance for error is. If you want them to match the real world exactly, they can’t do it, and it is silly to expect it.

Ferdberple
Reply to  LdB
November 29, 2018 8:42 am

The future can be predicted for a one-sided coin. However, as soon as a coin has two sides, the future becomes a probability where temperatures are free to go up or down regardless of what we might do with CO2.

AGW is not Science
Reply to  LdB
November 29, 2018 9:35 am

I actually used to work with someone who was involved in weather-related transactions and asked him point blank how far out you could rely on weather forecasts.

“Two days.” That was his answer, without a moment’s hesitation.

Max Dupilka
Reply to  AGW is not Science
November 29, 2018 10:13 pm

As a long time forecaster we do a detailed forecast for the first two days and then a much more general description beyond that. How far out you can have good faith in the forecast depends a lot on the weather pattern. With a slowly developing and fairly stable pattern you can have good confidence out to about 5 days. With a rapidly developing system you might only be reasonably confident out a few hours.
And what type of forecast are you looking for? It is generally easier to forecast temperatures at a location than precipitation. We also do humidity forecasts for fire weather, and that is a real tough one.

Reply to  Latitude
November 29, 2018 9:19 am

…. and why not, makes the model farming a doddle bringing in a rich grant harvest.
Make the business flourish, do it the HadCRUT way (see here ), don’t forget to sprinkle the completed product with the fine scent of CACCA verde.

Reply to  Latitude
November 29, 2018 10:14 am

Each CMIP5 model estimates monthly global mean temperatures in kelvin backwards to 1861 and forwards to 2101, a period of 240 years. The dataset from CMIP5 models includes 145 years of history to 2005, and 95 years of projections from 2006 onward. Presumably CMIP6 will do history up to 2014 and forecasts forward after that.

Reply to  Latitude
November 29, 2018 2:58 pm

THERE ARE NO
LONG TERM
CLIMATE MODELS !

The computer games that are called “models”
make such wrong predictions that it is obvious
they are NOT models of the climate on this planet.

Perhaps they are models of the climate on Uranus?

If they are supposed to be climate models for Earth,
then they are failed prototypes … unless the goal
was for predictions of warming that are triple of reality,
… and then they would be perfect.

In fact, the so-called “models” were designed to
support the 1979 Charney Report wild guess
of climate sensitivity to CO2 ( which everyone here
should know is really an unknown number,
not very likely to be more than 1.0 degrees C. ).

What are called “climate models”,
are really nothing more
than junk science propaganda
to support the “CO2 is Evil” cult.

They deserve no respect from anyone here —
they only deserve ridicule, and the first step
of ridicule is to stop calling them “climate models”.

They are nothing more than
computer games, with the results
decided in advance,
designed by
goobermint bureaucrats
with science degrees to
to provide the politicians
with the “answers” they wanted,
thereby earning permanent
job security for those bureaucrats
with science degrees!

Wild guesses about the future climate,
have been converted into “models”
(that make wrong predictions)
which appear to be complex real science.

In fact, the “models” are
nothing more than
complex junk science.

My climate science blog,
with over 28,000 page views:
http://www.elOnionBloggle.Blogspot.com

My article awarding Barack Obama the first
“Climate Buffoon of the Year” award,
the most popular article in four years !
http://elonionbloggle.blogspot.com/2018/11/barack-obama-honest-global-warming.html

Timo Soren
November 29, 2018 7:42 am

The S-B equation says that radiation is proportional to T^4. Can’t imagine how a climate model does not use this at some point in its work; hence if they have a 3-degree baseline difference then they have that 3-degree difference carried through their models’ 4th power. Hence highly increased, yet…

The conclusion is then that a 3 degree difference in our temp now is really irrelevant.

Gary Mount
Reply to  Timo Soren
November 29, 2018 9:09 am

The 3 degrees Celsius difference represents about 4.2% difference in radiated energy at average annual global surface (1.5 to 2 meters above ground) temperatures (286 to 289 kelvin comparison used in my calculations).
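The arithmetic behind that figure is easy to check; a back-of-the-envelope Stefan-Boltzmann ratio (emitted power scaling as T^4), not a claim about actual surface emission:

T_cool, T_warm = 286.0, 289.0             # kelvin
ratio = (T_warm / T_cool) ** 4            # Stefan-Boltzmann: power ~ T^4
print(f"{(ratio - 1.0) * 100.0:.1f}%")    # ~4.3%, in line with the ~4.2% quoted above
                                          # (the exact figure depends on which temperature is the base)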

LdB
Reply to  Gary Mount
November 29, 2018 7:02 pm

No, it doesn’t; thermal radiation isn’t restricted to that sort of behaviour. You can get the difference without ever changing the radiated energy but by changing the polarization, absorption, albedo, etc.

I have shown the clip a number of times to make the absolute point: you can put a blue balloon inside a red balloon and fire a ruby laser straight thru the red balloon and pop the blue balloon, because the red laser will pass thru the red balloon without generating much heat, as it is a colour match. There is no difference in the laser beam as it strikes both balloons (if anything it is fractionally weaker as it hits the blue balloon); the difference is simply the colour of the material, and so it reaches a different temperature as the beam passes thru. I guess we could have the beam passing thru a red balloon and then spray black paint at it, and you should be able to guess what happens.

So if you measure a difference, all you can say is something changed; it is silly to go for the conclusion that the radiation energy changed. That is one of many possibilities.

David L. Hagen
Reply to  Timo Soren
November 29, 2018 6:38 pm

That absolute temperature difference in turn makes an exponential difference in vapor pressure. See:
D. Koutsoyiannis, Clausius-Clapeyron equation and saturation vapour pressure: simple theory reconciled with practice, European Journal of Physics, 33 (2), 295–305, doi:10.1088/0143-0807/33/2/295, 2012.
http://www.itia.ntua.gr/en/docinfo/1184/

Editor
November 29, 2018 7:55 am

Typo ==> “We may get an idea of the source from Figure 11. In it, for the commonly used period of 1951-1850,”

Something wrong with the dates….the graph uses 1951-1980.

sycomputing
Reply to  Bob Tisdale
November 29, 2018 8:12 am

Thanks for an interesting post.

Since we’re doing typos, you might’ve missed a verb in the bold portion below:

” . . . provides monthly factors that can added to their monthly anomaly data by users in need of the global mean surface temperature data in absolute, not anomaly, form.”

Dale S
November 29, 2018 8:00 am

HADCRUT has a file with its absolute values for the 1961-1990 reference period for 5×5 degree regions of the world. After weighting for area and calculating this works out to be, by month:

Jan 12.11
Feb 12.25
Mar 12.87
Apr 13.86
May 14.89
Jun 15.59
Jul 15.86
Aug 15.71
Sep 15.07
Oct 14.09
Nov 13.03
Dec 12.36

This has a very close shape to figure 11 and as you might expect from the differing base period, is only a bit lower in absolute terms.

While the global annual cycle for the models looks to have the right shape, I used the HADCRUT file to figure the annual cycle to the 5 degree by 5 degree region I lived in. Comparing to about ten models for the same region in the same period, I found the models matched each other nicely (other than varying on the absolute temperature), with a uniform rise and fall like you see in the annual cycle. However, that didn’t match the HADCRUT shape, which did not rise and fall uniformly, and they all ran cold (compared to observations) outside winter.

Reply to  Dale S
November 29, 2018 10:59 am

see my comment further above at 9:19am

Chris Morrison
November 29, 2018 8:01 am

Climate models are the sacred texts for the new “settled” religion.

Garbage in – Gospel out.

Tom in Florida
Reply to  Chris Morrison
November 29, 2018 9:15 am

+76

Reply to  Chris Morrison
November 29, 2018 6:05 pm

That is funny! A very apt description of climate models.

Tom Halla
November 29, 2018 8:04 am

With that much variation in the purported temperatures, either the actual records are grossly inadequate, or the adjustments are larger than the supposed change over time. Berkeley Earth is fairly clearly “adjusted”, as the 1940-1975 cooling has gone away, as has the late 1900’s cooling.

AGW is not Science
Reply to  Tom Halla
November 29, 2018 9:45 am

The Ministry of Truth has been working overtime.

Alan Tomalty
Reply to  AGW is not Science
November 29, 2018 1:31 pm

I don’t trust Berkeley Earth data. Only the UAH satellite data that corresponds to balloon data will nail the Alarmist coffin.

Dee
November 29, 2018 8:18 am

Another very informative exposé.

Uncertainties in data render uncertainties in the anomalies derived from said data.

I read the headline of the Guardian today, which screamed of all sorts of broken global average temperature records in recent years.

https://www.theguardian.com/environment/2018/nov/29/four-years-hottest-record-climate-change

Nowhere in the article is it divulged what the temperature is, was, or how much it has been “broken” by.

It’s very much like reading a report of a new land speed record, where the reporter doesn’t know what speed was achieved, what previous record it broke and whether any of the competition officials had a stopwatch.

I also foolishly logged on to the WMO website because it’s their report the Guardian was raving about, and nope, there’s no mention there either of what temperature we’re at, what temperature we were at, or what temperature they’d like us to be at.

I felt rather empty afterwards.

Richard III
November 29, 2018 8:33 am

Why haven’t the purveyors of these various “models” worked out the differences between them and produced a work product they can all agree on? If my “model” is “better” than your “model”, why haven’t we discarded yours? If they are both equally speculative, why should anyone believe either?

AGW is not Science
Reply to  Richard III
November 29, 2018 9:42 am

Well said! An “ensemble” of models with a THREE DEGREE range between their “results” screams “They don’t have a clue what they’re talking about” loud and long, especially when they’re wailing about a temperature increase to date from their convenient Little Ice Age starting line of LESS THAN A DEGREE Celsius and the “potential” supposed horror show of a degree and a half in total, which like the 0.8 to date won’t even be noticed.

Another Ian
Reply to  Richard III
November 29, 2018 12:36 pm

From

https://wattsupwiththat.com/2018/11/28/an-assessment-of-the-4th-national-climate-assessment/

“All climate models fail to predict the weather or climate, with the possible exception of the Russian model INM-CM4(Volodin, Dianskii and Gusev 2010). ”

Bob

I wonder if you’re going to have a look at this one?

William Capron
November 29, 2018 8:40 am

So we are to believe that though two wrongs don’t make a right, 35 averaged wrongs do?

Ferdberple
November 29, 2018 8:47 am

Statistically, anomalies hide the variability apparent in the absolute data. They thus mask natural variability and have misled climate science into underestimating natural variability.

AGW is not Science
Reply to  Ferdberple
November 29, 2018 9:44 am

Especially when they discard the actual temperature readings in favor of “homogenized” guesswork with no reasonable justification.

Clyde Spencer
Reply to  Ferdberple
November 29, 2018 10:35 am

Any number can be used as a subtrahend to create an anomaly. The choice of that number has implications for how the variance is reduced. Any base, such as the lowest diurnal low from pre-industrial records could be justified. Or, even using some totally arbitrary number such as 4π. It seems to me that the choice of a 30-year temperature base was driven, at least in part, by a desire to emphasize recent warming. It makes for convenient red-blue graphs, pointing out all the recent red.

Steven Mosher
Reply to  Clyde Spencer
November 29, 2018 6:55 pm

wrong.

Clyde Spencer
Reply to  Steven Mosher
November 29, 2018 9:11 pm

Oh great Mosher, I bow down to your monosyllabic wisdom!

AFR
November 29, 2018 9:04 am

Wait a minute, wait a minute… How does Figure 12 make sense?

I thought an anomaly is obtained from the original data simply by subtracting the reference period values. In such a case IPSL-CM5A-LR EM-3 and GISS-E2-H p3 should still be ~3 degrees apart in the bottom graph.

It looks like maybe they were first adjusted to the model mean and then the anomaly was computed? That shouldn’t still be called an anomaly should it?

And how does that make sense? That’s like saying “Sure my temperature simulation is way off, but I’m sure my little wiggles along the way are spot on!”

Max Dupilka
Reply to  AFR
November 29, 2018 10:25 am

I am wondering the same thing. Just subtracting the same climate mean from each data set will still keep the same relative separation. What am I missing?

Reply to  AFR
November 29, 2018 10:34 am

I thought an anomaly is obtained from the original data simply by subtracting the reference period values.

Not to my knowledge. Each individual model uses its own data for 1951-1980 reference period. In the case of the models the “original data” is what the individual model outputted. What this does is eliminate the offset between the models.

AFR
Reply to  Greg F
November 29, 2018 10:51 am

Ah! I never realized that. I’ve always thought the model anomalies showed their difference from measured reality (i.e. the measured reference period), not their own reality. Interesting.

Gums
November 29, 2018 9:15 am

Salute!
Why can’t we use a 30 year baseline from 1920 to 1950?
What is wrong with Kelvin? And best estimate of last million or so years global temp?
What is the “optimum” global temperature in Kelvin? How do the climate experts know that is the “perfect” temperature for mankind, plants and all the critters on Earth?
How many alarmists are bonafide biologists and can show effects of a degree up or a degree down that lasts long enough to influence vegetation and animal reproduction/distribution/locale/migration patterns etc?
Gums asks…

Tom in Florida
Reply to  Gums
November 29, 2018 9:21 am

The perfect human temperature is one that will maintain our body temperature at 98.6 F without the use of external clothing yet will not overtax our sweat glands during exertion.

Russ Wood
Reply to  Tom in Florida
December 3, 2018 5:07 am

Having occupied a lot of ‘open offices’, with a central control (someone else’s), I can say with certainty that the optimum indoor shirt-sleeves working temperature is %$%@*())(*(_)&*^…
or, more simply – something different!

DWR54
Reply to  Gums
November 29, 2018 9:43 am

What is wrong with Kelvin?

Fair point. Bob refers to the use of “absolute” temperatures throughout this article, but all his ‘absolute’ charts are in degrees Celsius! Zero degrees Celsius is equivalent to 273.15 K. The Celsius scale could be considered an ‘anomaly’ of the absolute scale, where the baseline, or ‘zero’, is set to a single point: 273.15C.

By using Celsius instead of Kelvin (which is the scale the models actually use, AFAIK), Bob inadvertently makes a point in favor of anomalies. They make change in any system easier to detect.

DWR54
Reply to  DWR54
November 29, 2018 9:46 am

273.15K even! Tut!

Mark Hansford
Reply to  DWR54
November 29, 2018 11:02 am

The base point never moves and is the most relevant temperature for life to exist – the freezing point of water. Celsius therefore isn’t an anomaly, as it is permanently fixed and never fluctuates. It makes sense to represent the Kelvin scale as whole numbers above and below freezing point; naming it Celsius is a bit deceptive IMO.

DWR54
Reply to  Mark Hansford
November 29, 2018 12:39 pm

Yes, but in relation to ‘absolute’ temperatures, zero Celsius is a baseline from which we derive our temperature ‘anomalies’ (+/- ‘zero’). The fact that 273.15K happens to be the temperature at which water freezes on earth, at standard pressure, is useful to us, which is why we make it our ‘anomaly’ base for the Celsius scale.

This illustrates the point that we refine ‘absolutes’ all the time to make them more relevant to our experiences. That’s what anomalies do.

Saying that global temperatures today are ~288K compared to a long term average of ~287K is pretty useless in terms of human experience.

Saying that global temperatures today are ~1.0 degree (K or C) above the long term average over the period of human civilization, where the long term variation in that period was rarely if ever +/- 0.5 degrees from the long term average, makes it more relevant.

Why would we want to diminish the relevance of this by using absolute temperature scales? Who benefits from that?

Marcus
Reply to  DWR54
November 29, 2018 3:27 pm

“98.6 F without the use of external clothing”

Do you really want to give Hillary an excuse to be running around naked ? Oooh, heebee jeebee’s !

Hivemind
Reply to  Gums
November 29, 2018 6:15 pm

“Why can’t we use a 30 year baseline from 1920 to 1950?”

Why can’t we decide what the Earth’s temperature should be and use that as the basis for the calculations?

LdB
Reply to  Hivemind
November 29, 2018 7:20 pm

You can’t, because you have to set parameters such as polarization, albedo, absorption, etc., and that relies on both the atmosphere and ground characteristics. So whatever values you pick might be right for some short time, but even later in the year it will be wrong, and you have no hope for it over a long period of time. The bottom line here is the value is supposed to wander around; it is natural, and life on earth has evolved to deal with it.

The more open question is how much is man changing it and is it catastrophic.

Clyde Spencer
Reply to  LdB
November 29, 2018 9:15 pm

Polarization???

TomRude
November 29, 2018 9:29 am

Well we are doomed unless…
https://www.cbc.ca/news/technology/un-warns-global-temperatures-rise-2100-1.4925170
The CBC like any other good servile media offers this predictable release from the WMO -every time before a COP, it is always worse than we thought-

“Greenhouse gas concentrations are once again at record levels and if the current trend continues we may see temperature increases 3 to 5 C by the end of the century,” Secretary General Petteri Taalas said in the WMO’s annual statement on the state of the climate.

As I predicted last year, each year will be incrementally worse. Why? Because they can turn the thermometer on or off as they wish. A slight change in algorithm and here we are, unprecedented high temperature year. they could really make 2100 next year if they wish, but the rope would be too obvious.
By 2020 the 2100 prediction will be 7C, sea level rise by 10 m.
By 2025 the 2100 prediction won’t matter because NATO’s recklessness will have managed to do us all.

November 29, 2018 9:40 am

From what I’ve seen, the use of anomalies assumes a 0.0% accuracy and a +-0.0 precision. This is so fundamentally wrong that every scientist who does this shouldn’t be allowed to publish a study. Models that use data as if it was this accurate and precise shouldn’t be believed either.

Adjusting data within the limits of precision and accuracy is just as futile. There is no way to gain any additional accuracy or precision by doing so. Each and every published paper should be required to address the shortcomings of the past temperature data and explain how it affects the outcome of the paper’s results.

If we built bridges or telephones using data like this, the bridges would collapse and no one’s cell phone would possibly work.

DWR54
November 29, 2018 10:07 am

From what I’ve seen, the use of anomalies assume a 0.0% accuracy and a +-0.0 precision.

Anomalies are just differences from representative averages. You can turn any data into anomalies. Take the classic ‘heights of kids in year 8 at school’, say. You measure these in a representative group and calculate an average; say 5 ft. So 5 ft becomes your zero, based on your representative sample.

Any year 8 kid you measure can now be described in terms of ‘x’ above/below average and that ‘x’ figure can be expressed as an ‘anomaly’, perhaps in terms of ‘inches different from average’.

Yes, there will be imprecision at all points during the measuring process; different observers, different tools, typos, etc… However, as Nick Stokes has pointed out very clearly here many times, the use of anomalies greatly reduces the margins of uncertainty compared to the use of more absolute values.

This is not to say that anomalies remove uncertainty altogether. If you download the HadCRUT4 data you will see that they publish a wide range of uncertainty in their monthly updates. NOAA also publish the 95% error margins in their monthly global temperature anomaly estimate.

Michael Jankowski
Reply to  DWR54
November 29, 2018 10:44 am

The “reduction in uncertainty” is fool’s gold. It’s not really an improvement. You’re shifting the data closer to the mean and generally down an order of magnitude or more with anomalies when working in Celsius. The units are still degrees C, but it is apples to oranges. You are just presenting the same thing in a different way. You can’t just look at the magnitude behind the +/- and say, “Oh, it got smaller, so this method is better.”

Clyde Spencer
Reply to  DWR54
November 29, 2018 10:48 am

DWR54

You said, “However, as Nick Stokes has pointed out very clearly here many times, the use of anomalies greatly reduces the margins of uncertainty compared to the use of more absolute values.”

That reduction is illusory. The standard deviation (SD) of the raw data is what it is, and is related to the range through the Empirical Rule. The SD expresses what kind of random variation can be expected, and the probability of a sample deviating from the mean of the sample population. If you reduce the range by subtracting a number that is a significant fraction of the raw data, you will reduce the range of the result. But, the raw data will still vary just as much.
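A two-line check of the point about subtraction: shifting a series by any single fixed baseline leaves its standard deviation untouched (whether the across-station spread reduction that anomalies deliver is meaningful is the separate question argued below).

import numpy as np
temps = 14.0 + np.random.default_rng(0).normal(0.0, 0.5, size=360)   # hypothetical monthly values, deg C
anoms = temps - temps.mean()                                          # subtract one fixed baseline
print(np.std(temps), np.std(anoms))                                   # identical: a constant shift changes nothing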

Reply to  Clyde Spencer
November 29, 2018 2:14 pm

Clyde,
“The standard deviation (SD) of the raw data is what it is”
That depends on what the meaning of “is” is. There are two ways (at least) that data could deviate:
1. measurement – variation you might get if you measured the same thing differently
2. sampling – variation if you chose different things to represent the population
Anomaly relates to sampling error. If you choose different points, you get a varying estimate of the sample mean due to:
a) choosing points that are normally warmer or cooler places
b) choosing points that were warmer or cooler than they normally are
It is b) that you want to know about. Anomaly removes the sampling variability due to a), which you don’t want to know about.
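A minimal simulation of that distinction, with invented numbers: every station shares the same genuinely warm year (source b), but stations sit in very different climates (source a); the mean of raw temperatures over a random subset of stations scatters widely with the choice of stations, while the mean of their anomalies barely moves.

import numpy as np
rng = np.random.default_rng(1)
n = 1000
climatology = rng.normal(14.0, 10.0, n)          # (a) normally warmer or cooler places
weather = 0.5 + rng.normal(0.0, 0.3, n)          # (b) a year that really is ~0.5 C warm everywhere
temps = climatology + weather
raw_means, anom_means = [], []
for _ in range(200):                              # many different choices of 50 stations
    pick = rng.choice(n, size=50, replace=False)
    raw_means.append(temps[pick].mean())
    anom_means.append((temps[pick] - climatology[pick]).mean())
print(np.std(raw_means), np.std(anom_means))      # raw spread >> anomaly spread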

Clyde Spencer
Reply to  Nick Stokes
November 29, 2018 9:18 pm

Stokes,
I see that you attended the Bill Clinton school of sophistry.

Kurt
Reply to  Nick Stokes
November 29, 2018 11:48 pm

Nick:

What if you have a system where the amount by which temperature at a location varies over time about its local average, in response to external and unpredictable events like rain or a change in the direction of wind (or even rising concentrations of GHGs), depends on its absolute temperature? Are you still no longer interested in standard deviations due to sample choices of locations having temperatures higher or lower than the population average?

Reply to  DWR54
November 29, 2018 10:51 am

Your 95% error margin is a statistical calculation based on the distribution of values not on the range of measurement errors. It simply does not include measurement error. Tell us what the average would be if your measuring devices had a 5% accuracy and a +- 1 inch precision. Then tell us what the range of any given anomaly is. These measurement errors are not removed by averaging or any other statistical manipulations unless they are measurements of the same thing with the same instrument. They remain throughout any calculations you do.

You are basically stating that your calculation of the average has 0% accuracy and +- 0.0 precision errors. Consequently the anomaly also has these. This just violates every concept taught in metrology.

Reg Nelson
Reply to  DWR54
November 29, 2018 10:56 am

The problem is they aren’t measuring all of the kids; they are measuring a small percent. 75% of the kids (75% being the percent of the Earth where there are no surface stations) are never measured. Instead, the heights of these kids are estimated. Phil Jones admitted that the height of the kids (temperature) in the Southern Hemisphere is largely made up.

So using anomalies of data that is largely made up is meaningless, and you can’t say, with any certainty, whether the kids are getting taller or not. Error estimates based on imaginary data are equally meaningless.

And even if you did measure all of the kids, and found the kids were getting taller, it doesn’t tell you why.

Another issue is how many years you have to measure the height of the kids to identify a trend. The Earth is 4.55 billion years old. Instrumental data is less than 300 years old; satellite data about 40. Both of these are a tiny, tiny, tiny fraction of the Earth’s history.

Reply to  Reg Nelson
November 29, 2018 1:43 pm

“Phil Jones admitted that the height of the kids (temperature) in the Southern Hemisphere is largely made up.”
An endlessly repeated lie.

“The problem is they aren’t measuring all of the kids”
Yes, that is sampling. That is how we get to know about the world. We never measure everything.

Reg Nelson
Reply to  Nick Stokes
November 29, 2018 2:36 pm

“Phil Jones admitted that the height of the kids (temperature) in the Southern Hemisphere is largely made up.”
An endlessly repeated lie.

From Phil Jones’s climate gate emails:

“For much of the SH between 40 and 60S the normals are mostly made up as there is very little ship data there.”

Link: http://di2.nu/foia/foia2011/mail/2729.txt

What part of that is an a “An endlessly repeated lie.” ?

Please educate me.

Reply to  Reg Nelson
November 29, 2018 3:21 pm

“Please educate me.”
“between 40 and 60S” is not the Southern Hemisphere
But more importantly
“normals are mostly made up”
Normals are not the temperatures. The situation was that they had a whole lot of new temperature information from drifter buoys. They did not have a lot of information about the region in the reference period (1961-90), which are the best source for a normal used for anomaly. In those circumstances, the right thing to do is to use whatever information you have to compute normals. The actual value of normals is not critical; it is important to avoid various ways in which they could induce spurious trends, but that is well studied.

Reg Nelson
Reply to  Reg Nelson
November 29, 2018 3:37 pm

Phil Jones explicitly says “For much of the SH between 40 and 60S the normals are mostly made up as there is very little ship data there.”

Nick, this means there is no temperature data for this area, and really nearly all of the Southern Hemisphere. You must know this. Do you not?

Reply to  Reg Nelson
November 29, 2018 3:46 pm

“this means there is no temperature for this area”
A more complete quote is:
“The issue Ray alludes to is that in addition to the issue of many more drifters providing measurements over the last 5-10 years, the measurements are coming in from places where we didn’t have much ship data in the past. For much of the SH between 40 and 60S the normals are mostly made up as there is very little ship data there.”
They are getting a whole lot of temperature data. The issue is the calculation of normals, which is something different.

Reg Nelson
Reply to  Reg Nelson
November 29, 2018 8:05 pm

“They are getting a whole lot of temperature data. The issue is the calculation of normals, which is something different.”

LMAO what exactly does a “whole lot of temperature data” mean exactly? Is that a scientific term? How do you define that? How can you calculate “normals” from data which doesn’t actually exist?

Reg Nelson
Reply to  Nick Stokes
November 29, 2018 3:04 pm

“The problem is they aren’t measuring all of the kids”
Yes, that is sampling. That is how we get to know about the world. We never measure everything.
____

For sampling to work, it has to be both random and representative of the population as a whole — simple statistics.

Surface temperature data is not randomly or universally sampled.

Your argument is both incredibly weak and incredibly wrong.

Reply to  Reg Nelson
November 29, 2018 3:25 pm

” it has to be both random and representative of the population as a whole”
No, it doesn’t have to be random. That is a way of achieving representativeness, but is not essential for its own sake. Grid sampling, for example, is not random.

And in spatial sampling, it doesn’t have to be representative of the population as a whole. It has to be representative of its area. Spatial integration (area weighting) does the rest.
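A minimal sketch of that area weighting for a regular 5x5-degree grid, where each latitude band’s weight is proportional to the cosine of its latitude; the gridded values here are random placeholders.

import numpy as np
lats = np.arange(-87.5, 90.0, 5.0)                                     # 36 cell-centre latitudes
grid = np.random.default_rng(2).normal(14.0, 10.0, (lats.size, 72))    # placeholder 5x5-degree field
weights = np.cos(np.deg2rad(lats))                                     # cell area shrinks as cos(latitude)
global_mean = np.average(grid, axis=0, weights=weights).mean()         # weight by latitude, then average longitudes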

Steven Mosher
Reply to  Reg Nelson
November 29, 2018 6:44 pm

Surface temperature data is not randomly or universally sampled.

universal sampling? (an oxymoron)

The simple fact is that you don’t understand the spatial coherence of temperature and don’t understand the sampling required to faithfully construct an estimated field

Reg Nelson
Reply to  Reg Nelson
November 29, 2018 8:46 pm

“No, it doesn’t have to be random. That is a way of achieving representativeness, but is not essential for its own sake. Grid sampling, for example, is not random.”

If it is not random, then it will be biased. For example, if I did a poll for the last presidential election and only surveyed known Democrats, then my poll is neither random nor representative of the whole. The results are influenced and determined by the samples selected. Grid sampling this data does not make it, in any way, more relevant or more accurate.

Reg Nelson
Reply to  Reg Nelson
November 29, 2018 9:04 pm

Steven Mosher November 29, 2018 at 6:44 pm
Surface temperature data is not randomly or universally sampled.

—–

My point exactly. The stations (samples) measured are predetermined (by location and history) and not representative of the Earth as a whole. On that we can agree.

Loren Wilson
Reply to  Nick Stokes
November 29, 2018 5:07 pm

The point is, are you measuring enough? Many of us think the answer has been no for quite a while, with some improvement once the satellites began measuring a reasonable fraction of the earth’s surface and atmosphere. There is a great deal of extrapolation in the results. You can’t call it interpolating, because when I interpolate, I have a smooth curve with data points close enough together that my interpolated result will be close to the true value. This is testable in my field of chemical engineering, because I can interpolate from a set of known data using every other point for my grid, reduce the grid size, and interpolate again. If the interpolated values differ, then there are issues. In climate, you don’t have enough stations before 1900 to even try this. It would be educational if you tried this for your current temperature estimates using half as many stations. How much does it change, what does the uncertainty do, and how does it compare with the satellite data?

Reply to  Loren Wilson
November 29, 2018 6:24 pm

“It would be educational if you tried this for your current temperature estimates using half as many stations.”
Yes, I do just that here. Not just halving, but a systematic reduction process, reducing many times. And repeating many times (with random variation). The outcome:
1. Reduction doesn’t make much difference to the mean temperature.
2. The spread is the best measure of sampling error. You can reduce from 5000 to about 500 stations before the spread rises to 0.1°C.
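A minimal sketch of that kind of reduction experiment with synthetic station anomalies (not the actual calculation behind the linked post): thin the network repeatedly and watch the spread of the resulting global means grow.

import numpy as np
rng = np.random.default_rng(3)
station_anoms = rng.normal(0.8, 1.5, 5000)        # hypothetical station anomalies for one month
for n_keep in (5000, 2000, 500, 100):
    estimates = [station_anoms[rng.choice(5000, n_keep, replace=False)].mean()
                 for _ in range(200)]
    print(n_keep, round(float(np.std(estimates)), 3))   # the spread grows as stations are removed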

Clyde Spencer
November 29, 2018 10:13 am

With an ensemble of simulations or predictions, logically, there can only be one best result. Averaging that best result with all the poorer results dilutes the best result and ends up with a mean that has little predictive value. The assumption is that the model runs will be both high and low, and therefore cancel out. However, from what I have seen, the Russian model(s) run a lot cooler and appear to be the best results compared to reality. Thus, the assumption of cancellation is wrong, and the averaging of the ensemble provides poorer results than what a single model may be capable of.

What needs to be done is identify the best result or best 10% of the results, verify that the results can be replicated, and look for things in common that set them apart from the poorly performing models. That is, the models need to be verified and ranked, and only the best used. If multiple runs of the same model provide widely differing results, then serious thought should be given to abandoning trying to create predictive models based on current approaches.

RobR
November 29, 2018 10:16 am

A double-edged sword to be certain, as the mean of the models is too hot relative to observed (albeit adjusted) temperatures. Since projections are falsifiable, they meet the criteria for scientific prediction.

The fact that all but one of the models runs hot allows us to draw stronger inferences regarding high and middle sensitivity estimates. Essentially, the modelers have disproved their own catastrophic predictions.

Steven Mosher
November 29, 2018 10:19 am

“Sadly, Berkeley Earth does not specifically state the source of those absolute values. We’ll discuss that later on in the post. Regardless, the model-data presentations are only for example purposes, so let’s proceed.”

HUH?

Bob, in addition to doing model and observation comparisons wrong, you can’t even take the time to read our paper, which explains that we use absolute temperatures.
The SOURCE of the absolute temperatures is the data, which comes in absolute T.

All other methods use station temperatures and then they construct station anomalies and then they combine those anomalies.

Our approach is different. We use kriging.

First.

Temperature at a location is decomposed into two elements: a climate element and a weather element

T = f(Lat, Elevation) + Weather

f(Lat, Elevation) is the climate element. It states that part of the temperature is a function of the latitude of the station and the elevation of the station. Think of this as a regression. (Short aside: there is also a seasonal component.)

If you take MONTHLY averages then you can show that over 90% of the monthly mean is explained by the latitude of the station and the elevation of the station (Willis showed something similar with satellite temps). Using this regression approach allows you to predict temperatures where you have no observations. A simple example would be you have the temperature at the base of a hill and you can predict the temperature up to the peak… with error of course, but we know what those errors look like.

This "climate" part of the temperature is then subtracted from T

W = T – f(L,E)

This is the "residual", the 10% that is not explained by latitude and elevation. We call this "weather". It’s this residual that changes over time.

After decomposing the temperature into a fixed climate (f(L,E)) and a variable weather component, we then use kriging to interpolate the weather field.

The Temp we give you is the absolute T. The source is the observations.

Reg Nelson
Reply to  Steven Mosher
November 29, 2018 11:04 am

“The Temp we give you is the absolute T. The source is the observations.”

This is true but misleading. There is no surface temperature data for the oceans. It doesn’t exist.

Steven Mosher
Reply to  Reg Nelson
November 29, 2018 6:47 pm

Reg,
for the ocean we use SST.

Sea surface temperature, which is CONVENTIONALLY defined as the temperature in the first meter or so.
This is different from the skin temp (which satellites see).

IF you read a text and TRY to misunderstand it, then you are not interested in what is true

Reg Nelson
Reply to  Steven Mosher
November 29, 2018 8:21 pm

Steven, as I said in my post there is no SST data to use. It doesn’t exist. I didn’t misunderstand anything. Where does the SST data that you use come from? Ship buckets? ARGO floats that are not fixed in position and whose data initially showed cooling before the data was manipulated?

Gamecock
November 29, 2018 10:22 am

“I do not believe in the collective wisdom of individual ignorance.” – Thomas Carlyle 1795-1881

Steven Mosher
November 29, 2018 10:28 am

“Based on how closely the average of the two extremely poor ensemble members matches the data, I suspect Berkeley Earth used the model mean of one of the groups of historic simulations associated with one of the RCP scenarios. I don’t know for sure, but I’ll look. Maybe one of the regular denizens at WattsUpWithThat who work with Berkeley Earth will provide the answer and save me some time.”

dumb

do you randomly speculate on shit that is written up in papers and then ask others to correct you?

http://berkeleyearth.org/wp-content/uploads/2015/08/Methods-Appendix-GIGS-13-103a.pdf

seriously Bob, is it too much to ask you to actually read a paper or ask me?

I have people (children, high schoolers, grad students, scientists, politicians) who write me every month to ask me to explain what we did. And I answer all their mails.

None of them, not even the school children, would speculate when they could just ask.

Further if you want to see how you should do comparisons

Look

Reg Nelson
Reply to  Steven Mosher
November 29, 2018 11:21 am

I have a question for you, Steven. What percent of the data in your paper is actual, measured data that hasn’t been adjusted?

David Bidwell
Reply to  Steven Mosher
November 29, 2018 1:03 pm

Someone is getting a bit testy. Scientific criticism is not an assault on your ego. Science can move forward if criticism/doubt is discussed, reasonably. Take it as an opportunity for education, for all our benefits.

Steven Mosher
Reply to  Bob Tisdale
November 29, 2018 8:19 pm

Bob

here is what you wrote

‘And there are people who wonder why I’m a heretic of the religion of human-induced global warming/climate change. At least I’m a happy heretic. I laugh all the time about the nonsense prepared by whiny alarmists.”

1. you speculate, wrongly, that we use models for our absolute T.
2. you call their work nonsense
3. you call them alarmist and whiny

And when someone hits you back with facts ( You could have asked me, but preferred
to throw a false speculation out there) you call them rude.

snowflake much.

Look, if you want to impugn the character and work of other people with your posts
I have no issue. It’s a free world and I love free speech. But do not expect me to be polite
when you are rude to begin with.

Reg Nelson
Reply to  Steven Mosher
November 29, 2018 9:14 pm

Isn’t kriging a form of modelling? If not what exactly is it?

It’s certainly not the truth, just someone’s biased version of the truth.

Max Dupilka
November 29, 2018 10:37 am

When it comes to weather forecasting we often see large differences between various models. I have found, through many years of experience, that just averaging between, say, two models seldom produces the best result. Generally the actual outcome will be much closer to one model output than another. The problem is, you never know ahead of time which model will be best. And the best one in one case may be the worst in another case.

Clyde Spencer
Reply to  Max Dupilka
November 29, 2018 10:55 am

Max,
The model run that day by Sherlock Holmes’ smarter brother, Shear Luck?

AZ1971
November 29, 2018 10:39 am

Given how poor the spatial and temporal temperature record was prior to 1950 (or 1970, or 1980, take your pick), and with all the data massaging being done to bring locations in line with others nearby to account for UHI and poor siting, how reliably can it be said what the global mean temperature was for the years 1850-1950?

whiten
November 29, 2018 10:48 am

A silly question.

What actually is a “Jubile”???

How far back in time one has to go for the “source” of this thingy called “Jubile?
How scientific one may think or claim the meaning of “Jubile” could be in context of the initial condition, the source, versus the modern definition and application of it?

Any one?
I know this maybe a bit out of line, but got to say it.

The point I am trying to make here is that even thousands of years ago, the men of learning and knowledge would very much make sure not to average in the context of the “absolute” when it came to data measurements and observations, especially in regard to the dependence on initial conditions, even when averaging in the context of anomalies was not considered strange or a blasphemy…and was even considered helpful if done properly and with no biases involved.

Sorry again for this comment, which seems off topic…and probably it could be.

cheers

Mark Hansford
November 29, 2018 10:55 am

Anomalies are brilliant arent they, they can be used very effectively to show through colour or graph any argument you wish to make. Just make sure the base of the anomalies is either not stated or in very small print somewhere.

I use a 10 day global forecasting site where it allows observation in absolute or anomaly temperatures. It uses the dates 1979 – 2000 average as the anomaly base (actually for once clearly stated). if you miss this bit of information you may be led to conclude that the current weather is anomalously warm. I dont understand why, when you give so many variables this particular parameter is always fixed. Using this time scale only adds in 2 years of the warmth created by the 1998 El Nino and nearly 20 years of generally cooler temperatures.

I can understand perhaps not going back to before the satellite data, as this generally involves splicing data sets or averaging differently calibrated equipment. However, why in 2018 has the anomaly base not moved to give a new average? Or, if it is kept to provide a constant base, why is an alternative not supplied? I would expect to see at least an extension to 1979–2010, for instance, as a decadal base, or even 1989–2010 as a progression, or best of all an adjustable base. Surely the longer the data set, the more accurate the representation of the anomaly.

Could it possibly be that using this as a base for anomalies gives the greatest amount of ‘hot’ colours on the charts, or is it just me being cynical? Had the warmer years after the 1998 El Niño been included, the colours would now be getting progressively colder.
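Mechanically, moving the base period is trivial: subtract the series mean over the new reference window and every anomaly shifts by a constant (strictly, you would subtract each calendar month’s own mean over the new window, but the constant shift below shows the idea). A rough sketch, assuming you already have a monthly anomaly series on a 1979–2000 base; the arrays here are placeholders:

```python
import numpy as np

# Placeholder series: monthly anomalies (deg C) on a 1979-2000 base,
# plus the calendar year of each value. Substitute a real dataset.
years = np.repeat(np.arange(1979, 2019), 12)
anoms = np.random.default_rng(0).normal(0.3, 0.2, size=years.size)

# Re-baseline to 1979-2010 by subtracting the mean over the new window.
in_new_base = (years >= 1979) & (years <= 2010)
rebased = anoms - anoms[in_new_base].mean()

print("mean over 1979-2010, old base:", round(float(anoms[in_new_base].mean()), 3))
print("mean over 1979-2010, new base:", round(float(rebased[in_new_base].mean()), 3))  # ~0 by construction
```

The data are identical either way; only the zero line moves, which is why the choice of base period is purely a presentation decision.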

DanH
Reply to  Mark Hansford
November 29, 2018 12:43 pm

There appear to be certain groups that benefit from showing these relative/anomaly values. AGW groups benefit from this, as has been shown here. In the medical and pharmacology fields there is a measure called “relative risk” that is used to justify certain conclusions, particularly about the efficacy of drugs. Absolute risk is often minimal, so it is not presented. Interesting games are being played in multiple fields.
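A quick worked example of the relative-versus-absolute distinction (the rates are invented for illustration): if a treatment cuts an event rate from 2 in 1,000 to 1 in 1,000, the relative risk reduction is 50% while the absolute risk reduction is only 0.1 percentage points.

```python
# Invented event rates, purely to illustrate relative vs absolute risk.
control   = 2 / 1000      # 0.2% event rate without treatment
treatment = 1 / 1000      # 0.1% event rate with treatment

relative_reduction = (control - treatment) / control   # reported as "50% reduction"
absolute_reduction = control - treatment               # 0.1 percentage points

print(f"relative risk reduction: {relative_reduction:.0%}")
print(f"absolute risk reduction: {absolute_reduction:.3%}")
```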

ChrisB
November 29, 2018 12:25 pm

The only take-home message I got from this excellent presentation was that climate models are unable to compute absolute temperatures.

This message is mind-boggling in its repercussions. How come models using the same inputs and the same physical principles vary so much in their outputs? Especially when the absolute temperature is much, much bigger than the annual variability.

This suggests the models simply cannot compute a consistent baseline temperature accurately. Rather, all the models operate on differences in the model inputs, i.e. first-order differentials. In other words, they are clueless about the integration constant. The inability to compute this constant suggests to me that these models are flawed from the outset, because isn’t that the aim of the entire exercise?

This is quite a spectacular insight, thank you Bob.
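One way to see the ‘integration constant’ point with a deliberately crude toy (a zero-dimensional energy balance, nothing like how a GCM actually works, and every number below is an assumption for illustration): a small change in a poorly constrained parameter such as the effective emissivity shifts the absolute baseline temperature by the better part of a degree, while the anomaly-style response to an added forcing barely changes.

```python
# Zero-dimensional energy balance toy: sigma * eps * T^4 = (S / 4) * (1 - albedo).
# NOT how GCMs compute temperature; it only illustrates why the absolute
# baseline is more uncertain than the response to a change in forcing.
SIGMA   = 5.670e-8     # Stefan-Boltzmann constant, W m-2 K-4
S       = 1361.0       # solar constant, W m-2
ALBEDO  = 0.30         # assumed planetary albedo
FORCING = 3.7          # assumed forcing from doubled CO2, W m-2

def equilibrium_temperature(eps):
    """Equilibrium surface temperature for a given effective emissivity."""
    return ((S / 4.0) * (1.0 - ALBEDO) / (SIGMA * eps)) ** 0.25

for eps in (0.612, 0.605):                 # two slightly different 'tunings'
    T = equilibrium_temperature(eps)
    response = FORCING / (4.0 * eps * SIGMA * T ** 3)   # linearized warming
    print(f"eps = {eps}: baseline T = {T:6.2f} K, response to +3.7 W/m2 = {response:4.2f} K")

# The two baselines differ by roughly 0.8 K, yet the responses agree to about
# 0.01 K: the anomaly-like quantity is far better constrained than the level.
```

Nick Stokes makes essentially the same argument further down the thread.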

Steven Mosher
Reply to  ChrisB
November 29, 2018 6:39 pm

“The only take-home message I got from this excellent presentation was that climate models are unable to compute absolute temperatures.”

wrong

Reply to  Steven Mosher
November 29, 2018 6:42 pm

Correct Steve!

Climate models are able “to compute absolute temperatures” wrong(ly).

Steven Mosher
Reply to  David Middleton
November 29, 2018 6:50 pm

All models are wrong.

For ALL models of any kind.

Observation – Model Predicted value = e

and “e” isn’t zero.

It is trivially true and non-informative to note that all predictions will have error.

The question is always: what is the source of the error?
And then: do we have a better understanding?

LdB
Reply to  Steven Mosher
November 29, 2018 7:35 pm

With time series models you missed one: a model prediction is only useful for a limited time 🙂

Coeur de Lion
November 29, 2018 1:01 pm

What happens if the globe cools? Who takes the credit? Us or them?

November 29, 2018 1:59 pm

A cumbersome title to obscure what is really very simple arithmetic.

“So we can say, because the GISS-E2-H p3 ensemble member is not only off in terms of global mean surface temperature (too high), it is also off in terms of time.”
No, you can’t, at least if you are trying to convey meaning. The simple fact is that the model is running about a degree warmer. That is all. It also happens that there is a trend, so you can alternatively interpret the displacement as being in time. But there is no point; it doesn’t add anything.
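The arithmetic behind the two readings is the same either way: divide the constant offset by the trend and you get the equivalent displacement in time. A sketch, with the trend value assumed purely for illustration:

```python
# Interpreting a constant temperature offset as a time displacement.
# Both numbers are assumptions for illustration only.
offset_C       = 1.0      # model runs about 1 deg C warmer than the data
trend_C_per_yr = 0.008    # assumed long-term warming trend, deg C per year

equivalent_years = offset_C / trend_C_per_yr
print(f"A {offset_C} C offset against a {trend_C_per_yr} C/yr trend "
      f"is equivalent to a shift of about {equivalent_years:.0f} years.")
```

Whether that restatement adds information or merely repackages the same one-degree offset is the point of disagreement here.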

sycomputing
Reply to  Nick Stokes
November 29, 2018 4:32 pm

No, you can’t, at least if you are trying to convey meaning. The simple fact is that the model is running about a degree warmer. That is all.

Well said. I’m with you 97%. Why do we care? Why should we care?

I mean, that the models are consistently wrong really is meaningless to our overall assurance of the scientific certainty that our reliance on those models to predict the climate’s future state should govern our assumptions on energy production in the present, is it not? What sort of addlepate doesn’t get that?

As she, the preeminent mulier prudentissima (most prudent of women) of this present (and no doubt, future-pasts) age recently so thoughtfully and eruditely expressed before another group of recalcitrant ne’er-do-wells also demanding some modicum of precision, “What difference – at this point, what difference does it make?”

Indeed.

Reply to  sycomputing
November 29, 2018 5:51 pm

“the models are consistently wrong”
Well, as it is said, all models are wrong, but some are useful. In fact, as anyone who has tried to solve heat transfer problems knows, it is perfectly possible to get a useful solution with all patterns and features accurate, but with an uncertain base level of temperature. The reason is that the laws of heat transfer, starting with Fourier’s Law, relate the real physics (flux) to temperature difference. The main exception is the Stefan-Boltzmann equation, which includes T in K. But as folks sometimes remind us with silly graphs, climate fluctuations are a small percentage of that. The other main exception is phase change, especially of water.

This means that Earth’s temperature is weakly fixed (real or GCM); it doesn’t take much to shift it up or down, and the big processes that shift heat in or out can work almost as well at a temperature a degree or so different. That is the reason for high sensitivity to CO2. It’s a pity that we can’t also function independently of temperature.
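To put a number on the Stefan–Boltzmann exception: the flux sensitivity is dF/dT = 4σT³, so near 288 K a 1 K shift changes the emitted flux by only about 1.4%. A quick check:

```python
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m-2 K-4
T = 288.0            # rough global mean surface temperature, K

flux        = SIGMA * T ** 4      # emitted flux, ~390 W/m2
sensitivity = 4 * SIGMA * T ** 3  # change per kelvin, ~5.4 W/m2 K-1

print(f"emitted flux at {T:.0f} K: {flux:.0f} W/m2")
print(f"change per 1 K: {sensitivity:.1f} W/m2 ({sensitivity / flux:.1%} of the flux)")
```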

sycomputing
Reply to  Nick Stokes
November 29, 2018 6:51 pm

Well, as it is said, all models are wrong, but some are useful . . . That is the reason for high sensitivity to CO2. It’s a pity that we can’t also function independently of temperature.

And just to add to what you’ve said, is it not also a pity that we (or at least some of us) can’t (or in *their* case, “won’t”) function independently of the usefully faulty modeling ensembles, and rather just faithfully believe the independent-of-useful-evidence certainty that regardless of the fact that we haven’t really a clue just yet how to apply our physics to the inner workings of the outer climate, we know (or at least you and I know), with as much certainty as we faithfully believe it to be true, that nevertheless the climate (and therefore, CO2 sensitivity) works like we faithfully believe it does?

As the infallible IPCC has usefully (and independently, I must add) stated:

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

http://ipcc.ch/ipccreports/tar/wg1/505.htm

Given the above, how could anyone doubt the conclusions of our consistently usefully faulty climate modeling ensemble schemes combined with the faithful usefulness of our lack of any certainty with regard to the physics of the climate?

I mean, really . . . what gives here with the ilks of these?

Steven Mosher
Reply to  Nick Stokes
November 29, 2018 6:52 pm

The model runs within 10% of the true figure!!!

This is an astounding level of accuracy for a complex model

sycomputing
Reply to  Steven Mosher
November 29, 2018 7:16 pm

This is an astounding level of accuracy for a complex model

Infinite “+”‘s for you Steven! And as Nick has usefully pointed out in faith above, though “all models are wrong,” it is nevertheless true that all models are useful for our belief, and by necessity therefore, our actions!

“Hear! Hear!” as we believe with all the usefulness that science historically has ascribed to Faith, that past performance is indeed (despite E.F. Hutton’s insistence to the contrary) indicative of future results!

RobR
Reply to  Nick Stokes
November 30, 2018 6:52 am

Nick. I appreciate comments from you and Mosh and think you guys shoot straight. Since the models are tuned for fidelity with historic temperatures, isn’t a degree a BFD in such a brief period of time?

Why try to sell it any other way? At the very least (in the short term), we can be confident that said model fails to account for natural variability, assumes excessive forcing, or both.

Reply to  RobR
November 30, 2018 5:03 pm

Rob,
“Since the models are tuned for fidelity with historic temperatures, isn’t a degree a BFD in such a brief period of time?”
It isn’t a degree in a period of time. It is a 1 degree offset, like a constant of integration. Bob’s arithmetic, as I noted, just creates confusion. There is a 1° warming, and yes, that is a big deal. The fact that you can line that up to look like a 1° difference due to model disparity is just a coincidence.

The models aren’t tuned for fidelity with historic temperatures. Tuning is used to pin down specific parameters that are otherwise poorly characterised. If it is tuned against temperature, it will usually be against a thirty-year period, probably pre-industrial. The purpose of tuning is to get the model right, not to emulate a specific history affected by GHG change.

November 29, 2018 5:56 pm

Two additional issues with the use of temperature anomalies are that (1) the trend behavior of the individual calendar months can be very different, and (2) that difference violates the fixed-seasonal-cycle assumption underlying the anomaly method of removing the seasonal cycle. There is also the issue of using OLS linear regression to understand temperature trends without paying attention to the OLS assumptions and their violations in the data.

I think the way to study temperature trends is to study them separately for the twelve calendar months and, for even greater accuracy, for the 52 weeks; a sketch of the month-by-month approach follows the links below. Please see

https://tambonthongchai.com/2018/08/17/trendprofile/

http://www.academia.edu/attachments/57223535/download_file?s=portfolio

http://www.academia.edu/attachments/49387730/download_file?s=portfolio
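Here is the month-by-month sketch referred to above, assuming a monthly temperature series with its calendar year and month available; the arrays are placeholders, and nothing here addresses the OLS-assumption issues (autocorrelation and the like) that the comment also raises:

```python
import numpy as np

# Placeholder monthly series: substitute a real dataset.
years  = np.repeat(np.arange(1950, 2019), 12)         # calendar year of each value
months = np.tile(np.arange(1, 13), 2019 - 1950)       # calendar month (1-12)
temps  = (14.0 + 0.01 * (years - 1950)
          + np.random.default_rng(1).normal(0.0, 0.3, size=years.size))

# Fit an OLS trend separately for each calendar month.
for m in range(1, 13):
    in_month = months == m
    slope, intercept = np.polyfit(years[in_month], temps[in_month], 1)
    print(f"month {m:2d}: trend = {slope * 10:+.3f} deg C per decade")
```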

November 29, 2018 6:18 pm

If you want to show that CO2 causes warming, you graph CO2 on the x-axis and some metric of temperature on the y-axis and compare them with statistical least squares. If you do so, you will find there is no statistically significant correlation, which resolves the issue completely. If you want to claim that sunspots affect some metric of global mean temperature, you graph sunspots on the x-axis and global mean temperature on the y-axis and generate a least-squares line and its correlation coefficient. This is introductory statistics, so why has this not been done? Because there are no correlations above noise with either CO2 or sunspots to show.

Trends are shown instead, which amounts to stating that time creates climate, and so, clearly, to stop undesirable climate we must go back in time. Since that is not a reasonable conclusion, it shows at least that least-squares trends of time series of unrelated variables have no meaning and no predictive value.

Since all climate models continue to diverge from observations of global mean temperature, it is not possible to conclude merely that they are wrong; it is only possible to conclude that they are total shit, random number generators whose divergence from observation grows without bound over time. To be even hopefully correctable, they would have to oscillate around the observed temperature mean, which they never do.
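Mechanically, the regression described above is only a few lines; the slope, r², and p-value are then whatever the data you feed it produce, and for annual time series the ordinary p-value overstates significance because the residuals are autocorrelated, which ties back to the OLS-assumptions point made a couple of comments up. A sketch with placeholder arrays:

```python
import numpy as np
from scipy import stats

# Placeholder arrays: substitute real annual-mean CO2 (ppm) and a temperature
# metric (deg C) for the same years.
co2  = np.array([340.0, 355.0, 370.0, 385.0, 400.0, 410.0])
temp = np.array([14.1, 14.2, 14.4, 14.5, 14.7, 14.8])

result = stats.linregress(co2, temp)   # OLS of temperature on CO2
print(f"slope = {result.slope:.4f} deg C per ppm")
print(f"r^2 = {result.rvalue ** 2:.3f}, p-value = {result.pvalue:.3g}")
```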

Peter Sable
November 29, 2018 9:14 pm

As Willis (and common sense) has shown, evaporation rates change with absolute temperature.

So the models being off by that far generally means their results are all hooey.

As if we needed another reason
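Peter’s point can be put in numbers with the Clausius–Clapeyron relation: saturation vapor pressure, which caps evaporation, rises by roughly 6–7% per kelvin near typical surface temperatures, so an error of a degree in a model’s absolute baseline feeds through to evaporation-related terms at the several-percent level. A rough sketch (textbook constants; the temperatures are just examples):

```python
import math

L_V = 2.5e6                 # latent heat of vaporization, J/kg
R_V = 461.5                 # gas constant for water vapor, J/(kg K)
E_S0, T0 = 611.0, 273.15    # reference saturation vapor pressure (Pa) at T0 (K)

def saturation_vapor_pressure(T):
    """Clausius-Clapeyron with constant latent heat (an approximation)."""
    return E_S0 * math.exp((L_V / R_V) * (1.0 / T0 - 1.0 / T))

for T in (287.0, 288.0, 289.0):   # example absolute surface temperatures, K
    print(f"T = {T:.0f} K: e_s = {saturation_vapor_pressure(T):6.1f} Pa")

# Each extra kelvin raises e_s by about 6-7%, so an offset in the absolute
# baseline temperature shifts the evaporation ceiling by several percent.
```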

November 30, 2018 2:08 pm

Emergency cancelled, resume all normal activities.
Good night folks, drive safely.
(The odds that you will come to harm in an auto accident are infinitely higher than the chances you will come to harm due to a changing climate caused by additional CO2 in the air.)