Study: A new method to evaluate overall performance of a climate model

From the INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES and the “pyramid schemes” department:

A new method to evaluate overall performance of a climate model

Many climate-related studies, such as detection and attribution of historical climate change, projections of future climate and environments, and adaptation to future climate change, rely heavily on the performance of climate models. Concisely summarizing and evaluating model performance becomes increasingly important for climate model intercomparison and application, especially as more and more climate models participate in international model intercomparison projects.

A pyramid chart showing the relationship between the three levels of metrics for multivariable integrated evaluation of climate model performance. Credit: Xu Zhongfeng

Most current model evaluation metrics, e.g., root mean square error (RMSE), correlation coefficient, and standard deviation, measure model performance in simulating an individual variable. However, one often needs to evaluate a model’s overall performance in simulating multiple variables. To fill this gap, an article published in Geosci. Model Dev. presents a new multivariable integrated evaluation (MVIE) method.

“The MVIE includes three levels of statistical metrics, which can provide a comprehensive and quantitative evaluation of model performance,” says XU, the first author of the study, from the Institute of Atmospheric Physics, Chinese Academy of Sciences. The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables. The second level of metrics, comprising four newly developed statistical quantities, provides an integrated evaluation of model performance in simulating multiple fields. The third level of metrics, the multivariable integrated evaluation index (MIEI), further summarizes three statistical quantities of the second level into a single index and can be used to rank the performance of various climate models. Unlike the commonly used RMSE-based metrics, the MIEI satisfies the criterion that a model performance index should vary monotonically as the model performance improves.
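
As a rough illustration of the first level of metrics (a minimal sketch, not the authors' code; the toy fields and their shapes are invented), these per-variable statistics are straightforward to compute:

```python
# Minimal sketch of "first level" metrics for one variable: correlation
# coefficient, RMS value, and RMSE. Toy data; not from the paper.
import numpy as np

def first_level_metrics(model, obs):
    """Compare one modeled field against observations (grids flattened)."""
    model = np.asarray(model, dtype=float).ravel()
    obs = np.asarray(obs, dtype=float).ravel()
    corr = np.corrcoef(model, obs)[0, 1]          # pattern correlation
    rms = np.sqrt(np.mean(model ** 2))            # RMS value of the model field
    rmse = np.sqrt(np.mean((model - obs) ** 2))   # root mean square error
    return corr, rms, rmse

# Hypothetical 10x10 "temperature" grid; the "model" is obs plus noise.
rng = np.random.default_rng(0)
obs = rng.normal(288.0, 5.0, (10, 10))
model = obs + rng.normal(0.0, 1.0, (10, 10))
print(first_level_metrics(model, obs))
```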

According to the study, each higher level of metrics is derived from, and concisely summarizes, the lower level of metrics. “Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics,” XU notes. He therefore suggests, “To provide a more comprehensive and detailed evaluation of model performance, one can use all three levels of metrics.”

###

The paper: https://www.geosci-model-dev.net/10/3805/2017/

Abstract:

This paper develops a multivariable integrated evaluation (MVIE) method to measure the overall performance of a climate model in simulating multiple fields. The general idea of MVIE is to group various scalar fields into a vector field and compare the constructed vector field against the observed one using the vector field evaluation (VFE) diagram. The VFE diagram was devised based on the cosine relationship between three statistical quantities: root mean square length (RMSL) of a vector field, vector field similarity coefficient, and root mean square vector deviation (RMSVD). The three statistical quantities can reasonably represent the corresponding statistics between two multidimensional vector fields. Therefore, one can summarize the three statistics of multiple scalar fields using the VFE diagram and facilitate the intercomparison of model performance. The VFE diagram can illustrate how much the overall root mean square deviation of various fields is attributable to the differences in the root mean square value and how much is due to the poor pattern similarity. The MVIE method can be flexibly applied to full fields (including both the mean and anomaly) or anomaly fields depending on the application. We also propose a multivariable integrated evaluation index (MIEI) which takes the amplitude and pattern similarity of multiple scalar fields into account. The MIEI is expected to provide a more accurate evaluation of model performance in simulating multiple fields. The MIEI, VFE diagram, and commonly used statistical metrics for individual variables constitute a hierarchical evaluation methodology, which can provide a more comprehensive evaluation of model performance.
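
The abstract's three vector-field statistics and their cosine relationship can be sketched directly from the definitions it gives. The sketch below is an interpretation, not the paper's code: normalizing each variable by the observed standard deviation is one plausible choice for combining unlike units, and the law-of-cosines identity in the final check follows from these definitions.

```python
# Sketch of the VFE-diagram statistics: RMSL, vector field similarity
# coefficient, and RMSVD for scalar fields grouped into a vector field.
import numpy as np

def vfe_statistics(model_fields, obs_fields):
    """model_fields, obs_fields: lists of equally shaped arrays, one per variable."""
    # Normalize each variable by the observed std so unlike units are comparable
    # (an assumption made for this sketch; the paper may normalize differently).
    M = np.stack([m.ravel() / o.std() for m, o in zip(model_fields, obs_fields)])
    O = np.stack([o.ravel() / o.std() for o in obs_fields])
    rmsl_m = np.sqrt(np.mean(np.sum(M ** 2, axis=0)))   # RMS length, model
    rmsl_o = np.sqrt(np.mean(np.sum(O ** 2, axis=0)))   # RMS length, obs
    r_v = np.mean(np.sum(M * O, axis=0)) / (rmsl_m * rmsl_o)  # similarity coeff.
    rmsvd = np.sqrt(np.mean(np.sum((M - O) ** 2, axis=0)))    # RMS vector deviation
    # The cosine relationship the abstract refers to holds by construction here:
    assert np.isclose(rmsvd ** 2,
                      rmsl_m ** 2 + rmsl_o ** 2 - 2 * rmsl_m * rmsl_o * r_v)
    return rmsl_m, rmsl_o, r_v, rmsvd

# Hypothetical two-variable example (a temperature and a precipitation grid).
rng = np.random.default_rng(1)
obs_t, obs_p = rng.normal(288, 5, (10, 10)), rng.normal(3, 1, (10, 10))
mod_t, mod_p = obs_t + rng.normal(0, 1, (10, 10)), obs_p + rng.normal(0, 0.5, (10, 10))
print(vfe_statistics([mod_t, mod_p], [obs_t, obs_p]))
```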

Mark from the Midwest
November 3, 2017 9:37 am

We already have methods to evaluate any model; it sounds like these guys are trying to re-write the rules. Of course, for many so-called climate scientists the real criterion is whether the model helps them to get more funding.

Reply to  Mark from the Midwest
November 3, 2017 9:58 am

The derivatives market of climate modeling. Modeling the model.

Reply to  DonM
November 3, 2017 10:13 am

It’s a model derived from expectations. The main failure of climate science is not modifying those expectations when they are demonstrably incorrect.

Reply to  DonM
November 3, 2017 10:27 am

They just pressure the data keepers to make convenient adjustments to observation.
That is post-modern climate science at work.

Louis Hooffstetter
Reply to  DonM
November 3, 2017 6:36 pm

Richard Feynman said it best:

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”
Dr. Richard P. Feynman

crackers345
Reply to  DonM
November 3, 2017 7:09 pm

LouisH: climate isn’t
an experimental science.

Patrick MJD
Reply to  DonM
November 3, 2017 10:37 pm

“crackers345 November 3, 2017 at 7:09 pm”

Define climate, mathematically.

Reply to  DonM
November 4, 2017 10:14 am

Patrick MJD,

Here are the equations that matter, relative to the planet’s sensitivity to forcing (i.e. the effect on the surface temperature of changes in Pi).

Pi(t) = Po(t) + dE(t)/dt

Pi(t) is the post albedo energy arriving from the Sun, given as Psun*(1 – a), where Psun is the incoming solar power and a is the albedo. Po(t) is the energy emitted by the planet, which in LTE (when dE(t)/dt == 0) is equal to Pi. E is the energy stored by the planet, which increases when Pi > Po and decreases when Pi < Po.

Ps(t) = o*Ts^4
Po(t) = e*Ps(t)
Po(t) = e*o*Ts^4

Ps(t) is the emission of the surface, where the corresponding temperature, Ts, is given by the SB law. The constant o is the SB constant (5.67E-8 Watt/m^2 per degree K^4). The coefficient e is the ratio between the power emitted by the planet and the power emitted by the surface, and is the emissivity of an EQUIVALENT gray body emitter whose temperature is Ts and whose emissivity is e.

Ts(t) = k*E(t)

The surface temperature Ts is linearly proportional to the energy stored by the system E (i.e. one calorie increases the temperature of 1cc of water by 1C).

Solving these equations for the sensitivity, dTs(t)/dPi(t), we get

dTs(t)/dPi(t) = (4*e*o*Ts^3)^-1

The measured value for e is about 0.61 and the measured value for Ts is about 287.5 K. Plugging in the numbers, the sensitivity is 0.3 C per W/m^2. To the extent that some solar input does work that does not affect the surface temperature, the sensitivity will necessarily be less than this. Note that k, which is the linear proportionality constant between stored energy and temperature, drops out of the equation for the sensitivity; moreover, the sensitivity is highly temperature dependent, going as T^-3.
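
For what it's worth, the closing arithmetic checks out; a short sketch evaluating the commenter's own formula (not an endorsement of the derivation):

```python
# Evaluate dTs/dPi = 1/(4*e*o*Ts^3) with the values quoted above.
SIGMA = 5.67e-8                    # the "o" above: S-B constant, W/m^2/K^4
e, Ts = 0.61, 287.5                # values quoted in the comment
print(1.0 / (4.0 * e * SIGMA * Ts ** 3))   # ~0.30 C per W/m^2, as stated
```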

Reply to  DonM
November 4, 2017 10:18 am

The formatter removed the text around the ‘<’ and ‘>’ characters. It should say,

When Pi is greater than Po, E increases along with Ts. When Pi is less than Po, E decreases along with Ts.

Reply to  DonM
November 4, 2017 11:05 am

co2isnotevil November 4, 2017 at 10:14 am

Patrick MJD,

Here are the equations that matter, relative to the planet’s sensitivity to forcing (i.e. the effect on the surface temperature of changes in Pi).

Pi(t) = Po(t) + dE(t)/dt

Pi(t) is the post albedo energy arriving from the Sun, given as Psun*(1 – a), where Psun is the incoming solar power and a is the albedo. …

Thanks, co2. The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.

This is because, in the all-important tropics where most of the solar energy enters the system, albedo goes up with the temperature. They are very highly correlated, as you can see below.

Since your analysis does NOT include this critical active temperature control mechanism, I fear that it cannot be used to calculate the sensitivity.

w.

Reply to  DonM
November 4, 2017 11:57 am

Willis,

“The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.”

Not as much as you think. Yes, the albedo in polar regions is larger than in equatorial regions owing to ice and snow, but the decrease in albedo from melting ice and snow is quite small. It was larger coming out of the last ice age, when there was a lot more of the surface covered in ice, but today the average fraction of the planet covered by ice is pretty close to the minimum possible. Average polar temps are far below freezing and no amount of GHG action will ever be enough to melt it all and prevent it from returning in the winter. About the only thing that will cause this is when the Sun enters its red giant phase.

Considering that 2/3 of the planet is covered by clouds, which have about the same reflectivity as ice, 2/3 of all future melted ice has no effect on the net albedo. Polar regions receive less insolation to begin with, and when you calculate the increase in the incident power from melting all ice and snow on the planet and distribute that power across the entire planet, it’s only a few W/m^2 and less than what’s required to achieve the global emissions increase (temperature increase) they claim arises from doubling CO2.

The sensitivity expressed as a change in temperature per change in input power, dTs(t)/dPi(t), is already a function of temperature, and that function of temperature is independent of the albedo. Nonetheless, since (1-a) enters the equations linearly, just as e does, whatever effect albedo has can be rolled into an equivalent value of e, both of which can be expressed as functions of the fraction of the planet covered by clouds. Note that the sensitivity expressed as a change in surface emissions per change in input power is constant, where

dPs(t)/dPi(t) = 1/e

Yes, e is a higher-order function of temperature, but when we measure it over the last couple of decades it’s remarkably constant, coming in at about 0.6, where dPs(t)/dPi(t) is about 1.6 W/m^2 of Ps per W/m^2 of Pi. It’s even relatively constant from the poles to the equator, where e increases only slightly as the average temperature transitions through freezing.

crackers345
Reply to  DonM
November 4, 2017 7:39 pm

Patrick MJD commented >>Define climate, mathematically.<<

too clever by five-halves.

Reply to  Mark from the Midwest
November 3, 2017 11:04 am

One way to see if a model of a causal system is at least plausible is to vary the initial conditions and run the model. The model should always end up in the same state.

Curve-fitting GCMs to expectations requires so many assumptions and adjustments that any real physics gets lost. The observable consequence of this is the large effect initial conditions have on the modeled results. This is often misinterpreted as the consequence of chaos and complexity but is more symptomatic of an unstable model or uninitialized data.
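
A toy illustration of that plausibility test (entirely invented; a one-variable relaxation stands in for a model, which a real GCM emphatically is not):

```python
# Run a trivial "model" from several initial conditions; a stable model of a
# forced, damped system should forget its initial state and converge.
def toy_model(x0, forcing=1.0, damping=0.1, steps=2000):
    x = x0
    for _ in range(steps):
        x += forcing - damping * x   # relax toward forcing/damping
    return x

print([toy_model(x0) for x0 in (-50.0, 0.0, 50.0)])  # all ~10.0
```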

crackers345
Reply to  co2isnotevil
November 3, 2017 7:10 pm

the full set of initial conditions aren’t
known — in particular deep ocean
currents, and aerosol loading.

Reply to  crackers345
November 3, 2017 8:12 pm

cracker345,
Initial conditions establish the starting values for state variables. The model adjusts the state variables as it runs, and if the model is correct, the state variables will converge to the correct values, independent of the starting values.

If you’re talking about model coefficients, which the model doesn’t adjust, it’s even worse, because only one value of the coefficient is correct and all others are not; thus averaging across different values doesn’t help.

Patrick MJD
Reply to  co2isnotevil
November 3, 2017 10:39 pm

“crackers345 November 3, 2017 at 7:10 pm

the full set of initial conditions aren’t
known”

So “conditions” in the models are not known? Thank you for confirming models are rubbish!

crackers345
Reply to  co2isnotevil
November 4, 2017 7:41 pm

evil: climate models do not
solve an initial value problem,
they solve a boundary value
problem.

(ever studied PDEs?)

Reply to  crackers345
November 5, 2017 9:27 am

cracker345,
If it’s solving boundary problems, then it’s solving the wrong problem. Models are supposed to model how state changes. Besides, the boundaries are well known and there are only two of them that matter. One is the boundary between the atmosphere and space and the other is between the atmosphere and the surface.

crackers345
Reply to  co2isnotevil
November 4, 2017 7:43 pm

Patrick MJD commented >>So “conditions” in the models are not known? Thank you for confirming models are rubbish! <<

you clearly do not understand
GCMs, or how they are initialized
("spun-up").

(they don't solve an initial value problem.)

Andy Pattullo
Reply to  Mark from the Midwest
November 3, 2017 1:24 pm

I agree. There is plenty of straightforward evidence and some very valuable expert advice (e.g. Dr. Judith Curry) that most current GCMs are useless in interpreting the real world. I can’t help but think this is an attempt to create some custom metric by which individuals may claim value in models that doesn’t exist (but I could be wrong). It smells a lot like how stinky subprime mortgages were packaged into larger tranches and then folded into major investment vehicles of no real ultimate value while disguising all of the high risk and poor judgment that went into the original loans.

george e. smith
Reply to  Mark from the Midwest
November 3, 2017 2:04 pm

Well isn’t that what …. average …. is ?

Just a fictitious hodge-podge of a bunch of unrelated things that weren’t exactly observed by anyone anywhere, any how. But modelers get their jollies by imagining that it means something; well something besides maybe more grant moneys.

G

M Seward
Reply to  Mark from the Midwest
November 3, 2017 4:50 pm

Two possibilities with this.

1. It was put together by Chinese pinheads who think they are angels, and this will tell them how much funding they need to meet their KPIs going forward; the ‘pyramid’ characterisation just did not make it through translation, so did not register.

2. It is a spoof, the giveaway being the ‘pyramid’ characterisation.

Who knows? Who cares?

Carbon BIgfoot
Reply to  Mark from the Midwest
November 8, 2017 2:14 pm

Maslow’s theory of needs does not apply to self-actualization of failed theory.

Latitude
November 3, 2017 9:38 am

Lame….models will never be right when they are constantly changing/adjusting temp history
History that they back cast to today….will not even be the same by the time they run the model
…and all the other crap they do to temp history

…and a few hundred other things

Urederra
Reply to  Latitude
November 3, 2017 2:53 pm

… and when there are several temperature datasets to pick and choose.

Models are often fed with one temperature dataset and the results are compared to a different temperature dataset: weather balloons, or HadCRUT3 vs. HadCRUT4.

markl
November 3, 2017 9:38 am

When applied to current climate models do the results correlate with real world performance?

November 3, 2017 9:46 am

Lipstick on a pig.

Latitude
Reply to  ristvan
November 3, 2017 9:54 am

LOL….yep

MattS
Reply to  ristvan
November 3, 2017 5:24 pm

Well, that’s better than trying to put a pig on lipstick. 🙂

Bruce Cobb
November 3, 2017 9:46 am

The paper has lots of gobbledygook and horseshit, so good on them for that. Could definitely use more cowbell though.

F. Leghorn
November 3, 2017 9:53 am

Funny how ALL models could be evaluated by their actual predictions. I guess that would be too easy.

Edwin
Reply to  F. Leghorn
November 3, 2017 10:34 am

Not if they continually adjust inputs (data) and tweak assumptions to force their preconceived “predictions.” Way back when, just before all this was hitting the media, I was very much a lukewarmer. I then listened to a presentation where it was obvious that the PI was changing the data to fit their assumption. I was also dealing with federal government scientists on other issues. It was not a pleasant interaction. So I began to question everything they were doing, beyond just my normal inborn skepticism. Like many things in and around government, I have begun to believe that for CAGW “scientists” it is about more than just ego and grant money; it is power. They have had a taste of power. Think about it! Almost all the governments in the world have people working on this issue and developing dramatic changes in policies that will affect the economic, and therefore political, structure of the entire world.

WR
Reply to  Edwin
November 3, 2017 11:11 am

There are different levels of true believers. There are the devious and power-hungry, as you say, but also many with ulterior motives (anti-capitalist, anti-American, socialist, etc.), and of course plenty of useful idiots.

Thomas Homer
Reply to  F. Leghorn
November 3, 2017 10:53 am

” … ALL models could be evaluated by their actual predictions”

Even with mostly accurate predictions, a model’s underlying assumptions may be questionable. As an example, models were established for astronomical orbits based on a geocentric assumption. Celestial spheres were necessarily introduced to explain the planetary orbits. These models predicted those orbits quite well. Of course, the geocentric assumption came into question, and new models with a heliocentric assumption were shown to be just as accurate without the need for celestial spheres.

F. Leghorn
Reply to  Thomas Homer
November 3, 2017 12:42 pm

In other words they doctored the data. Deja vu all over again.

Reply to  Thomas Homer
November 3, 2017 1:55 pm

squiggy9000 November 3, 2017 at 12:42 pm

In other words they doctored the data. Deja vu all over again.

No, they kept the data and changed the theory.

Regards,

w.

george e. smith
Reply to  Thomas Homer
November 3, 2017 2:11 pm

And Lunarcentric models would be just as accurate, just more complicated.
Well the Mandelbrot Set is pretty complicated; and it isn’t even a model of anything !

G

Urederra
Reply to  Thomas Homer
November 3, 2017 3:01 pm

No, they kept the data and changed the theory.

As far as I know, there was no physical theory behind the geocentric model. The heliocentric model can be explained by the theory of universal gravitation.

My 2 cents.

Reply to  Thomas Homer
November 3, 2017 6:11 pm

Gew, beg to differ a bit. The Mandelbrot set only becomes visible if you program its recursive function a certain way—100 recursions escape. Done over the complex plane from -1 to +1, -i to +i. It is an inverse of Julia sets. So it exists, for sure. Just not obvious without some math effort. Unlike climate science, it is fully reproducible. I programmed a Mandelbrot set generator myself over 20 years ago. Slow compared to later algorithms, since brute force rather than spherical approximation.

Tom Halla
November 3, 2017 9:55 am

Trying to reduce something as complex as climate to a single number reminds me of Swift and his parody of science.

Ricdre
Reply to  Tom Halla
November 3, 2017 10:14 am

Oh, that’s easy…the answer is 42 (see Hitchhikers Guide to the Galaxy).

george e. smith
Reply to  Tom Halla
November 3, 2017 2:12 pm

How about 3 ; so it’s the same as Pi.

g

Reply to  Tom Halla
November 3, 2017 4:07 pm

They are doing something different

November 3, 2017 10:04 am

Start with a thorough understanding of what causes climate change.

Even a one page summary is acceptable.

Without that understanding, there are no real climate models.

Unfortunately that understanding does not exist today.

Therefore, we have only wild guess computer games, falsely called “climate models”,
that will make wrong predictions … until the temperature actuals are eventually “adjusted”
enough so the predictions look better!

The average temperature of our planet’s surface has remained in a one degree C range since 1880,
even with haphazard measurements, lots of surface area not measured at all, and “adjustments” that may have doubled the warming in the raw data, while the data is owned by people who WANT to see a lot of global warming … yet we’ve been in a one degree C range for 137 years!

Why would anyone with a functioning brain think such a tight average temperature range over 137 years is a “coming climate change catastrophe”?

Climate blog for non-scientists:
http://www.elOnionBloggle.Blogspot.com
over 12,000 page views so far
No ads – no money for me – a public service

Michael Jankowski
November 3, 2017 10:07 am

Poorly worded, but as it notes, most models seem to be scored on an individual variable (global temperature). This at least would seemingly call BS on models that fare miserably at reproducing other parameters with accuracy.

Reply to  Michael Jankowski
November 3, 2017 11:19 am

You kind of hit the nail on the head. Even if one could say that a model semi-accurately forecasted ‘global temps’, so what? What is really needed is accuracy to the point where we know what will happen within areas/regions. Will the Outback, the desert southwest of the US, or the Sahara see most of the warming? Or will it only be at the poles? Or maybe evenly spread? So, so much we (or the modelers) don’t know!

TonyL
November 3, 2017 10:16 am

My understanding is that the Chinese categorically reject CAGW and Climate Change altogether.
So they might be up to something else, like developing a new suite of models that actually produce useful long range forecasts. And maybe taking a poke at Western climate science in the process.

Edwin
Reply to  TonyL
November 3, 2017 10:40 am

Not sure they reject CAGW altogether; China just has an entirely different perspective about climate. In their long recorded history they have faced climate change. They understand that climate changes regardless of what humans do or don’t do. They know that when they have been rich it was far easier to adapt to change. When poor, China has had prolonged suffering and strife. They have learned that to build wealth, besides stealing technology from elsewhere, they need cheap and abundant energy. Since they see themselves as THE rising world power, they are quite happy to allow Europe and North America to play this stupid CAGW game.

November 3, 2017 10:17 am

This climate model effort is analogous to making further adjustments to the epicycles and deferents in a Ptolemaic planetary system. With each refinement, new complications arise elsewhere in the model.

Their underlying fundamental assumptions are wrong in both cases.

Reply to  Joel O’Bryan
November 3, 2017 10:37 am

As a bit of trivia, the Ptolemaic model does work well in the gross sense of the big picture, as long as troublesome movement details are not examined too closely.

Planetarium projectors use the simplified mathematical relationships embodied in Ptolemaic calculations to project the night sky on the curved ceiling, creating the Earth-centric view that wows and astonishes everyone when they first see this very realistic presentation. The Ptolemaic sky-projection model works deceptively well in this application. This is the exact same model trap that climate modelers have fallen into. It appears correct to them in the gross, larger sense, so they believe their models represent fundamental realities of climate. They could not be more mistaken.

For more on the Ptolemaic model used in planetarium projectors:
http://www.polaris.iastate.edu/EveningStar/Unit2/unit2_sub1.htm

Reply to  Joel O’Bryan
November 3, 2017 11:21 am

+1

knr
November 3, 2017 10:32 am

The first rule of climate ‘science’: if the models and reality differ in value, it is reality which is always in error. That somewhat undermines the need for this research. In addition, the authors have made an error in their maths, for it is clear the ‘value’ of any model is not a function of its validity; rather, it is in a direct relationship with the degree of support the model offers to further the AGW faith. Science has f all to do with it.

Edwin
Reply to  knr
November 3, 2017 10:52 am

I have actually had this argument with federal scientists in a public management meeting. After saying that for the issue at hand we had the best data set they had ever seen, I asked: if we had ALL the data but their models disagreed with the data, what would they believe, and what would they base their recommendations to rule and policy makers on? Their answer: the Computer Models. When one of the senior committee officials asked them to explain, they instead requested a 15 minute recess, which turned into an hour. The politically appointed members of the management unit were NOT happy people. It led to a brief investigation of all those in that work unit. It helped, but only briefly.

Reply to  Edwin
November 3, 2017 11:10 am

Heretic!!! They surely labeled you with the dreaded “D” word as a way to salve their cognitive dissonance.

Gary Pearse
November 3, 2017 10:33 am

I’ll wait for McIntyre or Briggs to work this over! First, it’s already compromised science when you have to use statistics at all (that ought to raise a din from all sides). It’s essential for the social sciences, and we know the wiggle room it makes for these irredeemably ideologically corrupted disciplines (of which climate science is the best example).

To me, this analysis constructs a phony “index” that will give a totally wrong model a pass. If you have a hard-wired falsified theory based on CO2 as the basis and you adjust this with a concocted uber-aerosols effect to protect the theory and an additional wrong-signed cloud parameter, you could end up with an excellent forecast with a little prestidigitation and get an index of 0.99.

I’ve been fearing this possibility as a pretext for jailing deplorables and putting the world under elitist governance. Thankfully their hubris and post no-idjit-left-behind enrollment policies in institutions of higher learning seems to have blocked their vision. Adding a coefficient “c” equal to 1/3 to multiply their formula by would have given them a heck of a scary fit.

Clyde Spencer
November 3, 2017 10:46 am

It appears to me that the authors have presented a rigorous, quantitative method for evaluating multiple model results to determine the best compromise model. However, one often is more concerned about one of the variables than the others. They then recommend assigning subjective weighting to the variable(s) of primary interest. They have then degraded the quantitative approach with subjective assessments of the weighting to be assigned.

The authors acknowledge that it is generally recognized that some models do a better job of predicting future temperatures than they do future precipitation, and vice versa. That is an interesting state of affairs, because there is strong interaction between temperature and precipitation. That is, the surface commonly cools down during and immediately after a summer rain, and high surface temperatures may result in virga. So, at first blush, it would appear that there are serious problems with the assumptions or constructs within the models when these interacting variables have different inaccuracies.

It should be obvious that the models aren’t fit for the purpose for which models are usually built, i.e. to predict future states with the perturbation of one or more input variables. Being able to identify the best compromise model is indeed like putting “lipstick on a pig.” What is needed is a paradigm shift in modeling where the numerous output variables, such as temperature and precipitation, are consistent with each other and track historical records much better than the 3X overestimate of future temperatures currently seen.

jpatrick
November 3, 2017 10:58 am

The right way to evaluate a climate model is to watch and wait for a few hundred centuries. This just isn’t compatible with the human lifespan.

Resourceguy
November 3, 2017 11:12 am

The same methodology might be useful in detecting chronic bias not just of the model but the operator and the users.

son of mulder
November 3, 2017 11:19 am

How can the predictions of a chaotic system, compared to reality, be any more than chance?

Reply to  son of mulder
November 3, 2017 11:33 am

Worse than that, you can only compare the results to the past, that is, what has already happened. Is it more than chance that a model can predict global temps accurately in the future even if it happened to stumble across the correct answer one time?

Reply to  son of mulder
November 3, 2017 4:07 pm

climate isnt chaotic

catweazle666
Reply to  Steven Mosher
November 3, 2017 6:06 pm

“climate isnt chaotic”

That’s not what the IPCC say.

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

IPCC Working Group I: The Scientific Basis, Third Assessment Report (TAR), Chapter 14 (final para., 14.2.2.2), p774.

It’s not what Edward Lorenz said either.

“Lorenz’s early insights marked the beginning of a new field of study that impacted not just the field of mathematics but virtually every branch of science–biological, physical and social. In meteorology, it led to the conclusion that it may be fundamentally impossible to predict weather beyond two or three weeks with a reasonable degree of accuracy.

Some scientists have since asserted that the 20th century will be remembered for three scientific revolutions–relativity, quantum mechanics and chaos.”

http://news.mit.edu/2008/obit-lorenz-0416

Now, who to believe…a superannuated English Major with a record of Mannipulating temperature data to fit the AGW narrative or IPCC Working Group I and Ed Lorenz, one of the most distinguished climate scientists ever born…

Patrick MJD
Reply to  Steven Mosher
November 3, 2017 10:42 pm

“Steven Mosher November 3, 2017 at 4:07 pm

climate isnt chaotic”

This has to rate as dumbest post EVAH!

AndyG55
Reply to  Steven Mosher
November 4, 2017 1:09 pm

Yep, Mosh has degenerated into an empty sock.

Maybe Muller removed his hand ?

John
November 3, 2017 11:58 am

Off topic a bit, but I’m curious to get your takes on the flurry of propaganda articles coming out just before the UN climate talks starting on 11/6, and the findings of the “major federal climate report” that was released today.

Reply to  John
November 3, 2017 12:54 pm

Ha ha, the article misstates facts from the very opening by claiming that these models are already “heavily relied on” for their performance. Fraudulent reinitializations are already known to be the practice that allows some scientists to say this with a straight face.

Scott Cater
Reply to  John
November 3, 2017 1:05 pm

My guess is that the creators of this report are holdovers from the prior administration.

Crispin in Waterloo
November 3, 2017 12:11 pm

A computer big enough to run a programme complex enough to realistically represent the evolution of the weather through the ages, thus creating a picture of the climate, would run very slowly. In fact, it would run at about the same speed as the actual climate.

This coincidence would give the modelers something to gauge their success by, as the actual performance could be compared with the computer-calculated performance in real time, side by side, for generations. After some time, tweaking and all, they would be able to demonstrate they can back-cast the whole climate accurately. I think this would be a major step forward.

Reply to  Crispin in Waterloo
November 3, 2017 11:00 pm

Biosphere I, the original experiment.

Walter Sobchak
November 3, 2017 12:51 pm

Mathematical onanism with lubricants.

Another Ian
November 3, 2017 1:12 pm

A comment from a management school where the pyramid of management was being explained

“Oh is that how it works? I thought it was like a vegetarian’s outhouse where the turds float to the top”

Just saying.

November 3, 2017 1:18 pm

The utility and skillfulness of computer models depends on:
1. how well the processes which they model are understood,
2. how faithfully those processes are simulated in the computer code, and
3. whether the results can be repeatedly tested so that the models can be validated and refined.

Specialized models, which try to model reasonably well-understood processes like PGR and radiation transport, are useful, because the processes they model are manageably simple and well-understood.

Weather forecasting models are also useful, even though the processes they model are very complex, because the models’ short-term predictions can be repeatedly tested, allowing the models to be validated and refined.

But more ambitious models, like GCMs, which attempt to simulate the combined effects of many poorly-understood processes, over time periods too long to allow repeated testing and refinement, are of dubious utility.

E.g., NASA’s ModelE2 consists of about a half-million lines of moldy Fortran code, which it is safe to assume nobody actually understands. They’ve got so many fudge factors, “knobs” and pseudo-random number generator calls in there that they can make it do just about anything at all, but it doesn’t in any sense represent an understanding of the Earth’s climate system. What’s more, unlike weather models, which are comparably complex but get tested every week, the predictions of those GCMs are untestable. Ask any computer scientist whether he would trust an untestable 500,000-line Fortran program as the basis for multi-million dollar decisions!

Worst of all are so-called “semi-empirical models,” which aren’t actually models at all. So-called “semi-empirical modeling” is an oxymoron: “modeling” that doesn’t actually model anything. It is similar to modeling, but without reference to any physical basis. It is really just curve-matching. It can be made to produce just about any desired result.

GCMs are subject to criticisms that they don’t accurately model the real world, because of inconsistency with observations of things like clouds and the predicted tropical mid-tropospheric hot spot. Semi-empirical modelers neatly avoid such criticism, by not even trying to model the real world. It’s the worst sort of junk science.

RayG
Reply to  daveburton
November 3, 2017 10:29 pm

Please make that multi-billion dollar decisions, which are much closer to a trillion dollars than a few billion dollars.

November 3, 2017 2:12 pm

“Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics.”

Does this mean destroying accuracy with averaging?

November 3, 2017 2:15 pm

*accuracy of ‘first level’ metrics values

November 3, 2017 2:17 pm

“According to the study, higher level of metrics is derived from and concisely summarizes the lower level of metrics ”

lower resolution?

November 3, 2017 2:18 pm

The obfuscation is strong in this one 😀 I smell a rat

Steve Carousso
November 3, 2017 4:11 pm

quantifying how bad they stink

reallyskeptical
November 3, 2017 4:15 pm

Meanwhile:
“Directly contradicting much of the Trump administration’s position on climate change, 13 federal agencies unveiled an exhaustive scientific report on Friday that says humans are the dominant cause of the global temperature rise that has created the warmest period in the history of civilization.”

Oh, those rats.

richard verney
Reply to  reallyskeptical
November 3, 2017 8:43 pm

The swamp requires draining and we need to get rid of that rat infested hell hole.

Tom in Florida
November 3, 2017 4:16 pm

These are truly “if” and “then” models. If the input conditions actually come to pass, then the results will be accurate. The problem seems to be that they never get the “if” anywhere near reality.

Bob
November 3, 2017 4:49 pm

“The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables.”

No. Performance is measured by comparing the results to actual measured conditions. These metrics are structured to measure the assumptions of one model compared to the assumptions of other models, and have no relevance to empirical data. Here is one more example of modelers trying to justify their existence.

NW sage
November 3, 2017 5:09 pm

The cast of the TV show Stargate had/has a word to describe papers of this kind: technobabble. It means all the things ‘babble’ means, and it sounds suitably technical (meaning obtuse).

Dr. S. Jeevananda Reddy
November 3, 2017 5:28 pm

In the settled-science scenario, on one side CO2 is increasing with time, and on the other the climate sensitivity factor is coming down with the progression of time, as presented by the IPCC in their reports. This means the resulting temperature presents a zero trend. What will be the result of model tests???

Dr. S. Jeevananda Reddy

crackers345
November 3, 2017 5:41 pm

very few climate models are
intended to “predict” climate, and
that’s not how scientists use them.

they use them as experiments — change this
part over here, and see if it matches reality.

might be a particular parametrization, or a
different way of handling sea ice, or clouds,
or aerosol
pollution.

warming to 2100 can’t be predicted anyway.
models are run to 2100 with some assumed
scenarios, none of which will
actually take place.

models are calculations, very very complex
calculations, & the
interesting
questions are what if you change
this term A here to be a different term A’.

Tom Halla
Reply to  crackers345
November 3, 2017 5:59 pm

So if the computer models are not actually trying to describe the natural world climate, and to be judged by their conformity to that real world, perhaps the writers of such exercises could be moved to the philosophy or theology departments and no longer pretend to be doing science.

Clyde Spencer
Reply to  crackers345
November 3, 2017 6:58 pm

crackers345,
If the models can’t predict the future, then there is no evidence that they are simulating reality. Without being able to trust the outputs to be realistic, how can one trust that anything that comes from them tells you anything about reality? The only thing that one can say with confidence is that if you change “term A,” then you will probably get results that are bounded by the uncertainty range of an ensemble. Actually, it is worse than that because of the tuning that goes into the models. It is not unlike picking a number between 1 and 1,000 to characterize the average age of men on Earth.

richard verney
Reply to  crackers345
November 3, 2017 8:41 pm

How about taking CO2 out of the mix and using RAW temp data from stations wholly not impacted upon by UHI, and hey presto.

In all likelihood the temperature today is no warmer than it was in around the late 1930s/1940 notwithstanding that approximately 95% of all manmade CO2 emissions have taken place since that date.

The funny thing is that that is what Briffa’s/Mann’s tree ring data was saying, and that is why they truncated it and spliced on the adjusted thermometer record.

November 3, 2017 6:27 pm

I trust most of you recognize the following as the Stefan Boltzmann radiation equation that quantifies the amount of energy emitted from a surface:
Q = σ * ε * A * T^4
Two points: this radiation is a surface property, NOT a bulk property, and its direction is from hot to cold. There are those that suggest that, since all surfaces emit based on their surface temperature, energy can flow from colder to hotter with a resulting “net” flow. This supposedly explains how “back” radiation can flow from the cold troposphere to the warmer “surface” (1.5 m above the ground). This phenomenon is not present in the radiative flow calculation from the sun to the earth (1,368 W/m^2) or in the earth’s ToA “back” radiation (240 W/m^2) to the sun.
As often seen in textbooks, this “net” phenomenon is reflected in a modified equation where ΔT, (T1 – T2), is simply substituted for T, i.e.:
Q12 = σ * ε * A * (T1^4 – T2^4)
So, if 1 is hotter than 2, “net” energy flows from hot to cold. If 2 is hotter than 1, the result is negative and “net” energy still flows from hotter 2 to colder 1.
This substitution is mathematically illegal. Here is how it actually works.
********
Two S-B surfaces a & b, any temperature. (BB if ε = 1.0, GB if ε < 1.0)
Qa = σ * εa * Aa * Ta^4
Qb = σ * εb * Ab * Tb^4
Which surface is hot or cold is irrelevant. Energy radiates from the hot to the cold, and according to RGHE “theory” heat also radiates from cold to hot, leaving a “net” radiative LWIR heat flow.
So, let’s do that math.
Subtract:
(Qa – Qb) = (σ – σ) * (εa – εb) * (Aa – Ab) * (Ta^4 – Tb^4)
(σ – σ) = 0, i.e. ZERO!!!
The right side of the equation goes to zero! It also goes to zero if ε, A or T are equal.
What does this illustrate/prove?
Conservation of energy: Qa = Qb
ZERO algebraic evidence of “back” cold-to-hot or “net” radiation.
Good thing, since that would grossly violate the laws of thermodynamics.

richard verney
Reply to  nickreality65
November 3, 2017 8:35 pm

[image: marked-up graphic]

Reply to  richard verney
November 4, 2017 8:14 am

Hey, this is MY marked up graphic!! R&C thoughts?

crackers345
Reply to  richard verney
November 4, 2017 11:40 am

so you believe that, unlike
all other objects/substances in the universe,
the atmosphere doesn’t
radiate??

Reply to  richard verney
November 4, 2017 1:01 pm

Crackers,

No, it radiates – but from 32 km, where the molecules end, not primarily from the ground/surface.

crackers345
Reply to  richard verney
November 4, 2017 7:40 pm

nickreality65 commented >>No, it radiates – from 32 km where the molecules end not primarily from the ground/surface. <<

so atmospheric gases radiate at 32 km altitude,
but these gases don't radiate
near the surface?

and you have evidence of this?
if so it would completely
rewrite physics.

can't wait to read it

Reply to  richard verney
November 5, 2017 9:51 am

Crackers,

At the surface they radiate 63 W/m^2, NOT 396 W/m^2. BTW “surface” is 1.5 m above the ground.

Reply to  richard verney
November 5, 2017 10:26 am

nickreality65 November 5, 2017 at 9:51 am

Crackers,

At the surface they radiate 63 W/m^2, NOT 396 W/m^2.

Per Stefan-Boltzmann, a black body radiating at 63 W/m^2 has a temperature of -90°C … I believe you are not referring to “radiation” (how much the body is radiating). Instead, you seem to be referring to “net radiation” (how much the body is radiating MINUS how much the body is absorbing).

While both are valid ways to look at a situation, for mathematical calculations you need to consider the individual energy fluxes.

Next, you say that the “333 W/m2 comes from nowhere does nothing” … but in fact it comes from the atmosphere, and it leaves the surface warmer than it would be without the radiation from the atmosphere.

Finally, you say:

BTW “surface” is 1.5 m above the ground.

While this is true for what is called the “surface air temperature”, it is NOT true for the K/T diagram you are discussing. In that diagram, the surface is the actual surface.

w.
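
A quick check of Willis’s -90°C figure, inverting the Stefan-Boltzmann law (a sketch; this is the standard textbook relation):

```python
# Temperature of a blackbody emitting 63 W/m^2: T = (Q/sigma)^0.25.
SIGMA = 5.67e-8
T = (63.0 / SIGMA) ** 0.25
print(T, T - 273.15)   # ~182.6 K, i.e. about -90 C, as stated above
```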

Clyde Spencer
Reply to  nickreality65
November 3, 2017 9:24 pm

nickreality65,
I’m afraid that you have made a mistake in your algebra. The two sigmas do NOT equate to zero! The S-B constant of proportionality is the same for both expressions. Assume for the sake of illustration that both emissivities are equal. And let’s assume that the areas are equal. (Actually, the area of the atmospheric emissions is slightly larger than the surface of Earth, but I want to keep it simple for illustration.) Therefore, the three parameters are common to both difference expressions and can be extracted. Thus, Qnet simplifies to the product of the 3 parameters multiplied by the difference of the absolute temperatures to the fourth power. That is, the net energy is proportional to the difference between the temps raised to the 4th power.

Think about it for a moment. If the temperatures were equal, there would be no net energy difference. If one temperature were absolute zero, there would be only one term surviving, the one with the positive temperature. All values in between these two extremes are possible. I think that the atmospheric energy component should be divided by two to account for the fact that half is radiating into space and half is radiating back towards the surface. That can be taken care of with the area term.
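
A few lines of Python bear out Clyde’s algebra (a sketch with arbitrary numbers): when σ, ε, and A are common to both terms, subtracting the two emissions and factoring them out give identical results.

```python
SIGMA = 5.67e-8

def q(eps, area, T):               # Stefan-Boltzmann emission, W
    return SIGMA * eps * area * T ** 4

eps, area, T1, T2 = 0.9, 1.0, 288.0, 255.0
subtracted = q(eps, area, T1) - q(eps, area, T2)
factored = SIGMA * eps * area * (T1 ** 4 - T2 ** 4)
assert abs(subtracted - factored) < 1e-9   # identical, as the algebra requires
```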

Reply to  Clyde Spencer
November 4, 2017 8:12 am

“…half is radiating into space, and half is radiating back…”

BUNK!

Sounds like W.E. and ACS’s infamous, opaque, dull, multi-shell models. Bogus. See my papers:

http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-

http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C

http://writerbeat.com/articles/16255-Atmospheric-Layers-and-Thermodynamic-Ping-Pong

Clyde Spencer
Reply to  Clyde Spencer
November 4, 2017 10:56 am

nickreality65,
You didn’t respond to my major criticism that your algebra is wrong. Why should I bother reading more of the same?

Count to 10
Reply to  Clyde Spencer
November 4, 2017 11:57 am

Clyde is right. Algebraically, what you wrote is abcd-efgh = (a-e)(b-f)(c-g)(d-h). This is a pretty big mangling of distributivity (if I have my terms correct).

Reply to  Clyde Spencer
November 4, 2017 1:32 pm

Q = σ * ε * A * T^4

Surface    σ           ε     A       T     Result
A          5.670E-08   0.9   10000   288   3.511E+06
B          5.670E-08   0.5   12000   213   7.002E+05
A – B                                      2.810E+06

Q = σ * ε * A * (TA^4 – TB^4)

A          5.670E-08   0.9   10000   288^4 – 213^4   2.460E+06

NOT THE SAME!!!!!

Q = σ * ε * A * T^4

Surface    σ           ε     A       T     Result
A          5.670E-08   0.5   12000   288   2.340E+06
B          5.670E-08   0.5   12000   255   1.438E+06
A – B                                      9.020E+05

Q = σ * ε * A * (TA^4 – TB^4)

A          5.670E-08   0.5   12000   273^4 – 255^4   4.512E+05

STILL NOT THE SAME!!!!!

My point is that you can’t just replace T with dT.
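
Reproducing the first table above numerically (a sketch): the subtracted and “factored” forms disagree precisely because ε and A differ between the two surfaces; with shared σ, ε, and A they agree exactly, which is the point conceded downthread.

```python
SIGMA = 5.67e-8

def q(eps, area, T):
    return SIGMA * eps * area * T ** 4

qa = q(0.9, 10000, 288)                          # 3.511e+06 W
qb = q(0.5, 12000, 213)                          # 7.002e+05 W
print(qa - qb)                                   # 2.810e+06 W
print(SIGMA * 0.9 * 10000 * (288**4 - 213**4))   # 2.460e+06 W: not the same
```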

Reply to  Clyde Spencer
November 5, 2017 9:49 am

Yeah, I screwed up pretty good. I ASSUMED that if T^4 could be replaced with dT^4, then so could the other terms, A w/ dA, ε w/ dε, σ w/ dσ. But that’s not what happened. There is an ASSUMPTION that σ, ε and A are constant, so they can be pulled out of the parens, leaving behind dT^4.
Surface a and surface b:
Watob = (σ * ε * A * T^4)a – (σ * ε * A * T^4)b, or w/ the ASSUMPTION, σ * ε * A * (Ta^4 – Tb^4)
As you mentioned, if Ta = Tb then the difference goes to zero and Wa = Wb.
However, what happens if Ta = Tb and ε and A are NOT equal?
Then if Aa is larger than Ab, energy will flow from a to b EVEN THOUGH Ta = Tb.
Then if εa is larger than εb, energy will flow from a to b EVEN THOUGH Ta = Tb.
So, if Aa is larger than Ab, heat will flow from a to b even though Ta = Tb.
The Sun’s Aa is huge compared to the earth’s Ab.
And if εa is larger than εb, heat will flow from a to b even though Ta = Tb.
If surface a is opaque and dull and surface b is shiny and translucent, heat will flow from a to b even though Ta = Tb.
Should be easy enough to demonstrate in the lab, to Feynman’s satisfaction.
So, the area of the earth’s “surface” (How is that defined? The ground? Or 1.5 m above the ground?) is enormous compared to the surface area of the GHGs (How is that even defined, period?), as is the net flow.
The atmosphere is 99.96% transparent and, with albedo, reflective? How does that εa compare to the GHGs’ εb?
I think what it all boils down to is that the notion of “net” and “back” radiation is incorrect.
Which brings me to another point. I had a discussion w/ Scott Denning about this.
Denning’s hypothesis is: at 396 W/m^2 upwelling radiation, the surface/ground will lose so much heat/energy so fast that it will get really^3 cold, even frozen. All that prevents this from happening is the 333 W/m^2 “back” radiation that compensates by warming/slowing the loss (396-333=63) of the surface/ground, i.e. RGHE theory.
However, based on type K T/Cs I placed in the ground and at the surface (1.5 m above the ground), the air heats and cools rapidly and a lot compared to the ground, which heats and cools slowly.
During the day the sun heats both the air and the ground, and the air can be hotter than the mulch- and grass-covered ground. (Notice that the car, asphalt, hunks of iron, etc. can get much hotter than the air.)
At night the air cools quickly, becoming cooler than the ground, and the ground cools slowly, staying warmer than the air all night long.
As Feynman observed, if your theory doesn’t pass experiment, it’s wrong, and RGHE, air warming the ground, fails the experiment.

November 3, 2017 7:02 pm

Here’s how to evaluate a climate model.

Leveut
November 3, 2017 7:18 pm

At the top of the pyramid is a single index. Would that, properly, be: 42?

Editor
November 3, 2017 10:29 pm

My plan is to invent a system, kinda like the Chinese system above, to judge scientists instead of climate models. Here’s the basic form, just as above.

It will have the same structure, where it starts by evaluating scientists on several different metrics. Those metric results move upwards to the midlevel in the graphic above, where they form the inputs to the multivariable integrated statistics system (MISS) regarding the scientist in question.

Then in the final step, shown at the top of the pyramid above, these MISS statistics feed into the high-level integrated topmost system (HITS). This final step “summarizes the three statistical quantities of second level of metrics into a single index and can be used to rank the performances” of various climate scientists.

At the end, this will give us a single number, the Scientific Value Index that perfectly expresses that scientist’s value to society. I predict that this system, which I have dubbed the “HIT and MISS System”, will allow us to … what was it … oh, yeah, “concisely summarize and evaluate” each scientist’s performance.

This is great, because it will settle all scientific debates immediately, definitively, and painlessly. If your Scientific Value Index is greater than that of your scientific opponent, you will be judged to have won the debate.

==========================

And if you can see what is wrong with my proposal … well, that’s exactly what is wrong with these scientists’ proposal for judging climate models.

Regards to everyone on what is a lovely rainy night here,

w.

michael hart
November 4, 2017 12:22 am

How about we get Google to lend us one of their A.I. bots?

You know, the ones with fiendishly clever algorithms they use to “tease out” meaning from billions of unrelated pages about cats. The algorithms that can identify fake news, after an initial training period under the loving keystrokes of a Google/YouTube Hero, who will teach the long-suffering computer how to recognize and correct wrong-think in climate science and beyond.

Imagine such a vista…

Old44
November 4, 2017 10:08 am

Two alternate methods:
1: The Feynman method: if the data/results don’t match the model, the model is wrong.
2: The BOM method: if the data/results don’t match the model, change the data.

crackers345
Reply to  Old44
November 4, 2017 11:38 am

except the data/results are themselves
sometimes wrong or (esp) incomplete, so evaluation is far
more
nuanced and complicated.

Count to 10
November 4, 2017 11:43 am

Realistically, the “average global temperature” parameter, however defined, should be unimportant in modeling climate change. The models need to be able to predict not only how temperature patterns change seasonally, regionally, and over the course of an average day; they should also be predicting humidity and precipitation at those levels as well. If they get those all wrong, then there is no point in even checking whether their global average somehow tracks reality.

On that note, the most ridiculous thing I have seen on this whole topic is the way that measured increases in temperatures in specific conditions (winter, night, high latitude) are used to elevate the global average, which is then used to predict uniform warming everywhere and at all times.

Editor
November 4, 2017 5:23 pm

co2isnotevil November 4, 2017 at 11:57 am

Willis,

“The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.”

Not as much as you think. Yes, the albedo in polar regions is larger than in equatorial regions owing to ice and snow, but the decrease in albedo from melting ice and snow is quite small. It was larger coming out of the last ice age, when there was a lot more of the surface covered in ice, but today the average fraction of the planet covered by ice is pretty close to the minimum possible. Average polar temps are far below freezing and no amount of GHG action will ever be enough to melt it all and prevent it from returning in the winter. About the only thing that will cause this is when the Sun enters its red giant phase.

Considering that 2/3 of the planet is covered by clouds, which have about the same reflectivity as ice, 2/3 of all future melted ice has no effect on the net albedo.

co2, it seems you’ve missed my point, likely my lack of clarity. Let me give it another shot.

First, the important correlation of temperature is not with the polar ice albedo.

The important correlation of temperature is with the tropical albedo, which is ruled by cloud cover and which responds quickly and dynamically to local temperatures. This in turn imposes strong controls on local temperatures.

Second, it is exactly that relationship between temperature and clouds that is missing in your analysis. When it gets warm in the tropics, clouds form. This changes the amount of solar energy available after albedo reflections … but you do not have any equation in your analysis for that most important connection.

Or in your terms,

a = f(T)

which means that the albedo (a) is some unknown function (f) of the temperature (T).

Where is that in your analysis?

Regards,

w.

Anne Ominous
November 12, 2017 12:14 am

Not trying to criticize this analysis, but:

Unless it manages to properly carry uncertainties all the way up FROM the data TO the results (and I hope it does), it has the potential to be just as wrong as all the others.

robinedwards36
November 14, 2017 12:55 pm

I have only skimmed through this article – lack of time – and have been unable to find anything that relates to step changes in climate. Have I missed it, or is it not there?
Presuming that it is not there, no climate model is going to fit the real data adequately. Climate frequently changes abruptly. Where should I go to read something about this, either a refutation or a support for my ideas?

Reply to  robinedwards36
November 14, 2017 1:23 pm

Robin, that’s an interesting question that points to a more general problem with climate modeling. This is that many times, nature doesn’t do “gradual”. Nature does “edges”.

For example, there is no gradual transition from cloud to clear air. You are either in the cloud or you are not. Another example. Fifty miles out off the coast where I live, you often come across a clear line with green water on one side of the line and blue water on the other side of the line. It doesn’t shade gradually from one to the other. It undergoes, as you point out, a “step change”.

Computers, on the other hand, are the reverse of nature. They do “gradual” quite well … but step changes, not so much. I’m not saying that computers can’t do them … I’m saying that step changes are much harder to model accurately than are gradual changes.

Unfortunately, most of the interesting climate processes (tropical cumulus fields, dust devils, thunderstorms, the PDO, squall lines, the El Nino/La Nina pump, tornadoes, williwaws, cyclones, etc) are temperature-threshold based. When the temperature (or more accurately the temperature difference between surface and altitude) exceeds some local threshold, the phenomenon appears.

So ALL of them represent step changes. Makes for a very challenging system to model … and it’s the reason that the current class of climate models don’t work. All of those phenomena listed above act to regulate the temperature … but they are far too small to be included in the climate models.

As a result, they are trying to model the future temperature evolution of the planet, while not modeling the very climate phenomena that regulate the temperature … which is a fool’s errand.

w.