Study: A new method to evaluate overall performance of a climate model

From the INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES and the “pyramid schemes” department:

A new method to evaluate overall performance of a climate model

Many climate-related studies, such as detection and attribution of historical climate change, projection of future climate and environments, and adaptation to future climate change, rely heavily on the performance of climate models. Concisely summarizing and evaluating model performance becomes increasingly important for climate model intercomparison and application, especially as more and more climate models participate in international model intercomparison projects.

Pyramid chart showing the relationship between the three levels of metrics for multivariable integrated evaluation of climate model performance. CREDIT: XU Zhongfeng

Most current model evaluation metrics, e.g., the root mean square error (RMSE), correlation coefficient, and standard deviation, measure model performance in simulating an individual variable. However, one often needs to evaluate a model’s overall performance in simulating multiple variables. To fill this gap, an article published in Geosci. Model Dev. presents a new multivariable integrated evaluation (MVIE) method.

“The MVIE includes three levels of statistical metrics, which can provide a comprehensive and quantitative evaluation on model performance,” says XU, the first author of the study, from the Institute of Atmospheric Physics, Chinese Academy of Sciences.

The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables. The second level, comprising four newly developed statistical quantities, provides an integrated evaluation of model performance in simulating multiple fields. The third level, the multivariable integrated evaluation index (MIEI), further summarizes three of the second-level statistical quantities into a single index and can be used to rank the performance of various climate models. Unlike the commonly used RMSE-based metrics, the MIEI satisfies the criterion that a model performance index should vary monotonically as model performance improves.
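The first-level metrics are the standard single-variable statistics. A minimal NumPy sketch, with random arrays standing in for a hypothetical simulated and observed field (the arrays and noise level are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.standard_normal(200)               # hypothetical observed field
mod = obs + 0.4 * rng.standard_normal(200)   # hypothetical modeled field

corr = np.corrcoef(mod, obs)[0, 1]           # pattern correlation
rms_val = np.sqrt(np.mean(mod ** 2))         # RMS value of the model field
rmse = np.sqrt(np.mean((mod - obs) ** 2))    # root mean square error
print(corr, rms_val, rmse)
```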

According to the study, each higher level of metrics is derived from, and concisely summarizes, the level below it. “Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics.” XU therefore suggests, “To provide a more comprehensive and detailed evaluation of model performance, one can use all three levels of metrics.”

###

The paper: https://www.geosci-model-dev.net/10/3805/2017/

Abstract:

This paper develops a multivariable integrated evaluation (MVIE) method to measure the overall performance of a climate model in simulating multiple fields. The general idea of MVIE is to group various scalar fields into a vector field and compare the constructed vector field against the observed one using the vector field evaluation (VFE) diagram. The VFE diagram was devised based on the cosine relationship between three statistical quantities: root mean square length (RMSL) of a vector field, vector field similarity coefficient, and root mean square vector deviation (RMSVD). The three statistical quantities can reasonably represent the corresponding statistics between two multidimensional vector fields. Therefore, one can summarize the three statistics of multiple scalar fields using the VFE diagram and facilitate the intercomparison of model performance. The VFE diagram can illustrate how much the overall root mean square deviation of various fields is attributable to the differences in the root mean square value and how much is due to the poor pattern similarity. The MVIE method can be flexibly applied to full fields (including both the mean and anomaly) or anomaly fields depending on the application. We also propose a multivariable integrated evaluation index (MIEI) which takes the amplitude and pattern similarity of multiple scalar fields into account. The MIEI is expected to provide a more accurate evaluation of model performance in simulating multiple fields. The MIEI, VFE diagram, and commonly used statistical metrics for individual variables constitute a hierarchical evaluation methodology, which can provide a more comprehensive evaluation of model performance.
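Based on the definitions summarized in the abstract (the paper's exact normalization of each variable before stacking may differ), a minimal NumPy sketch of the three VFE statistics, with hypothetical random fields standing in for model and observations, and a check of the law-of-cosines relationship that underlies the diagram:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_vars = 500, 3  # grid points; number of (normalized) scalar fields

# Hypothetical fields: each grid point carries a vector whose components
# are the stacked variables, per the vector-field construction above.
obs = rng.standard_normal((n_points, n_vars))
mod = obs + 0.3 * rng.standard_normal((n_points, n_vars))

def vfe_stats(m, o):
    """Root mean square length, vector similarity coefficient, RMSVD."""
    rmsl_m = np.sqrt(np.mean(np.sum(m ** 2, axis=1)))
    rmsl_o = np.sqrt(np.mean(np.sum(o ** 2, axis=1)))
    r_v = np.sum(m * o) / np.sqrt(np.sum(m ** 2) * np.sum(o ** 2))
    rmsvd = np.sqrt(np.mean(np.sum((m - o) ** 2, axis=1)))
    return rmsl_m, rmsl_o, r_v, rmsvd

rmsl_m, rmsl_o, r_v, rmsvd = vfe_stats(mod, obs)

# The cosine relationship holds exactly for these definitions:
assert np.isclose(rmsvd ** 2,
                  rmsl_m ** 2 + rmsl_o ** 2 - 2 * rmsl_m * rmsl_o * r_v)
```

The relationship is the same law of cosines that a Taylor diagram uses for a single variable, which is why the three statistics can be plotted in one diagram.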

Mark from the Midwest
November 3, 2017 9:37 am

We already have methods to evaluate any model; it sounds like these guys are trying to re-write the rules. Of course, for many so-called climate scientists the real criterion is whether the model helps them get more funding.

Reply to  Mark from the Midwest
November 3, 2017 9:58 am

The derivatives market of climate modeling. Modeling the model.

Reply to  DonM
November 3, 2017 10:13 am

It’s a model derived from expectations. The main failure of climate science is not modifying those expectations when they are demonstrably incorrect.

Reply to  DonM
November 3, 2017 10:27 am

They just pressure the data keepers to make convenient adjustments to observation.
That is post-modern climate science at work.

Louis Hooffstetter
Reply to  DonM
November 3, 2017 6:36 pm

Richard Feynman said it best:

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”
Dr. Richard P. Feynman

crackers345
Reply to  DonM
November 3, 2017 7:09 pm

LouisH: climate isn’t
an experimental science.

Patrick MJD
Reply to  DonM
November 3, 2017 10:37 pm

“crackers345 November 3, 2017 at 7:09 pm”

Define climate, mathematically.

Reply to  DonM
November 4, 2017 10:14 am

Patrick MJD,

Here are the equations that matter, relative to the planet’s sensitivity to forcing (i.e. the effect on the surface temperature of changes in Pi).

Pi(t) = Po(t) + dE(t)/dt

Pi(t) is the post-albedo energy arriving from the Sun, given as Psun*(1 – a), where Psun is the incoming solar power and a is the albedo. Po(t) is the energy emitted by the planet, which in LTE (when dE(t)/dt == 0) is equal to Pi. E is the energy stored by the planet, which increases when Pi is greater than Po and decreases when Pi is less than Po.

Ps(t) = o*Ts^4
Po(t) = e*Ps(t)
Po(t) = e*o*Ts^4

Ps(t) is the emission of the surface, where the corresponding temperature Ts is given by the SB law. The constant o is the SB constant (5.67E-8 Watt/m^2 per degree K^4). The coefficient e is the ratio of the power emitted by the planet to the power emitted by the surface, and is the emissivity of an EQUIVALENT gray body emitter whose temperature is Ts and whose emissivity is e.

Ts(t) = k*E(t)

The surface temperature Ts is linearly proportional to the energy stored by the system E (i.e. one calorie increases the temperature of 1cc of water by 1C).

Solving these equations for the sensitivity dTs(t)/dPi(t), we get:

dTs(t)/dPi(t) = (4*e*o*Ts^3)^-1

The measured value for e is about 0.61 and the measured value for Ts is about 287.5 K. Plugging in the numbers, the sensitivity is about 0.3 C per W/m^2. To the extent that some solar input does work that does not affect the surface temperature, the sensitivity will necessarily be less than this. Note that k, the linear proportionality constant between stored energy and temperature, drops out of the equation for the sensitivity; moreover, the sensitivity is highly temperature dependent, going as Ts^-3.
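Plugging the stated numbers into the final expression reproduces the commenter's figure. This sketch only checks the arithmetic, not the physics; e and Ts are the values asserted in the comment:

```python
# dTs/dPi = (4*e*o*Ts^3)^-1, with the commenter's values.
sigma = 5.67e-8   # SB constant ("o" above), W/m^2 per K^4
e = 0.61          # effective emissivity asserted in the comment
Ts = 287.5        # surface temperature asserted in the comment, K

sensitivity = 1.0 / (4 * e * sigma * Ts ** 3)
print(round(sensitivity, 2))  # 0.3 K per W/m^2
```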

Reply to  DonM
November 4, 2017 10:18 am

The formatter removed the text around the comparison characters. It should say,

When Pi is greater than Po, E increases along with Ts. When Pi is less than Po, E decreases along with Ts.

Reply to  DonM
November 4, 2017 11:05 am

co2isnotevil November 4, 2017 at 10:14 am

Patrick MJD,

Here are the equations that matter, relative to the planets sensitivity to forcing (i.e. affect of the surface temperature to changes in Pi).

Pi(t) = Po(t) + dE(t)/dt

Pi(t) is the post albedo energy arriving from the Sun, given as Psun*(1 – a), where Psun is the incoming solar power and a is the albedo. …

Thanks, co2. The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.

This is because, in the all-important tropics where most of the solar energy enters the system, albedo goes up with the temperature. They are very highly correlated, as you can see below.

Since your analysis does NOT include this critical active temperature control mechanism, I fear that it cannot be used to calculate the sensitivity.

w.

Reply to  DonM
November 4, 2017 11:57 am

Willis,

“The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.”

Not as much as you think. Yes, the albedo in polar regions is larger than equatorial regions owing to ice and snow, but the decreasing albedo from melting ice and snow is quite small. It was larger coming out of the last ice age when there was a lot more of the surface covered in ice, but today, the average fraction of the planet covered by ice is pretty close to the minimum possible. Average polar temps are far below freezing and no amount of GHG action will ever be enough to melt it all and prevent it from returning in the winter. About the only thing that will cause this is when the Sun enters its red giant phase.

Considering that 2/3 of the planet is covered by clouds, which have about the same reflectivity as ice, 2/3 of all future melted ice has no effect on the net albedo. Polar regions receive less insolation to begin with, and when you calculate the increase in incident power from melting all the ice and snow on the planet and distribute that power across the entire planet, it’s only a few W/m^2, less than what’s required to achieve the global emissions increase (temperature increase) they claim arises from doubling CO2.

The sensitivity expressed as a change in temperature per change in input power, dTs(t)/dPi(t), is already a function of temperature, and that function of temperature is independent of the albedo. Nonetheless, since (1-a) is linear in e, whatever effect albedo has can be rolled into an equivalent value of e, both of which can be expressed as functions of the fraction of the planet covered by clouds. Note that the sensitivity expressed as a change in surface emissions per change in input power is constant, where

dPs(t)/dPi(t) = 1/e

Yes, e is a higher order function of temperature, but when we measure it over the last couple of decades, it’s remarkably constant coming in at about 0.6, where dPs(t)/dPi(t) is about 1.6 W/m^2 of Ps per W/m^2 of Pi. It’s even relatively constant from the poles to the equator where e increases only slightly as the average temperature transitions through freezing.
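The two sensitivity forms quoted in this thread can be checked against each other numerically. This sketch only verifies the commenter's algebra (via the chain rule through Ps = sigma*Ts^4), using the values asserted above:

```python
# Checking that dPs/dPi = 1/e follows from the earlier equations:
# Ps = sigma*Ts^4 and dTs/dPi = (4*e*sigma*Ts^3)^-1.
sigma, e, Ts = 5.67e-8, 0.6, 287.5  # values asserted in the comments

dTs_dPi = 1.0 / (4 * e * sigma * Ts ** 3)
dPs_dTs = 4 * sigma * Ts ** 3       # derivative of Ps = sigma*Ts^4
dPs_dPi = dPs_dTs * dTs_dPi         # chain rule

assert abs(dPs_dPi - 1.0 / e) < 1e-9
print(round(dPs_dPi, 2))  # 1.67, i.e. the "about 1.6" cited above
```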

crackers345
Reply to  DonM
November 4, 2017 7:39 pm

Patrick MJD commented >>Define climate, mathematially.<<

too clever by five-halves.

Reply to  Mark from the Midwest
November 3, 2017 11:04 am

One way to see if a model of a causal system is at least plausible is to vary the initial conditions and run the model. The model should always end up in the same state.

Curve fitting GCMs to expectations requires so many assumptions and adjustments that any real physics gets lost. The observable consequence of this is the large effect initial conditions have on the modeled results. This is often misinterpreted as the consequence of chaos and complexity, but is more symptomatic of an unstable model or uninitialized data.
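The plausibility test described above can be illustrated with a toy model. This is a hypothetical single-attractor relaxation, not a GCM; it simply shows what "ends up in the same state regardless of initial conditions" looks like (a chaotic system would not pass this test even if correct):

```python
# A hypothetical single-attractor model: whatever the starting
# "temperature", linear relaxation drives it to the same equilibrium.
def run(x0, steps=1000, dt=0.1, x_eq=288.0):
    x = x0
    for _ in range(steps):
        x += dt * (x_eq - x)  # relax toward equilibrium
    return x

finals = [run(x0) for x0 in (250.0, 288.0, 320.0)]
print(finals)  # all three end at the same value
assert max(finals) - min(finals) < 1e-6
```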

crackers345
Reply to  co2isnotevil
November 3, 2017 7:10 pm

the full set of initial conditions aren’t
known — in particular deep ocean
currents, and aerosol loading.

Reply to  crackers345
November 3, 2017 8:12 pm

crackers345,
Initial conditions establish the starting values for state variables; the model adjusts the state variables as it runs, and if the model is correct, the state variables will converge to the correct values, independent of the starting values.

If you’re talking about model coefficients, which the model doesn’t adjust, it’s even worse, because only one value of the coefficient is correct and all others are not, thus averaging across different values doesn’t help.

Patrick MJD
Reply to  co2isnotevil
November 3, 2017 10:39 pm

“crackers345 November 3, 2017 at 7:10 pm

the full set of initial conditions aren’t
known”

So “conditions” in the models are not known? Thank you for confirming models are rubbish!

crackers345
Reply to  co2isnotevil
November 4, 2017 7:41 pm

evil: climate models do not
solve an initial value problem,
they solve a boundary value
problem.

(ever studied PDEs?)

Reply to  crackers345
November 5, 2017 9:27 am

crackers345,
If it’s solving boundary problems, then it’s solving the wrong problem. Models are supposed to model how state changes. Besides, the boundaries are well known and there are only two that matter: one is the boundary between the atmosphere and space, and the other is between the atmosphere and the surface.

crackers345
Reply to  co2isnotevil
November 4, 2017 7:43 pm

Patrick MJD commented >>So “conditions” in the models are not known? Thank you for confirming models are rubbish! <<

you clearly do not understand
GCMs, or how they are initialized
("spun-up").

(they don't solve an initial value problem.)

Andy Pattullo
Reply to  Mark from the Midwest
November 3, 2017 1:24 pm

I agree. There is plenty of straightforward evidence and some very valuable expert advice (e.g. Dr. Judith Curry) that most current GCMs are useless for interpreting the real world. I can’t help but think this is an attempt to create some custom metric by which individuals may claim value in models where none exists (but I could be wrong). It smells a lot like how stinky subprime mortgages were packaged into larger tranches and then folded into major investment vehicles of no real ultimate value, disguising all of the high risk and poor judgment that went into the original loans.

george e. smith
Reply to  Mark from the Midwest
November 3, 2017 2:04 pm

Well isn’t that what …. average …. is ?

Just a fictitious hodge-podge of a bunch of unrelated things that weren’t exactly observed by anyone anywhere, any how. But modelers get their jollies by imagining that it means something; well something besides maybe more grant moneys.

G

M Seward
Reply to  Mark from the Midwest
November 3, 2017 4:50 pm

Two possibilities with this.

1. It was put together by Chinese pinheads who think they are angels and this will tell them how much funding they need to meet their KPIs going forward, and the ‘pyramid’ characterisation just did not make it through translation, so did not register.

2. It is a spoof, the giveaway being the ‘pyramid’ characterisation.

Who knows? Who cares?

Carbon BIgfoot
Reply to  Mark from the Midwest
November 8, 2017 2:14 pm

Maslow’s theory of needs does not apply to self-actualization of failed theory.

Latitude
November 3, 2017 9:38 am

Lame….models will never be right when they are constantly changing/adjusting temp history
History that they back cast to today….will not even be the same by the time they run the model
…and all the other crap they do to temp history

…and a few hundred other things

Urederra
Reply to  Latitude
November 3, 2017 2:53 pm

… and when there are several temperature datasets to pick and choose.

Models are often fed with one temperature dataset and the results are compared to a different temperature dataset: weather balloons, or HadCRUT3 vs. HadCRUT4.

markl
November 3, 2017 9:38 am

When applied to current climate models do the results correlate with real world performance?

November 3, 2017 9:46 am

Lipstick on a pig.

Latitude
Reply to  ristvan
November 3, 2017 9:54 am

LOL….yep

MattS
Reply to  ristvan
November 3, 2017 5:24 pm

Well, that’s better than trying to put a pig on lipstick. 🙂

Bruce Cobb
November 3, 2017 9:46 am

The paper has lots of gobbledygook and horseshit, so good on them for that. Could definitely use more cowbell though.

F. Leghorn
November 3, 2017 9:53 am

Funny how ALL models could be evaluated by their actual predictions. I guess that would be too easy.

Edwin
Reply to  F. Leghorn
November 3, 2017 10:34 am

Not if they continually adjust inputs (data) and tweak assumptions to force their preconceived “predictions.” Way back, just before all this was hitting the media, I was very much a lukewarmer. I then listened to a presentation where it was obvious that the PI was changing the data to fit their assumptions. I was also dealing with federal government scientists on other issues. It was not a pleasant interaction. So I began to question everything they were doing, beyond my normal inborn skepticism. Like many things in and around government, I have come to believe that for CAGW “scientists” it is about more than just ego and grant money; it is about power. They have had a taste of power. Think about it! Almost all the governments in the world have people working on this issue and developing dramatic changes in policies that will affect the economic, and therefore political, structure of the entire world.

WR
Reply to  Edwin
November 3, 2017 11:11 am

There are different levels of the true believers. There are the devious and power hungry as you say, but also many with ulterior motives (anti-capitalistic, anti-american, socialist, etc.), and of course plenty of useful idiots.

Thomas Homer
Reply to  F. Leghorn
November 3, 2017 10:53 am

” … ALL models could be evaluated by their actual predictions”

Even with mostly accurate predictions, a model’s underlying assumptions may be questionable. As an example, models were established for astronomical orbits based on a geocentric assumption. Celestial spheres were necessarily introduced to explain the planetary orbits. These models predicted those orbits quite well. Of course, the geocentric assumption came into question, and new models with a heliocentric assumption were shown to be just as accurate without the need for celestial spheres.

F. Leghorn
Reply to  Thomas Homer
November 3, 2017 12:42 pm

In other words they doctored the data. Deja vu all over again.

Reply to  Thomas Homer
November 3, 2017 1:55 pm

squiggy9000 November 3, 2017 at 12:42 pm

In other words they doctored the data. Deja vu all over again.

No, they kept the data and changed the theory.

Regards,

w.

george e. smith
Reply to  Thomas Homer
November 3, 2017 2:11 pm

And Lunarcentric models would be just as accurate, just more complicated.
Well the Mandelbrot Set is pretty complicated; and it isn’t even a model of anything !

G

Urederra
Reply to  Thomas Homer
November 3, 2017 3:01 pm

No, they kept the data and changed the theory.

As far as I know, there was no physical theory behind the geocentric model. The heliocentric model can be explained by the theory of universal gravitation.

My 2 cents.

Reply to  Thomas Homer
November 3, 2017 6:11 pm

Gew, beg to differ a bit. The Mandelbrot set only becomes visible if you program its recursive function a certain way: 100 recursions to escape, done over the complex plane from -1 to +1, -i to +i. It is an inverse of the Julia sets. So it exists, for sure, just not obviously without some math effort. Unlike climate science, it is fully reproducible. I programmed a Mandelbrot set generator myself over 20 years ago. Slow compared to later algorithms, since it was brute force rather than spherical approximation.
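The brute-force escape-time test the comment describes fits in a few lines (the iteration cap of 100 matches the comment; the sample points are illustrative):

```python
# 100-recursion escape test: iterate z -> z^2 + c and watch for escape.
def in_mandelbrot(c, max_iter=100):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:   # |z| > 2 guarantees escape to infinity
            return False
    return True

assert in_mandelbrot(0j)           # the origin never escapes
assert in_mandelbrot(-1 + 0j)      # cycles between -1 and 0
assert not in_mandelbrot(1 + 1j)   # escapes within a few iterations
```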

Tom Halla
November 3, 2017 9:55 am

Trying to reduce something as complex as climate to a single number reminds me of Swift and his parody of science.

Ricdre
Reply to  Tom Halla
November 3, 2017 10:14 am

Oh, that’s easy…the answer is 42 (see Hitchhikers Guide to the Galaxy).

george e. smith
Reply to  Tom Halla
November 3, 2017 2:12 pm

How about 3 ; so it’s the same as Pi.

g

Reply to  Tom Halla
November 3, 2017 4:07 pm

They are doing something different

November 3, 2017 10:04 am

Start with a thorough understanding of what causes climate change.

Even a one page summary is acceptable.

Without that understanding, there are no real climate models.

Unfortunately that understanding does not exist today.

Therefore, we have only wild guess computer games, falsely called “climate models”,
that will make wrong predictions … until the temperature actuals are eventually “adjusted”
enough so the predictions look better!

The average temperature of our planet’s surface has remained within a one degree C range since 1880, even with haphazard measurements, lots of surface area not measured at all, and “adjustments” that may have doubled the warming in the raw data; and the data are owned by people who WANT to see a lot of global warming. Yet we’ve been in a one degree C range for 137 years!

Why would anyone with a functioning brain think such a tight average temperature range over 137 years means a “coming climate change catastrophe”?

Climate blog for non-scientists:
http://www.elOnionBloggle.Blogspot.com
over 12,000 page views so far
No ads – no money for me – a public service

Michael Jankowski
November 3, 2017 10:07 am

Poorly worded, but as it notes, most models seem to be scored on an individual variable (global temperature). This at least would seemingly call BS on models that fare miserably at reproducing other parameters with accuracy.

Reply to  Michael Jankowski
November 3, 2017 11:19 am

You kind of hit the nail on the head. Even if one could say that a model semi-accurately forecasted ‘global temps’, so what. What is really needed is accuracy to the point where we know what will happen within areas/regions. Will the Outback, desert southwest of the US, or the Sahara see most of the warming? Or will it only be at the poles? Or maybe evenly spread? So, so much we (or the modelers) don’t know!

TonyL
November 3, 2017 10:16 am

My understanding is that the Chinese categorically reject CAGW and Climate Change altogether.
So they might be up to something else, like developing a new suite of models that actually produce useful long range forecasts. And maybe taking a poke at Western climate science in the process.

Edwin
Reply to  TonyL
November 3, 2017 10:40 am

Not sure they reject CAGW altogether; China just has an entirely different perspective on climate. In their long recorded history they have faced climate change. They understand that climate changes regardless of what humans do or don’t do. They know that when they have been rich it was far easier to adapt to change; when poor, China has had prolonged suffering and strife. They have learned that to build wealth, besides stealing technology from elsewhere, they need cheap and abundant energy. Since they see themselves as THE rising world power, they are quite happy to let Europe and North America play this stupid CAGW game.

November 3, 2017 10:17 am

This climate model effort is analogous to making further adjustments to the epicycles and deferents in a Ptolemaic planetary system. With each refinement, new complications arise elsewhere in the model.

Their underlying fundamental assumptions are wrong in both cases.

Reply to  Joel O’Bryan
November 3, 2017 10:37 am

As a bit of trivia, the Ptolemaic model does work well in the gross sense of the big picture, as long as troublesome movement details are not examined too closely.

Planetarium projectors use the simplified mathematical relationships embodied in Ptolemaic calculations to project the night sky on the curved ceiling, creating the Earth-centric view that wows and astonishes everyone when they first see this very realistic presentation. The Ptolemaic sky projection model works deceptively well in this application. This is the exact same model trap that climate modelers have fallen into. It appears correct to them in the gross, larger sense, so they believe their models represent fundamental realities of climate. They could not be more mistaken.

For more on the Ptolemaic model used in planetarium projectors:
http://www.polaris.iastate.edu/EveningStar/Unit2/unit2_sub1.htm

Reply to  Joel O’Bryan
November 3, 2017 11:21 am

+1

knr
November 3, 2017 10:32 am

The first rule of climate ‘science’: if the models and reality differ in value, it is reality which is always in error. That somewhat undermines the need for this research. In addition, the authors have made an error in their maths, for it is clear the ‘value’ of any model is not a function of its validity; rather, it is in a direct relationship with the degree of support the model offers to further the AGW faith. Science has f-all to do with it.

Edwin
Reply to  knr
November 3, 2017 10:52 am

I have actually had this argument with federal scientists in a public management meeting. After saying that, for the issue at hand, we had the best data set they had ever seen, I asked: if we had ALL the data but their models disagreed with the data, what would they believe, and what would they base their recommendations to rule and policy makers on? Their answer: the computer models. When one of the senior committee officials asked them to explain, they instead requested a 15-minute recess, which turned into an hour. The politically appointed members of the management unit were NOT happy people. It led to a brief investigation of all those in that work unit. It helped, but only briefly.

Reply to  Edwin
November 3, 2017 11:10 am

Heretic!!! They surely labeled you with the dreaded “D” word as a way to salve their cognitive dissonance.

November 3, 2017 10:33 am

I’ll wait for McIntyre or Briggs to work this over! First, it’s already compromised science when you have to use statistics at all (that ought to raise a din from all sides). Statistics is essential for the social sciences, and we know the wiggle room it makes for these irredeemably ideologically corrupted disciplines (of which climate science is the best example).

To me, this analysis constructs a phony “index” that will give a totally wrong model a pass. If you have a hard-wired falsified theory based on CO2 as the basis and you adjust this with a concocted uber-aerosols effect to protect the theory and an additional wrong-signed cloud parameter, you could end up with an excellent forecast with a little prestidigitation and get an index of 0.99.

I’ve been fearing this possibility as a pretext for jailing deplorables and putting the world under elitist governance. Thankfully their hubris and post no-idjit-left-behind enrollment policies in institutions of higher learning seems to have blocked their vision. Adding a coefficient “c” equal to 1/3 to multiply their formula by would have given them a heck of a scary fit.

Clyde Spencer
November 3, 2017 10:46 am

It appears to me that the authors have presented a rigorous, quantitative method for evaluating multiple model results to determine the best compromise model. However, one often is more concerned about one of the variables than the others. They then recommend assigning subjective weighting to the variable(s) of primary interest. They have then degraded the quantitative approach with subjective assessments of the weighting to be assigned.

The authors acknowledge that it is generally recognized that some models do a better job of predicting future temperatures than future precipitation, and vice versa. That is an interesting state of affairs, because there is strong interaction between temperature and precipitation. That is, the surface commonly cools down during and immediately after a summer rain, and high surface temperatures may result in virga. So, at first blush, it would appear that there are serious problems with the assumptions or constructs within the models when these interacting variables have different inaccuracies.

It should be obvious that the models aren’t fit for the purpose for which models are usually built, i.e. to predict future states with the perturbation of one or more input variables. Being able to identify the best compromise model is indeed like putting “lipstick on a pig.” What is needed is a paradigm shift in modeling where the numerous output variables, such as temperature and precipitation, are consistent with each other and track historical records much better than the 3X overestimate of future temperatures currently seen.

jpatrick
November 3, 2017 10:58 am

The right way to evaluate a climate model is to watch and wait for a few hundred centuries. This just isn’t compatible with the human lifespan.

Resourceguy
November 3, 2017 11:12 am

The same methodology might be useful in detecting chronic bias not just of the model but the operator and the users.

son of mulder
November 3, 2017 11:19 am

How can the predictions of a chaotic system, compared to reality, be any more than chance?

Reply to  son of mulder
November 3, 2017 11:33 am

Worse than that, you can only compare the results to the past, that is, what has already happened. Is it more than chance that a model can predict global temps accurately in the future even if it happened to stumble across the correct answer one time?

Reply to  son of mulder
November 3, 2017 4:07 pm

climate isnt chaotic

Reply to  Steven Mosher
November 3, 2017 6:06 pm

“climate isnt chaotic”

That’s not what the IPCC say.

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

IPCC Working Group I: The Scientific Basis, Third Assessment Report (TAR), Chapter 14 (final para., 14.2.2.2), p774.

It’s not what Edward Lorenz said either.

“Lorenz’s early insights marked the beginning of a new field of study that impacted not just the field of mathematics but virtually every branch of science–biological, physical and social. In meteorology, it led to the conclusion that it may be fundamentally impossible to predict weather beyond two or three weeks with a reasonable degree of accuracy.

Some scientists have since asserted that the 20th century will be remembered for three scientific revolutions–relativity, quantum mechanics and chaos.”

http://news.mit.edu/2008/obit-lorenz-0416

Now, who to believe…a superannuated English Major with a record of Mannipulating temperature data to fit the AGW narrative or IPCC Working Group I and Ed Lorenz, one of the most distinguished climate scientists ever born…
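Lorenz's sensitive dependence on initial conditions is easy to demonstrate. This is a crude forward-Euler sketch of his 1963 system with the classic parameters; the step size and horizon are arbitrary choices, not from Lorenz:

```python
# Two Lorenz (1963) trajectories from starts that differ by 1e-8.
def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
for _ in range(3000):  # integrate to t = 30
    a, b = step(a), step(b)

gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(gap)  # comparable to the attractor's size, not to 1e-8
```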

Patrick MJD
Reply to  Steven Mosher
November 3, 2017 10:42 pm

“Steven Mosher November 3, 2017 at 4:07 pm

climate isnt chaotic”

This has to rate as dumbest post EVAH!

AndyG55
Reply to  Steven Mosher
November 4, 2017 1:09 pm

Yep, Mosh has degenerated into an empty sock.

Maybe Muller removed his hand ?

John
November 3, 2017 11:58 am

Off topic a bit, but I’m curious to get your takes on the flurry of propaganda articles coming out just before the UN climate talks starting on 11/6, and the findings of the “major federal climate report” that was released today.

Reply to  John
November 3, 2017 12:54 pm

Ha ha, the article misstates facts from the very opening by claiming that these models already “heavily rely on the performance”. Fraudulent reinitializations are already known to be the practice that allows some scientists to say this with a straight face.

Scott Cater
Reply to  John
November 3, 2017 1:05 pm

My guess is that the creators of this report are holdovers from the prior administration.

Crispin in Waterloo
November 3, 2017 12:11 pm

A computer big enough to run a programme complex enough to realistically represent the evolution of the weather through the ages thus creating a picture of the climate, would run very slowly. In fact it would run at about the same speed as the actual climate.

This coincidence would give the modelers something to gauge their success by, as the actual performance could be compared with the computer-calculated performance in real time, side by side, for generations. After some time, tweaking and all, they would be able to demonstrate they can back-cast the whole climate accurately. I think this would be a major step forward.

Reply to  Crispin in Waterloo
November 3, 2017 11:00 pm

Biosphere I, the original experiment.

Walter Sobchak
November 3, 2017 12:51 pm

Mathematical onanism with lubricants.

Another Ian
November 3, 2017 1:12 pm

A comment from a management school where the pyramid of management was being explained

“Oh is that how it works? I thought it was like a vegetarian’s outhouse where the turds float to the top”

Just saying.

November 3, 2017 1:18 pm

The utility and skillfulness of computer models depend on:
1. how well the processes which they model are understood,
2. how faithfully those processes are simulated in the computer code, and
3. whether the results can be repeatedly tested so that the models can be validated and refined.

Specialized models, which try to model reasonably well-understood processes like PGR and radiation transport, are useful, because the processes they model are manageably simple and well-understood.

Weather forecasting models are also useful, even though the processes they model are very complex, because the models’ short-term predictions can be repeatedly tested, allowing the models to be validated and refined.

But more ambitious models, like GCMs, which attempt to simulate the combined effects of many poorly-understood processes, over time periods too long to allow repeated testing and refinement, are of dubious utility.

E.g., NASA’s ModelE2 consists of about half a million lines of moldy Fortran code, which it is safe to assume nobody actually understands. They’ve got so many fudge factors, “knobs”, and pseudo-random number generator calls in there that they can make it do just about anything at all, but it doesn’t in any sense represent an understanding of the Earth’s climate system. What’s more, unlike weather models, which are comparably complex but get tested every week, the predictions of those GCMs are untestable. Ask any computer scientist whether he would trust an untestable 500,000-line Fortran program as the basis for multi-million-dollar decisions!

Worst of all are so-called “semi-empirical models,” which aren’t actually models at all. So-called “semi-empirical modeling” is an oxymoron: “modeling” that doesn’t actually model anything. It is similar to modeling, but without reference to any physical basis. It is really just curve-matching. It can be made to produce just about any desired result.

GCMs are subject to criticisms that they don’t accurately model the real world, because of inconsistency with observations of things like clouds and the predicted tropical mid-tropospheric hot spot. Semi-empirical modelers neatly avoid such criticism, by not even trying to model the real world. It’s the worst sort of junk science.

RayG
Reply to  daveburton
November 3, 2017 10:29 pm

Please make that multi-billion-dollar decisions; the sums are much closer to a trillion dollars than a few billion dollars.

November 3, 2017 2:12 pm

“Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics.”

Does this mean destroying accuracy with averaging?

November 3, 2017 2:15 pm

*accuracy of ‘first level’ metrics values

November 3, 2017 2:17 pm

“According to the study, higher level of metrics is derived from and concisely summarizes the lower level of metrics ”

lower resolution?