Study: A new method to evaluate overall performance of a climate model

From the INSTITUTE OF ATMOSPHERIC PHYSICS, CHINESE ACADEMY OF SCIENCES and the “pyramid schemes” department:

A new method to evaluate overall performance of a climate model

Many climate-related studies, such as detection and attribution of historical climate change, projections of future climate and environments, and adaptation to future climate change, heavily rely on the performance of climate models. Concisely summarizing and evaluating model performance becomes increasingly important for climate model intercomparison and application, especially when more and more climate models participate in international model intercomparison projects.

This is a pyramid chart showing the relationship between three levels of metrics for multivariable integrated evaluation of climate model performance. CREDIT XU Zhongfeng

Most current model evaluation metrics, e.g., root mean square error (RMSE), correlation coefficient, and standard deviation, measure model performance in simulating individual variables. However, one often needs to evaluate a model’s overall performance in simulating multiple variables. To fill this gap, an article published in Geosci. Model Dev. presents a new multivariable integrated evaluation (MVIE) method.

“The MVIE includes three levels of statistical metrics, which can provide a comprehensive and quantitative evaluation of model performance,” says XU, the first author of the study from the Institute of Atmospheric Physics, Chinese Academy of Sciences.

The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables. The second level, comprising four newly developed statistical quantities, provides an integrated evaluation of model performance in simulating multiple fields. The third level, the multivariable integrated evaluation index (MIEI), further summarizes three of the second-level statistical quantities into a single index and can be used to rank the performance of various climate models. Unlike the commonly used RMSE-based metrics, the MIEI satisfies the criterion that a model performance index should vary monotonically as model performance improves.
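For a single variable, the first-level metrics are standard statistics; a minimal Python sketch (hypothetical data, using the usual definitions of correlation, RMS, and RMSE, which may differ in detail from the paper's):

```python
import math

def level1_metrics(model, obs):
    """First-level metrics for one variable: correlation coefficient,
    RMS values of model and observation, and RMSE between them."""
    n = len(model)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    # Pearson correlation coefficient
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    var_m = sum((m - mean_m) ** 2 for m in model)
    var_o = sum((o - mean_o) ** 2 for o in obs)
    corr = cov / math.sqrt(var_m * var_o)
    # Root mean square values and root mean square error
    rms_m = math.sqrt(sum(m * m for m in model) / n)
    rms_o = math.sqrt(sum(o * o for o in obs) / n)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    return corr, rms_m, rms_o, rmse
```

A perfect simulation gives a correlation of 1 and an RMSE of 0; the second and third levels then summarize such per-variable numbers across many fields.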

According to the study, each higher level of metrics is derived from, and concisely summarizes, the level below it. “Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics.” XU therefore suggests, “To provide a more comprehensive and detailed evaluation of model performance, one can use all three levels of metrics.”


The paper:


This paper develops a multivariable integrated evaluation (MVIE) method to measure the overall performance of climate model in simulating multiple fields. The general idea of MVIE is to group various scalar fields into a vector field and compare the constructed vector field against the observed one using the vector field evaluation (VFE) diagram. The VFE diagram was devised based on the cosine relationship between three statistical quantities: root mean square length (RMSL) of a vector field, vector field similarity coefficient, and root mean square vector deviation (RMSVD). The three statistical quantities can reasonably represent the corresponding statistics between two multidimensional vector fields. Therefore, one can summarize the three statistics of multiple scalar fields using the VFE diagram and facilitate the intercomparison of model performance. The VFE diagram can illustrate how much the overall root mean square deviation of various fields is attributable to the differences in the root mean square value and how much is due to the poor pattern similarity. The MVIE method can be flexibly applied to full fields (including both the mean and anomaly) or anomaly fields depending on the application. We also propose a multivariable integrated evaluation index (MIEI) which takes the amplitude and pattern similarity of multiple scalar fields into account. The MIEI is expected to provide a more accurate evaluation of model performance in simulating multiple fields. The MIEI, VFE diagram, and commonly used statistical metrics for individual variables constitute a hierarchical evaluation methodology, which can provide a more comprehensive evaluation of model performance.
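The cosine relationship the abstract refers to mirrors the one behind the Taylor diagram, generalized to vector fields. Assuming the natural definitions (RMSL as the root mean square vector length, the similarity coefficient as a normalized mean dot product; the paper's exact formulas may differ in detail), the relationship can be sketched and verified numerically:

```python
import math
import random

def vfe_stats(model, obs):
    """Statistics behind the VFE diagram, for two vector fields given as
    lists of equal-length vectors (one vector per grid point, where each
    vector stacks the values of several normalized variables).

    Returns (RMSL_model, RMSL_obs, similarity coefficient, RMSVD)."""
    n = len(model)
    rmsl_m = math.sqrt(sum(sum(c * c for c in v) for v in model) / n)
    rmsl_o = math.sqrt(sum(sum(c * c for c in v) for v in obs) / n)
    # Vector field similarity coefficient: mean dot product, normalized
    # by the two RMS lengths (so it lies in [-1, 1]).
    dot = sum(sum(a * b for a, b in zip(vm, vo))
              for vm, vo in zip(model, obs)) / n
    r_v = dot / (rmsl_m * rmsl_o)
    # Root mean square vector deviation.
    rmsvd = math.sqrt(sum(sum((a - b) ** 2 for a, b in zip(vm, vo))
                          for vm, vo in zip(model, obs)) / n)
    return rmsl_m, rmsl_o, r_v, rmsvd
```

Under these definitions the law-of-cosines identity RMSVD² = RMSL_m² + RMSL_o² − 2·RMSL_m·RMSL_o·R holds exactly, which is what lets one diagram display how much of the overall deviation comes from amplitude error versus poor pattern similarity.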


Mark from the Midwest

We already have methods to evaluate any model; sounds like these guys are trying to rewrite the rules. Of course, for many so-called climate scientists the real criterion is whether the model helps them get more funding.


The derivatives market of climate modeling. Modeling the model.

It’s a model derived from expectations. The main failure of climate science is not modifying those expectations when they are demonstrably incorrect.


They just pressure the data keepers to make convenient adjustments to observation.
That is post-modern climate science at work.

Louis Hooffstetter

Richard Feynman said it best:

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”
Dr. Richard P. Feynman

LouisH: climate isn’t
an experimental science.

Patrick MJD

“crackers345 November 3, 2017 at 7:09 pm”

Define climate, mathematically.

Patrick MJD,

Here are the equations that matter, relative to the planet’s sensitivity to forcing (i.e. the effect on the surface temperature of changes in Pi).

Pi(t) = Po(t) + dE(t)/dt

Pi(t) is the post-albedo energy arriving from the Sun, given as Psun*(1 – a), where Psun is the incoming solar power and a is the albedo. Po(t) is the energy emitted by the planet, which in LTE (when dE(t)/dt == 0) is equal to Pi. E is the energy stored by the planet, which increases when Pi is greater than Po and decreases when Pi is less than Po.

Ps(t) = o*Ts^4
Po(t) = e*Ps(t)
Po(t) = e*o*Ts^4

Ps(t) is the emissions of the surface, where the corresponding temperature, Ts, is given by the SB law. The constant o is the SB constant (5.67E-8 W/m^2 per K^4). The coefficient e is the ratio between the power emitted by the planet and the power emitted by the surface, and is the emissivity of an EQUIVALENT gray-body emitter whose temperature is Ts.

Ts(t) = k*E(t)

The surface temperature Ts is linearly proportional to the energy stored by the system E (i.e. one calorie increases the temperature of 1cc of water by 1C).

Solving these equations for the sensitivity, dTs(t)/dPi(t), we get,

dTs(t)/dPi(t) = (4*e*o*Ts^3)^-1

The measured value for e is about 0.61 and the measured value for Ts is about 287.5K. Plugging in the numbers, the sensitivity is 0.3 C per W/m^2. To the extent that some solar input does work that does not affect the surface temperature, the sensitivity will necessarily be less than this. Note that k, the linear proportionality constant between stored energy and temperature, drops out of the equation for the sensitivity; moreover, the sensitivity is highly temperature dependent, going as T^-3.
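The arithmetic in this comment is easy to check; plugging the stated values into the stated formula:

```python
def sensitivity(e=0.61, Ts=287.5, sigma=5.67e-8):
    """dTs/dPi = 1 / (4*e*sigma*Ts**3), per the derivation above.
    Defaults are the values quoted in the comment (e ~ 0.61,
    Ts ~ 287.5 K, sigma = Stefan-Boltzmann constant)."""
    return 1.0 / (4.0 * e * sigma * Ts ** 3)

print(round(sensitivity(), 2))  # prints 0.3
```

That reproduces the quoted ~0.3 C per W/m^2; whether those inputs are the right ones is, of course, the substance of the thread.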

The formatter removed text around the “less than” and “greater than” characters. It should say,

When Pi is greater than Po, E increases along with Ts. When Pi is less than Po, E decreases along with Ts.

co2isnotevil November 4, 2017 at 10:14 am

Patrick MJD,

Here are the equations that matter, relative to the planets sensitivity to forcing (i.e. affect of the surface temperature to changes in Pi).

Pi(t) = Po(t) + dE(t)/dt

Pi(t) is the post albedo energy arriving from the Sun, given as Psun*(1 – a), where Psun is the incoming solar power and a is the albedo. …

Thanks, co2. The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.

This is because, in the all-important tropics where most of the solar energy enters the system, albedo goes up with the temperature. They are very highly correlated, as you can see below.

Since your analysis does NOT include this critical active temperature control mechanism, I fear that it cannot be used to calculate the sensitivity.



“The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.”

Not as much as you think. Yes, the albedo in polar regions is larger than equatorial regions owing to ice and snow, but the decreasing albedo from melting ice and snow is quite small. It was larger coming out of the last ice age when there was a lot more of the surface covered in ice, but today, the average fraction of the planet covered by ice is pretty close to the minimum possible. Average polar temps are far below freezing and no amount of GHG action will ever be enough to melt it all and prevent it from returning in the winter. About the only thing that will cause this is when the Sun enters its red giant phase.

Considering that 2/3 of the planet is covered by clouds, which have about the same reflectivity as ice, 2/3 of all future melted ice has no effect on the net albedo. Polar regions receive less insolation to begin with, and when you calculate the increase in the incident power from melting all ice and snow on the planet and distribute that power across the entire planet, it’s only a few W/m^2 and less than what’s required to achieve the global emissions increase (temperature increase) they claim arises from doubling CO2.

The sensitivity expressed as a change in temperature per change in input power, dTs(t)/dPi(t), is already a function of temperature, and that function of temperature is independent of the albedo. Nonetheless, since (1 – a) enters linearly, just as e does, whatever effect albedo has can be rolled into an equivalent value of e, both of which can be expressed as functions of the fraction of the planet covered by clouds. Note that the sensitivity expressed as a change in surface emissions per change in input power is constant, where

dPs(t)/dPi(t) = 1/e

Yes, e is a higher order function of temperature, but when we measure it over the last couple of decades, it’s remarkably constant coming in at about 0.6, where dPs(t)/dPi(t) is about 1.6 W/m^2 of Ps per W/m^2 of Pi. It’s even relatively constant from the poles to the equator where e increases only slightly as the average temperature transitions through freezing.

Patrick MJD commented >>Define climate, mathematically.<<

too clever by five-halfs.

One way to see if a model of a causal system is at least plausible is to vary the initial conditions and run the model. The model should always end up in the same state.

Curve fitting GCM’s to expectations requires so many assumptions and adjustments, any real physics gets lost. The observable consequence of this is the large effect initial conditions have on the modeled results. This is often misinterpreted as the consequences of chaos and complexity but is more symptomatic of an unstable model or uninitialized data.
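The initial-conditions test described above can be illustrated with a toy (deliberately non-chaotic) relaxation model; this is only an illustration of the principle, not a claim about any GCM:

```python
def relax(T0, T_eq=288.0, k=0.2, dt=1.0, steps=200):
    """Euler-step the toy model dT/dt = k*(T_eq - T).

    A stable, well-posed model of this kind forgets its initial
    condition: every starting temperature T0 converges to the same
    equilibrium T_eq (all parameter values here are arbitrary)."""
    T = T0
    for _ in range(steps):
        T += k * (T_eq - T) * dt
    return T
```

Here relax(250.0) and relax(320.0) both land on 288.0. A model whose long-run output depends strongly on its starting values is signaling instability or an unforgotten transient, which is the commenter's point.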

the full set of initial conditions aren’t
known — in particular deep ocean
currents, and aerosol loading.

Initial conditions establish the starting values for state variables, where the model adjusts the state variables as the model runs and if the model is correct, the state variables will converge to correct values, independent of the starting value.

If you’re talking about model coefficients, which the model doesn’t adjust, it’s even worse, because only one value of the coefficient is correct and all others are not; thus averaging across different values doesn’t help.

Patrick MJD

“crackers345 November 3, 2017 at 7:10 pm

the full set of initial conditions aren’t

So “conditions” in the models are not known? Thank you for confirming models are rubbish!

evil: climate models do not
solve an initial value problem,
they solve a boundary value

(ever studied PDEs?)

If it’s solving boundary problems, then it’s solving the wrong problem. Models are supposed to model how state changes. Besides, the boundaries are well known and only two of them matter. One is the boundary between the atmosphere and space, and the other is between the atmosphere and the surface.

Patrick MJD commented >>So “conditions” in the models are not known? Thank you for confirming models are rubbish! <<

you clearly do not understand
GCMs, or how they are initialized

(they don't solve an initial value problem.)

Andy Pattullo

I agree. There is plenty of straightforward evidence and some very valuable expert advice (e.g. Dr. Judith Curry) that most current GCMs are useless in interpreting the real world. I can’t help but think this is an attempt to create some custom metric by which individuals may claim value in models that doesn’t exist (but I could be wrong). It smells a lot like how stinky subprime mortgages were packaged into larger tranches and then folded into major investment vehicles of no real ultimate value while disguising all of the high risk and poor judgment that went into the original loans.

george e. smith

Well isn’t that what …. average …. is ?

Just a fictitious hodge-podge of a bunch of unrelated things that weren’t exactly observed by anyone anywhere, any how. But modelers get their jollies by imagining that it means something; well something besides maybe more grant moneys.


M Seward

Two possibilities with this.

1 It was put together by Chinese pinheads who think they are angels and this will tell them how much funding they need to meet their kpi’s going forward and the ‘pyramid’ characterisation just did not make it through translation so did not register.

2 It is a spoof, the giveaway being the ‘pyramid’ characterisation.

Who knows? Who cares?

Carbon BIgfoot

Maslow’s theory of needs does not apply to self-actualization of failed theory.


Lame….models will never be right when they are constantly changing/adjusting temp history
History that they back cast to today….will not even be the same by the time they run the model
…and all the other crap they do to temp history

…and a few hundred other things


… and when there are several temperature datasets to pick and choose.

Models are often fed with one temperature dataset and the results are compared to a different temperature dataset. weather balloons or HadCRUT3 vs. HadCRUT4.


When applied to current climate models do the results correlate with real world performance?

Lipstick on a pig.




Well, that’s better than trying to put a pig on lipstick. 🙂

Bruce Cobb

The paper has lots of gobbledygook and horseshit, so good on them for that. Could definitely use more cowbell though.

F. Leghorn

Funny how ALL models could be evaluated by their actual predictions. I guess that would be too easy.


Not if they continually adjust inputs (data) and tweak assumptions to force their preconceived “predictions.” Way back when, just before all this was hitting the media, I was very much a lukewarmer. I then listened to a presentation where it was obvious that the PI was changing the data to fit their assumption. I was also dealing with federal government scientists on other issues. It was not a pleasant interaction. So I began to question everything they were doing, not just my normal inborn skepticism. Like many things in and around government, I have begun to believe that for CAGW “scientists” it is more than just ego and grant money, but power. They have had a taste of power. Think about it! Almost all the governments in the world have people working on this issue and developing dramatic changes in policies that will affect the economic, and therefore political, structure of the entire world.


There are different levels of the true believers. There are the devious and power hungry as you say, but also many with ulterior motives (anti-capitalistic, anti-american, socialist, etc.), and of course plenty of useful idiots.

Thomas Homer

” … ALL models could be evaluated by their actual predictions”

Even with mostly accurate predictions, a model’s underlying assumptions may be questionable. As an example, models were established for astronomical orbits based on a geocentric assumption. Celestial spheres were necessarily introduced to explain the planetary orbits. These models predicted those orbits quite well. Of course, the geocentric assumption came into question, and new models with a heliocentric assumption were shown to be just as accurate without the need for celestial spheres.

F. Leghorn

In other words they doctored the data. Deja vu all over again.

squiggy9000 November 3, 2017 at 12:42 pm

In other words they doctored the data. Deja vu all over again.

No, they kept the data and changed the theory.



george e. smith

And Lunarcentric models would be just as accurate, just more complicated.
Well the Mandelbrot Set is pretty complicated; and it isn’t even a model of anything !



No, they kept the data and changed the theory.

As far as I know, there was no physical theory behind the geocentric model. The heliocentric model can be explained by the theory of universal gravitation.

My 2 cents.

Gew, beg to differ a bit. The Mandelbrot set only becomes visible if you program its recursive function a certain way (escape within 100 recursions), done over the complex plane from -1 to +1, -i to +i. It is an inverse of Julia sets. So it exists, for sure; just not obvious without some math effort. Unlike climate science, it is fully reproducible. I programmed a Mandelbrot set generator myself over 20 years ago. Slow compared to later algorithms, since brute force rather than spherical approximation.
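The escape-time recursion described here is a few lines in any language; a minimal Python sketch with the 100-iteration cap mentioned above:

```python
def mandelbrot(c, max_iter=100):
    """Escape-time test: iterate z -> z*z + c starting from z = 0.

    Returns the iteration count at which |z| exceeds 2 (the point has
    escaped), or max_iter if the point appears to belong to the set."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter
```

Mapping the returned count to a color over a grid of points in the complex plane produces the familiar picture; the result is, as the commenter says, fully reproducible.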

Tom Halla

Trying to reduce something as complex as climate to a single number reminds me of Swift and his parody of science.


Oh, that’s easy…the answer is 42 (see Hitchhikers Guide to the Galaxy).

george e. smith

How about 3 ; so it’s the same as Pi.


They are doing something different

Start with a thorough understanding of what causes climate change.

Even a one page summary is acceptable.

Without that understanding, there are no real climate models.

Unfortunately that understanding does not exist today.

Therefore, we have only wild guess computer games, falsely called “climate models”,
that will make wrong predictions … until the temperature actuals are eventually “adjusted”
enough so the predictions look better!

The average temperature of our planet’s surface has remained in a one degree C. range since 1880, even with haphazard measurements, lots of surface area not measured at all, “adjustments” that may have doubled the warming in the raw data, and data owned by people who WANT to see a lot of global warming … yet we’ve been in a one degree C. range for 137 years!

Why would anyone with a functioning brain think such a tight average temperature range over 137 years portends a “coming climate change catastrophe”?

Climate blog for non-scientists:
over 12,000 page views so far
No ads – no money for me – a public service

Michael Jankowski

Poorly worded, but as it notes, most models seem to be scored on an individual variable (global temperature). This at least would seemingly call BS on models that fare miserably at reproducing other parameters with accuracy.

Jim Gorman

You kind of hit the nail on the head. Even if one could say that a model semi-accurately forecasted ‘global temps’, so what. What is really needed is accuracy to the point where we know what will happen within areas/regions. Will the Outback, desert southwest of the US, or the Sahara see most of the warming? Or will it only be at the poles? Or maybe evenly spread? So, so much we (or the modelers) don’t know!


My understanding is that the Chinese categorically reject CAGW and Climate Change altogether.
So they might be up to something else, like developing a new suite of models that actually produce useful long range forecasts. And maybe taking a poke at Western climate science in the process.


Not sure they reject CAGW altogether, China just has a entirely different perspective about climate. In their long recorded history they have faced climate change. They understand that climate changes regardless of what humans do or don’t do. They know when they have been rich it was far easier to adapt to change. When poor China has had prolonged suffering and strife. They have learned that to build wealth, besides stealing technology from elsewhere, they need cheap and abundant energy. Since they see themselves as THE rising world power they are quite happy to allow Europe and North America to play this stupid CAGW game.


This climate model effort is analogous to making further adjustments to the epicycles and deferents in a Ptolemaic planetary system. With each refinement, new complications arise elsewhere in the model.

Their underlying fundamental assumptions are wrong in both cases.


As a bit of trivia, the Ptolemaic model does work well in the gross sense of the big picture, as long as troublesome movement details are not examined too closely.

Planetarium projectors use the simplified mathematical relationships embodied in Ptolemaic calculations to project the night sky on the curved ceiling, creating the Earth-centric view that wows and astonishes everyone when they first see this very realistic presentation. The Ptolemaic sky-projection model works deceptively well in this application. This is the exact same model trap that climate modelers have fallen into. It appears correct to them in the gross, larger sense, so they believe their models represent fundamental realities of climate. They could not be more mistaken.

For more on the Ptolemaic model used in planetarium projectors:

Jim Gorman



The first rule of climate ‘science’: if the models and reality differ in value, it is reality which is always in error. That somewhat undermines the need for this research. In addition, the authors have made an error in their maths. For it is clear the ‘value’ of any model is not a function of its validity; rather, it is in direct relationship with the degree of support the model offers to the AGW faith. Science has f-all to do with it.


I have actually had this argument with federal scientists in a public management meeting. After saying that for the issue at hand we had the best data set they had ever seen, I asked: if we had ALL the data but their models disagreed with the data, what would they believe, and what would they base their recommendations to rule- and policy-makers on? Their answer: the computer models. When one of the senior committee officials asked them to explain, they instead requested a 15-minute recess, which turned into an hour. The politically appointed members of the management unit were NOT happy people. It led to a brief investigation of all those in that work unit. It helped, but only briefly.


Heretic!!! They surely labeled you with the dreaded “D” word as a way to salve their cognitive dissonance.

Gary Pearse

I’ll wait for McIntyre or Briggs to work this over! First, it’s already compromised science when you have to use statistics at all (that ought to raise a din from all sides). It’s essential for the social sciences, and we know the wiggle room it makes for those irredeemably ideologically corrupted disciplines (of which climate science is the best example).

To me, this analysis constructs a phony “index” that will give a totally wrong model a pass. If you have a hard-wired falsified theory based on CO2 as the basis and you adjust this with a concocted uber-aerosols effect to protect the theory and an additional wrong-signed cloud parameter, you could end up with an excellent forecast with a little prestidigitation and get an index of 0.99.

I’ve been fearing this possibility as a pretext for jailing deplorables and putting the world under elitist governance. Thankfully their hubris and post no-idjit-left-behind enrollment policies in institutions of higher learning seems to have blocked their vision. Adding a coefficient “c” equal to 1/3 to multiply their formula by would have given them a heck of a scary fit.

Clyde Spencer

It appears to me that the authors have presented a rigorous, quantitative method for evaluating multiple model results to determine the best compromise model. However, one often is more concerned about one of the variables than the others. They then recommend assigning subjective weighting to the variable(s) of primary interest. They have then degraded the quantitative approach with subjective assessments of the weighting to be assigned.

The authors acknowledge that it is generally recognized that some models do a better job of predicting future temperatures than future precipitation, and vice versa. That is an interesting state of affairs because there is strong interaction between temperature and precipitation. That is, the surface commonly cools down during and immediately after a summer rain, and high surface temperatures may result in virga. So, at first blush, it would appear that there are serious problems with the assumptions or constructs within the models when these interacting variables have different inaccuracies.

It should be obvious that the models aren’t fit for the purpose for which models are usually built, i.e. to predict future states with the perturbation of one or more input variables. Being able to identify the best compromise model is indeed like putting “lipstick on a pig.” What is needed is a paradigm shift in modeling where the numerous output variables, such as temperature and precipitation, are consistent with each other and track historical records much better than the 3X overestimate of future temperatures currently seen.


The right way to evaluate a climate model is to watch and wait for a few hundred centuries. This just isn’t compatible with human lifespan.


The same methodology might be useful in detecting chronic bias not just of the model but the operator and the users.

How can the predictions for a chaotic system, compared to reality, be any more than chance?

Jim Gorman

Worse than that, you can only compare the results to the past, that is, what has already happened. Is it more than chance that a model can predict global temps accurately in the future even if it happened to stumble across the correct answer one time?

climate isnt chaotic


“climate isnt chaotic”

That’s not what the IPCC say.

“In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

IPCC Working Group I: The Scientific Basis, Third Assessment Report (TAR), Chapter 14 (final para., p. 774).

It’s not what Edward Lorenz said either.

“Lorenz’s early insights marked the beginning of a new field of study that impacted not just the field of mathematics but virtually every branch of science–biological, physical and social. In meteorology, it led to the conclusion that it may be fundamentally impossible to predict weather beyond two or three weeks with a reasonable degree of accuracy.

Some scientists have since asserted that the 20th century will be remembered for three scientific revolutions–relativity, quantum mechanics and chaos.”

Now, who to believe…a superannuated English Major with a record of Mannipulating temperature data to fit the AGW narrative or IPCC Working Group I and Ed Lorenz, one of the most distinguished climate scientists ever born…
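Lorenz's sensitive dependence on initial conditions is simple to demonstrate; a toy sketch of his 1963 system with the standard textbook parameters, crudely integrated with Euler steps (step size and run length chosen only for illustration):

```python
def lorenz_run(x, y, z, steps=2000, dt=0.01,
               sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz 1963 system and return the final state.

    Trajectories stay bounded on the attractor, but two starts that
    differ by one part in a million end up far apart."""
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z
```

Comparing lorenz_run(1.0, 1.0, 1.0) with lorenz_run(1.000001, 1.0, 1.0) shows the two trajectories diverging completely, which is exactly the weather-prediction limit Lorenz identified.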

Patrick MJD

“Steven Mosher November 3, 2017 at 4:07 pm

climate isnt chaotic”

This has to rate as dumbest post EVAH!


Yep, Mosh has degenerated into an empty sock.

Maybe Muller removed his hand ?


Off topic a bit, but I’m curious to get your takes on the flurry of propaganda articles coming out just before the UN climate talks starting on 11/6 and the findings of the “major federal climate report” that was released today.

Ha ha, the article misstates facts from the very opening by claiming that these models are already “heavily relied on” for their performance. Fraudulent reinitializations are already known to be the practice that allows some scientists to say this with a straight face.

Scott Cater

My guess is that the creators of this report are hold overs from the prior administration.

Crispin in Waterloo

A computer big enough to run a programme complex enough to realistically represent the evolution of the weather through the ages thus creating a picture of the climate, would run very slowly. In fact it would run at about the same speed as the actual climate.

This coincidence would give the modelers something to gauge their success by, as the actual performance could be compared with the computer-calculated performance in real time, side by side, for generations. After some time, tweaking and all, they would be able to demonstrate they can back-cast the whole climate accurately. I think this would be a major step forward.


Biosphere I, the original experiment.

Walter Sobchak

Mathematical onanism with lubricants.

Another Ian

A comment from a management school where the pyramid of management was being explained

“Oh is that how it works? I thought it was like a vegetarian’s outhouse where the turds float to the top”

Just saying.

The utility and skillfulness of computer models depends on:
1. how well the processes which they model are understood,
2. how faithfully those processes are simulated in the computer code, and
3. whether the results can be repeatedly tested so that the models can be validated and refined.

Specialized models, which try to model reasonably well-understood processes like PGR and radiation transport, are useful, because the processes they model are manageably simple and well-understood.

Weather forecasting models are also useful, even though the processes they model are very complex, because the models’ short-term predictions can be repeatedly tested, allowing the models to be validated and refined.

But more ambitious models, like GCMs, which attempt to simulate the combined effects of many poorly-understood processes, over time periods too long to allow repeated testing and refinement, are of dubious utility.

E.g., NASA’s ModelE2 consists of about a half-million lines of moldy Fortran code, which it is safe to assume nobody actually understands. They’ve got so many fudge factors, “knobs” and pseudo-random number generator calls in there that they can make it do just about anything at all, but it doesn’t in any sense represent an understanding of the Earth’s climate system. What’s more, unlike weather models, which are comparably complex but get tested every week, the predictions of those GCMs are untestable. Ask any computer scientist whether he would trust an untestable 500,000-line Fortran program as the basis for multi-million-dollar decisions!

Worst of all are so-called “semi-empirical models,” which aren’t actually models at all. So-called “semi-empirical modeling” is an oxymoron: “modeling” that doesn’t actually model anything. It is similar to modeling, but without reference to any physical basis. It is really just curve-matching. It can be made to produce just about any desired result.
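"Curve-matching" in this sense is easy to illustrate: polynomial interpolation will reproduce any finite record exactly while saying nothing physical. A toy sketch (the data points here are made up for illustration):

```python
def lagrange_fit(xs, ys):
    """Return a polynomial (as a callable) passing exactly through the
    points (xs[i], ys[i]), via Lagrange interpolation.

    The fit is perfect in-sample regardless of what generated the data,
    which is why a perfect fit alone demonstrates nothing physical."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p
```

Any set of points, however arbitrary, gets an exact "model" this way, and the same curve typically behaves wildly outside the fitted range, which is the criticism being made of semi-empirical projections.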

GCMs are subject to criticisms that they don’t accurately model the real world, because of inconsistency with observations of things like clouds and the predicted tropical mid-tropospheric hot spot. Semi-empirical modelers neatly avoid such criticism, by not even trying to model the real world. It’s the worst sort of junk science.


Please make that multi-billion-dollar decisions, which are much closer to a trillion dollars than a few billion.

Mark - Helsinki

“Inevitably, the higher level of metrics loses detailed statistical information in contrast to the lower level of metrics.”

Does this mean destroying accuracy with averaging?

Mark - Helsinki

*accuracy of ‘first level’ metrics values

Mark - Helsinki

“According to the study, higher level of metrics is derived from and concisely summarizes the lower level of metrics ”

lower resolution?

Mark - Helsinki

The obfuscation is strong in this one 😀 I smell a rat

Steve Carousso

quantifying how bad they stink


“Directly contradicting much of the Trump administration’s position on climate change, 13 federal agencies unveiled an exhaustive scientific report on Friday that says humans are the dominant cause of the global temperature rise that has created the warmest period in the history of civilization.”

Oh, those rats.

richard verney

The swamp requires draining and we need to get rid of that rat infested hell hole.

Tom in Florida

These are truly “if” and “then” models. If the input conditions actually come to pass, then the results will be accurate. The problem seems to be that they never get the “if” anywhere near reality.

“The first level of metrics, including the commonly used correlation coefficient, RMS value, and RMSE, measures model performance in terms of individual variables.”

No. Performance is measured by comparing the results to actual measured conditions. These metrics are structured to measure the assumptions of one model compared to the assumptions of other models, and have no relevance to empirical data. Here is one more example of modelers trying to justify their existence.

NW sage

The cast of the TV show Stargate had/has a word to describe papers of this kind: technobabble – it means all the things ‘babble’ means and it sounds suitably technical (meaning obtuse)

Dr. S. Jeevananda Reddy

In the settled-science scenario, on one side CO2 is increasing with time, and on the other the climate sensitivity factor is coming down over time, as presented by the IPCC in successive reports. This means the resulting temperature presents a zero trend. What will be the result of model tests???

Dr. S. Jeevananda Reddy

very few climate models are intended to “predict” climate, and that’s not how scientists use them.

they use them as experiments — change this part over here, and see if it matches reality.

might be a particular parametrization, or a different way of handling sea ice, or clouds, or aerosol.

warming to 2100 can’t be predicted anyway. models are run to 2100 with some assumed scenarios, none of which will actually take place.

models are calculations, very very complex calculations, & the questions are what if you change this term A here to be a different term A’.

Tom Halla

So if the computer models are not actually trying to describe the natural world climate, and be judged by their conformity to that real world, perhaps the writers of such exercises could be moved to the philosophy or theology departments, and no longer pretend to be doing science.

Clyde Spencer

If the models can’t predict the future, then there is no evidence that they are simulating reality. And if the outputs can’t be trusted to be realistic, how can one trust that anything that comes from them tells us anything about reality? The only thing that one can say with confidence is that if you change “term A,” then you will probably get results that are bounded by an uncertainty range of an ensemble. Actually, it is worse than that because of the tuning that goes into the models. It is not unlike picking a number between 1 and 1,000 to characterize the average age of men on Earth.

richard verney

How about taking CO2 out of the mix and using RAW temp data from stations wholly not impacted upon by UHI, and hey presto.

In all likelihood the temperature today is no warmer than it was in around the late 1930s/1940 notwithstanding that approximately 95% of all manmade CO2 emissions have taken place since that date.

The funny thing is that that is what Briffa’s and Mann’s tree ring data was saying, and that is why they truncated it and spliced on the adjusted thermometer record.

I trust most of you recognize the following as the Stefan Boltzmann radiation equation that quantifies the amount of energy emitted from a surface.
Q = σ * ε * A * T^4
Two points: this radiation is a surface property, NOT a bulk property, and its direction is from hot to cold. There are those who suggest that, since all surfaces emit based on their surface temperature, energy can flow from colder to hotter with a resulting “net” flow. This supposedly explains how “back” radiation can flow from the cold troposphere to the warmer “surface” (1.5 m above the ground). This phenomenon is not present in the radiative flow calculation from the sun to the earth (1,368 W/m^2) and earth’s ToA “back” radiation (240 W/m^2) to the sun.
As often seen in text books this “net” phenomenon is reflected in a modified equation where ΔT, (T1 – T2), is simply substituted for T, i.e.:
Q12 = σ * ε * A * (T1^4 – T2^4)
So, if 1 is hotter than 2 “net” energy flows from hot to cold. If 2 is hotter than 1 the result is negative and “net” energy still flows from hotter 2 to colder 1.
This substitution is mathematically illegal. Here is how it actually works.
Two S-B surfaces a & b, any temperature. (BB if ε = 1.0, GB if ε < 1.0)
Qa = σ * εa * Aa * Ta^4
Qb = σ * εb * Ab * Tb^4
Which surface is hot or cold is irrelevant. Energy radiates from the hot to the cold and according to RGHE “theory” heat also radiates from cold to hot leaving a “net” radiative LWIR heat flow.
So, let’s do that math.
(Qa – Qb) = (σ – σ) * (εa – εb) * (Aa – Ab) * (Ta^4 – Tb^4)
(σ – σ) = 0, i.e. ZERO!!!
Right side of equation goes to zero! Also goes to zero if ε, A or T are equal.
What does this illustrate/prove?
Conservation of energy: Qa = Qb
ZERO algebraic evidence of “back” cold to hot or “net” radiation.
Good thing, since that would grossly violate the laws of thermodynamics.

richard verney

[embedded image: marked-up graphic]

Hey, this is MY marked up graphic!! R&C thoughts?

so you believe that, unlike all other objects/substances in the universe, the atmosphere doesn’t radiate?


No, it radiates – from 32 km where the molecules end not primarily from the ground/surface.

nickreality65 commented >>No, it radiates – from 32 km where the molecules end not primarily from the ground/surface. <<

so atmospheric gases radiate at 32 km altitude, but these gases don't radiate near the surface?

and you have evidence of this? if so it would completely rewrite physics.

can't wait to read it


At the surface they radiate 63 W/m^2, NOT 396 W/m^2. BTW “surface” is 1.5 m above the ground.

nickreality65 November 5, 2017 at 9:51 am


At the surface they radiate 63 W/m^2, NOT 396 W/m^2.

Per Stefan-Boltzmann, a black body radiating at 63 W/m^2 has a temperature of -90°C … I believe you are not referring to “radiation” (how much the body is radiating). Instead, you seem to be referring to “net radiation” (how much the body is radiating MINUS how much the body is absorbing).

While both are valid ways to look at a situation, for mathematical calculations you need to consider the individual energy fluxes.

Next, you say that the “333 W/m2 comes from nowhere does nothing” … but in fact it comes from the atmosphere, and it leaves the surface warmer than it would be without the radiation from the atmosphere.

Finally, you say:

BTW “surface” is 1.5 m above the ground.

While this is true for what is called the “surface air temperature”, it is NOT true for the K/T diagram you are discussing. In that diagram, the surface is the actual surface.
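The −90 °C figure quoted above follows directly from inverting the Stefan-Boltzmann law. A minimal sketch, assuming a blackbody (ε = 1):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def sb_temperature(q_wm2, emissivity=1.0):
    """Invert Q = sigma * eps * T^4 to get the temperature in kelvin."""
    return (q_wm2 / (SIGMA * emissivity)) ** 0.25

t63 = sb_temperature(63.0)    # a blackbody emitting 63 W/m^2
t396 = sb_temperature(396.0)  # a blackbody emitting 396 W/m^2
print(round(t63 - 273.15, 1))   # -90.6, the figure quoted above
print(round(t396 - 273.15, 1))  # 15.9, near the mean surface temperature
```

The 396 W/m^2 case lands near 16 °C, which is why that number is usually read as the full surface emission rather than a net flux.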


Clyde Spencer

I’m afraid that you have made a mistake in your algebra. The two sigmas do NOT equate to zero! The S-B constant of proportionality is the same for both expressions. Assume for the sake of illustration that both emissivities are equal. And, let’s assume that the areas are equal. (Actually, the area of the atmospheric emissions is slightly larger than the surface of Earth, but I want to keep it simple for illustration.) Therefore, the three parameters are common to both difference expressions and can be extracted. Thus, Qnet simplifies to the product of the 3 parameters multiplied by the difference of the absolute temperatures to the fourth power. That is, the net energy is proportional to the difference between the temps raised to the 4th power.

Think about it for a moment. If the temperatures were equal, there would be no net energy difference. If one temperature were absolute zero, there would be only one term surviving, the one with the positive temperature. All values in between these two extremes are possible. I think that the atmospheric energy component should be divided by two to account for the fact that half is radiating into space, and half is radiating back towards the surface. That can be taken care of with the area term.

“…half is radiating into space, and half is radiating back…”


Sounds like W.E. and ACS’s infamous, opaque, dull, multi-shell models. Bogus. See my paper: “We don’t need no stinkin’ greenhouse – Warning: science ahead”.

Clyde Spencer

You didn’t respond to my major criticism that your algebra is wrong. Why should I bother reading more of the same?

Count to 10

Clyde is right. Algebraically, what you wrote is abcd - efgh = (a-e)(b-f)(c-g)(d-h). This is a pretty big mangling of distributivity (if I have my terms correct).
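Count to 10's point is easy to verify numerically; a quick sketch with arbitrary values:

```python
# Arbitrary values make the counterexample: the product of differences is
# not the difference of products.
a, b, c, d = 2.0, 3.0, 5.0, 7.0
e, f, g, h = 1.0, 1.0, 1.0, 1.0

lhs = a * b * c * d - e * f * g * h          # 210 - 1 = 209
rhs = (a - e) * (b - f) * (c - g) * (d - h)  # 1 * 2 * 4 * 6 = 48
print(lhs, rhs)  # 209.0 48.0

# Factoring IS legal when the first three factors are shared, which is the
# assumption hiding inside sigma * eps * A * (Ta^4 - Tb^4):
assert a * b * c * d - a * b * c * h == a * b * c * (d - h)
```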

Q = σ * ε * A * T^4
“Surface”  σ          ε    A      T    Result
A          5.670E-08  0.9  10000  288  3.511E+06
B          5.670E-08  0.5  12000  213  7.002E+05
A-B                                    2.810E+06

Q = σ * ε * A * (TA^4 – TB^4)
A          5.670E-08  0.9  10000  288^4 – 213^4  2.460E+06

Q = σ * ε * A * T^4
“Surface”  σ          ε    A      T    Result
A          5.670E-08  0.5  12000  288  2.340E+06
B          5.670E-08  0.5  12000  255  1.438E+06
A-B                                    9.020E+05

Q = σ * ε * A * (TA^4 – TB^4)
A          5.670E-08  0.5  12000  273^4 – 255^4  4.512E+05

My point is that you can’t just replace T with dT.
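The arithmetic in the first table above can be reproduced in a few lines; it confirms that substituting (TA^4 – TB^4) only matches the direct difference when σ, ε, and A are shared by both surfaces:

```python
# Reproducing the first table above: surfaces A and B with different
# emissivity and area, per Q = sigma * eps * A * T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def q(eps, area, t_k):
    return SIGMA * eps * area * t_k ** 4

qa = q(0.9, 10_000, 288.0)  # ~3.511e6 W
qb = q(0.5, 12_000, 213.0)  # ~7.002e5 W
direct = qa - qb            # ~2.810e6 W, the true difference

# Substituting (Ta^4 - Tb^4) while keeping A's eps and area gives a
# different number, because eps and A are NOT shared by both surfaces:
substituted = SIGMA * 0.9 * 10_000 * (288.0 ** 4 - 213.0 ** 4)  # ~2.460e6 W

# The substitution is exact only when sigma, eps, and A are common:
qc = q(0.5, 12_000, 288.0)
qd = q(0.5, 12_000, 255.0)
assert abs((qc - qd) - SIGMA * 0.5 * 12_000 * (288.0 ** 4 - 255.0 ** 4)) < 1.0
```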

Yeah, I screwed up pretty good. I ASSUMED that if T^4 could be replaced with dT^4 then so could the other terms, A w/ dA, ε w/ dε, σ w/ dσ. But that’s not what happened. There is an ASSUMPTION that σ, ε and A are constant so they can be pulled out of the parens, leaving behind dT^4.
Surface a and Surface b
W(a to b) = (σ * ε * A * T^4)a – (σ * ε * A * T^4)b or, w/ the ASSUMPTION, σ * ε * A * (Ta^4 – Tb^4)
As you mentioned, if Ta = Tb then the line goes to zero and Wa = Wb.
However, what happens if Ta = Tb and ε and A are NOT equal?
Then if Aa is larger than Ab, energy will flow from a to b EVEN THOUGH Ta = Tb.
Then if εa is larger than εb, energy will flow from a to b EVEN THOUGH Ta = Tb.
So, if Aa is larger than Ab, heat will flow from a to b even though Ta = Tb.
Sun Aa is huge compared to earth Ab.
And if εa is larger than εb, heat will flow from a to b even though Ta = Tb.
If surface a is opaque and dull and surface b is shiny and translucent, heat will flow from a to b even though Ta = Tb.
Should be easy enough to demonstrate in the lab, to Feynman’s satisfaction.
So, the area of the earth’s “surface” (How is that defined? The ground? or 1.5 m above the ground?) is enormous compared to the surface area of the GHGs (How is that even defined period?) as is the net flow.
The atmosphere is 99.96% transparent and with albedo reflective? How does that εa compare to GHGs εb?
I think what it all boils down to is the notion of “net” and “back” radiation is incorrect.
Which brings me to another point. Had a discussion w/ Scott Denning about this.
Denning’s hypothesis is: at 396 W/m^2 upwelling radiation the surface/ground will lose so much heat/energy so fast that it will get really^3 cold, even frozen. All that prevents this from happening is the 333 W/m^2 “back” radiation that compensates by warming/slowing the loss (396-333=63) the surface/ground, i.e. RGHE theory.
However, based on type K T/Cs I placed in the ground and the surface (1.5 m above the ground), the air heats and cools rapidly and a lot compared to the ground which heats and cools slowly.
During the day the sun heats both the air and ground and the air can be hotter than the mulch and grass covered ground. (Notice that the car, asphalt, hunks of iron, etc. can get much hotter than the air.)
At night the air cools quickly, becoming cooler than the ground, while the ground cools slowly, staying warmer than the air all night long.
As Feynman observed, if your theory doesn’t pass experiment, it’s wrong and RGHE, air warming the ground, fails the experiment.

Here’s how to evaluate a climate model.


At the top of the pyramid is a single index. Would that, properly, be: 42?

My plan is to invent a system, kinda like the Chinese system above, to judge scientists instead of climate models. Here’s the basic form, just as above.

It will have the same structure, where it starts by evaluating scientists on several different metrics. Those metric results move upwards to the midlevel in the graphic above, where they form the inputs to the multivariable integrated statistics system (MISS) regarding the scientist in question.

Then in the final step, shown at the top of the pyramid above, these MISS statistics feed into the high-level integrated topmost system (HITS). This final step “summarizes the three statistical quantities of second level of metrics into a single index and can be used to rank the performances” of various climate scientists.

At the end, this will give us a single number, the Scientific Value Index that perfectly expresses that scientist’s value to society. I predict that this system, which I have dubbed the “HIT and MISS System”, will allow us to … what was it … oh, yeah, “concisely summarize and evaluate” each scientist’s performance.

This is great, because it will settle all scientific debates immediately, definitively, and painlessly. If your Scientific Value Index is greater than that of your scientific opponent, you will be judged to have won the debate.


And if you can see what is wrong with my proposal … well, that’s exactly what is wrong with these scientists’ proposal for judging climate models.

Regards to everyone on what is a lovely rainy night here,


michael hart

How about we get Google to lend us one of their A.I. bots?

You know, the ones with fiendishly clever algorithms they use to “tease out” meaning from billions of unrelated pages about cats. The algorithms that can identify fake news, after an initial training period under the loving keystrokes of a Google/YouTube Hero, who will teach the long-suffering computer how to recognize and correct wrong-think in climate science and beyond.

Imagine such a vista…


Two alternate methods:
1: The Feynman method: if the data/results don’t match the model, the model is wrong.
2: The BOM method: if the data/results don’t match the model, change the data.

except the data/results are themselves sometimes wrong or (especially) incomplete, so evaluation is far more nuanced and complicated.

Count to 10

Realistically, the “average global temperature” parameter, however defined, should be unimportant in modeling climate changes. The models need to predict not only how temperature patterns change seasonally, regionally, and over the course of an average day, but also humidity and precipitation at those levels. If they get those all wrong, then there is no point in even checking whether their global average somehow tracks reality.

On that note, the most ridiculous thing I have seen on this whole topic is the way that measured increases in temperatures in specific conditions (winter, night, high latitude) are used to elevate the global average, which is then used to predict uniform warming everywhere and at all times.

co2isnotevil November 4, 2017 at 11:57 am


“The problem with this analysis is that Psun * (1-a), the amount of solar energy available after albedo reflections, is itself a function of the temperature.”

Not as much as you think. Yes, the albedo in polar regions is larger than equatorial regions owing to ice and snow, but the decreasing albedo from melting ice and snow is quite small. It was larger coming out of the last ice age when there was a lot more of the surface covered in ice, but today, the average fraction of the planet covered by ice is pretty close to the minimum possible. Average polar temps are far below freezing and no amount of GHG action will ever be enough to melt it all and prevent it from returning in the winter. About the only thing that will cause this is when the Sun enters its red giant phase.

Considering that 2/3 of the planet is covered by clouds, which have about the same reflectivity as ice, 2/3 of all future melted ice has no effect on the net albedo.

co2, it seems you’ve missed my point, likely my lack of clarity. Let me give it another shot.

First, the important correlation of temperature is not with the polar ice albedo.

The important correlation of temperature is with the tropical albedo, which is ruled by cloud cover and which responds quickly and dynamically to local temperatures. This in turn imposes strong controls on local temperatures.

Second, it is exactly that relationship between temperature and clouds that is missing in your analysis. When it gets warm in the tropics, clouds form. This changes the amount of solar energy available after albedo reflections … but you do not have any equation in your analysis for that most important connection.

Or in your terms,

a = f(T)

which means that the albedo (a) is some unknown function (f) of the temperature (T).

Where is that in your analysis?
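One way to see why a = f(T) matters is a toy zero-dimensional energy balance. The albedo function below is entirely hypothetical (the 0.003/K slope and the 288 K anchor are invented for illustration, not measured feedbacks); the point is only that making albedo depend on temperature moves the equilibrium:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1368.0       # solar constant, W/m^2

def albedo(t_k):
    # Hypothetical feedback: albedo rises with temperature. The 0.003/K
    # slope and 288 K anchor are invented purely for illustration.
    return min(0.6, max(0.1, 0.30 + 0.003 * (t_k - 288.0)))

def equilibrium_temp(albedo_fn, t0=255.0, iters=200):
    """Fixed-point iteration of sigma * T^4 = S * (1 - a(T)) / 4."""
    t = t0
    for _ in range(iters):
        t = (S * (1.0 - albedo_fn(t)) / (4.0 * SIGMA)) ** 0.25
    return t

fixed = equilibrium_temp(lambda t: 0.30)  # constant albedo: ~255 K
fed_back = equilibrium_temp(albedo)       # a = f(T): a different equilibrium
print(round(fixed, 1), round(fed_back, 1))
```

With constant albedo 0.30 this reproduces the textbook ~255 K effective temperature; with the invented feedback the equilibrium shifts by several kelvin, which is exactly the term the analysis under discussion leaves out.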



Anne Ominous

Not trying to criticize this analysis, but:

Unless it manages to properly carry uncertainties all the way up FROM the data TO the results (and I hope it does), it has potential to be just as wrong as all the others.


I have only skimmed through this article – lack of time – and have been unable to find anything that relates to step changes in climate. Have I missed it, or is it not there?
Presuming that it is not there, no climate model is going to fit the real data adequately. Climate frequently changes abruptly. Where should I go to read something about this, either a refutation or a support for my ideas?

Robin, that’s an interesting question that points to a more general problem with climate modeling. This is that many times, nature doesn’t do “gradual”. Nature does “edges”.

For example, there is no gradual transition from cloud to clear air. You are either in the cloud or you are not. Another example. Fifty miles out off the coast where I live, you often come across a clear line with green water on one side of the line and blue water on the other side of the line. It doesn’t shade gradually from one to the other. It undergoes, as you point out, a “step change”.

Computers, on the other hand, are the reverse of nature. They do “gradual” quite well … but step changes, not so much. I’m not saying that computers can’t do them … I’m saying that step changes are much harder to model accurately than are gradual changes.

Unfortunately, most of the interesting climate processes (tropical cumulus fields, dust devils, thunderstorms, the PDO, squall lines, the El Nino/La Nina pump, tornadoes, williwaws, cyclones, etc) are temperature-threshold based. When the temperature (or more accurately the temperature difference between surface and altitude) exceeds some local threshold, the phenomenon appears.

So ALL of them represent step changes. Makes for a very challenging system to model … and it’s the reason that the current class of climate models don’t work. All of those phenomena listed above act to regulate the temperature … but they are far too small to be included in the climate models.

As a result, they are trying to model the future temperature evolution of the planet, while not modeling the very climate phenomena that regulate the temperature … which is a fool’s errand.
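The threshold behavior described above can be caricatured in a few lines. Every number here is invented; the only point is that a threshold-triggered process produces step changes and caps the temperature in a way no smooth term does:

```python
def run_toy_surface(steps=100, forcing=1.0, threshold=30.0, dump=10.0):
    """Toy model: temperature ramps smoothly under constant forcing, but a
    threshold-triggered process (think thunderstorm) sheds heat in one jump.
    All numbers are invented; only the shape of the behavior matters."""
    t = 20.0
    history = []
    for _ in range(steps):
        t += forcing          # smooth, gradual warming
        if t > threshold:     # threshold exceeded: the process switches on
            t -= dump         # an abrupt step change, not a gradual one
        history.append(t)
    return history

h = run_toy_surface()
print(max(h))  # 30.0 -- the threshold process caps the temperature
```

However fine the forcing steps, the recorded series never exceeds the threshold and is full of discontinuous drops, which is the regulating, edge-dominated behavior described above.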