A Sea-Surface Temperature Picture Worth a Few Hundred Words!

We covered this paper when it was first released, here is some commentary on it – Anthony


Guest essay by PATRICK J. MICHAELS

On January 7 a paper by Veronika Eyring and 28 coauthors, titled “Taking Climate Model Evaluation to the Next Level,” appeared in Nature Climate Change, Nature’s journal devoted exclusively to this one obviously under-researched subject.

For years, you, dear readers, have been subjected to our railing about the unscientific way in which we forecast this century’s climate: we take 29 groups of models and average them. Anyone who knows weather forecasting, we repeatedly point out, realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions. Consequently, the daily forecast is usually a blend of a subset of the available models, or perhaps (as can be the case for winter storms) only one might be relied upon.
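The weather-forecast analogy can be made concrete. Below is a minimal sketch of the difference between equal weighting and a skill-weighted blend; the forecasts and error statistics are entirely hypothetical, and nothing here comes from Eyring et al. or any real model.

```python
# Toy comparison of equal-weight vs skill-weighted forecast blending.
# All forecasts and error statistics below are hypothetical.

def blend(forecasts, weights):
    """Weighted average of forecasts."""
    return sum(f * w for f, w in zip(forecasts, weights)) / sum(weights)

# Three hypothetical model forecasts of a temperature anomaly (deg C)
forecasts = [2.1, 3.0, 4.4]

# Equal weighting: the practice the paper calls suboptimal
equal = blend(forecasts, [1.0, 1.0, 1.0])

# Skill weighting, e.g. inverse of each model's past RMS forecast error
past_rmse = [0.3, 0.6, 1.2]               # hypothetical verification scores
skilled = blend(forecasts, [1.0 / e for e in past_rmse])

print(round(equal, 3), round(skilled, 3))  # → 3.167 2.686
```

The blend tilts toward the historically better models; with all the weight on a single model it reduces to the "rely on just one" case mentioned for winter storms.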

Finally the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says “there is now evidence that giving equal weight to each available model projection is suboptimal.”

A map of sea-surface temperature errors calculated when all the models are averaged up shows the problem writ large:

Annual sea-surface temperature error (modelled minus observed) averaged over the current family of climate models. From Eyring et al.

First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed. But, more important, the biggest errors are over some of the most climatically critical places on earth.

Start with the Southern Ocean. The models have almost the entire circumpolar sea too warm, much of it off by more than 1.5°C. Down around 60°S (the bottom of the map) water temperatures get down to near 0°C (because of its salinity, sea water freezes at around -2.0°C). Making errors in this range means making errors in ice formation. Further, all the moisture that falls upon Antarctica originates in this ocean, and simulating an ocean 1.5° too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will be precipitated over the continent as nonexistent snow.

The problem is, down there, the models are making errors over massive zones of whiteness, which by their nature absorb very little solar radiation. Where it’s not white, the surface warms up more quickly.

(To appreciate that, sit outside on a sunny but calm winter’s day and change your khakis from light to dark, the latter being much warmer.)

There are two other error fields that merit special attention: the hot blobs off the coasts of western South America and Africa. These are regions where relatively cool water upwells to the surface, driven in large part by the trade winds that blow into the earth’s thermal equator. For reasons not completely known, these winds sometimes slow or even reverse, upwelling is suppressed, and the warm anomaly known as El Niño emerges (there is a similar, but much more muted, version that sometimes appears off Africa).

There’s a current theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one. It’s not hard to see that systematically creating these conditions more persistently than they occur could put more nonexistent warming into the forecast.

Finally, to beat ever more manfully on the dead horse—averaging up all the models and making a forecast—we again note that of all the models, one, the Russian INM-CM4, has actually tracked the observed climate quite well. It is by far the best of the lot. Eyring et al. also examined the models’ independence from each other—a measure of which are, and which are not, making the same systematic errors. And among the most independent, not surprisingly, is INM-CM4.
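On the independence point: one simple toy version of the idea (not Eyring et al.’s actual metric; the error numbers are invented) is to correlate the models’ error fields with one another. Models whose errors correlate strongly are making the same systematic mistakes and carry little independent information.

```python
# Toy "independence" check: correlate the models' error fields with each
# other. Errors below are invented; this shows the idea, not Eyring et
# al.'s actual method.
import math

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical SST errors (modelled minus observed) at five grid points
errors = {
    "model_A": [1.2, 0.8, -0.3, 1.5, 0.9],
    "model_B": [1.1, 0.9, -0.2, 1.4, 1.0],   # nearly the same errors as A
    "model_C": [-0.5, 0.2, 1.0, -0.8, 0.1],  # quite different errors
}

# A and B share their systematic errors (r near +1); C is independent of A
print(round(correlation(errors["model_A"], errors["model_B"]), 2))
print(round(correlation(errors["model_A"], errors["model_C"]), 2))
```

A model whose errors track everyone else’s adds almost nothing to an ensemble, which is why independence matters when deciding what to average.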

(Its update, INM-CM5, is slowly being leaked into the literature, but we don’t have the all-important climate sensitivity figures in print yet.)

The Eyring et al. study is a step forward. It brings climate model application into the 20th century.

117 Comments
Steve Reddish
January 9, 2019 3:54 pm

It appears to me that models running high, and a climate sensitivity to doubled CO2 ranging up to 4.5 degrees, are needed by those wanting to issue scary projections.

SR

Matthew Drobnick
January 9, 2019 3:55 pm

I’d be interested to see if any models are based on an ECS of 0.5 and see how much closer to observation they become. If so, then maybe they could stop adjusting the temperature data.

Not holding breath. We all know they need wiggle room to allow for sophistry when the predictions constantly fail.

Latitude
Reply to  Matthew Drobnick
January 9, 2019 4:00 pm

It’s too late…..they can’t go back and warm the past back to where it should be

nw sage
Reply to  Latitude
January 9, 2019 5:34 pm

Of course they can, and do! They simply go back and change the input data (and forget where they put it) then see where the model comes out. If they don’t think it ends up as what they want – or the sponsors desire – they wash, rinse, and repeat.

whiten
Reply to  Latitude
January 10, 2019 12:50 pm

Latitude.

It does not really matter; no one can change reality.
The most that will ever be achieved is something like producing a lag in the perception of it all, no more, no less.

cheers

Tom Halla
Reply to  Matthew Drobnick
January 9, 2019 4:04 pm

Or even if they were using the Lewis and Curry estimate for ECS, which IIRC is about 1.5 degrees. Which ECS did the Russian model use?

Jaap Titulaer
Reply to  Tom Halla
January 9, 2019 4:45 pm

1.35 is Lewis, I think. Highest of the ECS estimates based upon observations.
Lindzen had what, 0.5 ?

And Lewis and Curry likely used mostly the pre-adjusted data, apart from clouds etc., so that already has warming baked in.

John Peter
Reply to  Jaap Titulaer
January 9, 2019 11:51 pm

Lindzen’s Iris calculations showed around 1.1C for doubling of CO2 as I recall it. I don’t think he ever promoted 0.5C.

Steven Mosher
Reply to  John Peter
January 10, 2019 2:29 am

Lindzen is wrong.

Dave Fair
Reply to  Steven Mosher
January 10, 2019 12:43 pm

If Lindzen is wrong, prove it.

fred250
Reply to  John Peter
January 10, 2019 3:09 am

Mosher is wrong.

It’s basically zero.

Immeasurable, and un-measured.

beng135
Reply to  John Peter
January 10, 2019 8:12 am

Lindzen is wrong.

Right. Because you’re certainly smarter than Lindzen.

MarkW
Reply to  John Peter
January 10, 2019 8:24 am

Lindzen also made the assumption that all of the warming was due to CO2. Just to make it a worst case scenario.

MarkW
Reply to  John Peter
January 10, 2019 8:24 am

Whenever reality fails to line up with the models, it is reality that is wrong.

Steven Mosher
Reply to  Jaap Titulaer
January 10, 2019 2:41 am

No, Lewis and Curry are around a 1.65 best estimate, HIGHER if you use unadjusted temperature data.

patrick j michaels
Reply to  Steven Mosher
January 10, 2019 9:55 am

No. L&C have 1.5°C without the physically outrageous extension of kriged land data over a mixed ice/water ocean.

Steven Mosher
Reply to  Tom Halla
January 10, 2019 2:36 am

The russian model has an ECS of 2.1. Its does USE an ECS of 2.1

ECS is an EMERGENT property of a model

Steven Mosher
Reply to  Tom Halla
January 10, 2019 2:38 am

Shit

The Russian model has an ECS of 2.1. It DOES NOT use an ECS of this value; it HAS an ECS of this value. ECS is an emergent property of models.

john
Reply to  Steven Mosher
January 10, 2019 6:55 am

You could’ve stopped at “shit”. That covers the whole damn field of enquiry.

Hugs
Reply to  john
January 10, 2019 7:31 am

Indeed, but Mosher is right. The models are not supposed to contain a hidden ECS; they are supposed to be written to simulate the atmosphere.

In reality, it is difficult to know if the model was developed using some ECS in mind.

Dave Fair
Reply to  Hugs
January 10, 2019 12:59 pm

The different models are constructed and tuned to give an ECS that the different modeling groups think is “about right.” They all converge in the late 20th Century period, proving the power of aerosol assumptions.

Mash them all together and you get an ECS of over 3. Real atmospheric data shows that to be greatly exaggerated.

Dave Fair
Reply to  Steven Mosher
January 10, 2019 12:45 pm

The model PRODUCES an ECS; EMERGENT.

Crispin in Waterloo
Reply to  Steven Mosher
January 10, 2019 1:24 pm

Hi Everyone

Mosher is right on this point. If you are a newbie to the topic, research what he means. The ECS is the result of running the model based on a number of physical principles and constants as inputs.

The failure of all but one model to do a reasonable job would normally mean a lot of firings and project closures so evidently normal rules don’t apply to climate modellers.

It is quite possible to have a “high” setting for particle cooling and GHG warming and “match reality well” for a time so don’t place bets on any of them.

But the ECS is not a model input.

Dave Fair
Reply to  Crispin in Waterloo
January 10, 2019 2:18 pm

But tweaking a model to get a particular ECS is an input, Crispin. Even the modelers admit that they tweak their models and parameterize up the wazoo to get an ECS that “seems about right” to them. They use greater assumed aerosols to balance out the hot ones.

Proof that it is all modelturbation? They all converge on the “tuning period” of the late 20th Century, but wildly diverge in hindcasts and forecasts.

Until they all clean up their acts and fully document their programming, processes, parameterization and assumptions, I don’t believe they are sufficient to change our society, economy and energy systems. Chew on that, true believers.

Smart Rock
Reply to  Matthew Drobnick
January 9, 2019 5:08 pm

As I understand it, ECS is not an input to the climate models (if it were, you could do the whole thing in 10 minutes on a pocket calculator; supercomputers not needed!). Rather, the ECS is derived from the results of the modelling.
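For readers wondering how a sensitivity can be "derived from the results": one standard diagnostic is a Gregory-style regression of the top-of-atmosphere (TOA) imbalance against warming in an abrupt-CO2-doubling run. The sketch below uses synthetic, noise-free numbers, not output from any actual GCM.

```python
# Gregory-style diagnosis of an emergent climate sensitivity from model
# output. In an abrupt-CO2-doubling run, the TOA imbalance N falls off with
# warming dT roughly as N = F2x - lambda*dT; regressing N on dT recovers
# the forcing (intercept) and feedback (slope), and ECS = F2x / lambda.
# All numbers below are synthetic, not from any actual GCM.

def linear_fit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum(x * x for x in xs) - n * mx * mx
    sxy = sum(x * y for x, y in zip(xs, ys)) - n * mx * my
    slope = sxy / sxx
    return slope, my - slope * mx

F2x, lam = 3.7, 1.2                   # hypothetical forcing (W/m^2), feedback
dT = [0.5, 1.0, 1.5, 2.0, 2.5]        # simulated warming (deg C)
N = [F2x - lam * t for t in dT]       # simulated TOA imbalance, noise-free

slope, intercept = linear_fit(dT, N)  # slope ≈ -lambda, intercept ≈ F2x
ecs = intercept / -slope              # warming at which N returns to zero
print(round(ecs, 2))                  # → 3.08
```

Because the forcing and feedback emerge from the simulated fields, no ECS number is typed in anywhere; it is diagnosed after the fact.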

Steven Mosher
Reply to  Smart Rock
January 10, 2019 2:42 am

Yes. ECS is calculated from the outputs.

whiten
Reply to  Steven Mosher
January 10, 2019 12:57 pm

Which means that it does not even exist… as far as the experiment is concerned.

Only as a made-up concept, objectively meant to deceive.

cheers

patrick j michaels
Reply to  Steven Mosher
January 10, 2019 3:35 pm

“One can imagine changing a parameter which is known to affect the sensitivity, keeping this parameter and the ECS both within the anticipated acceptable range, and retuning the model otherwise with the same strategy toward the same targets”.

–Frederic Hourdin, “The Art and Science of Climate Model Tuning”, BAMS, 2017.

“Its climate sensitivity…had shot up from 3.5°C in the old version to 7°C, an implausibly high jump.
MPIM hadn’t tuned for sensitivity before–it was a point of pride–but they had to get that number down.”

–Paul Voosen, “Scientists Open Up Their Black Boxes to Scrutiny,” Science, 28 October 2016

So yes it is an emergent property, but it is the scientist, not the science, that determines what the “anticipated acceptable range” is.

whiten
Reply to  patrick j michaels
January 12, 2019 4:20 pm

Patrick.

If I am not wrong, which in this case I may very well be, the case you mention involved a change in the core parametrization of the model. Even then, the 7°C versus 3.5°C was simply an assumed conclusion, because the 3.5°C was reached twice as fast. If I am not too wrong about this, there was never any 7°C result from the GCM simulation. A way of showing how wrongly CS is considered in the case of GCMs.

The GCM simulation in this case produces about the same ECS as most others do, but because it reaches what these guys call climate equilibrium much faster, and probably at a lower CO2 concentration, the CS value in this case is considered higher by a factor of 2…
kind of messy scientific bollocks, if you get my point.

cheers

Steven Mosher
Reply to  Matthew Drobnick
January 9, 2019 5:17 pm

The problem with 0.5 is that you would fail all the paleo sims; you can’t get out of an iceball earth.

Next, it’s not clear what you would change in the physics to get 0.5.

Latitude
Reply to  Steven Mosher
January 9, 2019 5:31 pm

….or it could be that we just don’t know enough and it’s something else

Steven Mosher
Reply to  Latitude
January 10, 2019 2:48 am

It could be UNICORNS, yes.

Same thing with evolution. You look at the facts and yes, it looks like evolution is a good explanation, but you CAN’T RULE OUT ALIENS!! directing evolution… or even god directing it.

Yup, the nature of science is that all theory is underdetermined by the evidence.

Hugs
Reply to  Steven Mosher
January 10, 2019 7:34 am

I doubt all doors have been closed. It is very difficult to rule out things, especially unknown unknowns, even if we happen to suffer from the hubris of ‘modern science knows everything about paleo climates’.

The evolution you put in here is a cheap shot. Stop that.

MarkW
Reply to  Steven Mosher
January 10, 2019 8:27 am

I love the way Mosher goes to extremes. If it’s not CO2, then it’s unicorns.
Why not deal with actual arguments, rather than thrashing strawmen?

MarkW
Reply to  Latitude
January 10, 2019 8:26 am

Something like: once most of the water is covered by ice, snowfall stops, so the ash being generated by volcanoes builds up on the ice, melting it.

richard verney
Reply to  Steven Mosher
January 9, 2019 5:38 pm

That does not appear to make much sense, given that we enter ice ages when CO2 is at its highest (presumably accompanied by high water vapour, if feedbacks are to be believed), and exit ice ages when CO2 is at its lowest (accompanied by low levels of water vapour).

mario lento
Reply to  richard verney
January 9, 2019 5:45 pm

Refreshing, Richard Verney. When I read quotes from smart people (and try not to judge the source), I sometimes get lost, just accepting their statement as reasonable.
What Stephen Mosher said sounded reasonable. But his statement presumes that CO2 MUST be correlated and measurable to some significant extent… Unfortunately, as you so quickly show, the answer to that premise is no.
It might be correlated and measurable, or it might not be.

Joel O'Bryan
Reply to  Steven Mosher
January 9, 2019 5:48 pm

The problem with the paleo-sims versus the paleo-recons is that CO2 always lags temperature in the recons. In the sims, it is causal. It is a fundamental model error that no level of parameter or aerosol tweaking can fix.

fred250
Reply to  Joel O'Bryan
January 9, 2019 6:24 pm

In the Vostok cores, peak CO2 was never able to maintain peak temperature; in fact, peak CO2 always led to a decline in temperature.

Reply to  fred250
January 9, 2019 7:22 pm

In the Vostok cores, peak CO2 was never able to maintain peak temperature; in fact, peak CO2 WAS CAUSED BY temperature BECAUSE CO2 always LAGGED TEMPERATURE IN TIME.

CO2 TRENDS LAG TEMPERATURE TRENDS AT ALL MEASURED TIME SCALES.
– by hundreds of years in the ice core record;
– by ~9 months in the modern data record.

REFERENCES:

CARBON DIOXIDE IN NOT THE PRIMARY CAUSE OF GLOBAL WARMING: THE FUTURE CAN NOT CAUSE THE PAST
by Allan MacRae
http://icecap.us/index.php/go/joes-blog/carbon_dioxide_in_not_the_primary_cause_of_global_warming_the_future_can_no/

http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14

THE PHASE RELATION BETWEEN ATMOSPHERIC CARBON DIOXIDE AND GLOBAL TEMPERATURE
by Ole Humlum, Kjell Stordahl, Jan-Erik Solheim
Global and Planetary Change, Volume 100, January 2013, Pages 51-69
https://www.sciencedirect.com/science/article/pii/S0921818112001658

J. Philip Peterson
Reply to  fred250
January 9, 2019 7:30 pm

Your graphic from woodfortrees does not show which time series variable is dependent, and which is independent.

Furthermore, taking the derivative of a time series removes the trend. Therefore you are comparing apples to oranges in your woodfortrees graphic.

Joel O'Bryan
Reply to  fred250
January 9, 2019 9:51 pm

JPP,
there really is no dispute that CO2 lags temperature in the paleo reconstructions. Even in the last 150 years, since the end of LIA in 1850-1870, CO2 didn’t start moving up significantly until about 1950.

The climateers then argue that CO2 is a positive feedback that drives temps higher. They model that nonsense… ad nauseam, in supercomputer GCM after supercomputer GCM, all tweaked to provide results they “expect,” and verified against nothing but other models.

But then of course that defies logic, since we know climate is stable within the bounds of paleo-reconstructed temperatures, and this… negative feedback must be stronger than any positive feedbacks.

The clear logical conclusion is that whatever feedbacks occur from CO2, water vapor feedbacks (both some + and some more -) ultimately converge to a strong negative feedback to counteract any CO2 GH effect.

Mainstream climate science has taken a dead-end path to Climate Change alarmism due to a political bias (and rent seeking) wanting something that isn’t true in nature.
The Left needs to accept the failure of its clisci and move on to its next scare tactic.

Reply to  fred250
January 10, 2019 12:11 am

Thank you Joel.

JPP

I know what you are saying, but your comment reflects an error in your thinking. Try reading either paper referenced above.

I leave you with a plot from Humlum et al 2013, which is prettier than the ones in my originating paper of January 2008.
https://www.facebook.com/photo.php?fbid=1551019291642294&set=a.1012901982120697&type=3&theater

Reply to  fred250
January 10, 2019 12:54 am

Further comments on MacRae 2008 and Humlum et al 2013, referenced above.

I generally agree with the first three conclusions from Humlum 2013, as follows:
1– Changes in global atmospheric CO2 are lagging 11–12 months behind changes in global sea surface temperature.
2– Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.
3– Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.

Points 2 and 3 are similar to my 2008 conclusions.

Critiques of Humlum failed to refute the three conclusions above. In general, I regard all the critiques of these three conclusions as specious nonsense, which tend to obfuscate the clear observations in these papers.

One hint: It is not necessary that ALL the increase in atmospheric CO2 is due to temperature – part of the CO2 increase can be due to other causes such as fossil fuel combustion, deforestation, etc., but part of it is clearly due to temperature – and that part demonstrates that CO2 trends lag, and do not lead temperature trends in the modern data record, and that observation DISPROVES the CAGW hypothesis.
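For the curious, the lag-finding exercise behind such claims can be sketched with synthetic data: build two series where one is, by construction, a delayed copy of the other, and recover the delay by maximizing lagged correlation. Nothing below uses the actual CO2 or temperature records; the series and the 9-step delay are invented for illustration.

```python
# Toy lag-correlation exercise on synthetic series (not the real CO2 or
# temperature records): one series is built as a 9-step-delayed, noisy copy
# of the other, and the delay is recovered by maximizing lagged correlation.
import math
import random

def correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(leader, follower, max_lag):
    """Lag (in samples) at which `follower` best matches `leader`."""
    return max(range(max_lag + 1),
               key=lambda k: correlation(leader[:len(leader) - k],
                                         follower[k:]))

random.seed(0)
temp = [random.gauss(0.0, 1.0) for _ in range(240)]  # stand-in monthly series
co2_change = [0.0] * 9 + [t + random.gauss(0.0, 0.1) for t in temp[:231]]

print(best_lag(temp, co2_change, max_lag=24))        # → 9
```

The same maximize-the-lagged-correlation logic, applied to real records, is what produces lag estimates like the 9–12 months reported above; correlation alone, of course, still leaves the attribution arguments to fight over.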

Another highly credible disproof of the CAGW meme is that fossil fuel consumption accelerated strongly after 1940 as did atmospheric CO2 concentrations, but global temperatures COOLED from ~1945 to 1977, warmed for over a decade, and then were relatively constant since – so the correlation with increasing atmospheric CO2 was NEGATIVE, POSITIVE AND NEAR-ZERO. To claim that atmospheric CO2 is the “control knob” for global temperature is a bold falsehood, that is refuted by observations at all measured time scales.

Regards, Allan

Reply to  fred250
January 11, 2019 3:24 am

Repeating using dropbox

I leave you with a plot from Humlum et al 2013, which is prettier than the ones in my originating paper of January 2008.

fred250
Reply to  Steven Mosher
January 9, 2019 6:21 pm

“next, is its not clear what you would change in physics to get .5”

The climate boffins have already changed the laws of physics to get 1.5-4.5.

Change them back using actual real physics, you would get close to zero.

Richard M
Reply to  Steven Mosher
January 9, 2019 7:04 pm

Steven is assuming sensitivity is a fixed value. I suspect it is variable and temperature dependent among other things. When you realize this is likely then you can have one sensitivity during glacial periods and a completely different one during interglacials.

In addition, sensitivity can be different even within those periods and probably varies during the day.

Steven Mosher
Reply to  Richard M
January 10, 2019 2:28 am

Not really. Not assuming that at all, in fact it probably is not.
But for practical purposes I stick with my bet

Reply to  Steven Mosher
January 9, 2019 8:30 pm

It is not clear there ever was an iceball earth.

The physics you would change is net radiation to space. CERES, for instance.

Alan Tomalty
Reply to  Steven Mosher
January 9, 2019 11:04 pm

Mosher, you are assuming that the convection (Navier-Stokes equations) and the radiative transfer equations of the models are coupled together. Since the actual physics of convection and condensation has never been nailed down (e.g., how does the latent heat released from condensation actually get out to space?), there is no actual correct physics that the models can use. So they have to wing it by parameterizing simpler processes. The other thing you are assuming is that CO2 drives temperature. Your assumption fails with the paleo records of ice ages. Global warming is a farce and you know it. I can’t understand why someone of your intelligence hangs on to this disgraceful climate science meme.

Stewart Pid
Reply to  Alan Tomalty
January 10, 2019 11:46 am

Steve M disgracefully hangs on because it pays the bills!! Just like so many of the others they are following the money.

tty
Reply to  Steven Mosher
January 10, 2019 12:51 pm

“cant get out of a iceball earth”

1. No proof there ever was an “iceball earth”

2. And if there was, there are other mechanisms than CO2 for getting out of it (e.g. a LIP eruption or a major bolide strike to lower albedo)

whiten
Reply to  Steven Mosher
January 11, 2019 11:44 am

Or, to put it another way: you would be failed by all proper GCM sims in regard to the concept of ECS, when the ECS is a generic concept derived from GCM simulation outputs, which according to my understanding lies in the range of 2.5 to 3.1°C, as per the range of the sims.

The sims do not support any ECS value below or above that range, as far as I can tell, by the very sims that dictate the path to considering CS in its extrapolated new meaning as an ECS.

So, as far as I can tell, a 0.5 is simply a mathematically extrapolated, meaningless value, not supported by the sims, when considered as an ECS value in the context of a new supposed climate EQUILIBRIUM.

🙂

cheers

Ulric Lyons
January 9, 2019 4:20 pm

“There’s a current theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one. It’s not hard to see that systematically creating these conditions more persistently than they occur could put more nonexistent warming into the forecast.”

Yet the central El Niño region and the North Atlantic are forecast too cool, so maybe they were expecting more positive Arctic and North Atlantic Oscillation conditions than have occurred.

JustTheFactsPlease
January 9, 2019 4:22 pm

A new study will show that the model averages were right all along… it was the data that needed fixing.

cinaed
Reply to  JustTheFactsPlease
January 9, 2019 6:47 pm

+1

Rod Evans
Reply to  JustTheFactsPlease
January 10, 2019 12:50 am

+4.5 LoL!

PhilF
Reply to  Rod Evans
January 10, 2019 7:41 am

Skeptics should stick with the data. CO2 rise follows temperature rise after a lag of several hundred years.

Most of the CO2 rise that we see is due to the temperature recovery from the Little Ice Age (LIA), with a lag of 300 years. Coincidentally, a much smaller amount is being added by humans. It’s an accident that warmists, the IPCC and their much-amplified propaganda machine have taken advantage of.

J. Seifert
January 9, 2019 4:25 pm

The two warm water blobs at the west side of South America and Africa are the warm waters, which emerge out of the ocean depth. And there, in the depth, the warm waters, as you all remember, was the place with the “missing heat hiding”. Therefore, the two blobs demonstrate how, by circulation, the missing heat comes out of its hide to the surface nowadays. This is, what all models account for. Obviously, the Russian model forgot the missing heat and therefore produced lower values than all other models.

Mario Lento
Reply to  J. Seifert
January 9, 2019 5:35 pm

J.Seifert writes: “The two warm water blobs at the west side of South America and Africa are the warm waters, which emerge out of the ocean depth. And there, in the depth, the warm waters, as you all remember, was the place with the “missing heat hiding”.”

I do not think that’s correct, at least with regard to South American equatorial regions. The upwelling water is always cooler (when it’s upwelling, i.e., ENSO-neutral to La Niña conditions). The chart shows RED because the models showed the upwelling as less cool than measured… hence too warm.

OweninGA
Reply to  Mario Lento
January 9, 2019 7:05 pm

I think J. Seifert had a tongue firmly planted in his cheek as he typed. I could be wrong…

john
Reply to  OweninGA
January 10, 2019 7:10 am

I’ve been wondering about household toilet tanks. I think that’s where the missing heat might be hiding. Cue the music from “jaws” as we move in on an image of someone’s hand about to flush.
We need urgent legislation to stop people from flushing their toilets.
Does anybody have Alexandra Ocasio-Cortez’s contact info?

The planet is cooling. How much longer will we have to listen to this AGW garbage before we can get to the public stoning for Mann, Hansen and the rest?

RACookPE1978
Editor
Reply to  john
January 10, 2019 9:18 am

john

We need urgent legislation to stop people from flushing their toilets.

California (in its panic, with excess population from millions of illegal aliens, resident taxpayers and welfare receivers) already had many local and state regulations and ran near-continuous advertisement campaigns to “Not flush” (“If it’s brown, flush it down.” was one I recall. Though which Brown they wanted flushed was, admittedly, as clear as the waste in the toilet.)

commieBob
January 9, 2019 4:41 pm

As I and many others have pointed out, averaging a bunch of anything does not necessarily increase accuracy and precision. People get used to doing that in their undergraduate courses, and it works under certain circumstances, but it relies on criteria that most professors don’t explicitly state. I would go so far as to say that, for physical data, the technique almost never works in the expected manner.
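The unstated criterion is easy to demonstrate with a toy simulation (all numbers invented): averaging many estimates shrinks their independent noise, but a bias shared by every estimate survives the average intact.

```python
# Toy demonstration (all numbers invented): the mean of many noisy estimates
# converges, but only toward truth-plus-shared-bias, never toward truth.
import random

random.seed(1)
truth = 15.0
shared_bias = 0.8     # hypothetical systematic error common to all 29 models
n_models = 29

estimates = [truth + shared_bias + random.gauss(0.0, 0.5)
             for _ in range(n_models)]
ensemble_mean = sum(estimates) / n_models

# The residual error is close to the shared bias of 0.8, not to zero:
# averaging removed the independent noise but left the bias untouched.
print(round(ensemble_mean - truth, 2))
```

Averaging only helps when the errors are independent and centered on zero, which is precisely the unstated classroom assumption.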

Joel
Reply to  commieBob
January 9, 2019 5:19 pm

So true. A normally distributed random variable is assumed for most undergraduate work. Scary to think that these guys don’t understand that.

patrick j michaels
January 9, 2019 5:03 pm

No one seems to have noticed the last sentence yet.

Steven Mosher
Reply to  patrick j michaels
January 9, 2019 5:17 pm

funny.
not

Dave Fair
Reply to  Steven Mosher
January 9, 2019 5:46 pm

funny.
yes

Models that are different by 3+ degrees C in their base global temperatures are not modeling the same physics.

cinaed
Reply to  Dave Fair
January 9, 2019 6:30 pm

Maybe they’re subtracting the temperature at the Earth’s aphelion from the temperature at its perihelion.

Steven Mosher
Reply to  Dave Fair
January 10, 2019 2:46 am

HUH?

The nominal temperature of the earth is 288 K
Some simulate it and come up with 289.5K
Some get around 286.5 K

That is scary good

Dale S
Reply to  Steven Mosher
January 10, 2019 4:18 am

If a model of the human body were off in average temperature by 1.5K, would you describe it as “scary good”?

MarkW
Reply to  Steven Mosher
January 10, 2019 8:32 am

Given the number of tunable parameters, they should be accurate to a few hundredths of a degree.

Dave Fair
Reply to  Steven Mosher
January 10, 2019 12:50 pm

What different physics are used in the models to hindcast a 3 C temperature difference?

Michael Jankowski
Reply to  patrick j michaels
January 9, 2019 5:33 pm

20th century instead of 21st…sly!

As far as INM-CM5 goes (https://www.earth-syst-dynam.net/9/1235/2018/), see the paragraph that begins just below Figure 6:

“…One of the most intriguing observed features of ongoing climate changes is the fast summer Arctic sea ice extent decrease in the beginning of the 21st century. The ensemble of CMIP5 models underestimates the rate of decrease in Arctic summer ice area by a factor of 2. INMCM4 participated in CMIP5 and also significantly underestimates the extent of Arctic sea ice decrease (Volodin el al., 2013). In newly obtained INM-CM5 data (Fig. 7) we qualitatively see the same behavior of the Arctic sea ice as the average rate of sea ice loss is underestimated by a factor of 2 to 3. However, in one model run (purple) the magnitude of decrease is similar to the one in the observations (reduction from 7–7.5 million km2 in the 1980s to 4–5.5 million km2 in the 2000s). In other runs Arctic sea ice loss is underestimated by a factor of 1.5–3, and in one run (green) one can even see some increase in Arctic sea ice area during the last decades. Our results suggest that the rapid decrease in Arctic sea ice extent near year 2000 was partially induced by external forcing; however, the role of internal variability can be very important (the range of the sea ice extent year-to-year variability could be estimated as 3.0 million km2)…”

If the CMIP5 ensemble members weren’t underestimating Arctic ice loss by a factor of 2, they would be running even hotter.

Also of note is that only one of the seven INM-CM5 runs did an adequate job modeling Arctic ice loss.

leowaj
Reply to  patrick j michaels
January 9, 2019 6:00 pm

I noticed it. I laughed.

Dan Hughes
Reply to  patrick j michaels
January 9, 2019 6:17 pm

Actually it’s correct.
And good for a chuckle.

Steven Mosher
January 9, 2019 5:22 pm

Nice work Patrick!

“Finally the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says ‘there is now evidence that giving equal weight to each available model projection is suboptimal.’”

Gosh, it’s been like 10 years fighting against the democracy of models (as Gavin calls it).

It’s long been thought that equal weighting was suboptimal; however, the question has always been how to improve and justify the weighting.

Best model

http://berkeleyearth.org/wp-content/uploads/2015/02/figure38-inmcm4-vs-berkeley-earth.png

Michael Jankowski
Reply to  Steven Mosher
January 9, 2019 5:51 pm

It really took 10 yrs to answer the question, “how do you improve and justify the weighting?”

What a bunch of nincompoops.

fred250
Reply to  Steven Mosher
January 9, 2019 6:27 pm

So you “adjust” the Berkeley fabrication to match INM-CM4.

Your point is ???

Richard G.
Reply to  Steven Mosher
January 10, 2019 6:20 pm

“Democracy is a pathetic belief in the collective wisdom of individual ignorance.”- H. L. Mencken

Surfer Dave
January 9, 2019 5:25 pm

Worth looking at the NOAA sea surface anomaly as it is now. The Southern Ocean south of Western Australia has been anomalously cool for three years now, nothing like that picture above.
https://www.ospo.noaa.gov/Products/ocean/sst/anomaly/
As for the amazingly stupid idea that averaging erroneous models gives you a ‘true’ model, I am dumbfounded that any sane person with a sound grasp of simulations and associated errors could believe it.

phodges
Reply to  Surfer Dave
January 10, 2019 11:21 am

THIS ^^^^^^^

Observer of the scene
January 9, 2019 5:32 pm

In a previous life 40 years ago when I was working on a team analyzing Chinese government reported economic data we soon came to realize that we were attempting double-precision arithmetic operations against wildly estimated data. All of our laboriously calculated trends and future projections were effectively totally worthless for any kind of rational policy recommendations to our customers.

Seems sort of like what we are doing today trying to analyze climatic data. Doesn’t matter how carefully you massage it; the base data is crap, so any results are crap. Oops.

Oh, and my friends still looking at Chinese economic data assure me it is still pretty much worthless because nobody in the government wants to report “real” economic data and risk exposing the truly shaky state of the provincial governments and the pernicious effects of the shadow economy.

rbabcock
Reply to  Observer of the scene
January 9, 2019 7:50 pm

+10000000000000

Graemethecat
Reply to  Observer of the scene
January 10, 2019 2:27 am

Nail firmly on head. I can’t praise your comment highly enough.

Andy
Reply to  Observer of the scene
January 10, 2019 4:50 am

If anyone wants to know what is happening in China, go and have a look. I was totally amazed at the levels of poverty and the harshness with which people treat each other (no government bodies needed for this). Some of the most cold-hearted people I have ever encountered. In Xi’an I gave up my seat for a person who looked about 80, and every other person just sat and watched this person struggling. Chongqing is even worse.

MarkW
Reply to  Andy
January 10, 2019 8:35 am

That is one of the worst features of communism, that it pits people against each other.
In capitalism, those who are able to cooperate the best, succeed. Under communism, success is determined by who is best able to brown nose those above him.

Richard G.
Reply to  MarkW
January 10, 2019 6:29 pm

Under capitalism the rich become powerful. Under socialism the powerful become rich.

Joel O'Bryan
January 9, 2019 5:41 pm

“and simulating an ocean 1.5° too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will be precipitated over the continent in nonexistent snow.”

They do it for nonexistent climate change. Climate change “science” is no longer a scientific endeavor. It long ago passed into the political realm, where consensus matters and contrary findings are ignored (it started with AR2 and Ben Santer’s dishonest stunt, which in any actual scientific field would have gotten him sanctioned and dismissed; after that, Mann’s hockey stick dishonesty was easy).

What I keep wondering is how, if the science is settled on CO2 and climate sensitivity, these climate charlatans can keep demanding research grants to study their scam and then keep getting funded. Inquiring minds want to know. I’m in the Lindzen camp, where all of academic and government ‘clisci’ needs a 90% funding cut across the board, with modellers first.

John F. Hultquist
January 9, 2019 6:08 pm

“there is now evidence that giving equal weight to each available model projection is suboptimal.”

I think I hear an echo: Mine. Mine. . . . Mine x 29

Ricola

ChasTas
January 9, 2019 6:41 pm

Heard a quote the other day when dealing with hydraulic models, “All models are wrong, but some are useful”

OweninGA
Reply to  ChasTas
January 9, 2019 7:11 pm

That’s true of all models, not just hydraulic ones. Even in aerospace we wind-tunnel-test aspects that are frequently missed in modeling. They didn’t do that for the F-35, and that caused some of their problems with fuel line cracks and fires. They assumed the F-16 model was close enough for engine flight stressing.

Dave Fair
Reply to  ChasTas
January 10, 2019 12:25 pm

It depends on the purpose for which the “some” are used. If they are used as an attack on our society, economy and energy systems, I don’t like their “use.”

Gamecock
January 9, 2019 6:54 pm

“I do not believe in the collective wisdom of individual ignorance.” – Thomas Carlyle

Nick Stokes
January 9, 2019 6:56 pm

“First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed.”
It might be oft repeated, but stated like this it is sloppy. The real issue people want to raise is that the model trends are higher than observed, and that has nothing to do with the SST figure shown. People seem very incurious about what that figure is, and the article is of no help there. It isn’t actually an Eyring et al result; they just quote it from a 2015 paper (Fig 1) by Richter. It isn’t SST at any particular time; it is the deviation of the long-term average, for both Earth and models.

SST’s aren’t a particularly good test of GCMs. The reason is that GCMs are primarily air and ocean models, coupled. The coupling is through a boundary layer model where things happen on a scale of mm to metres. Temperature has a steep gradient here, and it is possible that models could get that gradient wrong in places without spoiling the global approximation. That would happen if the turbulence model in that thin boundary layer was inaccurate, for example.

ossqss
Reply to  Nick Stokes
January 10, 2019 9:38 am

So what effect, if it was incorporated into the models, did the Karl et al paper have on SSTs, Nick?

Nick Stokes
Reply to  ossqss
January 10, 2019 12:53 pm

It is very unlikely that any GCM was affected by the Karl et al paper.

Michael Jankowski
Reply to  Nick Stokes
January 10, 2019 4:27 pm

So are you claiming no GCM uses ERSST for tuning or just that no GCM uses one more recent than ERSST v3 or v3b?

Michael Jankowski
Reply to  Nick Stokes
January 10, 2019 9:47 am

“…it is possible that models could get that gradient wrong in places without spoiling the global approximation…”

Well, that’s really analogous to how the GCM results typically are: they’re wrong in place after place, but get the global approximation close enough, through the balancing of errors, that they look “reasonable.”

Dave Fair
Reply to  Nick Stokes
January 10, 2019 12:28 pm

Nick, read up on Bob Tisdale’s numerous analyses of model vs observation SSTs. Then get back to us.

Kurt
January 9, 2019 7:08 pm

“[W]e take 29 groups of models and average them. Anyone, we repeatedly point out, who knows weather forecasting realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions.”

The problem is that, since you need to have at least several decades worth of data to even begin to discern “climate” from “weather,” the very process of empirically testing how well a climate model is at predicting changes in climate isn’t practical. Thus, climate scientists simply take an average of the models, not because it makes any scientific or logical sense to do so, but out of desperation; they have no data to distinguish one model’s accuracy from that of another model so the “averaging” process treats all equally and just puts up a veneer or facade that some kind of “scientific” expertise is being applied, for the terminally gullible to fall for. But in truth, the fact that they have to blindly average a bunch of disparate models tells you that the climate scientists have no real idea of how or why the climate changes.

OweninGA
Reply to  Kurt
January 9, 2019 7:16 pm

That is baked into the design of the IPCC. They don’t want to understand climate. That was never the goal. The point was to find a hook to give the UN control of the world’s energy supplies and give them unlimited power. They did this by making the IPCC consider only man-made changes in climate. Their mandate was to find man guilty and enable the execution of western civilization.

Rod Evans
Reply to  OweninGA
January 10, 2019 1:12 am

That is the key issue in climate science (sic): they (the UN) decided at an early stage to ignore the science and just stick with the numbers that gave the right political answer.
It would be a catastrophe for the control movement, or UN as it has become known recently, if the reality of climate change being normal and not alarming were allowed to be absorbed by the population at large.
What would the de-industrialists, i.e. Greens, do if their go-to scare story were taken from them?

Dave Fair
Reply to  Kurt
January 10, 2019 12:33 pm

Tuning to the late 20th Century gives wildly varying model hindcasts and forecasts. That’s because the different modelers create and tweak their models to get the ECS they want. Producing CAGW seems to be the dominant driver for most of the modelers.

January 9, 2019 8:31 pm

It is not clear there ever was an iceball earth.

The physics you would change is net radiation to space. CERES, for instance.

WXcycles
January 9, 2019 11:03 pm

Worry not, the temp data can be adjusted to match the model ensembles, in all cases.

Steve Richards
January 10, 2019 1:32 am

Says Nick Stokes: “and it is possible that models could get that gradient wrong in places without spoiling the global approximation”

Hmmmm, so as long as you like the results, fundamental errors do not matter; just a bit of bad luck, but still usable!!!

John Bills
Reply to  Steve Richards
January 10, 2019 8:06 am

And there (still) is this:
Land Surface Air Temperature Data Are Considerably Different Among BEST‐LAND, CRU‐TEM4v, NASA‐GISS, and NOAA‐NCEI
https://doi.org/10.1029/2018JD028355

MarkW
Reply to  Steve Richards
January 10, 2019 8:38 am

Nick assumes that the number of warming errors will always balance out the number of cooling errors.

Dan Hughes
January 10, 2019 4:36 am

From the paper:

In addition, it has been demonstrated that CMIP models are not independent. Most inferences in the literature about model interdependence are derived from error correlation [13,79]. This cannot identify the specific model components that are interdependent. Identification of these common components is a difficult task due to the large number of models involved in CMIP and lack of detailed information regarding individual model versions.

13. Sanderson, B. M., Knutti, R. and Caldwell, P. Addressing interdependency in a multimodel ensemble by interpolation of model properties. J. Clim. 28, 5150–5170 (2015).

79. Sanderson, B. M., Knutti, R. and Caldwell, P. A representative democracy to reduce interdependency in a multimodel ensemble. J. Clim. 28, 5171–5194 (2015).

[ bold by edh and edited to change the literature citations from superscripts to [ ], and change the ampersand in the references to ‘and’ ]

Lack of complete and correct documentation to an appropriate detail level, both external and internal to the code source, is a defining characteristic of 20th century software that is considered especially unworthy for critical applications. When in the past 15 years this has been stated to be a significant deficiency in Climate Science software, it has always been mocked and rejected by Climate Scientists. Now it has appeared in a Peer-Reviewed Paper written by True Climate Scientists, and published in a Certified Climate Science Journal.

Climate Science continues to present results from Black Box software. It is the only area of software used to guide public policy in which this state is allowed to exist.

Alan Watt, Climate Denialist Level 7
January 10, 2019 4:43 am

Patrick:

Is there a plot available showing just observed – INM-CM4 ?

Thank you.

Michael Spurrier
January 10, 2019 6:41 am

These models don’t work cos’ they use Al-Gore-ithms……….

ARW
January 10, 2019 8:50 am

Using that equirectangular projection stretches east-west distances at 60°N and 60°S by a factor of 2.
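ARW’s factor of 2 is just the geometry of an equirectangular (plate carrée) map: the map keeps east-west distance constant at every latitude, while the true east-west distance shrinks with cos(latitude), so features appear 1/cos(latitude) too wide. A quick sketch of that standard map-projection math:

```python
import math

def ew_stretch(lat_deg):
    """East-west stretch factor of an equirectangular map at a given
    latitude: features appear 1/cos(lat) wider than they really are."""
    return 1.0 / math.cos(math.radians(lat_deg))

print(ew_stretch(0))    # 1.0 at the equator (no stretch)
print(ew_stretch(60))   # ~2.0 at 60N/60S, since cos(60 deg) = 0.5
```

This is why the Southern Ocean errors in the figure look even larger than their true areal extent.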

ren
January 10, 2019 10:33 am

Meanwhile, the temperature at the equatorial Pacific is falling.

ResourceGuy
January 10, 2019 2:08 pm

I notice the huge warming errors around NY and CA where the errors are most useful for large public transportation funding needs/appeals.

Johann Wundersamer
January 11, 2019 1:16 am

tasty – “there is now evidence … available model projection is suboptimal.”

Johann Wundersamer
January 11, 2019 1:29 am

“There’s a current FALSE theory that El Niños are one mechanism that contributes to atmospheric warming, which holds that the temperature tends to jump in steps that occur after each big one.”

It’s not “El Niños that contribute to atmospheric warming”

but the sun rising in the east, taking a long ride over the Pacific to sink in the west.*

* Of course it’s the earth rolling under the sun, west to east, but no one uses that phrase.
