Another paper shows that climate models and climate reality vary – greatly

A new paper has been published in Geophysical Research Letters that shows, once again, that climate models and reality differ significantly. It confirms what Dr. John Christy has been saying (see figure below). The paper also references Dr. Judith Curry and her work.

https://judithcurry.com/2015/12/17/climate-models-versus-climate-reality/

Pronounced differences between observed and CMIP5-simulated multidecadal climate variability in the twentieth century

Plain Language Summary

Global and regional warming trends over the course of the twentieth century have been nonuniform, with decadal and longer periods of faster or slower warming, or even cooling. Here we show that state-of-the-art global models used to predict climate fail to adequately reproduce such multidecadal climate variations. In particular, the models underestimate the magnitude of the observed variability and misrepresent its spatial pattern. Therefore, our ability to interpret the observed climate change using these models is limited.

Abstract

Identification and dynamical attribution of multidecadal climate undulations to either variations in external forcings or to internal sources is one of the most important topics of modern climate science, especially in conjunction with the issue of human-induced global warming. Here we utilize ensembles of twentieth century climate simulations to isolate the forced signal and residual internal variability in a network of observed and modeled climate indices. The observed internal variability so estimated exhibits a pronounced multidecadal mode with a distinctive spatiotemporal signature, which is altogether absent in model simulations. This single mode explains a major fraction of model-data differences over the entire climate index network considered; it may reflect either biases in the models’ forced response or models’ lack of requisite internal dynamics, or a combination of both.

Some key quotes:

Here we show that state-of-the-art global models used to predict climate fail to adequately reproduce such multidecadal climate variations. In particular, the models underestimate the magnitude of variability in the twentieth century.

Our study documents pronounced differences between the observed and CMIP5-simulated climate variability in the twentieth century. These differences are dominated by a coherent multidecadal hemispheric-scale signal present in the observed SST and SLP fields but completely missing in any of the CMIP5 simulations.

Our results are also broadly consistent with recent analyses of Cheung et al. [2017], who documented substantial mismatches between their estimated internal components of the observed and CMIP5-simulated AMO, PMO, and NMO variability. However, these authors used subtraction of the scaled CMIP5 MMEM signal to deduce the internal variability in historical simulations of individual CMIP5 models. Kravtsov et al. [2015] and Kravtsov and Callicutt [2017] showed that the residual variability so defined misrepresents the true internal variability in CMIP5 simulations and is, in fact, dominated by model error, that is, the differences between the true forced response of individual models and the MMEM response. The magnitude of the CMIP5 “internal” variability estimated by this method is, hence, much larger than that of the true simulated internal variability, and the spectral characteristics of the true and estimated internal variability are entirely different.

Despite our explicit decomposition of the climate variability into the forced and internally generated components, dynamical attribution of the multidecadal model-data differences still remains uncertain. On one hand, if our derived CMIP5-based forced signals are realistic, these differences must arise from internal climate system dynamics presumably misrepresented in CMIP5 models, such as sea ice dynamics [Wyatt and Curry, 2014], oceanic mesoscale eddies [Siqueira and Kirtman, 2016], positive cloud and dust feedbacks [Evan et al., 2013; Martin et al., 2014; Brown et al., 2016; Yuan et al., 2016], or SST-forced NAO response [Kushnir et al., 2002; Eade et al., 2014; Stockdale et al., 2015; Siegert et al., 2016]. On the other hand, however, it is possible that CMIP5 models underestimate multidecadal variations in the true response of the climate system to external forcing or misrepresent the forcing itself [Booth et al., 2012; Murphy et al., 2017]; if this is true, the model-data differences reflect the mismatch between the actual and CMIP5-simulated forced signals, whereas the real world’s internal climate variability may be consistent with that simulated by the models. In either case, we strongly believe that model development activities should strive to alleviate the present large discrepancies between the observed and simulated multidecadal climate variability, as these discrepancies hinder our fundamental understanding of the observed climate change.

The paper is here:  http://onlinelibrary.wiley.com/doi/10.1002/2017GL074016/full

The SI is here: https://people.uwm.edu/kravtsov/files/2016/05/Supporting-Information_AGU_K2017_revised-1bwhctd.pdf

One of the figures from the SI shows the differences between the models’ forced response and the observed natural variability seen in the AMO, NAO, and other cycles:

Figure S4: Raw observed indices (thin lines) and their estimated forced components — ensemble mean (thick lines) and uncertainty (error bars) — with the forced-signal estimates based on the Community Earth System Model (CESM) Large Ensemble Project (LENS) simulations (Kay et al., 2015). Forced signals were estimated using Kravtsov and Callicutt (2017) methodology, as (a) the rescaled (unfiltered) ensemble mean over the 40 historical LENS simulations (left panels), or (b) the rescaled 5-yr low-pass filtered ensemble means for 20 synthetic sub-ensembles of 5 simulations, each randomly drawn from the parent 40-member LENS ensemble. The index abbreviations are given in panel captions. Comment: The forced signals based on the entire LENS ensemble and its 5-member sub-ensembles are consistent.
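
To make the “rescaled ensemble mean” approach in the caption concrete, here is a minimal Python sketch of the general idea: estimate the forced signal as a least-squares rescaled ensemble mean and treat the residual as internal variability. This is not the paper’s exact KC2017 procedure (the low-pass filtering and the sub-ensemble resampling are omitted), and the data and variable names below are made up for illustration.

```python
import numpy as np

def estimate_forced_and_internal(index_runs, target_index):
    """Rough sketch: take the ensemble mean of many runs of one model as the
    forced-signal estimate, rescale it by least squares to best fit a target
    index, and treat the residual as internal variability.

    index_runs   : (n_runs, n_years) array of a climate index (e.g. AMO-like)
                   from an initial-condition ensemble (hypothetical data)
    target_index : (n_years,) series the forced signal is scaled against
    """
    ens_mean = index_runs.mean(axis=0)          # raw forced-signal estimate
    # least-squares scaling so that scale * ens_mean best matches the target
    scale = np.dot(ens_mean, target_index) / np.dot(ens_mean, ens_mean)
    forced = scale * ens_mean
    internal = target_index - forced            # residual = "internal" variability
    return forced, internal

# purely illustrative toy data
rng = np.random.default_rng(0)
years = 115                                      # roughly the twentieth century
true_forced = np.linspace(-0.2, 0.6, years)      # slow forced warming
runs = true_forced + 0.15 * rng.standard_normal((40, years))
obs = true_forced + 0.15 * rng.standard_normal(years)
forced_est, internal_est = estimate_forced_and_internal(runs, obs)
```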

But here is the real smoking gun:

Figure 1. Standard deviations (STDs) of the estimated observed (blue) and CMIP5-simulated historical (red) and control-run (black) internal variability for the five indices considered; top-to-bottom rows correspond to the results for the AMO, PMO, NMO, NAO, and ALPI indices, respectively. Also included are the estimates of the observed internal variability based on the one-, two-, and three-factor scaling methods of Frankcombe et al. [2015]; see legend. The STDs were computed for raw and boxcar running-mean low-pass filtered time series using different window sizes of 2 × K + 1 yr, K = 0, 1, …, 30 (shown on the horizontal axis); K = 0 corresponds to raw annual data, K = 1 to 3-year low-pass filtered data, and so on. Error bars show the 70% spread of the STDs, between the 15th and 85th percentiles of the available estimates of internal variability (see text for details). Shading indicates the range in which the observed internal variability is statistically larger than its historical (light shading only) or control-run counterparts (dark shading and light shading regions combined), at the 5% level; here the KC2017 methodology was used to estimate the observed and simulated internal variability over the historical period. The NAO plot also includes the results (heavy black curve) based on an alternative, station-based observed NAO index (https://climatedataguide.ucar.edu/climate-data/hurrell-north-atlantic-oscillation-nao-index-station-based). (left column) The results based on the full annual data; (right column) the results based on the anomalies with respect to the leading M-SSA pair of the corresponding observed or simulated realization of internal variability (see text for details); the M-SSA embedding dimension M = 20. Comments: (i) The simulated multidecadal variability is much weaker than observed (Figure 1, left column). (ii) Much of this model-data difference is rationalized by the leading M-SSA pair (Figure 1, right column).
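
To spell out the filtering convention used in the caption, here is a short Python sketch of how standard deviations of boxcar running-mean filtered series (window width 2K + 1, for K = 0, 1, …, 30) can be computed. It only illustrates the smoothing convention, not the paper’s significance testing, and the synthetic series is a hypothetical stand-in.

```python
import numpy as np

def boxcar_lowpass(x, K):
    """Centered running mean with window 2*K + 1; K = 0 returns the raw series."""
    if K == 0:
        return x.copy()
    window = 2 * K + 1
    kernel = np.ones(window) / window
    # 'valid' keeps only fully covered windows, trimming K points at each end
    return np.convolve(x, kernel, mode="valid")

def filtered_stds(x, K_max=30):
    """Standard deviation of the low-pass filtered series for K = 0..K_max."""
    return np.array([boxcar_lowpass(x, K).std(ddof=1) for K in range(K_max + 1)])

# illustrative use on a synthetic annual index
rng = np.random.default_rng(1)
annual_index = rng.standard_normal(115)   # stand-in for an AMO-like annual series
stds = filtered_stds(annual_index)        # stds[0] = raw data, stds[1] = 3-yr mean, ...
```
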
h/t to Dr. Leif Svalgaard

174 Comments

markl
June 28, 2017 11:24 am

The whole premise that a ‘model’ is proof of anything is bunkum. The only fact is that they are getting away with alarming people, redistributing wealth, ruining efficient and economical energy production, and smearing/shaming non-believers. Meanwhile, others are sitting around twiddling their thumbs waiting for LIA II to be the savior of science. Sorry folks, but it’s time to fight back.

AndyG55
Reply to  JohnMacdonell
June 28, 2017 11:37 pm

Poor Dana, Nutcase of the SkS propaganda site.
If you believe a word he says.. then you are stupidly GULLIBLE.

JohnMacdonell
Reply to  AndyG55
June 29, 2017 6:06 am

If that is the most intelligent response you can muster, you have troubles, bud.

el gordo
Reply to  AndyG55
June 29, 2017 7:58 pm

‘But if we can reduce human carbon pollution, we’ll shift to a scenario with a long-term global warming slowdown.’
Theoretically, natural variables have overwhelmed the AGW signal temporarily, but in the Southern Hemisphere there is a huge global warming signal which is being ignored.
I speak of the intensification of the subtropical ridge.

June 28, 2017 12:26 pm

It would be nice to see more attention given to the disagreement between IPCC climate models and observation, but this paper is a poor place to start that conversation, since it makes a very fundamental mistake right out of the gate by “averaging” the output of 102 different climate models, then comparing that value with observation.
The idea behind an average value is that, due to the “law of large numbers”, error in the observation of a measure will cancel out as more observations are made and, assuming the error is normally distributed, will eventually converge on the true value with greater precision.
This concept depends on making multiple measurements/observations of the same thing. If you measure the same table 100 times, your measures will be made more precise by averaging. If you measure 100 different tables 1 time, the average of those measures is completely meaningless (literally).
Sergey Kravtsov makes this error. As a result, his critique is essentially baseless from the very start. A much better approach to demonstrating this problem might be to calculate the correlation coefficient (r-squared) for each individual model with respect to observation. R-squared is a concise measure of the percentage of variability in the dependent variable described by the independent variable, and most stats packages produce that metric as a byproduct of a regression. For example, a value of r-squared equal to .93 would tell us that 93% of the variability observed in temperature is described by the model. Producing the range of r-squared values over the set of 102 CMIP5 models would give a useful indication of how well the models agree with observation, and ranking the models by r-squared provides an objective method for selecting the model that best fits observation. Any model that failed to describe at least 60% of the variability in observed temperature should, in my opinion, be rejected outright.
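
For anyone who wants to try the per-model r-squared screening suggested above, the computation itself is simple. The sketch below, in Python, uses made-up model names and synthetic series; it just computes the squared Pearson correlation between each individual model run and an observed series on a common annual grid and ranks the models by it.

```python
import numpy as np

def r_squared(model_series, obs_series):
    """Squared Pearson correlation between one model run and observations."""
    r = np.corrcoef(model_series, obs_series)[0, 1]
    return r ** 2

def rank_models(model_runs, obs_series):
    """model_runs: dict of model name -> annual temperature series (hypothetical).
    Returns (name, r^2) pairs sorted best-fit first."""
    scores = {name: r_squared(series, obs_series) for name, series in model_runs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# toy example with made-up series
rng = np.random.default_rng(2)
obs = np.cumsum(rng.standard_normal(100)) * 0.05
models = {f"model_{i:02d}": obs + (0.1 + 0.05 * i) * rng.standard_normal(100)
          for i in range(5)}
for name, r2 in rank_models(models, obs):
    print(f"{name}: r^2 = {r2:.2f}")
```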

jonesingforozone
Reply to  Bartleby
June 28, 2017 1:28 pm

So, do you really think that this would change the conclusion?

Reply to  jonesingforozone
June 28, 2017 3:17 pm

How much money do you have?

rd50
Reply to  Bartleby
June 28, 2017 1:29 pm

Did you read the article? “averaging the output of 102 models”?

Reply to  rd50
June 28, 2017 3:15 pm

Yes, I did. I hope you understood the criticism.

Reply to  rd50
June 28, 2017 3:21 pm

You understand Sergey measured 100 different tables once? That’s his problem.

rd50
Reply to  rd50
June 28, 2017 3:24 pm

Sorry, he did not average 102 models.
You did not read the article.

Reply to  rd50
June 28, 2017 3:36 pm

Which part of “Average of 102 Climate Models” did you not understand?

Reply to  rd50
June 28, 2017 3:44 pm

And which article are you referring to?

rd50
Reply to  rd50
June 28, 2017 3:59 pm

I am responding to your comments on the Kravtsov article.
Here is what you wrote:
“It would be nice to see more attention given to the disagreement between IPCC climate models and observation, but this paper is a poor place to start that conversation since it makes a very fundamental mistake right out the gate by “averaging” the output of 102 different climate models, then comparing that value with observation.”
Try to find the “averaging” the output of 102 different climate……” in the Kravtsov article.
The models he used are in Table 1 of the Kravtsov article. See how many were used and how they were used. You are simply wrong assuming he used the average output of 102 …..
You simply did not read the article.

Reply to  rd50
June 28, 2017 5:14 pm

RD, I read this article. I did read the abstract of the other. If the assertion and graphic in this article is the one I was commenting on (and it was) the comment stands. I read the article.

Reply to  rd50
June 28, 2017 5:17 pm

RD, if you feel the Kravtsov paper has been misrepresented in this article I strongly encourage you to take that issue up with its publisher.

Reply to  rd50
June 28, 2017 5:19 pm

I believe this thread requires moderation.

Reply to  rd50
June 28, 2017 5:35 pm

RD50 writes: “You are simply wrong assuming he used the average output of 102 …..”
I made no assumption. The first graphic presented in this article (the one I am commenting on) very specifically labels the red chart line as the average of 102 CMIP 5 model runs. There’s no assumption; that’s the legend.
My criticism is in response to this article. If this article has deliberately, or in error, misrepresented the research being reported, that’s a problem you have with its publisher, not me.

rd50
Reply to  rd50
June 28, 2017 6:06 pm

I asked you:
Did you read the article?
You responded; YES
Now you pretend that the article you responded to was the graph published introducing the published article!
A pretty graph with a red line going up being the average of 102 models.
This graph was NEVER EVER introduced by Kravtsov in his article. But you blamed Kravtsov for using it.
Sure, now you have to try to defend yourself one way or the other that you were not responding to Kravtsov.
Sorry, the Kravtsov article is what you were responding to. You selected the wrong graph, a graph he never used.
Read again your response.
You specifically wrote: “Sergey Kravtsov makes this error.”
What was the error Sergey Kravtsov made? According to you: using the average of 102 models.
So you were responding to the article of Sergey Kravtsov erroneously, pretending that you read the article!
Nonsense.
You never read the article by Kravtsov.
Just give me a response of how many models he listed in Table 1 of his article and I will upload Table 1 of the article here.
The list in his Table 1 is quite different from the list of 102 models you used to unfairly criticize him.

rd50
Reply to  rd50
June 28, 2017 7:27 pm

Waited long enough for your response.
Here is a copy of Table 1 listing the models selected:
Table 1. CMIP5 Twentieth Century Simulations Used in This Study (a)

Model #  Model Acronym   Historical  Historical GHG  Historical Nat  PI Control
1        CanESM2         5           5               5               996
2        CCSM4           6           3               4               501
3        CNRM-CM5        10          6               6               850
4        CSIRO-MK3-6-0   10          5               5               500
5        GFDL-CM2.1      10          –               –               –
6        GFDL-CM3        5           3               3               500
7        GISS-E2-Hp1     6           5               5               540
8        GISS-E2-Hp2     6           –               –               531 (b)
9        GISS-E2-Hp3     6           –               5               431 (b)
10       GISS-E2-Rp1     6           5               5               550
11       GISS-E2-Rp2     6           –               –               531 (b)
12       GISS-E2-Rp3     6           –               5               531 (b)
13       HadCM3          10          –               –               –
14       HadGEM2-ES      5           3               4               575
15       IPSL-CM5A-LR    6           3               3               1000
16       MIROC5          5           –               –               770
17       MRI-CGCM3       3           1               1               500
Total    17 models       111 simulations  39 simulations  51 simulations  9306 (7282) years

(a) We selected the models with four or more historical realizations (the fourth run for the MRI model was not available) and analyzed the runs for which sea surface temperature (SST), surface air temperature (SAT), and sea level pressure (SLP) outputs were all available. Listed are the following: the number of realizations in historical runs with all forcings included and in the runs with greenhouse gas (GHG) and natural (Nat) forcings only, as well as the length of the preindustrial control runs (in years).
(b) The (low-variance) PI control runs of GISS models were not included in the final control-run ensemble to compensate for the absence of the (high-variance) GFDL-CM2.1 and HadCM3 control runs (see KC2017).
Difficult to read, I agree, but a list of the models used is there. Much better presentation if you download the article with Table 1.
Nevertheless it proves beyond a reasonable doubt that the average of 102 models was never used by Kravtsov.

Reply to  rd50
June 28, 2017 8:14 pm

RD50 writes: “This graph was NEVER EVER introduced by Kravtsov in his article. “
This appears to be the root of our misunderstanding. My critique was of this article, the one we’re discussing. I expressed trust that the leading graphic, which is very clearly identified as representing the “Average of…”, was a true and correct representation of the methods used by Kravtsov. I stand by the criticism I made in that context.
As I mentioned earlier, if you believe this publication has in some way misrepresented Kravtsov’s work, the issue isn’t with me, it’s with the editor of this publication. My criticism remains valid.

Reply to  rd50
June 28, 2017 9:19 pm

I’d like to add that, had there been some number of models less than 102, for example 17, which were run several times each and then aggregated into an “average”, the criticism I’ve made still stands. This is a fundamental truth of valid statistical methods.
It’s very important you understand the purpose of an “average” before you can comprehend the criticism I’ve made above. If that remains beyond your ken, this really isn’t the place to rectify that deficiency.

Gabro
Reply to  rd50
June 28, 2017 9:28 pm

Barb,
Yes. Even though the models and runs with ECS above 2.0 degrees C per doubling are clearly wrong, IPCC must leave them in so that the future looks scary.

Reply to  jonesingforozone
June 28, 2017 3:42 pm

Sufficient excuse for what?

Reply to  jonesingforozone
June 28, 2017 5:40 pm

Jones, while I’d like to use your reference I’m afraid I can’t really accept it as a curated source. I won’t be spending $1139 to read the original, most especially since it was completely funded by US taxpayers, of which I am one.
If it shows up on ResearchGate, a curated source I trust, I’ll be happy to review it in full.

Reply to  jonesingforozone
June 28, 2017 10:02 pm

Jones, the link is dead. Doesn’t resolve. Even if I wanted to waste time on it, I couldn’t.

jonesingforozone
Reply to  Bartleby
June 29, 2017 5:33 am

Oh, I see, it is a process to get to the paper.
First, open this link for guest access.
Then, select the pdf file dated June 15th.
The link is from page 6 of the Supplementary pdf which is free access:
grl56045-sup-0001-2017GL074016-SI.pdf
This Supplementary pdf appears on the Supporting Information tab of the page http://onlinelibrary.wiley.com/wol1/doi/10.1002/2017GL074016/suppinfo

Ralph W. Lundvall
June 28, 2017 1:12 pm

I want a scatter graph showing $s of grant money to accuracy of forecast.

Reply to  Ralph W. Lundvall
June 28, 2017 4:04 pm

That should be pretty easy.

June 28, 2017 1:39 pm

dear tanya:
please learn how to use a computer

Gabro
Reply to  Leo Smith
June 28, 2017 5:58 pm

She is using it as she intends, ie to troll.

JohnMacdonell
June 28, 2017 6:22 pm

Anybody here ever seen Sinclair’s video about Anthony?

Bill Illis
Reply to  JohnMacdonell
June 28, 2017 9:15 pm

Nobody should be subjected to Peter Sinclair’s condescending, factually incorrect videos.
(unless you like your condescending, incorrect information, which lots of people do).

Gabro
Reply to  Bill Illis
June 28, 2017 9:21 pm

Pete is a second generation enviro-N@zi trough-feeder with a Bachelor of Fine Arts degree.
Hilarious that the worse than worthless, waste of oxygen dweeb imagines he can breathe the same air scientifically with Anth@ony.

Steve in Seattle
June 28, 2017 7:20 pm

From key quote # 3 :
The magnitude of the CMIP5 “internal” variability estimated by this method is, hence, much larger …
I am confused by the use of the phrase “this method” – which method is “this method” above referring to? Can anybody help me?

RoHa
June 28, 2017 8:41 pm

It would have been polite to tell us that the article was by Sergey Kravtsov.
I love the plain language summary.

ironicman
June 28, 2017 9:32 pm

If the hiatus continues for another ten years then we can safely say the Lukewarmers win.

Gabro
Reply to  ironicman
June 28, 2017 9:40 pm

In that case, the non-warmers win.
Despite a physical “greenhouse effect”, that would tell us that in the actual climate system, it doesn’t happen, probably because of net negative feedbacks, which is just what should be expected on a self-regulating, watery planet.

ironicman
Reply to  Gabro
June 28, 2017 10:27 pm

Curry and Lomborg win, the Denialati lose because global cooling failed to kick in.

Gabro
Reply to  Gabro
June 28, 2017 10:32 pm

Lose they must because Mother Nature says so.
The sad truth is that the cr!minals will suffer no penalties for their cr!mes.

ironicman
Reply to  Gabro
June 28, 2017 11:33 pm

If the Lukewarmers win then the Klimatariat is let off the hook; it’s a sensitivity issue.
CO2 may yet have a case to answer if a Gleissberg Minimum fails to show.
‘Solar cycle 24 has turned out to be historically weak with the lowest number of sunspots since cycle 14 peaked more than a century ago in 1906. In fact, by one measure, the current solar cycle is the third weakest since record keeping began in 1755 and it continues a weakening trend since solar cycle 21 peaked in 1980.’
Paul Dorian

Gabro
June 28, 2017 9:44 pm

The jig is up.
The CACA charlatans lose. Reality wins.
The question now is, are Hansen, Schmidt, Jones, Mann, Trenbeth and co-c@onspirators charged only with fraud, or with crimes against humanity, as is totally warranted.
Using RICO, the Trump DoJ should sweat the underlings to get to the ring leaders, just as with the Mafia. Is Briffa an underling or a capo di tutti capi? If he sings like a canary, then he’s a mere soldier, not a capo. Overpeck, however, I have to go with capo.

Yogi Bear
June 29, 2017 4:51 am

The habit of referring only to the cold-season NAO, as if the rest of the year doesn’t matter, only serves to confuse its relationship with the AMO. The 3-month running mean shows a positive NAO regime from around 1963, shifting negative from around 1995, well in time with the AMO phase shifts, while the JFM cold season alone shows little such congruence with the AMO envelope.
http://www.cpc.ncep.noaa.gov/products/precip/CWlink/pna/nao.timeseries.gif
http://www.cpc.ncep.noaa.gov/products/precip/CWlink/pna/season.JFM.nao.gif

JohnMacdonell
June 29, 2017 6:11 am

Gabro
“……. he can breathe the same air scientifically with Anth@ony.”
Nice to hear that Anth@ony knows science.

Graham Dodd
June 29, 2017 6:04 pm

http://www.realclimate.org/index.php/climate-model-projections-compared-to-observations
I have a scientific background and education but got dragged into finance out of school and thus haven’t worked in the field and greatly appreciate all that Anthony and the knowledgeable posters here have taught me. I have run across this link being thrown around to support the argument that the climate models work well and would appreciate if one or more could provide me with a succinct rebuttal.
Thanks!

DWR54
June 30, 2017 5:50 am

Remote Sensing Systems (RSS) just released their updated lower troposphere temperature data set from v3.3 to v4.0:
http://images.remss.com/data/msu/graphics/TLT_v40/plots/RSS_TS_channel_TLT_Global_Land_And_Sea_v04_0.png
The result is a hugely increased rate of warming over the course of the data compared to the previous data set. The decadal trend since 1979 is 0.184 C/dec, very close to, though a little faster than, the surface data sets, including GISS (0.176 C/dec since 1979). Still slightly below the CMIP5 multi-model average over the past 20 years, but much warmer than UAH’s equivalent TLT.
This change is primarily due to the changes in the adjustment for drifting local measurement time: http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-16-0768.1
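
A decadal trend like the 0.184 C/decade figure quoted above is just the slope of an ordinary least-squares fit to the monthly anomalies, scaled to degrees per decade. Below is a minimal Python sketch of that calculation, using a synthetic series rather than the actual RSS data.

```python
import numpy as np

def decadal_trend(monthly_anomalies, start_year=1979):
    """OLS slope of a monthly anomaly series, expressed in degrees C per decade."""
    n = len(monthly_anomalies)
    years = start_year + np.arange(n) / 12.0      # fractional-year time axis
    slope_per_year = np.polyfit(years, monthly_anomalies, 1)[0]
    return slope_per_year * 10.0

# toy check: a synthetic series built with a known 0.18 C/decade trend plus noise
rng = np.random.default_rng(3)
months = (2017 - 1979) * 12
t = np.arange(months) / 12.0
series = 0.018 * t + 0.1 * rng.standard_normal(months)
print(f"Estimated trend: {decadal_trend(series):.3f} C/decade")
```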

rd50
Reply to  DWR54
June 30, 2017 1:39 pm

2hotel9
July 1, 2017 9:36 am

The models work exactly as they are designed to work and provide the precise results they are created to show.

markl
Reply to  2hotel9
July 1, 2017 11:16 am

+1 We need to stop saying they are failures and call them out for the propaganda they are.

gregfreemyer
July 1, 2017 12:58 pm

I just want to highlight that Kravtsov is one of the main researchers on the Stadium Wave Theory. I believe much of this work actually falls out of the ongoing work related to the Stadium Wave Theory. Kravtsov was an advisor to Marcia Glaze Wyatt, who developed the Stadium Wave Theory as her PhD dissertation. Curry was also an author of some of the relevant papers, but Kravtsov has been doing a lot of the heavy lifting, both initially and now as the theory’s testable claims come to the forefront.
If the Stadium Wave Theory proves out we should see increasing sea ice across the top of Europe/Asia in the next 5-10 years.
http://www.wyattonearth.net/images/392_stad_wave.png

2hotel9
Reply to  gregfreemyer
July 1, 2017 8:09 pm

Really? Bunches of drunk people standing up and sitting down while waving their hands in the air affects the climate? Oy vay.

gregfreemyer
Reply to  2hotel9
July 2, 2017 2:37 am

It’s not the standing up, it’s when they run out of warm beer to piss out. Already happened south of Iceland
http://www.climate4you.com/SeaTemperatures.htm#North Atlantic (60-0W, 30-65N) heat content 0-700 m depth

2hotel9
Reply to  gregfreemyer
July 3, 2017 4:50 am

Warm water in the Gulf Stream? We are doomed, Doomed I tells ya!!!

2hotel9
Reply to  gregfreemyer
July 4, 2017 5:36 am

Wow, going by that graph the Gulf Stream is slackin’ off. Better dock its pay till it gets on the ball!

gregfreemyer
Reply to  2hotel9
July 5, 2017 5:29 pm

I told you. They ran out of beer in Mexico, so they quit peeing in the gulf.
More seriously, 70-90 year cycles seem real. That the first place I’ve seen that has already topped out for this cycle. Next should be eurasian ice extent. (ie. the ice from scandinavia to Eastern Russia.) If the Stadium Theory is right, that part of the world should have topped out and be headed down soon. Look at that multi-colored sine wave chart I posted. The theory is the energy gets pushed from one cycle to the next. ngAMO is in the first set of waves. That’s negative AMO. In theory it will lead the rest of the world into a cooling phase over the next 20 years.

2hotel9
Reply to  gregfreemyer
July 5, 2017 8:56 pm

Oh, don’t even get me started on “cycles”, they inter/commingle. And do not even mention the SUN in any of this! That sets off the leftards like nobody’s business. Except for the whole “I can choose my gender” horsesh*t. First the LGBTQUACBRTMLBLAHBLAH said you are born gay and can not change it, now they say no matter what gender you are physically born you can “choose” what you are. Am I, honestly, the only one who sees how these two “philosophies” mesh together so conveniently?