A climate science milestone: a successful 10-year forecast!

By Larry Kummer. Reposted from the Fabius Maximus website, 21 Sept 2017.

Summary: The gridlock might be breaking in the public policy response to climate change. Let’s hope so, for the gridlock has left us unprepared even for the inevitable repeat of past extreme weather — let alone the new challenges the future will bring.

The graph below was tweeted yesterday by Gavin Schmidt, Director of NASA’s Goddard Institute for Space Studies. (Yesterday Zeke Hausfather at Carbon Brief posted a similar graph.) It shows another step forward in the public policy debate about climate change, in two ways.

[Graph: observed global temperatures compared with the CMIP3 model ensemble, via Gavin Schmidt.]

(1) This graph demonstrates a climate model’s predictive skill over a short horizon of roughly ten years. CMIP3 was prepared in 2006-07 for the IPCC’s AR4 report. That’s progress, a milestone — a successful decade-long forecast!

(2) The graph uses basic statistics, something too rarely seen today in meteorology and climate science. For example, the descriptions of Hurricanes Harvey and Irma were very 19th C, as if modern statistics had not been invented. Compare Schmidt’s graph with Climate Lab Book’s updated version of the signature “spaghetti” graph — Figure 11.25a — from the IPCC’s AR5 Working Group I report. Edward Tufte (The Visual Display of Quantitative Information) weeps in Heaven every time someone posts a spaghetti graph.

Note how the two graphs differ in displaying the gap between observations and model output during 2005-2010. Schmidt’s graph shows observations near the CMIP3 ensemble mean. The updated Figure 11.25a shows observations near the bottom of the range of CMIP5 model outputs (Schmidt also provides a version of his graph using CMIP5 outputs).

Clearing away the underbrush so we can see the big issues.

This is one in a series of recent incremental steps forward in the climate change policy debate. Here are two more examples of clearing away relatively minor issues. Even baby steps add up.

(1) Ocean heat content (OHC) as the best metric of warming.

This was controversial when Roger Pielke Sr. first said it in 2003 (despite his eminent record, Skeptical Science called him a “climate misinformer” – for bogus reasons). Now many climate scientists consider OHC to be the best measure of global warming. Some point to changes in the ocean’s heat content as an explanation for the pause.

Graphs of OHC should convert any remaining deniers of global warming (there are some out there). The graph below shows the increasing OHC of the top 700 meters of the oceans, from NOAA’s OHC page. See here for more information about the increase in OHC.

[Graph: ocean heat content of the top 700 m of the oceans, from NOAA.]
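As a sanity check on those units, here is a rough conversion (using approximate textbook values for ocean area, density, and specific heat; none of these numbers come from NOAA’s page) showing that a change of about 10 x 10^22 J corresponds to roughly a tenth of a degree averaged over the 0-700 m layer:

```python
# Back-of-the-envelope: convert an ocean heat content (OHC) change in
# joules into an average temperature change of the top 700 m of ocean.
# All constants are approximate textbook values, not NOAA data.
ocean_area_m2 = 3.6e14     # global ocean surface area (~3.6e8 km^2)
depth_m = 700.0            # the 0-700 m layer in NOAA's OHC series
density = 1025.0           # seawater density, kg/m^3
specific_heat = 3990.0     # seawater specific heat, J/(kg K)

# Heat capacity of the layer: mass times specific heat (~1e24 J/K).
heat_capacity = ocean_area_m2 * depth_m * density * specific_heat

ohc_change = 10e22         # J; the rough scale of the rise in the graph
print(f"layer-average warming: {ohc_change / heat_capacity:.2f} K")  # ~0.10 K
```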

(2) The end of the “pause” or “hiatus”.

Global atmospheric temperatures paused during the period roughly between the 1998 and 2016 El Ninos, especially according to the contemporaneous records (later adjustments slightly changed the picture). Activists said that the pause was an invention of deniers. To sustain that claim they had to conceal the scores of peer-reviewed papers identifying the pause, exploring its causes (there is still no consensus on this), and forecasting when it would end. They were quite successful at this, with the help of their journalist-accomplices.

Now that is behind us. As the graph below shows, atmospheric temperatures appear to have resumed their increase, or taken a new stair step up — as described in “Reconciling the signal and noise of atmospheric warming on decadal timescales”, Roger N. Jones and James H. Ricketts, Earth System Dynamics, 8 (1), 2017.

[Graph: stepwise increases in atmospheric temperature, from Jones and Ricketts (2017).]

What next in the public policy debate about climate change?

Perhaps now we can focus on the important issues. Here are my nominees for the two most important open issues.

(1) Validating climate models as providers of skillful long-term projections.

The key question has always been about future climate change. How will different aspects of weather change, and at what rate? Climate models provide these answers. But acceptable standards of accuracy and reliability differ between scientists’ research and policy decisions that affect billions of people and the course of the global economy. We have limited resources; the list of threats is long (e.g., the oceans are dying). We need hard answers.

There has been astonishingly little work addressing this vital question. See major scientists discussing the need for it. We have the tools: a multidisciplinary team of experts (e.g., software engineers, statisticians, chemists), adequately funded, could do it in a year. Here is one way: Climate scientists can restart the climate policy debate & win: test the models! That post also lists (with links) the major papers in the absurdly small — and laughably inadequate — literature on the validation of climate models.
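What might such a test look like? As a minimal sketch (with invented numbers, not actual observations or CMIP3 output), a standard approach scores a model’s forecast error against a naive baseline, such as “no change from the start of the forecast”:

```python
import numpy as np

def skill_score(observed, forecast, baseline):
    """Mean-squared-error skill score: 1 is perfect, 0 is no better
    than the baseline, negative is worse than the baseline."""
    mse_forecast = np.mean((observed - forecast) ** 2)
    mse_baseline = np.mean((observed - baseline) ** 2)
    return 1.0 - mse_forecast / mse_baseline

# Illustrative decade of annual temperature anomalies (invented, deg C).
obs = np.array([0.55, 0.58, 0.54, 0.60, 0.65, 0.63, 0.70, 0.75, 0.87, 0.80])
model = np.array([0.56, 0.59, 0.61, 0.62, 0.64, 0.66, 0.68, 0.71, 0.73, 0.76])
persistence = np.full_like(obs, obs[0])  # naive baseline: no change

print(f"skill vs persistence: {skill_score(obs, model, persistence):.2f}")
```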

There is a strong literature to draw on about how to test theories. Let’s use it.

  1. Thomas Kuhn tells us what we need to know about climate science.
  2. Daniel Davies’ insights about predictions can unlock the climate change debate.
  3. Karl Popper explains how to open the deadlocked climate policy debate.
  4. Milton Friedman’s advice about restarting the climate policy debate.
  5. Paul Krugman talks about economics. Climate scientists can learn from his insights.
  6. We must rely on forecasts by computer models. Are they reliable? (Many citations.)
  7. Paul Krugman explains how to break the climate policy deadlock.


(2) Modeling forcers of climate change (greenhouse gases, land use).

Climate models forecast the climate based on input scenarios describing the world, including factors such as the amounts of the major greenhouse gases in the atmosphere. These scenarios have improved in detail and sophistication with each IPCC report, but they remain an inadequate basis for making public policy.

The obvious missing element is a “business as usual” or baseline scenario. AR5 used four scenarios — Representative Concentration Pathways (RCPs). The worst was RCP8.5 — an ugly scenario of technological stagnation and rapid population growth, in which coal becomes the dominant fuel of the late 21st century (as it was in the late 19th C). Unfortunately, “despite not being explicitly designed as business as usual or mitigation scenarios,” RCP8.5 has often been misrepresented as the “business as usual” scenario — becoming the basis for hundreds of predictions about our certain doom from climate change. Only recently have scientists begun shifting their attention to more realistic scenarios.

A base-case scenario would provide a useful foundation for public policy. Also useful would be a scenario with likely continued progress in energy technology and continued declines in world fertility (e.g., we will get a contraceptive pill for men, eventually). That would show policy-makers and the public the possible rewards of policies that encourage these trends.

Conclusions

Science and public policy both usually advance by baby steps, incremental changes that can accomplish great things over time. But we can do better. Since 2009 my recommendations about our public policy response to climate change have been the same.

  1. Boost funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.
  2. Run government-funded climate research with tighter standards (e.g., posting of data and methods, review by unaffiliated experts), as we do for biomedical research.
  3. Do a review of the climate forecasting models by a multidisciplinary team of relevant experts who have not been central players in this debate. Include a broader pool than those who have dominated the field, such as geologists, chemists, statisticians and software engineers.
  4. Begin a well-funded conversion to non-carbon-based energy sources, for completion by the second half of the 21st century — justified by both environmental and economic reasons (see these posts for details).
  5. Begin more aggressive efforts to prepare for extreme climate. We’re not prepared for a repeat of past extreme weather (e.g., a real hurricane hitting NYC), let alone predictable climate change (e.g., sea levels climbing, as they have for thousands of years).
  6. The most important one: break the public policy gridlock by running a fair test of the climate models.

For More Information

For more about the close agreement of short-term climate model temperature forecasts with observations, see “Factcheck: Climate models have not ‘exaggerated’ global warming” by Zeke Hausfather at Carbon Brief. To learn more about the state of climate change see The Rightful Place of Science: Disasters and Climate Change by Roger Pielke Jr. (Prof of Environmental Studies at U of CO-Boulder).

For more information, see all posts about the IPCC, the keys to understanding climate change, and these posts about the politics of climate change.

512 Comments
ivankinsman
September 22, 2017 5:10 am

This seems like a very rational and objective article on AGW to me. Let’s see more of these on WUWT.

I Came I Saw I Left
Reply to  ivankinsman
September 22, 2017 7:18 am

Looks like propaganda to me.

ivankinsman
Reply to  I Came I Saw I Left
September 22, 2017 7:22 am

Not at all. It is very pragmatic, but I am assuming not ‘sceptical’ enough for you.

I Came I Saw I Left
Reply to  I Came I Saw I Left
September 22, 2017 7:49 am

Increasing funding for climate science is pragmatic? What planet do you live on? Pragmatism would be hardening the infrastructure and civilization against what nature can throw at us. Excessive funding of climate science IS the problem. What purpose does it serve?

Reply to  I Came I Saw I Left
September 22, 2017 8:07 am

Ditto.

Johnny Cuyana
Reply to  I Came I Saw I Left
September 22, 2017 8:11 am

ICISIL, you are exactly correct!
In this article there are a few “reasonable” proposals; however, IMO, they are overwhelmed by a cry for more funding and support for more “centralized command and control” BIG GOVT brainwashing, that is, for support for many of the same gang of duplicitous quasi-scientists who got us here in the first place.

Greg
Reply to  I Came I Saw I Left
September 22, 2017 8:49 am

“…. not ‘sceptical’ enough for you”
Me neither.
The first unlabelled figure shows CMIP3 from 2007, fine. But the problem is that the DATA has since been “corrected” to fit the models. This is not “validation” of a model prediction.
The unlabelled second figure shows a “pentadal average” which is notably different from the 3-month and yearly plots. It seriously undershoots until 1970, fails to capture the early-80s drop, and goes entirely in the wrong direction during the post-Pinatubo period of the early 90s. Is this really supposed to be the same data?
Also, this pentadal average clearly is NOT a five-yearly average of anything; it is a running mean. The author clearly does not even know what an average is.
Furthermore, seriously presenting OHC data going back to 1960 is either blatant misrepresentation or ignorance. There simply was no reliable OHC data that far back.
It may be a rather tamer form of propaganda, but it is still AGW propaganda.

TRM
Reply to  I Came I Saw I Left
September 22, 2017 9:20 am

“Re-allocate climate spending” would have been a better term. I agree they don’t need more money; they just need to quit funding 30+ models that are nowhere near reality and focus on the 5 or so that are. Competition!

J.H.
Reply to  I Came I Saw I Left
September 22, 2017 9:59 am

There’s nothing “pragmatic” about a scam….. There hasn’t been any science in so-called Climate Science. It’s been an exercise in political activism and a scam right from day one….. When Hansen admitted that he turned the air conditioners off at the conference and was proud of that theatrical prop…. the science of climate change died that very day.

Latitude
Reply to  I Came I Saw I Left
September 22, 2017 10:11 am

It is very pragmatic …..since when is some WAG that only works short term…the basis for anything pragmatic
…extend that little line out far enough that it matters…and it’s the same crap they’ve been spitting out since day one

gnomish
Reply to  I Came I Saw I Left
September 22, 2017 10:51 am

i’m not buying any pitch of any prophet or acolyte or minion
and i’m also getting real sick of constant exhortation and importuning.
kummer- mitts off my stuff or pull back a stump.
don’t even look at my wallet.
don’t even talk about my wallet.

Malcolm Carter
Reply to  I Came I Saw I Left
September 22, 2017 12:02 pm

I am having difficulty with the increasing OHC measured in 10^22 joules. Joules per what? Is this a large amount? Is this another attempt to make a tiny temperature increase (with large error bars) seem like a massive change? In other words, is this a continuation of the hyperbole aka propaganda?

AndyG55
Reply to  I Came I Saw I Left
September 22, 2017 12:19 pm

Malcolm, think temperature changes in the 1/100 degree range, immeasurable even now with ARGO.
Measurements of ocean temperature before 2003 are basically non-existent for most of the ocean.
It’s a load of ASSumption-driven modelling FARCE.

MarkW
Reply to  I Came I Saw I Left
September 22, 2017 2:20 pm

Even if ARGO could measure with that accuracy, there aren’t enough probes to justify maintaining that kind of “accuracy” when extended to all the oceans.

Janice Moore
Reply to  I Came I Saw I Left
September 22, 2017 2:33 pm

ivan K:
1. Yes, it is “rational.” The author’s half-truths and lies support his goal: promote AGW.
2. “Objective” — not. It is an attempt to persuade you to agree with the author.
3. What this article is
(given, as Greg and NUMEROUS others have nicely explained, such assertions as this:

This graph shows a climate model’s demonstration of predictive skill ….

{WRONG: it only shows the code writers’ ability to make their computer simulation mimic KNOWN data.}
is: a piece of JUNK.
And this author, L. K. of the FM piece of junk blog is a KNOWN AGW schlepper around here.

Duster
Reply to  I Came I Saw I Left
September 22, 2017 4:27 pm

I was completing this when the cat [really, no joke] stepped on the enter-key a moment before I completed typing my handle:
Not really. Even “skeptics” like Doc Spencer point to a limited warming in the measured data of about 1.4 C per century. The actual data seems to appear stepped rather than a trend and could reflect solar influences working in combination with oceanic thermal mass. If the major solar minimum comes about as frequently predicted, then a good test will be to see if the “trend” steps backward over the next few decades.
The problem really isn’t about the science per se, but rather the struggle, which approaches the ugliest 19th-century misunderstandings of Darwinian selection, for access to limited resources (money, facilities, computing capacity). This leads to competitive “packs” of scientists battling it out for access to all the necessities of science: money, a new microscope or computer, publication space, etc. That is what produces the debacle we call “climate science.” If you toss in the growing success of “high-functioning sociopaths” in our current civilization, the outcome is predictable. A means of forcing better science would be to pit teams of HFSs against each other with equal, limited access to funding and facilities and publication in journals with open peer review, preferably refereed by reviewers from a different discipline. Attendance at conferences, which really is helpful if you actually practice a science, should be self-funded. The IRS allows professional expenses to be claimed, so make them take that route rather than funding travel to exotic locations through public taxes.

Sixto
Reply to  I Came I Saw I Left
September 22, 2017 4:46 pm

Duster,
Satellite trend in “warming” is meaningless, since it can only measure since 1979, which was the start of a natural warming cycle. The alleged 1.3 degrees C per century rate is based upon a cycle coming off a strong cooling cycle from the 1940s. Thus the actual centennial scale warming is much lower.
But even if it were that high, that’s a good thing, not a bad thing.

Reply to  I Came I Saw I Left
September 23, 2017 1:18 am

ICISIL, note the many replies to this entry. All other approaches to deceiving the masses have failed, for soothsaying is not science. This is a very clever attempt, a prelude to telling us the oceans will boil by the year xxxx: just another bit of fortune telling.
These soothsayers are rapidly running out of time, as old Sol would appear to be entering one of his moods that causes a tad of serious cooling.

DrTorch
Reply to  ivankinsman
September 22, 2017 7:41 am

Yeah, the objective is, “Give us more money.” While cherry-picking those data which may match empirical measurements.
The call for validation is a nice change, I will give him that.

BCBill
Reply to  DrTorch
September 22, 2017 7:18 pm

Validation would be absolutely necessary, as there is a high probability that, out of ~50 climate models, one or two of them would get close to reality just by chance. The high variability in the projections suggests a weak methodology and a high probability that chance deserves the credit rather than the skill of the modellers.
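That intuition is easy to check with a toy simulation (arbitrary drift and noise settings, no physics): give 50 random trajectories the same drift as a random “reality” and count how many track it closely by luck alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_years, tolerance = 50, 10, 0.1  # arbitrary illustrative settings

def random_walk():
    # A decade of annual anomalies: shared 0.02 C/yr drift plus noise.
    return np.cumsum(0.02 + rng.normal(0.0, 0.05, n_years))

reality = random_walk()
pseudo_models = np.array([random_walk() for _ in range(n_models)])

# A pseudo-model "matches" if its RMS distance from reality stays
# within the tolerance over the decade -- pure chance, no skill.
rms = np.sqrt(np.mean((pseudo_models - reality) ** 2, axis=1))
print(f"{np.sum(rms < tolerance)} of {n_models} match by chance")
```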

Ron Long
Reply to  ivankinsman
September 22, 2017 8:03 am

Ivankinsman, look at what they constructed. First they adjusted temperature data to eliminate any “pause” or “hiatus” and to fit model projections; then they show increased sea temperatures and say that explains the “pause” or “hiatus”, that is, the missing heat went into the oceans. They are trapped by their own data manipulation, it looks like to me.

DD More
Reply to  Ron Long
September 22, 2017 12:34 pm

Everyone should compare HausDaddy’s Blended Model 1970-2020 Fairy Tale to this, from the Grooveyard of Forgotten Hits – 2009 Style.
http://rankexploits.com/musings/wp-content/plugins/BanNasties/imageDiversion.php?uri=/musings/wp-content/uploads/2009/04/20yeartrends.jpg
Note only the 1997 peak touches the ‘Model Runs’ whereas this chart is well over the lines.
They are reporting how well their New Revised Models compare to the New Revised Temperatures.
And if they have this ‘Climate Modeling’ down to perfection, why do we still need 47 of them?

Janice Moore
Reply to  Ron Long
September 22, 2017 4:06 pm

HausDaddy!!

LOVE it! lolol (and your excellent expose of the bogus “projection,” too)

Sixto
Reply to  Ron Long
September 22, 2017 4:11 pm

Hausdaddy is good, but Grooveyard is better. Gotta love those Golden Oldies!

Sixto
Reply to  Ron Long
September 23, 2017 12:08 pm

Larry spouts the CACA party line, imagining that satellite data have been more adjusted than the pack of “surface” lies. However, in reality, satellite observations were adjusted in the past to improve them, fixing problems. By contrast, “surface” lies are constantly “adjusted” in order to bring them in line with the failed CACA hypothesis, not to make them closer to reality.
Past satellite adjustments were warranted. HadCRU, NOAA and NASA’s antiscientific, mendacious, made-up “surface” sets, not so much. As in, not at all.

higley7
Reply to  ivankinsman
September 22, 2017 8:04 am

The climate model predictions only match the temperature record because the temperature record was warmed to match the models. This is indeed propaganda. Berkeley Earth pretended to be skeptical and honest, and then went and warmed the data as the other AGW “scientists” do.

DWR54
Reply to  higley7
September 22, 2017 10:37 am

Either that or temperatures really are rising, as every group that looked into it has reported.

Dave Fair
Reply to  DWR54
September 22, 2017 12:06 pm

Rising overall, DWR54; but not at the rate predicted by the models!

MarkW
Reply to  higley7
September 22, 2017 11:52 am

I love it when trolls try to move the goal posts.
Nobody has claimed that the earth hasn’t warmed.
It’s the degree of warming that has been cooked.

AndyG55
Reply to  higley7
September 22, 2017 12:21 pm

“as every group that looked into it has reported.”
roflmao.. all linked to the same manically adjusted data.
reported… as part of the overall scam.

DWR54
Reply to  higley7
September 22, 2017 10:03 pm

AndyG55

roflmao.. all linked to the same manically adjusted data. reported… as part of the overall scam.

It’s noticeable that temperature data sets only become part of the ‘scam’ once they show trends some folks here don’t like the look of. The UAH lower troposphere satellite temperature data set shows an almost identical warming trend to HadCRUT4 since 1996. So if it’s a ‘scam’, then I guess you believe Roy Spencer and John Christy are in on it too?

DWR54
Reply to  higley7
September 22, 2017 10:09 pm

Sorry, that should be from 2006 (not 1996). That’s the period covered by the CMIP3 models. The trend in HadCRUT4 since 2006 is 0.274 deg. C per decade; in UAH TLT the trend since 2006 is 0.272 deg. C per decade. If we accept that the UAH satellite trend since 2006 is accurate (RSS disagrees and is higher), then the HadCRUT4 ‘scam’ has amounted to an increased warming trend of 2 one-thousandths of a degree C per decade since 2006. Well within the range of error in both sets.
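For readers wondering where a “deg C per decade” figure comes from: it is normally the slope of an ordinary least-squares fit to the monthly anomaly series, scaled to 120 months. A sketch with synthetic data (not HadCRUT4 or UAH):

```python
import numpy as np

# Synthetic monthly anomaly series: a small linear drift plus noise.
# This is NOT HadCRUT4 or UAH data; it only shows the arithmetic.
rng = np.random.default_rng(1)
months = np.arange(132)  # 11 years of monthly anomalies
anomalies = 0.027 / 12 * months + rng.normal(0.0, 0.1, months.size)

slope_per_month = np.polyfit(months, anomalies, 1)[0]
print(f"trend: {slope_per_month * 120:.3f} deg C per decade")  # ~0.27
```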

Dyugle
Reply to  ivankinsman
September 22, 2017 8:31 am

They show the increase in heat content of the ocean. The heat capacity of the ocean is at least two orders of magnitude greater than that of the atmosphere. Thus, it will take more than 100 times as long to heat the ocean 2 degrees as it does to heat the atmosphere.

A C Osborn
Reply to  Dyugle
September 22, 2017 9:40 am

They modified the data, so it is no longer data, it is Guesstimation.

David A
Reply to  Dyugle
September 22, 2017 3:19 pm

Plus one magnitude, 1000 times.

Walter Sobchak
Reply to  Dyugle
September 22, 2017 11:44 pm

The amount of heat energy in the oceans is not two but three orders of magnitude greater than in the atmosphere. Furthermore, it is six orders of magnitude greater than the amount of energy that can be attributed to CO2.
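For what it is worth, a rough check with approximate standard values (ocean mass ~1.4e21 kg, atmosphere ~5.1e18 kg; not figures from this thread’s sources) supports the three-orders-of-magnitude ratio for heat capacity:

```python
# Back-of-the-envelope ocean/atmosphere heat-capacity ratio,
# using approximate standard values.
ocean_mass_kg = 1.4e21     # total ocean mass
ocean_cp = 3990.0          # seawater specific heat, J/(kg K)
atmos_mass_kg = 5.1e18     # total atmosphere mass
atmos_cp = 1004.0          # dry air specific heat at constant pressure

ratio = (ocean_mass_kg * ocean_cp) / (atmos_mass_kg * atmos_cp)
print(f"ocean/atmosphere heat capacity: ~{ratio:.0f}x")  # ~1000x
```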

Reply to  ivankinsman
September 22, 2017 8:31 am

Out of more than 50 models they found only 3 that could fit up till now??

Reply to  Santa Baby
September 22, 2017 8:32 am

Spray and Pray?

Reply to  Santa Baby
September 22, 2017 8:37 am

How many models actually fit year for year? I guess the number has been going down each year. In 20 years will none of them fit?

Reply to  Santa Baby
September 22, 2017 8:52 am

This is the significant recommendation: “The most important one: break the gridlocked public policy by running a fair test of the climate models.”
Select what appear to be the best models (by throwing out all of the extreme outliers) and require the selected models to make a 10-year forecast. At the same time, validate the historic data by taking a rigorous and critical look at all of the adjustments and the history of the method, location, and technology of all of the data recording stations (starting with the Surface Stations findings).
A previous suggestion is to take the preferred models, start them from 1850, inputting all of the available data, and run them forward to see what they produce compared to the actual progression of the world’s climate. If 1850 is too far back to go, select a date in the 20th century when rigorous weather data was being collected…1940 for instance. If the models fail to actually model what happened, then recognize they are not fit for purpose.

AndyG55
Reply to  Santa Baby
September 22, 2017 10:28 am

It’s a very odd way of doing science. Make 100 or so models, then choose the few that match massively up-adjusted temperatures, even if those models are based on far less aCO2 than there actually is.
High FARCE, …. not science.

Paul Blase
Reply to  Santa Baby
September 25, 2017 2:19 pm

Not only that, but given the fact that climate is a non-linear, chaotic phenomenon- as per several series of articles here at WUWT – is coincidental matching between a model and a given set of data necessarily an indication of actual correctness? How do we tell the difference between a correct model that happens to have one or two parameters slightly off and an incorrect one?

Tom O
Reply to  ivankinsman
September 22, 2017 8:44 am

I’m looking at the anomalies chart and trying to figure it out. The “anomaly” for what appears to be the 1998 El Nino is significantly lower than the anomaly for the 2015 El Nino. Since an anomaly, I thought, was the change from some baseline average, and the stated temperatures for the two El Ninos are only slightly different, doesn’t that far greater anomaly for the 2015 El Nino “suggest” that the “baseline average” has gone down in order for it to be that much greater?
Now if that is the case, how can you call this rational and objective, since it obviously is pretending that oranges and pineapples are the same? I am probably misinterpreting something here, but I am not sure what.

A C Osborn
Reply to  Tom O
September 22, 2017 9:37 am

It is called “adjusting history” to make the data better.
I call it cheating and fraud.
As I have pointed out numerous times they have cooled the 1997/98 average by 2.39 degrees C.

Tom Dayton
Reply to  Tom O
September 23, 2017 9:03 am

Tom O: The baseline from which the anomalies are calculated is the same for all the years. El Ninos cause temporary increases in surface temperatures at the expense of deeper ocean temperatures. El Ninos and La Ninas cancel out each other in the long run, and they merely are variability within the Earth’s energy system. Meanwhile, additional energy accumulates pretty steadily, causing the rise over the long run. The wiggles in surface temperature caused by El Ninos and La Ninas are merely weather noise on top of the long term temperature increase that is climate. The 2015 El Nino was an increase on top of a starting temperature that was much higher than the 1998’s starting temperature.
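A toy illustration of that fixed-baseline arithmetic (all numbers invented):

```python
# Anomalies are departures from ONE fixed reference-period mean, so the
# baseline cannot "go down" between two El Ninos. Numbers are invented.
baseline = (14.2 + 14.3 + 14.4) / 3          # hypothetical reference mean, deg C
absolute_means = {1998: 14.54, 2015: 14.80}  # hypothetical annual means, deg C

for year, temp in absolute_means.items():
    print(year, f"anomaly = {temp - baseline:+.2f} deg C")
# 2015 shows the larger anomaly simply because it started warmer,
# not because the baseline changed.
```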

Reply to  ivankinsman
September 22, 2017 8:44 am

Ivankinsman,
Thank you for the feedback. Note the comment replies. The usual, showing how the climate policy debate has become dysfunctional.
“I Came” thinks that any views that disagree with his (hers?) are “propaganda”. Johnny seems to have not read the first 1500 words of the essay, and delusionally imagined what the last section says.
So it goes.

ivankinsman
Reply to  Editor of the Fabius Maximus website
September 22, 2017 8:59 am

I agree. Difficult to bridge the big yawning gap that principally seems to exist in the US. Found this a very interesting profile of a typical climate sceptic:
https://www.theguardian.com/environment/2017/sep/22/climate-deniers-protect-status-quo-that-made-them-rich?CMP=share_btn_fb

I Came I Saw I Left
Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:02 am

“…climate policy debate has become dysfunctional”
The Democrats embraced an extremist, radical leftist agenda, then accused the Republicans of being far right and creating a poisoned political climate by not compromising when the right started to undo some of the left’s excesses.
Climate science is dysfunctional, not the debate. Wouldn’t you agree?

MikeP
Reply to  Editor of the Fabius Maximus website
September 22, 2017 10:44 am

Interesting … a suck-up comment without content is thanked for being “feedback”. Climate policy should be about science, not just liking those who call you “rational and objective”. When climate science becomes google/facebook/twitter then it is dysfunctional.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:02 am

Ivankinsman,
Wow. Thanks for the link to that entertaining Guardian “article”: “Climate deniers want to protect the status quo that made them rich.” Even for The Guardian, that’s quite nuts.
No wonder they’re losing money so briskly (£45 million last year, down by a third from the previous year!). It’s the ultimate trust fund baby of news media, without which it probably would quickly go to its deserved reward of bankruptcy.
https://www.theguardian.com/media/2017/jul/25/guardian-media-group-cuts-losses-by-more-than-a-third-to-45m

jclarke341
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:08 am

The gap doesn’t make the debate dysfunctional. It is the use of logical fallacies, Ad Hominem attacks, and suspicious data manipulations that make it dysfunctional. I have seen a great deal of it from both sides, but the skeptical argument doesn’t depend on such things.

Moa
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:39 am

Why don’t you show the satellite and balloon data?
After all, the earliest ‘litmus test’ prediction of the AGW hypothesis is to do with the Lower Tropical Troposphere warming faster than the surface. So you need to demonstrate this. This is the specific discriminator between the IPCC AGW hypothesis and other causes, such as a continuation of the natural global warming which is most likely caused by solar magnetic variability (e.g., the Svensmark-Shaviv hypothesis).
This is the second time I’ve seen your posts calling for being ‘reasonable’ while they omit essential facts and the *specific prediction of the AGW hypothesis*. For me, that’s two strikes against your credibility.
My question is whether you are just consistently mistaken in your approach (incompetent) or deliberately deceptive (unethical)?
In science there is no ‘reasonableness’; there is only falsified and not-yet-falsified. The satellite and balloon datasets currently falsify the most-probable AGW model at the 3-sigma level. You need to address that to have any credibility. If you consistently don’t, then you look really, really shady.
Thanks to Anthony Watts for allowing AGW-proponents such as yourself to post here, so we can have a robust debate – and we get to point out the holes in your presentation/reasoning. As Mark Steyn says, “Diversity of Opinion is the only ‘diversity’ that actually matters”. Good on you for batting up to present your beliefs to a tough crowd.
Moa, PhD (Physics).

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:50 am

Moa,
“Why don’t you show the satellite and balloon data ?”
(1) Because at 1600 words this is already far longer and more complex than most people are willing to read. As seen in the comments here. I prefer to focus, rather than try to provide a too-long essay few will read.
(2) Because the balloon data is sparse and the lower troposphere data is less accurate than the surface data (e.g., as seen by its far larger adjustments over time).
(3) Because models’ forecasts are less accurate for the lower troposphere than for the surface. Ditto for their forecasts of ocean heat content and (when downscaled) regional forecasts. This is a milestone, with lots more work needed. In the real world most progress is incremental. Even baby steps add up.
Here is a graph of model forecasts vs. observations using sat data:
http://www.remss.com/research/climate

Dave Fair
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:36 pm

Aw, fer christsakes! Mears’ apologia cannot muddy the waters enough to hide the fact that 15+ years of hard data trend beclowns IPCC model tripe.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:13 pm

The satellite data have the same relationship to CMIP-5 as the surface data do…
http://images.remss.com/figures/climate/RSS_Model_TS_compare_globev4.png
The observations track the bottom of the 95% distribution band, only spiking to P50 during the monster El Nino of 2016.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:17 pm

Editor of the Fabius Maximus website September 22, 2017 at 11:50 am
The so-called “surface data” can’t possibly be “accurate”, because it is made up. The record isn’t data, ie actual observations, but man-made artifacts.
It is at best not fit for the purpose of public policy programs, and at worst, worse than worthless GIGO lies.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:27 pm

Tell you guys what, since you appear to agree that at least three models “have it right”, then obviously the modelers have correctly deciphered what is going on in the earth’s atmosphere and obviously have developed all the equations necessary to describe the overall atmospheric operation. When do you expect them to put the mathematical underpinnings of their models into a textbook so that us commoners can become properly educated as to what is really going on in earth’s atmosphere?
Or did they just happen to hit upon some statistical manipulations that appear to give accurate results? That is, dumb luck in making a curve fitting prognostication.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:57 pm

The divergence in this century is obvious. Thus, the models can’t forecast to save their lives.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:58 pm

In response to David’s graph.

billw1984
Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:07 pm

The one Zeke H. posted the other day (on Twitter I think) said it was RCP4.5 in size-3 font at the bottom. Seems a bit disingenuous to not clearly point out that the great fit (over a fairly short span of time) is to a scenario that will lead to less than 2 C warming by 2100. And to not call out those predicting doom from RCP8.5 studies (Schmidt and Zeke and other climate scientists not calling out others). Also, if one looks at the 1998 El Nino and how a year or so later it was back at baseline, the great fit is to the top of the El Nino peak. It may not look as good in a year or so if there is a La Nina or it comes back to the baseline as in ’98.
These graphs don’t say what RCP they are. Not sure with the CMIP3 if the letters in parentheses are just the GCM used or if those refer to something similar to RCPs. Do you know?

billw1984
Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:18 pm

Google is my friend. The SRES A1B seems to also be a middle-of-the-road emissions scenario.

David A
Reply to  Editor of the Fabius Maximus website
September 22, 2017 3:33 pm

Editor, all surface records are, in my view, fubar. However, both UHA and the weather balloon data sets are most cogent to CAGW. Per IPCC CAGW doctrine, the tropospheric warming must exceed the surface warming by about 20 percent (more in the MIA tropical hotspot region).
However, they are in fact warming at a considerably slower rate. The model mean is warming at 2.5 times the observed rate. This is a major fail. This means that the bucket-adjusted ocean warming and 50-percent-made-up homogenized land warming do not make a rat’s ass of difference to GHG warming. Whatever the cause of MOST of the surface warming, CO2 cannot, per the IPCC physics, be the cause.
When is WUWT going to do a post on this model fail?
( UHA. University of Huntsville Alabama. If it is UAH, my dyslexic apology. I hear 10 out of 4 suffer with this.)

Walter Sobchak
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:47 pm

“climate-deniers-protect-status-quo-that-made-them-rich”
I wish.

Reply to  ivankinsman
September 22, 2017 12:12 pm

Malcolm,
“I am having difficulty with the increasing OHC measured in 10^22 joules.”
Me, too. So I wrote this as an intro to the subject of ocean heat measurements. It has some links you might find useful.
https://fabiusmaximus.com/2016/01/15/measuring-error-in-ocean-warming-93036/

Reply to  ivankinsman
September 22, 2017 3:39 pm

Larry Kummer(article) “(1) Validating climate models as providers of skillful long-term projections.
The key question has always been about future climate change. How will different aspects of weather change, at what rate? Climate models provide these answers. But acceptable standards of accuracy and reliability differ for scientists’ research and policy decisions that affect billions of people and the course of the global economy. We have limited resources; the list of threats is long (e.g., the oceans are dying). We need hard answers.”
Larry- one run of a climate model produces one temperature series that may predict one possible future of the climate state. How do we select which climate model/run is the correct one?
“(1) This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years. CMIP3 was prepared in 2006-7 for the IPCC’s AR4 report. That’s progress, a milestone — a successful decade-long forecast!”
You’ve misread your own graph. The supposed “prediction” is the CMIP3 model mean, not the output of one run of one climate model (which would be useless, since the climate is, to one degree or another, chaotic and never repeats the exact same pattern).
Averaging multiple runs from multiple models is an exercise in futility. The numbers being used don’t contain any meaningful information to be averaged. Numerous skilled statisticians have called out this error.
To quote one: “the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased.”
Furthermore, implying a trend to the sort of air temperature data that has been accumulated is another major mistake. Despite many climatologists drawing straight lines through temperature data, the raw temperature data is not suited to straight-line regression. The ARIMA statistical tests were invented to handle autoregressive data trends. Most of the data is a daily high, a daily low, and the arithmetic average. A useful indicator for a good meteorologist, but not for statistical analysis. Not to mention that not all organizations that collect the data even adhere to the basic WMO standards. The Australian MO has been caught lately using a 1-second interval for high, low and average with electronic instruments and truncating highs and lows at various stations. The WMO recommends 10 minutes of averaging, which is better but ignores the fact that an “average” of temperatures is not really a measurement. A more useful term would be the predominant or most common temperature in a period, or using a standardized block of aluminum to damp the second-by-second variations.
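The autocorrelation point can be demonstrated with a toy simulation (trendless AR(1) noise with arbitrary parameters, not any real temperature series): ordinary least squares declares a “significant” trend far more often than the nominal 5 percent.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(n, phi=0.8, sigma=0.1):
    """Trendless AR(1) noise: x[t] = phi * x[t-1] + eps[t]."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

# How often does a naive OLS t-test find a "significant" trend in
# trendless autocorrelated noise? For independent data: about 5%.
n, trials, hits = 120, 1000, 0
t_axis = np.arange(n)
for _ in range(trials):
    y = ar1(n)
    slope, intercept = np.polyfit(t_axis, y, 1)
    residuals = y - (slope * t_axis + intercept)
    stderr = np.sqrt(np.sum(residuals**2) / (n - 2)
                     / np.sum((t_axis - t_axis.mean()) ** 2))
    if abs(slope / stderr) > 1.96:
        hits += 1
print(f"false 'significant' trends: {hits / trials:.0%}")  # far above 5%
```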

Tom Dayton
Reply to  philohippous
September 22, 2017 4:45 pm

philohippous: Regarding individual model runs, see my comment at https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2617732 and the one following that.

Science or Fiction
Reply to  ivankinsman
September 22, 2017 3:46 pm

The central estimate for current total feedback from clouds is 0.6 W/m².
(ref.: AR5; WGI; Figure 7.10 | Cloud feedback parameters; Page 588)
The central estimate for global accumulation of energy is also 0.6 W/m²:
«considering a global heat storage of 0.6 W/m²»
(ref.: AR5; WGI; 2.3.1 Global Mean Radiation Budget; Page 181)
0.6 W/m² – 0.6 W/m² = must be zero
Current warming must be equal to the cloud feed-back effect alone then?
By the theory propounded by IPCC the current warming is equal to the cloud feed-back effect alone!
By the theory propounded by IPCC, the sum of all other effects must be zero then?
Wait a minute, something’s wrong here
The key won’t unlock this door
Wait a minute, something’s wrong, lord, have mercy
This key won’t unlock this door,
Something’s goin’ on here
I have a bad bad feeling

Reply to  ivankinsman
September 22, 2017 3:48 pm

He is an island of sanity

hunter
Reply to  Steven Mosher
September 23, 2017 5:30 am

Perhaps you are suffering from the Stockholm syndrome?

Reply to  Steven Mosher
September 23, 2017 11:44 pm

No, you will note that the editor gives positive and rational steps for moving forward and improving the situation.
Part of that is down to his skepticism.
Yes, he is skeptical of all the skeptics who are so certain that the science must be wrong. “I’m not a scientist but I’m certain this stuff is all political!”
For me it’s pretty clear: the best science (stuff we knew in 1896) suggests we cannot spew CO2 with impunity. That devilish stuff can both help plants grow and warm the planet… let’s call it a greenhouse gas.
How much warmer? Hard to say, but the best science suggests 1.5 C to 4.5 C for a doubling from 280 to 560 ppm.
What can we do? That’s also tough.
A) America can do what it does best, innovate. Think X prizes, not subsidies:
1. Energy efficiency innovation.
2. New power sources (not FF, or reduced emissions).
B) Adapt: we don’t prepare for the weather of the past. Change that stupid approach to planning.
C) Mitigate: we are decarbonizing.
1. Accelerate nuclear power plant building.
2. Experiment with a small revenue-neutral carbon tax.
3. Work on the black carbon problem.
There is a chance this won’t be enough.
In short, there is plenty of uncertainty in the science to have a rational debate about key elements. That’s a debate without insisting that skeptics are paid by Big Oil, and a debate that doesn’t insist that the science we’ve known since 1896 is a fraud or that we are all part of some socialist plot.
You can, for example, believe the GMST record is reliable and STILL have doubts about the future warming we will see.
And there is plenty of uncertainty in the policy. You can accept all the science and all the models and still have a rational debate about the policy.
The Editor actually takes time to read and consider and have fresh thoughts about how to move forward.
I suspect, like me, he is sick and tired of seeing his political team make unforced errors in this debate by making stupid hoax claims and stupid conspiracy claims, and basically avoiding the STRONGEST skeptical arguments in favor of the easiest arguments to make.
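For reference, the arithmetic behind a “1.5 C to 4.5 C per doubling” range: equilibrium warming is usually taken to scale with the base-2 logarithm of the CO2 ratio. A sketch with illustrative concentrations:

```python
import math

def equilibrium_warming(ppm, ecs, ppm0=280.0):
    """Warming (deg C) if sensitivity is `ecs` per CO2 doubling."""
    return ecs * math.log2(ppm / ppm0)

for ecs in (1.5, 3.0, 4.5):  # the quoted range plus its midpoint
    print(f"ECS {ecs}: {equilibrium_warming(560, ecs):.1f} C at 560 ppm, "
          f"{equilibrium_warming(410, ecs):.2f} C at 410 ppm")
```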

Dave Fair
Reply to  Steven Mosher
September 25, 2017 12:20 pm

Mr. Mosher, your “… the future warming we will see.” is ridiculous on its face. Go back to Wandering in your Weed Patch.
Your statement reflects one of the many problems I see with CAGW-mongers: religious-like certainty about the unknowable future; 1.5 to 4.5 degrees C is the IPCC’s huge range of uncertainty, no matter the lipstick Gavin Schmidt puts on that pig.
When IPCC and U.S. government reports sound like science instead of poorly written sales jobs I would be happy to change my opinion.

Sixto
Reply to  Steven Mosher
September 25, 2017 12:31 pm

Mosh,
There is no actual science behind the ECS range of 1.5 to 4.5 degrees C per doubling of CO2. That range was derived from two GCMs in the 1970s and never improved. Manabe’s model found 2.0 degrees C and Hansen’s 4.0, which is clearly unphysical, i.e. imaginary. Charney added an arbitrary error range of 0.5 degrees high and low from those two WAGs.
In the lab, the effect is 1.2 degrees C, but the atmosphere is much more complicated than a controlled experiment. The IPCC assumes without evidence that net feedback effects are positive. There is no valid reason to think so on a water planet that is clearly homeostatic.
Net feedbacks are probably negative, so 0.0 to 1.1 degrees C is a more likely range, although an upper error margin to 1.5 or even 2.0 degrees is possible, given our enormous ignorance about the climate system. The science is profoundly unsettled.
Also, since you’re not a scientist, why does the BEST site state that you are one?

hunter
Reply to  ivankinsman
September 23, 2017 5:15 am

Praising a new climate hype marketing tool as a way to win an argument reminds me of the “hickey stick”. Hiding the multi-ensemble failed models behind crazy-wide error bars and then claiming this shows a successful 10-year forecast makes no sense, if one is seeking an open and rational discussion.
Rewarding the most pernicious social mania, whose true believers are on record calling for the end of civil society and the impoverishment and reduction of humanity, seems a bit unjust.
Ignoring the litany of failed predictions and doctored data that Schmidt’s side of the issue have utilized to focus on his new graph is just wrong.

Dave Fair
Reply to  hunter
September 25, 2017 12:26 pm

Hunter, your “Hiding the multi-ensemble failed models behind crazy wide error bars …” is slightly off; there are no “error bars.” Essentially, the IPCC takes the hottest models and compares them to the cool, hacked Russian model. No statistics there at all.

Ted Midd
Reply to  ivankinsman
September 23, 2017 3:43 pm

OHC calculated post Karl et al??????????

September 22, 2017 5:12 am

As long as past surface temperature observations are being adjusted to fit current hindcast models, and current surface temperature observations are the result of faulty instrumentation and poor placement of temperature stations, I think we should stick to satellite observations. And that is a different kettle of fish.

techgm
Reply to  alfin2101
September 22, 2017 5:27 am

Indeed. And “debate” has no place in proper scientific method (including debate regarding public policy that rests on science).

Reply to  alfin2101
September 22, 2017 5:40 am

Exactly. There’s too much skin in the climate game to trust anyone really.

Reply to  alfin2101
September 22, 2017 3:50 pm

Surface temperatures are not adjusted to match past model performance.
The models:
1. Are more varied than any of the surface products.
2. There are key areas the models can never get right.
3. The average of all models removes natural variation, which the surface datasets retain.

hunter
Reply to  Steven Mosher
September 23, 2017 5:17 am

Yes they are, Steve. Your claiming otherwise is disproved and annoying.

Editor
September 22, 2017 5:13 am

“Edward Tufte (The Visual Display of Quantitative Information) weeps in Heaven every time someone posts a spaghetti graph.”
I’m not sure about the Heaven part, references?
According to https://www.edwardtufte.com/tufte/ people can still sign up for his course. Fees include all his books; I went several years ago.

Joe Ebeni
Reply to  Ric Werme
September 22, 2017 7:14 am

Darrell Huff’s book How to Lie With Statistics.

John Mauer
Reply to  Ric Werme
September 22, 2017 7:24 am

Great books, Great course.

Reply to  Ric Werme
September 22, 2017 8:46 am

Ric,
You are right, of course. But it makes the point in a powerful fashion. And it will become true eventually (he’s 75).

Greg
Reply to  Editor of the Fabius Maximus website
September 22, 2017 8:59 am

Yes, but it’s pretty bad form declaring people dead when they are still alive. Shows the author’s degree of fact checking.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:04 am

Greg,
This shows your lack of a sense of humor. It’s also odd how you ignore the extensive details in a 1600 word article to focus on a 12 word joke.

Science or Fiction
Reply to  Editor of the Fabius Maximus website
September 22, 2017 4:53 pm

Whoever is careless with the truth in small matters cannot be trusted with important matters.
– Albert Einstein

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 5:00 pm

Editor of the Fabius Maximus website September 22, 2017 at 11:04 am
Your 1600 word article is a joke. A very bad joke.
You’re lucky that the gracious host of this successful blog lets you peddle your pathetic wares here.
(You need to back off on the emotion here) MOD

Greg
Reply to  Editor of the Fabius Maximus website
September 23, 2017 3:47 am

Greg,
This shows your lack of a sense of humor. It’s also odd how you ignore the extensive details in a 1600 word article to focus on a 12 word joke.

So rather than correct your error and apologise, you prefer to double down on the insult by pretending it was an intentional joke. That makes it worse, not better!
Larry, I did not ignore the 1600-word article; my first comment was a scathing criticism of the supposedly scientific content of the post, like the “pentadal mean” (which is quite clearly a different dataset, not as labelled) and the stupid reliance on bogus OHC graphs going back to 1960 to make your case.
Of course, you chose to totally avoid addressing any of that in favour of doubling down on the insult and focusing on a 12-word joke.

hunter
Reply to  Editor of the Fabius Maximus website
September 23, 2017 5:21 am

You should have simply apologized and moved on.
Your unwillingness to do so is prideful and detracts from your reputation.
And frankly your bad joke is only part of what under reasonable review is a really bad essay.

Reply to  Ric Werme
September 22, 2017 11:03 am

Can someone please explain what is so bad about a spaghetti graph?

Jer0me
Reply to  James Bolivar DiGriz
September 22, 2017 1:09 pm

Throw enough spaghetti at the wall and see what sticks. Point to it and say how right you were.
That’s not science.

D. J. Hawkins
Reply to  James Bolivar DiGriz
September 22, 2017 2:15 pm

The spaghetti graph obscures rather than reveals, hinders rather than helps. The classic graph as shown in the post looks like a hundred joke cans of snakes were opened up at once and photographed via time lapse. More constructive might be a one-by-one comparison of each RCP with the historic data, including error bars. Then you could get a sense of which assumptions are too wide of the mark, given historical CO2 emissions. That might be useful.

Robert Austin
Reply to  James Bolivar DiGriz
September 22, 2017 2:31 pm

The spaghetti graph is useful in that it shows how divergent the climate models are. But which model is “right”? You don’t improve on knowledge by averaging people’s guesses. So the spaghetti graph is a travesty of science.

Tom Dayton
Reply to  James Bolivar DiGriz
September 22, 2017 3:31 pm

Robert Austin: None of the individual model runs has the shape of the multi-model (“model ensemble”) mean line. That’s not a failure, because we expect the global temperature to *not* follow that multi-model mean line. That’s a stronger statement than “we don’t expect the global temperature to exactly follow the multi-model mean line.” It would be disappointing if any of the individual model runs followed that mean line, because it is quite clear that the global temperature varies a lot more than that. That’s because the global temperature in the short term is weather by definition, and only in the long term is climate. So what we expect is for global temperature to vary a lot day to day, month to month, year to year, and even decade to decade, in response to internal variability such as ENSO, and to variations in forcings such as volcanoes, insolation, greenhouse gas emissions and absorptions, and reflective aerosol emissions.
We do expect that the resulting wavy actual global temperature line will follow the *general*pattern* of all those model runs. That includes expecting the observed temperature line to usually stay within the range of all those model runs (the bounds of the ensemble). We expect it will not hug the ensemble mean; we expect it will swing up and down across that mean line, sometimes all the way to the edge of the range (not just to the edge of 95% of the range).
The CMIP5 project had multiple models, most produced by different teams. Each model was run at least once, but some were run multiple times with different parameter values or structural differences. The set of all model *runs* is a “convenience sample” of the population of all possible model runs. Indeed, it is only a “convenience” sample of all possible *models*. “Convenience” sampling in science does not have the “casual” or “lazy” implication that the word “convenience” does in lay language. It means that the sample is not a random selection from the population, and not even a stratified random sample. In this case, it is impossible to randomly sample from those populations of all possible model runs and all possible models. Therefore the usual “confidence limits” related concepts of inferential statistics do not apply.
What does this distribution of model runs represent? It is multiple researchers’ attempts to create models and model parameterizations that span the ranges of those researchers’ best estimates of a whole bunch of things. So it does represent “confidence” and “uncertainty,” but in more of a subjective judgement way than the usual “statistical confidence interval” that most people have experience with.

Tom Dayton
Reply to  James Bolivar DiGriz
September 22, 2017 3:32 pm

The climate model runs’ lines’ shapes do a good job of reproducing the sizes and durations of those large swings above and below the model ensemble mean. What the models do poorly is project the timings of those short term changes–for example, internal variability’s oscillations due to ENSO. The sizes and durations of temperature oscillations due to ENSO are projected well, but the phase alignments of those oscillations with the calendar are poorly projected.
That’s due to the inherent difficulty of modeling those things, but also to the difference between climate models and weather models. Those two types of models essentially are identical, except that weather models are initialised with current conditions in attempts to project those very details of timings that climate models project poorly. Weather models do well up through at least 5 days into the future, but after about 10 days get really poor. Climate models, in contrast, are not initialized with current conditions. They are initialized with conditions far in the past, the Sun is turned on, and they are run until they stabilize. It turns out that it doesn’t matter much what the initialization condition details are, because fundamental physics of energy balance (“boundary conditions”) constrain the weather within boundaries that are “climate.” It’s useful to think of it as the mathematical (not the normal English!) concept of “chaos,” with weather being the poorly predictable variations around stable “attractors.”
Evidence that the models well-project durations and sizes of temperature swings can be seen if you pick out from those model-run spaghetti lines the runs whose timings/phasings of some major internal variability in ocean activity just happen to match (by sheer accident) the actual calendar timings of those. Risbey et al. did that, as described well by Stephan Lewandowsky along with his descriptions of several other approaches demonstrating the skill of the models: http://shapingtomorrowsworld.org/lewandowskyCMIP5.html

Duster
Reply to  James Bolivar DiGriz
September 22, 2017 4:50 pm

A spaghetti graph tells you almost nothing, but that makes it well suited to showing model ensembles which contain almost no real information. The only real information is visible in the fact that even “adjusted” temperature data runs below the model averages. Apparently, all (or nearly all) the models contain one or more systematic errors that bias the results. The simplest conclusion is that the models all contain a CO2 factor that assumes an influence of CO2 on atmospheric warming that is stronger than it should be.

Sixto
Reply to  James Bolivar DiGriz
September 22, 2017 4:58 pm

Mann’s “Nature trick” was to hide the decline behind a mass of spaghetti, so that the downturn in temperature, as so ridiculously based upon tree rings, wouldn’t be noticeable in one data series.
Treemometers aren’t thermometers in any case. The rings respond to water, not T.

Sixto
Reply to  James Bolivar DiGriz
September 22, 2017 4:58 pm

I should say, the rings respond to water and, to a lesser extent, to the length of the growing season, which is related to average T.

Robert Austin
Reply to  James Bolivar DiGriz
September 22, 2017 6:50 pm

Tom,
Sorry, but your reply is complete BS. Multi-model averaging is bad science.

Sixto
Reply to  James Bolivar DiGriz
September 22, 2017 6:57 pm

Sir,
You wrongly and rashly impugn all BS, sir! Bull sh!t has many uses, while what Tom posts is pure, unadulterated, worse than worthless drivel and poppycock.

hunter
Reply to  James Bolivar DiGriz
September 23, 2017 5:28 am

Spaghetti graphs are bad because they show that “climate science” is just trying to scare, not inform.
They cannot defend their failed predictions and so hide them behind huge error bars.
They rewrite the rules of confidence levels, lowering the required levels of precision so that unjustified confidence can be claimed.
Using the spaghetti graph approach reminds people that errors are not cancelled out but increased by using many inaccurate tools.
Schmidt is hoping to join Mann in fabricating a compelling visual to inspire the climate faithful in a sciencey looking way.

Mark Fife
September 22, 2017 5:21 am

Wonderful, another highly detailed and very precise chart showing changes to a metric which has never been accurately measured and for which the methods, locations, times, and numbers of measurements have undergone almost constant change. How useful.

September 22, 2017 5:27 am

Amen

Stephen Richards
September 22, 2017 5:28 am

This is just another of those pathetic attempts to justify more taxpayer monies for a crooked industry. Time to shut it down for ever.

I Came I Saw I Left
September 22, 2017 5:32 am

“There is a strong literature to draw on about how to test theories. Let’s use it.”

Krugman? Surely you mean strong in the sense of stench. Why do your writings not inspire me?

I Came I Saw I Left
Reply to  I Came I Saw I Left
September 22, 2017 5:33 am

screwed up blockquote. First paragraph should be quoted.

Reply to  I Came I Saw I Left
September 22, 2017 8:40 am

I Came I Saw I Left September 22, 2017 at 5:33 am

screwed up blockquote. First paragraph should be quoted.

Fixed.
w.

I Came I Saw I Left
Reply to  I Came I Saw I Left
September 22, 2017 9:07 am

You are so kind. t/u

Reply to  I Came I Saw I Left
September 22, 2017 8:48 am

I Came,
You should read before making your knee-jerk criticism. Citing Krugman in support of skeptical positions is a powerful argument — something like “admission against interest” in the courtroom.
https://www.nolo.com/dictionary/admission-against-interest-term.html

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:50 am

With the exception that Paul Krugman has a track record of contradicting himself as the political needs of the moment shift (as well as making up stats).
At the moment he wrote those things, he felt those ideas would advance the fortunes of the Democratic Party. It is highly unlikely he gave any consideration to the long-term effects of his suggestion, or had any interest in reducing partisan fighting while making it. It is far more likely that it was the product of the same thinking that generated his infamous call for the Fed, banks, and GSEs in the mortgage industry to all work together to inflate a housing bubble to increase aggregate demand.
Given the Orwellian memory-holing that the people who see him as an authority must engage in daily to keep seeing him as a wise oracle, the fact that Krugman once recommended something is highly unlikely to sway them when it is trotted out by wreckers and class enemies.
That’s not to say that what he wrote is automatically stupid. It’s just that nothing he writes is ever treated as an admission by his fans when it takes him (or them) to places they don’t want to go.

Ellen
Reply to  I Came I Saw I Left
September 22, 2017 9:26 am

As soon as Krugman got two recommendations in a list of seven, I became — more suspicious than before.

Reply to  Ellen
September 22, 2017 11:05 am

Ellen,
Read, then comment. This will work much better for you.

Editor
September 22, 2017 5:33 am

There’s an HTML link problem at the top, in the second link to the Fabius Maximus site:
<p><a href=”https://fabiusmaximus.com/2017/09/21/a-small-step-for-climate-science-at-last-a-step-for-humanity/”>From the Fabius Maximus Blog. Reposted here.</a></p>
<p>By Larry Kummer. From <a title=””FM”” href=”https://fabiusmaximus.com/2017/09/21/a-small-step-for-climate-science-at-last-a-step-for-humanity/” target=” rel=”noopener”>the Fabius Maximus website</a>, 21 Sept 2017.</p>
The “target” keyword only has one quote. The “title” keyword doesn’t need four.

marty
September 22, 2017 5:33 am

Oh yes! Finally the catastrophic anthropogenic climate change warming propaganda machine has succeeded in adapting the past data to the reality of today’s model forecast results! What a triumph! 🙂

A C Osborn
September 22, 2017 5:33 am

Anybody can make a forecast and then make the data fit it, as has been done here.
And the pause has not gone away; only the data have been adjusted lower to make it look as if it has.
I cannot make up my mind whether Larry Kummer is just naive and believes what is written, or is an out-and-out warmist helping to spread their misinformation.

Reply to  A C Osborn
September 22, 2017 5:43 am

Having survived another British summer, all I can say looking back over the last 60 years is how today’s summers remind me of the 1950s summers, and apart from the lower incidence of smog and pollution, so do the winters.
I dunno where all the global warming is, but it ain’t hereabouts.

marty
Reply to  A C Osborn
September 22, 2017 5:45 am

Yes they fit the data to the theory, that’s groundbreaking!

Sheri
September 22, 2017 5:42 am

Saved by an El Nino—that’s only valid for global warming believers, of course. Start in 1998 and you’re an excoriated skeptic out to ruin climate science. End in 2016 and you’re a hero to climate science and Al Gore. Insanity runs rampant.

ReallySkeptical
Reply to  Sheri
September 22, 2017 8:35 am

Seems a line drawn between 1998 and 2016 roughly parallels the overall data.

Hugs
Reply to  ReallySkeptical
September 22, 2017 10:25 am

Yes. After its adjustment, that is. Sorry, no compelling evidence other than confirmation bias. No evidence for CAGW at all.

Dave Fair
Reply to  Hugs
September 22, 2017 11:51 am

Hugs, as shown by Bob Tisdale, El Ninos’ impacts on the world, including average global temperatures, vary greatly. I believe the 2014, 2015 and 2016 El Nino had a greater impact on temperatures than the 1998 version; a straight line drawn between their peaks does not tell us anything about the underlying climatology. Nothing fundamental about the climate has changed between their peaks. Additionally, I argue the climate has not changed in 100+ years.

Steve Fitzpatrick
September 22, 2017 5:56 am

The CMIP 5 project ran from 2010 to 2014. Comparing “predictions” from models run in 2010 to 2014 with data from before 2010 is little more than a silly exercise in confirmation bias. Any model, no matter how egregiously wrong, can be ‘tuned’ to data you already have. Predictions only count when you don’t already have the results in hand. I think it better to see how the model projections do between 2014 and 2024 before declaring that they make accurate predictions. It would also help if the past projections were based on a more realistic forcing profile (RCP 6) rather than a lowball forcing profile (RCP 4.5). Do that and the models look somewhat less…. ahem…. accurate.
Larry Kummer would do well to keep his eye on the pea, not on the sleight of hand.

Reply to  Steve Fitzpatrick
September 22, 2017 8:51 am

Steve,
“The CMIP 5 project ran from 2010 to 2014. ”
Read more carefully. The graph is CMIP3, prepared in 2006-07 for AR4. That’s clearly explained in the text in bullet point (1). How could you have missed this?

D. J. Hawkins
Reply to  Editor of the Fabius Maximus website
September 22, 2017 2:20 pm

Which RCP does that represent, and do the actual global CO2 emissions match that pathway?

Steve Fitzpatrick
Reply to  Editor of the Fabius Maximus website
September 22, 2017 3:58 pm

Larry,
You are right about the CMIP 3 versus CMIP 5. But the A1B scenario from AR4 is way wrong. A1B called for US$75 trillion in world gross product in 2020, and according to the World Bank it was already US$75 trillion in 2016. Global emissions have been well over the A1B scenario, but temperatures are still on the low side of the projections. Take into consideration the warming influence of the recent El Nino, and the temperature trend remains below the CMIP 3 projections.
The most credible estimates of sensitivity to forcing are empirical… they are reasonably consistent with each other (unlike the models, which disagree wildly with each other!) and they say the sensitivity is about 60% of the average of the models (or a bit less).
The good news (so to speak) is that there will be no reduction in CO2 emissions in the foreseeable future (oh, say, for 25 years), so there will be plenty of time for the true sensitivity to forcing to become more clear. I do hope you will come back in 3 or 4 years when the temperature trend has again dropped below the model mean and explain why that happened… but don’t worry, there will be a horde of modelers providing excuses for the continuing divergence (just as they are today). You can rely on those excuses being available. Will there be warming over the next couple of decades? Sure. Will it be huge and catastrophic? Heck no… more like 0.2 to 0.3C. Will economically sensible public policies ever actually be considered by the global warming concerned? I doubt it… it’s going to be solar, wind, and diminished global wealth or nothing, no matter how nutty those policies are, because the priorities of the global warming concerned are the priorities of Paul Ehrlich and the Club of Rome… always wrong-headed but 100% convinced they are right.

jclarke341
September 22, 2017 6:00 am

The graphs showing the performance of the climate models are very misleading. First, there are the hindcasts, which make up more than half of the record. The cooling in 1992 is due to the eruption of Mt. Pinatubo. Of course that will show up in the observed record, but why does it show up in the hindcasts of climate models? The only way that happens is if the models are forced to the observations, meaning they aren’t hindcasts of the models at all. The hindcasts are a deception of model accuracy.
Secondly, the last 10 years have been 8 years of flat, with an El Nino at the end. That is not a signal of man-made climate change. That is a signal of a steady climate with an El Nino at the end. I remember when the pause reached 10 years and skeptics were citing it as a falsification of the models. This was treated as completely outrageous, because 10 years was simply not long enough to draw any conclusions. Now, after 10 years of flat and 2 years of El Nino, they are curve-fitting it to the model forecast and proclaiming it proof that the models have skill. It would be funny if it wasn’t so sad and deceptive.
Thirdly, temperatures are still cooling from the El Nino. They may cool down to pause levels or even below. Proclaiming that the El Nino is evidence of resumed warming or a step-up in global temperatures is completely illegitimate at this time. We will need another year or two at least before any claims can be made either way.
Fourthly, can someone explain how exciting additional carbon dioxide molecules in the atmosphere warms the top 700 meters of the oceans instantaneously, often without even warming the atmosphere? When we look at OHC and compare it to atmospheric temperatures, there is a much better argument for natural changes in the OHC driving atmospheric temperatures than for increasing atmospheric CO2 driving ocean temperatures. Unless they can show the magical science that takes heat instantly from an atmospheric molecule at elevation into the top 700 meters of the ocean, the OHC correlation contradicts the AGW theory.
I believe that the F. M. website means well, and appears to be arguing for a reconciliation of the two sides of the climate debate, but science is not an average of two opposing interpretations of the observations. Overall, I find this article to be disingenuous and deceptive.

Reply to  jclarke341
September 22, 2017 6:17 am

I agree. There is an element of “Texas Marksman” in the surface temperature purported “records”. This site has gone into the level of infill used by HADCRU recently, and that collection of supposed surface temperatures is typical of the evidence used to support the narrative.
The test should be to use UAH, which has both a different means of measurement and a management that is not committed to supporting the consensus.

David Wells
Reply to  jclarke341
September 22, 2017 6:43 am

Just more manipulative artifice to keep their modelling chums in highly paid, grant-funded employment; just more parameterisation, or tweaking a few numbers as Joanna Haigh called it. If the emission of 100,000 billion tons of CO2 between 2000 and 2010 didn’t freak out the atmosphere, then it’s very hard to visualise exactly how much CO2 we need to emit to cause a rapid and immediate linear response from temperature. It was entirely likely that the modelling profession would try to find some way of persuading people to jump back on board, but it still doesn’t go anywhere near resolving the basic issue of climate angst about CO2. If CO2 is the primary cause of climate change, then why did the planet warm at the same rate between 1910 and 1940 as it did between 1970 and 2000? Maybe old hat, but still central to the debate. To me this is just one more egregious attempt to make plausible the implausible: that CO2 forces catastrophic climate change without causing catastrophic global warming first. We still don’t know how sensitive the atmosphere is to CO2; if 100,000 billion tons did not shake things up, then more seemingly coherent modelling is hardly likely to make a difference, is it? Unless of course modelling is now so embedded in our psychology that, after 40 years of bludgeoning, we believe there is some reality in there somewhere, which I dispute. The models are predicated upon inexact science and grossly inadequate data, so the result is always a product of the individual(s) who tweak it until it says what they want us to believe is true.

Tom Dayton
Reply to  jclarke341
September 22, 2017 7:17 am

jclarke341: The Pinatubo eruption’s cooling temperature forcing from reflective aerosols was used in the models’ hindcasts. Hindcasts use actual forcings as inputs to the models. (Forecasts must use estimates of future forcings.) The models’ accurate reflection of the cooling effect of Pinatubo’s eruption is evidence of the models’ skill in calculating forcings’ effects on temperature.
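For readers unfamiliar with that distinction, here is a toy zero-dimensional energy-balance model (an editorial sketch, vastly simpler than a GCM; every parameter value is an assumption). Drive it with a forcing series containing a Pinatubo-like aerosol pulse, and the output dips at the eruption even though no temperature observations are ever used:

```python
import numpy as np

# Toy zero-dimensional energy balance: C * dT/dt = F(t) - lambda * T.
# Parameter values are illustrative assumptions, not tuned to any dataset.
C = 8.0       # effective heat capacity, W yr m^-2 K^-1
lam = 1.2     # climate feedback parameter, W m^-2 K^-1
dt = 1.0      # one-year time step

years = np.arange(1985, 2001)
F = 0.03 * (years - 1985)        # slow GHG-like forcing ramp, W m^-2
F[years == 1992] -= 3.0          # Pinatubo-like aerosol pulse
F[years == 1993] -= 1.5          # decaying aerosols the following year

T = np.zeros(years.size)
for i in range(1, years.size):
    T[i] = T[i - 1] + dt * (F[i - 1] - lam * T[i - 1]) / C

for y, temp in zip(years, T):
    # The dip after 1992 comes from the forcing input, not from
    # fitting the model to the observed temperature record.
    print(y, f"{temp:+.3f} K")
```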

Kaiser Derden
Reply to  Tom Dayton
September 22, 2017 8:42 am

A model can never have predefined future OR past unexpected forcings … it’s not a model then … the eruption was an unexpected forcing … period … if it’s in the model, then the model has a parameter like “in 1992 assume an eruption,” i.e. not a model at all …

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 8:54 am

Kaiser Derden: GCMs are not models of forcings. They are models of the Earth’s systems in response to forcings. For example, there do exist completely other types of models that are used to attempt to predict or project the Sun’s output, the amount of CO2 humans will inject into the atmosphere, and so on. But those are not GCMs. GCMs take forcings as inputs. Different forcings are input as different “scenarios” to project how the climate will respond to each of those scenarios.

Dave Fair
Reply to  Tom Dayton
September 22, 2017 9:29 am

Uh, Tom, you may have caught your foreskin in the assertion “The models’ accurate reflection of the cooling effect of Pinatubo’s eruption is evidence of the models’ skill …” Please note that the models “over-drove” global temperatures downward. Such unrealistic cooling (reflecting modelers’ misapprehension of forcing effects) would necessarily affect future modeling of global temperatures. Additionally, using 2014, 2015 and 2016 Super El Nino temperatures as end-of-trend data points is disingenuous, to put it mildly.

jclarke341
Reply to  Tom Dayton
September 22, 2017 11:38 am

“The models’ accurate reflection of the cooling effect of Pinatubo’s eruption is evidence of the models’ skill in calculating forcings’ effects on temperature.”
Is this calculated from first principles, or more from our experience with volcanoes and atmospheric temperatures? We have many examples of those. We know from experience and observation what volcanoes do. We can use that experience, along with first principles, to calculate a reasonable response to a given volcanic eruption.
Unfortunately, that knowledge only lends to our skill in predicting the volcanic impact on climate. It in no way verifies the models’ skill in predicting the impact of human emissions, which are, by far, the driving factor in the current climate models. Our knowledge of volcanoes is not evidence that models have skill in predicting the effect of human CO2 emissions.
My point is that the article is about the models showing skill at forecasting, and yet most of the graphic is a hindcast that is force-fit to the observations. This gives the impression that the models, from the beginning, had significant skill, which is not true. They have always run hot, except for a few El Nino years, and they are still doing so. Roy Spencer’s graphic of model vs. actual temperatures is still a much better representation of the actual ‘skill’ of the models.
http://www.drroyspencer.com/wp-content/uploads/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png

Dave Fair
Reply to  jclarke341
September 22, 2017 12:25 pm

JC, please compare the models’ response to Pinatubo to actuals. The models “overdrove” global temperature reductions significantly.

Michael Jankowski
Reply to  Tom Dayton
September 22, 2017 1:53 pm

Tom Dayton, the hindcasts use ESTIMATED forcings as inputs to the model in many cases, both in terms of W/m^2 and atmospheric concentration. Pinatubo was a large event and the focus of intense modeling for several years at NOAA and elsewhere. The models’ so-called “accurate reflection of the cooling effect of Pinatubo’s eruption” is better described as the models being forced to match Pinatubo as the calibration event of all volcanic eruption events.
Furthermore… https://www.geosci-model-dev.net/9/2701/2016/gmd-9-2701-2016.pdf
“…the fifth phase of the Coupled Model Intercomparison Project (CMIP5) has demonstrated that climate models’ capability to accurately and robustly simulate observed and reconstructed volcanically forced climate behavior remains poor…”
You say “accurate reflection,” 2016 publication says “poor.”

Michael Jankowski
Reply to  Tom Dayton
September 22, 2017 1:56 pm

(I realize this article is about CMIP3 not CMIP5…however, the CMIP5 models are supposed to be an improvement, of course!)

Reply to  jclarke341
September 22, 2017 8:59 am

jclarke,
“First, there are the hindcasts, that make up more than half of the record.”
The graph shows the multi-decade trend as necessary context. Just showing the ten year forecast period would be nuts. The essay is clearly about the forecast period.
“That is not a signal of man-made climate change.”
The atmosphere temperature record since ~1950 looks like stairs, not a straight line. Rise, pause, rise. For an explanation read  “Reconciling the signal and noise of atmospheric warming on decadal timescales“, Roger N. Jones and James H. Ricketts, Earth System Dynamics, 8 (1), 2017.

A C Osborn
Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:47 am

The paper that you quote shows that the current data has been adjusted to get closer to the CO2 trend, because it is no longer a stair step.
But the original raw data is.
You just shot yourself in the foot.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:08 am

A C Osborn,
“because it is no longer a stair step.”
See the NOAA graph. It clearly shows three “stair steps”. Only time will tell if the next few years form another “step” (plateau).

jclarke341
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:45 am

It is not “necessary context” if it is not the same thing before and after the vertical line, and it is not. To the right, we have a forecast. To the left we have a computer model output forced to mimic what has been observed. That is not a hindcast, and it is disingenuous.
Only a warmist could refer to natural cycles as ‘noise’. It’s not noise… it’s the Earth’s natural climate. That we don’t understand the cycles is hardly reason to dismiss them as noise.

A C Osborn
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:08 pm

That is nothing like a stair step; that is a sawtooth on a trend, and it looks nothing like the original data.
You do know what stairs look like, I assume?
Step up, horizontal for a period, step up, horizontal, etc.
Aahh forget it, you have your mind made up and believe their adjusted data.

Philo
Reply to  jclarke341
September 22, 2017 3:00 pm

Most meteorologists find that the air temperature over the oceans is highly correlated with the ocean surface temperature, and presumably controlled by it.

arthur4563
September 22, 2017 6:02 am

Touting a ten-year prediction as proof of anything is absurd. And the prediction was not a particularly good one at that, and is getting worse as the planet cools after the last El Nino. I am convinced that most of those who are arguing about the climate future are doing so because their income depends upon convincing folks that something needs to be done. ALL of the future of CO2 emissions is about future energy technologies, and that requires energy experts, not climate experts, who generally make pretty stupid assumptions, “business as usual” being the dumbest of them all. Anyone who has missed the recent massive changes in automotive technology, in which electric cars have become economically competitive in the mid-range market, motivating major automakers to vastly expand their plans for near-term and long-term electrification of their vehicles, must have been living on Mars. The second-largest man-made emitter of CO2 is the power generation sector, and here again the future (and a near one at that) is clearly molten salt nuclear power, which can be sited anywhere, including within cities, with no need for cooling water, fueled either by thorium or uranium. There is no need to subsidize this technology a la wind and solar, and this technology guarantees the cheapest power possible, and the safest as well, with no need for the massive overcapacity required by unreliable wind and solar, which, contrary to the beliefs of their enthusiasts, are NOT transformed into reliable energy sources simply by installing storage facilities. In fact, wind and solar require fossil fuel backup, whereas molten salt reactors can load-follow, thereby requiring very little (probably no) peak demand backup by fossil-fueled generators. It is as clear as the nose on your face that economics alone will drastically reduce man-made CO2 (and other) emissions; there is, in fact, no need whatsoever to convince anyone about any (in actuality, impossible) future climate changes brought on by excessive atmospheric CO2.
Now THAT is the real “business as usual” scenario: business always responds to economics, and economics is what is motivating the electrification of cars and the power generation provided by molten salt nuclear technology. As per usual, Paul Krugman, the corrupt NY Times lapdog whom they trot out whenever they need to promote their opinion by putting it into the mouth of an “independent expert,” has totally misunderstood the issue at hand, which proves no obstacle to his offering up his solution to the non-existent problem. Krugman is living, breathing proof that a Nobel Prize means nothing; just take a gander at who these things have been awarded to: Al Gore, perhaps the dumbest cluck who ever claimed to know anything about anything, or Jimmy Carter, the peanut grower and worst President since Abe Lincoln.

Dave Fair
Reply to  arthur4563
September 22, 2017 12:46 pm

The same error made by Malthus, Club of Rome, etc.: Nothing changes in the future; people in the future will not have better technology.

Peter Brunson
September 22, 2017 6:06 am

Again comparing cold times to warm ones.
Warming so measured means very little.

September 22, 2017 6:09 am

Wonderful. The IPCC produces a graph showing the observations lying right at the bottom of the model runs. Then Gavin miraculously comes up with a graph showing the observations right in the middle of the model runs, over the same period of time. What a genius that man is. Give him a Nobel prize.

Reply to  Paul Matthews
September 22, 2017 7:20 am

When does he retire?

MarkW
September 22, 2017 6:10 am

The problem is, they don’t measure heat content; they measure temperature and then convert that into heat content.
The fact is, as big as those numbers look when presented in joules, they only work out to a warming of a few hundredths of a degree, and the notion that we can measure the temperature of the entire oceans, surface to depth, to a few hundredths of a degree is ludicrous.

MarkW
September 22, 2017 6:11 am

The projection is only “accurate” because of the 2015/2016 El Nino. As the world continues to cool down from that event, the accuracy will continue to disappear.

Matt G
September 22, 2017 6:12 am

It’s not global warming/climate change it is El Nino warming/El Nino change.
Why does GISS show about 0.2c warmer for the recent strong El Nino compared with HADCRUT?
http://www.woodfortrees.org/plot/gistemp/from:1997/offset:-0.1/plot/hadcrut4gl/from:1997
Why does RSS show about 0.2c warmer than UAH for the recent strong El Niño?
http://www.woodfortrees.org/plot/rss/from:1997/plot/uah6/from:1997
It was well known that satellite data showed warming and cooling with ENSO stronger than the surface data, so what has changed?
The strong El Nino in 1997/98 was very similar to 2015/2016, so why has GISS warmed 0.6c this time compared to 0.4c last time? There is a deliberate artificial add-on temperature of 0.2c, which is confirmed by HADCRUT being 0.2c cooler.
There is no reason why the behaviour of satellite versus surface data with ENSO would change with just one strong El Nino. Satellite data have always shown more warming with El Niños than surface data, but this time GISS has matched them. HADCRUT is also closing in on satellite data regarding warming with El Niños.
This adjustment was not applied to the 1997/98 El Nino, which would have made it 0.2c warmer, and no global records would have been broken until the 2016/17 El Niño. Instead the strong 1997/98 El Nino has been cooled by their continuous adjustments. There are seriously dubious tricks going on here and they should be ashamed of their dishonesty.

Don Mingay
September 22, 2017 6:13 am

Gavin Schmidt has excelled once more and even exceeded himself. The use of CMIP-3 at outset shows contempt for CMIP-5 progress which is understood. The data points are interesting in that only those that are consistent within the detail are shown while the majority, which do not agree as well or often at all, are omitted. The degree of fit of data points shown is statistically impossibly good or else confirms once again selectivity. The lack of HadCRUT recently released data in the final ten years is recognised as that shows no warming being in total conflict with what is shown. (Confirm the almost total lack of HadCRUT (light blue) data points (only 2) shown since 2007 in that period.) I am unsure as to why C & W data are shown but recognise why the totally divergent International data from UAH and RSS are not shown. Could the level of attempted dis- and misinformation be growing even now as reality dawns on all but the committed alarmists (those who are scientists) of yesteryear?

Tom Dayton
Reply to  Don Mingay
September 22, 2017 6:51 am

CMIP3 is useful to show because, having been run years before CMIP5, it provides a longer forecast period.

Dave Fair
Reply to  Tom Dayton
September 22, 2017 9:39 am

Tom, CMIP5 was supposed to correct the deficiencies in CMIP3, especially in Polar ice. The fact that climate apologists were forced to fall back on CMIP3 is an indication that climate “science” is not progressing as the alarmists had hoped.

Curious George
Reply to  Don Mingay
September 22, 2017 7:45 am

Could someone please pinpoint where to find that graph in AR4? (Of course, without “measured” “anomaly” after 2006). There is a similar graph FAQ8.1 Fig.1, but it does not show any dip around 2000. Maybe at the time of AR4 data was not adjusted enough.

Reply to  Don Mingay
September 22, 2017 9:03 am

Don,
“The use of CMIP-3 at outset shows contempt for CMIP-5 progress which is understood.”
That’s very wrong, for two reasons.
First, the only way to show success in multi-decade forecasts is to match forecast vs. observations of past models. Hindcasts of current models are weak evidence due to the possible influence of tuning to match past observations.
Second, as the text says — Schmidt also provided the same graph using CMIP5.

billw1984
Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:12 pm

CMIP5 looked worse. I left a post with a question or two above.

September 22, 2017 6:16 am

Based on performance so far, the cAGW conjecture has about a 0.0002% probability of being correct once in 500,000 years. Let’s hope my ancestors won’t freeze to death at childbirth first.

Another Doug
Reply to  jaakkokateenkorva
September 22, 2017 9:19 am

Good news! Your ancestors did not freeze to death!
Not sure about your decedents yet.

Hugs
Reply to  Another Doug
September 22, 2017 10:30 am

They keep rewriting history…

Luc Ozade
Reply to  Another Doug
September 22, 2017 11:31 am

Not sure about your decedents yet.
Descendants?

Reply to  Another Doug
September 22, 2017 11:41 am

10% didn’t survive the year without a summer in the 19th century, and no one seems to know how cold it will eventually be.

Editor
September 22, 2017 6:17 am

Schmidt’s CMIP-5 output is not similar to the CMIP-3 output. It exhibits the same thing as Ed Hawkins’ version. The observations only approach P50 (model mean) during the recent strong El Nino.

David Wells
September 22, 2017 6:18 am

“Boost funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.” If $40 billion wasn’t enough, then how much is enough? Seems like every university on the planet is living on the back of angst about “an extreme climate,” and if the only supposed means of moderation is to abate CO2, then it’s a lost cause to begin with. So they have fiddled with the models to imply a greater degree of accuracy, but the same reality applies: suspend democracy and cut CO2, and when the planet continues to warm, what exactly do we do then?

marty
Reply to  David Wells
September 22, 2017 6:23 am

Then we don’t have any money left to adapt to warming! (and nothing to eat, because of low CO2)

Editor
September 22, 2017 6:25 am

All he did was to expand the lower uncertainty range.
Widening the error range isn’t “progress.”

Tom Dayton
Reply to  David Middleton
September 22, 2017 6:40 am

No, David Middleton. CMIP3 and CMIP5 were projects using different models. The gray shaded region in the graph spans 95% of those models’ runs.

Reply to  Tom Dayton
September 22, 2017 6:53 am

You are correct. It is a probability distribution, not an error range.
I should have said, all he did was to lower the bottom of the 95% probability band.
CMIP-3 simply lowers the bottom of the 95% probability band relative to CMIP-5. The tops of the bands are essentially the same.

David Wells
Reply to  Tom Dayton
September 22, 2017 6:59 am

Does that really matter? Anxiety over different model runs just distracts from the real issue: does CO2 cause CAGW? You could run these models a million times and it would make no difference whatsoever, because at the basic level a computer can never provide the resolution needed to cope with the complexity of our climate; hence parameterisation. Grid size of 400 km needs to shrink to 1 mm or less to capture thunderstorms, lightning and aerosols, so bellyaching about modelling is just one more pointless distraction over minutiae. As the IPCC said, we have a coupled non-linear chaotic climate, and therefore predictions of future climate states are not possible. Are we able to influence a climate that can create monster hurricanes emitting the energy equivalent of 1 million Hiroshima nuclear bombs every single day of their existence? No chance. Is there anyone on the planet able to identify how much CO2 we need to mitigate to exert influence over another Hurricane Irma? Not a chance. What we should do is sack every modeller on the planet and spend the money on securing infrastructure to cope with whatever we believe might be thrown at us, because wasting energy on whether one model is more honest than another is idiotic. Humanity influencing climate is moonshine; we might as well advocate rain dancing.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 7:03 am

No, David Middleton, it is not a probability distribution. It is merely the range of 95% of the model runs. Those model runs are a convenience sample of all possible model runs, and the model runs are from a convenience sample of all possible models. Convenience sampling rather than systematic (e.g., random) sampling means the samples cannot be relied on to be representative of the populations, sufficiently to formally infer probabilities in the ways you are accustomed to seeing. (By the way, climatologists are very well aware of that, and usually they carefully label their graphs accurately as Gavin has done. But rarely do they discuss it with the public because it is a difficult topic to understand without a background in statistics.)

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 7:05 am

David Middleton: You must be tired from that Gish Galloping. You made a claim that was incorrect. I provided a correction. You changed the topic.

Reply to  Tom Dayton
September 22, 2017 7:17 am

Thank you David Wells September 22, 2017 at 6:59 am. My thoughts exactly.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 7:18 am

David Middleton, I apologize. A different David Gish Galloped.

Reply to  Tom Dayton
September 22, 2017 7:18 am

It is a probability distribution. 95% of the model runs fell within the gray band. The model mean (P50) is the middle of the gray band.
I was incorrect when I described it as an error range.
The only reason that the CMIP-3 P50 tracks the observations is that the bottom of the 95% band was expanded downward relative to CMIP-5.
This is an example of improving accuracy by reducing precision.

Tom In Indy
Reply to  Tom Dayton
September 22, 2017 7:40 am

Agree. I don’t see anything related to standard errors; it’s simply the mean +/- 95% of the total range of model output. A question: does variation in the input that feeds each model run cause most of the variation around the mean, or is more of the variation driven by differences in the models’ sensitivity to a given set of inputs?

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 8:08 am

Tom in Indy: The CMIP projects’ model runs are supposed to use the same forcings as inputs. That is, all the CMIP3 runs use the same forcings as each other, all the CMIP5 runs use the same forcings as each other,… The CMIP site has all those forcings listed for each of the CMIP projects. For example, for CMIP6 see the section “CMIP6 Forcing Datasets” here: https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip6. But I imagine that there might be miscellaneous minor forcings that are specified differently in different models and different runs of a given model.

Tom Dayton
Reply to  David Middleton
September 22, 2017 7:11 am

David Middleton: If instead or in addition you were referring to the dashed lines in the CMIP5 graph, those are the result of estimating the effects on the model runs if the forcings used by the models better matched the actual forcings. The forcing estimates used in the CMIP5 model runs were significantly less accurate than were known a few years later, so Schmidt, Shindell, and Tsigaridis in 2014 used the newer, more accurate estimates: https://t.co/0d0mcgPq0G

Reply to  Tom Dayton
September 22, 2017 7:23 am

Which produced a broader 95% probability band.
How does a more “accurate” estimation of forcing yield a wider distribution of model outputs?

David Wells
Reply to  Tom Dayton
September 22, 2017 7:43 am

Watch the birdie? Review the following and then maybe you guys can communicate on a more realistic level:
https://wattsupwiththat.com/2017/01/02/john-christy-climate-video/
https://wattsupwiththat.com/2015/02/20/believing-in-six-impossible-things-before-breakfast-and-climate-models/
https://wattsupwiththat.com/2016/12/29/scott-adams-dilbert-author-the-climate-science-challenge/
https://wattsupwiththat.com/2017/01/10/the-william-happer-interview/
http://www.thebestschools.org/special/karoly-happer-dialogue-global-warming/william-happer-interview/
As said, but obviously with no impact whatsoever: what exactly is the point of dancing on a pinhead? Models are the product of belief, using inadequate data and computers incapable of resolving the issue because of their inherent lack of resolution. If you run the same model on a different computer you will get a different result. If the grid size were reduced to 1 mm, on current technology one computer run would take 4.5 billion years.
Gavin Schmidt is behaving like the Royal Society: he believes his purpose in life is to maintain the status quo, and if that is not possible, then at least to find some way of manipulating reality to keep himself and his chums in well-paid employment. Models are irrelevant. As said, if the atmosphere is not sensitive to 100,000 billion tons of CO2, then what is it sensitive to?
If CO2 is not the cause of climate change then something else is, and in that case humanity has no chance of either moderating its effects or manipulating the outcomes. CO2 is only popular as an entity because believing it is the arbiter of our climate future gives the misleading impression that we can exercise control over the climate, which is what rain dancing is all about.
Set aside Co2 and what you are left with is reality which to superstitious humanity is a huge challenge.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 7:54 am

David Middleton: Look only at the CMIP5 graph. The gray band outlined with dotted lines is the model runs range with less accurate forcings used as model inputs. The dashed lines are the estimates of the upper and lower bounds of the range (i.e., dashed lines replace the original dotted lines), and the mean line (the middle dashed line replaces the solid mean line). Both upper and lower bounds, and the mean, are shifted down. You are factually incorrect in your claim that the better forcing estimates result in a wider range.

JonA
Reply to  Tom Dayton
September 22, 2017 8:04 am

Can you please provide justification of why an ‘ensemble mean’ has any value whatsoever?
It’s a prime example of reification, IMHO.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 8:40 am

JonA, a short explanation of the value of ensemble model means is in AR5, Chapter 9 “Evaluation of Climate Models,” section 9.2.2, page 754: https://www.ipcc.ch/report/ar5/wg1/. More details can be found, for example, in an entire special issue of Philosophical Transactions of the Royal Society A, 2007: http://rsta.royalsocietypublishing.org/content/365/1857

Duster
Reply to  Tom Dayton
September 22, 2017 4:58 pm

David Wells September 22, 2017 at 7:43 am
Glad to see someone else making the argument that believing CO2 changes climate, and believing humans seriously affect CO2 levels, allows people to believe climate can be controlled. Nobody seems to really want to accept the Second Law.

jclarke341
Reply to  Tom Dayton
September 22, 2017 7:30 pm

I am with you, Mr. Wells. The warmists have put forth these things called ‘climate models’ and we have taken the bait. A significant percentage of the population thinks the models have skill. Skeptics don’t believe the models have skill, but argue as if they could have skill if they were done better. But neither viewpoint is correct. It isn’t even possible for the models to have skill at predicting future climate.
But they do still serve an important purpose, akin to the fire and smoke of the Wizard of Oz. They distract us from the men (and women) behind the curtain.

Reply to  David Middleton
September 22, 2017 9:31 am

David,
“All he did was to expand the lower uncertainty range.”
(1) They are different models. Of course they have different uncertainty ranges. I’ll ask why they differ so much.
(2) Statistical uncertainty ranges are facts, not opinions. To say that he changed the mean is an accusation of malfeasance. What are you saying?

Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:57 am

I am saying that the older CMIP-3 model yielded a much broader probability distribution of model outputs than the newer CMIP-5 model, and all of the broadening was on the low side. This is what moved the model mean toward the observations. I don’t know if this is due just to different forcing assumptions or a different ensemble of RCP scenarios.
CMIP-5 yielded a range of 0.4 to 1.0 C in 2016, with a P50 of about 0.7 C. CMIP-3 yielded a range of 0.2 to 1.0 C in 2016, with a P50 of about 0.6 C.
They essentially went from 0.7 +/- 0.3 to 0.6 +/- 0.4.
Progress shouldn’t consist of expanding the uncertainty… unless they are admitting that the uncertainty of the models has increased.
I have no idea about their motives… just a negative preconception regarding anything done by GISS.
Is CMIP-3 now the standard? Will it replace CMIP-5?

Dave Fair
Reply to  Editor of the Fabius Maximus website
September 22, 2017 10:02 am

Editor of FMW, I will assert that statistical uncertainty can only apply to actual data. Since climate model output is not data, their “statistical uncertainty” is an imaginary construct of modelturbation.

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 10:50 am

David Middleton: You wrote that CMIP5 produced a narrower range than CMIP3. CMIP5 was produced after CMIP3. So you had it all backwards when you wrote “Progress shouldn’t consist of expanding the uncertainty… unless they are admitting that the uncertainty of the models has increased.”

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:15 am

David,
It is an interesting question. I don’t get the same numbers. But trying to pull numbers with such accuracy from a graph is imo a waste of time. I suggest you ask Schmidt or perhaps someone at Carbon Brief or elsewhere for the actual numbers.

Reply to  David Middleton
September 22, 2017 1:14 pm

David,
I asked Schmidt why the uncertainty range is smaller for CMIP5 than CMIP3. His answer –

Not sure. Candidates would be: more coherent forcing across the ensemble, more realistic ENSO variability, greater # of simulations— Gavin Schmidt (@ClimateOfGavin) September 22, 2017

Reply to  Editor of the Fabius Maximus website
September 22, 2017 2:12 pm

He doesn’t know? Didn’t he run the models?

Reply to  Editor of the Fabius Maximus website
September 23, 2017 11:32 am

Larry,
I asked Dr. Schmidt if his models were ensembles of different RCP scenarios, or just RCP 4.5. We’ll see if I get an answer.
The model in Zeke’s article is just RCP 4.5, a strong mitigation scenario. Actual emissions most closely match RCP 6. Ed Hawkins’ models use a range of RCPs in the ensemble.
If they are using a lower RCP than reality, they haven’t fixed the problem with the models running hot. They are just hiding the problem… largely because the recent Nature paper is embarrassing them.

Dave
September 22, 2017 6:32 am

The #1 conclusion is boost funding. That alone makes me skeptical of the entire piece.

Dan
Reply to  Dave
September 22, 2017 7:10 am

Actually, all 6 conclusions require more government funding. Make climate propaganda bigger and bigger. Add more and more layers of organization and oversight. Try harder and harder to solve a non-existent problem and predict an unpredictable system.
Show me YOUR money!

I Came I Saw I Left
Reply to  Dave
September 22, 2017 8:12 am

That’s what’s known as a tell.

Ian W
September 22, 2017 6:39 am

It is true the Ocean Heat Content is the best metric for ‘global warming’. Now provide a mechanism showing how a small increase in carbon dioxide in the atmosphere can provide the amount of energy to raise that ocean heat content, given that the top 10 meters or less of the oceans hold more energy than the entire atmosphere which itself has only increased in ‘temperature’ by a very small amount.
The quantities do not match.
There must be a major energy input to raise the ocean heat content and small amounts of claimed ‘back radiation’ are insufficient. It is far more likely to be energy from the Sun.

Tom Dayton
Reply to  Ian W
September 22, 2017 6:42 am

Ian W: Here is one of many places to find an explanation of how atmospheric CO2 causes heating of the oceans: https://scienceofdoom.com/2010/10/06/does-back-radiation-heat-the-ocean-part-one/

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 7:26 am

(Sigh). No, really it doesn’t. Here is why: it does not separate out the difference in radiation. What part is specifically attributable to the increased CO2? What part is attributable to natural causes if, for instance, the amount of CO2 were low and steady in the atmosphere?

Ian W
Reply to  Tom Dayton
September 22, 2017 7:44 am

Tom,
Given the claimed increase in ocean heat content calculate the number of joules of energy required. Now calculate the amount of ‘downwelling’ infrared in watts per square meter due to CO2 from anthropogenic sources and see how long ‘the oceans’ need to warm by the amount claimed. If at all.
Even the reference you gave admits that the top micron of the water surface absorbs most of the infrared and would be raised in energy; per Raoult’s law, this will lead to an increase in evaporation, removing that energy from the surface of the ocean and cooling the surface.
Your idea that 3 watts per square meter of infrared from CO2 will warm water at the rate claimed in the CMIP graphs is extremely simple to test. I propose that you attempt to falsify / validate your hypothesis by testing it. Remember, as Earth’s surface is 70% or so water, this is actually a test of the entire anthropogenic global warming hypothesis. You could become famous.
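The arithmetic Ian proposes is easy to set out. A minimal editorial sketch, with every input a round-number assumption (swap in whatever flux and heat-gain figures you find credible):

```python
# Back-of-envelope: how long must a sustained net flux imbalance act
# over the oceans to supply a given heat gain? All inputs are assumed
# round numbers for illustration, not measured values.
ocean_area_m2 = 3.6e14     # ~70% of Earth's surface
flux_w_m2 = 0.6            # assumed net downward imbalance over the ocean
heat_gain_j = 2.0e23       # assumed 0-700 m OHC increase (~200 ZJ)

power_w = flux_w_m2 * ocean_area_m2        # total watts into the ocean
seconds = heat_gain_j / power_w
years = seconds / 3.156e7                  # seconds per year
print(f"~{years:.0f} years at {flux_w_m2} W/m^2 for {heat_gain_j:.1e} J")
```

With these particular assumptions the answer is roughly 30 years; the conclusion swings entirely on the flux one plugs in, which is the crux of the disagreement above.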

RealOldOne2
Reply to  Tom Dayton
September 23, 2017 7:48 pm

Tom Dayton, The s-o-d blog does not explain how OHC increases are caused by ghgs in the colder atmosphere. The only ocean-atmosphere heat exchange process that adds heat/thermal energy to the oceans is solar radiation. The other ocean-atmosphere heat exchange processes, conduction, latent heat/evaporation, and longwave radiation all cool/transfer heat out of the oceans.
The transfer of thermal energy via longwave radiation is one-directional, from the warmer ocean surface to the colder atmosphere. There is no two-way flow of thermal energy, for that would violate the 2nd Law of Thermodynamics, as it would require the transfer of thermal energy/heat from the colder atmosphere to the warmer (on a global average) surface of the ocean, causing the warmer object to increase its temperature. That can’t happen in the real world. 2nd Law. Here is the real Earth energy budget, from peer-reviewed science. Notice: a) this paper was written explicitly from the perspective of the 2nd Law of Thermodynamics and b) there is no energy flux component from the colder atmosphere to the warmer surface of the Earth.
The claim that the ‘missing heat’ during the ‘pause/hiatus’ is in the ocean actually supports natural/solar-caused climate warming, not anthropogenic ghg-caused warming. And since the temperature of the atmosphere didn’t increase during the pause, there can be no increase in OHC due to less heat escaping the ocean because of more ghgs in the atmosphere.

Tom Dayton
Reply to  Ian W
September 22, 2017 7:46 am

Here is Part 4 of that series. At its start it has links to Parts 1, 2, and 3: https://scienceofdoom.com/2011/01/06/does-back-radiation-%e2%80%9cheat%e2%80%9d-the-ocean-%e2%80%93-part-four/

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 7:55 am

Tom, Hypothesis C is based on oversimplification. And it still doesn’t explain the transfer of all those joules into the ocean.

JonA
Reply to  Tom Dayton
September 22, 2017 8:11 am

I’m interested in why you quote scienceofdoom as an authoritative source.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 8:45 am

JonA: I linked to Science of Doom because apparently most people at WUWT are unwilling to open any textbooks. Despite being able to get them for free from any college or university library. Or by interlibrary loan from their local public library.

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 10:26 am

Oh barf, Tom. The elitist “you are all stupid” stuff is for the little kiddies in mommy’s basement. I would assume you are above that.

Tom Dayton
Reply to  Ian W
September 22, 2017 7:48 am

Ian, you really should read that four-part series. Clearly you have not, because you are entirely misunderstanding the mechanism.

Leonard Lane
Reply to  Ian W
September 22, 2017 8:48 am

Why is this kind of material published here? I see no value in this post.

gnomish
Reply to  Leonard Lane
September 22, 2017 11:05 am

because the real al gore was getting his chakra released so his understudy came in his place.

the other Ed Brown
Reply to  Leonard Lane
September 22, 2017 11:55 am

Leonard: Je n’ai pas besoin d’assistance ; au contraire, je suis ici pour vous aider. [I don’t need assistance; on the contrary, I am here to help you.]
Both the originating article, and especially this comments thread provide a succinct sample and rebuttal of the climate fraud we know as CAGW. I’ve bookmarked it and archived many of these comments as easily understood examples for the further argumentation, which is surely in our future.
[Disclaimer: I am not a French scholar, but I did sleep in my own bed last night. Alas, alone.]

RHS
Reply to  Ian W
September 22, 2017 10:25 am

Let’s see, with:
1,347,000,000 cu km – volume of the oceans
1 cu km of water = 10^15 grams, so the oceans hold about 1.347 × 10^24 grams (1,347 Zg) of water
3 × 10^23 joules (300 ZJ) – energy said to have entered the oceans
It takes 4.18 joules to raise 1 gram of water by 1 kelvin, so 300 ZJ is enough to warm about 300 / 4.18 ≈ 72 Zg of water by 1 kelvin. However, the oceans hold nearly 19 times that mass of water.
Spread over the full 1,347 Zg, 300 ZJ raises the ocean temperature by roughly 0.05 Celsius. How is this a concern?
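The same arithmetic in a few lines of Python (a sketch; the 300 ZJ heat gain is the commenter’s assumption, kept here for illustration):

```python
# Verify the back-of-envelope ocean warming estimate above.
ocean_volume_km3 = 1.347e9            # volume of the oceans
grams_per_km3 = 1e15                  # 1 km^3 of water ~ 10^15 g
ocean_mass_g = ocean_volume_km3 * grams_per_km3   # ~1.347e24 g

heat_j = 3.0e23                       # 300 ZJ, the commenter's assumed gain
c_water = 4.18                        # specific heat, J per gram per kelvin

delta_t = heat_j / (c_water * ocean_mass_g)
print(f"mean warming of the whole ocean: {delta_t:.3f} K")   # ~0.053 K
```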

Taylor Ponlman
September 22, 2017 6:40 am

Something occurred to me in reading this article. There has been a lot of talk about “red teaming” climate science, and the attendant issues of wrestling that big a topic to the ground. However “red teaming” the climate models, or at least the major ones, seems doable, and the expertise and even methods are reasonably well understood.
We are expected to base our lives and our economic systems on their output, so why not understand their fitness for purpose? I’m not talking about trying to measure their skill, just their quality and robustness, first of the architecture and code, and second of their assumptions.
An associated effort would be to red team the RCPs. It’s been a long time since those were formulated and particularly RCP 8.5 has caused a lot of unjustifiable mischief.

Reply to  Taylor Ponlman
September 22, 2017 7:24 am

“Tiger Teams” should analyze both the data and the models. The “adjustment,” “infilling” and “reanalysis” of the data should be investigated in detail. The “fitting” of the models should also be investigated in detail. We’ve seen quite enough of the “magic” of “the man behind the curtain.”
Should a Red Team / Blue Team exercise occur, neither team should be permitted to make assertions based on data or model code which has not been publicly available for analysis.

Tom Dayton
September 22, 2017 6:48 am

The spaghetti graph does convey important information: None of the individual model runs matches the model ensemble mean. That’s because all of the model runs realistically reflect the swings up and down of the temperature that are due to internal variability, in particular the shifting of heating between oceans and air. Most models get the sizes and durations of those swings correct, but most get the timings incorrect.
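Tom’s timing point is easy to reproduce with a toy simulation (an editorial sketch, not model output): give every synthetic “run” an oscillation of identical size and duration but a random phase, and the ensemble mean flattens out even though no individual run is flat:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 20.0, 241)      # 20 "years" at monthly resolution

# 30 runs: identical ENSO-like oscillation (same amplitude and period),
# but each with a random phase relative to the calendar.
runs = np.array([0.2 * np.sin(2 * np.pi * t / 4.0 + rng.uniform(0, 2 * np.pi))
                 for _ in range(30)])

# Each run swings ~0.4 degC peak to peak; the mean swings far less,
# so no individual run resembles the ensemble mean.
print("typical swing of one run:", np.ptp(runs[0]))
print("swing of ensemble mean  :", np.ptp(runs.mean(axis=0)))
```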

Reply to  Tom Dayton
September 22, 2017 8:02 am

Most models get the sizes and durations of those swings correct, but most get the timings incorrect.

Weather prognostications and lottery tickets demonstrate the same problem more tangibly.

I Came I Saw I Left
Reply to  jaakkokateenkorva
September 22, 2017 8:30 am

+1
I can predict with 100% confidence that I can pick the correct sequence of winning lottery numbers. Timing is what always frustrates my retirement in Tahiti.

Reply to  jaakkokateenkorva
September 22, 2017 2:40 pm

+2

Pat Kelly
September 22, 2017 6:52 am

How do statistical models further our understanding of the science of climate change? Without a more fundamental model of the dynamics involved – which includes water vapor and cloud formation, these statistical models will eventually no longer fit the observed data. Since the alarmist community want to impact political and social behavior 100 years from now, I don’t see this model as very useful.

Tom Dayton
Reply to  Pat Kelly
September 22, 2017 7:39 am

These models are not statistical models. They are physical models. Here is an introduction: “The Physics of Climate Modelling” by Gavin Schmidt in Physics Today, 2007: http://doi.org/10.1063/1.2709569

Butch2
Reply to  Tom Dayton
September 22, 2017 7:56 am

….Really ??? ROTFLMAO …..

Pat Kelly
Reply to  Tom Dayton
September 22, 2017 11:41 am

You should pay attention more closely to what was written in the article above. The author refers to ” The graph uses basic statistics, something too rarely seen today in meteorology and climate science.” Now you may want to pretend that the things they use for models are “physical,” but unfortunately, they are currently incapable of describing all but the most basic influences on atmospheric temperatures. Couple that with the cherry picking of weather stations to represent the global condition and you can see their problem with prediction.

Dave Fair
Reply to  Pat Kelly
September 22, 2017 10:12 am

No, no, no, no Pat. Like all good shamans, they want to impact political and social behavior NOW, based on fear of events 100 years from now.

September 22, 2017 6:56 am

No satellite data, only mangled surface data sets that are not temperature data sets, they are rubbish.
No no no

Griff
Reply to  Mark - Helsinki
September 22, 2017 7:09 am

Not even the RSS satellite data, eh…

AndyG55
Reply to  Griff
September 22, 2017 11:04 am

The RSS satellite data that shows 1998 only 0.009 °C below 2016?
The RSS satellite data that shows 2005, 2006, 2007, 2010, 2014, 2015 to be WELL BELOW 1998?
You mean that satellite data?
Again, griff puts foot in mouth and starts yapping!!

MarkW
Reply to  Griff
September 22, 2017 2:27 pm

As problematic as RSS is, it is at least close to being usable. Unlike the ground based measurements.

Reply to  Mark - Helsinki
September 22, 2017 7:29 am

Once a data set is “adjusted”, what remains is an “estimate set”. While it is highly unlikely that the errors in the near-surface temperature data set are random, it is ludicrous to suggest that the errors in the “adjusted” estimate set are random.

September 22, 2017 6:56 am

This article is founded on utter rubbish called science

Bob Hoye
September 22, 2017 6:57 am

Larry cites Krugman.
Not a rational report, no matter the many other citations.

Reply to  Bob Hoye
September 22, 2017 9:37 am

subtle,
You should read before making your knee-jerk criticism. Citing Krugman in support of skeptical positions is a powerful argument — something like “admission against interest” in the courtroom.
https://www.nolo.com/dictionary/admission-against-interest-term.html

September 22, 2017 6:59 am

Cooking surface data sets so they keep the model output relevant, and this is called science?
Meanwhile the only data sets worth looking at are left out.
I call nonsense.

September 22, 2017 6:59 am

Ah, logic in a box is not logic.

JohnWho
September 22, 2017 7:07 am

Wait, what does the following (from “conclusions”) have to do with “public policy response to climate change”?
“4. We should begin a well-funded conversion to non-carbon-based energy sources, for completion by the second half of the 21st century — justified by both environmental and economic reasons.”

E. Martin
September 22, 2017 7:12 am

Advice on theory testing from Paul Krugman? Are they serious? Many feel that he has made a lifetime career out of being wrong.

September 22, 2017 7:15 am

“A climate science milestone: a successful 10-year forecast!”
Sorry to be a party pooper. But there are two things wrong about this approach.
1) First, can the input data or the results be considered cumulative data series?
The reason is that any two sets of random data, when cumulated over time, will show high correlation–spurious, but very convincing.
Jamal Munshi has a nice video on cross-correlation of cumulated variables. I think the accent is Irish, but Dr Munshi is a professor at a US university.

Munshi, Jamal. “The spuriousness of correlations between cumulative values.” (2016).
https://tinyurl.com/y8sw5luj
2) Statisticians and econometricians warn us that trending time series are also subject to spurious correlation, which is why Granger and Engle won a Nobel Prize for their work on cointegration.
See this approach applied to climate data:
Beenstock, M., Y. Reingewertz and N. Paldor (2012), “Polynomial cointegration tests of anthropogenic impacts on global warming.” Earth System Dynamics 3: 173-188.
URL: https://www.earth-syst-dynam.net/3/173/2012/esd-3-173-2012.pdf
Beenstock, Michael, Yaniv Reingewertz, and Nathan Paldor. “Testing the historic tracking of climate models.” International Journal of Forecasting 32.4 (2016): 1234-1246.
https://pdfs.semanticscholar.org/f298/ca71000425cecf4e25382344a7b6f1662436.pdf
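
Munshi’s point about cumulated series is easy to reproduce (a minimal sketch in Python; the series length and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(42)
    x = rng.normal(size=500)          # two independent random series
    y = rng.normal(size=500)
    print(np.corrcoef(x, y)[0, 1])    # near zero, as expected
    X, Y = np.cumsum(x), np.cumsum(y) # cumulate each over "time"
    print(np.corrcoef(X, Y)[0, 1])    # often large in magnitude: spurious

Rerun with different seeds and the correlation of the cumulated series swings between strongly positive and strongly negative, which is why a high correlation between two cumulative or strongly trending series proves nothing by itself.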

Michael Lemaire
Reply to  Frederick Colbourne
September 22, 2017 11:58 am

Jamal Munshi has authored many very interesting papers about the correlation (or rather the lack thereof) between fossil fuel emissions and atmospheric CO2, between atmospheric CO2 and sea level rise, etc. His work deserves to be reviewed on WUWT.

Andrew Cooke
September 22, 2017 7:16 am

Someone on here please help me understand some information. The graph on heat content indicates that the heat content of the ocean has gone up 16×10^22 joules since 1990. That many joules roughly equals 3.8×10^19 kcal. The mass of the ocean is estimated to be 1.4×10^21 kg. That means each kg has gone up by about 2.7×10^-2 degrees Celsius.
This makes me ask a few questions.
1. Conservation of energy. How is it that 16×10^22 joules were transferred into the ocean by Global Warming? For that to happen it had to come from the air. We are discounting the sun, because AGW proponents tell us the sun is not the driver. Where did the energy come from?
2. How is it that our measurement instruments are able to cover the whole mass of the ocean such that we can determine a temperature rise of 2.7×10^-2 degrees Celsius? If we use just the mass of surface water down to 1000 meters (which is probably all we can really measure), it really messes up the numbers. For that matter, what is the error range on that determination?
You know, I may be looking at this wrong. I have a degree in engineering and had to take 3 physics classes to get it, but maybe someone on here can more fully enlighten me.
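
For what it’s worth, the arithmetic is easy to re-run (a minimal check in Python, using the 16×10^22 J and 1.4×10^21 kg figures from the comment above and a specific heat of ~4184 J per kg per °C):

    E_joules = 16e22                # OHC rise since 1990, J, as stated above
    J_PER_KCAL = 4184.0             # ~1 kcal warms 1 kg of water by 1 deg C
    ocean_mass = 1.4e21             # kg, as stated above

    kcal = E_joules / J_PER_KCAL    # ~3.8e19 kcal
    delta_T = kcal / ocean_mass     # ~2.7e-2 deg C if spread over the whole ocean
    print(kcal, delta_T)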

Tom Dayton
Reply to  Andrew Cooke
September 22, 2017 7:22 am

Here is one of many places to find an explanation of how atmospheric CO2 causes heating of the oceans: https://scienceofdoom.com/2010/10/06/does-back-radiation-heat-the-ocean-part-one/

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 7:35 am

Yeah, no, it doesn’t. I saw you post this on another post upthread. It just doesn’t. It gives enough information to make the writer look smart. It just doesn’t touch on the right information.
If CO2 is the cause, what part is attributable to the difference in CO2 from the baseline? Of course even the notion of a baseline is silly, but for the purpose of this question, let’s consider the baseline to be whatever amount of CO2 would not cause the heat content of the ocean to rise.
Oh, and the real question. How does back radiation account for 16×10^22 joules? About 8.51×10^8 seconds have passed since 1990. That would mean 1.88×10^14 joules per second have gone into the ocean solely from forcing from Global Warming.
HOW IS THAT POSSIBLE?!!!!!!!!!!!!

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 7:43 am

There are four parts in that series. I pointed you only to Part 1, assuming you could find the other parts by yourself. I guess not. Here is Part 4. At its start it has links to Part 2 and Part 3: https://scienceofdoom.com/2011/01/06/does-back-radiation-%e2%80%9cheat%e2%80%9d-the-ocean-%e2%80%93-part-four/

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 7:58 am

Really, Tom. Yes, I read it all. It is just a series of blog posts that lead up to the creation of a hypothesis and a Matlab model.
Still doesn’t answer my two questions above.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 8:19 am

I pointed you to Science of Doom because I thought you wanted technical detail. Apparently not. So here is a simpler, briefer, explanation, with references to peer reviewed literature for details: https://www.skepticalscience.com/How-Increasing-Carbon-Dioxide-Heats-The-Ocean.html

Matt G
Reply to  Tom Dayton
September 22, 2017 8:21 am

The AMO explains ocean warming since the 1990s, when it started to become positive towards the end of that decade, along with an increased frequency of El Niños. No other explanation is required; removing the AMO from the global trend causes it to flatline. The AMO, an effect of the AMOC, is linked with the global ocean conveyor ending in the Arctic Ocean, so it hugely influences the NH especially.
Here is a simple experiment that shows the difference between LWR and SWR in warming a small amount of water.
Two identical buckets (5 L, for example) with the same volume of water at the same temperature are placed in two different locations outside.
1) One in shade where no sun can reach it all day, but open to the atmosphere.
2) The other in an open area where sun can reach it most of the day.
Record the temperatures hourly and see how the two compare.
You will be surprised to find the bucket in shade doesn’t warm during the day, while the one in the sun warms greatly. Small water volumes are used to reveal small changes that oceans or huge bodies of water hide.
This demonstrates the orders-of-magnitude difference between the two: LWR fails to overcome latent heat loss at the surface from evaporation.

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 8:53 am

(Sigh again. OK, deep breath.) Now, Tom, the Skeptical Science post was interesting. The author was positing that the upper skin of the ocean getting warmer lessens the heat transfer from the layers underneath, which causes more heat to be retained. That explains how the ocean could hold in more heat.
But that still doesn’t answer the question. The only reason that matters is that it keeps the ocean from transferring heat back into the atmosphere (theoretically), but that does not increase the amount of heat actually going into the ocean.
If you have an equation 5x - 5y = 1, taking away the 5y does cause the number on the right side to increase (let’s call it rate of transfer), but it does not increase the amount of x. X and Y are mutually exclusive.
So that explains how the ocean could retain heat (theoretically), but it does not explain WHERE THE 1.88×10^14 JOULES PER SECOND COME FROM!!

Toneb
Reply to  Tom Dayton
September 22, 2017 9:00 am

Andrew:
” For that to happen it had to come from the air. We are discounting the sun, because AGW proponents tell us the sun is not the driver. Where did the energy come from?”
No, it did come from the Sun – solar SW.
It is absorbed by the deep surface layers of the ocean, which LWIR isolates from the surface-emitting skin via a reduction in ΔT in the topmost few millimetres. Heat flows better across a large ΔT, yes? (2nd Law of Thermodynamics.)
“Oh, and the real question. How does back radiation account for 16×10^22 joules? About 8.51×10^8 seconds have passed since 1990. That would mean 1.88×10^14 joules per second have gone into the ocean solely from forcing from Global Warming.”
The Earth has a surface area of 510×10^12 m².
70% of that is ocean: 357×10^12 m² = 3.57×10^14 m².
1.88×10^14 W / 3.57×10^14 m² ≈ 0.5 W/m² being retained by the oceans.
So about 0.5 J/s for each m² of ocean surface accounts for the 16×10^22 J.
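
Those numbers re-run cleanly (a minimal check in Python of the arithmetic above):

    E_joules = 16e22               # J accumulated since 1990
    seconds = 8.51e8               # s from 1990 to 2017, as above
    earth_area = 510e12            # m^2
    ocean_area = 0.7 * earth_area  # ~3.57e14 m^2

    power = E_joules / seconds     # ~1.88e14 W
    flux = power / ocean_area      # ~0.53 W per m^2 of ocean surface
    print(power, flux)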

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 9:06 am

Andrew Cooke: The vast majority of the energy entering the oceans is from non-IR sunlight that penetrates that thin surface layer. That energy heats the oceans. But the oceans also lose energy through radiation and convection. CO2 in the atmosphere reduces that loss of energy. Increasing CO2 in the atmosphere shifts the balance of ocean energy gain and loss by reducing the loss. Consequently more energy accumulates in the oceans than is lost. The hotter oceans radiate more energy, so if the CO2 insulation stabilized then eventually the rate of loss from the oceans would increase enough to match the incoming energy from the Sun. But as long as CO2 in the atmosphere keeps increasing, the resulting insulation keeps increasing, and energy in the oceans keeps accumulating.
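
The accumulate-until-loss-matches-gain logic can be sketched with a zero-dimensional energy balance (a minimal sketch in Python; the 1 W/m^2 forcing step, the 2 W/m^2 per °C loss rate, and the ~700 m effective ocean layer are illustrative assumptions, not tuned values):

    # Toy anomaly model: C * dT/dt = F - k * T
    SECONDS_PER_YEAR = 3.15e7
    C = 2.9e9   # J/m^2 per deg C: heat capacity of ~700 m of water (illustrative)
    k = 2.0     # W/m^2 per deg C: linearized loss to space (illustrative)
    F = 1.0     # W/m^2: a one-time "insulation" forcing step (illustrative)

    T = 0.0
    for year in range(300):
        imbalance = F - k * T                  # W/m^2 still flowing into the ocean
        T += imbalance * SECONDS_PER_YEAR / C
    print(T, F - k * T)  # T approaches F/k = 0.5 deg C as the imbalance decays to zero

Hold F fixed and the system settles at a new equilibrium; let F keep rising, as with steadily increasing CO2, and the temperature (and accumulated heat) keeps chasing it.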

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 9:29 am

Toneb, finally an answer I can sink my teeth into. How deep would you say this “skin” of the ocean actually is? Doesn’t the skin have to be static for this to really matter? When the wind blows, does the increased movement cause greater heat transfer? How much heat was released by El Niño?
Other questions. Is it just the water above the thermocline that is increasing in temperature? What about the water below the thermocline? What impact does this have on the kcal-to-mass calculation?

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 10:42 am

Toneb, now for the ultimate question. What temperature difference between the skin and the water under it is necessary for enough heat to transfer back into the atmosphere?
You know, the more I think about this, the more I realize how childish the whole explanation is. There are so many problems.
A. Since it is a rate, the loss of transference should cause an acceleration, which would mean the temperature of the ocean should be accelerating upward. Yeah. Uh-huh.
B. You can’t use an average and treat it as if it were the static universal temperature at a given latitude. Different parts of the ocean are at different temperatures.

Dave Fair
Reply to  Andrew Cooke
September 22, 2017 12:10 pm

Temperature anomalies are supposed to account for that, Andrew. Even using anomalies, look at how Bob Tisdale destroys CAGW SST memes!
https://bobtisdale.wordpress.com/

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 10:53 am

You know what, I have another question for all you “skin” theorists.
So if the “skin” is warmer, is more water evaporating? Would this not cause more cloud cover?

Tom Dayton
Reply to  Andrew Cooke
September 22, 2017 7:30 am

Since 2006, the Argo program has measured down to 2,000 meters, nearly globally. If you want details on that and other measurements, you can start by reading the references in this recent article: https://eos.org/opinions/taking-the-pulse-of-the-planet. Look in the body of the article to find the relevant references, then look at the reference list at the bottom of the article for links to those articles themselves. It is those articles themselves that have details you are looking for, though if you want even more details you’ll need to follow the references in those articles as well. Or you can go straight to the various programs’ web sites, such as the site for Argo: http://www.argo.ucsd.edu/About_Argo.html

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 8:28 am

Ah, Argo, Smargo. Just answer the question. How is it possible that 1.88×10(14) joules per second go into the ocean from forcing from Global Warming?

Reg Nelson
Reply to  Tom Dayton
September 22, 2017 8:34 am

ARGO initially showed cooling. This of course was adjusted to show warming. Then SST was adjusted upward using water-bucket data.
This is a political movement (The Cause), not a scientific one.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 8:47 am

Andrew Cooke: That was my answer to your question # 2. You’re welcome.

Andrew Cooke
Reply to  Tom Dayton
September 22, 2017 8:54 am

No, it doesn’t. Nice try.

knr
Reply to  Tom Dayton
September 22, 2017 10:16 am

Nearly globally! That reminds us how many Argo buoys there are and therefore how much sea area each has to cover, where there are any at all.
“Globally” in the sense that being able to measure the weather over the whole of the Himalayas with one measuring station 200 miles from one bit of it is “global”.
In the sense that is scientifically meaningful for the accuracy being claimed: not even in the same universe, let alone global.

Pamela Gray
September 22, 2017 7:24 am

Once again climate research ignores confounding factors that, in complex systems noted for millennial, centurial, and decadal cycles, are around every corner. And once again climate research runs with correlation as if it alone were proof of causation.
Will we really benefit from more research to develop mitigation plans? Humans have, through lack of common sense, run past any hope of mitigation by creating fragile holds on continental edges. Move inland!
Idiots.

David Wells
Reply to  Pamela Gray
September 22, 2017 8:50 am

Pamela, “we have run past any hope of mitigation”. Mitigation of what, exactly? Mitigating SLR, for instance, which has been rising for 20,000 years with no acceleration for 90 years? And mitigation by what means? If 100,000 billion tons of Co2 emitted between 2000 and 2010 did not shake our atmosphere, then how much is needed to achieve that end? If Co2 is not the shaker and mover of our climate, then what should we mitigate in order to influence our climate to the extent that SLR, for example, will not continue?
Maybe we should hire Billy Connolly to shout out loud “go away nasty climate change”? Mitigation of reality is not possible; the climate will change. The planet was not designed specifically for human habitation, and so, like politics, it’s about events. As said, how much of what should we mitigate or abate in order to influence the energy of 1 million Hiroshima Bombs every day, as exhibited by Hurricane Irma?
Humanity will never ever find the answers to our climate in little black boxes. We have a coupled non-linear chaotic climate, therefore future climate states are not predictable; we will get what we get and that is a fact.

Jean Meeus
Reply to  David Wells
September 22, 2017 9:22 am

CO2, not Co2.

Pamela Gray
Reply to  Pamela Gray
September 22, 2017 12:44 pm

Sea levels will likely continue a slow undulating change, some of it up, some of it down, as long as we remain in the current interstadial warmth. So I still say: Idiots.
Here is the latest link on the natural cycles which will affect flora and fauna at continental edges.
http://eprints.esc.cam.ac.uk/3856/1/nature21364.pdf

September 22, 2017 7:29 am

One more chart for policy considerations.

William Astley
September 22, 2017 7:31 am

Come on man. We have already spent billions and billions on ‘climate’ research. There is sufficient information to solve what causes cyclic climate change in the paleo record, why the earth warmed in the last 150 years, and whether the planet will warm or cool in the next 5 years.
The answer to what causes cyclic warming and cooling of the earth is the sun. There is a jump-up-and-down, change-the-universe type of breakthrough (it changes almost every field of science, opens a new field of physics, and so on) that explains the mechanisms.
You ignore the paleo record and the hundreds of papers that show unequivocally that solar cycle changes cause the planet to cyclically warm and cool.
The researchers have hidden the fact that changes in planetary cloud cover (the forcing due to the reduction in cloud cover is calculated at 7.5 W/m^2, as compared to a calculated 2.5 W/m^2 due to a doubling of atmospheric CO2; see Palle’s paper) explain all of the warming in the last 30 years.
There are cycles of warming and cooling that correlate with cosmogenic isotope changes in the paleo record (the planet warms when the solar magnetic cycle is active and cools when the solar cycle goes into a Maunder minimum), with a periodicity of 1,500 years and a beat of plus/minus 500 years.
Note that the periodicity of the warming and cooling is the same for both hemispheres, which rules out solar insolation changes as the mechanism, since insolation-driven warming and cooling would be out of phase at the two poles.
http://wattsupwiththat.files.wordpress.com/2012/09/davis-and-taylor-wuwt-submission.pdf

Davis and Taylor: “Does the current global warming signal reflect a natural cycle”
…We found 342 natural warming events (NWEs) corresponding to this definition, distributed over the past 250,000 years…. The 342 NWEs contained in the Vostok ice core record are divided into low-rate warming events (LRWEs; < 0.74 °C/century) and high-rate warming events (HRWEs; ≥ 0.74 °C/century) (Figure)….
“Recent Antarctic Peninsula warming relative to Holocene climate and ice-shelf history”, authored by Robert Mulvaney and colleagues of the British Antarctic Survey (Nature, 2012, doi:10.1038/nature11391), reports two recent natural warming cycles, one around 1500 AD and another around 400 AD, measured from isotope (deuterium) concentrations in ice cores bored adjacent to recent breaks in the ice shelf in northeast Antarctica….

Greenland ice temperature, last 11,000 years, determined from ice core analysis (Richard Alley’s paper). William: As this graph indicates, the Greenland ice data shows that there have been 9 warming and cooling periods in the last 11,000 years.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.agu.org/pubs/crossref/2003/2003GL017115.shtml

Timing of abrupt climate change: A precise clock by Stefan Rahmstorf
Many paleoclimatic data reveal an approx. 1,500-year cyclicity of unknown origin. A crucial question is how stable and regular this cycle is. An analysis of the GISP2 ice core record from Greenland reveals that abrupt climate events appear to be paced by a 1,470-year cycle with a period that is probably stable to within a few percent; with 95% confidence the period is maintained to better than 12% over at least 23 cycles. This highly precise clock points to an origin outside the Earth system (William: solar magnetic cycle changes cause the warming and cooling); oscillatory modes within the Earth system can be expected to be far more irregular in period.

Mechanism where Changes in Solar Activity Affects Planetary Cloud Cover
1) Galactic Cosmic Rays (GCR)
Increases in the sun’s large-scale magnetic field and increased solar wind reduce the magnitude of GCR that strike the earth’s atmosphere. Satellite data show a 99.5% correlation between GCR level and low-level cloud cover from 1974 to 1993.
2) Increase in the Global Electric Circuit
Starting around 1993, GCR and low-level cloud cover no longer correlate. (There is a linear reduction in cloud cover.) The linear reduction in cloud cover does correlate with an increase in low-latitude solar coronal holes, particularly at the end of the solar cycle, which cause solar wind bursts. The high-speed solar wind bursts create a potential difference between the high-latitude regions of the planet and the equator. The increase in potential difference removes cloud-forming ions from the high-latitude regions and the equatorial region, which changes cloud properties and cloud duration and causes warming in both locations.
3) Changes in geomagnetic field intensity and orientation. Unexplained rapid and cyclic geomagnetic field changes in the paleo record have been found to correlate with planetary temperature changes.
http://solar.njit.edu/preprints/palle1266.pdf

The Earthshine Project: update on photometric and spectroscopic measurements
“Our simulations suggest a surface average forcing at the top of the atmosphere, coming only from changes in the albedo from 1994/1995 to 1999/2001, of 2.7 +/-1.4 W/m2 (Palle et al., 2003), while observations give 7.5 +/-2.4 W/m2. The Intergovernmental Panel on Climate Change (IPCC, 1995) argues for a comparably sized 2.4 W/m2 increase in forcing, which is attributed to greenhouse gas forcing since 1850.
“As evidence for a cloud—cosmic ray connection has emerged, interest has risen in the various physical mechanisms whereby ionization by cosmic rays could influence cloud formation. In parallel with the analysis of observational data by Svensmark and Friis-Christensen (1997), Marsh and Svensmark (2000) and Palle´ and Butler (2000), others, including Tinsley (1996), Yu (2002) and Bazilevskaya et al. (2000), have developed the physical understanding of how ionization by cosmic rays may influence the formation of clouds.
Two processes that have recently received attention by Tinsley and Yu (2003) are the IMN process and the electroscavenging process. (William: There is a third mechanism.)”

Hans-Georg
Reply to  William Astley
September 22, 2017 7:46 am

Fabius Maximus is wrong here. There is no successful 10-year forecast, for two reasons. First, there are only scenarios in climate research, not predictions.
Unfortunately, none of the scenarios is true; only the mathematical mean of the scenarios matches, and that mean is meaningless in climate research.
Second, the end point of the graphic is cherry-picking. Without the two-year Super El Niño we would have far lower atmospheric temperatures. A lower increase would also be seen in ocean heat content.
As for the outlook, all that remains is the statement that the spike in temperatures from the Super El Niño will be relativized in the next years by the following La Niña and the negative phase of the AMO(C).

September 22, 2017 7:33 am

Here is a clear sign of why there is global warming. Intellicast is showing that the lows for the last 3 nights where I live were 50 F, 48 F, and 60 F. In reality the temp is currently 40 F after rising from 36 F early this morning. Yesterday the low was 35 F despite Intellicast showing 48 F, and the night before that it was around 48 F and not 60 F. Global warming can be readily seen. …http://www.intellicast.com/Local/ObservationsSummary.aspx?location=USCA0307

Pamela Gray
Reply to  goldminor
September 22, 2017 7:38 am

You forgot the sarcasm tag

Reply to  Pamela Gray
September 22, 2017 7:44 am

I have never used a sarc tag, but I suppose on occasion I should. Still, I have to wonder why they are not showing the real lows for the night, as I have noticed some discrepancies lately.

Curious George
Reply to  Pamela Gray
September 22, 2017 7:55 am

Don’t take The Cause sarcastically.

Sixto
Reply to  Pamela Gray
September 22, 2017 12:33 pm

Gold,
NOAA’s gnomes are now putting their thumbs on the “raw” data. Book cooking government science has been totally corrupted to the core by the CACA infection.

hunter
Reply to  goldminor
September 23, 2017 5:06 am

So climate change really is man made, see?

Reply to  hunter
September 23, 2017 2:31 pm

Something at Intellicast has changed this year; it was not like this before. I have been using their site almost daily for the last 4 years. Their temps always matched up closely enough with my thermometer and with my sense of the approximate temp.
Intellicast is showing last night’s low as 55 F. That is an outright lie. I went outside around 1 am last night, as I was up late. The air had a cold bite to it. It was at the very least in the low 40s. A prospector friend of mine who lives in the great outdoors said there was frost on the roof of the local firehouse and some other nearby buildings this morning, which did not melt until almost 10 am. So no way the low was 55 F last night.

Robert Austin
September 22, 2017 7:56 am

Promotion of the multi-model ensemble is bogus science, plain and simple; its use is to be deplored. The average of a multi-model ensemble has no scientific meaning. Larry, not being a scientist, might not realize this fact, but Gavin Schmidt should.
Ocean heat in theory should be a good metric for global warming, but can we measure it to the precision necessary? Take the ocean heat chart above and convert joules to degrees C. Now do you think we can measure temperature to that precision and accuracy?
More money for climate science! We have blown way too much money on it over the decades with little to show; let’s burn some more. Larry, your article comes close to saying “the science is settled”, but you do not convince.

Hans-Georg
Reply to  Robert Austin
September 22, 2017 8:04 am

It seems as if Fabi(l)us Maximus forgot the /sarc tag. Rarely have I seen such a wretched post on WUWT. He has forgotten the principles of honest science: no mixing of scenarios to turn scenarios into a prediction, and no cherry-picking.

Reply to  Robert Austin
September 22, 2017 9:14 am

Excellent points. All ocean heat estimates (Levitus, many papers) prior to Argo are just best guesses. The XBT darts had multiple problems and coverage was very poor.

Dave Fair
Reply to  ristvan
September 22, 2017 10:30 am

When running a business, Rud, one never allows the person collecting the cash to be the person accounting for the cash. Climate “science” violates that dictum.

September 22, 2017 8:02 am

Schmidt continues to make the same egregious error of scientific judgement as the majority of academic climate scientists: projecting temperatures forward in a straight line beyond a peak and inversion point in the millennial temperature cycle.
Climate is controlled by natural cycles. Earth is just past the 2003+/- peak of a millennial cycle and the current cooling trend will likely continue until the next Little Ice Age minimum at about 2650. See the Energy and Environment paper at http://journals.sagepub.com/doi/full/10.1177/0958305X16686488
and an earlier accessible blog version at http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
Here is the abstract for convenience:
“ABSTRACT
This paper argues that the methods used by the establishment climate science community are not fit for purpose and that a new forecasting paradigm should be adopted. Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths. It is not possible to forecast the future unless we have a good understanding of where the earth is in time in relation to the current phases of those different interacting natural quasi periodicities. Evidence is presented specifying the timing and amplitude of the natural 60+/- year and, more importantly, 1,000 year periodicities (observed emergent behaviors) that are so obvious in the temperature record. Data related to the solar climate driver is discussed and the solar cycle 22 low in the neutron count (high solar activity) in 1991 is identified as a solar activity millennial peak and correlated with the millennial peak – inversion point – in the UAH6 temperature trend in about 2003. The cyclic trends are projected forward and predict a probable general temperature decline in the coming decades and centuries. Estimates of the timing and amplitude of the coming cooling are made. If the real climate outcomes follow a trend which approaches the near term forecasts of this working hypothesis, the divergence between the IPCC forecasts and those projected by this paper will be so large by 2021 as to make the current, supposedly actionable, level of confidence in the IPCC forecasts untenable.”
The fundamental error in establishment forecasts is illustrated in Fig. 12 from the paper:
Fig. 12 compares the IPCC forecast with the Akasofu (31) forecast (red harmonic) and with the simple and most reasonable working hypothesis of this paper (green line): that the “Golden Spike” temperature peak at about 2003 is the most recent peak in the millennial cycle. Akasofu forecasts the further temperature increase to 2100 to be 0.5 °C ± 0.2 °C, rather than the 4.0 °C ± 2.0 °C predicted by the IPCC, but this interpretation ignores the millennial inflexion point at 2004. Fig. 12 shows that the well-documented 60-year temperature cycle coincidentally also peaks at about 2003. Looking at the shorter 60+/- year wavelength modulation of the millennial trend, the most straightforward hypothesis is that the cooling trends from 2003 forward will simply be a mirror image of the recent rising trends. This is illustrated by the green curve in Fig. 12, which shows cooling until 2038, slight warming to 2073 and then cooling to the end of the century, by which time almost all of the 20th century warming will have been reversed.
The current situation is illustrated in Fig. 4.
The paper further states
“The RSS cooling trend in Fig. 4 and the Hadcrut4gl cooling in Fig. 5 were truncated at 2015.3 and 2014.2, respectively, because it makes no sense to start or end the analysis of a time series in the middle of major ENSO events which create ephemeral deviations from the longer term trends. By the end of August 2016, the strong El Nino temperature anomaly had declined rapidly. The cooling trend is likely to be fully restored by the end of 2019.”
Schmidt compounds his original error by ending his time series before the effects of the major El Niño on the underlying trend have dissipated.

Bob boder
Reply to  Dr Norman Page
September 22, 2017 8:22 am

NORM!!!!!!!

Reply to  Bob boder
September 22, 2017 9:35 am

I’m not sure whether you are enthused or horrified by my working hypothesis.

D. J. Hawkins
Reply to  Bob boder
September 22, 2017 3:41 pm

Doctor;
I think that’s just a little “Cheers” humor.

September 22, 2017 8:10 am

I’ve never understood why Larry Kummer’s missives are posted on WUWT. It is a mystery to me: why allow and avow this smarmy globalist activist such street cred?

Reply to  Scott Wilmot Bennett
September 22, 2017 8:42 am

And the “why” isn’t rhetorical, but I guess Larry has convinced Anthony that he is an impartial activist! ;-(

Reply to  Scott Wilmot Bennett
September 22, 2017 9:23 am

Scott, I would think that the good side is that it exposes the many flaws in the alarmist argument.

Robert Austin
Reply to  Scott Wilmot Bennett
September 22, 2017 2:42 pm

Larry makes us think and requires more than a knee-jerk reply. I think Larry is somewhat naive to the machinations of the Gavins of mainstream climate science, but he is civil and rational.

September 22, 2017 8:14 am

Item one in the conclusions recommends more investment in the “sciences” of climate. I strongly disagree with pursuing any major investment in so-called climate science. I am unable to have any faith in science’s ability to accurately predict any major climatic events. Instead, I feel investment should focus on mitigating the adverse effects that various parts of the planet suffer from climatic upheavals.

Engineer_1
September 22, 2017 8:22 am

“We have the tools to do so. A multidisciplinary team of experts (e.g., software engineers, statisticians, chemists), adequately funded, could do so in a year.”
If you are lucky enough to live long enough, you will see the inversion of radicalism more than once. Climate alarmism is the orthodoxy of today, and climate skepticism is viewed as radical in comparison; I am a skeptic. Occupying the house of the orthodox is no solution, except perhaps as a path to political power.
The claim that a team of engineers, physicists, or mathematicians will establish a V&V solution in a year is far-fetched, and that is being kind. There will be at least two tiers of V&V: code compliance and predictive skill. The code compliance part will take at least a year by itself, and the predictive-skill part is so subjective that its timeline is between 0 and N years, where N is bigger than your political horizon.

Curious George
Reply to  Engineer_1
September 22, 2017 8:47 am

It is a request for adequate funding.

Reply to  Engineer_1
September 22, 2017 9:47 am

engineer,
All large projects begin with someone saying that it can’t be done.
“predictive skill part is so subjective that its time line is between 0 and N years, where N is bigger than your political horizon.”
Quite absurd. Model validation is a well-established field with proven tools. In a year, outside experts could provide useful reports about individual aspects of the models and validation data, comparing them with established results in their own fields of expertise.

Andrew Cooke
Reply to  Editor of the Fabius Maximus website
September 22, 2017 10:45 am

Models are the ultimate tool of subjective analysis. Keyword. Subjective.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:16 am

Andrew,
“Models are the ultimate tool of subjective analysis.”
Wow. Quite nuts. I suggest you ask some engineers or experts in other fields about quantitative models.
Like all powerful tools, they can be misused. Just like a chain saw. But that doesn’t invalidate their utility.

Andrew Cooke
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:00 pm

No, Editor, it is not nuts. And I am an Engineer. I use modeling software, have taken modeling classes and know the math behind them.
It has been my unfortunate experience that, before a model is even used, a premise or notion dictates how the tool will be used. It is very difficult to know all the factors that go into a chaotic system, and even more difficult to quantify all the variables that go into a model. The premise or notion then dictates to the user of the model what assumptions to make when the variables are difficult to quantify. The assumptions are, in effect, the subjective view of the modeler.
Simple systems can be easily modeled. Complex systems are very difficult. The atmosphere of the earth is an exceptionally complex system. The more complex, the more variables; the more variables, the more potential assumptions; the more potential assumptions, the more subjective.
The more assumed variables, the less useful the model. That is probably the single biggest reason why all climate models appear to be useless.

Engineer_1
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:05 pm

Editor of the Fabius Maximus website
Heh, that didn’t take long. Your “all large projects” comment is as circular as it gets.
There are no tools, other than code compliance processes, that may be immediately applied to the problem. List one tool that can improve the predictive skill of a GCM that is not subjective, and not so incredibly limited by its constraints as to be virtually useless as a predictive tool even while appearing to have good fidelity as a curve-fitting tool. There are too many dependent state spaces to get a high V&V score for predictive efficacy. The best tools for parameter/state-space management and V&V in GCMs at this time only show “promise,” whatever that means.
While it is true that outside experts may lend insight to the problem, most honest people already know what the problem is. Successful engineering models are highly constrained, and have been verified by observation and experimentation. They tend to blow up when we ask them to crowd the edges. One cannot verify a climate model by experimentation and observation, other than by fit to previous data.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:17 pm

Andrew,
“Models are the ultimate tool of subjective analysis.”
If that is so clear, can you provide a cite or two from an expert or textbook — or something — supporting that very broad and definitive statement?
I’ve worked with modeling experts off and on since 1986, with climate scientists for the past several years — and never heard anything remotely like that.
There is a subjective element in models, as in pretty much everything, but that’s quite an exaggeration. Note that weather models have become increasingly accurate predictors of complex systems, with some basis to believe that they are about to make a jump forward in power during the next decade.

September 22, 2017 8:35 am

Okay. Here’s my question to policy decision-makers:
Why should I pay more taxes for someone to measure outside air temperature and write reports blaming the mankind?

Curious George
September 22, 2017 8:39 am

Why is the measured temperature in the graph so dramatically different from the graph FAQ 8.1 Fig. 1 in AR4?

PQ
September 22, 2017 8:40 am

This just proves that the models were used to sense-check the Pausebuster revisions to the surface temperature records. Next.

WonkotheSane
September 22, 2017 8:42 am

Aren’t these the same people who were whining that those who claimed a pause used the 1998 El Niño as a baseline (which was false, BTW)? Now they’re using the peak of the recent El Niño as vindication (along with questionable statistical analysis; what else is new). Temps are already falling rapidly from that peak. It will be fun to see their explanation after it pulls the temperature outside of their hyper-extended band.

Matt G
September 22, 2017 8:50 am

That’s progress, a milestone — a successful decade-long forecast!

Really?
1) The warming is no different from the ongoing prior trend, so anyone could have guessed it.
2) The warming follows scenario C, where big reductions in emissions should have occurred to reflect this reality.
3) The warming trend supporting scenario C indicates no concerns regarding dangerous climate change, because the warming would be less than 2 °C per century.
4) The 10-year period cooled for most of it, and only a strong El Niño towards the end has kept it roughly on track.
5) For most of the 10-year period, global temperatures were below the middle-range forecast estimates.
6) The timeline is full of confirmation bias and ignores better observation tools that don’t fit the group’s vision.
7) Exaggerating the surface temperature trend is not a proud moment when it only roughly fits, thanks to a recent strong El Niño, a short period that has shown little change and no concerns for the future.

Michael Jankowski
Reply to  Matt G
September 22, 2017 2:06 pm

Bravo!

K. Kilty
September 22, 2017 8:54 am

My reading of this is that without the most recent, very strong El Niño, the skill of this projection (forecast, prediction, guess) would have been much less impressive. When a feature not accounted for in the model helps the model meet observations, success is not exactly the word I would use.

Michael Jankowski
September 22, 2017 8:55 am

The 95% model spread is about 0.8 deg C. That’s huge. You can drive a model truck through that.

prjindigo
September 22, 2017 8:57 am

An infinite number of monkeys with an infinite number of typewriters, and one accidentally writes a daytime soap opera, so everybody runs around yelling “monkeys can write Shakespeare”…
Do we HAVE to support fallacious articles here?

Kaiser Derden
September 22, 2017 9:01 am

a broken / stopped clock is right twice a day … still not fit for purpose 🙂

September 22, 2017 9:01 am

What is shown is a graph of observations at or below the ensemble forecast average the majority of the time………and diverging, with the observations not as warm as the ensemble average………until the El Niño spike higher, which takes the observations up to the ensemble average…….and ends the “successful” 10-year forecast.
Even if you buy the surface observations, along with their bias and adjustments: if the global climate models had predicted the 2015/16 spike higher in global temperatures from the El Niño and dialed that in (which of course they can’t), then the ensemble average would also have spiked higher and the actual temperatures would not have been able to spike up to the ensemble average.
If you take out the El Niño spike, it’s crystal clear that the global climate models are too warm.
Not just clear…….crystal clear! When the end point on the graph represents a spike higher that everybody on the planet knows will not have sustained momentum, and the source uses it to rescue their global climate model that is clearly too warm…….it tells you about the objectivity of the source.
I would bet a large sum of money that observations cannot keep up with the slope of the temperature increase on the global climate model ensemble average the next 10 years.
It’s possible, but the reality/science as viewed by an independent operational meteorologist making observations of the global atmosphere for the past 35 years says no. We may get warmer, but not at the rate predicted by the models.
This is not to say that global climate models don’t have value. They clearly do but only when used honestly and with adjustments that reconcile differences between observations and models predictions.
It’s funny how the climate news about the same exact thing can change and be spun. Before the El Nino, “The Pause” or warming “slow down” was being legitimately discussed, even by those who thought we were headed for catastrophic warming.
Then we had 2 years with a Super El Niño and a global temperature spike higher, changing the latest tune to “the models are now confirmed”.
If they were confirmed, then the El Niño spike higher should have taken the observations far into the upper range of the ensemble.
Another problem that I see is that the observations show an increase of more than 1 °C from the starting point in 1975 to the top of the El Niño spike. Satellites do/did not show that much of an increase, and 1975 was the low point after the previous 30 years of modest global cooling.
So the starting point is at THE spike lower and the end point is just after a spike higher.
That’s not how you should get a trend to judge model performance.
http://www.drroyspencer.com/2017/09/uah-global-temperature-update-for-august-2017-0-41-deg-c/
How about, instead, we cherry-pick 2012, during a cooling La Niña, as our end point and use the cooler satellite data.
So the cold-extreme cherry-pick in observations is below 97% of climate models. The warm-extreme cherry-pick in observations from this article spikes up just enough to reach the ensemble average.

knr
Reply to  Mike Maguire
September 22, 2017 10:05 am

I would bet a large sum of money that observations cannot keep up with the slope of the temperature increase on the global climate model ensemble average the next 10 years.
Pre or post ‘adjustment’ ?

Tom Dayton
Reply to  Mike Maguire
September 22, 2017 3:37 pm

Mike Maguire: That graph by Christy is severely flawed and misleading. Here is one critique; click the link in its first sentence to see a prior critique: http://www.realclimate.org/index.php/archives/2017/03/the-true-meaning-of-numbers/

tom0mason
September 22, 2017 9:02 am

What a load of blather!
The models may have started with a different objective originally, but now they and their results are nothing more than a tool used by the UN-IPCC to frighten politicians.
Regardless of all the crud written above, they are not accurate, as they do not allow for the real physical changes on this planet that alter the weather trends and thus the climate over time. But then again that is not what the elites of the UN want — they want a stick to bash the western nations with. These models are neither scientific nor clearly statistical (has the code been openly validated?).
So let me repeat — these models are nothing more than a tool used by the UN-IPCC to frighten politicians and the public. That is their ONLY use!

Kaiser Derden
September 22, 2017 9:08 am

So if there were only 2 models, and model one predicted a temp that turned out to be 0.5 degrees higher than observed, and model two predicted a temp that turned out to be 0.5 degrees lower than observed… does anyone in their right mind think that taking the model “average” and saying it matches observations means the models are predictive?
I guarantee you there are models being run that are tuned to show lower temperatures so that the models’ “average” manages to come closer to reality… the guys with the hot models and the guys with the cool models (both sets of which are way off reality) coordinate to ensure the average doesn’t look as bad… all the gloom and doom folks point at the hot models, of course…

Dale S
September 22, 2017 9:12 am

Claiming Schmidt’s graph uses “basic statistics” is mystifying to me. You might be able to apply basic statistics to many runs of the same model, but the CMIP3 ensemble is anything but that. Applying basic statistics to a variety of runs produced by a variety of models doesn’t actually produce any statistically meaningful prediction.
However, I’m tremendously impressed at how well the ensemble mean wiggle-matches in a 5-year spread around Pinatubo. Even the “95% range” wiggle-matches extremely well. That feat is matched nowhere else on the graph.
That includes 2006-2016, where the ensemble mean neither wiggle-matches the observational record nor shows a similar magnitude. That the “estimated 2017” appears to be bang-on is a product of the choice of where to center the anomaly. I’d be interested in a chart that compares the *absolute* temperature instead of the anomaly.
One also wonders if the emissions over the last ten years tracked SRES A1B *and* the resulting concentration of greenhouse gases in the atmosphere matches what the models calculated based on those emissions. Unless both are true, the models can’t collectively claim that their projection matching observations was the result of skill.
Please convert the OHC chart to temperature and show error bars in the estimate, so we can better understand the scale of the detected warming in part of the ocean.

Tom Dayton
September 22, 2017 9:13 am

Projections by Hansen et al. 1981 and Hansen et al. 1988 also have been accurate, far more than 10 years after their projections: http://www.realclimate.org/index.php/climate-model-projections-compared-to-observations/

A C Osborn
Reply to  Tom Dayton
September 22, 2017 10:07 am

Tom, you obviously believe that DLWIR can heat the ocean.
But it would appear that all watts or joules are not created equal.
Can you explain why it cannot be made to do any “work” as sunlight can? After all, warming the oceans must be doing work, mustn’t it?
Why can’t we harvest it?
Can you explain why Solar Ovens, which can focus the sun to raise temperatures up to 450 degrees C, turn at night into refrigerators under DLWIR, lowering the temperature of the objects in them?
Ask your friends over at Science of Doom and elsewhere why we can’t use it to do work; after all, it is heat, isn’t it?

Dave Fair
Reply to  Tom Dayton
September 22, 2017 10:57 am

Oooh, Tom. Pinched your foreskin again on the realclimate site you posted: their chart of actual measurements generally tracks Hansen’s 1988 “projections” Scenario C (constant 2000 forcing), the low-ball scenario. Observations only jump up to his Scenario B (BAU) in the 2014-16 Super El Niño years. His Scenario A (high emissions), which oddly enough tracks actual CO2 emissions, is never even remotely reflected in actual temperature measurements.

AndyG55
Reply to  Tom Dayton
September 22, 2017 11:08 am

“What changed in 2013-2015?”
Gavin took over from Jimmy. !!!!

AndyG55
Reply to  Tom Dayton
September 22, 2017 11:37 am

Not intended to be that funny.
I noted a couple of years ago that divergence from temperature reality took on a new breath of life when Gavin took over.

AndyG55
Reply to  Tom Dayton
September 22, 2017 11:38 am

It was as though Jimmy thought he had mutilated the data enough…
But Gavin was INTENT on doing far more!

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 11:52 am

Forrest: Your questions do not make sense to me, because you seem to be giving those scenarios definitions they do not fit. Here is a good explanation; after you read the Basic tabbed pane, read the Intermediate one and then the Advanced one: https://skepticalscience.com/Hansen-1988-prediction.htm

Dave Fair
Reply to  Tom Dayton
September 22, 2017 12:39 pm

Tom, skepticalscience’s arm waving cannot hide the fact that global temperatures did not evolve as predicted by Hansen.

Reply to  Tom Dayton
September 22, 2017 12:03 pm

Tom,
That’s almost an urban legend. Schmidt’s graph here shows results of well-documented models and their equally well-documented forecasts. Eventually the forecast-observation match will be published in some form of peer-reviewed report. There is nothing remotely like that for Hansen’s long-ago papers.
There is almost nothing documenting and reviewing the 1981 paper. Here is the more important one: “Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model” by Hansen et al., Journal of Geophysical Research, 20 August 1988.
Its skill is somewhat evaluated in “Skill and uncertainty in climate models” by Julia C. Hargreaves, WIREs: Climate Change, July/Aug 2010 (ungated copy). She reported that “efforts to reproduce the original model runs have not yet been successful”, so she examined results for the scenario that in 1988 Hansen “described as the most realistic”. How realistic she doesn’t say (no comparison of the scenarios vs. actual observations); nor can we know how the forecast would change using observations as inputs.
Two blog posts discuss this forecast (for people who care about such things): “Evaluating Jim Hansen’s 1988 Climate Forecast” (Roger Pielke Jr, May 2006) and “A detailed look at Hansen’s 1988 projections” (Dana Nuccitelli, Skeptical Science, Sept 2010).
Before popping the corks and revamping the world economy on the basis of Hansen’s 1988 paper, there are some questions needing answers. Why no peer-reviewed analysis? What does the accuracy (if any) of his 1988 work tell us about current models? Why so many mentions of Hansen’s 1988 paper — and few or no reviews of the models used in the second and third Assessment Reports? Those would provide multi-decade forecast records.
Climate scientists can restart the climate policy debate & win: test the models! Especially note the last section: cites and links to a wide range of papers on validation of climate models. Mostly weak tea, not an adequate foundation for anything — let alone policies to save the world.
The small literature on this vital subject tells us a lot about the situation.

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:48 pm

Here is just one excellent discussion of Hansen et al. 1988’s projections’ skill; after you read the Basic tabbed pane, read the Intermediate one and then the Advanced one: https://skepticalscience.com/Hansen-1988-prediction.htm

D. J. Hawkins
Reply to  Editor of the Fabius Maximus website
September 22, 2017 3:56 pm

Your links to Dr. Pielke’s articles appear to be broken.

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 5:14 pm

This comment by Tom Curtis conveniently links to multiple and more recent evaluations of which of Hansen’s scenarios most closely match the actual forcings: https://skepticalscience.com/Hansen-1988-prediction-advanced.htm#107965

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 4:38 pm

Forrest: The scenarios in Hansen’s projections are not things to be matched. They included (i.e., varied across scenarios) only greenhouse gas forcings (not just CO2, but no reflective aerosols). Also, the sensitivity emerging from his model was too high, as has been known and acknowledged by climatologists for many years. Details: https://skepticalscience.com/Hansen-1988-prediction-advanced.htm

Sixto
Reply to  Tom Dayton
September 22, 2017 5:03 pm

Hilariously, Hansen’s scenario for drastic reductions in GHGs has come closest, although still far off the mark, despite his maximal scenario for CO2 actually having been realized.
Use that scenario, and there is less than no coincidence between his simpleminded extrapolations and reality.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 5:11 pm

Forrest: Here is more recent detail about the actual forcings versus Hansen’s scenarios: “Overall in order to evaluate which scenario has been closest to reality, we need to evaluate all radiative forcings. In terms of GHGs only, the result has fallen between Scenarios B and C. In terms of all radiative forcings, the result has fallen closest to and about 16% below Scenario B. Scenario A is the furthest from reality, which is a very fortunate result.” https://skepticalscience.com/hansen-1988-update-which-scenario-closest-to-reality.html

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 5:29 pm

Forrest: By “not things to be matched” I meant merely that, from your questions about which scenario I expect Hansen’s projections to match in the future, I got the impression you thought Hansen was predicting which scenario would happen. He was not. His model did not. His model was intended to project the temperature response to whatever greenhouse gas forcings actually happen in the future. Lacking a time machine or infinite computer time, he made three scenarios of those forcings, fully expecting that none of them would actually come true in its details, but hoping that their range would encompass the future reality.
Hansen, like all other climate modelers, made assumptions of volcanic eruptions just as their best guesses. For example, the CMIP web site gives instructions to CMIP modelers, on what to assume for those aspects. They are not trying to predict volcanic eruptions. They are merely trying to put something reasonable into their climate models, knowing full well that those aspects will not be correct.
El Nino and La Nina are not forcings. They are not entered into Hansen’s or anyone else’s models. They are emergent phenomena as some of the internal variability of the climate system, largely responsible for the wiggles of the individual model runs.

Tom Dayton
Reply to  Tom Dayton
September 22, 2017 5:47 pm

Forrest: I don’t know which of Hansen’s scenarios will turn out to be closest to reality. I haven’t even considered it. I don’t think about Hansen’s scenarios. Instead I consider the more thorough and up-to-date RCPs: https://skepticalscience.com/rcp.php.
I don’t know in what ways reality changed in 2013-2014. Don’t much care. Focused on newer, better models.
El Nino and La Nina are only weather, not climate. They greatly affect short-term atmospheric and ocean temperatures, but they are unforced internal variability. They do not affect the long-term trend. They cancel each other out over the long term, and anyway merely shift the forced energy accumulation between oceans and air. See my comment at https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/#comment-2617732 and the one following that.

Reg Nelson
September 22, 2017 9:21 am

CMIP3 individual realisations (20C3M+SRESA1B)
What is this?
I thought 20 was the number of models, but I googled it and there were 25 models used in CMIP3.
SRES stands for Special Report on Emission Scenarios. According to the IPCC website:
“Since the SRES was not approved until 15 March 2000, it was too late for the modelling community to incorporate the final approved scenarios in their models and have the results available in time for this Third Assessment Report.”
So it appears that SRESA1B was not part of the original forecast.

Reg Nelson
Reply to  Reg Nelson
September 22, 2017 9:26 am

I was correct. From Gavin’s site:
“Model spread is the 95% envelope of global mean surface temperature anomalies from all individual CMIP3 simulations (using the SRES A1B projection post-2000). Observations are the standard quasi-global estimates of anomalies with no adjustment for spatial coverage or the use of SST instead of SAT over the open ocean. Last updated Feb 2017.”
So Gavin is presenting information as if it was part of the original CMIP3 forecasts, when it was not.

AndyG55
Reply to  Reg Nelson
September 22, 2017 11:35 am

Enough models to hit the side of a barn. !!
Then move the barn as necessary. !

Kaiser Derden
September 22, 2017 9:21 am

I work in the financial industry, and do you know what we call a modeler whose model is off by 50% or more every year? A barista … or a waiter …

September 22, 2017 9:26 am

The basic premise here is false. The CMIP5 model set (102 runs from 32? different model groups) is hindcast before YE2005 (with parameter tuning) and forecast from Jan 1, 2006. CMIP3 is hindcast through 2000 and forecast from 2001. It is not when the models were run; it is the initialization date that determines forecast/hindcast. Kummer should have known this.
Christy’s 29 March 2017 publicly available congressional testimony shows both graphically and statistically that all but one CMIP5 run failed to even get close to balloon, satellite, and reanalysis estimates, all of which are in good agreement with each other. The lone CMIP5 exception reasonably tracking reality is the Russian model INM-CM4. It has higher ocean thermal inertia, lower water vapor feedback, and low sensitivity.

Reply to  ristvan
September 22, 2017 9:53 am

ristvan,
“It is the initialization date that determines forecast/hindcast.”
That’s the practice in climate science. It’s one of their quaint customs that have resulted in three decades of chasing tails in the public policy debate.
In finance model failures mean unemployment. So the hindcast/forecast label is based upon the date of the model — to avoid influence of tuning the model to match past results.
Public policy issues concern spending trillions of dollars, the path of the national and world economies, and allocation of scarce resources against the many serious threats the world faces. The standards are higher than the usual academic debate.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 10:46 am

Ristvan’s comment reminds me of what I consider the ur-narrative of the climate change policy debate — explaining why it failed. It’s a story, FWIW.
My first encounter with the climate wars was soon after James Hansen’s boffo Senate testimony in 1988. He spoke before the Quantitative Methods Group of the San Francisco Society of Security Analysts. These lunches of 20 or 30 included some of the finest mathematicians in the nation — Wall Street pays to get the best (I was a very junior member). They laughed at Hansen’s presentation of hindcast as definitive evidence, and ripped his methods. It was a typical firefight among academics, but brutal to watch.
As always, what happened afterwards is the important part. Hansen had received feedback from some smart experts. So, with the future of the world at stake, did he go back to his office and revise his presentation to respond to their critique? Right or wrong, convincing people means responding to their objections — not just repeatedly saying “We’re right and you’re wrong.”
Nope. He ignored them. (Now he would be accompanied by a chorus of Leftists who would chant “denier denier” when anyone raised an objection.)
Three decades later we’re still hearing the exact same kind of presentations. Rebuttals by experts of various kinds are met by screams of “deniers!” Requests for independent verification — second opinions by unaffiliated experts — are ignored. This is not the behavior of people who believe the world is at risk.
For more about this see How we broke the climate change debates. Lessons learned for the future.

Dave Fair
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:04 pm

Look, the models do not reflect early 20th Century cooling and warming. Christ! They can’t even get history straight.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:26 pm

Public policy issues concern spending trillions of dollars, the path of the national and world economies, and allocation of scarce resources against the many serious threats the world faces. The standards are higher than the usual academic debate.

You are making a compelling case for minarchism. The standards should be higher:
Gas temperature is driven by mass, pressure and volume (pV=nRT), not by composition. This can be observed, measured and verified in a laboratory and in the planets of the solar system. The impact of parts-per-million variations in atmospheric composition on temperature is scientifically comparable to homeopathy.
You are free to disagree with it, but while we are discussing public policies:
Anthropogenic climate change policy is anti-mankind by its own definition. As such, it is substandard by any measure. At best, it’s a waste of scarce shared resources and, at worst, against human rights as the UN defines them. Although some of the intentions may be good, they pave the road in an oppressive enough direction to trigger resistance.

Curious George
Reply to  ristvan
September 22, 2017 10:28 am

Did IPCC publish this graph in 2007, or did authors re-create it only in 2017?

Reply to  Curious George
September 22, 2017 10:49 am

George,
“Did IPCC publish this graph in 2007, or did authors re-create it only in 2017?”
The post contains the relevant graph from AR5. I don’t know what graph was in AR4.
Why does it matter? The data are the evidence, not the graphic presenting it.

Curious George
Reply to  Curious George
September 22, 2017 11:09 am

It matters because they did not publish a forecast 10 years ago. Publishing it in 2017 makes it a hindcast, not a forecast.

Reply to  Curious George
September 22, 2017 11:23 am

George,
“Publishing it in 2017 makes it a hindcast, not a forecast.”
You appear unclear about how this works. The forecast was published in AR4 and other publications back then. The graph comparing the forecast with observations through now has to be published now.

Curious George
Reply to  Curious George
September 22, 2017 11:26 am

It is an old trick. Lord Knowitall to butler James:
– Think of a number between 1 and 10.
– Eight, Your Lordship.
– I knew you would say Eight. Read a note in the flower pot on the windowsill.
– “I knew you would say Eight”. How did you do it, Your Lordship?

Curious George
Reply to  Curious George
September 22, 2017 11:28 am

Where in AR4 did they publish it? The closest thing I found was FAQ 8.1 Fig 1, and that one has wildly different temperature data.

Reply to  ristvan
September 22, 2017 11:19 am

Forrest,
“Talking about tuning, isn’t that one of the core problems with the models?”
Not at all. “Tuning” is a factor to consider in the validation of models. The easy solution is to count as “forecasts” only those predictions made after the model was created.
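For concreteness, here is a minimal sketch of that rule (hypothetical numbers throughout; the point is only that skill is scored on data from after the model’s creation date):

    import numpy as np

    # Hypothetical annual anomaly series (deg C), 1990-2017; none of these
    # numbers are real data.
    years = np.arange(1990, 2018)
    model = np.linspace(0.2, 0.8, years.size)                    # model output
    obs = model + np.random.default_rng(0).normal(0, 0.1, years.size)

    cutoff = 2006   # model creation date: at/before = hindcast, after = forecast
    forecast = years > cutoff

    # Skill is judged only on the out-of-sample (forecast) period.
    rmse = np.sqrt(np.mean((model[forecast] - obs[forecast]) ** 2))
    print(f"out-of-sample RMSE: {rmse:.3f} deg C")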

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 5:51 pm

Forrest, here is a recent article on tuning: http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-15-00135.1

Clyde Spencer
September 22, 2017 9:58 am

Larry Kummer,
Let’s look at the CMIP3 ‘prediction’ another way. If the “CMIP3 individual realizations” were a chart of stock values of a group of similar industry stocks (colored lines), and the black line were the predicted stock values from your financial advisor’s industry model, how would you fare investing in the industry group?
From some initial investment in year 1998, when the investments and model are equally valued, you would promptly lose a significant amount; it would take about two years before your investment was again on a par with the prediction. For about 5 or 6 years, the investments and prediction would dance around each other, with the less volatile prediction doing slightly better on average. Then, starting about 2007, your investment values would dive and not recover until about 2014. If you were buying and selling during this time, you could have lost a lot of money. If you bought during downturns and sold on upturns, you would definitely be in the negative realm. If you sold on downturns with a stop-loss order, and bought on upturns, you might see some gains, but that would depend on timing and your tolerance for pain. Indeed, the transient ‘recovery’ about 2015 can reasonably be expected to be followed by another downturn (which has happened already), and it will probably go lower still.
I think that most investors would conclude that the ‘Black Line’ prediction, were it real, would be a better overall performer (running hotter, especially between about 2007 and 2014) than the colored-line industry group. Really, there is comparable performance only between about 2001 and 2007 – six years, not 10! If there is an analogy between the predictive ability of any model and faith in the performance of a stock, then I think that most investors would be disappointed with this model. Only the general trends are similar, not the actual values at any point in time.
While I offer the analogy as a way to think about temperature changes detached from ideological commitments, it shouldn’t be forgotten that all the ‘solutions’ to the claimed anthropogenic ‘problems’ will cost money. That is, we will be making future investments, and we should be certain that the models we follow are reliable.

Reply to  Clyde Spencer
September 22, 2017 10:53 am

Clyde,
That was my business for 30 years, so I understand the logic. I don’t see how it applies to climate forecasts.
Forecasts are evaluated using statistical tools, not word analogies. In fact, investments are now evaluated using statistical tools — which is why we now know that few (perhaps none) can outperform passive strategies. This was not clear for decades using the kind of word pictures and chart junk that used to be investment analysis.

Clyde Spencer
Reply to  Editor of the Fabius Maximus website
September 22, 2017 6:15 pm

Larry,
OK, my words were obviously wasted on you. I was trying to explain why I would not consider the graph to represent the level of skill many demand when their money is at risk.
I went back and re-read the section (and the links) where you make the claim “This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years.” You further claim, “The graph uses basic statistics,…” I’m sorry, but I don’t see your claimed “basic statistics.” I see a line that supposedly represents an average of a number of ‘projections,’ accompanied by a claimed 95% envelope, and several lines that represent smoothed averages of different temperature data sets. That’s it! No information on SD of the data sets, no information on correlation between the lines, no information on the variance of the slopes, etc. That is, your claim of “basic statistics” is without substance unless you want to call graphing lines “basic statistics.”
Call me unconvinced. However, let me close with a quote from the chapter on the evaluation of climate models from Working Group I, AR5: “Although future climate projections CANNOT BE DIRECTLY EVALUATED, climate models are based, to a LARGE EXTENT, on verifiable physical principles and are able to reproduce many important aspects of past response to external forcing.” You have more faith in the skill of ‘projections’ than the authors of chapter 9.
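To show how little work the missing statistics would have been, here is a sketch (with made-up stand-in arrays, since the graph’s underlying numbers aren’t published with it):

    import numpy as np

    # Hypothetical series standing in for the ensemble mean and observations;
    # not real data.
    rng = np.random.default_rng(1)
    t = np.arange(1998, 2018)
    model_mean = 0.02 * (t - t[0]) + 0.3
    obs = model_mean + rng.normal(0, 0.08, t.size)

    print("SD of residuals:", np.std(obs - model_mean, ddof=1))
    print("correlation:", np.corrcoef(model_mean, obs)[0, 1])
    # Trend slopes (deg C per year) via ordinary least squares.
    print("trends:", np.polyfit(t, model_mean, 1)[0], np.polyfit(t, obs, 1)[0])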

knr
September 22, 2017 10:02 am

How about holding climate ‘science’ to the same ethical, professional, and academic standards as other sciences? Rather than thinking it is OK to have ‘standards’ in all these areas that would see an undergraduate get a big fat ‘F’ for using them in an essay?

Reply to  knr
September 22, 2017 10:54 am

knr,
Could you please state your objections in a clearer form? I don’t speak Rant.

nn
September 22, 2017 10:19 am

Out of hundreds of divergent hypotheses, one of them was bound to produce a correlation with reality, eventually. Still, prediction and reproduction are the hallmarks of viable science.

Reply to  nn
September 22, 2017 10:55 am

nn,
“Out of hundreds of divergent hypotheses”
The major modeling system now is CMIP5. As the number ‘5’ suggests, the IPCC has not been using hundreds of hypotheses/models.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:25 am

NN,
“The number of models would be measured by the number of strands of spaghetti, wouldn’t it?”
That’s why observations are best compared with the ensemble mean plus the confidence range.
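A sketch of that comparison, with a hypothetical stack of runs; the ensemble mean and the 95% envelope are just the row-wise mean and percentiles:

    import numpy as np

    # Hypothetical stack of model runs (one row per run); not real CMIP output.
    rng = np.random.default_rng(2)
    runs = rng.normal(0, 0.1, (40, 20)).cumsum(axis=1)

    ens_mean = runs.mean(axis=0)
    lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)    # 95% envelope
    obs = ens_mean + rng.normal(0, 0.05, runs.shape[1])  # stand-in observations

    inside = (obs >= lo) & (obs <= hi)
    print(f"years inside the 95% envelope: {inside.mean():.0%}")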

AndyG55
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:31 am

The number “5” does not represent anything to do with the number of models used (300, iirc)
Why the deliberate mis-direction?

AndyG55
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:32 am

30, not 300 !!

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:40 am

Andy,
“The number “5” does not represent anything to do with the number of models used (300, iirc)”
Not everything in climate science can be reduced to kindergarten level. Read about the CMIP project to learn what it is, and what the different generations mean.
Wikipedia is a good starting place. For more detail see their website:
http://cmip-pcmdi.llnl.gov/

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:41 am

Each strand of spaghetti is one model run. Some models are run multiple times, so there are fewer models than strands of spaghetti. You can look at the CMIP site for details.

AndyG55
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:32 pm

“As the number ‘5″ suggests, the IPCC has not been using hundreds of hypotheses/models.”
The number “5” suggests NOTHING of the kind.

AndyG55
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:34 pm

“Not everything in climate science can be reduced to kindergarten level”
Yet you keep managing to. !!
“As the number ‘5″ suggests”……..
The “5” has zero meaning related to the number of models, why did you imply it did. ?

Bruce Cobb
September 22, 2017 10:53 am

“the gridlock has left us unprepared for even the inevitable repeat of past extreme weather”
Ah, so this is about weather, not climate. Thought so. And somehow, man can control the weather. Good luck with that.

Reply to  Bruce Cobb
September 22, 2017 10:57 am

Bruce,
“so this is about weather, not climate.”
Weather (e.g., storms, drought) is how we experience climate.
“And somehow, man can control the weather.”
That’s quite a delusional summary of the debate.

A C Osborn
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:00 pm

That is an Exact summary of the reduce-CO2 debate, which is what the IPCC is all about.

hunter
Reply to  Editor of the Fabius Maximus website
September 24, 2017 4:32 am

Your definition of weather gives the climate faithful a case of the vapors, followed by the end of discussion.
And your point about how the cage of climate obsession leaves us less prepared is likely proof that nearly all money spent on climate science and renewables is wasted.

September 22, 2017 10:56 am

Forrest,
“The first thing I noticed was that the grey area for hindcasts seems to have a similar range to the forecasts. Surely they should match known figures better than they do.”
No. A closer match would be clear evidence that they were improperly tuned to match past observations.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 11:27 am

Forrest,
“How do you distinguish between the two?”
By testing the model by comparing observations with forecasts, not hindcasts. As Popper said, successful predictions are the gold standard for science.

Dave Fair
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:14 pm

Forrest, I’m not sure that a model that doesn’t exactly reflect the past is a very good model. Using parameter fiddling is dishonest.

HAS
Reply to  Editor of the Fabius Maximus website
September 22, 2017 8:49 pm

In fact one should pay more attention to the past when evaluating the future.
All these graphs show is the flexibility anomalies offer when matching models to temp series. What exactly is being claimed to be accurately forecast? Obviously not the absolute value of the anomaly, since anomalies can be adjusted across a wide range simply by choosing different base periods, as the following graph shows:
http://i68.tinypic.com/nv4pkp.jpg
Assuming I downloaded the data sets correctly, ’20C3M A1B’ is the same as what Gavin shows, but extended back to 1900. The anomalies are relative to 1980-1999. HadCRUT4 anomalies are relative to 1961-1990, so they need to be adjusted to the same base, which is what ‘Adj HadCRUT4’ shows. All well and good; the forecast looks pretty good (forgetting so-called error distributions), as does the period Gavin includes. But 1930-1950 doesn’t look that good. This calls into question how well the forecast model is really working.
And just to show what can be done with anomalies, we could take the not unreasonable view that we’re looking at the 20th century for our base period, and adjust HadCRUT4 so it lines up on the middle two decades of 20C3M (1940-1959). That’s ‘Adj2 HadCRUT4’ on the graph. You will see it’s now getting well off beam over the forecast period.
The conversion of the various series to absolutes would give one justifiable stake in the ground, but the range of model temps and the consequent error ranges would cover most conceivable futures on a decadal scale.
Largely bread and circuses IMHO.
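The re-baselining itself is mechanical; a sketch with stand-in numbers, since the adjustment is just subtracting the series’ mean over the chosen base period:

    import numpy as np

    def rebaseline(series, years, start, end):
        # Express anomalies relative to the series' own mean over [start, end].
        base = series[(years >= start) & (years <= end)].mean()
        return series - base

    # Hypothetical anomaly series, 1900-2017; not actual HadCRUT4 values.
    years = np.arange(1900, 2018)
    hadcrut = np.random.default_rng(3).normal(0, 0.1, years.size).cumsum()

    adj1 = rebaseline(hadcrut, years, 1980, 1999)  # match the 20C3M base period
    adj2 = rebaseline(hadcrut, years, 1940, 1959)  # the 'Adj2' style alternative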

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:15 pm

HAS: We care about trends. Trends are slopes. The baseline is irrelevant to the trend. Changing the baseline merely moves the line up and down the y axis without affecting the trend. That’s elementary geometry. If you want to know more about trends and anomalies versus absolute temperatures, read this recent explanation: http://www.realclimate.org/index.php/archives/2017/08/observations-reanalyses-and-the-elusive-absolute-global-mean-temperature/
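The point is easy to verify numerically; a sketch with a made-up series: shifting every value by a constant, which is all a baseline change does, leaves the fitted slope untouched.

    import numpy as np

    # Hypothetical anomaly series with a built-in 0.02/yr trend plus noise.
    t = np.arange(50)
    y = 0.02 * t + np.random.default_rng(4).normal(0, 0.1, t.size)

    slope_original = np.polyfit(t, y, 1)[0]        # one baseline choice
    slope_shifted = np.polyfit(t, y + 0.37, 1)[0]  # baseline moved by a constant
    assert np.isclose(slope_original, slope_shifted)  # slope is unchanged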

HAS
Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:38 pm

Tom, thanks, I know that.
But you’ll appreciate that evaluating the output of models is a different problem from the one your link addresses. And unfortunately, trends in anomalies are even more fraught when it comes to evaluating forecasts from multiple models.
As I said bread and circuses, but I guess it’s Twitter.

Tom Dayton
Reply to  Editor of the Fabius Maximus website
September 22, 2017 9:52 pm

HAS: Your reply makes no sense.

HAS
Reply to  Editor of the Fabius Maximus website
September 22, 2017 10:49 pm

Tom, there were two sentences of substance in what I wrote.
The first said that your link didn’t deal with the problem at hand: evaluating the output of climate models against global estimates of temperatures. Your link deals with the construction of the anomaly and its rationale.
I’m happy to explain the difference if it isn’t obvious.
The second said that testing trends in anomalies derived from ensembles is fraught, particularly the problem of treating a simple linear combination of the output of non-linear models as a well-behaved statistic over time and across models.
Putting out stuff like this graph, and making bold claims based on it, glosses over the complexity of the science.
Hence my last sentence.

Sara
September 22, 2017 11:25 am

Okay, Fabius, you can throw all the charts and graphs you want to at me, but you still won’t get my money. One-half of one degree, centigrade or Fahrenheit, is irrelevant, whether long-term or short-term. Maybe you don’t know the difference between an omega blocking high dredged up by Irma and supported by Jose, blocking cooler autumn breezes coming from the northwest, but I DO. And, Sport, It’s W-E-A-T-H-E-R.
The exaggerated ups and downs of temperature swings don’t amount to a hill of beans because if they were put in proper size instead of a 3×3 inch twitterpated graph, they’d be a nearly flat line. Squeezing and distorting the lack of real difference in average temperature is nothing but a grab and a passionate plea for MORE MONEY, MORE FUNDING, MORE THIS AND THAT including attention.
If you stretch those charts of yours out to full-size, meaning actual lengths of time, those exaggerated ups and downs will flatline, just like a bad EKG readout.
If the temperature changes 10 degrees from one day to the next or one month to the next, that is weather. It is NOT climate. And unless you can physically prove otherwise (which you haven’t), you’ve done nothing but convince me that you’re a crank and a carnival barker at a county fair.
Just so you know, we’ve been in a solar minimum since 2008, when the Sun blew a wad and went to sleep until the fall of 2010, and did NOT come back to its previous level of activity, which surprised NASA. I have that all recorded, Kiddo. We’re in a solar minimum and will be for a while. You have no control over that. Humans can’t even control their own digestive systems or their emotions, so what in the blue-eyed bleeping world makes you think any of us puny puddle hoppers can control the freaking climate?
I’m truly curious to know if you understand the SIZE of this planet. I’m not sure that you do. Believe me, SPORT, it has its own agenda and it doesn’t give a crap about yours. But if you really ARE interested in reducing carbon emissions, which plants desperately need to stay alive, you could help the Cause of Plants by wearing a rebreathing device.
Smooches!!!!
(Snipped) MOD

Reg Nelson
September 22, 2017 11:57 am

Gavin’s graph doesn’t appear to have current temperature data (or has incorrect data). According to Wood for Trees, the HADCRUT4 global temperature anomaly at the beginning of 2007 was 0.8 C; it is now at 0.6 C.
http://www.woodfortrees.org/plot/hadcrut4gl/from:2007/to:2017

Bruce Cobb
September 22, 2017 12:03 pm

Ah, so climate is “an experience”. Got it.
Right. The “debate” is about whether and to what degree man controls climate, which would obviously affect weather, which is how we “experience” climate.
Talk about delusional.

Reply to  Bruce Cobb
September 22, 2017 12:10 pm

Bruce,
“Ah, so climate is “an experience”.”
Yes, we “experience” weather — as we do all other real-world phenomena.
Sadly, WUWT is not moderated to ban trolls.

AndyG55
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:48 pm

Otherwise you would not be here.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 3:35 pm

OK, we have a Zinger of the Day winner!

Dave Fair
Reply to  Bruce Cobb
September 22, 2017 12:51 pm

Mushing dogs in the Alaska winter I wore heavy clothing. Scooping horse droppings in the Las Vegas summer I wear shorts. That’s how I “experience” climate.
Had it not been for my wife, I would not have been doing either. Does that mean I could have avoided climate change, through divorce?

JohnKnight
September 22, 2017 12:03 pm

Ah, gridlock man . . ; )
Larry, I am convinced you are a totalitarian control freak, who will not be happy ’till our “gridlock” problem goes the way of China’s . .

Reply to  JohnKnight
September 22, 2017 12:07 pm

John,
“Larry, I am convinced you are a totalitarian control freak, who will not be happy ’till our “gridlock” problem goes the way of China’s . ”
That’s quite delusional. The only policy remedies given here are standard parts of the Republican platform for many decades: better infrastructure and diversifying our energy sources.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:35 pm

Has a majority of GOP members of Congress voted for “renewables” subsidies?
We’ll see if the now GOP-controlled Senate and a nominally GOP president continue this insane folly.

JohnKnight
Reply to  Editor of the Fabius Maximus website
September 22, 2017 12:36 pm

There’s nothing stopping the people of say, New York, from improving THEIR infrastructure, Larry, and diversifying THEIR energy sources, if THEY feel that is important . . don’t need to stomp out all “gridlock” (AKA disagreement) in America over the likelihood of catastrophic global warming anytime soon . .

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 4:38 pm

John,
Right on!
The best example is NYC. It was totally predictable that New York would yet again be hit by a hurricane or tropical storm, which might, as Sandy did, arrive at high tide. But instead of building a storm surge barrier, as Providence, RI did after the bad hurricanes before the 1960s, NYC preferred to blame Sandy’s damage on “climate change’ and beg for federal handouts. The cost of the barrier would have been less than Sandy’s damage, but environmentalists feared a barrier would upset the fragile ecology of the Bay. Yeah, right!

Sixto
September 22, 2017 12:07 pm

Let’s see ocean heat content from the 1920s, ’30s, ’40s and ’50s.
You know, when the Siberian coast was ice free in summer and even a sneaky German ship, obviously unaided by Soviet icebreakers, was able to steam to Japan during WWII.

Reply to  Sixto
September 22, 2017 12:56 pm

Sixto,
“Let’s see ocean heat content from the 1920s, ’30s, ’40s and ’50s.”
That would be nice. Unfortunately, global OHC data gets rapidly sketchy as one goes back in time from 2004-06, when the Argo system was deployed.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:00 pm

Editor,
Yes, indeed it would be nice, but inconvenient for Warmunistas.
If the 1960s can be constructed, then why not previous decades? Starting at the low point of the postwar cooling is misleading, apparently intentionally so.
Just like all CACA presentations.

Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:09 pm

Sixto,
“If the 1960s can be constructed”
Data goes far farther back than the 1960s, but with rapidly diminishing quality.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:36 pm

Larry,
US submarine data did improve with the nuclear fleet, but the US and other navies have lots of temperature data from previous decades.
It appears to me that the reconstruction too conveniently starts when ocean heat content was at its lowest since World War I.
Your pro-CACA bias is blatant.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 4:09 pm

Larry,
Check this out, if you really do imagine that OHC from before the ’60s can’t be reconstructed:
https://wattsupwiththat.com/2017/09/21/2014-hgs-presentation-climate-change-facts-and-fictions/comment-page-1/#comment-2616995
Yet again you show yourself a shill for CACA pukers.

4 Eyes
September 22, 2017 12:24 pm

Averaging models is meaningless. The way forward is to pick 2 or 3 models that actually history-match temperatures to date and then run those models out for 100 years. If you use models that bear no resemblance to reality in your averaging, you are fooling yourself or fooling others. There seems to be a lot of effort put into fooling others.
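A sketch of that selection rule, with made-up models and observations: score each model’s hindcast against the record and keep only the best matches.

    import numpy as np

    # Hypothetical observed record and 25 hypothetical model hindcasts.
    rng = np.random.default_rng(5)
    obs = rng.normal(0, 0.1, 30).cumsum()
    models = obs + rng.normal(0, 0.2, (25, obs.size))

    # Rank models by hindcast error; keep the best few for the long run.
    rmse = np.sqrt(((models - obs) ** 2).mean(axis=1))
    keep = np.argsort(rmse)[:3]
    print("models selected for the 100-year projection:", keep)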

John Smith
September 22, 2017 12:51 pm

I read somewhere that when they do a hindcast they re-align their models to actual measurement data every 10 years because of unpredictable ‘natural variation’ that would otherwise cause a deviation. If this is true, it is jaw-dropping, because it would make almost any model correlate reasonably well with the past. Can anyone confirm this?

Tom Dayton
Reply to  John Smith
September 22, 2017 1:51 pm

No, it’s not true. Hindcasts use the actual forcings that occurred, because those are known (having already occurred). Climate models take forcings as inputs.

Snarling Dolphin
September 22, 2017 12:57 pm

One successful 10-year forecast to match adjusted temperatures breaks the gridlock? God, let’s hope not Fabio!

Reply to  Snarling Dolphin
September 22, 2017 1:08 pm

Dolphin,
“One successful 10-year forecast to match adjusted temperatures breaks the gridlock”
Let’s replay the tape to see that the post actually says the exact opposite of what you claim.

“The gridlock might be breaking in the public policy response to climate change. …This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years. …
“It shows another step forward in the public policy debate about climate change …
“This is one in a series of recent incremental steps forward in the climate change policy debate. Here are two more examples of clearing away relatively minor issues. Even baby steps add up. …
“Perhaps now we can focus on the important issues. …”

September 22, 2017 12:58 pm

Regarding the Krugman suggestion, it’s a good one. And not unique to him (e.g., Feynman makes it here):
https://www.youtube.com/watch?v=MIN_-Flswy0
I wonder if there isn’t an attempt to kick the hornets’ nest by choosing authorities that will be perceived as perverse choices, because then one can paint oneself as a more rational, cooler head when compared to the irritated people reflexively flinging abuse.

Reply to  tarran
September 22, 2017 1:01 pm

tarran,
“I wonder if there isn’t an attempt to kick the hornets nest by choosing authorities that will be perceived as perverse choices; because then one can paint oneself as a more rational, cooler head when compared to the irritated people reflexively flinging abuse.”
(1) In the real world, smart people have useful insights even if they happen to disagree with you on some things.
(2) You should read before making your knee-jerk criticism. Citing Krugman in support of skeptical positions is a powerful argument — something like “admission against interest” in the courtroom.
https://www.nolo.com/dictionary/admission-against-interest-term.html

Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:24 pm

And maybe you should reread the first sentence of my comment “Mr I really should practice what I preach”. 😉

September 22, 2017 12:58 pm

A nice demonstration of how much research about the effects of global warming has gone bonkers. Before reading, note that many patent examiners work remotely — and probably a large fraction work in places with air conditioning.
“Too hot to reject: The effect of weather variations on the patent examination process at the United States Patent and Trademark Office” by Balázs Kovács in Research Policy, in press.
Highlights
This paper demonstrates that external weather variations affect whether patent applications are allowed or rejected at the United States Patent and Trademark Office (USPTO).
The analyses are based on detailed records of 8.8 million “allow”/”non-final reject”/”final reject” decisions made at the USPTO between 2001 and 2014.
Temperatures warmer than usual in that week of the year lead to higher allowance and lower final rejection rates.
Higher cloud coverage than usual in that week of the year leads to lower final rejection rates.
Abstract
This paper documents a small but systematic bias in the patent evaluation system at the United States Patent and Trademark Office (USPTO): external weather variations affect the allowance or rejection of patent applications. I examine 8.8 million reject/allow decisions from 3.5 million patent applications to the USPTO between 2001 and 2014, and find that on unusually warm days patent allowance rates are higher and final rejection rates are lower than on cold days. I also find that on cloudy days, final rejection rates are lower than on clear days. I show that these effects constitute a decision-making bias which exists even after controlling for sorting effects, controlling for applicant-level, application-level, primary class-level, art unit-level, and examiner-level characteristics. The bias even exists after controlling for the quality of the patent applications. While theoretically interesting, I also note that the effect sizes are relatively modest and may not require policy changes from the USPTO. Yet, the results are strong enough to provide a potentially useful instrumental variable for future innovation research.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 3:29 pm

This lame attempt to ingratiate yourself with genuine skeptics just comes across as smarmy.
Of course there are reams of worthless journal papers which assume the worst bogus CACA forecasts, then try to come up with bizarre effects therefrom across a host of different phenomena.
And if you want to study ground squirrels or great horned owls, then you have to tie your research to “climate change.” Say the magic words and win a grant.

Philo
September 22, 2017 1:25 pm

Larry Kummer: a 10-year “successful” climate forecast is just silly. By definition it has to be at least 30 years (the WMO diktat), or at least 17 years to be a “trend” via Trenberth or whoever.
In any case, global average temperature is a statistical construct, not a measurement or observation, and has little, if any, functional meaning. It is a temperature average of sorts, and doesn’t represent anything physical that might “drive” the climate.
So a 10-year match is simple happenstance and doesn’t mean anything; or, more suspiciously, it could be manufactured. I didn’t see any registry of procedures, data, or code mentioned.

AZ1971
September 22, 2017 1:29 pm

Use sci-hub.cc and the doi to read the revisionist magic by Gavin Schmidt and company. It’s hilarious.

One cause might be the chaotic internal variability of the coupled system of oceans and atmosphere, for example in the tropical Pacific Ocean, or in variability in deep ocean circulation. Alternatively, decadal-scale temperature variations can be a response of the climate system to external influences, such as volcanic eruptions or the solar cycle.

In other climate discussions (including the UN IPCC AR5) there is so little variation in solar insolation as to be a non-starter in climate forcing, and as yet there have been no large-scale volcanic eruptions post-2000 that would allow volcanic aerosols to circulate high enough, and in sufficient quantity, to cause a negative forcing. Yet this is precisely what Gavin came up with to explain ‘the pause’?
And people believe these fools? WHY??

Sixto
September 22, 2017 1:32 pm

Larry,
I’ve visited your site and find that I cannot say enough bad things about it.
Surely you’re aware that the Fabian Socialists also took their name from Fabius. And probably you, if not your young Marine collaborator, know that Dalton Trumbo was a Communist. But you might not be aware that Zero Hedge is a Russian propaganda site. One of its two founders, Daniel Ivandjiiski, barred from the hedge fund industry for insider trading, is the son of a former Bulgarian KGB officer whose cover was “journalism”.
I also don’t know to what you refer by claiming that “study of actual scientists disproved” Popper’s “theory”.

Chris Hanley
Reply to  Sixto
September 22, 2017 3:21 pm

The Fabian strategy is to wear down the opponent by attrition, maybe by sheer boredom.

Sixto
Reply to  Chris Hanley
September 22, 2017 3:26 pm

In the latter case, Larry Cunctator has succeeded admirably.

gnomish
Reply to  Sixto
September 22, 2017 3:39 pm

“I also don’t know to what you refer by claiming that “study of actual scientists disproved” Popper’s “theory”.”
especially since popper had no theory and merely rebranded plato’s noumenal essence nonsense.

Sixto
Reply to  gnomish
September 22, 2017 3:58 pm

I don’t see Popper that way at all.
I see him saying what Feynman said in his famous lecture. And what Einstein said succinctly. But being a philosopher, he had to say it all at some length to be taken seriously.
And to his credit, he refined his “theory” as he learned more about real science. Couldn’t agree more though that his thought isn’t a theory, but simply a statement of the real scientific method.
Fabio Cunctator buys into Oreskes’ poisonous attempt to redefine the scientific method. He mistakes Kuhn’s largely justified commentary on the sociology of science for a new take on the scientific method. That science is political and sociological is hardly surprising, but that doesn’t mean that its best practice isn’t the method as elucidated by Einstein, Feynman and Popper.

gnomish
Reply to  gnomish
September 22, 2017 11:14 pm

sixto- if you really like the topic of epistemology (or just want to see popper shredded and flushed) just find mr science.or.fiction’s comment and click his nick.
his worst absurdity was the claim that nothing can be proven.
you’re supposed to take that on faith cuz it was a divine revelation.
he’s a mystic. that is as far from science as it gets.

gnomish
Reply to  gnomish
September 23, 2017 9:48 am

wow… i just looked up cunctator-
” Fabian strategy sought gradual victory against the superior Carthaginian army under the renowned general Hannibal through persistence, harassment, and wearing the enemy down by attrition rather than pitched, climactic battles.”
” “The logo of the Fabian Society, a tortoise, represented the group’s predilection for a slow, imperceptible transition to socialism, while its coat of arms, a ‘wolf in sheep’s clothing’, represented its preferred methodology for achieving its goal.”[9] The wolf in sheep’s clothing symbolism was later abandoned, due to its obvious negative connotations.”
man, that’s kummer, all right. not even trying to hide it…

Chris Hanley
September 22, 2017 2:36 pm

Hallelujah! At last, after years of waffly prevaricating ambiguous posts, The Editor has finally come out of the closet and declared himself the ‘true believer’ that I think most readers knew he was all along — he wasn’t fooling me anyway.

Sixto
Reply to  Chris Hanley
September 22, 2017 3:27 pm

As if there were ever any doubt.

Sixto
Reply to  Chris Hanley
September 22, 2017 3:33 pm

He’s a straphanger on the sc@m.
Trying lamely to have it both ways to draw eyeballs to his pathetic site.

AndyG55
Reply to  Chris Hanley
September 22, 2017 6:25 pm

The “editor’s” pretence of any sort of scientific acumen or knowledge is quite hilarious. 🙂
Brain-washed sludge.

September 22, 2017 2:47 pm

Does anyone know what program Gavin uses to archive current data?
I’m asking because, as I understand it, an archive program is not only designed to record and store data but also to keep the archive as small as possible.
One way to achieve both seemingly contradictory objectives is to run the data (whatever it might represent) through “tests” before a value is actually recorded, that is, saved to the archive.
One “test” could be whether a number is more than +2 above or -1 below the previous archived number.
Any value that is not more than +2 above, or more than -1 below, the previous archived number is dropped from the record of, say, current temperatures.
Past records could be passed through the same “test” but using +1 and -2 for the test.
The actual values that remain could then be passed to another “test” before they are actually recorded.
Obviously, an honest test would be one where the +/- is the same value for all the tests and is set to give a range that reflects reality.
PS Maybe the really “raw” data is gone because it’s been archived via such methods?
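To make the speculation concrete, here is what such a test would look like (entirely hypothetical; I have no evidence any archiving program actually does this):

    # Entirely hypothetical: a sketch of the asymmetric "test" described above,
    # not the behavior of any actual archiving program.
    def archive(readings, up=+2.0, down=-1.0):
        kept = [readings[0]]
        for r in readings[1:]:
            delta = r - kept[-1]
            # Values inside the band are dropped; only moves beyond the
            # thresholds get archived.
            if delta > up or delta < down:
                kept.append(r)
        return kept

    print(archive([10.0, 10.5, 12.5, 11.9, 8.0]))  # -> [10.0, 12.5, 8.0]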

Reply to  Gunga Din
September 22, 2017 2:49 pm

Then the archived data is what is used for the models.

Reply to  Gunga Din
September 22, 2017 3:32 pm

That’s one way it could be adjusted “realtime”.
Set up the tests to only archive values that move in the desired direction.

RichardT
September 22, 2017 2:58 pm

Here is where I would go.
A Modest Proposal (for consideration in advancing to a reasoned posture on climate actions)
Assume:
1. Lomborg’s analysis of the Paris Accord’s provisions for moderating future delta T (minimal impact) and the attendant costs (major negative impacts to GDP) is reasonable.
2. The mean of the last IPCC GCM projections of LT delta T is reasonable to support planning of mitigation actions for future impacts of delta T.
3. The GCM mean projected 2030 (as in the Paris Accord) climate state represents a reasonable benchmark to assess impacts of future delta T to 2030.
4. The UAH measured LT delta T (from the date of the last IPCC to current date) can be used as a basis for projecting forward to 2030.
5. Annual differences in 3 and 4 above form a basis to assess gains and losses in time with respect to the 2030 benchmark state, allowing adjustments to the projections of the future impacts of delta T and associated dates.
Then:
1. Request a high integrity, high competency engineering consensus body (like ASME) empanel a group to define and evaluate realistic impacts associated with potential delta T future states, and prepare a mitigation course of action planning document.
2. Request Lomborg to empanel his economics study group to assess the cost of the ASME proposal with a cost benefit analysis.
3. Assign a US agency (FEMA?) to spearhead implementation planning, as/if appropriate.
Attendant action:
1. Redirect CC funding from GCM model studies to basic research of climate physics, climate history, and higher-quality field measurements (my preferences; you can pick your own).
Benefits:
1. Affords the opportunity for a reasoned engineering assessment of potential impacts due to likely postulated future climate states with courses of action defined and programs, as recommended, formulated and, as needed, acted upon.
2. Creates a postulated position in time to which we can compare to reality and determine the need to accelerate or decelerate (or even abandon) program elements.
3. Dampens the atmosphere of the highly politicized and agenda-driven climate arena.
4. Refocuses monies to basic questions of climate science still needing study.
RT 2017

EternalOptimist
September 22, 2017 3:25 pm

Maybe Mr Fabius would understand the sceptical mindset better if he would list some of the ten year predictions that have failed to materialise.

Sixto
Reply to  EternalOptimist
September 22, 2017 3:31 pm

You mean, polar bears aren’t extinct? Children do know what snow is? Arctic sea ice still exists in summer? Humans aren’t reduced to a few breeding pairs on Antarctica (that one retracted by the Archdruid of Gaia)?

Sixto
Reply to  EternalOptimist
September 22, 2017 3:32 pm

Not to mention Himalayan glaciers. They’re growing in the Karakoram.

Sara
Reply to  EternalOptimist
September 22, 2017 3:44 pm

Maybe Mr. Fabius would understand the viewpoint of others better if he took one decade of data points, stretched those ten years out to include the months (12 per year), and stopped compressing the charts into tiny proportions. I believe the exaggerated temperature line would prove to be almost as flat as a full section of Midwestern cornfield.
I have to deal with realities, not hypotheses and forecasts that turn out to be wrong, so I’m inclined to feel that I have a right to demand a realistic version: 10 years, 12 months per year, or even better, that Figure 4 chart, 1980 to the projected 2020. Since the prediction is not in tens of degrees, but rather in decimal fractions of a degree, I believe that the rather alarming peaks and lows, which appear to be intentionally massive exaggerations created by compressing the chart, will be shown to be as close to a flatline as possible.
I challenge you, Larry, to do that. I despise people who use manipulative language and figures to try to alarm me. It is as despicable as you can get. It’s on the same level as people who call me wanting to sell me a burial plot without asking where I want to be buried, because I could drop dead tomorrow. I tell them ‘so could you’ and hang up on them.
Come on, Larry, take the plunge. Ten years, 12 months per year, or maybe even the entire 40 year span that you show in Figure 4 in your article. Meet the challenge.
Or would you rather I did it for you?

Sixto
Reply to  Sara
September 22, 2017 3:50 pm

I’m guessing that you’ll have to do it for him.
But maybe that’s just me, eternal pessimist.

EternalOptimist
Reply to  Sara
September 22, 2017 3:56 pm

In the olden days, one failed measurement out of thousands of successes meant the end of the hypothesis. In the new Fabian era, one dubious success merits a round of applause and the many failures are ignored.

Sixto
Reply to  Sara
September 22, 2017 4:01 pm

Although it’s only a “success” by cooking the books and nookying with numbers.
It’s beyond science fiction. It’s science fantasy. Only in an alternative universe where the laws that govern this one and the scientific method don’t apply is this pack of lies and drivel a “success”.
Only where success is measured in being able to bamboozle the public and rip them off for more of their hard-earned tax dollars.

Sara
Reply to  EternalOptimist
September 22, 2017 3:46 pm

Please see my response to him below.

Sara
Reply to  Sara
September 22, 2017 3:48 pm

Durn! I was replying to Eternal Optimist, up above.

Sixto
Reply to  EternalOptimist
September 22, 2017 4:10 pm

Join the crowd. It’s hard to find because it’s all in the Cunctator’s imagination.

September 22, 2017 3:40 pm

When you alter the data to fit the model, as has been done here, you cannot also claim that the data has validated the model. This is just yet more supportive propaganda.

Nick Werner
September 22, 2017 3:40 pm

The models do appear to be improving.
Similar to my old watch that’s right twice a day (provided I don’t make any adjustments), the models’ predictive skill looks pretty good twice for every two El Ninos. Apart from that, they still appear to drift away towards the high side of observations.

AndyG55
Reply to  Nick Werner
September 22, 2017 3:42 pm

Not really.
All they do is get rid of the very worst of them, thus reducing the number.
Then move the starting point back to the middle of the model simulations.

AndyG55
Reply to  AndyG55
September 22, 2017 3:44 pm

That gives them several years where the massively adjusted surface data might just align for a while, if they have large NATURAL El Ninos going on

Sixto
Reply to  AndyG55
September 22, 2017 3:46 pm

Best would be to admit that climate can’t yet be modeled, because we don’t know enough about it and probably lack the computing power.
But at least better would be to toss out all model runs with an implied ECS above 2.0 degrees C per doubling of CO2. Better yet would be 1.5, but I don’t think there are any that low.
Real ECS is almost certainly below the lab-derived 1.2 degrees per doubling, thanks to net negative feedbacks.

AndyG55
Reply to  AndyG55
September 22, 2017 3:46 pm

too many scripts running or something

Sixto
Reply to  AndyG55
September 22, 2017 3:53 pm

Andy,
It’s always something. Especially with blog hosters.
Too bad for Fabio the Fab Cunctator that his side has lost, despite his trying to play both sides of the street.

Reply to  AndyG55
September 22, 2017 4:21 pm

And set another “test” to filter the archived data that is retrieved before it goes into the model.
https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2617706

Bob Hoye
September 22, 2017 4:00 pm

Larry
I took a shot at Krugman because he believes that there is a national economy and that it can be “managed”. His next step was to believe that the temperature of the Earth can also be “managed”.
Both concepts show audacity hitherto only attributed to gods.
Three years ago, I gave a short speech to a CMRE dinner in Manhattan. It can be Googled:
Video: Bob Hoye CMRE Speech. It reviews how authoritarians are using central banking, as well as imagined climate scares, to gain political control beyond constitutional norms.
Some of the lines are amusing.

Sixto
September 22, 2017 4:42 pm

Larry Cunctator,
Here’s the deal.
You reject the scientific method, and so imagine that “consensus” science is worth heeding, while at the same time recognizing how government funding controls “science”. But the scientific method can’t be repealed, because it’s based upon objective reality.
We should spend no more on bogus, GIGO “climate science” because it has been repeatedly shown false, in the best Popperian tradition. As Einstein noted, that’s all that matters. No matter how many third-rate scientists, historians of science, sociologists, psychologists, science communicators and Australian cartoonists agree, Mother Nature says “Huh-uh!”, so they all lose.

Sixto
Reply to  Sixto
September 22, 2017 5:05 pm

IOW, CACA is a crock, and you and your colleagues have wasted your time and ours with your disgusting blog.

ENRICO BELMONTE
Reply to  Sixto
September 22, 2017 10:10 pm
Glenn E Stehle
Reply to  Sixto
September 22, 2017 5:16 pm

NeverTrumper Kummer’s speculations and predictions about climate change need to be filed away in the same place as his speculations and predictions about politics, in the garbage can.
If there’s anything more complex, chaotic and unpredictable than the climate, it’s the behavior of large groups of human beings. But NeverTrumper Kummer believes he’s got politics all figured out, just like he believes he’s got the climate all figured out.
When it comes to NeverTrumper Kummer, a large grain of salt is in order.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 5:21 pm

The Swamp-dwelling Republocrats, Demolicans and Dumpocraps might yet succeed in dumping Trump, but they can’t easily suppress the voters who elected him.
Larry Cunctator, thankfully, is a swamp creature from the black lagoon of the pestilential past.

Sara
Reply to  Glenn E Stehle
September 22, 2017 5:44 pm

Yeah, well, at noon today, the news was that Trump’s approval rating was up 40%.

Glenn E Stehle
Reply to  Glenn E Stehle
September 22, 2017 6:10 pm

Sixto,
Too many pompous and hubristic claims for my taste. No hint of skepticism or self-examination whatsoever.
Remember the famous quip by Cromwell, “I beseech you, in the bowels of Christ, think it possible that you may be mistaken?”
NeverTrumper Krummer reminds me of Rocket Man.

Reply to  Glenn E Stehle
September 22, 2017 6:15 pm

+100 … I have to admit that I had my doubts as to the abilities of Trump as president, early on. He has more than proven his abilities since then. Imagine what he would be accomplishing if his administration were not being attacked daily from multiple angles.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 6:47 pm

Glenn E Stehle September 22, 2017 at 6:10 pm
Are you referring to me or to Larry Fabio “Rocket Man” Cunctator as pompous and hubristic?
Just making sure.

Glenn E Stehle
Reply to  Glenn E Stehle
September 22, 2017 6:58 pm

Sixto,
Yep.
NeverTrumper Kummer makes entirely too many pompous and hubristic claims for my taste. No hint of skepticism or self-examination to be found in his decrees.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 7:00 pm

IMO you’re going way too easy on the smarmy weasel.
But that’s just me.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 7:02 pm

I feel duped and stupid for having gone to Cunctator’s site, as the sniveling sycophant lured me into doing by posting here. Fool me once, shame on the weasel. Won’t be fooled again.

Sara
Reply to  Sixto
September 22, 2017 5:42 pm

I’m going to wait and see if Larry can meet the simple challenge I gave him. Take out the squished-together data points, expand the chart to years and months, and prove what he said.
I’m more than aware of glacial and interglacial cycles, since we are in the post-Wisconsin warming period known as the Holocene interglacial. All glacial and interglacial cycles have internal warming and cooling episodes. It’s how glaciers move: they melt underneath and slide on soil lubricated with meltwater. This is normal. It has never been something to panic over. And we humans have been around in our current form since at least the Wisconsin glacial maximum (pre-Illinoisan period), drove out or bred out our competition, and now we send satellites into space to survey distant planets and land on comets.
Well, if we’re this successful at such complicated things, then what makes it so difficult for these Warmians and their ilk to accept the mere idea that this particular warming period – the Holocene – may actually NOT be an individual interglacial, but simply a part of the history of the Wisconsin glacial maximum? Is it because the ice sheets melted back to Hudson’s Bay and further north? Or is it because they expect instant results and in geology, there is no such thing? Just trying to understand this, that’s all.
There have been episodes of the Swiss-Italian Alps with ZERO snow cover, and others, as noted elsewhere, with deep snows. Same in South America, same here in North America. It’s part of this planet’s cyclical nature, and we have ZERO control over it.
In regard to forecasts, if a meteorologist has only a 50% chance of an accurate forecast for a week ahead, how can any of the Warmians, including Larry, expect any reasoning person (including me) to believe their forecasts, which seem to be aimed entirely at panicking people over naturally occurring cycles.
Like I said, I’m just trying to understand this hyperbole and really grandiose behavior.

Tom Dayton
Reply to  Sara
September 23, 2017 8:50 am

Sara: Expanding the scale of the graph will not change the trend. That’s grade school math.
Weather is not climate: https://skepticalscience.com/weather-forecasts-vs-climate-models-predictions-intermediate.htm

Old Grump
September 22, 2017 5:15 pm

“That’s progress, a milestone — a successful decade-long forecast!”
I thought they had been telling us for many years that the models make projections not forecasts or predictions. Silly me for expecting anything approaching consistency.

Sixto
Reply to  Old Grump
September 22, 2017 5:40 pm

If Larry ever surfaces here again, it’s because he’s either a glutton for well-deserved punishment, or pathetically desperate to draw attention to his justly never visited site.

September 22, 2017 5:48 pm

More fallacy and nonsense from Larry.
Since when have “ensembles” become “predictions”?
That’s like claiming a broken clock accurately predicts the time twice a day.
It’s called an “ensemble” since the authors felt that numerous runs might hint at a prediction. Just like rooms full of monkeys and typewriters are writing Shakespeare.
Then the Joules shell game nonsense.
Do you have any idea exactly what joules are and what they measure?
Joules are a measurement of work over time. Similar to the concept of horsepower.
One horsepower represents moving 33,000 foot-pounds of work per minute.
One joule can be represented by lifting one apple one meter.
Representing ocean temperatures as joules is false usage of joules. By a very stretched rationale, one thousand joules (a kilojoule) equals one calorie of food energy.
How many calories do you consume a day Larry?
NOAA and its cohorts, converting temperatures Celsius to joules, are playing fast and very loose with energy and serially abusing the concept of temperature.
It also means that if NOAA and their fakirs had bothered to include proper error bars, the joules would be swamped by the temperature error ranges on minuscule thermometer readings.
Such wholehearted people work at NOAA. Honest they are not.
Don’t forget that these wonderful anti-science crackpots are comparing ship derived human observation whole digit temperatures to ocean floating buoys that allegedly resolve temperatures to a thousandth of a degree.
Provided that the device was properly calibrated and certified during installation with regular cyclical re-certifications. Which those thermometers never experience.
Enjoy your party Kummer.
Though It’s definitely not worth a celebration except by alcoholics who party for any reason.
Call us when you have honest temperatures and genuine predictions.

ENRICO BELMONTE
Reply to  ATheoK
September 22, 2017 10:05 pm

Joule is a unit of energy. 1 joule = 0.00024 kilocalorie = 0.00024 Calorie (with capital C used in food Calories). 1 Calorie = 1 kilocalorie = 1,000 calories (with small c)
Horsepower is a unit of power. 1 horsepower = 746 watts = 746 joules/second
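For scale, a back-of-envelope sketch (plain Python; the area, density, and specific heat below are rounded textbook values and purely illustrative, not NOAA’s exact method) of how a temperature change of the 0-700 m layer translates into joules:

# Rough heat capacity of the 0-700 m ocean layer (rounded, hypothetical values)
area = 3.6e14      # m^2, approximate global ocean surface area
depth = 700.0      # m, the layer in NOAA's OHC series
rho = 1025.0       # kg/m^3, typical seawater density
cp = 3990.0        # J/(kg K), typical seawater specific heat

heat_capacity = rho * cp * area * depth   # joules per kelvin of layer warming
dT = 0.1                                  # K, a hypothetical average warming
print(f"{heat_capacity * dT:.1e} J")      # ~1e23 J, the scale seen in OHC plots

On these rough numbers, a tenth of a degree of layer-average warming corresponds to roughly 1e23 joules, which is why the same change can be quoted in either unit.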

DWR54
Reply to  ATheoK
September 22, 2017 10:15 pm

Don’t forget that these wonderful anti-science crackpots are comparing ship-derived, human-observed, whole-digit temperatures to ocean floating buoys that allegedly resolve temperatures to a thousandth of a degree.

As I understand it, no one is claiming that degree of accuracy for an individual instrument. The stated level of precision is the result of the averaging process over thousands of measurements.
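A sketch of that averaging argument, in Python with numpy and hypothetical numbers; whether it applies to real sea-surface data hinges on the errors being independent, which is the contested point:

import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.3
# 10,000 hypothetical readings with 0.5 C noise, each reported
# only to the nearest whole degree
readings = np.round(true_temp + rng.normal(0, 0.5, 10_000))

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(readings.size)   # standard error of the mean
print(f"mean = {mean:.3f} +/- {sem:.3f}")             # recovers ~15.3 to about 0.01 C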

Hans-Georg
Reply to  DWR54
September 23, 2017 12:00 am

An average of false and incomplete measurements is the truth? That, in turn, recalls the opening post.

Sandy In Limousin
Reply to  ATheoK
September 23, 2017 12:01 am

If you have enough broken clocks at least one will be approximately right at any point in the day.

hunter
September 23, 2017 5:03 am

This is a reasonable analysis, on the face of it.
But looking at a few points raises serious concerns:
The “simplified” graph Schmidt has come up with has huge error bars, much of which cover a negation of the claimed risk.
The rehabilitation of OHC, without acknowledging the other points about climate made by Dr. Pielke, who raised the importance of OHC in the first place, much less an apology to him for the shoddy treatment he received, suggests that Schmidt is once again cherry-picking and manipulating evidence to sound sciencey, and is still not actually engaging in science.
The clear problems with the historical data, and its well-documented “editing”, are not addressed.
The failure of predictions offered over many years about negative impacts of what Schmidt et al. now call “climate change” is unaddressed.
The top conclusion, that yet more money be poured into “climate research”, is a non sequitur.
If the path is clear, then let us spend money on the infrastructure we need for the future. So-called “climate research” is already a major world industry. The quality of science produced has in many ways been either stagnant or dismal. Further rewarding “climate science” because Schmidt now chooses to use better marketing visuals makes little sense.
Nothing in his new graph is new.
To ignore the benefits and proven safety of nuclear power in favor of giant wind turbines and solar arrays industrializing the world’s open spaces seems strange. Wind is inherently extremely costly and unreliable. Solar is the same.
Both wind and solar at grid-significant industrial capacities require vast amounts of open land and complex backup systems to work. Not to mention that both are far more vulnerable to weather disruption and expensive damage than fossil fuel or nuclear plants.
Finally, and most important, dismissing skeptical arguments because a new graph has been developed is not discussing the issue. And committing even more trillions to “climate” does not seem like a new way forward, unless jumping off a cliff is considered a good option.

hunter
September 23, 2017 5:35 am

Perhaps the biggest clue that the climate crisis promoters are operating in bad faith is the manipulative abuse of the scales used in the graphs.

Sara
Reply to  hunter
September 23, 2017 6:35 am

That is precisely the reason I challenged good ol’ Larry to stop compressing the scale of those charts and spread them out into a month-by-month/decade-by-decade format. He won’t do it because it will prove my point, that the highs and lows will flatline to nearly nothing, and make him look ridiculous.

Tom Dayton
Reply to  Sara
September 23, 2017 8:50 am

Sara: Changing the scale cannot change the trend.

Dean Rojas
September 23, 2017 5:48 am

Thanks for reconfirming that we have not had any rise in temperatures in the last 18 years and that the models are easy to manipulate.
Dean Rojas

John
September 23, 2017 8:12 am

Wowsers. I thought I’d ended up on the skeptical science site.
Where to begin. The mean of CMIP3 is what, exactly? I know it to be the models presented in AR4. However, AR4 actually includes a model set based on concentrations held at 2000 levels. Were those runs part of the mean as well? If so, they brought it down by an awful lot. The different scenarios (other than the 2000-level ones) pretty much match each other up to now; they only go off in different directions later.
The GISS set has been changed massively in the past decade. How about you show the GISS version being used back then? The AR4 used an average of 4 datasets, I believe.
Also, by pure chance, and by maintaining the very temperature record used for the verification, you managed to get a match twice in a decade. Based on whatever ensemble-mean dark magic you used…
Erm, congrats?
I suspect someone like David Middleton would have a field day with this, referencing back to what was in AR4 and the datasets used. Of course, he might not be able to figure out which of the AR4 models were used for the mean. Or is it mean of means even….
Perhaps Euan Mearns could take a crack at it as well. He did quite well with https://wattsupwiththat.com/2014/06/12/the-temperature-forecasting-track-record-of-the-ipcc/
The idea that you can make a mean out of completely different model runs boggles the mind. There is zero point. I mean, it’s garbage.

richard verney
Reply to  John
September 23, 2017 8:49 am

How can the average of wrong be right? Mathematically, it would only be a chance happening.
There is a lack of detail on how many models make up the ensemble, but if there are, say, 50 models, each one projecting a different outcome, they are simply averaging the 50 incorrect projections to produce what they call the mean. This lacks mathematical integrity.
Some years back Dr. Brown of Duke University made some very insightful comments on the absurdity of the model projections.
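Both sides of this argument show up in a toy sketch (Python with numpy, entirely synthetic numbers): averaging shrinks the independent part of the runs’ errors, but any bias shared by all the runs survives the averaging untouched.

import numpy as np

rng = np.random.default_rng(2)
truth = np.linspace(0.0, 0.5, 120)        # hypothetical 'true' anomaly path
shared_bias = 0.1                         # error common to every model run
members = truth + shared_bias + rng.normal(0, 0.15, (50, 120))  # 50 runs

ens_mean = members.mean(axis=0)
rmse = lambda x: np.sqrt(((x - truth) ** 2).mean())
print(f"typical member error: {np.mean([rmse(m) for m in members]):.3f}")  # ~0.18
print(f"ensemble-mean error:  {rmse(ens_mean):.3f}")  # ~0.10: noise gone, bias left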

Tom Dayton
Reply to  John
September 23, 2017 8:56 am

John: Regarding the mean of the models, see my comment at https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2617732 and the one after that.
The models are the ones listed by the CMIP3 project: https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip3
Comparing models to observations for the past 10 years requires using the observations from the past 10 years. Your demand of using observations from only 10 years ago is nonsensical.

John
Reply to  Tom Dayton
September 23, 2017 9:33 am

Thanks. Weren’t the models used for CMIP3 initialized in 2000? They didn’t issue the report in 2007 and initialize the models in the same year and hope for the best, did they? Therefore should we not have 17 years of model successes to compare to temperature records? Including 7 years of data where various different GISS sets were used. Any chance of one from 2000 from Gavin?
Indeed. Wasn’t CMIP5 based on models initialized in 2007? That would be the 10 year one to use, starting from 2007, but of course, a 17 year proof of forecast would be much better than a 10 year one.
Thanks for the response re means, but I stand by what I said. Taking means from different models, all with different initializations/simulations, is nonsense. Even if you arrive at something that matched observations, it did so by pure mathematical chance from the numbers used.
How many of the models did Gavin use to get his mean? At various points in AR4 they use differing numbers of models for some of their claims, ranging from 14 to 24, but then they also state that each model has multiple initializations/simulations as well, which they interchange throughout the report. Sadly AR4 isn’t as clear as AR5 and CMIP5.
But, rather than giving links, why don’t you explicitly state the number of models and simulations Gavin used? It’s a complete guess, of course, but just curious if you will give it a go without giving a link.
Any chance Gavin would actually open up his method and data, do you think?

John
Reply to  Tom Dayton
September 23, 2017 10:34 am

A quick look at GISS history shows that the 2002, 2012 and 2013 versions of GISS would give different results for the years from 2007 than the 2016 version.
Meaning that until last year, Gavin wouldn’t have gotten this result, but would still likely have had something close to matches for different years in the different versions.
Hey, I wonder if he considered giving a mean of his own various versioned datasets and claiming a success that way? He would get even more than two matches!

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 10:42 am

John: Actually looking more than quickly at the GISS version history shows you are wrong: https://data.giss.nasa.gov/gistemp/history/

John
Reply to  Tom Dayton
September 23, 2017 10:52 am

You are linking to graphs each showing a different value for the 2016, 2013, 2012 and 2002 versions over the years 2007 through 2012. No matter if you look at the 5-year means or 1-year means, they show different values. As an FYI, that’s what the different colours of the lines mean (sorry, couldn’t resist).

Reply to  John
September 25, 2017 3:23 am

I will definitely be taking a crack at this very soon… with a rock hammer.

John
September 23, 2017 10:17 am

A good example of something we can use means for is ENSO. A La Nina is the form horse, according to the mean of CFSv2. You see a mean starting from now, going all the way out to June next year. You can then see all the members that made up the mean. If the mean forecast today matches observations for two of the next 9 months, was it a successful mean forecast if it differed in the other 7 months? I’d say no, because the individual runs are what are supposed to be using known science to make the forecast. The individual runs are also going to be right at some point across the time period, but that doesn’t mean they nailed the forecast.
Now, that’s not to say you can’t use means for a forecast, but for the mean to be a successful forecast indicator, it would need to be correct 95% of the time.
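The verification standard being proposed can be written down directly; a sketch in Python with numpy, where the forecast and observed values and the match tolerance are all hypothetical:

import numpy as np

# Hypothetical 9-month index forecast and what later verified
forecast = np.array([-0.5, -0.6, -0.7, -0.7, -0.6, -0.5, -0.4, -0.3, -0.2])
observed = np.array([-0.4, -0.1,  0.0, -0.6, -0.2, -0.5, -0.1,  0.2,  0.3])
tolerance = 0.25                    # arbitrary definition of a 'match'

hits = np.abs(forecast - observed) <= tolerance
print(f"matched {hits.sum()} of {hits.size} months")  # 3 of 9: a failed forecast
# By the 95% standard above, nearly every month would have to fall inside
# the tolerance before the mean counted as skillful.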

John Steinmetz
September 23, 2017 11:18 am

Why not report warming since 1940 instead of 1980? It would be much more informative. Why should we believe a good fit over 10 years is statistically relevant? How can one extra molecule of CO2 per 10,000 cause sea levels to rise appreciably?
Could others please help answer my questions.

Tom Dayton
Reply to  John Steinmetz
September 23, 2017 11:46 am

John Steinmetz: Regarding the number of CO2 molecules, see https://skepticalscience.com/CO2-trace-gas.htm

Sixto
Reply to  Tom Dayton
September 23, 2017 12:02 pm

Tom,
SS’ “argument” is idiotic.
The fact is that an extra molecule of CO2 so far has had only beneficial effects, and more would be even better. There is zero evidence that more of an essential trace gas will cause anything bad to occur, let alone catastrophic. Sea level rise hasn’t sped up. Ice sheets aren’t melting. But Earth has greened.
Nothing bad happened when CO2 was almost 20 times higher than now. Instead, our planet enjoyed the Cambrian Period, in which large, hard-bodied animals evolved. When it was five or more times higher than now, the largest land animals of all time evolved. C3 plants would flourish under CO2 levels three times higher than now, although even just twice as much would be a great improvement.

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 12:11 pm

Sixto: John’s question was about the mechanism by which an increase in CO2 molecules can cause warming. I linked to an answer to that question. What you wrote is irrelevant to that topic. You are Gish Galloping.

Sixto
Reply to  Tom Dayton
September 23, 2017 12:20 pm

No, my reply was entirely valid and responsive.
The clowns and cartoonists at SS compared CO2 to arsenic. Idiotic, as I said.
You are resorting to an irrelevant analogy, not I.

Sixto
Reply to  Tom Dayton
September 23, 2017 12:56 pm

Apparently the idiocy of it wasn’t apparent to you, or you wouldn’t have used it. Most of SS is idiotic to ludicrous, not least its clownish perps dressing up as N@zi SS officers.
The reason the analogy is at best inappropriate is that we know the effect of increasing dosages of arsenic, which is bad. We also know the effect of increasing CO2 concentration, and it’s good for trees and food crops. There is no evidence that increasing CO2 has any negative effect on climate.

Tom Dayton
Reply to  John Steinmetz
September 23, 2017 12:36 pm

John Steinmetz: The post I linked for you earlier narrowly answers your narrow question of how a tiny amount of CO2 increase can have significant warming consequences. That post addresses the very narrow argument from incredulity about the tiny amount. When you want to learn more about the actual mechanism, please read the Basic tabbed pane, then the Intermediate one, then the Advanced one here: https://skepticalscience.com/empirical-evidence-for-co2-enhanced-greenhouse-effect-basic.htm

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 12:37 pm

John Steinmetz: When you are ready to learn some more technical details of the mechanism, click the link inside the green Further Reading box at the bottom of that post.

September 23, 2017 11:25 am

Larry Kummer, thank you for the essay.
for the gridlock has left us unprepared for even the inevitable repeat of past extreme weather
I think that is not true. What has left us unprepared for the inevitable repeat of past extreme weather is a more prevalent idea that preparation is not valuable, more prevalent than in the past. Take California, for example: the neglect of the transportation and flood control infrastructure long predates the fanciful notion that vast distractions like the electricity portfolio requirements and the “bullet train to nowhere” can prevent climate change. The Houston region had stopped building up its flood control infrastructure despite warnings and reasonable plans. Tropical Storm Sandy revealed a comparable neglect in the Philadelphia – New York City corridor. If the end of the “gridlock” produces more such distractions, then the US will remain unprepared for the inevitable repeats of past extreme weather.

September 23, 2017 11:52 am

I think that the spaghetti graph is informative: it shows clearly that the model errors are serially correlated, meaning a model that is “high” some of the time is almost always “high” (and similarly for “low”). It also shows clearly that almost all of the forecasts have been “high” almost all of the time, enhancing the likelihood that the recent “nearer” approximation is a transient.
Missing from both graphs is a display of a 95% confidence interval on the mean trajectory: Schmidt’s graph shows the 95% interval of the sample paths, but the interval on the mean is much narrower. The data are outside the 95% CI of the mean trajectory most of the time. On the evidence of these two graphs, the prediction for the next 10 years is likely consistently high.
Save the prediction. If the model mean stays close to the data mean for the next 10 continuous years (2018-2027), then you might acquire confidence in its prediction for the subsequent 10 years (2028 – 2037).
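The distinction drawn here (the envelope of the runs versus the confidence interval on their mean) is easy to miss; a sketch in Python with numpy, using hypothetical model output for a single year:

import numpy as np

rng = np.random.default_rng(4)
runs = rng.normal(0.8, 0.25, 40)    # hypothetical anomalies from 40 model runs

lo, hi = np.percentile(runs, [2.5, 97.5])     # envelope of the sample paths
sem = runs.std(ddof=1) / np.sqrt(runs.size)   # standard error of the mean
m = runs.mean()
print(f"95% of runs:        [{lo:.2f}, {hi:.2f}]")                      # ~1 C wide
print(f"95% CI of the mean: [{m - 1.96*sem:.2f}, {m + 1.96*sem:.2f}]")  # ~6x narrower

With 40 runs the interval on the mean is roughly sqrt(40) times narrower than the envelope, which is why observations can sit inside the envelope yet outside the interval on the mean.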

Steve Richards
September 23, 2017 12:03 pm

Models!
If a hindcast is not prefect, why bother running a simulation past today’s date?
A hindcast, by definition, has available to it all of the necessary global measurements.
Are the modelers and their supporters suggesting that we either do not have enough initial data or that the model formulas are incorrect?
How can we expect to run a model for 10/20/50/100 years without starting with a perfect hindcast?

Tom Dayton
Reply to  Steve Richards
September 23, 2017 12:18 pm

Your typing is not “prefect” so why should anyone bother reading it?

Reply to  Tom Dayton
September 23, 2017 12:41 pm

If only the CACA Cabal would dismiss what another member of “The Cause” says when he/she/it makes an insignificant error!
They’ve made HUGE errors but are still welcome (as long as they get a headline).

Tom Dayton
Reply to  Steve Richards
September 23, 2017 12:50 pm

Steve, in case you are genuinely interested in an answer despite your flippant question: Hindcasts differ from forecasts only in that hindcasts use the actual forcings that happened, but forecasts necessarily use estimates of the forcings that will happen. Climate models do not try to predict forcings; forcings are inputs to the models. In the absence of infinite computer time, climatologists enter a limited set of estimated forcings into the models, in an attempt to span the range of reasonable possibilities. For CMIP5 those were called RCPs, which you can find explained thoroughly here: https://skepticalscience.com/rcp.php
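The “forcings in, temperature out” structure can be illustrated with a toy zero-dimensional energy-balance model (a sketch in Python with numpy; the heat capacity, feedback parameter, and forcing ramp are hypothetical, and no CMIP model is anywhere near this simple):

import numpy as np

C = 8.0      # W yr m^-2 K^-1, effective heat capacity (hypothetical)
lam = 1.2    # W m^-2 K^-1, feedback parameter (hypothetical)
years = np.arange(1900, 2018)
F = 0.009 * (years - 1900)          # hypothetical forcing ramp, W m^-2

# C dT/dt = F(t) - lam * T, stepped one year at a time
T = np.zeros(years.size)
for i in range(1, years.size):
    T[i] = T[i-1] + (F[i] - lam * T[i-1]) / C
print(f"simulated warming by 2017: {T[-1]:.2f} K")
# A hindcast feeds in the forcings that actually happened; a forecast must
# feed in estimated future forcings (the role the RCP scenarios play).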

September 23, 2017 12:34 pm

Look at the scale of all of the graphs. Imagine what they would look like if the scale were reasonable – matching with the accuracy range of the instruments. Then imagine them using figures that weren’t created out of whole cloth by averaging a non-representative set of temperature measurements to find an “anomaly” figure using a statistically insignificant fraction of the accuracy of the instruments. Then imagine them including error bars for all of their misleadingly averaged figures, showing the statistical significance of the “anomalies.” Then imagine them using real temperatures instead of anomalies based on an adjusted baseline calculated using the same statistical games they use to come up with the average surface temperature. Then imagine them using only the actual temperature readings from instrumental measurement and not fake temperatures fabricated by averaging (again) the closest averaged fake temperatures in the grid that eventually are based on “real” adjusted readings somewhere halfway around the globe.
You can’t propagandize radical political policymaking using a nearly flat line with minor squiggles, can you? Not scary enough.

Uncle Gus
September 23, 2017 12:59 pm

“(1) Ocean heat content (OHC) as the best metric of warming.
This was controversial when Roger Pielke Sr. first said it in 2003 (despite his eminent record, Skeptical Science called him a “climate misinformer” – for bogus reasons). Now many climate scientists consider OHC to be the best measure of global warming. Some point to changes in the ocean’s heat content as an explanation for the pause.
Graphs of OHC should convert any remaining deniers of global warming (there are some out there).”
Oh dear. Oh deary deary me…
“It’s not air temperature, coz there ain’t no rise in air temperature, so it must be all going into the oceans, so OCEAN WARMING IS GOING TO KILL US ALL!!! (Give us your money…)”

Reply to  Uncle Gus
September 24, 2017 9:46 am

Uncle Gus,
That’s quite an odd comment. Evidence of global warming does not mean “OCEAN WARMING IS GOING TO KILL US ALL!!! (Give us your money…)”.
That’s hysteria just like that of alarmists. As the climate wars conclude their third decade, the fringes on both sides have come to resemble each other in tone and nature. Sad.

Gunga Din
September 23, 2017 1:10 pm

” Hindcasts differ from forecasts only in that hindcasts use the actual forcings that happened,”
After the data had been archived and then retrieved?

Tom Dayton
Reply to  Gunga Din
September 23, 2017 1:13 pm

Which data are you talking about?

Reply to  Tom Dayton
September 23, 2017 1:18 pm

Good question!
How many of the actual values still exist?

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 1:24 pm

Actual values of what? Forcing data used in hindcasts are actual values observed over the past–by definition. So of course they are “stored”–they are written down, stored in files, databases, and so on. Do you expect climatologists to memorize everything? How could data not be stored? I don’t know what you are asking about.

Sixto
Reply to  Tom Dayton
September 23, 2017 1:33 pm

Possibly the “data” lost by Hadley CRU? It supposedly still exists somewhere, but Phil Jones doesn’t know where, so can’t say which stations he used, for instance, to determine that urbanization had no effect on temperatures in China.
Without a valid historical record, you can’t hindcast. Without a valid recent record, you can’t say how well models match reality, since who knows what actually is reality? Certainly not GIGO “climate scientists”, who aren’t climatologists or any kind of real scientist, but computer gamers.

Reply to  Tom Dayton
September 23, 2017 1:46 pm

Tom “How could data not be stored? I don’t know what you are asking about.”
It could be archived with the “tests” of the actual data set up to skew which numbers are actually stored.
https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/#comment-2617789

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 1:49 pm

Sixto: You are talking about temperature observation data. Climate models do not take temperature observations as inputs. They are not statistical models. They are physical models that take in forcing data as inputs, and produce temperature data as outputs. The only “historical record” needed for hindcasting is the history of the forcings–solar irradiance, greenhouse gas amounts in the atmosphere, volcanic aerosols, and so on (not ENSO, because that is internal variability). Here, for example, are the CMIP5 forcing data: http://cmip-pcmdi.llnl.gov/cmip5/forcing.html. Here is an introduction to climate models: https://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 1:56 pm

Gunga Din: Your comments are unclear. Now I’m going to guess you are talking about the forcing data that are input to the climate models. Yes, those are stored and publicly available. For example, here are the forcings used by the CMIP5 models: http://cmip-pcmdi.llnl.gov/cmip5/forcing.html

Sixto
Reply to  Tom Dayton
September 23, 2017 2:06 pm

Tom Dayton September 23, 2017 at 1:49 pm
The discussion was about hindcasting. You can’t hindcast models without supposed temperature data. If the “data” are bogus, man-made artifacts, as the “surface” sets indubitably are, then what good is the hindcasting?
And as noted, what good is comparing model outputs to phony recent “data”?

Michael Jankowski
Reply to  Tom Dayton
September 23, 2017 2:17 pm

“…Climate models do not take temperature observations as inputs…”
So climate models are independent of temperature? They produce the same results if the average global temp is 0K vs 273K vs 300K? Sounds pretty ridiculous since so many physical processes are temperature-dependent.

Sixto
Reply to  Tom Dayton
September 23, 2017 2:24 pm

Michael,
GCMs put in latent heat. As you observe, they have to start from some approximation of conditions as of run initiation.

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 2:31 pm

Sixto: “They put in latent heat.” Not exactly. You should learn about climate models: https://arstechnica.com/science/2013/09/why-trust-climate-models-its-a-matter-of-simple-science/
Michael Jankowski: Of course models start with temperatures as inputs. But those temperatures are for times very long before the times that are to be projected–decades or even hundreds of years before. The models are initialized far enough in the past for their weather variations to stabilize within the boundary conditions. The temperature observations that Gunga Din and Sixto were referring to (in accusatory, conspiratorial ways) are temperatures from the times that the models are being used to project. Those temperatures are not inputs to the models, they are outputs. Learn about climate models at the link I provided above.

Sixto
Reply to  Tom Dayton
September 23, 2017 2:41 pm

Tom,
I know enough about them to know that they are for sh!t. They’re worse than worthless GIGO exercises in computer gaming to game the system. They’re not fit for purpose and have cost the planet millions of lives and trillions of dollars. Their perpetrators are criminals.

Michael Jankowski
Reply to  Tom Dayton
September 23, 2017 2:45 pm

“…Climate models do not take temperature observations as inputs…”
“…Of course models start with temperatures as inputs…”
LOL.
“…The temperature observations that Gunga Din and Sixto were referring to (in accusatory, conspiratorial ways) are temperatures from the times that the models are being used to project…”
That’s not how I read the comments I saw from them. And common sense would dictate that the models would match the observations if that were the case, instead of being wrong.

Sixto
Reply to  Tom Dayton
September 23, 2017 2:54 pm

I should add that the Father of NOAA’s GCMs, Syukuro Manabe, derived an ECS of 2.0 °C from his early models. IMO any run with an implied ECS higher than that should be tossed as unphysical.
Guess who in the 1970s came up with the preposterous ECS of 4.0 °C? If you guessed Jim “Venus Express” Hansen, you’re right.
A committee on anthropogenic global warming, convened in 1979 by the National Academy of Sciences and chaired by meteorologist Jule Charney, estimated climate sensitivity to be 3 °C, plus or minus 1.5 °C. Only two sets of models were available. Syukuro Manabe’s exhibited a climate sensitivity of 2 °C, while James Hansen’s showed a climate sensitivity of 4 °C.
Manabe says that Charney chose 0.5 °C as a not unreasonable margin of error, subtracted it from his own number, and added it to Hansen’s figure. Hence the 1.5 °C to 4.5 °C range of likely climate sensitivity that has appeared in every greenhouse assessment since. No actual physical basis required. Just WAGs from primitive GCMs. In Hansen’s case, designed upon special pleading rather than science.

Sixto
Reply to  Tom Dayton
September 23, 2017 3:05 pm

But of course if GCMs implied an ECS with a physical basis, ie in the range of 0.0 to 2.0 °C per doubling of CO2, then the output wouldn’t show the desired scary projections out to AD 2100.
Using models manufacturing phony, unphysical, evidence-free higher ECSes has led to the mass murder and global robbery that is CACA.

Reply to  Gunga Din
September 23, 2017 2:42 pm

*sigh*
An archive can be set up honestly or “skewed” to only record/save the data that moves in the desired direction.
What program and what “tests” are used to pick out which data points actually get recorded? What “tests” were used in retrieving the “data”?
Query an archive for a particular date and time, for example, and a value will be returned. That value may not be an actual record but rather one interpolated from the neighboring actual values, that is, actual values that passed the “skewable” tests to actually be archived.
What program and “tests” does Gavin use to archive current data? What did Hansen use to archive (and maybe re-archive) past data?

Reply to  Gunga Din
September 23, 2017 2:46 pm

Another *sigh*.
Meant as a response to Tom Dayton here:
https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2618486
(Think I’ll take my nap now. 😎)

Tom Dayton
Reply to  Gunga Din
September 23, 2017 2:48 pm

I gave you a pointer to the CMIP site, where you can find model code, forcing input data, and model output data. And documentation about all that. And there are peer reviewed publications describing much of that. I’m not going to do more of your homework, nor cater to your batshit crazy conspiracy theories.

Reply to  Gunga Din
September 24, 2017 9:40 am

Tom,
+1

September 24, 2017 9:41 am

Newspapers love weather porn, filling the space between ads with easy-to-write exciting nonsense like this (explaining climate science is more difficult to do).
https://www.washingtonpost.com/news/capital-weather-gang/wp/2017/09/23/harvey-irma-maria-why-is-this-hurricane-season-so-bad/

Wight Mann
September 24, 2017 2:45 pm

Show me the proof that this guy wasn’t just accidentally right. As the Climateers said when confronted with the 20-year-long pause… come back when it has been fifty years and we will discuss it.

Reply to  Wight Mann
September 24, 2017 6:37 pm

Wight,
There are several powerful statistical tests for forecasts. From memory, fifteen years is significant at the 90% confidence level (hence it is a milestone); twenty years is significant at the 95% level.
Science, like almost everything in the real world, advances incrementally. Step by step.
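A crude sketch of why record length matters (Python with numpy; the trend and noise level are hypothetical, and serial correlation, which real temperatures have, is ignored here):

import numpy as np

rng = np.random.default_rng(5)
trend, sigma = 0.017, 0.15   # C/yr trend and interannual noise (hypothetical)
for n_years in (10, 15, 20):
    t = np.arange(n_years)
    # Fraction of noisy realizations whose fitted slope comes out positive,
    # a rough stand-in for one-sided significance of the trend
    slopes = [np.polyfit(t, trend * t + rng.normal(0, sigma, n_years), 1)[0]
              for _ in range(2000)]
    frac = np.mean(np.array(slopes) > 0)
    print(f"{n_years} yr record: fitted slope positive in {frac:.0%} of runs")
# Longer records pin down the sign of the trend with more confidence.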

Editor
September 25, 2017 4:31 am

The personal attacks on Larry Kummer are totally uncalled for.
While I strongly disagree with his characterization of “Gavin’s Twitter Trick” as a demonstration of predictive skill in a climate model and even more strongly disagree with half of his conclusions (1, 4 & 5), this was a very thoughtful essay.

JohnKnight
Reply to  David Middleton
September 25, 2017 3:05 pm

“The personal attacks on Larry Kummer are totally uncalled for.”
Make your case(s), or tone down the bulk judgmentalism, I suggest . . human ; )

Tom Bjorklund
September 26, 2017 1:15 pm

“A CLIMATE SCIENCE MILESTONE: A SUCCESSFUL 10-YEAR FORECAST!”
Kummer appears to have little understanding of the relationships between climate policies and climate science. Including Krugman in a list of references on how to test climate models has to be a mistake. What he suggests (that we continue marching arm-in-arm in the wrong direction, singing kumbaya on climate change) is not going to happen. Until we get the science right, we will continue to be clueless about which policies are right.
“THIS GRAPH [the one from CMIP3] SHOWS A CLIMATE MODEL’S DEMONSTRATION OF PREDICTIVE SKILL OVER A SHORT TIME HORIZON OF ROUGHLY TEN YEARS. … THAT’S PROGRESS, A MILESTONE — A SUCCESSFUL DECADE-LONG FORECAST!”
The CMIP3 graph is out-of-date and misleading. Since the El Nino peak, HadCRUT4 monthly temperatures from March 2016 to July 2017 have declined nearly 40 percent. The various temperature curves that zigzag through the so-called 95% certainty range are meaningless. Those values are not best estimates. The most that can be said is a future prediction might lie somewhere between the estimated 95% extreme values. No single prediction is more likely than another value in the range, and the likely error is very large for a long-term prediction. The “new” CMIP3 is no better than the “spaghetti” graph, and neither has any long-term predictive value. Kummer declared victory far too soon.
“GRAPHS OF OHC SHOULD CONVERT ANY REMAINING DENIERS OF GLOBAL WARMING (THERE ARE SOME OUT THERE). THIS SHOWS THE INCREASING OHC OF THE TOP 700 METERS OF THE OCEANS.”
Even if there were an adequate OHC database, unless there has been a Second Coming, no one would know what to do with it. Many physicists posit that natural processes can only be modeled with particle physics, which current models barely touch. Application of particle physics in the CERN CLOUD experiments suggests the possibility of a century of non-warming in which CO2 does not play a significant role. CERN concludes IPCC estimates of future temperatures are too high and the models should be redone. IPCC reports are not credible sources for anything. Denigrating those with opposing viewpoints by labeling them “deniers” on climate change does nothing to advance the cause of those represented by Kummer.
“AS THE BELOW GRAPH SHOWS [Global and Land Temperature Anomalies, 1950-2017], ATMOSPHERIC TEMPERATURES APPEAR TO HAVE RESUMED THEIR INCREASE, OR TAKEN A NEW STAIR STEP UP.”
Kummer’s cited bar graph showing two straight-line trends does not support his conclusion that the “pause” is behind us. In the following graph, a simple numerical analysis of the HadCRUT4 time-temperature series shows the rate of increase of the global mean temperature trendline has been constant or steadily decreasing since October 2000. The temperature anomaly decrease from March 2016, the El Nino peak, to July 2017 has been nearly 40 percent. The rate of increase will likely become negative within the next 20 years, reaching the lowest global mean trendline temperature in almost 40 years. Stock up on cold-weather gear.
https://imgur.com/a/p7Hcx
Kummer’s six conclusions are hardly worthy of comment. His “plan” is to develop and fund (more funding being the core objective of the “plan”) an expanded laundry list of climate research activities, to add more government employees to develop policies for extreme weather events that have yet to be forecast, and to begin the conversion to non-carbon-based energy resources that are not economically feasible and not needed.
A simple thought experiment suggests to me that the thus-far fruitless attempts to model the earth’s climate system should be put on the back burner for the time being. Think of the earth’s climate system as a black box and the earth’s temperature the output from the black box. Assuming the black box contains an aggregation of spinning, zigging and zagging, oscillating particles, photons and assorted waves that may be mathematically represented by periodic functions (this assumption could be a stretch), it would follow that the output, the temperature, can also be represented by a periodic function, which can be decomposed into various oscillatory components by Fourier analysis. The focus of climate research should be on analyzing the output of the black box rather than spinning wheels trying to analyze the countless interactions within the black box. Ultimately, the results might lead to a better understanding of how the climate system works. The U.S. is on the verge of running off a cliff if we cannot make a midcourse correction to the current direction of climate change research and policies.
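A minimal sketch of the output-side decomposition proposed above, in Python with numpy on synthetic (hypothetical) data: detrend the series, then look for dominant oscillatory components.

import numpy as np

rng = np.random.default_rng(6)
months = np.arange(12 * 60)                        # 60 years, monthly
series = (0.001 * months                           # slow trend
          + 0.2 * np.sin(2 * np.pi * months / 42)  # hypothetical ~3.5 yr cycle
          + rng.normal(0, 0.1, months.size))       # weather noise

detrended = series - np.polyval(np.polyfit(months, series, 1), months)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(months.size, d=1 / 12)     # cycles per year
peak = freqs[spectrum[1:].argmax() + 1]
print(f"dominant period ~ {1 / peak:.1f} years")   # recovers ~3.5 yr
# Whether real temperatures really are a sum of periodic components is
# exactly the assumption flagged as a possible stretch above.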

Reply to  Tom Bjorklund
September 26, 2017 6:52 pm

Figure for above post.

//s.imgur.com/min/embed.js

September 27, 2017 2:06 pm

Kummer is an enigma. I stopped reading his blog a while ago. A waste of time, at best.
But this article is Kummer at his worst. His go-to reference for this article is Paul Krugman. Krugman, in his own area, economics, is a pitiful failure. He is wrong regularly and woefully.
Why would anyone think that applying Krugman’s “genius” to another area would be useful? Something is badly wrong in the whole article above.
A great overview of Krugman’s failures, from Quora:
https://www.quora.com/What-things-has-Paul-Krugman-been-very-wrong-about
1) The survival of the Euro:
Krugman was unable to fathom how the peripheral countries of Europe could possibly stay in the Eurozone. He wrote a number of blog posts effectively saying that the Euro was doomed and that Greece would leave any day now, with Spain and possibly Italy following suit. Not only did that not happen, the Euro club is in fact slated to grow further.
Sources:
Another Bank Bailout
Crash of the Bumblebee
Apocalypse Fairly Soon
Those Revolting Europeans
Europe’s Economic Suicide
What Greece Means
Legends of the Fail
The Hole in Europe’s Bucket
An Impeccable Disaster
Op-Ed Columnist – A Money Too Far – NYTimes.com
Op-Ed Columnist – The Euro Trap – NYTimes.com
2) The mechanism of the housing bust:
A number of people, including Krugman, saw the housing bubble and predicted its demise, but Krugman was wrong about the details of how the bursting of the bubble would play out. He thought that it would involve a crisis in junk bonds and a fall of the dollar (neither of which happened).
He did say that subprime mortgages would go bust, but he underestimated the effect of that. He did not understand the risks posed by securitization and therefore was not predicting an outright recession until well into 2008, a pretty big miss when dealing with the biggest worldwide slowdown since the Great Depression.
Krugman predicting fall of the dollar accompanying the housing bust:
Debt And Denial
3) Deflation:
Krugman was confidently predicting deflation starting in early 2010. It never materialized. Inflation remained stubbornly positive.
Source:
Core Logic
4) Relative performance of the worst-hit European countries:
For a very long time, Krugman kept praising Iceland for implementing capital controls and predicted that it would do better than others which kept their capital markets free (like Estonia, Latvia, Lithuania and Ireland). It did not pan out.
Krugman on Iceland vs Baltics and Ireland in 2010:
The Icelandic Post-crisis Miracle
The council of foreign relations questions Krugman’s claim:
Geo-Graphics » Post-Crisis Iceland: Miracle or Illusion?
Geo-Graphics » “Iceland’s Post-Crisis Miracle” Revisited
Krugman, as classy as ever calls the people at CFR stupid:
Peaks, Troughs, and Crisis
CFR pwns:
Geo-Graphics » Paul Krugman’s Baltic Bust—Part III
5) The US under Bush would be attacked by bond vigilantes:
Long before Krugman started publicly ridiculing people who worry that interest rates on US Government debt could spike suddenly as “Very Serious People who are spooked by Invisible Bond Vigilantes”, Krugman was one of them. He wrote a number of columns and blog posts arguing that the reckless policies of the Bush administration were certain to cause a loss of confidence in the creditworthiness of the US Government.
Source:
Mistakes
6) The sequester of 2013 would cause a slowdown in the US and the stimulus of 2009 would reduce unemployment:
Krugman issued dire warnings about the sequester, predicting that it would cause a slowdown in the US, pointing to papers that predicted 2.9% growth without the sequester and 1.1% with it. In reality, the sequester was passed and growth was 4.1%.
Sources:
Krugman (as usual) name-calling people who proposed the sequester:
Sequester of Fools
Keynesian models showing reduced growth because of sequester (linked in above article)
MA’s Alternative Scenario: March 1 Sequestration
Krugman gloating when he thought things would go his way calling it a test of the market-monetarist view:
Monetarism Falls Short (Somewhat Wonkish)
Final reality check:
Mike Konczal: “We rarely get to see a major, nationwide economic experiment at work,”
This mirrored the experience of 2009 (but in reverse), when Keynesian models championed by Krugman predicted that US unemployment would top out at 9% without the stimulus and at 8% with it. The stimulus was passed and unemployment went up to 10%.
7) The recession would be over soon:
Krugman and Greg Mankiw had a spat in early 2009 on something known as the unit root hypothesis. The discussion is technical but it essentially boiled down to this: Team Obama had predicted that the economy would bounce back strongly from the great recession and their models predicted that real GDP would be 15.6% higher in 2013 than it was in 2008.
Mankiw disputed this on the basis of the unit root hypothesis and said that recessions sometimes tend to linger, and that predictions should therefore give some positive probability weight to that event. Krugman described Mankiw as “evil” for disputing the administration’s forecast, on the basis of what Krugman believed to be flawed economics, thereby implicitly supporting the administration’s forecast. Mankiw invited him to take a bet on the issue, which Krugman ignored.
In reality, it was not even close. Mankiw won by a landslide. Real GDP in 2013 was in fact only 6% higher than in 2008.
Sources:
Team Obama on the Unit Root Hypothesis
Krugman harshly criticizing Mankiw for the above:
Roots of evil (wonkish)
Mankiw responds by asking Krugman to take a bet:
Wanna bet some of that Nobel money?
The final reality check showing that Mankiw would have won handily:
The forces of evil easily triumph over Krugman and DeLong