A climate science milestone: a successful 10-year forecast!

From the Fabius Maximus Blog.  Reposted here.

By Larry Kummer. From the Fabius Maximus website, 21 Sept 2017.

Summary: The gridlock might be breaking in the public policy response to climate change. Let’s hope so, for the gridlock has left us unprepared for even the inevitable repeat of past extreme weather — let alone what new challenges the future will hold for us.

The graph below was tweeted yesterday by Gavin Schmidt, Director of NASA’s Goddard Institute for Space Studies. (Yesterday Zeke Hausfather at Carbon Brief posted a similar graph.) It shows another step forward in the public policy debate about climate change, in two ways.

 

(1) This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years. CMIP3 was prepared in 2006-7 for the IPCC’s AR4 report. That’s progress, a milestone — a successful decade-long forecast!

(2) The graph uses basic statistics, something too rarely seen today in meteorology and climate science. For example, the descriptions of Hurricanes Harvey and Irma were very 19th C, as if modern statistics had not been invented. Compare Schmidt’s graph with Climate Lab Book’s updated version of the signature “spaghetti” graph — Figure 11.25a — from the IPCC’s AR5 Working Group I report. Edward Tufte (The Visual Display of Quantitative Information) weeps in Heaven every time someone posts a spaghetti graph.

Note how the graphs differ in the display of the difference between observations and CMIP3 model output during 2005-2010. Schmidt’s graph shows that observations are near the ensemble mean. The updated Figure 11.25a shows observations near the bottom of the range of CMIP5 model outputs (Schmidt also provides his graph using CMIP5 model outputs).
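For readers who want to see what such a comparison involves, here is a minimal sketch in Python of checking observations against an ensemble mean and its 95% envelope. The numbers are random stand-ins, not real CMIP3 or CMIP5 output, and the 0.02 °C/yr trend is an assumption for illustration only.

```python
import numpy as np

# Hypothetical numbers only: random stand-ins, not real CMIP3/CMIP5 output.
rng = np.random.default_rng(0)
years = np.arange(2007, 2017)
signal = 0.02 * (years - years[0])               # assumed 0.02 degC/yr trend

# 20 synthetic "model runs": the trend plus run-to-run noise.
ensemble = signal + rng.normal(0.0, 0.1, (20, years.size))
ens_mean = ensemble.mean(axis=0)
ens_lo, ens_hi = np.percentile(ensemble, [2.5, 97.5], axis=0)  # 95% envelope

# Synthetic "observations": the same trend with smaller noise.
obs = signal + rng.normal(0.0, 0.05, years.size)

# How often do observations fall inside the envelope, and how close
# are they to the ensemble mean on average?
inside = np.mean((obs >= ens_lo) & (obs <= ens_hi))
print(f"Within 95% envelope: {inside:.0%}; "
      f"mean |obs - ensemble mean|: {np.mean(np.abs(obs - ens_mean)):.3f} degC")
```

The point of the sketch is the distinction the two graphs make: being near the ensemble mean is a much stronger result than merely staying inside the wide multi-model envelope.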

Clearing away the underbrush so we can see the big issues.

This is one in a series of recent incremental steps forward in the climate change policy debate. Here are two more examples of clearing away relatively minor issues. Even baby steps add up.

(1) Ocean heat content (OHC) as the best metric of warming.

This was controversial when Roger Pielke Sr. first said it in 2003 (despite his eminent record, Skeptical Science called him a “climate misinformer” – for bogus reasons). Now many climate scientists consider OHC to be the best measure of global warming. Some point to changes in the ocean’s heat content as an explanation for the pause.

Graphs of OHC should convert any remaining deniers of global warming (there are some out there). The graph below shows the increasing OHC of the top 700 meters of the oceans, from NOAA’s OHC page. See here for more information about the increase in OHC.

 

(2) The end of the “pause” or “hiatus”.

Global atmospheric temperatures paused during the period roughly between the 1998 and 2016 El Ninos, especially according to the contemporaneous records (later adjustments slightly changed the picture). Activists said that the pause was an invention of deniers. To do so they had to conceal the scores of peer-reviewed papers identifying the pause, exploring its causes (there is still no consensus on this), and forecasting when it would end. They were quite successful at this, with the help of their journalist-accomplices.

Now that is behind us. As the graph below shows, atmospheric temperatures appear to have resumed their increase, or taken a new stair step up — as described in “Reconciling the signal and noise of atmospheric warming on decadal timescales”, Roger N. Jones and James H. Ricketts, Earth System Dynamics, 8 (1), 2017.

 

What next in the public policy debate about climate change?

Perhaps now we can focus on the important issues. Here are my nominees for the two most important open issues.

(1) Validating climate models as providers of skillful long-term projections.

The key question has always been about future climate change. How will different aspects of weather change, at what rate? Climate models provide these answers. But acceptable standards of accuracy and reliability differ for scientists’ research and policy decisions that affect billions of people and the course of the global economy. We have limited resources; the list of threats is long (e.g., the oceans are dying). We need hard answers.

There has been astonishingly little work addressing this vital question. See major scientists discussing the need for it. We have the tools, and a multidisciplinary team of experts (e.g., software engineers, statisticians, chemists), adequately funded, could do it in a year. Here is one way: Climate scientists can restart the climate policy debate & win: test the models! That post also lists (with links) the major papers in the absurdly small — and laughably inadequate — literature about validation of climate models.
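Forecast verification has standard machinery for exactly this kind of test. A minimal sketch, using made-up anomaly numbers rather than real data, of the mean-square-error skill score against a naive “no change” baseline (1 = perfect, 0 = no better than the baseline, negative = worse):

```python
import numpy as np

def skill_score(forecast, obs, baseline):
    """MSE skill score: 1 = perfect, 0 = matches baseline, < 0 = worse."""
    mse_f = np.mean((forecast - obs) ** 2)
    mse_b = np.mean((baseline - obs) ** 2)
    return 1.0 - mse_f / mse_b

# Hypothetical decade of annual anomalies (degC) -- stand-in numbers only.
obs      = np.array([0.61, 0.64, 0.54, 0.66, 0.63, 0.68, 0.75, 0.90, 1.01, 0.92])
forecast = np.array([0.60, 0.62, 0.65, 0.67, 0.70, 0.72, 0.74, 0.77, 0.79, 0.81])
baseline = np.full_like(obs, obs[0])   # naive persistence: "no change" forecast

print(f"Skill vs persistence: {skill_score(forecast, obs, baseline):.2f}")
# prints "Skill vs persistence: 0.74"
```

A rigorous validation exercise would of course use out-of-sample forecasts and real observations; the sketch only shows that the scoring itself is elementary, which is the post’s point about how little of this work has been done.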

There is a strong literature to draw on about how to test theories. Let’s use it.

  1. Thomas Kuhn tells us what we need to know about climate science.
  2. Daniel Davies’ insights about predictions can unlock the climate change debate.
  3. Karl Popper explains how to open the deadlocked climate policy debate.
  4. Milton Friedman’s advice about restarting the climate policy debate.
  5. Paul Krugman talks about economics. Climate scientists can learn from his insights.
  6. We must rely on forecasts by computer models. Are they reliable? (Many citations.)
  7. Paul Krugman explains how to break the climate policy deadlock.

 

(2) Modeling forcers of climate change (greenhouse gases, land use).

Climate models forecast climate based on the input of scenarios describing the world. This includes factors such as the amounts of the major greenhouse gases in the atmosphere. These scenarios have improved in detail and sophistication in each IPCC report, but they remain an inadequate basis for making public policy.

The obvious missing element is a “business as usual” or baseline scenario. AR5 used four scenarios — Representative Concentration Pathways (RCPs). The worst was RCP8.5 — an ugly scenario of technological stagnation and rapid population growth, in which coal becomes the dominant fuel of the late 21st century (as it was in the late 19th C). Unfortunately, “despite not being explicitly designed as business as usual or mitigation scenarios”, RCP8.5 has often been misrepresented as the “business as usual” scenario — becoming the basis for hundreds of predictions about our certain doom from climate change. Only recently have scientists begun shifting their attention to more realistic scenarios.

A base-case scenario would provide a useful basis for public policy. Also useful would be a scenario with likely continued progress in energy technology and continued declines in world fertility (e.g., we will get a contraceptive pill for men, eventually). That would show policy-makers and the public the possible rewards for policies that encourage these trends.

Conclusions

Science and public policy both usually advance by baby steps, incremental changes that can accomplish great things over time. But we can do better. Since 2009 my recommendations have been the same about our public policy response to climate change.

  1. Boost funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.
  2. Run government-funded climate research with tighter standards (e.g., posting of data and methods, review by unaffiliated experts), as we do for biomedical research.
  3. Do a review of the climate forecasting models by a multidisciplinary team of relevant experts who have not been central players in this debate. Include a broader pool than those who have dominated the field, such as geologists, chemists, statisticians and software engineers.
  4. We should begin a well-funded conversion to non-carbon-based energy sources, for completion by the second half of the 21st century — justified by both environmental and economic reasons (see these posts for details).
  5. Begin more aggressive efforts to prepare for extreme climate. We’re not prepared for a repeat of past extreme weather (e.g., a real hurricane hitting NYC), let alone predictable climate change (e.g., sea levels climbing, as they have for thousands of years).
  6. The most important one: break the public policy gridlock by running a fair test of the climate models.

For More Information

For more about the close agreement of short-term climate model temperature forecasts with observations, see “Factcheck: Climate models have not ‘exaggerated’ global warming” by Zeke Hausfather at Carbon Brief. To learn more about the state of climate change see The Rightful Place of Science: Disasters and Climate Change by Roger Pielke Jr. (Prof of Environmental Studies at U of CO-Boulder).

For more information, see all posts about the IPCC, the keys to understanding climate change, and these posts about the politics of climate change…

4 Eyes
September 22, 2017 12:24 pm

Averaging models is meaningless. The way forward is to pick 2 or 3 models that actually history-match temperatures to date, and then run those models out for 100 years. If you use models that bear no resemblance to reality in your averaging, you are fooling yourself or fooling others. There seems to be a lot of effort put into fooling others.
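The selection rule proposed above can be sketched in a few lines. The series are synthetic stand-ins (a linear trend plus biased noise), not real model output; the point is only the mechanics of ranking by hindcast fit:

```python
import numpy as np

rng = np.random.default_rng(1)
obs_hist = np.linspace(0.0, 0.5, 30)   # hypothetical 30-year observed anomaly record

# Ten synthetic "models": same underlying trend, different biases and noise.
biases = rng.normal(0.0, 0.3, 10)
models = [obs_hist + rng.normal(b, 0.05, obs_hist.size) for b in biases]

# Rank by hindcast RMSE against observations and keep only the best three;
# the rest would be excluded from any forward projection.
rmse = np.array([np.sqrt(np.mean((m - obs_hist) ** 2)) for m in models])
best = np.argsort(rmse)[:3]
print("Selected models:", best.tolist(),
      "with RMSE:", np.round(rmse[best], 3).tolist())
```

Whether hindcast fit actually predicts forward skill is the contested question, but this is what “pick the models that history-match” means operationally.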

John Smith
September 22, 2017 12:51 pm

I read somewhere that when they do a hindcast they re-align their models to actual measurement data every 10 years because of unpredictable ‘natural variation’ that would otherwise cause a deviation. If this is true, it is jaw-dropping because it would make almost any model correlate reasonably well with the past. Anyone confirm this?

Tom Dayton
Reply to  John Smith
September 22, 2017 1:51 pm

No, it’s not true. Hindcasts use the actual forcings that occurred, because those are known (having already occurred). Climate models take forcings as inputs.

Snarling Dolphin
September 22, 2017 12:57 pm

One successful 10-year forecast to match adjusted temperatures breaks the gridlock? God, let’s hope not Fabio!

Reply to  Snarling Dolphin
September 22, 2017 1:08 pm

Dolphin,
“One successful 10-year forecast to match adjusted temperatures breaks the gridlock”
Let’s replay the tape to see that the post actually says the exact opposite of what you claim.

“The gridlock might be breaking in the public policy response to climate change. …This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years. …
“It shows another step forward in the public policy debate about climate change …
“This is one in a series of recent incremental steps forward in the climate change policy debate. Here are two more examples of clearing away relatively minor issues. Even baby steps add up. …
“Perhaps now we can focus on the important issues. …”

September 22, 2017 12:58 pm

Regarding the Krugman suggestion, it’s a good one. And not unique to him (e.g., Feynman makes it here):
https://www.youtube.com/watch?v=MIN_-Flswy0
I wonder if there isn’t an attempt to kick the hornets nest by choosing authorities that will be perceived as perverse choices; because then one can paint oneself as a more rational, cooler head when compared to the irritated people reflexively flinging abuse.

Reply to  tarran
September 22, 2017 1:01 pm

tarran,
“I wonder if there isn’t an attempt to kick the hornets nest by choosing authorities that will be perceived as perverse choices; because then one can paint oneself as a more rational, cooler head when compared to the irritated people reflexively flinging abuse.”
(1) In the real world, smart people have useful insights even if they happen to disagree with you on some things.
(2) You should read before making your knee-jerk criticism. Citing Krugman in support of skeptical positions is a powerful argument — something like “admission against interest” in the courtroom.
https://www.nolo.com/dictionary/admission-against-interest-term.html

Reply to  Editor of the Fabius Maximus website
September 22, 2017 1:24 pm

And maybe you should reread the first sentence of my comment “Mr I really should practice what I preach”. 😉

September 22, 2017 12:58 pm

A nice demonstration of how much research about effects of global warming has gone bonkers. Before reading, note that many patent examiners work remotely — and probably a large fraction work in places with air conditioning.
“Too hot to reject: The effect of weather variations on the patent examination process at the United States Patent and Trademark Office” by Balázs Kovács in Research Policy, in press.
Highlights
This paper demonstrates that external weather variations affect whether patent applications are allowed or rejected at the United States Patent and Trademark Office (USPTO).
The analyses are based on detailed records of 8.8 million “allow”/”non-final reject”/”final reject” decisions made at the USPTO between 2001 and 2014.
Temperatures warmer than usual in that week of the year lead to higher allowance and lower final rejection rates.
Higher cloud coverage than usual in that week of the year leads to lower final rejection rates.
Abstract
This paper documents a small but systematic bias in the patent evaluation system at the United States Patent and Trademark Office (USPTO): external weather variations affect the allowance or rejection of patent applications. I examine 8.8 million reject/allow decisions from 3.5 million patent applications to the USPTO between 2001 and 2014, and find that on unusually warm days patent allowance rates are higher and final rejection rates are lower than on cold days. I also find that on cloudy days, final rejection rates are lower than on clear days. I show that these effects constitute a decision-making bias which exists even after controlling for sorting effects, controlling for applicant-level, application-level, primary class-level, art unit-level, and examiner-level characteristics. The bias even exists after controlling for the quality of the patent applications. While theoretically interesting, I also note that the effect sizes are relatively modest and may not require policy changes from the USPTO. Yet, the results are strong enough to provide a potentially useful instrumental variable for future innovation research.

Sixto
Reply to  Editor of the Fabius Maximus website
September 22, 2017 3:29 pm

This lame attempt to ingratiate yourself with genuine skeptics just comes across as smarmy.
Of course there are reams of worthless journal papers which assume the worst bogus CACA forecasts, then try to come up with bizarre effects therefrom across a host of different phenomena.
And if you want to study ground squirrels or great horned owls, then you have to tie your research to “climate change.” Say the magic words and win a grant.

Philo
September 22, 2017 1:25 pm

Larry Kummer: a 10 year “successful” climate forecast is just silly. By definition it has to be at least 30 years, the WMO diktat, or at least 17 years to be a “trend” via Trenberth or whomever.
In any case, global average temperature is a statistical construct, not a measurement or observation, and has little, if any, functional meaning. It is a temperature average of sorts, and doesn’t represent anything physical that might “drive” the climate.
So a 10 year match is simple happenstance and doesn’t mean anything, or, more suspiciously, it could be manufactured. I didn’t see any registry of procedures, data, or code mentioned.

AZ1971
September 22, 2017 1:29 pm

Use sci-hub.cc and the doi to read the revisionist magic by Gavin Schmidt and company. It’s hilarious.

One cause might be the chaotic internal variability of the coupled system of oceans and atmosphere, for example in the tropical Pacific Ocean, or in variability in deep ocean circulation. Alternatively, decadal-scale temperature variations can be a response of the climate system to external influences, such as volcanic eruptions or the solar cycle.

In other climate discussions (including the UN IPCC AR5) there is so little variation in solar insolation as to be a non-starter in climate forcing, and as yet there have been no large-scale volcanic eruptions post-2000 which would allow volcanic aerosols to circulate high enough and in sufficient quantity as to cause a negative forcing. Yet this is precisely what Gavin came up with to explain ‘the pause’?
And people believe these fools? WHY??

Sixto
September 22, 2017 1:32 pm

Larry,
I’ve visited your site and find that I cannot say enough bad things about it.
Surely you’re aware that the Fabian Socialists also took their name from Fabius. And probably you, if not your young Marine collaborator, know that Dalton Trumbo was a Communist. But you might not be aware that Zero Hedge is a Russian propaganda site. One of its two founders, Daniel Ivandjiiski, barred from the hedge fund industry for insider trading, is the son of a former Bulgarian KGB officer whose cover was “journalism”.
I also don’t know to what you refer by claiming that “study of actual scientists disproved” Popper’s “theory”.

Chris Hanley
Reply to  Sixto
September 22, 2017 3:21 pm

The Fabian strategy is to wear down the opponent by attrition, maybe by sheer boredom.

Sixto
Reply to  Chris Hanley
September 22, 2017 3:26 pm

In the latter case, Larry Cunctator has succeeded admirably.

gnomish
Reply to  Sixto
September 22, 2017 3:39 pm

“I also don’t know to what you refer by claiming that “study of actual scientists disproved” Popper’s “theory”.”
especially since popper had no theory and merely rebranded plato’s noumenal essence nonsense.

Sixto
Reply to  gnomish
September 22, 2017 3:58 pm

I don’t see Popper that way at all.
I see him saying what Feynman said in his famous lecture. And what Einstein said succinctly. But being a philosopher, he had to say it all at some length to be taken seriously.
And to his credit, he refined his “theory” as he learned more about real science. Couldn’t agree more though that his thought isn’t a theory, but simply a statement of the real scientific method.
Fabio Cunctator buys into Oreskes’ poisonous attempt to redefine the scientific method. He mistakes Kuhn’s largely justified commentary on the sociology of science for a new take on the scientific method. That science is political and sociological is hardly surprising, but that doesn’t mean that its best practice isn’t the method as elucidated by Einstein, Feynman and Popper.

gnomish
Reply to  gnomish
September 22, 2017 11:14 pm

sixto- if you really like the topic of epistemology (or just want to see popper shredded and flushed) just find mr science.or.fiction comment and click his nick
his worst absurdity was the claim that nothing can be proven.
you’re supposed to take that on faith cuz it was a divine revelation
he’s a mystic. that is as far from science as it gets.

gnomish
Reply to  gnomish
September 23, 2017 9:48 am

wow… i just looked up cunctator-
” Fabian strategy sought gradual victory against the superior Carthaginian army under the renowned general Hannibal through persistence, harassment, and wearing the enemy down by attrition rather than pitched, climactic battles.”
” “The logo of the Fabian Society, a tortoise, represented the group’s predilection for a slow, imperceptible transition to socialism, while its coat of arms, a ‘wolf in sheep’s clothing’, represented its preferred methodology for achieving its goal.”[9] The wolf in sheep’s clothing symbolism was later abandoned, due to its obvious negative connotations.”
man, that’s kummer, all right. not even trying to hide it…

Chris Hanley
September 22, 2017 2:36 pm

Hallelujah! At last, after years of waffly prevaricating ambiguous posts, The Editor has finally come out of the closet and declared himself the ‘true believer’ that I think most readers knew he was all along — he wasn’t fooling me anyway.

Sixto
Reply to  Chris Hanley
September 22, 2017 3:27 pm

As if there were ever any doubt.

Sixto
Reply to  Chris Hanley
September 22, 2017 3:33 pm

He’s a straphanger on the sc@m.
Trying lamely to have it both ways to draw eyeballs to his pathetic site.

AndyG55
Reply to  Chris Hanley
September 22, 2017 6:25 pm

The “editor’s” pretence of any sort of scientific acumen or knowledge is quite hilarious. 🙂
Brain-washed sludge.

September 22, 2017 2:47 pm

Does anyone know what program Gavin uses to archive current data?
I’m asking because, as I understand it, an archive program is not only designed to record and store data but also to keep the archive as small as possible.
One way to achieve both seemingly contradictory objectives is to run the data (whatever it might represent) through “tests” before a value is actually recorded, that is, saved to the archive.
One “test” could be whether a number is more than +2 above or more than -1 below the previous archived number.
Any value that is not more than +2 above or more than -1 below the previous archived number is dropped from the record of, say, current temperatures.
Past records could be passed through the same “test” but using +1 and -2 for the test.
The actual values that remain could then be passed to another “test” before they are actually recorded.
Obviously, an honest test would be one where the +/- is the same value for all the tests and set to give a range that reflects reality.
PS Maybe the really “raw” data is gone because it’s been archived via such methods?
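To be clear about what such a rule would do, here is a sketch of the hypothetical asymmetric gate described above. This is the commenter’s conjecture made concrete, not the documented behavior of any real archive program:

```python
def asymmetric_filter(values, up=2.0, down=1.0):
    """Archive a value only if it is more than `up` above or more than `down`
    below the previously archived value -- the hypothetical rule sketched above."""
    archived = [values[0]]
    for v in values[1:]:
        if v - archived[-1] > up or archived[-1] - v > down:
            archived.append(v)
    return archived

raw = [10.0, 10.5, 11.2, 9.8, 12.5, 12.0, 13.1, 8.9]
print(asymmetric_filter(raw))   # -> [10.0, 12.5, 8.9]: most raw values never archived
```

With symmetric thresholds such a gate would merely thin the record; with asymmetric ones it would systematically skew what gets kept, which is the concern raised here.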

Reply to  Gunga Din
September 22, 2017 2:49 pm

The archived data is then what is used for the models.

Reply to  Gunga Din
September 22, 2017 3:32 pm

That’s one way it could be adjusted “realtime”.
Set up the tests to only archive values that move in the desired direction.

RichardT
September 22, 2017 2:58 pm

Here is where I would go.
A Modest Proposal (for consideration in advancing to a reasoned posture on climate actions)
Assume:
1. Lomborg’s analysis of the Paris Accord’s provisions for moderating future delta T (minimal impact) and the attendant costs (major negative impacts to GDP) is reasonable.
2. The mean of the last IPCC GCM projections of LT delta T is reasonable to support planning of mitigation actions for future impacts of delta T.
3. The GCM mean projected 2030 (as in the Paris Accord) climate state represents a reasonable benchmark to assess impacts of future delta T to 2030.
4. The UAH measured LT delta T (from the date of the last IPCC to current date) can be used as a basis for projecting forward to 2030.
5. Annual differences in 3 and 4 above form a basis to assess gains and losses in time with respect to the 2030 benchmark state, allowing adjustments to the projections of the future impacts of delta T and associated dates.
Then:
1. Request a high integrity, high competency engineering consensus body (like ASME) empanel a group to define and evaluate realistic impacts associated with potential delta T future states, and prepare a mitigation course of action planning document.
2. Request Lomborg to empanel his economics study group to assess the cost of the ASME proposal with a cost benefit analysis.
3. Assign a US agency (FEMA?) to spearhead implementation planning, as/if appropriate.
Attendant action:
1. Redirect CC funding from GCM model studies to basic research of climate physics, climate history and, higher quality field measurements (my preferences, you can pick your own).
Benefits:
1. Affords the opportunity for a reasoned engineering assessment of potential impacts due to likely postulated future climate states with courses of action defined and programs, as recommended, formulated and, as needed, acted upon.
2. Creates a postulated position in time to which we can compare to reality and determine the need to accelerate or decelerate (or even abandon) program elements.
3. Dampens the atmosphere of the highly politicized and agenda-driven climate arena.
4. Refocuses monies to basic questions of climate science still needing study.
RT 2017

EternalOptimist
September 22, 2017 3:25 pm

Maybe Mr Fabius would understand the sceptical mindset better if he would list some of the ten year predictions that have failed to materialise.

Sixto
Reply to  EternalOptimist
September 22, 2017 3:31 pm

You mean, polar bears aren’t extinct? Children do know what snow is? Arctic sea ice still exists in summer? Humans aren’t reduced to a few breeding pairs on Antarctica (that one retracted by the Archdruid of Gaia)?

Sixto
Reply to  EternalOptimist
September 22, 2017 3:32 pm

Not to mention Himalayan glaciers. They’re growing in the Karakoram.

Sara
Reply to  EternalOptimist
September 22, 2017 3:44 pm

Maybe Mr. Fabius would understand the viewpoint of others better if he took one decade of data points, stretched those ten years out to include months (12 per year) and stopped compressing the charts into tiny proportions. I believe the exaggerated temperatures line would prove to be almost as flat as a full section of Midwestern cornfield.
I have to deal with realities, not hypotheses and forecasts that turn out to be wrong, so I’m inclined to feel that I have a right to demand a realistic version: 10 years, 12 months per year, or even better, that Figure 4 chart, 1980 to the projected 2020. Since the prediction is not in 10s of degrees, but rather is decimal portions of degrees, I believe that the rather alarming peaks and lows, which appear to be intentionally massive exaggerations by compressing them, will be shown to be as close to a flatline as possible.
I challenge you, Larry, to do that. I despise people who use manipulative language and figures to try to alarm me. It is as despicable as you can get. It’s on the same level as people who call me and want to sell me a burial plot without asking me where I want to be buried, because I could drop dead tomorrow. I tell them ‘so could you’ and hang up on them.
Come on, Larry, take the plunge. Ten years, 12 months per year, or maybe even the entire 40 year span that you show in Figure 4 in your article. Meet the challenge.
Or would you rather I did it for you?

Sixto
Reply to  Sara
September 22, 2017 3:50 pm

I’m guessing that you’ll have to do it for him.
But maybe that’s just me, eternal pessimist.

EternalOptimist
Reply to  Sara
September 22, 2017 3:56 pm

In the olden days, one failed measurement out of thousands of successes meant the end of the hypothesis. In the new fabian era, one dubious success merits a round of applause and the many failures are ignored.

Sixto
Reply to  Sara
September 22, 2017 4:01 pm

Although it’s only a “success” by cooking the books and nookying with numbers.
It’s beyond science fiction. It’s science fantasy. Only in an alternative universe where the laws that govern this one and the scientific method don’t apply is this pack of lies and drivel a “success”.
Only where success is measured in being able to bamboozle the public and rip them off for more of their hard-earned tax dollars.

Sara
Reply to  EternalOptimist
September 22, 2017 3:46 pm

Please see my response to him below.

Sara
Reply to  Sara
September 22, 2017 3:48 pm

Durn! I was replying to Eternal Optimist, up above.

Sixto
Reply to  EternalOptimist
September 22, 2017 4:10 pm

Join the crowd. It’s hard to find because it’s all in the Cunctator’s imagination.

September 22, 2017 3:40 pm

When you alter the data to fit the model, as this has, you cannot also claim that the data has validated the model. This is just yet more supportive propaganda.

Nick Werner
September 22, 2017 3:40 pm

The models do appear to be improving.
Similar to my old watch that’s right twice a day (provided I don’t make any adjustments), the models’ predictive skill looks pretty good twice for every two El Ninos. Apart from that, they still appear to drift away towards the high side of observations.

AndyG55
Reply to  Nick Werner
September 22, 2017 3:42 pm

Not really,
All they do is get rid of the very worst of them, thus reducing the number.
Then move the starting point back to the middle of the model simulations.

AndyG55
Reply to  AndyG55
September 22, 2017 3:44 pm

That gives them several years where the massively adjusted surface data might just align for a while, if they have large NATURAL El Ninos going on.

Sixto
Reply to  AndyG55
September 22, 2017 3:46 pm

Best would be to admit that climate can’t yet be modeled, because we don’t know enough about it and probably lack the computing power.
But at least better would be to toss out all model runs with an implied ECS above 2.0 degrees C per doubling of CO2. Better yet would be 1.5, but I don’t think there are any that low.
Real ECS is almost certainly below the lab-derived 1.2 degrees per doubling, thanks to net negative feedbacks.

AndyG55
Reply to  AndyG55
September 22, 2017 3:46 pm

too many scripts running or something

Sixto
Reply to  AndyG55
September 22, 2017 3:53 pm

Andy,
It’s always something. Especially with blog hosters.
Too bad for Fabio the Fab Cunctator that his side has lost, despite his trying to play both sides of the street.

Reply to  AndyG55
September 22, 2017 4:21 pm

And set another “test” to filter the archived data that is retrieved before it goes into the model.
https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2617706

Bob Hoye
September 22, 2017 4:00 pm

Larry
I took a shot at Krugman because he believes that there is a national economy and that it can be “managed”. His next step was to believe that the temperature of the Earth can also be “managed”.
Both concepts show audacity hitherto only attributed to gods.
Three years ago, I gave a short speech to a CMRE dinner in Manhattan, Can be Googled:
Video: Bob Hoye CMRE Speech. It reviews how authoritarians are using central banking as well as imagined climate scares to gain political control beyond constitutional norms.
Some of the lines are amusing.

Sixto
September 22, 2017 4:42 pm

Larry Cunctator,
Here’s the deal.
You reject the scientific method, yet imagine that “consensus” science is worth heeding, while at the same time recognizing how government funding controls “science”. But the scientific method can’t be repealed, because it’s based upon objective reality.
We should spend no more on bogus, GIGO “climate science” because it has been repeatedly shown false, in the best Popperian tradition. As Einstein noted, that’s all that matters. However many third-rate scientists, historians of science, sociologists, psychologists, science communicators and Australian cartoonists agree, Mother Nature says “Huh-uh!”, so they all lose.

Sixto
Reply to  Sixto
September 22, 2017 5:05 pm

IOW, CACA is a crock, and you and your colleagues have wasted your time and ours with your disgusting blog.

Glenn E Stehle
Reply to  Sixto
September 22, 2017 5:16 pm

NeverTrumper Kummer’s speculations and predictions about climate change need to be filed away in the same place as his speculations and predictions about politics, in the garbage can.
If there’s anything more complex, chaotic and unpredictable than the climate, it’s the behavior of large groups of human beings. But NeverTrumper Kummer believes he’s got politics all figured out, just like he believes he’s got the climate all figured out.
When it comes to NeverTrumper Kummer, a large grain of salt is in order.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 5:21 pm

The Swamp-dwelling Republocrats, Demolicans and Dumpocraps might yet succeed in dumping Trump, but they can’t easily suppress the voters who elected him.
Larry Cunctator, thankfully, is a swamp creature from the black lagoon of the pestilential past.

Sara
Reply to  Glenn E Stehle
September 22, 2017 5:44 pm

Yeah, well, at noon today, the news was that Trump’s approval rating was up 40%.

Glenn E Stehle
Reply to  Glenn E Stehle
September 22, 2017 6:10 pm

Sixto,
Too many pompous and hubristic claims for my taste. No hint of skepticism or self-examination whatsoever.
Remember the famous quip by Cromwell, “I beseech you, in the bowels of Christ, think it possible that you may be mistaken?”
NeverTrumper Kummer reminds me of Rocket Man.

Reply to  Glenn E Stehle
September 22, 2017 6:15 pm

+100 … I have to admit that I had my doubts as to the abilities of Trump as president, early on. He has more than proven his abilities since then. Imagine what he would be accomplishing if his administration were not being attacked daily from multiple angles.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 6:47 pm

Glenn E Stehle September 22, 2017 at 6:10 pm
Are you referring to me or to Larry Fabio “Rocket Man” Cunctator as pompous and hubristic?
Just making sure.

Glenn E Stehle
Reply to  Glenn E Stehle
September 22, 2017 6:58 pm

Sixto,
Yep.
NeverTrumper Kummer makes entirely too many pompous and hubristic claims for my taste. No hint of skepticism or self-examination to be found in his decrees.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 7:00 pm

IMO you’re going way too easy on the smarmy weasel.
But that’s just me.

Sixto
Reply to  Glenn E Stehle
September 22, 2017 7:02 pm

I feel duped and stupid for having gone to Cunctator’s site, as the sniveling sycophant lured me into doing by posting here. Fool me once, shame on the weasel. Won’t be fooled again.

Sara
Reply to  Sixto
September 22, 2017 5:42 pm

I’m going to wait and see if Larry can meet the simple challenge I gave him. Take out the squished-together data points, expand the chart to years and months, and prove what he said.
I’m more than aware of glacial and interglacial cycles, since we are in the post-Wisconsin warming period known as the Holocene interglacial. All glacial and interglacial cycles have internal warming and cooling episodes. It’s how glaciers move – they melt underneath and slide on the soil lubricated with meltwater. This is normal. It has never been something to panic over. And we humans have been around in our current form since at least the Wisconsin glacial maximum (pre-Illinoian period), drove out or bred out our competition, and now we send satellites into space to survey distant planets and land on comets.
Well, if we’re this successful at such complicated things, then what makes it so difficult for these Warmians and their ilk to accept the mere idea that this particular warming period – the Holocene – may actually NOT be an individual interglacial, but simply a part of the history of the Wisconsin glacial maximum? Is it because the ice sheets melted back to Hudson’s Bay and further north? Or is it because they expect instant results and in geology, there is no such thing? Just trying to understand this, that’s all.
There have been episodes of the Swiss-Italian Alps with ZERO snow cover and others, as noted elsewhere with deep snows. Same in South America, same here in North America. It’s part of this planet’s cyclical nature, and we have ZERO control over it.
In regard to forecasts: if a meteorologist has only a 50% chance of an accurate forecast for a week ahead, how can any of the Warmians, including Larry, expect any reasoning person (including me) to believe their forecasts, which seem to be aimed entirely at panicking people over naturally occurring cycles?
Like I said, I’m just trying to understand this hyperbole and really grandiose behavior.

Tom Dayton
Reply to  Sara
September 23, 2017 8:50 am

Sara: Expanding the scale of the graph will not change the trend. That’s grade school math.
Weather is not climate: https://skepticalscience.com/weather-forecasts-vs-climate-models-predictions-intermediate.htm
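Tom’s point about scale can be checked in a few lines of Python: a least-squares trend is the same whether the time axis is labeled in years or in months; only the units change. All data below are invented purely for illustration.

```python
# Least-squares slope: rescaling the time axis cannot change the trend,
# only the units it is expressed in. Synthetic data, for illustration only.
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = list(range(2000, 2010))
temps = [0.40 + 0.02 * i for i in range(10)]   # made-up 0.02 C/yr warming
months = [12 * y for y in years]               # same series on a monthly axis

per_year = ols_slope(years, temps)             # C per year
per_month = ols_slope(months, temps)           # C per month
print(per_year, per_month * 12)                # identical trend, different units
```

Stretching the chart out month by month flattens the *picture*, but the fitted slope is unchanged.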

Old Grump
September 22, 2017 5:15 pm

“That’s progress, a milestone — a successful decade-long forecast!”
I thought they had been telling us for many years that the models make projections not forecasts or predictions. Silly me for expecting anything approaching consistency.

Sixto
Reply to  Old Grump
September 22, 2017 5:40 pm

If Larry ever surfaces here again, it’s because he’s either a glutton for well-deserved punishment, or pathetically desperate to draw attention to his justly never visited site.

September 22, 2017 5:48 pm

More fallacy and nonsense from Larry.
Since when have “ensembles” become “predictions”?
That’s like claiming a broken clock accurately predicts the time twice a day.
It’s called an “ensemble” since the authors felt that numerous runs might hint at a prediction. Just like rooms full of monkeys and typewriters are writing Shakespeare.
Then the Joules shell game nonsense.
Do you have any idea exactly what joules are and what they measure?
Joules are a measurement of work over time. Similar to the concept of horsepower.
One horsepower represents moving 33,000 foot-pounds of work per minute.
One joule can be represented by lifting one apple one meter.
Representing ocean temperatures as joules is a false usage of joules. A very stretched rationale: one thousand joules (a kilojoule) equals one calorie of food energy.
How many calories do you consume a day Larry?
NOAA and its cohorts, converting Celsius temperatures to joules, are playing fast and very loose with energy and serially abusing the concept of temperature.
It also means that if NOAA and their fakirs had bothered to include proper error bars, the joules would be swamped by the temperature error ranges over minuscule thermometer readings.
Such wholehearted people work at NOAA. Honest they are not.
Don’t forget that these wonderful anti-science crackpots are comparing ship derived human observation whole digit temperatures to ocean floating buoys that allegedly resolve temperatures to a thousandth of a degree.
Provided that the device was properly calibrated and certified during installation with regular cyclical re-certifications. Which those thermometers never experience.
Enjoy your party Kummer.
Though it’s definitely not worth a celebration, except by alcoholics who party for any reason.
Call us when you have honest temperatures and genuine predictions.

ENRICO BELMONTE
Reply to  ATheoK
September 22, 2017 10:05 pm

Joule is a unit of energy. 1 joule = 0.00024 kilocalorie = 0.00024 Calorie (with capital C used in food Calories). 1 Calorie = 1 kilocalorie = 1,000 calories (with small c)
Horsepower is a unit of power. 1 horsepower = 746 watts = 746 joules/second
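For reference, the relations in the comment above can be encoded directly. The conversion constants are standard; the function names are just illustrative.

```python
# Energy vs. power, as constants (standard conversion values).
J_PER_KCAL = 4184.0    # 1 kilocalorie (food Calorie) = 4184 joules of energy
W_PER_HP = 746.0       # 1 horsepower ~ 746 watts = 746 joules per second

def joules_to_kcal(joules):
    """Energy in joules -> food Calories (kilocalories)."""
    return joules / J_PER_KCAL

def hp_to_joules_per_second(hp):
    """Power in horsepower -> joules per second (watts)."""
    return hp * W_PER_HP

print(round(joules_to_kcal(1.0), 5))   # 0.00024, matching the figure above
print(hp_to_joules_per_second(1.0))    # 746.0
```

The distinction is the point: a joule is an amount of energy, while horsepower and watts are rates of energy per unit time.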

DWR54
Reply to  ATheoK
September 22, 2017 10:15 pm

Don’t forget that these wonderful anti-science crackpots are comparing ship derived human observation whole digit temperatures to ocean floating buoys that allegedly resolve temperatures to a thousandth of a degree.

As I understand it no one is claiming that degree of accuracy for an individual instrument. The stated level of precision is the result of the averaging process over thousands of measurements.
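That averaging argument can be sketched numerically: readings rounded to whole degrees still yield a mean far more precise than any single reading. The true value and noise level below are invented for illustration.

```python
import random

# Sketch: individual readings rounded to whole degrees, yet the mean of many
# such readings recovers the underlying value to a small fraction of a degree.
random.seed(1)
true_sst = 15.3                       # hypothetical sea-surface temperature
readings = [round(true_sst + random.gauss(0, 1.0)) for _ in range(100000)]
mean = sum(readings) / len(readings)
print(round(mean, 2))                 # close to 15.3 despite whole-degree inputs
```

The standard error of the mean shrinks roughly as one over the square root of the number of measurements, which is why the stated precision can exceed that of any individual instrument.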

Hans-Georg
Reply to  DWR54
September 23, 2017 12:00 am

An average of false and incomplete measurements is the truth? That in turn is reminiscent of the original post.

Sandy In Limousin
Reply to  ATheoK
September 23, 2017 12:01 am

If you have enough broken clocks at least one will be approximately right at any point in the day.
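The quip is easy to simulate (a toy sketch; all numbers are arbitrary): with enough stopped clocks, some clock is nearly right at any moment, which says nothing about the skill of any one clock.

```python
import random

# 1000 stopped clocks, each frozen at a random minute of the day.
random.seed(0)
clocks = [random.randrange(24 * 60) for _ in range(1000)]

now = 14 * 60 + 30                              # query time: 2:30 pm, in minutes
best_error = min(abs(c - now) for c in clocks)  # minutes off, luckiest clock
print(best_error)
```

With 1000 clocks spread over 1440 minutes, the best one is almost always within a minute or two of the true time, purely by chance.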

hunter
September 23, 2017 5:03 am

This is a reasonable analysis, on the face of it.
But looking at a few points raises serious concerns:
The “simplified” graph Schmidt has come up with has huge error bars, much of which covers a negation of the claimed risk.
The rehabilitation of OHC, without acknowledging the other points made about climate by Dr. Pielke, who raised the importance of OHC in the first place, much less an apology to him for the shoddy treatment he received, suggests that once again Schmidt is simply cherry-picking and manipulating evidence to sound sciencey, and is still not actually engaging in science.
The clear problems with historical data, and its well-documented “editing”, are not addressed.
The failure of predictions offered over many years about negative impacts of what Schmidt et al. now call “climate change” is unaddressed.
The top conclusion, that yet more money be poured into “climate research”, is a non sequitur.
If the path is clear, then let us spend money on the infrastructure we need for the future. So-called “climate research” is already a major world industry. The quality of science produced has in many ways been either stagnant or dismal. Further rewarding “climate science” because Schmidt now chooses to use better marketing visuals makes little sense.
Nothing in his new graph is new.
To ignore the benefits and proven safety of nuclear power in favor of giant wind turbines and solar arrays industrializing the world’s open spaces seems strange. Wind is inherently extremely costly and unreliable. Solar is the same.
Both wind and solar at grid-significant capacities require vast amounts of open land and complex backup systems to work. Not to mention that both are far more vulnerable to weather disruption and expensive damage than fossil fuel or nuclear.
Finally, and most important, dismissing skeptical arguments because a new graph has been developed is not discussing an issue. And committing even more trillions to “climate” does not seem like a new way forward, unless jumping off a cliff is considered a good option.

hunter
September 23, 2017 5:35 am

Perhaps the biggest clue that the climate crisis promoters are operating in bad faith is the manipulative abuse of the scales used in the graphs.

Sara
Reply to  hunter
September 23, 2017 6:35 am

That is precisely the reason I challenged good ol’ Larry to stop compressing the scale of those charts and spread them out into a month-by-month/decade-by-decade format. He won’t do it because it will prove my point, that the highs and lows will flatline to nearly nothing, and make him look ridiculous.

Tom Dayton
Reply to  Sara
September 23, 2017 8:50 am

Sara: Changing the scale cannot change the trend.

Dean Rojas
September 23, 2017 5:48 am

Thanks for reconfirming that we have not had any rise in temperatures in the last 18 years and that the models are easy to manipulate.
Dean Rojas

John
September 23, 2017 8:12 am

Wowsers. I thought I’d ended up on the skeptical science site.
Where to begin. The mean of CMIP3 is what, exactly? I know it to be the models presented in AR4. However, AR4 actually includes a model set based on concentrations remaining at year-2000 levels. Were those runs part of the mean as well? If so, they brought it down by an awful lot. The different scenarios (other than the 2000 ones) pretty much match each other up to now. No deviation at this time; they go off in different directions later.
The GISS set has been changed massively in the past decade. How about you show the GISS version being used back then? The AR4 used an average of 4 datasets, I believe.
Also, by pure chance and by maintaining the very temperature record used for the verification, you managed to get a match twice in a decade. Based on whatever mean dark magic you used……
Erm, congrats?
I suspect someone like David Middleton would have a field day with this, referencing back to what was in AR4 and the datasets used. Of course, he might not be able to figure out which of the AR4 models were used for the mean. Or is it mean of means even….
Perhaps Euan Mearns could take a crack at it as well. He did quite well with https://wattsupwiththat.com/2014/06/12/the-temperature-forecasting-track-record-of-the-ipcc/
The idea that you can make a mean out of completely different model runs boggles the mind. There is zero point. I mean, it’s garbage.

richard verney
Reply to  John
September 23, 2017 8:49 am

How can the average of wrong, be right? Mathematically, it would only be a per chance happening.
There is a lack of detail on how many models make up the ensemble, but if there are, say, 50 models, each projecting a different outcome, they are simply averaging the 50 incorrect projections to produce what they call the mean. This lacks mathematical integrity.
Some years back Dr. Brown of Duke University made some very insightful comments on the absurdity of the model projections.

Tom Dayton
Reply to  John
September 23, 2017 8:56 am

John: Regarding the mean of the models, see my comment at https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/comment-page-1/#comment-2617732 and the one after that.
The models are the ones listed by the CMIP3 project: https://www.wcrp-climate.org/wgcm-cmip/wgcm-cmip3
Comparing models to observations for the past 10 years requires using the observations from the past 10 years. Your demand of using observations from only 10 years ago is nonsensical.

John
Reply to  Tom Dayton
September 23, 2017 9:33 am

Thanks. Weren’t the models used for CMIP3 initialized in 2000? They didn’t issue the report in 2007 and initialize the models in the same year and hope for the best, did they? Therefore should we not have 17 years of model successes to compare to temperature records? Including 7 years of data where various different GISS sets were used. Any chance of one from 2000 from Gavin?
Indeed. Wasn’t CMIP5 based on models initialized in 2007? That would be the 10 year one to use, starting from 2007, but of course, a 17 year proof of forecast would be much better than a 10 year one.
Thanks for the response re means, but I stand by what I said. Taking means from different models, all with different initializations/simulations, is nonsense. Even if you arrive at something that matched observations, it did so by pure mathematical chance from the numbers used.
How many of the models did Gavin use to get his mean? At various points in AR4 they use differing numbers of models for some of their claims, ranging from 14 to 24, but then they also state that each model has multiple initializations/simulations as well, which they interchange throughout the report. Sadly AR4 isn’t as clear as AR5 and CMIP5.
But, rather than giving links, why don’t you explicitly state the number of models and simulations Gavin used? It’s a complete guess, of course, but just curious if you will give it a go without giving a link.
Any chance Gavin would actually open up his method and data, do you think?

John
Reply to  Tom Dayton
September 23, 2017 10:34 am

A quick look at GISS history shows that the 2002, 2012 and 2013 versions of GISS would give different results for the years from 2007 than the 2016 version.
Meaning that until last year, Gavin wouldn’t have got the result, but would still likely have had something close to matches for different years in the different versions.
Hey, I wonder if he considered giving a mean of his own various versioned datasets and claiming a success that way? He would get even more than two matches!

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 10:42 am

John: Actually looking more than quickly at the GISS version history shows you are wrong: https://data.giss.nasa.gov/gistemp/history/

John
Reply to  Tom Dayton
September 23, 2017 10:52 am

You are linking to graphs that each show different values for the years 2007 to 2012 across the 2016, 2013, 2012 and 2002 versions. No matter whether you look at the 5-year means or the 1-year means, they show different values. As an FYI, that’s what the different colours for the lines mean (sorry, couldn’t resist).

Reply to  John
September 25, 2017 3:23 am

I will definitely be taking a crack at this very soon… with a rock hammer.

John
September 23, 2017 10:17 am

A good example of something we can use for means is ENSO. A La Nina is the form horse, according to the mean of CFSv2. You see a mean starting from now, going all the way out to June next year. You can then see all the members that made up the mean. If the mean forecast today matches observations for two of the next 9 months, was it a successful mean forecast if it differed in the other 7 months? I’d say, no. Because the individual runs are what is supposed to be using known science to make a forecast. The individual runs are also going to be right at some point across the time period, but that also doesn’t mean they nailed the forecast.
Now, that’s not to say you can’t use means for a forecast, but for the mean to be a successful forecast indicator, it would need to be correct 95% of the time.
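That kind of month-by-month scoring can be sketched as follows. Every number here is invented purely to illustrate the bookkeeping, not taken from CFSv2, and the tolerance is an arbitrary choice.

```python
# Hypothetical scoring of a 9-month ensemble-mean forecast against
# observations (all values invented for illustration).
forecast = [-0.5, -0.6, -0.7, -0.7, -0.6, -0.5, -0.3, -0.1, 0.0]
observed = [-0.4, -0.5, -0.9, -1.0, -0.8, -0.6, -0.2, 0.1, 0.3]
tolerance = 0.25   # how close counts as "correct"; an arbitrary threshold

hits = sum(abs(f - o) < tolerance for f, o in zip(forecast, observed))
hit_rate = hits / len(forecast)
print(hits, round(hit_rate, 2))   # months within tolerance, and the fraction
```

Whether a given hit rate counts as forecast skill then depends entirely on the threshold chosen up front, which is the commenter’s point.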

John Steinmetz
September 23, 2017 11:18 am

Why not report warming since 1940 instead of 1980? It would be much more informative. Why should we believe a good fit for 10 years is statistically relevant? How can one extra molecule of CO2 per 10,000 cause sea levels to rise appreciably?
Could others please help answer my questions.

Tom Dayton
Reply to  John Steinmetz
September 23, 2017 11:46 am

John Steinmetz: Regarding the number of CO2 molecules, see https://skepticalscience.com/CO2-trace-gas.htm
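For scale, the “molecules per 10,000” framing is simple arithmetic. The concentration values below are approximate round numbers, used only for illustration.

```python
# CO2 concentrations are quoted in parts per million (ppm);
# 1 ppm = 0.01 parts per 10,000.
def per_ten_thousand(ppm):
    return ppm / 100.0

preindustrial = per_ten_thousand(280)   # ~280 ppm before industrialization
recent = per_ten_thousand(405)          # roughly the value around 2017
print(preindustrial, recent, recent - preindustrial)
# 2.8 vs 4.05 molecules per 10,000: an increase of about 1.25
```

So the rise since pre-industrial times works out to slightly more than one extra CO2 molecule per 10,000 molecules of air, which is the figure the question refers to.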

Sixto
Reply to  Tom Dayton
September 23, 2017 12:02 pm

Tom,
SS’ “argument” is idiotic.
The fact is that an extra molecule of CO2 so far has had only beneficial effects, and more would be even better. There is zero evidence that more of an essential trace gas will cause anything bad to occur, let alone catastrophic. Sea level rise hasn’t sped up. Ice sheets aren’t melting. But Earth has greened.
Nothing bad happened when CO2 was almost 20 times higher than now. Instead, our planet enjoyed the Cambrian Period, in which large, hard-bodied animals evolved. When it was five or more times higher than now, the largest land animals of all time evolved. C3 plants would flourish under CO2 levels three times higher than now, although even just twice as much would be a great improvement.

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 12:11 pm

Sixto: John’s question was about the mechanism by which an increase in CO2 molecules can cause warming. I linked to an answer to that question. What you wrote is irrelevant to that topic. You are Gish Galloping.

Sixto
Reply to  Tom Dayton
September 23, 2017 12:20 pm

No, my reply was entirely valid and responsive.
The clowns and cartoonists at SS compared CO2 to arsenic. Idiotic, as I said.
You are resorting to an irrelevant analogy, not I.

Sixto
Reply to  Tom Dayton
September 23, 2017 12:56 pm

Apparently the idiocy of it wasn’t apparent to you, or you wouldn’t have used it. Most of SS is idiotic to ludicrous, not least its clownish perps dressing up as N@zi SS officers.
The reason the analogy is at best inappropriate is that we know the effect of increasing dosages of arsenic, which is bad. We also know the effect of increasing CO2 concentration, and it’s good for trees and food crops. There is no evidence that increasing CO2 has any negative effect on climate.

Tom Dayton
Reply to  John Steinmetz
September 23, 2017 12:36 pm

John Steinmetz: The post I linked for you earlier narrowly answers your narrow question of how a tiny amount of CO2 increase can have significant warming consequences. That post addresses the very narrow argument from incredulity about the tiny amount. When you want to learn more about the actual mechanism, please read the Basic tabbed pane, then the Intermediate one, then the Advanced one here: https://skepticalscience.com/empirical-evidence-for-co2-enhanced-greenhouse-effect-basic.htm

Tom Dayton
Reply to  Tom Dayton
September 23, 2017 12:37 pm

John Steinmetz: When you are ready to learn some more technical details of the mechanism, click the link inside the green Further Reading box at the bottom of that post.

September 23, 2017 11:25 am

Larry Kummer, thank you for the essay.
for the gridlock has left us unprepared for even the inevitable repeat of past extreme weather
I think that is not true. What has left us unprepared for the inevitable repeat of past extreme weather is the idea, more prevalent now than in the past, that preparation is not valuable. Take California, for example: the neglect of the transportation and flood-control infrastructure long predates the fanciful notion that vast distractions like the electricity portfolio requirements and the “bullet train to nowhere” can prevent climate change. The Houston region had stopped building up its flood-control infrastructure despite warnings and reasonable plans. Hurricane Sandy revealed a comparable neglect in the Philadelphia – New York City corridor. If the end of the “gridlock” produces more such distractions, then the US will remain unprepared for the inevitable repeats of past extreme weather.