From the Fabius Maximus Blog. Reposted here.
By Larry Kummer. From the Fabius Maximus website, 21 Sept 2017.
Summary: The gridlock might be breaking in the public policy response to climate change. Let’s hope so, for the gridlock has left us unprepared for even the inevitable repeat of past extreme weather — let alone what new challenges the future will hold for us.
The graph below was tweeted yesterday by Gavin Schmidt, Director of NASA's Goddard Institute for Space Studies. (Yesterday Zeke Hausfather at Carbon Brief posted a similar graph.) It shows another step forward in the public policy debate about climate change, in two ways.
(1) This graph demonstrates the predictive skill of climate models over a short time horizon of roughly ten years. CMIP3 was prepared in 2006-07 for the IPCC's AR4 report. That's progress, a milestone — a successful decade-long forecast!
(2) The graph uses basic statistics, something too rarely seen today in meteorology and climate science. For example, the descriptions of Hurricanes Harvey and Irma were very 19th century, as if modern statistics had not been invented. Compare Schmidt's graph with Climate Lab Book's updated version of the signature "spaghetti" graph — Figure 11.25a — from the IPCC's AR5 Working Group I report. Edward Tufte (The Visual Display of Quantitative Information) weeps in Heaven every time someone posts a spaghetti graph.
Note how the graphs differ in the display of the difference between observations and CMIP3 model output during 2005-2010. Schmidt’s graph shows that observations are near the ensemble mean. The updated Figure 11.25a shows observations near the bottom of the range of CMIP5 model outputs (Schmidt also provides his graph using CMIP5 model outputs).
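To make the "ensemble mean plus 95% range" comparison concrete, here is a minimal sketch in Python of how such an envelope is typically computed from a set of model runs and checked against an observational series. All numbers below are synthetic stand-ins, not actual CMIP3 output or observational data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for model runs and observations (not real CMIP3 or obs data).
years = np.arange(1980, 2018)
n_runs = 40

# Each "run": a common warming trend plus run-specific noise and a random offset.
trend = 0.018 * (years - years[0])                          # degC relative to 1980, illustrative
runs = (trend
        + rng.normal(0.0, 0.12, size=(n_runs, years.size))  # year-to-year noise
        + rng.normal(0.0, 0.05, size=(n_runs, 1)))          # per-run offset

# Synthetic "observations": the same trend with a different noise realization.
obs = trend + rng.normal(0.0, 0.10, size=years.size)

# Ensemble mean and a 95% envelope (2.5th to 97.5th percentile across runs).
ens_mean = runs.mean(axis=0)
lo, hi = np.percentile(runs, [2.5, 97.5], axis=0)

# How often do observations fall inside the envelope, and how close to the mean?
inside = (obs >= lo) & (obs <= hi)
print(f"Observations inside the 95% envelope in {inside.mean():.0%} of years")
print(f"Mean difference from ensemble mean: {np.mean(obs - ens_mean):+.3f} degC")
```

Schmidt's and Climate Lab Book's graphs differ mainly in which ensemble and which kind of envelope they display, which is why the same observations can look central in one and near the bottom in the other.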
Clearing away the underbrush so we can see the big issues.
This is one in a series of recent incremental steps forward in the climate change policy debate. Here are two more examples of clearing away relatively minor issues. Even baby steps add up.
(1) Ocean heat content (OHC) as the best metric of warming.
This was controversial when Roger Pielke Sr. first said it in 2003 (despite his eminent record, Skeptical Science called him a “climate misinformer” – for bogus reasons). Now many climate scientists consider OHC to be the best measure of global warming. Some point to changes in the ocean’s heat content as an explanation for the pause.
Graphs of OHC should convert any remaining deniers of global warming (there are some out there). This shows the increasing OHC of the top 700 meters of the oceans, from NOAA’s OHC page. See here for more information about the increase in OHC.
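As a rough sense of scale, the sketch below converts an assumed heat-content increase for the 0-700 m layer into an average temperature change using the heat-capacity relation ΔT = ΔQ / (m × c_p). The ocean area, density, specific heat, and the 15×10²² J uptake are round illustrative values, not figures taken from NOAA's page.

```python
# Back-of-the-envelope: convert an assumed ocean heat content change to an
# average temperature change of the 0-700 m layer. All numbers are rough,
# illustrative values (not NOAA's actual figures).

ocean_area_m2 = 3.6e14          # approximate global ocean surface area
layer_depth_m = 700.0           # depth of the layer considered
density_kg_m3 = 1025.0          # typical seawater density
specific_heat = 3990.0          # J/(kg*K), typical for seawater

delta_ohc_joules = 15e22        # assumed OHC increase for the layer (illustrative)

layer_mass_kg = ocean_area_m2 * layer_depth_m * density_kg_m3
delta_temp_k = delta_ohc_joules / (layer_mass_kg * specific_heat)

print(f"Implied average warming of the 0-700 m layer: {delta_temp_k:.2f} K")
# With these round numbers the result is on the order of 0.1-0.2 K, which is
# why OHC plots are usually shown in joules rather than degrees.
```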
(2) The end of the “pause” or “hiatus”.
Global atmospheric temperatures paused during the period roughly between the 1998 and 2016 El Ninos, especially according to the contemporaneous records (later adjustments slightly changed the picture). Activists said that the pause was an invention of deniers. To do so they had to conceal the scores of peer-reviewed papers identifying the pause, exploring its causes (there is still no consensus on this), and forecasting when it would end. They were quite successful at this, with the help of their journalist-accomplices.
Now that is behind us. As the graph below shows, atmospheric temperatures appear to have resumed their increase, or taken a new stair step up — as described in "Reconciling the signal and noise of atmospheric warming on decadal timescales", Roger N. Jones and James H. Ricketts, Earth System Dynamics, 8 (1), 2017.
What next in the public policy debate about climate change?
Perhaps now we can focus on the important issues. Here are my nominees for the two most important open issues.
(1) Validating climate models as providers of skillful long-term projections.
The key question has always been about future climate change. How will different aspects of weather change, and at what rate? Climate models provide these answers. But acceptable standards of accuracy and reliability differ between scientists' research and policy decisions that affect billions of people and the course of the global economy. We have limited resources; the list of threats is long (e.g., the oceans are dying). We need hard answers.
There has been astonishingly little work addressing this vital question. See major scientists discussing the need for it. We have the tools; a multidisciplinary team of experts (e.g., software engineers, statisticians, chemists), adequately funded, could do it in a year. Here is one way: Climate scientists can restart the climate policy debate & win: test the models! That post also lists (with links) the major papers in the absurdly small — and laughably inadequate — literature about validation of climate models.
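As a hedged illustration of what a simple out-of-sample test might look like (not the specific protocol proposed in the linked post), the sketch below scores a hypothetical forecast against observations over the forecast window and against a naive trend-extrapolation benchmark, all with invented numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annual temperature anomalies, split at a model "initialization" year.
years = np.arange(1980, 2018)
init_year = 2006
truth = 0.018 * (years - years[0]) + rng.normal(0, 0.10, years.size)

hind = years < init_year          # period the modelers could see
fcst = ~hind                      # out-of-sample forecast period

# Naive benchmark: extend the linear trend fitted to the hindcast period.
slope, intercept = np.polyfit(years[hind], truth[hind], 1)
benchmark = slope * years[fcst] + intercept

# A hypothetical model forecast (here: the same trend plus a small warm bias).
model_forecast = benchmark + 0.05

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

rmse_model = rmse(model_forecast, truth[fcst])
rmse_bench = rmse(benchmark, truth[fcst])

# Skill score > 0 means the model beats the naive benchmark.
skill = 1.0 - rmse_model / rmse_bench
print(f"Model RMSE: {rmse_model:.3f}  Benchmark RMSE: {rmse_bench:.3f}  Skill: {skill:+.2f}")
```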
There is a strong literature to draw on about how to test theories. Let’s use it.
- Thomas Kuhn tells us what we need to know about climate science.
- Daniel Davies’ insights about predictions can unlock the climate change debate.
- Karl Popper explains how to open the deadlocked climate policy debate.
- Milton Friedman’s advice about restarting the climate policy debate.
- Paul Krugman talks about economics. Climate scientists can learn from his insights.
- We must rely on forecasts by computer models. Are they reliable? (Many citations.)
- Paul Krugman explains how to break the climate policy deadlock.

(2) Modeling forcers of climate change (greenhouse gases, land use).
Climate models forecast climate based on the input of scenarios describing the world. These include factors such as the amounts of the major greenhouse gases in the atmosphere. These scenarios have improved in detail and sophistication in each IPCC report, but they remain an inadequate basis for making public policy.
The obvious missing element is a “business as usual” or baseline scenario. AR5 used four scenarios — Representative Concentration Pathways (RCPs). The worst was RCP8.5 — an ugly scenario of technological stagnation and rapid population growth, in which coal becomes the dominant fuel of the late 21st century (as it was in the late 19th century). Unfortunately, “despite not being explicitly designed as business as usual or mitigation scenarios” RCP8.5 has often been misrepresented as the “business as usual” scenario — becoming the basis for hundreds of predictions about our certain doom from climate change. Only recently have scientists begun shifting their attention to more realistic scenarios.
A basecase scenario would provide a useful basis for public policy. Also useful would be a scenario with likely continued progress in energy technology and continued declines in world fertility (e.g., we will get a contraceptive pill for men, eventually). That would show policy-makers and the public the possible rewards for policies that encourage these trends.
Conclusions
Science and public policy both usually advance by baby steps, incremental changes that can accomplish great things over time. But we can do better. Since 2009 my recommendations have been the same about our public policy response to climate change.
- Boost funding for climate sciences. Many key aspects (e.g., global temperature data collection and analysis) are grossly underfunded.
- Run government-funded climate research with tighter standards (e.g., posting of data and methods, review by unaffiliated experts), as we do for biomedical research.
- Do a review of the climate forecasting models by a multidisciplinary team of relevant experts who have not been central players in this debate. Include a broader pool than those who have dominated the field, such as geologists, chemists, statisticians and software engineers.
- We should begin a well-funded conversion to non-carbon-based energy sources, for completion by the second half of the 21st century — justified by both environmental and economic reasons (see these posts for details).
- Begin more aggressive efforts to prepare for extreme climate. We’re not prepared for a repeat of past extreme weather (e.g., a real hurricane hitting NYC), let alone predictable climate change (e.g., sea levels climbing, as they have for thousands of years).
- The most important one: break the public policy gridlock by running a fair test of the climate models.
For More Information
For more about the close agreement of short-term climate model temperature forecasts with observations, see “Factcheck: Climate models have not ‘exaggerated’ global warming” by Zeke Hausfather at Carbon Brief. To learn more about the state of climate change see The Rightful Place of Science: Disasters and Climate Change by Roger Pielke Jr. (Prof of Environmental Studies at U of CO-Boulder).
For more information, see all posts about the IPCC, the keys to understanding climate change, and these posts about the politics of climate change…
- Why the campaign to warn people about climate change failed: incompetence.
- Ignoring science to convince the public that we’re doomed by climate change.
- Look at the trends in extreme weather & see the state of the world.
- Focusing on worst case climate futures doesn’t work. It shouldn’t work.
- Paul Krugman shows why the climate campaign failed.
- Manichean paranoia has poisoned the climate debate.
- What you need to know & are not told about hurricanes — About Harvey and Irma.
Aren’t these the same people who were whining that those who claimed a pause used the 1998 El Nino as a baseline (which was false, BTW)? Now they’re using the peak of the current El Nino as vindication (along with questionable statistical analysis, what else is new). Temps are already falling rapidly from that peak. It will be fun to see their explanation after it pulls the temperature outside of their hyper-extended band.
correct link
https://fabiusmaximus.com/2017/09/21/a-small-step-for-climate-science-at-last-a-step-for-humanity/
don’t include target= when copy/paste
Really?
1) The warming is no different from the previous ongoing trend, so anyone could have guessed it.
2) The warming follows scenario C, in which big reductions in emissions should have occurred to reflect this reality.
3) The warming trend supporting scenario C indicates no concerns regarding dangerous climate change, because the warming would be less than 2°C per century.
4) The 10-year period cooled for most of its length, and only a strong El Niño towards the end has kept it roughly on track.
5) For most of the 10-year period global temperatures were below the middle-range forecast estimates.
6) The timeline is full of confirmation bias and ignores better observation tools that don’t fit the group’s vision.
7) Exaggerating the surface temperature trend is not a proud moment when it only roughly fits, thanks to a recent strong El Nino, a short period that has shown little change and raises no concerns for the future.
Bravo!
My reading of this is that without the most recent, very strong El Nino the skill of this projection (forecast, prediction, guess) would have been much less impressive. When a feature not accounted for in the model helps the model meet observations, then success is not exactly the word I would use.
The 95% model spread is about 0.8 deg C. That’s huge. You can drive a model truck through that.
An infinite number of monkeys with an infinite number of typewriters and one accidentally writes a daytime soap opera, so everybody runs around yelling “monkeys can write Shakespeare”…
Do we HAVE to support fallacious articles here?
a broken / stopped clock is right twice a day … still not fit for purpose 🙂
What is shown is a graph of observations at or below the ensemble forecast average the majority of the time, and diverging, with the observations not as warm as the ensemble average, until the El Nino spike higher takes the observations up to the ensemble average and ends the successful 10-year forecast.
Even if you buy the surface observations, along with their bias and adjustments: if the global climate models had predicted the 2015/16 spike higher in global temperatures from the El Nino and dialed that in (which of course they can’t), then the ensemble average would also have spiked higher and the actual temperatures would not have been able to spike up to the ensemble average.
If you take out the El Nino spike, it’s crystal clear that the global climate models are too warm.
Not just clear: crystal clear! When the end point on the graph represents a spike higher that everybody on the planet knows will not have sustained momentum, and the source uses it to rescue their global climate model that is clearly too warm, it tells you about the objectivity of the source.
I would bet a large sum of money that observations cannot keep up with the slope of the temperature increase on the global climate model ensemble average the next 10 years.
It’s possible, but the reality/science as viewed by an independent operational meteorologist making observations of the global atmosphere for the past 35 years says no. We may get warmer but not at the rate predicted by the models.
This is not to say that global climate models don’t have value. They clearly do, but only when used honestly and with adjustments that reconcile differences between observations and model predictions.
It’s funny how the climate news about the same exact thing can change and be spun. Before the El Nino, “The Pause” or warming “slow down” was being legitimately discussed, even by those who thought we were headed for catastrophic warming.
Then, we have 2 years with a Super El Nino and a global temperature spike higher changing the latest tune to “the models are now confirmed”
If they were confirmed, then the El Nino spike higher should have taken the observations far into the upper range of the ensemble average.
Another problem that I see is that the observations show an increase of more than 1 Deg. C from the starting point in 1975 to the top of the El Nino spike. Satellites do/did not show that much of an increase and 1975 was the low point from the previous 30 years of modest global cooling.
So the starting point is at THE spike lower and the end point is just after a spike higher.
That’s not how you should get a trend to judge model performance.
http://www.drroyspencer.com/2017/09/uah-global-temperature-update-for-august-2017-0-41-deg-c/
How about, instead, we cherry-pick 2012, during a cooling La Nina, as our end point and use the cooler satellite data.
So the cold extreme in observations is below 97% of climate models (using that cherry-pick). The warm extreme cherry-pick in observations from this article spikes up to reach the ensemble average.
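A small sketch shows how sensitive such decade-scale trends are to the choice of endpoints. The series below is synthetic, with warm and cool spikes placed by hand to mimic El Niño and La Niña years; it is not real temperature data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual anomaly series with a modest underlying trend (illustrative).
years = np.arange(1979, 2018)
anom = 0.012 * (years - years[0]) + rng.normal(0, 0.08, years.size)

# Hand-placed spikes to mimic strong El Nino (warm) and La Nina (cool) years.
anom[years == 1998] += 0.25
anom[years == 2016] += 0.30
anom[years == 2012] -= 0.15

def decadal_trend(y0, y1):
    """Least-squares slope over [y0, y1], expressed per decade."""
    sel = (years >= y0) & (years <= y1)
    slope = np.polyfit(years[sel], anom[sel], 1)[0]
    return slope * 10.0

for y0, y1 in [(1979, 2016), (1979, 2012), (1998, 2012), (2000, 2014)]:
    print(f"{y0}-{y1}: {decadal_trend(y0, y1):+.3f} degC/decade")
# Windows that end on the 2016 spike, or start on the 1998 spike, give noticeably
# different slopes than windows that avoid both -- same data, different endpoints.
```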
I would bet a large sum of money that observations cannot keep up with the slope of the temperature increase on the global climate model ensemble average the next 10 years.
Pre or post ‘adjustment’ ?
Mike Maquire: That graph by Christy is severely flawed and misleading. Here is one critique; click the link in its first sentence to see a prior critique: http://www.realclimate.org/index.php/archives/2017/03/the-true-meaning-of-numbers/
What a load of blather!
The models may have started with a different objective originally, but now they and their results are nothing more than a tool used by the UN-IPCC to frighten politicians.
Regardless of all the crud written above, they are not accurate, as they do not allow for the real physical changes that happen on this planet and change the weather trends, and thus the climate, over time. But then again, that is not what the elites of the UN want — they want a stick to bash the western nations with. These models are neither scientific nor clearly statistical (has the code been openly validated?).
So let me repeat — these models are nothing more than a tool used by the UN-IPCC to frighten politicians and the public — that is their ONLY use!
So if there were only 2 models, and model one predicted a temp that turned out to be 0.5 degrees higher than observed and model two predicted a temp that turned out to be 0.5 degrees lower than observed … does anyone in their right mind think that taking the model “average” and saying it matches observations means the models are predictive?
I guarantee you there are models being run that are tuned to show lower temperatures so that the models’ “average” manages to come closer to reality … the guys with the hot models and the guys with the cool models (both sets of which are way off reality) coordinate to ensure the average doesn’t look as bad … all the gloom and doom folks point at the hot models, of course …
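The arithmetic behind this objection is easy to demonstrate with toy numbers: two badly biased “models” whose errors happen to cancel yield an average that matches observations even though neither model has any skill.

```python
# Toy illustration: two biased "models" whose average looks accurate.
observed = 0.6                      # degC anomaly, invented for illustration

model_hot = observed + 0.5          # runs 0.5 degC too warm
model_cool = observed - 0.5         # runs 0.5 degC too cool

ensemble_mean = (model_hot + model_cool) / 2

print(f"Hot model error:     {model_hot - observed:+.1f} degC")
print(f"Cool model error:    {model_cool - observed:+.1f} degC")
print(f"Ensemble-mean error: {ensemble_mean - observed:+.1f} degC")
# The mean matches observations exactly, yet neither model has predictive skill.
# Agreement of the ensemble mean alone says little about the individual models.
```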
Claiming Schmidt’s graph uses “basic statistics” is mystifying to me. You might be able to apply basic statistics to many runs of the same model, but the CMIP3 ensemble is anything but that. Applying basic statistics to a variety of model runs produced by a variety of models doesn’t actually produce any statistically meaningful prediction.
However, I’m tremendously impressed at how well the ensemble mean wiggle matches in a 5-year spread around Pinatubo. Even the “95% range” wiggle matches extremely well. That feat is matched nowhere else on the graph.
Including 2006-2016, where the ensemble mean neither wiggle matches the observational record nor shows a similar magnitude. That “estimated 2017” appears to be bang-on is a product of the choice of where to center the anomaly. I’d be interested in a chart that compares the *absolute* temperature instead of anomaly.
One also wonders if the emissions over the last ten years tracked SRES A1B *and* the resulting concentration of greenhouse gases in the atmosphere matches what the models calculated based on those emissions. Unless both are true, the models can’t collectively claim that their projection matching observations was the result of skill.
Please convert the OHC chart to temperature and show error bars in the estimate, so we can better understand the scale of the detected warming in part of the ocean.
Projections by Hansen et al. 1981 and Hansen et al. 1988 also have been accurate, far more than 10 years after their projections: http://www.realclimate.org/index.php/climate-model-projections-compared-to-observations/
Tom, you obviously believe that DLWIR can heat the Ocean.
But it would appear that all Watts or Joules are not created equal.
Can you explain why it cannot be made to do any “work” as sunlight can? After all, warming the oceans must be doing work, isn’t it?
Why can’t we harvest it?
Can you explain why Solar Ovens, which can focus the sun to raise temperatures up to 450 degrees C, at night turn into Refrigerators using DLWIR and lower the temperature of the object in them?
Ask your friends over at Science of Doom and elsewhere why we can’t use it to do work; after all, it is Heat, isn’t it?
Oooh, Tom. Pinched your foreskin again on the realclimate site you posted: their chart of actual measurements generally tracks Hansen’s 1988 “projections” Scenario C (constant 2000 forcing), the low-ball scenario. Observations only jump up to his Scenario B (BAU) in the 2014-16 Super El Nino years. His Scenario A (high emissions), which oddly enough tracks actual CO2 emissions, is never even remotely reflected in actual temperature measurements.
“What changed in 2013-2015?”
Gavin took over from Jimmy. !!!!
Not intended to be that funny.
I noted a couple of years ago that divergence from temperature reality took on a new breath of life when Gavin took over.
It was as though Jimmy had thought he had mutilated the data enough..
But Gavin was INTENT on doing far more. !
Forrest: Your questions do not make sense to me, because you seem to be giving those scenarios definitions they do not fit. Here is a good explanation; after you read the Basic tabbed pane, read the Intermediate one and then the Advanced one: https://skepticalscience.com/Hansen-1988-prediction.htm
Tom, skepticalscience’s arm waving cannot hide the fact that global temperatures did not evolve as predicted by Hansen.
Tom,
That’s almost an urban legend. Schmidt’s graph here shows results of well-documented models and their equally well-documented forecasts. Eventually the forecast-observation match will be published in some form of peer-reviewed report. There is nothing remotely like that for Hansen’s long-ago papers.
There is almost nothing documenting and reviewing the 1981 paper. Here is the more important one: “Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model” by Hansen et al., Journal of Geophysical Research, 20 August 1988.
Its skill is somewhat evaluated in “Skill and uncertainty in climate models” by Julia C. Hargreaves, WIREs: Climate Change, July/Aug 2010 (ungated copy). She reported that “efforts to reproduce the original model runs have not yet been successful”, so she examined results for the scenario that in 1988 Hansen “described as the most realistic”. How realistic she doesn’t say (no comparison of the scenarios vs. actual observations); nor can we know how the forecast would change using observations as inputs.
Two blog posts discuss this forecast (for people who care about such things): “Evaluating Jim Hansen’s 1988 Climate Forecast” (Roger Pielke Jr, May 2006) and “A detailed look at Hansen’s 1988 projections” (Dana Nuccitelli, Skeptical Science, Sept 2010).
Before popping the corks and revamping the world economy on the basis of Hansen’s 1988 paper, there are some questions needing answers. Why no peer-reviewed analysis? What does the accuracy (if any) of his 1988 work tell us about current models? Why so many mentions of Hansen’s 1988 paper — and few or no reviews of the models used in the second and third Assessment Reports? Those would provide multi-decade forecast records.
Perhaps the best known attempt at model validation is “Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model” by Hansen et al., Journal of Geophysical Research, 20 August 1988.
Its skill is somewhat evaluated in “Skill and uncertainty in climate models” by Julia C. Hargreaves, WIREs: Climate Change, July/Aug 2010 (ungated copy). She reported that “efforts to reproduce the original model runs have not yet been successful”, so she examined results for the scenario that in 1988 Hansen “described as the most realistic”. How realistic she doesn’t say (no comparison of the scenarios vs. actual observations); nor can we know how the forecast would change using observations as inputs.
Two blog posts discuss this forecast (for people who care about such things): “Evaluating Jim Hansen’s 1988 Climate Forecast” (Roger Pielke Jr, May 2006) and “A detailed look at Hansen’s 1988 projections” (Dana Nuccitelli, Skeptical Science, Sept 2010).
Climate scientists can restart the climate policy debate & win: test the models! Especially note the last section: cites and links to a wide range of papers on validation of climate models. Mostly weak tea, not an adequate foundation for anything — let alone policies to save the world.
The small literature on this vital subject tells us a lot about the situation.
Here is just one excellent discussion of Hansen et al. 1988’s projections’ skill; after you read the Basic tabbed pane, read the Intermediate one and then the Advanced one: https://skepticalscience.com/Hansen-1988-prediction.htm
Your links to Dr. Pielke’s articles appear to be broken.
This comment by Tom Curtis conveniently links to multiple and more recent evaluations of which of Hansen’s scenarios most closely match the actual forcings: https://skepticalscience.com/Hansen-1988-prediction-advanced.htm#107965
Forrest: The scenarios in Hansen’s projections are not things to be matched. They included (i.e., varied across scenarios) only greenhouse gas forcings (not just CO2, but not reflective aerosols). Also, the sensitivity emerging from his model was too high, as has been known and acknowledged by climatologists for many years. Details: https://skepticalscience.com/Hansen-1988-prediction-advanced.htm
Hilariously, Hansen’s scenario for drastic reductions in GHGs has come closest, although still far off the mark, despite his maximal scenario for CO2 actually having been realized.
Use that scenario, and there is less than no coincidence between his simpleminded extrapolations and reality.
Forrest: Here is more recent detail about the actual forcings versus Hansen’s scenarios: “Overall in order to evaluate which scenario has been closest to reality, we need to evaluate all radiative forcings. In terms of GHGs only, the result has fallen between Scenarios B and C. In terms of all radiative forcings, the result has fallen closest to and about 16% below Scenario B. Scenario A is the furthest from reality, which is a very fortunate result.” https://skepticalscience.com/hansen-1988-update-which-scenario-closest-to-reality.html
Forrest: By “not things to be matched” I meant merely that I got the impression, from your questions about which scenario I expect Hansen’s projections to match in the future, that you thought Hansen was predicting which scenario would happen in the future. He was not. His model did not. His model was intended to project the temperature response to whatever greenhouse gas forcings actually happen in the future. Lacking a time machine or infinite computer time, he made three scenarios of those forcings, fully expecting that none of those scenarios would actually come true in its details, but hoping that their range would encompass the future reality.
Hansen, like all other climate modelers, made assumptions about volcanic eruptions simply as best guesses. For example, the CMIP web site gives instructions to CMIP modelers on what to assume for those aspects. They are not trying to predict volcanic eruptions. They are merely trying to put something reasonable into their climate models, knowing full well that those aspects will not be correct.
El Nino and La Nina are not forcings. They are not entered into Hansen’s or anyone else’s models. They are emergent phenomena as some of the internal variability of the climate system, largely responsible for the wiggles of the individual model runs.
Forrest: I don’t know which of Hansen’s scenarios most likely will turn out to be closest to the reality. I haven’t even considered it. I don’t think about Hansen’s scenarios. Instead I consider the more thorough and up to date RCPs: https://skepticalscience.com/rcp.php.
I don’t know in what ways the reality changed in 2013-2014. Don’t much care. Focused on newer, better models.
El Nino and La Nina are only weather, not climate. They greatly affect short term atmospheric and ocean temperatures, but being only internal variability they are unforced variability. They do not affect the long term trend. They cancel each other out over the long term, and anyway merely shift the forced energy accumulation between oceans and air. See my comment at https://wattsupwiththat.com/2017/09/22/a-climate-science-milestone-a-successful-10-year-forecast/#comment-2617732 and the one following that.
CMIP3 individual realisations (20C3M+SRESA1B)
What is this?
I thought 20 was the number of models, but I googled it and there were 25 models used in CMIP3.
SRES stands for Special Report on Emission Scenarios. According to the IPCC website:
“Since the SRES was not approved until 15 March 2000, it was too late for the modelling community to incorporate the final approved scenarios in their models and have the results available in time for this Third Assessment Report.”
So it appears that SRESA1B was not part of the original forecast.
I was correct. From Gavin’s site:
“Model spread is the 95% envelope of global mean surface temperature anomalies from all individual CMIP3 simulations (using the SRES A1B projection post-2000). Observations are the standard quasi-global estimates of anomalies with no adjustment for spatial coverage or the use of SST instead of SAT over the open ocean. Last updated Feb 2017.”
So Gavin is presenting information as if it was part of the original CMIP3 forecasts, when it was not.
Enough models to hit the side of a barn. !!
Then move the barn as necessary. !
I work in the financial industry, and do you know what we call a modeler whose model is off by 50% or more every year? A barista … or a waiter …
The basic premise here is false. The CMIP5 model set (102 runs from 32? different model groups) is hindcast before YE2005 (with parameter tuning) and forecast from Jan 1 2006. CMIP3 is hindcast from 2000 and forecast from 2001. It is not when the models were run. It is the initialization date that determines forecast/hindcast. Kummer should have known this.
Christy’s 29 March 2017 publicly available congressional testimony shows both graphically and statistically that all but one CMIP5 run failed to even get close to balloon, satellite, and reanalysis estimates, all of which are in good agreement with each other. The lone CMIP5 exception reasonably tracking reality is the Russian model INM-CM4. It has higher ocean thermal inertia, lower water vapor feedback, and low sensitivity.
ristvan,
“It is the initialization date that determines forecast/hindcast.”
That’s the practice in climate science. It’s one of their quaint customs that have resulted in three decades of chasing tails in the public policy debate.
In finance model failures mean unemployment. So the hindcast/forecast label is based upon the date of the model — to avoid influence of tuning the model to match past results.
Public policy issues concern spending trillions of dollars, the path of the national and world economies, and allocation of scarce resources against the many serious threats the world faces. The standards are higher than the usual academic debate.
Ristvan’s comment reminds me of what I consider the ur-narrative of the climate change policy debate — explaining why it failed. It’s a story, FWIW.
My first encounter with the climate wars was soon after James Hansen’s boffo Senate testimony in 1988. He spoke before the Quantitative Methods Group of the San Francisco Society of Security Analysts. These lunches of 20 or 30 included some of the finest mathematicians in the nation — Wall Street pays to get the best (I was a very junior member). They laughed at Hansen’s presentation of hindcast as definitive evidence, and ripped his methods. It was a typical firefight among academics, but brutal to watch.
As always, what happened afterwards is the important part. Hansen had received feedback from some smart experts. So, with the future of the world at stake, he went back to his office and revised his presentation to respond to their critique. Right or wrong, convincing people means responding to their objections — not just repeatedly saying “We’re right and you’re wrong.”
Nope. He ignored them. (Now he would be accompanied by a chorus of Leftists who would chant “denier denier” when anyone raised an objection.)
Three decades later we’re still hearing the exact same kind of presentations. Rebuttals by experts of various kinds are met by screams of “deniers!” Requests for independent verification — second opinions by unaffiliated experts — are ignored. This is not the behavior of people who believe the world is at risk.
For more about this see How we broke the climate change debates. Lessons learned for the future.
Look, the models do not reflect early 20th Century cooling and warming. Christ! They can’t even get history straight.
You are making a compelling case for minarchism. The standards should be higher: gas temperature is driven by mass, pressure and volume (pV=nRT), not by composition. This can be observed, measured and verified in a laboratory and on the planets of the solar system. The impact of part-per-million variations in the atmospheric composition on temperature is scientifically comparable to homeopathy.
You are free to disagree with it, but while we are discussing public policies:
Anthropogenic climate change policy is anti-mankind by its own definition. As such, it is substandard by any measure. At best, it’s a waste of scarce shared resources and, at worst, against human rights as the UN defines them. Although I recognise some of the intentions as good, they pave the road in an oppressive enough direction to trigger resistance.
Did IPCC publish this graph in 2007, or did authors re-create it only in 2017?
George,
“Did IPCC publish this graph in 2007, or did authors re-create it only in 2017?”
The post contains the relevant graph from AR5. I don’t know what graph was in AR4.
Why does it matter? The data are the evidence, not the graphic presenting them.
It matters because they did not publish a forecast 10 years ago. Publishing it in 2017 makes it a hindcast, not a forecast.
George,
“Publishing it in 2017 makes it a hindcast, not a forecast.”
You appear unclear about how this works. The forecast was published in AR4 and other publications back then. The graph comparing the forecast with observations thru now has to be published now.
It is an old trick. Lord Knowitall to butler James:
– Think of a number between 1 and 10.
– Eight, Your Lordship.
– I knew you would say Eight. Read a note in the flower pot on the windowsill.
– “I knew you would say Eight”. How did you do it, Your Lordship?
Where in AR4 did they publish it? The closest thing I found was FAQ 8.1 Fig 1, and that one has a wildly different temperature data.
Forrest,
“Talking about tuning, isn’t that one of the core problems with the models?”
Not at all. “Tuning” is a factor to consider in the validation of models. The easy solution is to consider as “forecasts” their predictions made after the model was created.
Forrest, here is a recent article on tuning: http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-15-00135.1
Larry Kummer,
Let’s look at the CMIP3 ‘prediction’ another way. If the “CMIP3 individual realizations” were a chart of stock values for a group of similar industry stocks (colored lines), and the black line were the predicted stock values from your financial advisor’s industry model, how would you fare investing in the industry group?
From some initial investment in year 1998, when the investments and model are equal valued, you would promptly lose a significant amount; it would take about two years before your investment was again on a par with the prediction. For about 5 or 6 years, the investments and prediction would dance around each other, with the less volatile prediction doing slightly better on average. Then, starting about 2007, your investment values would dive and not recover until about 2014. If you were buying and selling during this time, you could have lost a lot of money. If you bought during downturns and sold on upturns, you would definitely be in the negative realm. If you sold on downturns with a stop-loss order, and bought on upturns, you might see some gains, but it would be dependent on timing and your tolerance for pain. Indeed, the transient ‘recovery’ about 2015 can be expected reasonably to be followed by another downturn (which has happened already), and it is probable that it will go lower yet.
I think that most investors would conclude that the ‘Black Line’ prediction, were it real, would be a better overall performer (running hotter, especially between about 2007 and 2014) than the colored-line industry group. Really, there is only comparable performance between about 2001 and 2007 – six years, not 10! If there is an analogy between the predictive ability of any model and faith in the performance of a stock, then I think that most investors would be disappointed with this model. Only the general trends are similar, not the actual values at any point in time.
What should be considered is that while I offer an analogy to better deal with temperature changes, detached from ideological commitments, it shouldn’t be forgotten that all the ‘solutions’ to the claimed anthropogenic ‘problems’ will cost money. That is, we will be making future investments and we should be certain that the models we follow are reliable.
Clyde,
That was my business for 30 years, so I understand the logic. I don’t see how it applied to climate forecasts.
Forecasts are evaluated using statistical tools, not word analogies. In fact, investments are now evaluated using statistical tools — which is why we now know that few (perhaps none) can outperform passive strategies. This was not clear for decades using the kind of word pictures and chart junk that used to be investment analysis.
Larry,
OK, my words were obviously wasted on you. I was trying to explain why I would not consider the graph to represent the level of skill many demand when their money is at risk.
I went back and re-read the section (and the links) where you make the claim “This graph shows a climate model’s demonstration of predictive skill over a short time horizon of roughly ten years.” You further claim, “The graph uses basic statistics,…” I’m sorry, but I don’t see your claimed “basic statistics.” I see a line that supposedly represents an average of a number of ‘projections,’ accompanied by a claimed 95% envelope, and several lines that represent smoothed averages of different temperature data sets. That’s it! No information on SD of the data sets, no information on correlation between the lines, no information on the variance of the slopes, etc. That is, your claim of “basic statistics” is without substance unless you want to call graphing lines “basic statistics.”
Call me unconvinced. However, let me close with a quote from the chapter on the evaluation of
climate models from Working Group I, AR5: “Although future climate projections CANNOT BE DIRECTLY EVALUATED, climate models are based, to a LARGE EXTENT, on verifiable physical principles and are able to reproduce many important aspects of past response to external forcing.” You have more faith in the skill of ‘projections’ than the authors of chapter 9.
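For what “basic statistics” might look like in practice, here is a sketch that fits a least-squares trend, with a standard error, to an observational series and an ensemble-mean series. The data are synthetic; the point is only that comparing slopes and their uncertainties is a statistical statement, while overlaying lines is not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic stand-ins for an observational series and an ensemble mean (illustrative).
years = np.arange(1998, 2018)
obs = 0.015 * (years - years[0]) + rng.normal(0, 0.09, years.size)
ens = 0.020 * (years - years[0]) + rng.normal(0, 0.03, years.size)

for name, series in [("observations", obs), ("ensemble mean", ens)]:
    fit = stats.linregress(years, series)
    per_decade = fit.slope * 10          # trend in degC per decade
    se_decade = fit.stderr * 10          # standard error of that trend
    print(f"{name:>13}: {per_decade:+.3f} +/- {se_decade:.3f} degC/decade "
          f"(r^2 = {fit.rvalue**2:.2f})")
# A formal comparison would test whether the two slopes differ by more than
# their combined uncertainty (and account for autocorrelation, ignored here).
```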
How about holding climate ‘science’ to the same ,ethical , professional practice and academic standard as other sciences. Rather than thinking it is OK to have ‘standards ‘ in all these areas that would see an undergraduate getting a big fat ‘F’ if they used them in an essay ?
knr,
Could you please state your objections in a clearer form? I don’t speak Rant.
Out of hundreds of divergent hypotheses, one of them was bound to produce a correlation with reality, eventually. Still, prediction and reproduction are the hallmarks of viable science.
nn,
“Out of hundreds of divergent hypotheses”
The major modeling system now is CMIP5. As the number “5” suggests, the IPCC has not been using hundreds of hypotheses/models.
NN,
“The number of models would be measured by the number of strands of spaghetti, wouldn’t it?”
That’s why observations are best compared with the ensemble mean plus the confidence range.
The number “5” does not represent anything to do with the number of models used (300, iirc)
Why the deliberate mis-direction?
30 , not 300 !!
Andy,
“The number “5” does not represent anything to do with the number of models used (300, iirc)”
Not everything in climate science can be reduced to kindergarten level. Read about the CMIP project to learn what it is, and what the different generations mean.
Wikipedia is a good starting place. For more detail see their website:
http://cmip-pcmdi.llnl.gov/
Each strand of spaghetti is one model run. Some models are run multiple times, so there are fewer models than strands of spaghetti. You can look at the CMIP site for details.
“As the number ‘5’ suggests, the IPCC has not been using hundreds of hypotheses/models.”
The number “5” suggests NOTHING of the kind.
“Not everything in climate science can be reduced to kindergarten level”
Yet you keep managing to. !!
“As the number ‘5’ suggests”……..
The “5” has zero meaning related to the number of models. Why did you imply it did?
“the gridlock has left us unprepared for even the inevitable repeat of past extreme weather”
Ah, so this is about weather, not climate. Thought so. And somehow, man can control the weather. Good luck with that.
Bruce,
“so this is about weather, not climate.”
Weather (e.g., storms, drought) is how we experience climate.
“And somehow, man can control the weather.”
That’s quite a delusional summary of the debate.
That is an exact summary of the reduce-CO2 debate, which is what the IPCC is all about.
Your definition of weather gives the climate faithful a case of the vapors, followed by end of discussion.
And your point about how the cage of climate obsession leaves us less prepared is likely proof that nearly all money spent on climate science and renewables is wasted money.
Forrest,
“The first thing I noticed was that the grey area for hindcasts seems to have a similar range to the forecasts. Surely they should match known figures better than they do.”
No. A closer match would be clear evidence that they were improperly tuned to match past observations.
Forrest,
“How do you distinguish between the two?”
By testing the model by comparing observations with forecasts, not hindcasts. As Popper said, successful predictions are the gold standard for science.
Forrest, I’m not sure that a model that doesn’t reflect exactly the past is a very good model. Using parameter fiddling is dishonest.
In fact one should pay more attention to the past when evaluating the future.
All these graphs show is the flexibility anomalies offer when matching models to temp series. What exactly is being claimed to be accurately forecast? Obviously not the absolute value of the anomaly, since anomalies can be adjusted across a wide range by simply choosing different base periods, as the following graph shows:
http://i68.tinypic.com/nv4pkp.jpg
Assuming I downloaded the data sets correctly, ’20C3M A1B’ is the same as what Gavin shows but extending back to 1900. The anomalies are relative to 1980-1999. HadCRUT4 anomalies are relative to 1961-1990, so they need to be adjusted to the same base, which is what ‘Adj HadCRUT4’ shows. All well and good; the forecast looks pretty good (forgetting so-called error distributions), as does the period Gavin includes. But 1930-1950 doesn’t look that good. This calls into question how well the forecast model is really working.
And just to show what can be done with anomalies we could take the not unreasonable view that we’re looking at the 20th century for our base period and we’ll adjust HadCRUT4 so it lines up on the middle two decades of 20C3M (1940-1959). That’s ‘Adj2 HadCRUT4’ on the graph. You will see it’s now getting well off beam over the forecast period.
The conversion of the various series to absolutes would give one justifiable stake in the ground, but the range of model temps and the consequent error ranges would cover most conceivable futures on a decadal scale.
Largely bread and circuses IMHO.
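The re-baselining HAS describes is easy to reproduce: subtracting the mean over one reference period or another shifts the whole series up or down without changing its shape, and so changes where it sits relative to a model series. The numbers below are synthetic, not HadCRUT4 or 20C3M output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic annual anomalies standing in for an observational series (illustrative).
years = np.arange(1900, 2018)
series = (0.007 * (years - years[0])
          + 0.2 * np.sin((years - 1900) / 9.0)
          + rng.normal(0, 0.08, years.size))

def rebaseline(values, yrs, start, end):
    """Express values as anomalies relative to the mean over [start, end]."""
    ref = values[(yrs >= start) & (yrs <= end)].mean()
    return values - ref

a_6190 = rebaseline(series, years, 1961, 1990)   # HadCRUT4-style base period
a_8099 = rebaseline(series, years, 1980, 1999)   # base period used in the 20C3M plot
a_4059 = rebaseline(series, years, 1940, 1959)   # the alternative HAS suggests

for label, a in [("1961-1990", a_6190), ("1980-1999", a_8099), ("1940-1959", a_4059)]:
    print(f"base {label}: 2017 anomaly = {a[-1]:+.2f} degC")
# Identical data, identical trend -- only the vertical offset relative to another
# series changes with the choice of base period.
```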
HAS: We care about trends. Trends are slopes. The baseline is irrelevant to the trend. Changing the baseline merely moves the line up and down the y axis without affecting the trend. That’s elementary geometry. If you want to know more about trends and anomalies versus absolute temperatures, read this recent explanation: http://www.realclimate.org/index.php/archives/2017/08/observations-reanalyses-and-the-elusive-absolute-global-mean-temperature/
Tom, thanks, I know that.
But you’ll appreciate we are dealing with a different problem when evaluating the output from models from what you linked to. And unfortunately trends in anomalies are even more fraught when it comes to evaluating forecasts from multiple models.
As I said bread and circuses, but I guess it’s Twitter.
HAS: Your reply makes no sense.
Tom there were two sentences of substance in what I wrote.
The first said that your link didn’t deal with the problem in hand – evaluating the output of climate models against global estimates of temperatures. Your link deals with the construction of the anomaly and its rationale.
I’m happy to explain the difference if it isn’t obvious.
The second said that testing trends in anomalies derived from ensembles was fraught, particularly the problems of using a simple linear combination of the output from non-linear models as a well behaved statistic over time and across models.
Putting out stuff like this graph and making bold claims based on it detracts from the complexity of the science.
Hence my last sentence.
Okay, Fabius, you can throw all the charts and graphs you want to at me, but you still won’t get my money. One-half of one degree, centigrade or Fahrenheit, is irrelevant, whether long-term or short-term. Maybe you don’t know the difference between an omega blocking high dredged up by Irma and supported by Jose, blocking cooler autumn breezes coming from the northwest, but I DO. And, Sport, it’s W-E-A-T-H-E-R.
The exaggerated ups and downs of temperature swings don’t amount to a hill of beans because if they were put in proper size instead of a 3×3 inch twitterpated graph, they’d be a nearly flat line. Squeezing and distorting the lack of real difference in average temperature is nothing but a grab and a passionate plea for MORE MONEY, MORE FUNDING, MORE THIS AND THAT including attention.
If you stretch those charts of yours out to full-size, meaning actual lengths of time, those exaggerated ups and downs will flatline, just like a bad EKG readout.
If the temperature changes 10 degrees from one day to the next or one month to the next, that is weather. It is NOT climate. And unless you can physically prove otherwise (which you haven’t), you’ve done nothing but convince me that you’re a crank and a carnival barker at a county fair.
Just so you know, we’ve been in a solar minimum since 2008, when the Sun blew a wad and went to sleep until the fall of 2010, and did NOT come back to its previous level of activity, which surprised NASA. I have that all recorded, Kiddo. We’re in a solar minimum and will be for a while. You have no control over that. Humans can’t even control their own digestive systems or their emotions, so what in the blue-eyed bleeping world makes you think any of us puny puddle hoppers can control the freaking climate?
I’m truly curious to know if you understand the SIZE of this planet. I’m not sure that you do. Believe me, SPORT, it has its own agenda and it doesn’t give a crap about yours. But if you really ARE interested in reducing carbon emissions, which plants desperately need to stay alive, you could help the Cause of Plants by wearing a rebreathing device.
Smooches!!!!
(Snipped) MOD
Gavin’s graph doesn’t appear to have current temperature data (or has incorrect data). According to Wood for Trees, the HADCRUT4 global temperature anomaly at the beginning of 2007 was 0.8 C; it is now at 0.6 C.
http://www.woodfortrees.org/plot/hadcrut4gl/from:2007/to:2017
Ah, so climate is “an experience”. Got it.
Right. The “debate” is about whether and to what degree man controls climate, which would obviously affect weather, which is how we “experience” climate.
Talk about delusional.
Bruce,
“Ah, so climate is “an experience”.”
Yes, we “experience” weather — as we do all other real world phenomena.
Sadly, WUWT is not moderated to ban trolls.
Otherwise you would not be here.
OK, we have a Zinger of the Day winner!
Mushing dogs in the Alaska winter I wore heavy clothing. Scooping horse droppings in the Las Vegas summer I wear shorts. That’s how I “experience” climate.
Had it not been for my wife, I would not have been doing either. Does that mean I could have avoided climate change, through divorce?
Ah, gridlock man . . ; )
Larry, I am convinced you are a totalitarian control freak, who will not be happy ’till our “gridlock” problem goes the way of China’s . .
John,
“Larry, I am convinced you are a totalitarian control freak, who will not be happy ’till our “gridlock” problem goes the way of China’s . ”
That’s quite delusional. The only policy remedies given here are standard parts of the Republican platform for many decades: better infrastructure and diversifying our energy sources.
Has a majority of GOP members of Congress voted for “renewables” subsidies?
We’ll see if the now GOP-controlled Senate and a nominally GOP president continue this insane folly.
There’s nothing stopping the people of say, New York, from improving THEIR infrastructure, Larry, and diversifying THEIR energy sources, if THEY feel that is important . . don’t need to stomp out all “gridlock” (AKA disagreement) in America over the likelihood of catastrophic global warming anytime soon . .
John,
Right on!
The best example is NYC. It was totally predictable that New York would yet again be hit by a hurricane or tropical storm, which might, as Sandy did, arrive at high tide. But instead of building a storm surge barrier, as Providence, RI did after the bad hurricanes before the 1960s, NYC preferred to blame Sandy’s damage on “climate change’ and beg for federal handouts. The cost of the barrier would have been less than Sandy’s damage, but environmentalists feared a barrier would upset the fragile ecology of the Bay. Yeah, right!
Let’s see ocean heat content from the 1920s, ’30s, ’40s and ’50s.
You know, when the Siberian coast was ice free in summer and even a sneaky German ship, obviously unaided by Soviet icebreakers, was able to steam to Japan during WWII.
Sixto,
“Let’s see ocean heat content from the 1920s, ’30s, ’40s and ’50s.”
That would be nice. Unfortunately, global OHC data gets rapidly sketchy as one goes back in time from 2004-06, when the Argo system was developed.
Editor,
Yes, indeed it would be nice, but inconvenient for Warmunistas.
If the 1960s can be constructed, then why not previous decades? Starting at the low point of the postwar cooling is misleading, apparently intentionally so.
Just like all CACA presentations.
Sixto,
“If the 1960s can be constructed”
Data goes far farther back than the 1960s, but with rapidly diminishing quality.
Larry,
US submarine data did improve with the nuclear fleet, but the US and other navies have lots of temperature data from previous decades.
It appears to me that the reconstruction too conveniently starts when ocean heat content was at its lowest since World War I.
Your pro-CACA bias is blatant.
Larry,
Check this out, if you really do imagine that OHC from before the ’60s can’t be reconstructed:
https://wattsupwiththat.com/2017/09/21/2014-hgs-presentation-climate-change-facts-and-fictions/comment-page-1/#comment-2616995
Yet again you show yourself a shill for CACA pukers.