Guest essay by Mike Jonas
There are dozens of climate models. They have been run many times. The great majority of model runs, from the high-profile UK Met Office’s Barbecue Summer to Roy Spencer’s Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high. Why?
The answer is, mathematically speaking, very simple.
The fourth IPCC report [para 9.1.3] says : “Results from forward calculations are used for formal detection and attribution analyses. In such studies, a climate model is used to calculate response patterns (‘fingerprints’) for individual forcings or sets of forcings, which are then combined linearly to provide the best fit to the observations.”
To a mathematician that is a massive warning bell. You simply cannot do that. [To be more precise, because obviously they did actually do it, you cannot do that and retain any credibility]. Let me explain :
The process was basically as follows
(1) All known (ie. well-understood) factors were built into the climate models, and estimates were included for the unknowns (The IPCC calls them parametrizations – in UK English : parameterisations).
(2) Model results were then compared with actual observations and were found to produce only about a third of the observed warming in the 20th century.
(3) Parameters controlling the unknowns in the models were then fiddled with (as in the above IPCC report quote) until they got a match.
(4) So necessarily, about two-thirds of the models’ predicted future warming comes from factors that are not understood.
Now you can see why I said “You simply cannot do that”: When you get a discrepancy between a model and reality, you obviously can’t change the model’s known factors – they are what they are known to be. If you want to fiddle the model to match reality then you have to fiddle the unknowns. If your model started off a long way from reality then inevitably the end result is that a large part of your model’s findings come from unknowns, ie, from factors that are not understood. To put it simply, you are guessing, and therefore your model is unreliable.
OK, that’s the general theory. Now let’s look at the climate models and see how it works in a bit more detail.
The Major Climate Factors
The climate models predict, on average, global warming of 0.2 deg C per decade for the indefinite future.
What are the components of climate that contribute to this predicted future warming, and how well do we understand them?
ENSO (El Nino Southern Oscillation) : We’ll start with El Nino, because it’s in the news with a major El Nino forecast for later this year. It is expected to take global temperature to a new high. The regrettable fact is that we do not understand El Nino at all well, or at least, not in the sense that we can predict it years ahead. Here we are, only a month or so before it is due to cut in, and we still aren’t absolutely sure that it will happen, we don’t know how strong it will be, and we don’t know how long it will last. Only a few months ago we had no idea at all whether there would be one this year. Last year an El Nino was predicted and didn’t happen. In summary : Do we understand ENSO (in the sense that we can predict El Ninos and La Ninas years ahead)? No. How much does ENSO contribute, on average, to the climate models’ predicted future warming? 0%.
El Nino and La Nina are relatively short-term phenomena, so a 0% contribution could well be correct but we just don’t actually know. There are suggestions that an El Nino has a step function component, ie. that when it is over it actually leaves the climate warmer than when it started. But we don’t know.
Ocean Oscillations : What about the larger and longer ocean effects like the AMO (Atlantic Multidecadal Oscillation), PDO (Pacific Decadal Oscillation), IOD (Indian Ocean Dipole), etc. Understood? No. Contribution in the models : 0%.
Ocean Currents : Are the major ocean currents, such as the THC (Thermohaline Circulation), understood? Well we do know a lot about them – we know where they go and how big they are, and what is in them (including heat), and we know much about how they affect climate – but we know very little about what changes them and by how much or over what time scale. In summary – Understood? No. Contribution in the models : 0%.
Volcanoes : Understood? No. Contribution in the models : 0%.
Wind : Understood? No. Contribution in the models : 0%.
Water cycle (ocean evaporation, precipitation) : Understood? Partly. Contribution in the models : the contribution in the climate models is actually slightly negative, but it is built into a larger total which I address later.
The Sun : Understood? No. Contribution in the models : 0%. Now this may come as a surprise to some people, because the Sun has been studied for centuries, we know that it is the source of virtually all the surface and atmospheric heat on Earth, and we do know quite a lot about it. Details of the 11(ish) year sunspot cycle, for example, have been recorded for centuries. But we don’t know what causes sunspots and we can’t predict even one sunspot cycle ahead. Various longer cycles in solar activity have been proposed, but we don’t even know for sure what those longer cycles are or have been, we don’t know what causes them, and we can’t predict them. On top of that, we don’t know what the sun’s effect on climate is – yes we can see big climate changes in the past and we are pretty sure that the sun played a major role (if it wasn’t the sun then what on Earth was it?) but we don’t know how the sun did it and in any case we don’t know what the sun will do next. So the assessment for the sun in climate models is : Understood? No. Contribution in the models : 0%. [Reminder : this is the contribution to predicted future warming]
Galactic Cosmic Rays (GCRs) : GCRs come mainly from supernovae remnants (SNRs). We know from laboratory experiment and real-world observation (eg. of Forbush decreases) that GCRs create aerosols that play a role in cloud formation. We know that solar activity affects the level of GCRs. But we can’t predict solar activity (and of course we can’t predict supernova activity either), so no matter how much more we learn about the effect of GCRs on climate, we can’t predict them and therefore we can’t predict their effect on climate. And by the way, we can’t predict aerosols from other causes either. In summary for GCRs : Understood? No. Contribution in the models : 0%.
Milankovich Cycles : Milankovich cycles are all to do with variations in Earth’s orbit around the sun, and can be quite accurately predicted. But we just don’t know how they affect climate. The most important-looking cycles don’t show up in the climate, and for the one that does seem to show up in the climate (orbital inclination) we just don’t know how or even whether it affects climate. In any case, its time-scale (tens of thousands of years) is too long for the climate models so it is ignored. In summary for Milankovich cycles : Understood? No. Contribution in the models : 0%. (Reminder : “Understood” is used in the context of predicting climate).
Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments. [NB. I do not claim that we have perfect understanding, only that we have good understanding]. In summary – Understood? Yes. Contribution in the models : about 37%.
Water vapour : we know that water vapour is a powerful greenhouse gas, and that in total it has more effect than CO2 on global temperature. We know something about what causes it to change, for example the Clausius-Clapeyron equation is well accepted and states that water vapour increases by about 7% for each 1 deg C increase in atmospheric temperature. But we don’t know how it affects clouds (looked at next) and while we have reasonable evidence that the water cycle changes in line with water vapour, the climate models only allow for about a third to a quarter of that amount. Since the water cycle has a cooling effect, this gives the climate models a warming bias. In summary for water vapour – Understood? Partly. Contribution in the models : 22%, but suspect because of the missing water cycle.
Clouds : We don’t know what causes Earth’s cloud cover to change. Some kinds of cloud have a net warming effect and some have a net cooling effect, but we don’t know what the cloud mix will be in future years. Overall, we do know with some confidence that clouds at present have a net cooling effect, but because we don’t know what causes them to change we can’t know how they will affect climate in future. In particular, we don’t know whether clouds would cool or warm in reaction to an atmospheric temperature increase. In summary, for clouds : Understood? No. Contribution in the models : 41%, all of which is highly suspect.
Summary
The following table summarises all of the above:
| Factor | Understood? | Contribution to models’ predicted future warming |
| ENSO | No | 0% |
| Ocean Oscillations | No | 0% |
| Ocean Currents | No | 0% |
| Volcanoes | No | 0% |
| Wind | No | 0% |
| Water Cycle | Partly | (built into Water Vapour, below) |
| The Sun | No | 0% |
| Galactic Cosmic Rays (and aerosols) | No | 0% |
| Milankovich cycles | No | 0% |
| Carbon Dioxide | Yes | 37% |
| Water Vapour | Partly | 22% but suspect |
| Clouds | No | 41%, all highly suspect |
| Other (in case I have missed anything) | 0% |
The not-understood factors (water vapour, clouds) that were chosen to fiddle the models to match 20th-century temperatures were both portrayed as being in reaction to rising temperature – the IPCC calls them “feedbacks” – and the only known factor in the models that caused a future temperature increase was CO2. So those not-understood factors could be and were portrayed as being caused by CO2.
And that is how the models have come to predict a high level of future warming, and how they claim that it is all caused by CO2. The reality of course is that two-thirds of the predicted future warming is from guesswork and they don’t even know if the sign of the guesswork is correct. ie, they don’t even know whether the guessed factors actually warm the planet at all. They might even cool it (see Footnote 3).
One thing, though, is absolutely certain. The climate models’ predictions are very unreliable.
###
Mike Jonas
September 2015
Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.
Footnotes
1. If you still doubt that the climate models are unreliable, consider this : The models typically work on a grid system, where the planet’s surface and atmosphere are divided up into not-very-small chunks. The interactions between the chunks are then calculated over a small time period, and the whole process is then repeated a mammoth number of times in order to project forward over a long time period (that’s why they need such large computers). The process is similar to the process used for weather prediction but much less accurate. That’s because climate models run over much longer periods so they have to use larger chunks or they run out of computer power. The weather models become too inaccurate to predict local or regional weather in just a few days. The climate models are less accurate.
2. If you still doubt that the climate models are unreliable, then perhaps the IPCC themselves can convince you. Their Working Group 1 (WG1) assesses the physical scientific aspects of the climate system and climate change. In 2007, WG1.said “we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
3. The models correctly (as per the the Clausius-Clapeyron equation) show increased atmospheric water vapour from increased temperature. Water vapour is a greenhouse gas so there is some warming from that. In the real world, along with the increased water vapour there is more precipitation. Precipitation comes from clouds, so logically there will be more clouds. But this is where the models’ parameterisations go screwy. In the real world, the water cycle has a cooling effect, and clouds are net cooling overall, so both an increased water cycle and increased cloud cover will cool the planet. But, as it says in the IPCC report, they had to find a way to increase temperature in the models enough to match the observed 20th century temperature increase. To get the required result, the parameter setttings that were selected (ie, the ones that gave them the “best fit to the observations“), were the ones that minimised precipitation and sent clouds in the wrong direction. Particularly in the case of clouds, where there are no known ‘rules’, they can get away with it because, necessarily, they aren’t breaking any ‘rules’ (ie, no-one can prove absolutely that their settings are wrong). And that’s how, in the models, cloud “feedback” ends up making the largest contribution to predicted future warming, larger even than CO2 itself.
4. Some natural factors, such as ENSO, ocean oscillations, clouds (behaving naturally), etc, may well have caused most of the temperature increase of the 20th century. But the modellers chose not to use them to obtain the required “best fit“.If those natural factors did in fact cause most of the temperature increase of the 20th century then the models are barking up the wrong tree. Model results – consistent overestimation of temperature – suggest that this is the case.
5. To get their “best fit“, the chosen fiddle factors (that’s the correct mathematical term, aka fudge factors) were “combined linearly“. But as the IPCC themselves said, “we are dealing with a coupled nonlinear chaotic system”. Hmmmm ….
Discover more from Watts Up With That?
Subscribe to get the latest posts sent to your email.

Just flip a coin and be right half the time. Cost to taxpayers, as low a 1 cent (assuming you want to reuse it indefinitely).
Having written and designed numerical models for petroleum reservoirs, it was necessary to take the model back and run it to confirm a history match of the actual results before the projections of the model were used to make economic decisions. It appears that the programmer’s failure of climate models to even remotely project accurately have caused them to abandon attempts to correct the models. Rather than tweak the model to match historical reality, today they seem to be going back and adjusting the temperature history record to match the models.
Climate models are only as reliable as people maintaining them. A failure to correct a known flaw – and running known flawed models thousands of times instead – does destroy any confidence.
Ignoring cyclic events in the models, means that the models are only good for predicting steady state averages of climate. IE, weather, averaged over a long enough period that all the cycles have averaged out.
That means that any claims that we are going to warm X degrees in the next Y years are worthless, since the models simply are not capable of discerning what the temperature is going to be in a particular year.
Beyond that, as long as CO2 levels are changing, we are not in a steady state condition.
If the models were any good (which they aren’t) the only thing they would be useful for is to declare that once CO2 levels have stabilized at a particular level. Wait a couple hundred years for things to stabilize, then average the temperatures over 3 or 4 centuries, and this is what the average will be.
Answer: Reliable enough to ignore in policy messaging over reach efforts
How reliable are the models? A very simple fact – you cannot model that which you do not know. You cannot model based on anything but DATA, and proxy information is NOT data. How reliable, then, are climate models? Far less reliable than expecting to make a windfall at any casino.
You make one serious error in the way you lay out your table. GCRs and aerosols belong in completely different boxes, because GCRs are not well understood and have a questionable effect (and what we know of them at all has largely come from work done after the GCMs were originally written) and hence are neglected but aerosols are not. In fact, in the GCMs aerosols are included and produce a strong cooling effect.
This is one of their major problems. One of the ways they balanced the excessive warming in the reference period is by tweaking the aerosol knob and turning it up until it balanced the rapidly increasing CO2 driven effect. Then as time passed, CO2 was kept going up and aerosols were not across the rapidly warming part of the late 80’s through the 90’s. In this way, the CO2 sensitivity had to be turned up still more to match the rapid rise against the large background cancellation that allowed the 70’s and early 80’s to be fit well enough not to laugh at the result.
Of course, then came the 1997-1998 super-ENSO that produced one last glorious burst of warming, and then warming more or less stopped and the infamous “pause” began. CO2 continued increasing, faster than ever, aerosols didn’t, the climate models that used this pattern clearly showed the world metaphorically catastrophically “cooking” from a climate shift (if an increase in average temperature that represents moving from one city in NC to another 40 miles away can be called a climate shift at all) but the world refused to cooperate. Since the late 90’s temperatures have been at the very least close to flat, even as frantic efforts have continued to find some excuse not to fail the models, if necessary by changing the data they do not predict.
The models clearly overestimate the impact of aerosols. This is bad news for the entire hypothesis that radiative chemistry is the dominant forcer in climate change, because if aerosols are not a major cooling factor, then the large CO2 sensitivity needed to fit the rapid increase in temperature in single 15 year stretch where the planet significantly warmed at all in the last 70 years (back to 1945) is simply wrong, far too high. They have to go back to a sensitivity that is too low to produce a catastrophe and that fits the other 55 years of the last 70 decently but that fails to do a good job on the 15. Which is just as well, because a burst of warming almost identical to the late 20th century warming occurred over a 15 year period in the early 20th century, and this warming was already being skated right over by the GCMs (although this hindcast error was ignored).
If you reduce the aerosol’s contribution to “very little”, you have to reduce CO2 sensitivity including all feedbacks to ballpark 1 to 1.5 C per doubling in order to come close to fitting — or at least not disagreeing with to the point of obvious failure — the data. But this is still in GCMs that, as you point out, do not credibly include the modulation of cooling efficiencies of things like the multidecadal oscillations and thermohaline transport that are self-organized phenomena that obviously have a huge impact on the absorption, transport, and dissipation of heat and that are clearly correlated with both climate change and weather events, major and minor, across the instrumental and proxy-inferred record. Once those were accurately included, what would sensitivity be? What would the net feedbacks be? Nobody knows.
And does it matter? Research demonstrating fairly convincingly that aerosol cooling is overestimated is not a year or two old, but people are still trying to argue that increased volcanic aerosols are the cause of the pause (when they aren’t trying to erase the pause altogether). The only real problem is that they can’t find the volcanoes to explain the increase in aerosols, or demonstrate that atmospheric transmittance at the top of Mauna Loa has been modulated, or come up with a credible model for the impact of even large volcanoes on climate except to note that it is transient and very small until the volcanoes get up to at least VEI 5, if not VEI 6 or higher. And those volcanic eruptions are rare.
So no, it does not matter. What matters is that in the world’s eyes, the energy industry is accurately portrayed in Naked Gun 2 1/2 — a cabal that would willingly use assassination and lies and bribes and corruption to maintain its “iron grip” on society and the wealth of its owners. One isn’t just saving the world ecologically — this may not even be the real point of it all. It is all about the money and power. In the real world, it is a lot simpler to co-opt and subvert a political movement than it is to fight it. And that’s what the power industry has done.
Who makes, and stands to make, the most money out of completely retooling the energy industry to be less efficient? Oh, wait, would that be the energy industry? Do they really give a rat’s ass if they get rich selling power generated by solar panels (if that’s what we are taught to “want”) rather than coal? Not at all. They’ll make even larger profits from the margin on more expensive power. They want power to be as expensive as possible, which means as scarce as possible, especially in a nearly completely inelastic market.
If climate catastrophism didn’t exist, the energy industry would probably invent it. Come to think of it, they probably did. And there is no Leslie Nielsen to come to the rescue.
rgb
I have sat in a room with a US Senator and Nuclear Power executives from all the major US energy companies; who when asked what they wanted, universally answered “a price for carbon”. Why? It was explained that the regulated market for power in the US restricted their ability to obtain project financing from todays customers for building new plants without showing a cost to consumers for the alternatives.
RGB I was sorry to see the last three paragraphs of your comment. Your comments on scientific matters have always been objective ,rational and illuminating. Your views on the energy industry are ill informed and seem to adopt the naïve ” evil fossil fuel company “paradigm. The “energy” Industry is far from monolithic and while all companies will try to influence policy to their advantage, the interests of the different segments eg wind, solar, nuclear, biomass ,fossil fuel and indeed of the companies within those segments e,g international major oil companies, national oil companies, independent operators are very different.
What is true is that the energy companies at the moment ,by and large, blindly go along with the consensus CAGW meme as a basis for looking at the future and where convenient will use it to influence politicians where it suits their particular ends.
An interesting example of how belief in CAGW influences individual companies is Shell Oil, who are betting billions in their Arctic offshore drilling program while other companies have withdrawn, They must believe that Arctic sea ice is going to decrease in the next 20 years or so .I think the opposite.
For forecasts if the timing and extent of the coming cooling see
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
and
http://climatesense-norpag.blogspot.com/2015/08/the-epistemology-of-climate-forecasting.html
For the most part, the actual companies that are building wind and solar are not the same companies that are building oil, gas and coal.
Conflating all companies that build plants that make electricity into a monolithic “power industry” is the kind of tactic used by one who either has no knowledge about which he is talking, or who hopes that his listeners don’t.
Do you have any evidence that the power industry is behind the global warming movement? Or are you just letting your normal paranoia take over again?
Thanks for your comment, rgb. In my article “understood” was in the context of prediction. Aerosols may well have a strong cooling effect, but what matters is how much they contribute (+ or -) to the predicted future warming. Since the modellers don’t know how aerosols will change in future they can’t predict their future impact. Since GCRs are known to create aerosols, and also can’t be predicted, I lumped the two together for simplicity. Maybe I should have just done them separately. They are both “understood : No; contribution : 0%” because no-one knows how they will change in future, and there is no contribution (+ or -) from them to future warming in the models. The three non-zero factors I cited do deliver 100% of the predicted future warming, according to the IPCC.
“Tom O said (September 18, 2015 at 8:15 am)
.” You cannot model based on anything but DATA, and proxy information is NOT data”.
And when the system is a dynamical one (chaotic system), it is responding so monstruously to the tiniest change in intial conditions that, regarding the experimental and data processing errors associated to these conditions, you cannot make any prediction. Also the data are not spread normally (gaussan distribution) around a given mean value or trend line, and you cannot define any confidence interval, because the underlying statisitcal hypotheses are not met…..
And, as I recalled and showed earlier in this discussion (henri Masson September 18, 2015 at 2.15 am and at 4.13am ) the climate system is of dynamical nature.
This means that speaking about scenarios fed in models for limiting the global temperature to 2°C by the end of the century (or by doubling CO2) is absolute mathematical and statisitcal non-sense. (by the way, the “magic limit” is now 1.5°C because even a large number of the models with their latest settings predict an increase in temperature of less than 2°C)
Even more preposterous is the idea that 2 degrees of warming will somehow be in any way a problem, let alone a catastrophe, on a planet which has large areas perpetually frozen solid, and much of the habitable surface cold enough, during large stretches of any given year, to kill any person who gets caught without sufficient protections and stores of supplies.
It seems to me that if you ignore ocean currents and Enso in you models ,you end up barking up the wrong tree with your climate modeling. We are all well aware of the back ground warming of about 0.75 C /decade during the past century. We are also aware that for reasons that we do not yet completely understand , this back ground temperature has not always wamed but really fluctuates as we saw with the LITTLE ICE AGE , MEDIEVAL WARM PERIOD ,Middle ages cold period , ROMAN WARM PERIOD , etc. Man was not responsible for these fluctations .Recent evidence shows that EL Ninos, especially the strong ones raise this back ground global temperature in series of steps. We are also aware that superimposed on the background temperatures is a 60 year climate cycle with aproximately 30 years of warm and temperatures and 30 years of cooler temperatures . These changes are caused by changes in ocean temperatures and currents . So in a 100 year period , the rate of temperature change is greatly modified and the expected rise is much less than if you ignore them in your models . Two cold troughs were centered around 1910 and 1979 greatly modified our climate during the last 100 years . There will be two during next 100 years . If you cannot simulate these , your temperature predictions are worthless and all your alarmism is unjustified
So along come the alarmists and they first blamed mankind for the warming in the post 1970,s which was reallycaused caused by the major oceans when both were in their warm mode and 3-4 strong El Ninos . They then changed their tune and blamed mankind for all the warming of the background( 0.75C/DECADE) after the start of the industrial reveolution going back some 100 years We know this to be wrong also since the planet has been naturally warming after the Little ice age . Now they have again changed the tune blaming all weather events and extreme events on mankind and telling us their models tell them so.
If you use models that do not refelct realty your out put is worthless and the public should be told so.
http://appinsys.com/GlobalWarming/GW_TemperatureProjections.htm#akasofu
The background warming should read O.75 C/ CENTURY not per decade
Your article has much better writing than prior articles — so much better I suspected a ghost writer!
Unfortunately, your science knowledge and logic have not improved, so the greatly improved writing quality now makes you dangerous!
YOU wrote:
“Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments. [NB. I do not claim that we have perfect understanding, only that we have good understanding]. In summary – Understood? Yes. Contribution in the models : about 37%”
Your statement is wrong.
CO2 is NOT “quite well understood”.
You seem confident scientists know what CO2 does to Earth’s average temperature.
You are wrong.
If CO2 was really so well understood, this website probably would not exist.
Climate science would be “settled”.
Mr. Watts would probably have a blog where he posted pictures of his family and his vacations. (ha, ha)
Geologists and other non-climate modeler scientists (i.e.; real scientists) report huge past changes in CO2 levels with absolutely no correlation with average temperature, from climate proxy studies of Earth’s climate history.
Do you dismiss all the work of non-climate modeler scientists over the past century — if so, you are practicing “data mining”.
Laboratory experiments prove little about what CO2 actually does to the average temperature.
They suggest warming, but with a rapidly diminishing warming effect as CO2 levels increase.
There is no scientific proof that manmade CO2 is the dominant cause of the minor warming over the past 100 years.
Where is that proof written down for us to see?
No actual proof exists (based on what the word “proof” means in science).
Is there an upper limit of CO2 concentration where there is no more “greenhouse” warming, or too little warming to be measured?
— What is that upper limit?
Does the first 100 ppmv of CO2 cause warming?
— Probably.
How much warming does the next +100 ppmv of CO2 cause?
— No one knows.
Why was there a mild cooling trend from 1940 to 1976?
Why was there no warming from the late 1990s to 2015?
— For both time periods, CO2 was said to be rising rapidly!
There are many theories for the lack of warming in those periods, but no scientific proof.
We can confidently state, based on climate proxy studies, that even very high levels of CO2 do not guarantee Earth will be warm.
We can state with great certainty that the era of rising manmade CO2 from 1940 to 2015, based on smoothed data, has had periods of FALLING or STEADY average temperatures more often than periods of RISING average temperatures.
Would a CO2 increase from 400 to 500 ppmv cause any warming, or at least enough warming to be measurable?
— No one knows.
Are there positive or negative feedbacks, that amplify or buffer, greenhouse gas warming from CO2, assuming CO2 caused at least some of the warming since 1850?
— No one knows.
And since I enjoy giving you a hard time, I will continue:
The chart you selected was very clear and easy to read.
However, it implies the climate models are wrong simply because the short term climate has not matched the predictions (simulations).
That seems like a logical conclusion … but …
I am reminded of investment advice from the great Fidelity Magellan Fund manager, Peter Lynch, who bought what he thought were undervalued stocks.
.
He has written in his books that quite a few times the stocks he bought fell a lot, sometimes even over 50%, before they turned around and became big winners for him.
His point was that his long-term stock predictions were often right, even when the short-term performance of his new stock purchases initially made him look like a fool.
My point, and I do have one — It is possible today’s climate model “predictions” will be right about the climate in 100 years, even if they do not appear accurate for the first decade, or even for the next 50 years.
Of course it’s MORE likely if a 100-year prediction looks foolish in the first 10, 20 or 40 years of observations, the prediction was really nothing more than a wild guess by people who do not understand what causes changes in Earth’s climate.
Based on ice core studies, it is a pretty safe guess that the 1850 Modern Warming would last for hundreds of years … so a wild guess that warming will continue tends to make the “climate astrologists” (modelers) appear to understand CO2 fairly well, when they don’t.
They don’t understand the effects of CO2 with any precision.
Neither do you.
Neither do I.
And if you want to avoid criticism in the future, do not state that effects of CO2 are “quite well understood”.
The THEORY of what CO2 does to Earth’s climate is well known.
Many people think CO2 levels / “greenhouse theory” can be used to predict the future average temperature.
The inaccuracy of climate model “predictions” suggests CO2 is not well understood.
Your article did a pretty good job of “trashing” the models.
In my opinion, the climate modelers are “props” for politicians, on the public dole, not real scientists … and their computer games are political tools for scaring people — climate astrology, not real science.
Models are not data.
With no data, there is no science.
My climate blog for non-scientists:
Free
No ads.
No money for me.
A public service.
http://www.elOnionBloggle.blogspot.com
“If CO2 was really so well understood, this website probably would not exist.“. Not so. It has been stated and explained time and time again on WUWT that the argument is not about the basic physics of CO2 but is about the other factors that are built into the models and which exaggerate the warming. My article explains the origin of that exaggeration.
Once again, Mike Jonas, your false claim that the “basic physics of CO2 (as relates to AGW) is well understood” is easily proven false for 8 plus reasons in 4 papers by Japanese physical chemist Kyoji Kimoto:
http://hockeyschtick.blogspot.com/search?q=kimoto
and the comments by KevinK et al above:
http://hockeyschtick.blogspot.com/2015/09/why-greenhouse-gases-dont-trap-heat-in.html
The fact that CO2 is a radiative gas and why it is such, is well understood.
Likewise the laboratory properties of CO2 are more or less well known and understood.
But that is where it ends. Earth’s atmosphere is not laboratory conditions. How CO2 acts in Earth’s atmosphere is not known or understood. Some consider that it will behave as it does in a laboratory with like results and impact. Others that Earth’ atmosphere is so far divorced from laboratory conditions that the behavoir of CO2 in laboratory conditions tells us little if anything as to what impact CO2 has when not working under laboratory conditions and when it is simply a small part of a much larger system.
To some extent, this is a question of feedbacks. but it is far more complex than that since other processes (convection, the water cycle, lapse rate etc ) are in play and those other processes may wholly swamp the effect of CO2.
The real world effects of CO” can only be deduced from studying the real world, not the laboratory, still les from models that are imperfect on almost every level.
Isn’t the attempt to model the climate of the Earth a bit of a throwback to nineteenth century thinking?
I’m reminded of the following quote by Pierre-Simon de Laplace
“Consider an intelligence which, at any instant, could have a knowledge of all the forces controlling nature together with all the momentary conditions of all the entities of which nature consists.
If this intelligence were powerful enough to submit all these data to analysis it would be able to embrace in a single formula the movements of the largest bodies in the universe and those of the lightest atoms; for it nothing would be uncertain the past and future would be equally present for its eyes.”
Abstract
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods – including those in this special issue – found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
Keywords
Analytics;
Big data;
Decision-making;
Decomposition;
Econometrics;
Occam’s razor
Answer to herkimer September 18, 2015 at 10:40 am
“forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures”
I think you make some confusion between “transparency”, “sophisitcation of the code of a model” and “complexity of the system description”.
Transparency on all hypotheses made and details of methods used is of course a scientific must. Your results must be reproducible.
Sophisitcation of the code has to make with software skills, methods used (and you can use rather standard subroutines for transparency and ease of duplication of your results) and spatio-temporal resolution of your model.
But for the physical system you describe, you must be either able to describe absolutely and correctly ALL the phenomena happening in your system AND all their connections and feedback loops (as well as the associated time constants) exisitng between them, because actually it is largely the structure of the system defines its overall behaviour.
Good luck if you try to do this for the Climate system made of a huge number of interconnected phenomena (often far away from equilibrium and in dynamic mode, as you are looking to changes over time). And if you look , as IPCC does, only to the greenhouse anthropogenic gases as a driver for global temperature change (a funny concept, from a mathematical point of view actually), and then use different forcings for finally fine tuning the sensisitvity knob, and considering the other factors as noise, you are completely wrong and without any chance to get any convincing result. And this because the system is higgly non linear and the effects are all except additive. The residues of the adjustment of the data by the model, are not random noise and cannot be handled like this. The system is chaotic (dynamical system) as I said already before in this discussion
Alternatively, you can consider the (climate) system as a complete black box, or better a set of interconnected black boxes) and in this case for each of the boxes you use the input /output time series to define what is called in automation theory a transfer function (obtained by using Laplace transforms and convolution integrals). Alternatively you build up an artifical intelligence neuronal system that you tune by trial and errors to reproduce the time series of the different proxies taken into cnsideration. You work exclusively on signal theory and do not try to understand the underlying physics. This is called complex system theory and it is the future in modelisation, also for climate models.
Well at least this is my opinion, and I share it completely..
henri masson
Thanks for your thoughtful comments . I think you are saying what I believe the authors of the above Abstract implied . Having more complex models with bad science is worse than simpler models with more valid science.
Perhaps they should be called the “Old King Cole” models:
Old King Climatologist was a merry old soul,
And a merry old soul was he,
He called for his dataset, and applied for a grant,
And he called for his fiddlers three,
Every fiddler had a fiddle,
And a very fine fiddler was he.
With apologies to the Welsh bards.
‘you cannot do that and retain any credibility’
Normally the author would correct in this idea however, and this is big however , your dealing with climate ‘science’ and normal is not something that has any real relationship to this area where what matters is not the validity of your facts but their ‘usefulness’
So the models can be wildly inaccurate and sceintffically worthless , but if what they produced is ‘useful ‘ in the political or idealogical sense then they are ‘perfect’ .
Rule one of climate ‘science’ is after all , ‘if reality and the models differ in value , it is reality which is in error’
Indeed and this is called “scientism”: to justify a posteriori a political agenda, undertake some “oriented and biased”research. This is exactly why IPCC has been created, actually:trying and justifying the fight against fossil fuels. They are buzzy now for more than 2 decades without any scientific proof of their saying.
In building a GCM climate simulation I would assume that one would start with an existing and working GCM weather simulation that for the most part seems to work with a significant track record. To simulate climate in finite time one has to first increase the spatial temporal sampling interval. Such increasing intervals may cause the computational process to become at least marginally unstable. A global warming result could totally be caused by this computational instability. Has any work been done to rule out the possibility of computational instability in association with these climate simulations? To simulate climate, it sounds like they hard coded in the idea of CO2 based warming. It that is the case then the simulations beg the question and their result are totally useless. They code in that more CO2 causes warming and that is exactly what the results show making the effort entirely useless. The theory is that adding more CO2 to the atmosphere causes an increase in the radiant thermal insulation properties of the atmosphere resulting in restricted heat energy flow resulting in higher temperatures at the Earth’s surface and lower atmosphere but lower temperatures in the upper atmosphere. If it were really true then one would expect that the increase in CO2 over the past 30 years would have caused a noticeable increase in the natural lapse rate in the troposphere but that has not happened. But let us go on and assume that CO2 does actually cause an increase in insulation. The increased temperatures would cause more H2O to enter the atmosphere which according to the AGW conjecture would cause even more warming because H2O is also a greenhouse gas and is in fact the primary greenhouse gas. That is the positive feedback they like to talk about. That is where the AGW conjecture ends but that is not all what happens. Besides being a greenhouse gas, H2O is a major coolant in the Earth’s atmosphere moving heat energy from the Earth’s surface to where clouds form via the heat of vaporization. According to energy balance models, more energy is moved by H2O via the heat of vaporization then by both convection and LWIR absorption band radiation combined so that without even considering clouds, H2O provides a negative feedback to changes in CO2 thus mitigating any effect that CO2 might have on climate. The wet lapse rate is smaller than the dry lapse rate which further shows that more H2O in the atmosphere has a cooling effect. So that coding in that H2O amplifies CO2 warming, cannot possible be correct. The feedbacks have to be negative for the climate to be stable which it has for at least the past 500 million years, enough for life to evolve. We are here.
“The theory is that adding more CO2 to the atmosphere causes an increase in the radiant thermal insulation properties of the atmosphere resulting in restricted heat energy flow resulting in higher temperatures at the Earth’s surface and lower atmosphere but lower temperatures in the upper atmosphere. If it were really true then one would expect that the increase in CO2 over the past 30 years would have caused a noticeable increase in the natural lapse rate in the troposphere but that has not happened. ”
The lapse rate formula is
dT/dh = -g/Cp
CO2 (and H2O) have higher Cps than air, therefore increase of Cp by added CO2 decreases/b> the lapse rate dT/dh, thus COOLING, not insulating or warming, the surface. The models have this bassackwards!
I agree. The natural lapse rate should include everything that that effects the lapse rate. The natural lapse rate is a measure of the insulating properties of the atmosphere. CO2 is not a source of energy so the only way it can affect climate is passively changing the radiant thermal insulation properties of the atmosphere. The climate sensitivity of CO2 must be directly proportional to how much an increase in CO2 changes the natural lapse rate. Observations over the past thirty years show that the climate sensitivity of CO2 equals zero. Hence CO2 does not affect climate. There is no evidence in the paleoclimate record that CO2 has any effect on climate either.
willhaas:
What make you think that “the climate sensitivity of CO2” is a constant?
Personally I just love the following statement by IPCC:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift.”
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )
Holy Moses!!!!
I am sorry – but an expression seems to be appropriate , and all other expressions seems so inadequate.
Models drifts towards “Imperfect climatology” and biases can be removed by “using empirical techniques a posteriori”. How can this possible pass the extensive scientific and governmental review process without anyone being alarmed? It seems to be honest however, it seems to be a glimpse of realism, which just failed to alert the scientific reviewers, failed to alert Intergovernmental Panel on Climate Change and failed to alert the governmental reviewers. How can that possibly happen?
The reviewers of this sections must have slept through their classes on logic and scientific theory – If they ever took such classes.The worst thing is that IPCC conclusion heavily relies on the result of such models. Such an admission, is alone sufficient to blow off the whole thing. I am sorry – but a fundamental flaw has been identified in the works of IPCC. Consensus or not – I couldn´t care less. A fundamental flaw has been identified, even if it has not been realized by the review process. Which is just another reason to suspend further action. Please, United Nations, can you please stop this nonsense and start using all effort to help those who already suffer – by known and real causes.
“The WGI contribution to the IPCC AR5 assesses the current state of the physical sciences with respect to climate change. This report presents an assessment of the current state of research results and is not a discussion of all relevant papers as would be included in a review. It thus seeks to make sure that the range of scientific views, as represented in the peer-reviewed literature, is considered and evaluated in the assessment, and that the state of the science is concisely and accurately presented.”
Yeah – I agree! The state of the climate science is very concisely presented by such nonsense.
What struck me the most was, again, the assertion of linear adjustments to correct for “drift” in the output of a nonlinear chaotic system. If the system being modeled is nonlinear and chaotic, and it drifts, why on earth (no pun intended) would you think that the drift is linear? That’s what sets my alarm bells off. I see it as an implicit admission that the models they use to simulate the earth’s climate are too complex for them to understand the way the model behaves during the simulation. They just look and see what the output is, without the slightest understanding of how the output resulted from the input. Basically, to the climate scientists and modelers, what goes on inside the black box is just “magic” that’s not amenable to any kind of reasoned analysis and at at the end of the day, if the model results don’t match reality for more than two years out, they just do a linear curve fit that looks good, but has no mathematical foundation to it other than that its the only thing they really can do.
How can they be sure “drift” is not signal? Unless of course they already know the answer.
And this is classic:
Biases can be largely removed using empirical techniques a posteriori.
Empirical techniques? Translation – “What ever we need to make it look good.”
When I was in school, we called that the fudge factor.
The author’s definition of parameterizations is incorrect. In GCM’s, parameterizations are procedures to account for sub-grid scale phenomena – those which cannot be physically simulated using the grid-level finite element modeling. For example, a thunderstorm is small compared to grid sizes, so the model cannot simulate its physics. Instead, it uses another, non-physical model to estimate the the thunderstorm and ultimately, its effect on the grid output.
Simply: these parameterizations are not just variables to be fiddled with to get a desired results. They are, themselves, models and can be extremely complex.
However… the (very necessary) use of them puts lie to the claim that the models are just simulating the physics of the atmosphere, and since we understand those physics well (we do, for the most part), the model must be right.
There are, of course, knobs to twiddle to adjust the model. If they are done in the ex post facto approach described in the article, that procedure is obviously extremely suspect.
The real problem with climate models isn’t adjusting for backcasting – although that is done. It is the simple fact that the atmosphere is far to complex to accurately simulate over an extended time frame. If you don’t believe that, tell me what the weather will be in 14 days? The weather models, which are very, very close to the climate models, are useless that far out. Now tell me the weather in 10 years. Yep.
Plain wrong. IPCC AR4 Box TS.8 :- “parametrizations are still used to represent unresolved physical processes such as the formation of clouds and precipitation“
Which is *exactly* what I said. “Unresolved” means sub-grid scale – the processes are below the spatial resolution of the model. Represent means a way of coming up with the data other than the main physics package of the model.
I should add – the term “parameterization” is very misleading, so your mistake is understandable. When one hears the term, one imagines parameters that can be twiddled. But, that is not how it is used in this context. However, the parameterizations, which are models, can be tweaked and twiddled for hindcasting, the same way other aspects can.
John Moore – “Unresolved” typically means not fully understood, as in
“Thus, the physical causes of the variability seen in the tide gauge record are uncertain.These unresolved issues relating to sea level change and its decadal variability …” [AR4 TS.4]
“It has been hypothesised that a transient increase in the diffuse fraction of radiation enhanced CO2 uptake by land ecosystems in 1992 to 1993, but the global significance and magnitude of this effect remains unresolved“.[AR4 7.3.2.4.3]
“Whether emissions of soil dust aerosols increase or decrease in response to changes in atmospheric state and circulation is still unresolved” [AR4 10.4.4]
but, even if your interpretation is correct in this instance, they would still be guessing. Which is the central thrust of the article. And when you say the parameterisations can be “tweaked and twiddled for hindcasting“, well, that is exactly the problem that I identify : tweaking and twiddling the models to match observation just adds more guesswork to the models. A lot more.
Mike, we are discussion the meaning of parameterization in the context of climate models.
IPCC: “Moreover, many physical processes, such as those related to clouds, also occur at smaller scales and cannot be properly modelled. Instead, their known properties must be averaged over the larger scale in a technique known as parameterization. ”
And, as one example trivially found by Googling: “A new parameterization is presented for the shortwave radiative properties of water clouds, which is fast enough to be included in general circulation models (GCMs).”
Another: “The parameterization schemes currently in use range in complexityfrom simple moist convective adjustment schemes that are similar to that proposedby Manabe et al. (1965) almost three décades ago to complicated mass flux schemes utilizing and elaborating thé basic concepts set forth by Arakawa and Schubert (1974).”
here
And here
Parameterization, far from being a pile of numbers to twiddle, is a huge body of complex computer code implementing models of sub-grid processes. In fact, most of a modern GCM is in the parameterizatiom models, not in the main physics package.
There is another thing you need to consider: GCM’s are also used in weather modeling, including these complex parameterizations. In the case of weather, the results can be falsified. That process leads to improvements in the sub-grid models (parameterizations) over time, and those improvements also go into the climate models.
As to “unresolved” – yes, it can *also* mean not fully understood. In this context, however, it means “unresolved by the grid resolution.” But… that’s a diversion, since the point is what parameterization means.
Please, before you respond – research parameterization. Google “gcm parameterization” and look through the results. You’ll see what I mean.
John Moore – sorry but your comments are just fantasy.
9.2 “parametrizations are still used to represent unresolved physical processes such as the formation of clouds and precipitation“
TS.6.4.2 “Large uncertainties remain about how clouds might respond to global climate change.”
Believe it or not, Mike, I am on your side. I don’t believe the climate model projections. I was just trying to correct an error in your article. I am sorry that you are not interested in learning more about climate models.
Mike Jonas – based on John Moore’s comment, the IPCC is not using the term “unresolved” in its colloquial sense of meaning “not known.” They are instead using it in the technical sense of not being represented on a fine enough scale so that they instead have to simply come up with a parameter as a substitute for having the model physically simulate clouds as they form, or rain as it falls.
Your latter quote of uncertainty as to how clouds respond to increasing temperatures DOES NOT say that everything that is “unknown” is “parameterized” in the models and is then adjusted when tuning the model. That was an assumption on your part, and John Moore is challenging the factual validity of that assumption. My recollection of earlier explanations of climate models by books or articles by climate scientists (I think John Christy, but I’m not sure – Maybe Michaels) tracks with what John Moore claims. That’s not to say that the models are accurate – the thrust of your essay is still true in that the things the models do treat as parameters are arbitrarily adjusted in a glorified and pointless curve-fitting exercise, but your details are not correct.
Also, as noted above, I still think that you are incorrectly conflating the IPCC AR4 description of how they approach the attribution problem by linearly combining the outputs of computer models to individual forcings, with the different issue of how the models themselves are tuned. As I read the quote you gave from the IPCC AR4 report,when they try to determine how much of historical warming to attribute to CO2, they run each forcing individually through a model, linearly combine the results using weights that achieve a “best fit” and then with the weights that achieved the best fit, simply proportion how much of the output was due to the CO2 forcing relative to other forcings. This process seems silly to me, not being designed to produce any kind of useful result, but no parameters or any other feature of the models are being adjusted in this procedure.
Certainly, your statement in the essay that climate models don’t account for solar variations is incorrect, and maybe the volcanoes too, given that these could be accounted for in the aerosols (although the IPCC discourages this interpretation by referring to aerosols only as a “man-made” forcing).
perplexed – thank you for your observations. I made my comments in the spirit that the worst way to combat the poor science of global warming is by misrepresenting it. We need to be accurate, or we will confirm the incorrect idea, in the public world and especially the scientific world, that all criticisms are poorly informed.
One important note: they don’t come up with a “parameter” – they come up with a “parameterization.” It is this messy neologism based on the word “parameter” that leads to a lot of confusion. It is natural, when hearing “parameterization” to imagine varying a few “parameters” – simple numbers. Unfortunately, this natural reading leads one astray. A parameter is a number. A parameterization is a model ranging from trivial to vast complexity.
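To make that distinction concrete, here is a deliberately trivial sketch in Python. The single number is a parameter; the function is (a caricature of) a parameterization. It is loosely inspired by the Manabe-style convective adjustment mentioned earlier, but it is not code from any actual GCM.

# A parameter: a single tunable number (value chosen only for illustration).
CRITICAL_LAPSE_RATE = 6.5  # K per km

def convective_adjustment(temps_k, layer_thickness_km=1.0,
                          critical_lapse_rate=CRITICAL_LAPSE_RATE):
    # A caricature of a moist-convective-adjustment parameterization: wherever
    # the temperature drop between adjacent layers exceeds the critical lapse
    # rate, mix the two layers toward that lapse rate while conserving their
    # mean temperature. Real schemes also handle moisture, latent heat release,
    # partial adjustment and much more; this is only a sketch.
    t = list(temps_k)  # surface layer first, top layer last
    for i in range(len(t) - 1):
        lapse = (t[i] - t[i + 1]) / layer_thickness_km
        if lapse > critical_lapse_rate:
            mean = 0.5 * (t[i] + t[i + 1])
            half_step = 0.5 * critical_lapse_rate * layer_thickness_km
            t[i], t[i + 1] = mean + half_step, mean - half_step
    return t

print(convective_adjustment([300.0, 290.0, 284.0, 279.0]))

Even this toy version is a little model in its own right; the schemes in a real GCM run to many thousands of lines, which is the point being made here.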
It is valid to critique models which are tuned to hind-cast, but not to assume that this is done through parameterization. It may be, but often the parameterization, since it is tested against the real world, is less likely to be tweaked. Of course, there is a gotcha – it is hard to test a lot of the parameterizations in a higher CO2 concentration world – so those parameterizations might be wrong.
In my mind, the biggest problems with climate models are not parameter twiddling, they are:
Inability to falsify the model – a violation of Popper’s model of science. Why should we believe a forecast for far in the future from a process which has never had its forecasts tested?
The difficulty (impossibility, I think) of simulating over time the complex non-linear feedback system that is the atmosphere, due to chaos, which itself was discovered with the first atmospheric models by Lorenz. Weather models fail rather quickly. Last night, Tropical Depression 16-E was supposed to be on top of Phoenix right now. I look out the window and it isn’t there. Instead, it changed track, and then dissipated in Mexico. More general weather forecasting loses accuracy over time, and becomes pretty much worthless 5 to 15 days in the future, as chaos dominates. The weather models and the climate models share the same physics modeling and parameterizations. In other words, the climate models are just as vulnerable to chaos, because they are using the same approach and often the same code. Climatologists attempt to compensate for this by using ensembles, but so do meteorologists, and the ensembles don’t add that much time before chaos dominates – days, not years.
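The sensitivity to initial conditions described above is easy to demonstrate with the Lorenz (1963) system itself. Here is a minimal Python sketch (standard Lorenz-63 parameters; nothing here comes from a weather or climate model): two runs started a millionth apart track each other for a while and then diverge to completely different states.

import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # The three-variable convection model in which Lorenz found chaos.
    x, y, z = state
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(t_span[0], t_span[1], 4001)

# Two initial states differing by one part in a million in x.
run_a = solve_ivp(lorenz63, t_span, [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)
run_b = solve_ivp(lorenz63, t_span, [1.000001, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)

separation = np.linalg.norm(run_a.y - run_b.y, axis=0)
for t_check in (5, 10, 20, 30):
    i = np.searchsorted(t_eval, t_check)
    print(f"t = {t_check:2d}: separation between the two runs = {separation[i]:.3g}")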
John Moore:
To your list of shortcomings today’s climate models can be added that they convey no information to a policy maker about the outcomes from his/her policy decisions thus making control of the climate impossible. These models are worthless for their advertised purpose.
Oops, I should have typed “shortcomings of” instead of “shortcomings.”
@terry – it’s even worse than that, because they appear to provide information that is useful, when it is not.
John Moore:
That’s right. The information that is useful for the purpose of controlling the climate (the “mutual information” in information theoretic terms) is nil. Non-nil mutual information is the product of models that make predictions, but today’s models make projections. An effect of applying Mike Jonas’s “duck test” is to obscure the important difference between a “prediction” and a “projection”, thus covering up the actual incapacity of today’s climate models to support regulation of the climate.
Interesting post Mike, but can you please tell me where you sourced your data? You say that the models deduce temperature increase from only 3 factors, namely carbon dioxide, water vapour and clouds, contributing 37%, 22% and 41% respectively. These are very specific numbers; did you deduce them yourself from model performance, or is there some IPCC reference for this? I know catastrophists tell us that CO2 is THE control knob, aided by positive feedback from water vapour, but surely any “sophisticated” computer model would take all of the many factors you mention into account?
IPCC AR4 8.6.2.3 : “Using feedback parameters from Figure 8.14, it can be estimated that in the presence of water vapour, lapse rate and surface albedo feedbacks, but in the absence of cloud feedbacks, current GCMs would predict a climate sensitivity (±1 standard deviation) of roughly 1.9°C ± 0.15°C (ignoring spread from radiative forcing differences). The mean and standard deviation of climate sensitivity estimates derived from current GCMs are larger (3.2°C ± 0.7°C) essentially because the GCMs all predict a positive cloud feedback (Figure 8.14) but strongly disagree on its magnitude.”
With CO2 alone at 1.2°C, that’s 37% of 3.2°C; water vapour, lapse rate and albedo feedbacks then add 0.7°C, i.e. 22%; and clouds add 1.3°C, i.e. 41%. I’m working with the model average and trying to keep things simple, so I use just the central figures without the ±.
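For anyone who wants to check the arithmetic, here is the split spelled out in Python, using the 1.2°C no-feedback figure and the central values from the AR4 quote above (the ± spreads are ignored, as stated):

# Back-of-envelope split of the AR4 mean climate sensitivity of 3.2 deg C.
co2_alone = 1.2        # no-feedback response to doubled CO2
without_clouds = 1.9   # with water vapour, lapse rate and albedo feedbacks
with_clouds = 3.2      # full GCM mean sensitivity

water_vapour_etc = without_clouds - co2_alone   # 0.7 deg C
clouds = with_clouds - without_clouds           # 1.3 deg C

for name, value in [("CO2 alone", co2_alone),
                    ("water vapour, lapse rate, albedo", water_vapour_etc),
                    ("clouds", clouds)]:
    print(f"{name}: {value:.1f} deg C = {100 * value / with_clouds:.1f}%")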
There is disagreement on whether climate models predict, project, or forecast. Whatever the intent, they in practice have proven incapable of performing any of the above so the question is moot.
They do burn a lot of CPU cycles though. I think there you might find consensus.
No. The models do project.
In real world usage, the difference between prediction and projection has to do with how much confidence you have in the result.
When I declare that I am making a prediction, I am putting my reputation on the line and declaring that I am confident that this is going to happen.
If I call it a projection instead, I am declaring that something might happen.
If you want to rework the economy of the entire world and transfer trillions of dollars of wealth from one group of people to another, you had darn well better be making predictions, not merely projections.
Until you are willing to put your reputation on the line and start making PREDICTIONS, then go away and stop bothering me.
MarkW
The real world contains numerous people who are susceptible to being duped by applications of the equivocation fallacy in making global warming arguments and numerous people who enjoy duping them out of their money through applications of this fallacy. Observing that this is happening, some of us try to prevent it from happening by making a distinction between “prediction” and “projection” under which “predictions” are made by a model possessing an underlying statistical population and “projections” are made by a model lacking an underlying statistical population. Lacking a statistical population a model is insusceptible to being validated and conveys no information to a policy maker about the outcomes from his/her policy decisions; these traits mark the model as unscientific and unsuitable for use in controlling the climate.
Others prefer not to distinguish between a model that possesses an underlying statistical population and a model that does not. Some of them argue that “prediction” should not be disambiguated because it is used ambiguously in the English vernacular. If they prevail, this has the effect of making non-scientific research look scientific and making models that are unsuited to their intended purpose of controlling the climate seem suited to it. The effect is to put us on track to spend an estimated 100 trillion US$ over the next century on phasing out fossil fuels for no gain.
You should use shorter sentences, and smaller words.
I have to admit that I’m over 60 years old, have a vocabulary of at least 300 words, yet had never heard or seen these two words used in your comment: “insusceptible” and “disambiguated”
I read your post out loud for friends.
I used it as an example of how some smart people can have difficulty communicating their knowledge.
No one else knew what you were talking about.
I tried to explain your post, but while reading it I ran out of breath, started coughing, and someone had to call 911.
My solution, when I hear anyone predicting the future (climate or anything else, with computer models or not), is to plug my ears with my fingers and loudly hum the US national anthem.
I’m tired of listening to predictions of the future that will be wrong.
Especially predictions of doom used by smarmy religious and political leaders to control the masses and/or take their money … which is why I got interested in global warming in the late 1990s — the “warmists” seemed like a new religion, or cult. … And I also thought Al Gore was a stupid head, so didn’t trust him.
Richard Greene September 19, 2015 at 9:58 am
Obviously you need a larger vocabulary. I have seen those words before. I may even have used them.
I know all of the words used, and knew what Terry was saying, and may in fact be one of the people he was referring to as responsible for all the wasted money… but still laughed out loud at Richard Greene’s very funny comment.
But since I have argued vociferously against CAGW since the late 1980’s, I know for sure that there aint none of it my dang fault.
Terry:
Your article here: http://wmbriggs.com/post/7923/ is very interesting, and makes some very good and valid points. However, it is rather technical and trying to explain it here in fewer words is probably not very fruitful. Instead, I would refer people to that link and let it speak for itself.
Anne Ominous:
Thanks for the feedback. “Scientists” who include President Obama’s science advisor have fallen down on the job by representing a pseudoscience to be a science. Unless I can figure out how to communicate this message to voters, or someone else can figure out how to do so, mankind is screwed. Your continuing support in figuring this out would be much appreciated. You can reach me at terry@knowledgetothemax.com .
Terry:
I will probably email you but it will likely be a couple of days due to prior obligations.
I have long been interested in the practice of effectively explaining complex ideas to the laity, but that does not imply I am any sort of expert. Still, in my own field I am well regarded for that, if only informally.
I have long felt that many of the logical failures of AGW advocates should be addressed, but many exchanges with others over the years have left me feeling rather isolated in that regard.
So I will email. It may be as late as Wednesday.
Anne,
I myself have noted that there are pages and pages, amounting to reams of commentary, here at WUWT in which many individuals almost continually point out the various logical fallacies, shortcomings, and failures of the warmista cadre and their long since falsified meme.
Thanks, Mike Jonas. Excellent article.
“The climate models’ predictions are very unreliable.”, you are too kind.
Several bloggers have joined the authors of the IPCC assessment reports in using the term “signal” inappropriately. For a control system there is no such thing as a signal or noise as the signal + noise would have to travel at a superluminal speed to carry information to the controller about the outcomes of the events. As the aim of global warming research is control, this “signal” does not exist.
Maybe the signal is communicated by means of quantum mechanical entanglement, and thus has the superluminal aspects you scoff at?
Didja ever think a dat?
(I reckon I should speak in common vernacular when addressing Mr. Oldberg, so I does.)
Menicholas:
You exhibit difficulty in or unwillingness to communicate in technical English. Are you having a seizure? If so, call 911. Do you mean to mock me? If so, bug off. Otherwise, perhaps you have sufficient training to wrap your head around technical concepts when they are communicated in technical English and to reply in technical English.
In technical English, a signal is not “communicated” but rather is the basis for communication. Communication is conducted by the passage of a signal at or below the speed of light. There is no violation of relativity theory.
It is in control that passage of a signal above the speed of light would be required for the flow of information. It can be concluded that there is no such signal. Climatologists imply that there is. As the issue is control and not communication, communication by quantum mechanical entanglement is irrelevant.
Terry, I will “bug off”, and not respond to you here again.
Anyone can read the thread and decide if I am out of line for having the temerity to make a joke or mock one such as you, who would never stoop to mocking anyone, least of all those who hold the same general position as you.
Likewise, and one can decide if I know how to communicate using English, or if I have any degree of technical knowledge and am able to effectively communicate my thoughts.
As well as whether or not you display any willingness to engage people in the tone of the then ongoing conversation, whatever such tone may be.
I will say that I get the impression that you may be a rather humorless sort, and that you seem to relish displaying a grating tone of belittlement in your expert commentary.
L. Buckingham,
Since Mr. Oldberg is a published, peer reviewed author, his understanding of peer review is clearly superior to Mr. Buckingham’s — unless Lewis B is also a peer reviewed author?
Also, Terry, if I may play peacemaker here, I think Menicholas was being funny, and probably a little sarcastic. He sometimes comments like that. As a more objective observer, I think you both might be misreading the other, and with that unasked-for comment I will STFU and GTFA.☺
The Buckster says:
Without providing a citation for any “peer reviewed” work, all Stealey is doing is blowing smoke.
Whoa there, cowboy! One thing at a time. As always, I can back up my statements.
But you were digging a different hole just 4 minutes ago, erroneously claiming that T. Oldberg didn’t provide a source to a particular paper.
Maybe you’ve found it now, so you’re deflecting to this question?
Ya know, Bucky, you ask dozens and dozens of questions in these threads. But you never really answer any. Why not?
So, if I provide you with the citation you’re demanding here, will you agree to answer ‘Yes’ or ‘No’ to just five (5) questions from me? I’ll make them easy.
You will still be far ahead in the ‘asking questions but not answering questions’ game. But at least we will have narrowed the gap by five. So, are you game?
I have to go out for a while, but as soon as I’m back, I will log on to see if you accept my kind offer. See you then, pardner.
dbstealey: “Whoa there, cowboy! One thing at a time. As always, I can back up my statements.”
Yes, you keep saying that.
So go on, do it.
Or back off.
Lewis P Buckingham:
I frequently observe you playing the game called “defamation by innuendo.”
I’m back! And just as I knew he would, Bucky is still at it, digging his hole deeper:
Terry, it is real easy for both you and Stealey to end this discussion. Simply post the citation of the peer-reviewed journal…&blah, blah, etc.
First off, I don’t want to end this discussion. It is highly amusing watching Buckingham squirm. The referenced paper is there, in the posted link. Bucky says, I need to see that the person that makes a big fuss…&etc.
But of course, Mr. B is actually the one making the big, constant, unending fuss, whining that others must do what he demands. Bucky, no one else is claiming ignorance like you, by complaining that they can’t find the paper that Terry Oldberg linked to. Only you are. No one else seems to have that problem.
Next:
I don’t play games Stealey.
And a good thing, too (if it was true). From the looks of your comments, you would have a difficult time winning a game of tic-tac-toe.
Anyway, I simply pointed out that you incessantly ask questions, but you rarely answer anyone else’s questions. So let me know if you are willing to answer a few, like I proposed.
If so, I’ll post the paper you can’t find. If not, go find it on your own.
Oh, goody. I’m being ignored.
Terry Oldberg, I found your link in about fifteen seconds with no trouble. Buckingham is desperate to make you, or me, or someone post it for him.
Don’t do it. It’s there, he can find it himself. Bucky claims he doesn’t play games, but this is a game to him. He wants you or me to give him what he demands.
I told him I would post the link for him if Bucky would agree to answer a few questions. That’s fair, no? So he can get the link, easy-peasy. But as we see, he’s playing a game here. And so far, he’s losing.
The best thing to do about blog denizens who clearly live under bridges is to refrain from feeding them.
catweazel666 says:
So go on, do it. Or back off.
Why the hostility? Could you not find the referenced paper, either?
Or are you questioning my comment about Terry Oldberg? You’re not clear about which, and WordPress as usual screwed up the placement of comments. It was only by accident that I happened to see yours, and looked at the time stamp.
So, please explain clearly what you need to know. And I can do without the hostility. I have never said anything negative about you, not once, nor have I ever been snarky or unpleasant with you. Same with Buckingham: he inserted himself into a conversation I was having with someone else, and he did it in a hostile, unfriendly and confrontational way. What does he expect now? Kissy-face?
Treat me good, I’ll treat you better
Treat me bad, I’ll treat you worse
We can start over on the right foot. Or not. It’s up to you.
dbstealey:
I admire and endorse your firm but fair approach to human relations on scientific issues!
catweazel666,
Now that Buckingham has stated that he will ignore my comments, I feel I owe you an answer.
I was having fun by not giving Lewis B what he was incessantly – and impolitely – demanding. But since you asked, here is a peer-reviewed publication authored by Terry Oldberg:
Oldberg, T. and R. Christensen, 1995, “Erratic Measure” in NDE for the Energy Industry 1995; The American Society of Mechanical Engineers, New York, NY.
There may be more. I cannot vouch as to the accuracy or data contained in anyone’s papers, and you can be sure that detractors can be found for almost any peer reviewed paper. But the specific challenge questioned whether Terry has any peer reviewed publications. This answers Bucky’s challenge, which he has now lost. He doesn’t matter, but I wanted to show you, at least, that I don’t fabricate things.
I can also post the peer reviewed, published paper that got Buckingham so spun up in this thread. It’s right there, and easy to find.
It’s always been there, if anyone had taken the time to look, instead of arguing for about twenty times longer than it would take to find it, demanding that someone else must produce it for him. It took me less than half a minute to find it and start reading it. The link is still there.
When speaking about how reliable the climate models are, the IPCC makes another error in considering the average of the model results. The average value of an ensemble of climate models is often used as an argument in the debate. What does it mean? The following is a quote from the contribution of Working Group I to the fifth assessment report by the Intergovernmental Panel on Climate Change:
Box 12.1 | Methods to Quantify Model Agreement in Maps
“The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.”
This can be regarded as a glimpse of realism, except for the logical fallacy expressed in the same section. Let us rephrase the section:
The ensemble mean does not convey any information on:
– the robustness of this response across models
– its uncertainty
– likelihood
– its magnitude relative to unforced climate variability
but it is a useful quantity to characterize the average response to external forcing.
That is quite a silly thing to say – isn’t it?
How can it be useful when you do not know
– if it is robust
– its uncertainty
– its likelihood
– its magnitude relative to unforced climate variability?
Exactly what is the ensemble mean then supposed to be useful for?
Later in the same section it is stated:
“There is some debate in the literature on how the multi-model ensembles should be interpreted statistically. This and past IPCC reports treat the model spread as some measure of uncertainty, irrespective of the number of models, which implies an ‘indistinguishable’ interpretation.”
I think this section speaks for itself. What “implies an ‘indistinguishable’ interpretation” is supposed to mean, I have no idea. It seems to me to be totally meaningless.
And this passed the peer review by experts and the governmental review the IPCC so heavily relies on and trusts. Ref. Article 3 in the “Principles governing IPCC work”.
Something that “…does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.” cannot on any objective criteria be considered useful.
It is only the likes of Mosher who consider climate models useful. They are so bad that they are a hindrance to our understanding, not an aid.
It has been patently obvious for a long time that climate models should be ditched and the money spent on them redirected to more productive science.
Like the projections from which it is formed, the multimodel mean is non-falsifiable and conveys no information to a policy maker about the outcomes from his/her policy decisions. Thus, it is scientifically nonsensical.
The fact that the IPCC thinks that any useful information is conveyed by an average of different climate models, or even different runs of the same climate model, should be a flag to everyone that the IPCC does not know what it is talking about. Assuming that the models used in the IPCC report are a representative sample of the entire universe of climate models, the average would then only give you an expected value of what another climate model would produce as an output. That’s it. It conveys no other information. The average certainly tells you nothing about how the actual climate behaves.
Say I’m forced to bet on a hockey game – a sport I know nothing about. I check out the predicted scores by ESPN, CBS, Fox Sports – as many as I can for the game I have to bet on – and they are all over the place. Being the amateur that I am, what do I do? I just take a meaningless average of the predictions, which again only tells me an expected value of some other person’s prediction that I didn’t consider. If I did know anything about hockey I could either form my own intelligent prediction or at least have the wherewithal to distinguish whose predictions made sense and whose did not. Taking an average of other people’s predictions is a sign of an amateur – not a sign of expertise.
To understand what the average of a series of samples represents, you first have to understand what the samples are. In this case, the samples are of model runs of the Earth’s climate – and are not samples of the climate itself. The average, therefore, does not convey any useful information about how the climate actually behaves EXCEPT under the unproven (delusional) assumption that the climate models accurately simulate the behavior of the real climate.
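A toy Python illustration of that point: the two made-up ensembles below have the same mean warming trend, but the mean alone tells you nothing about the spread of the runs, let alone about how the real climate behaves. All the numbers are invented.

import numpy as np

rng = np.random.default_rng(0)

# Made-up "model runs": each value is a warming trend in deg C per decade.
# Two ensembles built to have the same mean but very different spread.
tight_ensemble = rng.normal(loc=0.20, scale=0.02, size=20)
loose_ensemble = rng.normal(loc=0.20, scale=0.15, size=20)

for name, runs in (("tight", tight_ensemble), ("loose", loose_ensemble)):
    print(f"{name} ensemble: mean = {runs.mean():.2f}, "
          f"spread (std) = {runs.std(ddof=1):.2f}, "
          f"range = [{runs.min():.2f}, {runs.max():.2f}]")

# Both means come out near 0.20 deg C/decade; the mean by itself cannot tell
# you which ensemble it came from, and neither ensemble is a sample of the
# real climate in the first place.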
How many of you think that climate models programmed by people who fervently believe that CO2 plays an important role in the Earth’s average temperature will produce an output that DOES NOT show CO2 having an important effect on the Earth’s average temperature? I think that climate scientists use computer models as a backdoor way of fabricating the data that the real world won’t let them collect scientifically. I think it’s unethical, and I can’t imagine any other scientific discipline in which this kind of nonsense would be tolerated.
There is no way of conducting a controlled experiment on the Earth’s climate system to actually measure the effect of CO2 concentration on temperature (or any other climate state variable). You can’t tell the sun not to change its output, or control the amount of clouds, etc. You can’t tell volcanoes to stop erupting. Neither is there any way of conducting a study, like an epidemiological study, that determines the effect that X has on system Y by examining a large number of instances of the system with control populations, etc. There’s only one Earth. In other words, all the scientific ways of measuring the effect CO2 has on the Earth’s climate are unavailable to scientists, and since scientists are experts only in the scientific procedure, this leaves them up a creek without a paddle.
So instead what they do is invent the data they so desperately want by programming computers to simulate the way they THINK the climate operates. When the computer runs don’t match the actual, measured climate data, they make all kinds of excuses:
“Not enough time has passed, the real-world data is bad and needs adjusting, maybe it’s the aerosols, or the oceans are absorbing the heat – whatever. But don’t you worry, even though we don’t know why temperatures aren’t moving the way our models said they should, our confidence that CO2 contributed to at least half of the observed warming just went up from 90% to 95%. Trust us. What? You want performance before relying on our conclusions? Ha – we don’t need no stinkin’ performance. We’re ‘experts.’ Trust us on that.”
Diatribe over.
Thanks for an excellent article which has stimulated a large number of very interesting and thoughtful posts (and, with one possible exception, no denials).
By the way, instead of arguing about “projections” and “predictions” why not use “prognostications” which, for me at least, carries a greater element of guesswork than do the other two words.
dbstealey
By the way, in the literature of global warming climatology, the word “science” is polysemic, thus fostering applications of the equivocation fallacy. It references pseudoscience as well as legitimate science, thus being ideally suited to the task of covering up the pseudoscientific nature of global warming research.
Solomon Green:
To suggest that debaters are “arguing about ‘projections’ and ‘predictions’” muddies waters needing clarification. When described in logical terms, the topic of the debate is the equivocation fallacy in global warming arguments. One subtopic is the role of the polysemic form of “prediction” in applying this fallacy to the task of reaching false or unproved conclusions from global warming arguments. A second subtopic is the availability of means by which applications of the equivocation fallacy can be made impossible. This is by assignment of one of the two definitions of the polysemic form of “prediction” to a monosemic form of “prediction” and the other definition to “projection.” This approach is already in widespread use.
In making a global warming argument, the words “prediction” and “projection” are merely ways of referencing the associated meanings. Thus, they can be replaced by any other pair of words, including made-up words, that reference the same meanings without effect on the logical legitimacy of the argument being made. Thus to suggest that an argument is being conducted over the trivial issue of the semantics of “prediction” and “projection” is inaccurate and misleading.