Is It Time To Stop The Insanity Of Wasting Time and Money On More Climate Models?

Guest Opinion: Dr. Tim Ball

Nearly every single climate model prediction, projection, or whatever else they want to call them has been wrong. Weather forecasts beyond 72 hours typically deteriorate into their error bands. The UK Met Office summer forecast was wrong again; I have lost track of the number of times they have been wrong. Apparently, the British Broadcasting Corporation had enough, because it stopped using their services. They are not just marginally wrong. Invariably, the weather is the inverse of their forecast.

Short, medium, and long-term climate forecasts are wrong more than 50 percent of the time, so a correct one is no better than a random event. Global and/or regional forecasts are often equally incorrect. If there were a climate model that made even 60 percent accurate forecasts, everybody would use it. Since there is no single accurate climate model forecast, the IPCC resorts to averaging out their model forecasts as if, somehow, the errors would cancel each other out and the average of the forecasts would be representative. Climate models and their forecasts have been unmitigated failures that would cause an automatic cessation in any other enterprise. Unless, of course, it is another government-funded fiasco. Daily weather forecasts have improved since modern forecasting began in World War I. However, even short-term climate forecasts appear no better than the Old Farmer's Almanac, which first appeared in 1792, using moon, sun, and other astronomical and terrestrial indicators.

I have written and often spoken about the key role of the models in creating and perpetuating the catastrophic AGW mythology. People were shocked by the leaked emails from the Climatic Research Unit (CRU), but most don’t know that the actual instructions to “hide the decline” in the tree ring portion of the hockey stick graph were in the computer code. It is one reason that people translate the Garbage In, Garbage Out (GIGO) acronym as Gospel In, Gospel Out when speaking of climate models.

I am tired of the continued pretense that climate models can produce accurate forecasts in a chaotic system. Sadly, the pretense occurs on both sides of the scientific debate. The reality is the models don’t work and can’t work for many reasons, including the most fundamental: lack of data, lack of knowledge of major mechanisms, lack of knowledge of basic physical processes, lack of ability to represent physical mechanisms like turbulence in mathematical form, and lack of computer capacity. Bob Tisdale summarized the problems in his 2013 book Climate Models Fail. It is time to stop wasting time and money and to put people and computers to more important uses.

The only thing that keeps people working on the models is government funding, either at weather offices or in academia. Without this funding, computer modelers would not dominate the study of climate. Without the funding, the Intergovernmental Panel on Climate Change (IPCC) could not exist. Many of the people involved in climate modeling were not familiar with, or had no training in, climatology or climate science. They were graduates of computer modeling programs looking for a challenging opportunity with large amounts of funding available and access to large computers. The atmosphere, and later the oceans, fit the bill. Now they put the two together to continue the fiasco. Unfortunately, it is all at massive expense to society. Those expenses include the computers and the modeling time but, worse, the cost of applying the failed results to global energy and environmental issues.

Let’s stop pretending and wasting money and time. Remove that funding and nobody would spend private money to work on climate forecast models.

I used to argue that there was some small value in playing with climate models in a laboratory, provided the modelers accepted a scientific responsibility for accuracy, feasibility, and applicability. It is clear they do not fulfill those responsibilities, and now I realize that position was wrong. When model results are used as the sole basis for government policy, there is no value. There is only a massive cost and detriment to society, which is what the IPCC was specifically designed to produce.

The IPCC has one small value. It illustrates all the problems identified in the previous comments. Laboratory-generated climate models are manipulated outside of even basic scientific rigor in government weather offices or academia, and then become the basis of public policy through the Summary for Policymakers (SPM).

Another value of the IPCC Physical Science Basis Reports is that they provide a detailed listing of why models can’t and don’t work. Too bad few read or understand them. If they did, they would realize that the limitations preclude any chance of success. Even a partial examination illustrates the point.

Data

The IPCC people knew of the data limitations from the start, but it didn’t stop them from building models.

In 1993, Stephen Schneider, a primary player in the anthropogenic global warming hypothesis and the use of models, went beyond doubt to certainty when he said,

“Uncertainty about important feedback mechanisms is one reason why the ultimate goal of climate modeling – forecasting reliably the future of key variables such as temperature and rainfall patterns – is not realizable.”

A February 3, 1999, US National Research Council Report said,

Deficiencies in the accuracy, quality and continuity of the records place serious limitations on the confidence that can be placed in the research results.

To which Kevin Trenberth responded,

It’s very clear we do not have a climate observing system….This may come as a shock to many people who assume that we do know adequately what’s going on with the climate, but we don’t.

Two Directors of the CRU, Tom Wigley and Phil Jones, said,

Many of the uncertainties surrounding the causes of climate change will never be resolved because the necessary data are lacking.

70% of the world is ocean, and there are virtually no stations. The Poles are critical in the dynamics of driving the atmosphere and creating climate, yet there are virtually no stations in the 15 million km² of the Arctic Ocean or the 14 million km² of Antarctica. Approximately 85% of the surface has no weather data. The IPCC acknowledge the limitations by claiming that a single station’s data are representative of conditions within a 1200 km radius. Is that a valid assumption? I don’t think it is.
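As a rough, back-of-the-envelope illustration of how sweeping that assumption is, here is a minimal sketch. The only figure taken from the text above is the 1200 km radius; the Earth radius is a standard value, and the simple tiling arithmetic is illustrative only, not anything the IPCC actually computes.

```python
# Illustrative arithmetic only: how much area does one station nominally
# "represent" if its data are assumed valid within a 1200 km radius?
import math

EARTH_RADIUS_KM = 6371                              # standard mean Earth radius
earth_surface = 4 * math.pi * EARTH_RADIUS_KM ** 2  # ~510 million km^2
per_station = math.pi * 1200 ** 2                   # ~4.5 million km^2

print(f"Earth surface area: {earth_surface / 1e6:.0f} million km^2")
print(f"Area per station:   {per_station / 1e6:.1f} million km^2")
print(f"Stations needed to nominally tile the globe: {earth_surface / per_station:.0f}")
```

On that assumption, on the order of a hundred well-placed stations would nominally “cover” the entire planet, which gives some sense of how generous the 1200 km figure is.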

But it isn’t just a lack of data at the surface. Actually, it is not data for the surface, but for a range of heights above the surface, between 1.25 and 2 m, and as researchers from Geiger (Climate Near the Ground) onward have shown, this is markedly different from actual surface temperatures as measured at the few microclimate stations that exist. Arguably, US surface stations are the best, but Anthony Watts’ diligent study shows that only 7.9 percent of them are accurate to better than 1°C (Figure 1). To put that in perspective, in the 2001 IPCC Report Jones claimed a 0.6°C increase over 120 years was beyond a natural increase. That also underscores the fact that most of the instrumental record temperatures were measured only to the nearest 0.5°C.


Figure 1

Other basic data, including precipitation, barometric pressure, and wind speed and direction, are worse than the temperature data. For example, in Africa there are only 1,152 weather watch stations, which is one-eighth of the World Meteorological Organization (WMO) recommended minimum density. As I noted in an earlier paper, the lack of data for all phases of water alone guarantees the failure of IPCC projections.

The models attempt to simulate a three-dimensional atmosphere, but there is virtually no data above the surface. The modelers think we are foolish enough to believe that more layers in the model will solve the problem, but the number of layers doesn’t matter if you have no data.

Major Mechanisms

During my career as a climatologist, several mechanisms of weather and climate were either discovered or measured, supposedly with sufficient accuracy for application in a model. These include El Niño/La Niña (ENSO), the Pacific Decadal Oscillation (PDO), the Atlantic Multidecadal Oscillation (AMO), the Antarctic Oscillation (AAO), the North Atlantic Oscillation (NAO), the Dansgaard-Oeschger Oscillation (D-O), the Madden-Julian Oscillation (MJO), and the Indian Ocean Dipole (IOD), among others.

Despite this, we are still unclear about the mechanisms associated with the Hadley Cell and the Inter-tropical Convergence Zone (ITCZ), which are essentially the entire tropical climate mechanism. The Milankovitch Effect remains controversial and is not included in IPCC models. The Cosmic Theory appears to provide an answer to the relationship between sunspots, global temperature, and precipitation but is similarly ignored by the IPCC. The models do not deal with the monsoon mechanism well, as the IPCC notes,

In short, most AOGCMs do not simulate the spatial or intra-seasonal variation of monsoon precipitation accurately.

There is very limited knowledge of the major oceanic circulations at the surface and in the depths. There are virtually no measures of the volumes of heat transferred or how they change over time, including measures of geothermal heat.

Physical Mechanisms

The IPCC acknowledge that,

“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

That comment is sufficient to argue for cessation of the waste of time and money. Add the second and related problem identified by Essex and McKitrick in Taken By Storm and it is confirmed.

Climate research is anything but a routine application of classical theories like fluid mechanics, even though some may be tempted to think it is. It has to be regarded in the “exotic” category of scientific problems in part because we are trying to look for scientifically meaningful structure that no one can see or has ever seen, and may not even exist.

In this regard it is crucial to bear in mind that there is no experimental set up for global climate, so all we really have are those first principles. You can take all the measurements you want today, fill terabytes of disk space if you want, but that does not serve as an experimental apparatus. Engineering apparatus can be controlled, and those running them can make measurements of known variables over a range of controlled physically relevant conditions. In contrast, we have only today’s climate to sample directly, provided we are clever enough to even know how to average middle realm data in a physically meaningful way to represent climate. In short, global climate is not treatable by any conventional means.

Computer Capacity

Modelers claim computers are getting better, and all they need are bigger, faster computers. It can’t make any difference, but they continue to waste money. In 2012, Cray introduced the promotionally named Gaea supercomputer (Figure 2). It has a 1.1 petaflops capacity. FLOPS means Floating-Point Operations per Second, and peta is 10^15, so 1.1 petaflops is 1.1 quadrillion (a thousand trillion) floating-point operations per second. Jagadish Shukla says the challenge is:

We must be able to run climate models at the same resolution as weather prediction models, which may have horizontal resolutions of 3-5 km within the next 5 years. This will require computers with peak capability of about 100 petaflops

Regardless of the computer capacity, it is meaningless without data for the model.


Figure 2: Cray’s Gaea Computer with the environmental image.
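To put the figures quoted above side by side, a minimal sketch; the 1.1 petaflops capacity and the roughly 100 petaflops requirement are the numbers quoted in the text, and nothing else is assumed.

```python
# Illustration only: the quoted Gaea capacity versus the quoted requirement
# for running climate models at weather-forecast resolution.
PFLOPS = 1e15                  # one petaflop = 10^15 floating-point operations per second

gaea_peak = 1.1 * PFLOPS       # Gaea's stated peak capacity
required = 100 * PFLOPS        # Shukla's stated requirement for 3-5 km resolution

print(f"Gaea peak capacity: {gaea_peak:.2e} FLOPS")
print(f"Stated requirement: {required:.2e} FLOPS")
print(f"Shortfall factor:   {required / gaea_peak:.0f}x")
```

In other words, by the modelers’ own estimate the hardware would need to be roughly ninety times more capable, and, as noted above, even that says nothing about the missing data.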

Failed Forecasts (Predictions, Projections)

Figure 3 shows the IPCC’s failed forecasts. They call them projections, but the public believes they are forecasts. Either way, they are consistently wrong. Notice the labels added to Hayden’s graph, taken from the Summary for Policymakers. As the error range in the actual data increases, the Summary claims the forecasts are improving. One of the computer models used for the IPCC forecast belongs to Environment Canada. Its forecasts are the worst of all those averaged results used by the IPCC (Figure 4).


Figure 3


Figure 4. Source: Ken Gregory

The Canadian disaster is not surprising, as their one-year forecast assessment indicates. They make a one-year forecast and provide a map indicating the percentage accuracy against the average for the period 1981-2010 (Figure 5).


Figure 5

The Canadian average accuracy percentage is shown in the bottom left as 41.5 percent. That is the best they can achieve after some thirty years of developing the models. Other countries’ results are no better.

In a New Scientist report, Tim Palmer, a leading climate modeller at the European Centre for Medium-Range Weather Forecasts in Reading, England, said:

I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.

The Cost

Joanne Nova has done the most research on the cost of climate research to the US government.

In total, over the last 20 years, by the end of fiscal year 2009, the US government will have poured in $32 billion for climate research—and another $36 billion for development of climate-related technologies. These are actual dollars, obtained from government reports, and not adjusted for inflation. It does not include funding from other governments. The real total can only grow.

There is no doubt that number grew, and the world total is likely double the US amount, as this commentator claims:

However, at least I can add a reliable half-billion pounds to Joanne Nova’s $79 billion – plus we know already that the EU Framework 7 programme includes €1.9 billion on direct climate change research. Framework 6 runs to €769 million. If we take all the Annex 1 countries, the sum expended must be well over $100 billion.

These are just the computer modeling costs. The economic and social costs are much higher and virtually impossible to calculate. As Paul Driessen explains:

As with its polar counterparts, 90% of the titanic climate funding iceberg is invisible to most citizens, businessmen and politicians.

It’s no wonder Larry Bell can say,

The U.S. Government Accounting Office (GAO) can’t figure out what benefits taxpayers are getting from the many billions of dollars spent each year on policies that are purportedly aimed at addressing climate change.

If it is impossible for a supposedly sophisticated agency like the US GAO to determine the costs, then there is no hope for a global assessment. There is little doubt the direct cost is measured in trillions of dollars. That does not include the lost opportunities for development and the lives continuing in poverty. All this because of the falsified results from completely failed computer model predictions, projections, or whatever they want to call them.

Is it time to stop the insanity, which in climate science is the repetition of creating computer models that don’t and can’t work? I think so.

“Those who have knowledge don’t predict. Those who do predict don’t have knowledge.” – Lao Tzu (6th century BC)


Note: this article was updated shortly after publication to fix a text formatting error.

Comments
Salvatore Del Prete
September 14, 2015 9:57 am

They can spend as much money as they want on climate models, but the bottom line is they will never work because they do not put in correct and complete data.
Let them keep wasting and knocking themselves out, so when the day of reckoning comes they fall that much harder. That day of reckoning will probably come before this decade ends.

Mickey Reno
Reply to  Salvatore Del Prete
September 14, 2015 10:20 am

I’d stop them from wasting, immediately, if I had my druthers. And why do you suppose that if you allow them to continue to waste you’re money, that things will collapse? And that collapse is a manageable event, that the collapse would lead to a new direction you’d favor? What if they collapse in a direction that the Cloward’s and Piven’s of this world envision?
Let’s send Kevin and Gavin to the unemployment line, now. It galls me to think of the pensions they’re going to draw for a lifetime of “service” that helped no one.

Mickey Reno
Reply to  Mickey Reno
September 14, 2015 10:21 am

your money – not you’re money. [blush]

Reply to  Mickey Reno
September 14, 2015 3:15 pm

Are you advocating that a person in a position of authority inform the climate modelers:

“YOU’RE FIRED!”


?

schitzree
Reply to  Mickey Reno
September 14, 2015 6:21 pm

Stormy… I see what you did there. ^_^

Expat
Reply to  Salvatore Del Prete
September 14, 2015 10:38 am

Unfortunately, El Nino will most likely drive up temps in the short term and lessen skeptics’ ability to influence anything. My guess is more money will be put into climate models, not less.

Somebody
Reply to  Salvatore Del Prete
September 14, 2015 10:52 am

They will never work because of the Lyapunov exponents. End of story.

RD
Reply to  Salvatore Del Prete
September 14, 2015 11:36 am

Exactly right. Let them spend and model to their heart’s content. We will have the real world to compare and contrast with their imaginary pink unicorns.

Marcuso8
Reply to  Salvatore Del Prete
September 14, 2015 12:00 pm

I predict ( with a 99.9% accuracy ) that the Glo.Bull Warming hoax will come to a crashing halt in 2017 !!!!

csanborn
Reply to  Salvatore Del Prete
September 14, 2015 12:16 pm

As I understand it from Dr. Christopher Essex – University of Western Ontario – climate models can never be correct because of: 1) calculation residual, 2) machine epsilon, and 3) model parameterization.

Phaedrus
Reply to  csanborn
September 14, 2015 5:36 pm

If you have an hour this is a must!! Clear and concise in what is a very complex discussion!

Hivemind
Reply to  csanborn
September 14, 2015 10:25 pm

4) incorrect assumptions (assuming the Earth is like an onion – ie “radiative forcing”)
5) incomplete physics (they can’t even predict the Sahara Desert, they need to program it in instead)
6) a need to force them to fit existing (to 1978) data, instead of making genuine predictions

Reply to  Salvatore Del Prete
September 14, 2015 1:06 pm

The day of reckoning by 2020?
I wouldn’t bet on that one. They are getting away with it and they know it. The current game plan seems to be that every weather calamity is blamed on Climate Change. The longer they keep that up, the more believable it becomes. As personal employment becomes more and more dependent on combating Climate Change, the more entrenched it will become. Any day of reckoning will only occur when the economy will no longer support the scheme.

Reply to  Steve Case
September 14, 2015 3:48 pm

I forgot to point out that a week or so ago there was a post about the Climate Change industry being a $1.5 Trillion business world-wide.
With that kind of money driving it, I wouldn’t bet on a quick and decisive demise any time soon.

george e. smith
Reply to  Salvatore Del Prete
September 14, 2015 1:36 pm

Right on Dr. Tim !!
And add this to your list of ” well we just took a wild a*** guess because we can’t possibly get enough meaningful samples (from the ground), so we just made it up and some of it is something like something that happens sometimes. ”
…. http://www.aip.org/history/climate/impacts.htm … for the full version of historian emeritus Spencer Weart’s concise history of climate science. He wrote a book called “The Discovery of Global Warming” or something close to that, so he pushes the gospel to sell his book.
From what you are saying Tim; this isn’t even a good approximation for throwing darts at a wall to see if anything sticks.
g
PS Weart does have an impressive resume in Physics. I’m not in any way suggesting his credentials are unsound; quite the contrary.
But you will find the word ” consensus ” in his essay, more times than I care to count. And mostly it is about the hidden inner machinations of committees, rather than scientific evidence supported by peer reviewed experimental data.
It certainly adds to the pile of shameful substitute for scientific rigor.

george e. smith
Reply to  george e. smith
September 14, 2015 1:55 pm

I noticed that fig. 3 is a Dr Roy Spencer graph, with subtitles.
I did like the fact that the further they go back in the past on their wayback machine, they get less and less confident, that they even know what happened back then.
Well that’s in line with Spencer Weart’s assertion that they basically took a guess.
I guess this whole line of research is fully in keeping with Lord Rutherford’s admonition:
” If you have to use statistics, you should have done a better experiment. ”
Or in this case, made better wild arse guesses.
g

george e. smith
Reply to  Salvatore Del Prete
September 14, 2015 2:17 pm

I always thought that statistics could only tell you what fraction of some large number of experiments would meet some criterion.
And most of the math is only valid if the variables show a normal (izzat Gaussian ??) distribution.
As far as I know any single experiment could come up with ANY physically possible result; which cannot be predicted.
Hey news flash, earth to scientists : This is not a dress rehearsal; we are only going to get ONE SHOT at the future; there will not be any instant replay , and no refunds will be issued if you don’t like the outcome.
So stop wasting our time with statistics. Observe and report what happens; and don’t do any 13 month running averages, or nine point filters, or any other information destroying filtering.
Read the meter, and write down the reading !!
g

indefatigablefrog
Reply to  Salvatore Del Prete
September 14, 2015 3:33 pm

Re: “they do not put in correct, and complete data”.
But, but, but…those are the official global temperatures as measured by real professional temperature measuring sailors in the open ocean, using real buckets on ropes. As then assessed, interpreted and suitably adjusted by the very trustworthy Phil Jones et al in the bucket modelling dept at East Angular.
Then readjusted later for various spurious reasons.
It is fortunate for the individual sailor who performed this bizarre duty that he was never told that the output of his diligently (or not so diligently) performed duty would one day form the basis for an attempt to exactly assess the average temperature of the globe during the first half of the 20th century to within ONE TENTH OF A DEGREE.
Not that such an onerous responsibility would necessarily have bothered the sailor in question.
But, simply because he would have become, in that moment, aware that his own descendants were destined to be a bunch of complete idiots.
I maintain some temperature recording sheets as a part of my job. Since we do not regard the temperature recording to be of any great significance, the numbers recorded are mostly fictional. It is easier to concoct a realistic looking number and write that down, than to actually go to the trouble to reading the usually faulty thermometer.
Occasionally, for my own amusement I have attempted to check the accuracy of the thermometers by laying two of them side by side. When I have done this I have usually discovered a very significant discrepancy.
Sometimes I attempt to point this out. But nobody is at all interested.
I expect that a sailor in the Southern Ocean was beset by a similar dilemma. Whether to diligently record the real ocean temperature or whether to just guess something and write it in the log – thereby saving himself the bother of slinging a bucket overboard and hauling it back onto the deck etc.
The poor man never imagined that the future of the human race lay in his hands.

Brett Keane
Reply to  indefatigablefrog
September 15, 2015 5:14 am

Well put! Having done that job, I doubt the samples would be too far wrong, on average. Water doesn’t change T quickly, and the drying outside of bucket will supply slight evaporative cooling, But, ahem, much depends on Officer and Petty Officer vigilance. Not much reason for crewmen to care much, otherwise. Yes, real malevolent stupidity is left for the present day climate clowns like Karl and Coy.

Latitude
Reply to  Salvatore Del Prete
September 14, 2015 5:06 pm

Salvatore, they will never work because they keep adjusting the past data…
BTW…modeling climate should be magnitudes simpler than modeling weather

E.M.Smith
Editor
Reply to  Salvatore Del Prete
September 14, 2015 5:15 pm

It is not just the wrong data, but the wrong process coded into the programming as well. Crap assumptions in crap programs will just give more crap faster on bigger computers.

Vboring
Reply to  Salvatore Del Prete
September 14, 2015 5:44 pm

If it were play money, I would agree.
But that kind of cash could do real good. Total funding for novel fusion reactors in the US this year is less than $100M. For a technology that could bring virtually free energy to everyone and make space exploration practical, this is a travesty.

James Francisco
Reply to  Salvatore Del Prete
September 14, 2015 6:41 pm

Maybe they can use the climate modeling computers to figure out the total amount of money that was wasted on this fiasco.

george e. smith
Reply to  Salvatore Del Prete
September 14, 2015 6:42 pm

Tim poses a lot of arguments as to the sparsity of the “surface / near surface / wherever data.”
No stations for 85% of the earth surface; few stations in the Arctic.
I seem to recall from some years back when I started following the climate horror story at Tech Central Station, reading an account, that said that around the turn of the century; which at that time meant 19 going on 30, there was something like 76 “weather stations ” in the Arctic, AKA > +60 deg. lat , but that number had grown to a bit less than 100, and stations had moved; but with the collapse and implosion of the Soviet Union, the number of Arctic weather stations had dropped to about 12 today.
Those numbers may not be the right ones, but it was that order of a progression that I recall.
But it is actually much worse than Dr. Ball suggests; because most of that ” data ” is not data at all, but simply noise.
The assumption of sampled data theory, is that a multivariable continuous but band limited (in all variables) function, can be represented by a set of ” snapshots ” of the state of the function; the contents of that snapshot being the value of each variable at the point of recording the snapshot.
Now time is not necessarily a variable in all sampled data systems; but it certainly is one of the most common variables.
In the case of global weather which integrates over time to climate, the independent variables are essentially time and position.
So a ” datum ” of this two independent variable continuous function consists of the value of the dependent variable ( maybe Temperature ) at a specific instant of time for ALL sample values of the spatial variable (measuring locations), and also for each measuring station, is needed a reading for each value of the time variable.
That means you must get simultaneous readings for all sampled locations all at the same time, and for all stations their readings all need to be made at the same set of time intervals.
If you think about the ” weather ” in your back yard, and the weather over at your uncle’s house at the same instant, it is differences in the weather variables such as Temperature, that will determine from the laws of physics, just what energy exchanges are going on between those locations. Is the wind blowing from your house over to his, or verse vicea.
If you don’t both look simultaneously, it is meaningless. So your uncle got rained on last week , but today you got some sprinkles. Can’t tell a damn thing about what is happening from that. Your house may have burned down since last week, so rain today isn’t going to help you.
A photograph is a map that shows you everything that is in a certain physical space all at the same moment. A video will show you a sequence of photographs each showing everything that is wherever, all at the same instant when the frame was recorded.
So this hodge podge of each station recording a Temperature whenever they feel like it, so long as they get two numbers some time each day, is simply garbage; it isn’t a video of anything. It’s more like the neighborhood garbage dump with all kinds of bric a brac at a land fill, with nobody knowing where all the junk came from or when it was dumped at the land fill.
Prof John Christy reported in Jan 2001 the results of some simultaneous ocean water and ocean near surface air temperatures ( from about -1 m and + 3 m respectively ) over about 20 years from some ocean buoys, and found that water and air Temperatures aren’t the same, and they aren’t correlated so you can’t regenerate one from the other.
So prior to about 1980, none of the ocean water based temperature numbers are of any use for plotting on a global lower troposphere map; it’s just garbage at a land fill.
So it is time that all these otherwise unemployable statisticians, started boning up on the general theory of sampled data systems, and then try to comply with those rules.
g

Alan the Brit
Reply to  Salvatore Del Prete
September 15, 2015 12:12 am

The problem is this: because they are in the public employ, paid by taxpayers’ generous funding, they are answerable to no one, therefore responsible to no one, therefore they carry on as they do. Until the money is withdrawn, they continue. The system is geared up so that they will be allowed to take early retirement on great pensions, or allowed to move sideways into less prominent positions, cozy, cushy, stress free, with the only riposte available to them, “We did what we thought was right, based on the best available science!” The really big lie!

Reply to  Salvatore Del Prete
September 15, 2015 3:06 pm

It doesn’t matter that the data is sparse and inaccurate, because even if the data were perfect, the computer would convert the numbers into an approximation called floating-point arithmetic, and once the numerical representation is corrupted in the most minute way, you fly headlong into chaos.
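A minimal sketch of that point, using the textbook logistic map rather than any actual climate model code (the map, its parameter, and the starting value are all assumptions for illustration): the identical iteration carried out in 32-bit and 64-bit floating point drifts apart within a few dozen steps purely from rounding.

```python
# Illustration only: rounding error alone makes two runs of the same chaotic
# iteration (the logistic map, x -> r*x*(1-x)) diverge, one computed in 32-bit
# and one in 64-bit floating point from the identical nominal starting value.
import numpy as np

r = 4.0
x32 = np.float32(0.2)   # single precision
x64 = np.float64(0.2)   # double precision

for step in range(1, 61):
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    x64 = r * x64 * (1.0 - x64)
    if step % 10 == 0:
        print(f"step {step:2d}: float32={float(x32):.6f}  "
              f"float64={float(x64):.6f}  diff={abs(float(x32) - x64):.2e}")
```

By the last few steps the two precisions give completely unrelated values, even though the equation and the nominal starting point are identical.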

catweazle666
Reply to  Salvatore Del Prete
September 16, 2015 4:54 pm

It doesn’t matter what data they put in, the models will always fail.
Anyone who claims that an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.
Ironically, the first person to point this out was Edward Lorenz – a climate scientist.
You can add as much computing power as you like, the result is purely to produce the wrong answer faster.
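For anyone who wants to see Lorenz’s result first-hand, here is a minimal, self-contained sketch (the classic Lorenz 1963 system with its standard parameters, integrated with a basic fixed-step Runge-Kutta scheme; a toy demonstration, not a weather or climate model): two trajectories that start one part in a million apart become completely uncorrelated within a few tens of model time units.

```python
# Illustration only: extreme sensitivity to initial conditions in the Lorenz
# (1963) system, integrated with a simple fixed-step fourth-order Runge-Kutta.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    k1 = lorenz(state)
    k2 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-6, 1.0, 1.0)   # identical start, perturbed by one part in a million
dt, steps = 0.01, 3000       # 30 model time units

for i in range(1, steps + 1):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if i % 500 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * dt:5.1f}   separation = {sep:.3e}")
```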

September 14, 2015 9:59 am

Reblogged this on WeatherAction News and commented:
When model results are used as the sole basis for government policy, there is no value. It is a massive cost and detriment to society

Resourceguy
September 14, 2015 9:59 am

The Great Wall of China was a similarly scaled failure of trust and public policy.

Bill 2
September 14, 2015 10:07 am

Dr. Ball, why is your twitter account a spambot? https://twitter.com/pplonia

Reply to  Bill 2
September 14, 2015 10:55 am

My wife set it up and operates it. Thanks for advising of the problem, we will get it corrected. Thanks

Reply to  Bill 2
September 14, 2015 11:09 am

We use Apple and Spambot is not supposed to be a problem, although some are now reporting the problem. Will continue to resolve the problem.

KTM
September 14, 2015 10:10 am

“Remove that funding and nobody would spend private money to work on climate forecast models.”
We should follow the Human Embryonic Stem Cell model for funding this. When the federal government reinforced its moratorium, California stepped forward with $30 billion to fund the research, saying it would make California the world leader for research and innovation in the area.
California is already vowing to spend billions and inflict billions more dollars of damage to their own economy to support and promote the global warming ideology. Let them pick up the banner and devote a few billion to modeling.
As a note about the whole HESC issue, that work is now all scientifically obsolete due to the development of IPS cells that have no ethical concerns and much more clinical promise.

MarkW
Reply to  KTM
September 14, 2015 10:24 am

There was never a federal moratorium on Human Embryonic Stem Cell research.
There was a ban on federal funding for research using any embryonic stem cell lines created after a certain date. Funding for stem cell lines created prior to that date was always allowed.

September 14, 2015 10:11 am

The IPCC Figure 4 (the first graph in this post) has a problem in that new measurements are being obscured by the (Observations) dialogue box … this setup will remain in effect until ~2035.

Crispin in Waterloo
September 14, 2015 10:13 am

The Canadian model living virtually on Vancouver Island is an embarrassment to science and industry. Its predictions are laughable, sophomoric. It is ‘kept’ to keep up the IPCC’s ‘average’ predictions. We are paying for this junk science. Eliminating (defunding) the worst climate model each year would bring a sense of rigour to the forecasting industry and a sensitivity number below 1 degree per doubling of CO2.
If any P.Eng ran their operations like that they would be defrocked. Instead, the fabricators of junk climate science are canonized.

September 14, 2015 10:18 am

yes

September 14, 2015 10:20 am

I think the issue with climate models is that they are tuned to hindcast the past (from 2005 IIRC for the CMIP5 models) as best as they can, with the assumption that none of the rapid warming from the early 1970s to 2005 or so is from any natural cycles other than changes in volcanic emissions. If a contribution to that warming period by multidecadal cycles is determined and accounted for, and the models retuned accordingly, then I think they will become accurate.

September 14, 2015 10:25 am

Regarding: “It has a 1.1 petaflops capacity. FLOPS means Floating-Point Operations per Second, and peta is 1016 (or a thousand) million floating-point operations per second.” A petaflop is 10^15 floating point operations per second.

urederra
Reply to  Donald L. Klipstein
September 14, 2015 12:27 pm

I might be wrong, but that is not capacity. It is speed or potency.

george e. smith
Reply to  Donald L. Klipstein
September 14, 2015 2:03 pm

Peta is a thousand trillion; that is with a T not a thousand million with an m or even with a B.
g
Why did I always think that the climate models started with the known laws of physics; including the supposition (which becomes obvious every morning, when the sun rises in the east) that the earth rotates on its axis every 24 hours or so ??
just asking !

MarkW
Reply to  Donald L. Klipstein
September 14, 2015 2:53 pm

I’m pretty sure it’s 1024 million flops, not 1016. 2 to the 10th power is 1024.

Reply to  MarkW
September 15, 2015 4:02 pm

it’s 1*10^15 flops, your pc is probably hitting 1024 million flops, 10^9 flops, 4 cores doing 4 flops/ cycle at 2.4Ghz.. I believe that it’s fairly straight forward to download a GCM, compile it and run one on your PC, it would just run really slow.

Sam Prather
September 14, 2015 10:26 am

This is an excellent article!! One minor correction: a petaflop is a quadrillion (thousand trillion) floating point operations per second (FLOPS) which is a thousand teraflops, or 10 to the 15th power FLOPS.

MarkW
Reply to  Sam Prather
September 14, 2015 2:54 pm

When the 8080 first came out, it’s clock speed was around 100KHz, at it took several seconds to do a single FLOP.

neutronman2014
Reply to  MarkW
September 14, 2015 4:11 pm

(When the 8080 first came out, it’s clock speed was around 100KHz, at it took several seconds to do a single FLOP.)
Mark, the 8080 was the successor to the 8008. It ran at a 2 MHz clock speed and performed close to 300,000 flops per second. Microprocessor architecture and bus width were different in those early chips and it is difficult to compare their operation with today’s designs.

Reply to  MarkW
September 14, 2015 8:16 pm

Neutronman. An 8080 had no native floating point abilities whatsoever. You had to code them yourself, and a floating point divide was a brute of a thing.

JoeJ
Reply to  MarkW
September 14, 2015 8:38 pm

Not sure why I can’t reply to neutronman2014, but you are both about 3 orders of magnitude off, but in opposite directions. The 8080 did not have any floating point instructions, so required use of a software library. A single-precision add or subtract took about 0.7 milliseconds, multiply was about 1.5 msec, and divide took about 3.6msec. (that last one would be nearly 278 FLOPS!).
Times are taken from the Intel 8080/8085 Floating Point Arithmetic Library User’s Manual, Appendix C.

MarkW
Reply to  MarkW
September 16, 2015 11:18 am

Nuetronman2014, the 8008 was the successor to the 4004, which was built to run a 4 function calculator.

MarkW
Reply to  MarkW
September 16, 2015 11:20 am

Leo, the 8080 had the ability to do add and subtract directly. There was no native multiply or divide op-code, you had to build up those functions from the add and subtract functions.

george e. smith
Reply to  Sam Prather
September 15, 2015 1:06 pm

Where do people come up with these numbers? I seem to recall that the original IBM PC had a microprocessor with a 4.7 MHz clock frequency.
Before there was an 8080 or an 8008 or a 4004, there were already calculators that could do floating point math; even the cordic algorithm of the HP 35 hand held calculator.
I used to use a Wang calculator, that did multiplication using logs (1965). So it had all the math functions that were on the HP-35 in a weird desktop package. It had a weird magnetic core ROM made by stringing a bunch of wires through (or not through) a stack of ferrite cores. The sequence of cores that a wire went through or bypassed determined the 1-0 pattern of the word stored on that wire.
I never was able to determine whether Wang had figured out the cordic algorithm or not, because I never could find any literature that described that process, until it appeared in the HP-35 (and in the HP Journal).
g

MarkW
Reply to  george e. smith
September 16, 2015 11:25 am

The first IBM PC used the 8086, two generations past the 8080.
The 4004 was the first chip designed specifically for a calculator, and it was a simple 4 function calculator with no memory.
The HP-35 was introduced in 1980, which is about 5 years AFTER the 8080 came out.

chapprg1
Reply to  Sam Prather
September 20, 2015 2:26 pm

Although Dr. Ball was too polite to say it succinctly, more petaflops means “garbage in, garbage out faster”.

F. Ross
September 14, 2015 10:35 am

Excellent article Dr. Ball.

Jim
September 14, 2015 10:37 am

The $ value of being able to predict the growing season weather a few months in advance is an astronomical number.

Jimbo
September 14, 2015 10:39 am

No comment.

Abstract
The Key Role of Heavy Precipitation Events in Climate Model Disagreements of Future Annual Precipitation Changes in California
Climate model simulations disagree on whether future precipitation will increase or decrease over California, which has impeded efforts to anticipate and adapt to human-induced climate change……..Between these conflicting tendencies, 12 projections show drier annual conditions by the 2060s and 13 show wetter. These results are obtained from 16 global general circulation models downscaled with different combinations of dynamical methods…
http://dx.doi.org/10.1175/JCLI-D-12-00766.1

Fred Singer: “Successive IPCC summaries have claimed increasing certainty [from 50% in 1996, rising to >95% in 2013] about a human cause of global warming — even as the disparity between observations and IPCC models continues to grow year by year –now for more than 18 years. This is becoming somewhat ridiculous…
http://www.energyadvocate.com/gc1.jpg

Alx
Reply to  Jimbo
September 15, 2015 1:29 pm

Well obviously that is why the climate models are 100% correct. Since they forecast all possibilities they can never be wrong.
In other words, I bet any climate modeler $100 that an independently supervised coin flip will be either heads or tails.

Jimbo
Reply to  Alx
September 16, 2015 4:32 am

Some people may not take what you said seriously, but it is a truism. Every time we get a result that contradicts the narrative they say ‘but the models predicted it.’ Well of course they did!

September 14, 2015 10:41 am

Well, it’s one thing to argue that studying the climate through models should be discontinued because it hasn’t produced verifiable predictions yet (or that the predictions the models have produced have failed verification). Problem is, as soon as you start torturing your reasoning by conflating climate modelling and weather forecasts, you’ve lost all credibility for the rest of your argument.

Reply to  bregmata
September 14, 2015 10:53 am

No you haven’t

Reply to  bregmata
September 14, 2015 11:44 am

bregmata, you beg the question.
Climate can only be more predictable than the weather it is composed of if, and only if, an understandably small number of factors dominate the weather over the long-term.
That’s never happened in the past. So why believe it will happen now? We’ve had forest fires and volcanos and CO2 hasn’t dominated.
You need to justify why you believe climate will be simpler than the weather that happens.

Reply to  MCourtney
September 14, 2015 8:12 pm

Climate forecasting vs. weather forecasting: Weather forecasting is like giving a microsecond-by-microsecond forecast of the states of the switching transistors in a class D amplifier. Climate forecasting is like predicting the duty cycle of the switching transistors as a function of input voltage and component changes. This means that a 50-year climate forecast should be much easier than a one-month day-by-day weather forecast. The biggest problem I see now with climate forecasting (such as with a composite of many models) is tuning the models to hindcast the past, with an incorrect understanding of the amounts of past temperature change caused by each of several factors, such as changes of manmade aerosols, volcanic emissions, greenhouse gases, any cloud effects from changes of solar activity such as indirectly through change of cosmic rays, other effects from changes of solar activity, and multidecadal oscillations. Using an incorrect understanding of how much past warming was due to increase of greenhouse gases leads to incorrect determinations of at least some of the feedbacks. Another issue is incorrect consideration (or none at all) of the feedback magnitudes changing with temperature and/or greenhouse gas presence.

Peter Sable
Reply to  MCourtney
September 15, 2015 4:13 am

The biggest problem I see now with climate forecasting (such as with a composite of many models) is tuning the models to hindcast the past, with an incorrect understanding of the amounts of past temperature change caused by each of several factors, such as changes of manmade aerosols, volcanic emissions, greenhouse gases, any cloud effects from changes of solar activity such as indirectly through change of cosmic rays, other effects from changes of solar activity, and multidecadal oscillations.

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk. – John Von Neumann.
You have substantially more than 5 variables on your list. None of which we can measure very accurately.
From an information theory viewpoint the entire idea of climate modeling is doomed from the start…
Peter

Reply to  MCourtney
September 16, 2015 10:33 am

I can not predict where an individual bird will fly. I can not even predict the exact location where a skein of geese will fly on a given day for each of the next 5 days. I can, however, predict the pattern that the geese will fly south each fall and return each spring.
I’m not saying computerized climate models are right (or wrong). I’m saying that to argue they’re wrong because some other unrelated computerized models do not give completely accurate predictions is just a straw man, and a very poor one that in this case only taps into people’s deep-seated ignorance of the fundamental concepts involved (ie. confounding climate and weather).
One certainly does not do rational argument any favours by demonstrating one’s preference for avoiding it.

RACookPE1978
Editor
Reply to  bregmata
September 16, 2015 11:27 am

bregmata

I’m not saying computerized climate models are right (or wrong). I’m saying that to argue they’re wrong because some other unrelated computerized models do not give completely accurate predictions is just a straw man, and a very poor one that in this case only taps into people’s deep-seated ignorance of the fundamental concepts involved (ie. confounding climate and weather).

But, consider the following. EVERY YEAR since 1988, 23 of 23 “geese flying models” have predicted the geese will fly southeast towards the Azores Islands and Bermuda Islands, winter there over the cold months, then fly back to the north. And, every fall, the geese actually fly south-southwest towards the TX and Mississippi and Louisiana gulf coast swamps.
Now, should we discount those “geese flying models” models just because they are “consistently” and “almost right” and spend 92 billion dollars to bring our guns and cameras to the Azores to look for geese? the geese did, after all, on average “fly south.”

MarkW
Reply to  MCourtney
September 16, 2015 11:28 am

Weather models can ignore most of the problems that break the climate models.

MarkW
Reply to  bregmata
September 14, 2015 2:55 pm

Not that tired old line again? Sheesh, can’t you guys come up with new material?

chapprg1
Reply to  MarkW
September 20, 2015 2:34 pm

Global climate models disagree among themselves by 600% and have not improved in 27 years of gargantuan, not to mention incredibly expensive, efforts. Is this not prima facie evidence that further progress might not be expected?
Dr Ball details the reasons why.

Alx
Reply to  bregmata
September 15, 2015 1:56 pm

Yes we know the story, weather is not climate and climate is not weather, except when an individual catastrophic weather event like Sandy is used as proof of climate change.
You can play the same game with forecasts by calling it trending, projections, forecasts, future cast anomaly patterns, or whatever. It doesn’t matter what you call it or the context; a prediction is a prediction that can be proven accurate or not, and in the case of climate models they are consistently wrong.
How they are consistently wrong is itself an issue. That climate models have wildly varying results but yet deliver a consistent warming result only proves built-in biases. If there were no biases the trends would be wrong both above and below reality.
Regardless, even if modelers got lucky and their predictions or projections actually occurred they are still highly suspect due to lack of understanding of what they are modeling and the uncertainty and scarcity of global data.

September 14, 2015 10:45 am

“peta is 1016 (or a thousand) million floating-point operations per second”
Actually a petaflop is a million billion floating-point operations per second, not a “thousand million” (which is a billion). The Gaea supercomputer that NOAA uses can do 1.1 petaflops, which is 1.1 million times faster than a “thousand million” (or billion).
1,000,000,000 is a billion (“giga”)
1,000,000,000,000 is a trillion (“tera”) or 1,000 billion
1,000,000,000,000,000 is a quadrillion (“peta”) or 1,000,000 (a million) billion
1,000,000,000,000,000,000 is a quintillion (“exa”) or 1,000,000,000 (a billion) billion
https://en.wikipedia.org/wiki/Names_of_large_numbers

MarkW
Reply to  Lauren R.
September 14, 2015 2:57 pm

1024, not 1016

Magma
September 14, 2015 10:46 am

[Fake email address. ~mod.]

Reply to  Magma
September 14, 2015 11:47 am

What about this article?
Can you find a fault with this?
Or can we assume that he is now so expert you can’t find any fault with his current work?
Friendly advice. The science isn’t Dr Ball’s weakness. Attack his political understanding if you want to have a go.

Magma
Reply to  MCourtney
September 14, 2015 1:50 pm

[Fake email address. ~mod.]

Stephen Richards
Reply to  Magma
September 14, 2015 12:25 pm

So bloody what. What is your point? How does it refer to this post?

Jimbo
Reply to  Magma
September 14, 2015 1:01 pm

Magma, forget Tim Ball and his expertise. How accurate have the IPCC surface temperature projections been? FAIL. That is the only qualification one needs, the ability to see projections vs. observations.
Here is someone with extensive model training and teaching. He is even more stinging than Ball.
http://wattsupwiththat.com/2014/10/06/real-science-debates-are-not-rare/
Here is a friend.

Abstract – 1994
Naomi Oreskes et al
Verification, validation, and confirmation of numerical models in the earth sciences
Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic…….
In some cases, the predictions generated by these models are considered as a basis for public policy decisions: Global circulation models are being used to predict the behavior of the Earth’s climate in response to increased CO2 concentrations;…….
Finally, we must admit that a model may confirm our biases and support incorrect intuitions. Therefore, models are most useful when they are used to challenge existing formulations, rather than to validate or verify them. Any scientist who is asked to use a model to verify or validate a predetermined result should be suspicious.
http://acmg.seas.harvard.edu/students/Oreskes_1994.pdf

4 eyes
Reply to  Magma
September 14, 2015 3:13 pm

Magma,
Attack and dispute the article, not the author. I used to think AGW was something to worry about, but about 15 years ago it became apparent the science was not rigorous. And things haven’t improved despite all the money spent. If it was your money, wouldn’t you occasionally audit the progress being made? And if no progress had been made, I am dead sure you would consider withdrawing support.

Svend Ferdinandsen
September 14, 2015 10:51 am

The problem with lack of data is made worse by the fact that the data available is constantly adjusted to show warming. If you base a climate theory on fabricated warming, you get a useless theory.
Most papers try to correlate the warming with observations in nature, but when there is hardly any warming the results point in all directions. Try to correlate anything with a flat line that just wiggles a bit.

Bulldust
Reply to  Svend Ferdinandsen
September 14, 2015 6:33 pm

This aspect has been bothering me for years. The fact that the input data are constantly being revised means previous work is in need of review. I don’t see how you can take climate science seriously if the core data are in question. Another thing I find hilarious, is the desire to model ever smaller cells in the models, despite the temperature data being tortured to represent areas orders of magnitude larger than those modelled. I mean if you want to make pretty pictures, just run a simple Mandelbrot program. Essentially that is all these multi-billion dollar projects are… an expensive way of making abstract graphics that have no practical use whatsoever, unless you count their use in pushing policy agendas.

September 14, 2015 11:02 am

Reblogged this on The Arts Mechanical and commented:
Using computers to model anything chaotic is a crap shot at best. A computer model is at best a guess on boundary conditions and chaotic turbulent systems don’t behave in a linear boundary contained fashion.

KTWO
September 14, 2015 11:04 am

The increasing confidence noted in Figure 3 indicates that by 2020 they should be ‘Absolutely Sure’. And once AS is achieved much bigger budgets will be needed as it becomes harder to become even more sure.
Being able to accurately forecast the growing season would be of great $ value if it can be done. If not, not.

Gamecock
September 14, 2015 11:07 am

“Short, medium, and long-term climate forecasts are wrong more than 50 percent of the time so that a correct one is a no better than a random event.”
With all due respect, Dr. Ball, a short-term climate forecast seems an oxymoron.

Berényi Péter
September 14, 2015 11:17 am

Computational general circulation climate models are clearly nothing but a waste of time and money, but they are actually worse than that. They are diverting attention and resources from fundamental scientific questions and attract the wrong kind of minds to the field, who are unable to do experimental work to solve basic riddles and whose lack of understanding prohibits any further progress.
We have a fairly comprehensive understanding of reproducible non equilibrium thermodynamic systems. Unfortunately the terrestrial climate system does not belong to this class. It is not reproducible in the thermodynamic sense, i.e. microstates belonging to the same macrostate can evolve to different macrostates in a short time. Of course, it is nothing, but the other side of the coin called “chaos”, a.k.a. “butterfly effect”.
Such systems are not understood at all, their Jaynes entropy can’t even be defined.
If the climate system were reproducible, it would lend itself to the Maximum Entropy Production Principle. However, it clearly does not.
As energy exchange between the climate system and its cosmic environment goes exclusively by electromagnetic radiation, one hardly has to do more than to count incoming vs. outgoing photons, with a small correction for the different angular distribution of their momenta and deviations of their spectra from that of pure thermal radiation, to calculate net entropy production of the system. Turns out it would be very easy to increase entropy production by making Earth darker, that is, by decreasing its albedo.
However, Earth’s albedo is what it is, the planet is very far from being pitch black. In spite of this, albedo is strictly regulated, only its sweet spot is different. Due to a peculiar property of Keplerian orbits, annual average insolation of the two hemispheres is the same. It is a curious fact, that annual average reflected shortwave radiation is also the same though, which can’t be explained by simple astronomy. It is an emergent property of the climate system, for clear sky albedo of the Southern Hemisphere is much lower due to prevalence of oceans there, still, its all sky albedo is the same. The difference is, of course, due to clouds.
This simple symmetry property is neither understood nor replicated by computational general circulation climate models.
Under these circumstances any rational scientist would leave the climate system alone for a while, go back to the lab and study the behavior of non reproducible (chaotic) non equilibrium thermodynamic systems in general, until a breakthrough is achieved.
Plenty of such systems would fit into a lab happily, the terrestrial climate system being an exception in this respect.
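A crude, hedged sketch of the photon-counting argument above: it uses standard textbook values for the solar constant and the effective solar temperature, the usual 4/3 blackbody entropy-flux factor, and makes no correction for angular distribution or spectral dilution; none of these numbers come from the comment itself.

```python
# Crude illustration only: estimating net radiative entropy production for a
# grey planet using the standard 4/3 * F / T entropy flux of thermal radiation.
# No correction is made for angular distribution or spectral dilution.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0          # solar constant, W m^-2 (assumed textbook value)
T_SUN = 5778.0       # effective solar emission temperature, K (assumed)

def entropy_production(albedo):
    absorbed = S0 / 4.0 * (1.0 - albedo)        # global-mean absorbed flux, W m^-2
    t_emit = (absorbed / SIGMA) ** 0.25         # effective emission temperature, K
    s_in = (4.0 / 3.0) * absorbed / T_SUN       # entropy delivered by sunlight
    s_out = (4.0 / 3.0) * absorbed / t_emit     # entropy exported as thermal IR
    return s_out - s_in                         # net production, W m^-2 K^-1

for a in (0.30, 0.20, 0.10):
    print(f"albedo = {a:.2f}   entropy production ~ {entropy_production(a):.2f} W m^-2 K^-1")
```

Even this crude estimate shows entropy production rising as albedo is lowered, which is the point being made above: the observed albedo sits nowhere near the value that would maximize entropy production.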

Reply to  Berényi Péter
September 14, 2015 2:34 pm

BP, Feynman did that for a significant portion of his career (middle third). Lectures on Physics, V2, Chapter 41, delightfully titled the Flow of Wet Water. (Insider joke, chapter 40 is the Flow of Dry Water, only omitting a simple small ‘detail’–viscosity (see paragraph 1 of V2:41 for confirmation). The last paragraph of this very famous chapter is also known as his sermon on the Mysteries of Mathematics. As famous in some circles as the Sermon on the Mount. We await the next Feynman.
And, both V2 chapters should be required reading for every climate modeler. To their shame.
Just a little historical context. None of this is new. Lectures is Caltech Physics 101, 1961-1962. (and of course, much more…Feynman’s then statement about everything he understood about everything then existing in physics). Pity the poor CalTech freshman and sophomores then. It is said that by the second year, only genius undergrads, grad students, and post docs were following along. The other CalTech physics professors were too terrified to show up. Lectures is also the origins of his aphorism, ‘if you cannot explain it simply, then you do not understand it yourself’.
Regards

Victor
September 14, 2015 11:18 am

It doesn’t matter how much money they sink into their high tech crystal ball, it still just shows what the developers want to see.

September 14, 2015 11:20 am

The IPCC AR5 report itself has graphics showing the divergence of climate models from measurements.
From the AR5 Technical Summary, pg. 64:
http://www.climatechange2013.org/images/figures/WGI_AR5_FigTS_TFE.3-1.jpg
And detail of figure 1-4:
http://www.climatechange2013.org/images/figures/WGI_AR5_Fig1-4.jpg
When a climate alarmist tells you the climate models are accurate, point them to these graphics from the IPCC’s own report. A picture is worth a thousand words.

SAMURAI
September 14, 2015 11:22 am

What do you mean, Dr. Ball?
Adjusted hindcasting and raw-data manipulation show the models are working great!
Everyone knows empirical evidence must be adjusted to match CAGW hypothetical projections, because the empirical evidence is obviously off, because 97% of scientists agree the models are correct.
How can you fault that logic?
Always remember that war is peace, freedom is slavery and ignorance is strength, because the consensus says so.
Again, how can you fault that impeccable logic?

David in Cal
September 14, 2015 11:35 am

“The Poles are critical in the dynamics of driving the atmosphere and creating climate”
Doubly true, as Poland often describes EU climate policy as “costly, overambitious, unrealistic in terms of targets, and disproportionally burdensome for the Polish economy.”

urederra
Reply to  David in Cal
September 14, 2015 12:35 pm

And the Pole living down at the south is increasing his ice extent.

MarkW
Reply to  urederra
September 16, 2015 11:30 am

I prefer the guy living at the north pole.

Gary Pearse
September 14, 2015 11:36 am

Actually I think the models could do better than 40% if they just accepted that their CO2 theory of warming is incorrect. They know it and they won’t change it. They must see that their ceteris paribus CO2 theory of warming is countered by some negative feedback phenomenon, even if they don’t know what it is. Consider the virtually universal Le Chatelier Principle (initially identified for chemical reactions and later found to be much broader; for all intents and purposes it is a law): if an agent perturbs a system, the system changes in such a way as to resist the change. It doesn’t succeed in stopping the change, but less change happens.
This works in the test tube but also anticipates broader phenomena: Newton’s laws of motion, back-EMF in a motor, supply-and-demand price behavior (push up the price and demand falls, or push up the price and supply increases while demand is falling, causing the price to fall again if the supply increase goes too far or is badly timed), friction opposing motion, heating ice water (the ice has to disappear before the temperature begins to go up), and the buffering of ocean ‘acidification’ (adding CO2 readjusts the equilibrium of the carbonate reactions to resist the reduction in pH). We are having the same kind of fun with this looming ‘disaster’.
I think it suitable to add at least an unknown negative feedback to depress the ceteris paribus CO2 doubling effect to what it appears in fact to be: about 1 C per doubling. It would certainly bring the models into better congruence with the observational records. What is wrong with this idea? It still wouldn’t work forever in a chaotic system, but it might work for a couple of decades with some skill. The idiocy is doing the same thing over and over and expecting something different; I think Einstein weighed in on this practice in an unkindly way.
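The arithmetic behind that suggestion can be sketched with the usual linear-feedback relation. The short Python snippet below is illustrative only; the no-feedback doubling response of about 1.1 C and the feedback fractions are assumed values, not numbers taken from the comment above.

DT_NO_FEEDBACK = 1.1    # approximate no-feedback (Planck-only) response to 2xCO2, C (assumed)

def doubling_response(feedback_fraction):
    # Equilibrium warming per CO2 doubling with net feedback fraction f (f < 1).
    return DT_NO_FEEDBACK / (1.0 - feedback_fraction)

for f in (0.65, 0.0, -0.15):    # strongly positive, none, modestly negative net feedback
    print(f"net feedback f = {f:+.2f}: ~{doubling_response(f):.1f} C per doubling")

With a modestly negative net feedback the response drops to roughly 1 C per doubling, the figure suggested above; with the strongly positive value it lands near the model-typical 3 C.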

Stuart Jones
Reply to  Gary Pearse
September 14, 2015 6:32 pm

If they were to adjust the CO2 factor from 3-point-something down to 1, then the models would be closer to reality, but that would mean that CO2 has no effect on the climate. Has anyone done this? Taken just the estimated CO2 feedback out of the model results and seen where the “predictions” end up after that? Simplistic, I know, but just a back-of-an-envelope estimation shows that the output of the models would come down in temperature, closer to the actual temperature, and therefore would be more useful. They may even be right; after all, those billions of dollars must have produced some sort of theoretical model that may have some usefulness (minus the CO2 fudge factor).
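A crude version of that back-of-the-envelope exercise, assuming a simple linear rescaling of model warming by the ratio of sensitivities; the anomaly numbers below are made up for illustration, not actual model output:

MODEL_SENSITIVITY = 3.0      # assumed effective warming per CO2 doubling in the model, C
ASSUMED_SENSITIVITY = 1.0    # the lower value suggested in the comments above, C

# Hypothetical model-projected anomalies (C), one per decade, for illustration only.
model_anomalies = [0.3, 0.6, 0.9, 1.2, 1.5]

rescaled = [round(t * ASSUMED_SENSITIVITY / MODEL_SENSITIVITY, 2) for t in model_anomalies]
print("model-projected anomalies:", model_anomalies)
print("rescaled to 1 C/doubling: ", rescaled)

The rescaling is only a rough proxy for actually removing feedback terms from a model, but it shows the direction and size of the shift being asked about.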

Reply to  Gary Pearse
September 14, 2015 8:28 pm

I did not hear of LeChatelier’s name in my econ course or my physics courses, but I did in my chemistry course. Meanwhile, Earth’s feedbacks to a climate forcing can be positive and were for surges and ebbings of ice age glaciations. However, once the Earth is so ice-covered or so free of snow and ice that a temperature change does not cause significant albedo change, then the total feedback is less positive (or more negative). Also, the lapse rate feedback (a negative one) gets more negative as the Earth gets warmer or greenhouse gases increase.

chapprg1
Reply to  Gary Pearse
September 20, 2015 3:19 pm

What ‘unknown’ negative feedback? Any forcing of a rise in surface temperature will increase water vaporization. Nature in her infinite wisdom utilizes water vapor to lift heat energy upward to altitudes where water vapor can radiate energy to space. In spite of the random chaotic processes which have entertained physicists for decades, nature has used water vapor to provide a NET cooling. We know this is true since, aside from the direct IR window radiation to space (also a negative feedback), there is no other physical phenomenon available for natural processes to rid the planet of the balance of the solar energy being absorbed. Thus evaporated water vapor provides a NET cooling. This is by definition a negative feedback to any rise in surface temperature. To assert that some increment of extra water vapor would result in positive feedback is ludicrous. Nature has already figured this out, since it is the only mechanism for ridding the planet of any excess heat from whatever forcing source.

climatologist
September 14, 2015 11:48 am

In any case, trying to forecast the behavior of three coupled chaotic systems (atmosphere, ocean, sun) is doomed to failure.

MarkW
Reply to  climatologist
September 14, 2015 3:04 pm

I like to call them the 5 spheres.
Atmosphere
hydrosphere
lithosphere
biosphere
cryosphere
Unless your model can successfully juggle all 5 spheres and the constantly changing interactions between them, it is useless.

Gamecock
Reply to  MarkW
September 14, 2015 4:58 pm

What about that sphere 93 million miles away?

Dawtgtomis
Reply to  MarkW
September 15, 2015 8:14 am

Heliosphere (#6)

Dawtgtomis
Reply to  MarkW
September 15, 2015 8:22 am

magnetosphere very possible also, MarkW.
(hmm… the ‘magic number’ seven?)

Dawtgtomis
Reply to  climatologist
September 15, 2015 8:18 am

#6 -The big sphere that contains all the other spheres.

September 14, 2015 12:01 pm

No. Just no. This site needs to stop giving untrue information to its readers. It says:

People were shocked by the leaked emails from the Climatic Research Unit (CRU), but most don’t know that the actual instructions to “hide the decline” in the tree ring portion of the hockey stick graph were in the computer code.

Most people don’t know that because it is basically untrue. The HARRY_READ_ME file had nothing to do with the hockey stick graph. It was all about the CRU TS2.1/3.0 data set. Despite that, something like half the text in the link offered is about that file.
Other text is like:

FOIA\documents\osborn-tree6\mann\mxdgrid2ascii.pro
printf,1,’Osborn et al. (2004) gridded reconstruction of warm-season’
printf,1,’(April-September) temperature anomalies (from the 1961-1990 mean).’
printf,1,’Reconstruction is based on tree-ring density records.’
printf,1
printf,1,’NOTE: recent decline in tree-ring density has been ARTIFICIALLY’
printf,1,’REMOVED to facilitate calibration. THEREFORE, post-1960 values’
printf,1,’will be much closer to observed temperatures then they should be,’
printf,1,’which will incorrectly imply the reconstruction is more skilful’
printf,1,’than it actually is. See Osborn et al. (2004).’

Which is to print warning statements about what was done to the data, hardly a damning thing. Especially since some of the code being referenced is for papers which were never published, papers which went to great length to discuss what they did. Here is an excerpt from a draft of one:

Warm-season temperature reconstructions with extended spatial coverage have also been developed, making use of the spatial correlation evident in temperature variability to predict past temperatures even in grid boxes without any tree-ring density data. The calibration was undertaken on a box-by-box basis, and each grid-box temperature series was predicted using multiple linear regression against the leading principal components (PCs) of the calibrated, gridded reconstructions described in section 4.4. The PCs were computed from the correlation matrix of the reconstructions, so the calibration was in effect removed and similar results would have been obtained if the PCs of the raw, gridded density data had been used instead. The only difference is that the calibrated data with the artificial removal of the recent decline were used for the PCA. Using the adjusted data avoids the problems otherwise introduced by the existence of the decline (see section 4), though all reconstructions after 1930 will be artificially closer to the real temperatures because of the adjustment (the adjustment is quite small until about 1960 – Figure 5c). Tests with the unadjusted data show that none of the spatial patterns associated with the leading PCs are affected by the adjustment, and the only PC time series that is affected is the leading PC and then only during the post-1930 period. In other words, the adjustment pattern is very similar to the leading EOF pattern, and orthogonal to the others, and thus only influences the first PC time series.

The draft discussed exactly what changes were made to the data and why, then showed what effect the changes had. There’s nothing dishonest or wrong about that. I personally don’t think the changes were justified, but I could never claim someone is hiding things by telling me what they’re doing and showing me what effect it has.
If you’re going to say “the actual instructions to ‘hide the decline’ in the tree ring portion of the hockey stick graph were in the computer code,” you need to do things like make sure the code you’re talking about is actually for graphs that were published, that it was for hockey stick graphs, and that it was for hiding a decline. Because most of what readers are linked to wasn’t.
In fact, I’m not sure any of it was. I can’t rule it out though. There was a bit of code there I’m not familiar with. So hey, maybe 5% of it does something to support what this post says?

whiten
September 14, 2015 12:04 pm

Tim seems to be engaged lately in a “crusade” to validate the climate models.
The main logical flaw with this is that it is supposed to be the other way around: the climate models exist to validate Tim’s and everyone else’s knowledge of climate and the climate system, including mainstream orthodox climatology and climate knowledge.
Probably somewhere along the line the models hurt Tim’s feelings on the issue of climate change, and I think that is the case with many other so-called sceptics.
Still, Tim, like anybody else, has the right to take any position on this, but nevertheless that does not mean he will be correct or right in his take on this one.
I cannot even begin to contemplate the possibility of climate science and progress in climatology without climate models, but some like Tim seem to have no problem at all with such an unrealistic, regressive and backward position. The only thing I can say is: “good luck to all of you with your unrealistic and ‘blind’ running towards what you may call ‘knowledge’.”
After all, chaos exists in the absence of knowledge.
cheers

Reply to  whiten
September 14, 2015 12:33 pm

Dr. Ball’s position is not regressive or backward. It is simply realistic. Read my illustrated guest post here on this topic from a few months ago. You evidence a severe lack of knowledge about climate models.
1.1 petaflops is 7 orders of magnitude, not 2, below the minimum computing power needed to resolve convective processes with computational fluid dynamics, as is done in weather forecasting. Therefore essential processes like tropical thunderstorms (and therefore water vapor feedback, Lindzen’s adaptive iris and Eschenbach’s albedo governor) cannot be simulated, so they must be parameterized. Until the attribution problem can be resolved (anthropogenic forcing v. natural variation), no approximately correct model parameterization is possible. GCM attribution has been essentially all anthropogenic, except for the erroneously tuned high aerosols used to cool models to get reasonable hindcasts. Which is why all the models run hot now, and likely will for something like another 15 years if the Curry/Wyatt stadium wave, the Akasofu Arctic ice cycle, the PDO, and the AMO are any indication. It will take at least another half cycle, about another 30 years of ARGO and UAH, to begin to untangle attribution. And until then, all funding should be stopped as a provable waste of time and money.
Those resources would be far better spent researching basic empirical climate science and weather forecasting (why the absence of Atlantic hurricanes, why the Arctic ice cycle), energy storage, and 4th-gen nuclear like MSRs (which will need plenty of detailed engineering design simulation).
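The orders-of-magnitude claim can be illustrated with a rough scaling argument. The Python sketch below is my own back-of-envelope illustration, not ristvan's calculation; it assumes the cost of a grid-based atmospheric model grows roughly as the fourth power of the horizontal refinement factor (three spatial dimensions plus a proportionally shorter time step), which is a common rule of thumb.

import math

def relative_cost(coarse_km, fine_km):
    # Rough cost multiplier for refining grid spacing from coarse_km to fine_km,
    # assuming cost ~ (refinement factor)^4: dx, dy, dz and dt all shrink together.
    return (coarse_km / fine_km) ** 4

baseline_km = 100.0    # typical GCM horizontal grid spacing (assumed)
for target_km in (10.0, 1.0, 0.1):
    mult = relative_cost(baseline_km, target_km)
    print(f"{baseline_km:.0f} km -> {target_km:.1f} km grid: ~{mult:.0e}x more compute "
          f"(~{math.log10(mult):.0f} orders of magnitude)")

Going from a ~100 km GCM grid to convection-resolving scales of about 1 km or finer comes out at eight or more orders of magnitude under these assumptions, in the same general territory as the seven orders cited above.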

Reply to  whiten
September 14, 2015 4:17 pm

Maybe I misunderstood, Whiten. Did you say that we can’t learn to predict the future if we don’t compare our predictions with chicken entrails?

Reply to  whiten
September 14, 2015 8:39 pm

I think you are failing to distinguish prognostic models from process models. As far as I can tell, Dr. Ball’s complaints are mostly if not entirely about prognostic models, and your assertions are, ISTM, most likely to be about process models. If not, then ISTM that your objection to Dr. Ball’s “crusade” is mere assertion, to which you are of course entitled, but which may also be dismissed by mere assertion.
BTW, I think Dr. Ball has given the prognostic models an undeserved pass on their poor hindcasting, but I do know that one can only write about so much at one time.

MarkW
Reply to  whiten
September 16, 2015 11:32 am

In your opinion, it’s better to have models that are always wrong, than to have no models at all?

Marcuso8
September 14, 2015 12:12 pm

When the climate STOPS changing , THEN we should start worrying !!!!

September 14, 2015 12:16 pm

Just a minor point:
By law the BBC is required from time to time to renew the contracts that it has for all the services that it uses, and this time another forecasting company undercut the Met Office offer.
I understand, though, that the new forecasting company will still be using the information from the Met Office computer to make their forecasts.

Stephen Richards
Reply to  Oldseadog
September 14, 2015 12:29 pm

That is also my understanding, and it seems perfectly feasible when you realise that only the UKMO collects data in the UK. How much they will have to pay for it, and whether that cost was factored into their tender, only time will tell, but these secondary contracts very often fail in the UK.

September 14, 2015 12:18 pm

This brings up “Lies, damned lies, Statistics and Climate Forecasting.” A song.
With apologies to Tennessee Ernie Ford:
Some people say people are made outta mud
Global warmists they are, they are chewing their cud,
Chewing their cud and follow Al Gore
A mind that’s a-weak can you ask for much more?
More than one megawatt, and what do you get?
Another prognosis and deeper in debt
Saint Peter don’t you call ’em ’cause you must let ‘em be
They sold their souls to the IPCC.
They came in one mornin’ when the sun didn’t shine
They picked up their papers and continued the grind
They had sixteen conditions, mostly falsified bull
And the straw boss said “Well, a-bless my soul”.
More than one megawatt, and what do you get?
Another prognosis and deeper in debt
Saint Peter don’t you call ’em ’cause you must let ‘em be
They sold their souls to the IPCC.
They came in one mornin’, it was drizzlin’ rain
the prognosis had failed them again and again
The boss harshly told them, You will do many more
Do as I tell you, and agree with Al Gore.
More than one megawatt, and what do you get?
Another prognosis and deeper in debt
Saint Peter don’t you call ’em ’cause you must let ‘em be
They sold their souls to the IPCC.
The cold snap we’re having now, it just cannot last
and hidin’ the warming that occurred in the past
Their ol’ man Mann and his hockey stick.
With conditions like this nothing ever will click.
More than one megawatt, and what do you get?
Another prognosis and deeper in debt
Saint Peter don’t you call ’em ’cause you must let ‘em be
They sold their souls to the IPCC.
http://lenbilen.com/2012/01/31/lies-damned-lies-statistics-and-climate-forecasting-a-song/

H.R.
Reply to  lenbilen
September 14, 2015 2:13 pm

*grin* Nice!

BallBounces
September 14, 2015 12:26 pm

As a catastrophic global warming, er, warmist, I want to say a big “No!” to the headline: Is It Time To Stop The Insanity Of Wasting Time and Money On More Climate Models? It is NOT!!!! In fact, we should be even more insane and waste even more money, if that’s possible. Oh sure, we had to drop “hide the decline” and settle for “realign the decline”. And we may even be pushed one day to “redefine the decline”, but, “malign the decline”? — never!!! There is NO place for sanity or belt-tightening when it comes to something as important as the climate. Too many lives (and careers) depend on it. And I am unanimous in this.

Stephen Richards
Reply to  BallBounces
September 14, 2015 12:30 pm

errr /sarc

Reply to  BallBounces
September 14, 2015 5:48 pm

Remind me never to agree with anyone who disagrees with me.

Resourceguy
September 14, 2015 12:26 pm

How many supercomputers would it take to provide real time simulation and prediction of the AMO? Start the grant writing. It could be the next virtual high speed rail project.

commieBob
September 14, 2015 12:40 pm

“Those who have knowledge don’t predict. Those who do predict don’t have knowledge.” Tzu, Lao (6th Century BC)

Dr. Ball, you may have stretched Laozi beyond the breaking point. 😉

September 14, 2015 12:52 pm

So um… yeah. I hadn’t read the entire post before submitting my last comment, since I wasn’t worried about the post as a whole. I’ve always been interested in the hockey stick debate, but global warming as a whole mostly bores me. Still, since I had commented on the post, I thought I should read the post as a whole.
I haven’t managed to do so though. You see, I got kind of stuck when I found out Tim Ball misquoted the IPCC in a rather severe way. He claims:

The IPCC acknowledge that,

“In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

That comment is sufficient to argue for cessation of the waste of time and money.

And had to stop. Why in the world would anyone think that a sufficient reason to stop spending money trying to forecast the weather? Here’s a fuller quote:

In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions

Or from another part of the IPCC report being quoted:

The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future exact climate states is not possible. Rather the focus must be upon the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions.

All these quotes are saying is we cannot hope to predict exactly what future weather/climate will be via a model. That’s not surprising or remarkable. Nobody thinks we could. We don’t create weather forecasts by running a single model and taking its results. We run models over and over and see which results are the most likely.
That doesn’t prove weather models are useless, yet that’s exactly what this post claims. It claims because the IPCC acknowledges it’s impossible to predict exact climate states, models should be scrapped. That’s beyond silly. Maybe models aren’t worth their costs, but this argument does nothing to show that. If we accept this argument, every weather and climate model in existence would have to be scrapped, including ones many businesses rely upon.

Reply to  Brandon Shollenberger
September 14, 2015 2:07 pm

Like your comment — it is better than mine that follows.
With regard to: “. . . [chaos ensures] the long-term prediction of future climate states is not possible.”
Too vague about “long term.” Is it a day, a week, a year, a century? Where is the reference showing this chaotic behavior? Don’t appeal to generalities about chaotic systems because:
1) Some chaotic systems have areas of stable behavior for which predictions may be possible and our climate may be in one. Where are the studies showing ore climate isn’t stable?;
2) The displayed temperature behavior may be in error, but doesn’t appear chaotic, so where is the evidence that these errors aren’t due to parameter errors rather than chaos?

Reply to  Philip Lee
September 14, 2015 2:10 pm

“our climate” not “ore climate”

emsnews
Reply to  Brandon Shollenberger
September 14, 2015 3:16 pm

Except the predictions are uniformly way off base from reality and dead wrong. Any business relying on these bad predictions would go bankrupt.

Reply to  Brandon Shollenberger
September 14, 2015 4:29 pm

The Australian BOM gave us a prediction that went something like – For the rest of the country, the chance of above average rainfall is 50%.
How much would you pay for a prediction like that?

Stuart Jones
Reply to  Robert B
September 14, 2015 6:39 pm

and they were wrong!!!

Reply to  Brandon Shollenberger
September 14, 2015 8:32 pm

I think you misunderstand the mathematical subtleties involved. That IPCC complete statement is skating on very very thin ice.
Let’s take an example: a chaotic system with three attractors. So at any given point it can flip from one attractor, with one sort of average-ish value, to a completely different attractor with another longish term but very different average.
Computing the overall average of all the states is useless, because depending on micro differences, it might stay stuck around an attractor for several centuries, before wandering off to a new one.
I.e. it is of no political or economic use to say ‘there are three possible states for the climate in 20 years: it might be 3 degrees warmer, 3 degrees colder, or pretty much the same as it is now, on average, with an equal probability of all three’.
But that sort of prediction is exactly what ‘prediction of the probability distribution of the system’s future possible states’ means.
We have established that it’s bounded, and we have assigned a probability distribution to some possible future states.
The fact that the answer, though accurate, is pragmatically totally useless, is the point being made.
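A toy example makes the point concrete. The Python sketch below is my own illustration, not Leo Smith's; it simulates a noisy one-dimensional system with three attractors (a triple-well potential, chosen purely for simplicity) and shows that trajectories camp near one attractor for long stretches before hopping to another, so the long-run probability distribution says little about where any particular trajectory will actually be.

import math
import random

def drift(x):
    # Negative gradient of a triple-well potential with minima near -2, 0 and +2
    # (the three "attractors") and barriers near -1 and +1.
    return -x * (x * x - 1.0) * (x * x - 4.0) * 0.5

def simulate(steps=20000, dt=0.01, noise=0.9, x0=0.0, seed=1):
    random.seed(seed)
    x = x0
    visits = {-2: 0, 0: 0, 2: 0}     # time steps spent nearest each attractor
    for _ in range(steps):
        x += drift(x) * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        x = max(-3.0, min(3.0, x))   # numerical safety clamp
        nearest = min(visits, key=lambda a: abs(x - a))
        visits[nearest] += 1
    return visits

for seed in (1, 2, 3):
    v = simulate(seed=seed)
    total = sum(v.values())
    shares = {a: round(n / total, 2) for a, n in v.items()}
    print("seed", seed, "fraction of time near each attractor:", shares)

Different seeds give very different occupation fractions over the same span, which is the practical uselessness being described: the ensemble statistics are well defined, but they do not tell you which regime you will actually be living in.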

Reply to  Leo Smith
September 15, 2015 5:13 pm

Not “totally useless” — such a prediction might show that warming was limited and not dangerous.

PaulH
September 14, 2015 1:07 pm

Simply make their funding contingent upon results. For example, if last year’s forecast/projections/predictions are correct, they get money. If not then, bad scientists, no cookie — they get exactly zero.

September 14, 2015 1:22 pm

The Mathematics of Turbulence

Accuracy and Precision

Gary Pearse
Reply to  Roy Denio
September 14, 2015 6:31 pm

Van Gogh had visual problems. I had cataracts removed that had me seeing blurred clusters of stars from one star and several blurred moons distributed about a centre. I kept one cataract for about 10 years because I could read fine print and see my computer clearly without glasses. Now with ocular implants I can read a licence plate at more than 100 ft, but need reading glasses. Several other painters experienced eye disorders, especially with age, that could be diagnosed from their paintings. Van Gogh’s “math” was simply the chaos of pathological light dispersion.

JerryM
September 14, 2015 1:22 pm

The problem is that people who believe in AGW will not react to facts and reason. As Aristotle said, ‘some people just cannot be taught’. You can show them graphs, p-values, error bars, photos of thousands of polar bears enjoying the sun on an oil rig. They will not change their minds with reason. Until something changes their ‘feelings’, they will go on believing.

September 14, 2015 1:27 pm

Extremely likely that it’s all a fraud?

Resourceguy
September 14, 2015 1:32 pm

In a related story by the WSJ on zombie data servers, roughly 10 GW of power is being wasted on servers that are powered on but not connected to anything or doing anything. I wonder how large a CO2 footprint that is.

September 14, 2015 1:32 pm

The 1/14 APS workshop minutes revealed serious doubts about the IPCC and GCMs; the participants just didn’t have the cojones to say so outright.

David S
September 14, 2015 1:33 pm

The role of confirmation bias, I think, overwhelms and has controlled the arguments on global warming. As a non-scientist, I actually don’t believe there is any correlation between CO2 and temperature/climate. There would be better correlation between temperature and business cycles. When you start with the aim of establishing whether a correlation exists, you can’t start with the assumption that it does, theoretically or otherwise. Even many sceptics seem to concede that there is a relationship between CO2 and temperature. They could have just as easily established a theory based on oxygen. The amount of energy and money wasted on proving or disproving correlation with CO2 has actually perpetuated the scam. By arguing about why models don’t work, sceptics have fallen for the trap of assuming that scientifically there should be a correlation between temperature and CO2. As a layman, I think there is none, either theoretically or otherwise.

Gary Pearse
Reply to  David S
September 14, 2015 6:48 pm

CO2 does what physicists say it does, but the interaction with negative feedbacks largely neutralizes the effect. The atmosphere/land/water/ice/biosphere butts in to counteract it. The recent observation by NASA of greening of the planet is an example. Growth in plants is endothermic (it takes heat out of the system). If the plants are growing on a desert, albedo is reduced, which would tend to warm things up, but the vegetation also anchors moisture and itself emits water vapor, cooling things down. The net effect is to cool the day down and warm up the otherwise cold desert night. This in turn reacts to create more precipitation, and so on.
The well-established physics of CO2’s absorption of longwave radiation gives CAGW proponents comfort and encouragement that isn’t deserved, because the planet reacts negatively to it in multiple ways, not only in the part of the biosphere I have described above.

September 14, 2015 1:34 pm

As said elsewhere:
In engineering and design, models are essential tools. In climate ‘science’, models are essentially for the amusement of WUWT’s punters.

Reply to  vukcevic
September 14, 2015 4:21 pm

In engineering and design models are essential tools.
Yeah, but an engineer’s model has to actually work.

Ric Haldane
Reply to  Nicholas Schroeder
September 14, 2015 6:16 pm

Only because engineers know they have to deal with reality. Climate modelers try to create reality.

Simon
September 14, 2015 1:49 pm

Tim Ball says….”Weather forecasts beyond 72 hours typically deteriorate into their error bands.”
So you are saying weather forecasting and predicting climate are the same thing? I’m not so sure they are.

Reply to  Simon
September 14, 2015 4:23 pm

“…weather forecasting and predicting climate are the same thing?”
IPCC AR5 and WMO define climate as weather averaged over thirty years. So, yeah, they are.

Simon
Reply to  Nicholas Schroeder
September 14, 2015 5:04 pm

Not over the long term they are not, and the long term is what we are worried about. Very different science.

Gamecock
Reply to  Simon
September 14, 2015 5:03 pm

I agree with you, Simon. He at first seems to conflate weather and climate.

H.R.
September 14, 2015 1:59 pm

As far as the U.S. goes, I don’t think any congress-critter ever actually funds climate models. Congress funds Departments, Bureaus, and Agencies which fund climate modeling. Only in Washington D.C., when a 12% increase in budget is submitted, is a 7% budget increase called a budget cut (horrors!). From their ever-increasing budgets, the various TLAs allocate funding, some of which is for climate modeling.
Keep in mind that the potential for catastrophe or the need to be doing “sumpthin’ ’bout sumpthin” – no one remembers the original purpose of most of the TLAs anyway – keeps the bureaucracy in place. The way to make the bureaucracy bigger and thus more powerful, is to increase funding on all the various things currently being done and find new things to do, whether or not they are already being done by other agencies.
IMHO, our elected political ‘leaders’ have been overtaken by the entrenched bureaucracies, and climate modeling and climate science are entrenched in the bureaucracies.
Climate modeling will go on being funded and receive increased funding in the U.S. regardless of the wisdom or merit in pursuing a reasonably working climate model and despite the abject failures of the current crop of models. And if by chance a perfect climate was accidentally produced tomorrow, all the agencies would ask for a budget increase to improve it!

H.R.
Reply to  H.R.
September 14, 2015 2:02 pm

Oopsie!
“And if by chance a perfect climate model was accidentally produced tomorrow, all the agencies would ask for a budget increase to improve it!”

Reply to  H.R.
September 14, 2015 3:48 pm

Isn’t this what they’re trying to accomplish? If they accidentally create a perfect model, then they will know just what to do to create a perfect climate. Then they can ask for a budget increase to improve it! 😉 /sarc

TRM
September 14, 2015 2:01 pm

I don’t want to stop funding climate models because basic research is vital. What I do want is merit based allocation of funds. So 99% of the models lose their entire budget and 1% get more money. It is easy to pick them out. The one or two closest to reality win. This would encourage a lot of accuracy and pit one team against others trying to “fix” the numbers.

Gamecock
Reply to  TRM
September 14, 2015 5:06 pm

“I don’t want to stop funding climate models because basic research is vital.”
This is not in evidence. You are begging the question.

Gary Pearse
Reply to  TRM
September 14, 2015 6:55 pm

How do we know the ‘close’ models are related to reality? I’ve been thankful that they didn’t by accident match the actual climate at the height of the hysteria over global warming. Getting killed by falling windmill parts or dead geese would have become as common as traffic accidents by now.

MarkW
Reply to  TRM
September 16, 2015 11:36 am

Basic research is vital, models aren’t basic research.
Basic research is going out in the field and getting actual data.

Jer0me
September 14, 2015 2:11 pm

peta is 10^16 (or a thousand) million

Not quite. In computing the multiplier is 1024, but even if we stick to the conventional 1000s, peta refers to one quadrillion (10^15). Tera is one trillion, giga one billion, and mega one million. We will have to start using SI notation with computer sizes soon!
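For reference, a quick sketch of how the decimal SI prefixes compare with the binary multipliers often used for computer storage (standard definitions, nothing specific to this comment):

prefixes = ["kilo", "mega", "giga", "tera", "peta"]
for i, name in enumerate(prefixes, start=1):
    decimal = 1000 ** i        # SI definition, powers of 1000
    binary = 1024 ** i         # binary convention (kibi, mebi, gibi, tebi, pebi)
    print(f"{name:>4}: 10^{3 * i} = {decimal:.3e}   2^{10 * i} = {binary:.3e}   "
          f"ratio {binary / decimal:.3f}")

At the peta scale the two conventions already differ by about 12.6 percent, which is why the distinction starts to matter for supercomputer and storage specifications.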

Reply to  Jer0me
September 14, 2015 4:24 pm

One petagram = one metric gigaton.

Dawtgtomis
Reply to  Jer0me
September 14, 2015 7:49 pm

…But Petabytes sounds like a treat cookie for abused dogs.

Reply to  Dawtgtomis
September 14, 2015 8:43 pm

That is not a bad description of harassed sysadmins.
Of course the BOFH strikes back…
https://en.wikipedia.org/wiki/Bastard_Operator_From_Hell

JK
September 14, 2015 2:11 pm

I was interested in the second sentence of the post:
‘Weather forecasts beyond 72 hours typically deteriorate into their error bands.’
A reference would be very useful here. Certainly the scientific establishment claim that their performance in weather forecasting is much better than this.
An example comes from Peter Bauer, Alan Thorpe and Gilbert Brunet writing in Nature last week (The quiet revolution of numerical weather prediction, Nature 525, 47–55, 03 September 2015).
The abstract is freely available here:
http://www.nature.com/nature/journal/v525/n7567/full/nature14956.html
You can also see figure 1 for free at that link (click to enlarge).
According to their figure 1, they are claiming that forecasts have significant value out to 7 days (= 168 hours, more than twice what is claimed by Tim Ball here).
Furthermore, they show a significant improvement in forecast performance over time. According to their statistics you would have to go back to around 1980 before useful performance fell to 72 hours. In the article they claim that both improved data collection and bigger computers have contributed to this improvement.
While Tim Ball’s focus here is on climate, he does kick off with an attack on weather forecasting (‘They are not just marginally wrong. Invariably, the weather is the inverse of their forecast’), and does later refer to workers in ‘weather offices’.
It would be useful if he could clarify the extent to which he really means to critique weather forecasting. If he does, then I think his piece as written is very weakly supported. As it stands I just don’t see the numbers or the supporting references that could be used to demolish an article such as Bauer et al.
If Ball wants to go on the offensive with an opening line such as ‘forecasts beyond 72 hours typically deteriorate into their error bands’, then he should expect that people will ask him to justify it.

Reply to  JK
September 14, 2015 2:45 pm

Perhaps he’s referring TWC’s “Weather on the 8’s”. 😉
(I’ve seen them forecast thunderstorms for the day at 5:08 AM and at 6:18 AM forecast clear skies. (Later that day, it rained.))

Marcuso8
Reply to  Gunga Din
September 14, 2015 3:00 pm

I have seen that many, many times in Canada!!! They even put out “Flash Flood” warnings for my area, then we get a sprinkle of 2 mm!!!

JK
Reply to  Gunga Din
September 14, 2015 3:30 pm

Who needs skepticism when you have anecdotes.

Reply to  JK
September 14, 2015 8:55 pm

A weather forecaster at WTAE-TV Channel 4 in Pittsburgh has started issuing a “4-degree guarantee”: “Every weekday during the 5pm newscast, Mike Harvey will give viewers his 4-Degree Guarantee. He guarantees the next day’s high temperature will be within 4 degrees of what he predicts.”
http://www.wtae.com/weather/pittsburghs-action-weather-with-the-4degree-guarantee/32379066
As far as I can tell, that’s +/- 4 degrees (F). In other words, this weather forecaster is claiming bragging rights if he gets a day-ahead high-temp forecast within a nine-degree range.

Peter Sable
Reply to  JK
September 15, 2015 4:39 am

‘Weather forecasts beyond 72 hours typically deteriorate into their error bands.’

Certainly the scientific establishment claim that their performance in weather forecasting is much better than this.

https://www.google.es/search?q=Deteriorate&ie=utf-8&oe=utf-8&gws_rd=cr&ei=XAL4Ve6yDMGRaNHaprgF
Deteriorate: “become progressively worse.”
“Deteriorate into their error bands” doesn’t mean fall to zero instantly; it means there’s less accuracy at 4 days, even less at 5, and very minimal utility at 7 days, per your referenced paper.
I’m a surfer, so I know quite well how long, and under what conditions, a forecast is good for (e.g. if there’s an approaching low, plus/minus 100 km on the center will dramatically alter surfing conditions).
Peter

manicbeancounter
September 14, 2015 2:17 pm

Unlike Roy Spencer’s chart reconciling actual data to the CMIP5 models, there is one graphic that shows the actual temperatures within the range of the hindcast models. Reproduced below, it is from Prof Ed Hawkins of Reading University, near London.
There are problems.
1. It is only from 1985,
2. That is at least 20 years BEFORE the models were devised.
3. The model range through to 2050 is huge, and the bottom end is virtually no warming. So we have both the dire apocalypse and the trivial all within the range of the climate models.
4. There is less than a decade of actual forecast – and that is bumping along the bottom. Hardly something to get all in a tizzy about.
5. The “actual” data is HADCRUT4, which shows a greater recent warming trend than HADCRUT3. This in turn shows a greater recent warming trend than the satellites.
http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2014.png

manicbeancounter
Reply to  manicbeancounter
September 14, 2015 2:29 pm

A way of evaluating the evidence for climate catastrophism is to consider whether, over time, climatology is making statements in support of the CAGW hypothesis with increasing empirical content, or whether it is degenerating towards more banal statements which do not exclude milder versions of the AGW hypothesis, or even random or natural variations.
http://manicbeancounter.files.wordpress.com/2015/09/090515_1425_degeneratin11.png?w=600
Explained in detail at http://manicbeancounter.com/2015/08/14/can-climatology-ever-be-considered-a-science/

JK
Reply to  manicbeancounter
September 14, 2015 3:29 pm

At the link manicbeancounter gives his favourite Richard Feynman quote:
‘You cannot prove a vague theory wrong. If the guess that you make is poorly expressed and the method you have for computing the consequences is a little vague then ….. you see that the theory is good as it can’t be proved wrong. If the process of computing the consequences is indefinite, then with a little skill any experimental result can be made to look like an expected consequence.’
This reminded me of a sentence in Tim Ball’s post:
‘The Cosmic Theory appears to provide an answer to the relationship between sunspots, global temperature, and precipitation but is similarly ignored by the IPCC.’
It seems to me that the way Ball refers to the theory here is a bit vague. I can understand that a more precise explanation would be off topic for the post but, as with the criticims of weather forecasting, a reference or two would go a long, long way.
It also seems that his complaint about the IPCC in this sentence is imprecise. The report of Working Group I, Chapter 7, on Clouds and Aerosols does provide some assessment of the connection between cosmic rays and clouds (section 7.4.6, Impact of Cosmic Rays on Aerosols and Clouds, in http://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter07_FINAL.pdf ). It might be argued that they are dismissive, or wrong, or don’t pay it enough attention. But they do discuss over 40 papers. I may be mistaken, but I think that’s probably more than Wattsupwiththat has discussed.
This may seem like a small complaint, but if Ball wants to really win the argument then he needs to be sharper, and not give opponents a reason to dismiss what he says as ‘not literally true’.
This is especially so since I don’t think it really stands up to say that the scientific establishment has ignored the cosmic ray theory. It would be different if the IPCC had simply had to scrape together every cursory mention they could find, but this is not so. If you look up papers by Svensmark or Kirkby on Google Scholar you will find them cited many hundreds of times. Admittedly, many citations are from sceptics who might be considered ‘outsiders’, but they are by no means dominant.
So if you want to claim that there is a theory which is being ignored, you really need to point to it. Stop being so vague.

Reply to  manicbeancounter
September 14, 2015 9:10 pm

In figure 7 of http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade , radiosonde data indicate that the lowest 100-200 or so meters of the lower troposphere out-warmed the satellite-measured lower troposphere as a whole (my words) from 1979-2015 by about the same amount that HadCRUT3 did. I think HadCRUT3-global is closer to the truth of global surface temperature than any satellite dataset of the global lower troposphere, any global surface dataset obsoleted before 2005 (since post-2005 matters), any surface dataset newer than 2008, and any version of GISS and NCDC/NCEI global temperature from around or after 2008. In terms of fluctuations, HadCRUT3 even correlates better with all versions of both major satellite-measured global lower troposphere temperature series than every other global surface temperature dataset that was current around or after 2008. I think HadCRUT3 had accuracy so great that it should be restored.

Marcuso8
September 14, 2015 2:53 pm

Give me a high speed computer and I can make it tell you what ever you pay me to tell you !!!! Funny how that works !!!

Marcuso8
Reply to  Marcuso8
September 14, 2015 2:54 pm

The ONLY model that counts is REALITY !!

MarkW
Reply to  Marcuso8
September 14, 2015 3:12 pm

The only model that counts is Heidi Klum.

Reply to  Marcuso8
September 14, 2015 8:47 pm

The only near-model that could count was Carol Vordeman…
https://en.wikipedia.org/wiki/Carol_Vorderman

Matt
September 14, 2015 2:53 pm

In this whole scamming industry, how many people are involved? How much of the trillions of dollars does each scammer get? Are these some of the highest paid people on earth?
Just curious.

Marcuso8
Reply to  Matt
September 14, 2015 3:03 pm

The trail is hidden because it’s government !!! Just ask the EPA for backup data !!!

knr
September 14, 2015 3:04 pm

In the end the ‘models’ are merely automated ways to manipulate numbers; they have no real intelligence and can do nothing but what they are told to do. And that is where the ‘trick’ is: by starting with the ‘right’ assumptions you can get the ‘right’ results, even if those results are in reality worth nothing.

Marcuso8
Reply to  knr
September 14, 2015 3:09 pm

Actually , that would be ” Left ” results, because it is mostly liberals that push this lie !!!

pat
September 14, 2015 3:06 pm

14 Sept: CarbonBrief: Daily Briefing | 2015 and 2016 set to break global heat records, says Met Office
Scientists confirm there’s enough fossil fuel on Earth to entirely melt Antarctica
A new “blockbuster” study finds that burning all the world’s fossil fuel resources would raise global temperatures enough to eliminate the Antarctic ice sheet. The process would likely take up to 10,000 years but we would be committing ourselves to more than 50 metres of sea-level rise, enough to submerge major cities from Shanghai to New York, says RTCC. Most of the scientific focus has been on west Antarctica and this is the first research to look at the impact of fossil fuel burning on the entire sheet, says The Independent. The New York Times’s Andy Revkin has a video chat with the authors over at his Dot Earth blog. The Guardian, Reuters and TIME all have more on the new study. You can also read Carbon Brief’s write-up. The Washington Post
http://www.carbonbrief.org/blog/2015/09/daily-briefing-2015-and-2016-set-to-break-global-heat-records/
btw link also has several links to UK media’s exaggerated headlines of Met Office claims that 2015 and 2016 COULD break global heat records…OR NOT.
hedging their bets:
14 Sept: UK Telegraph: Emily Gosden: World is warming again, says Met Office – but Britain could see cooler summers
Pause in global warming could be about to end, yet changes in the Atlantic Ocean could bring colder weather to northern Europe
“The world is warming again,” Professor Adam Scaife, head of monthly to decadal forecasting at the Met Office, said. “We can’t be sure this is the end of the slowdown, but decadal warming rates are likely to reach late-twentieth century levels within two years.”…
However, other likely changes to the climate, in the North Atlantic Ocean, could still lead to cooler and drier summers in the UK and northern Europe, scientists said…
***He said he thought it was “likely” that “we would see an absolute cooling – actually summers getting cooler than they have been recently” but insisted this was not a “forecast” because other factors – such as rising global surface temperatures – could counteract the impact…
A cooler North Atlantic could result in some “temporary recovery” in sea-ice in the North Atlantic in the Labrador and possibly Nordic seas, he said.
“This does not mean we are headed for the next ice age – absolutely not. We are talking about a modest cooling… but it is potentially enough to affect weather patterns in Europe and elsewhere.”…
http://www.telegraph.co.uk/news/earth/environment/climatechange/11862392/World-is-warming-again-says-Met-Office-but-Britain-could-see-cooler-summers.html

Marcuso8
Reply to  pat
September 14, 2015 3:11 pm

In other words , Toss a set of dice and you would be more accurate !!!

Gamecock
Reply to  pat
September 14, 2015 5:10 pm

Ahhh, the ubiquitous “new study.”

Gary Pearse
Reply to  pat
September 14, 2015 7:48 pm

Only a social scientist could imagine us burning fossil fuels for 10,000 years more. The Neanderthal component in these authors is huge. We already have a replacement and it is thermonuclear – hopefully with fusion soon to follow – and it probably would already be here if the linear thinkers hadn’t been shouting nuclear down for over half a century. Imagining what will happen in a quarter century has never been accomplished. That is what is wrong with the doomsters who keep doing this.
They have one hand tied behind their backs because they never consider the elephant in the room – human ingenuity!! They never see it because they don’t possess ingenuity. They are students of fixed systems – how the lion behaves or the sex life of a flea. Thomas Malthus (late 18th, early 19th century) had us all buried in horse manure by the middle of the 19th century; Jevons had the industrial revolution starving itself out through coal resource decline by the end of the 19th century. The Club of Rome in 1972 (Ehrlich, Holdren, …) had us starving to death before the end of the 20th century. By the first decade of the 21st century, the 1972 population had doubled and the number of people in poverty was fewer than in 1972!
These people are biological accountants whose thoughts about the future are worthless, because human ingenuity is not within the purview of their considerations. Ceteris paribus thinking disqualifies them from the business of pronouncements about the future. Humankind has done some awful things, but with countless doom scenarios that NEVER, EVER came to pass, I think it safe to say that it is axiomatic that humans cannot destroy the planet – they can only do some temporary damage.
I don’t wish to minimize the horror of Hiroshima, but radiation levels had decreased to background levels within a year and the city was rebuilt. Thinkers on this remarkable fact ask: why is this the case in Hiroshima (and Nagasaki) and not in Chernobyl? Well, the answer is supplied by nature. The vast ‘exclusion zone’ of Chernobyl has sprung up as a remarkable wildlife refuge with wild pigs, wolves, bears, beavers and all sorts of creatures thought to have been extirpated in much of Europe. Anti-nuclear-energy and lefty activists hate that this has happened and snipe about how the animals are all sick, and other BS. Have a look:
https://ca.search.yahoo.com/search?fr=mcafee&type=C111CA662D20141029&p=Hiroshima+today
http://www.slate.com/articles/health_and_science/nuclear_power/2013/01/wildlife_in_chernobyl_debate_over_mutations_and_populations_of_plants_and.html
Looks bad, eh? But:
“Chernobyl’s abundant and surprisingly normal-looking wildlife has shaken up how biologists think about the environmental effects of radioactivity. The idea that the world’s biggest radioactive wasteland could become Europe’s largest wildlife sanctuary is completely counterintuitive for anyone raised on nuclear dystopias.”

MarkW
Reply to  Gary Pearse
September 16, 2015 11:40 am

Thermonuclear is fusion.

MarkW
Reply to  Gary Pearse
September 16, 2015 11:41 am

The types and amount of radiation put out by Chernobyl and the Hiroshima/Nagasaki bombs were vastly different.

Reply to  pat
September 14, 2015 9:18 pm

The catastrophe assumes feedbacks that I think are unrealistically high, and CO2 absorption by the oceans that I think is unrealistically low. I think more likely, burning all available fossil fuels will merely delay the next ice age glaciation by a few thousand years. And, I think we should do reasonably economic energy conservation (I prefer mainly by energy-efficiency improvement because that is largely economically practical) for many reasons, because improvable inefficiencies of homes, motor vehicles and appliances waste money and resources and increase pollution. Also, postponing burning of fossil fuels postpones the oceans removing their CO2 barrier to the next ice age glaciation.

Marcuso8
September 14, 2015 3:12 pm

More Humans die from cold than from heat around the world !! REALITY !!!

Phaedrus
September 14, 2015 3:24 pm

It seems very simple to me: rather than waste money chasing the Global Warming solution (there isn’t one, and that’s why supporting researchers love the endless funding), why don’t we look at ways that genuinely make life more energy efficient? Better housing, use of waste heat, fantastic public transport, etc. The list is endless and will result in a solution, rather than an Armageddon of energy poverty for all but the few.

Reply to  Phaedrus
September 14, 2015 3:52 pm

Or narrow a hurricane’s predicted landfall by 50+ miles when it’s 2 days out?
Or how about giving a tornado warning or alert 15 minutes sooner?
Or even just reliably tell a farmer he can drive his tractor out into the field and it won’t get stuck in the mud?
Climate models might have a place in research. But invest in getting tomorrow right before warning us about 20, 50, 100 years from now and pretending you can do something about it today (with our dollars and freedoms as the price).

MarkW
Reply to  Gunga Din
September 16, 2015 11:43 am

Basic research, such as trying to figure out exactly how clouds “work” would benefit both weather models and climate models.

Reply to  Phaedrus
September 14, 2015 10:22 pm

I would like a reduction of wasteful electricity consumption by the “vampires”. Reasonable reduction of electricity consumption by those will save consumers more within a couple of years than the cost increase of improving the energy efficiency of the “vampires”. For example, my cable modem and another box that my cable company put into my home, which was necessary to separate phone and Internet signals from my cable feed. These two devices combined have 13 LEDs, of which 9 are on nearly all the time and another is on about half the time. These LEDs are fairly obviously cheapies using cheap LED chemistries developed in the 1970s that have seen little improvement since the mid-1980s. I have means to examine their spectra, and I know LEDs. I’m fairly sure these LEDs and their associated losses in dropping resistors, driving circuitry and power supplies add up to around 0.7 to 1.5 watts. At the US national average residential electricity cost of $0.12 per kWh, over 10 years this is around $7 to $16 – in 2015 dollars, and assuming residential electricity does not have inflation exceeding the official inflation index.
I know of LEDs where these losses can be reduced by 90-plus percent at a cost of a few cents each more at the factory. And even if 5 times that is what the cable companies have to pay for cable modems and similar “boxes” with more modern, higher-efficiency LEDs, it would cost my cable company about $5 more to provide me both “boxes” made with LEDs that would save me $6-$15 on my electric bill over 10 years. One of these boxes I have had for more than 10 years; the other was imposed on me last year, making its function run on my grid electricity rather than my landlord’s, with loss of economy of scale, probably due to a modernization project of my cable company (to make apartment renters more similar to homeowners) rather than something my landlord asked for.
Meanwhile, my PC has another of these LEDs for power-on indication. I know a fairly easy hack, something so easy that “hack” may be an unfairly strong word, that I can do to reduce its impact on my electric bill by around .1 watt. I recently got some InGaN green LEDs that are suitable for this, with some surplus to get a better quantity discount that reduced the cost of a R&D project that I am working on. With one of these and a resistor that cost me maybe 2 cents, and maybe 5 minutes of personal labor (I can do other things while my soldering iron is warming up), I will soon reduce the .1 watt load to about .005 watt, for electricity savings of .095 watt, which would reduce a US-national-average residential electric bill by about 10 cents per year if the PC is always on.
For that matter, my monitor has a power-on LED of the same old-tech chemistry, so does my printer, and so does my computer’s set of loudspeakers. My landline answering machine has old-tech LED chemistry for a digital display that is always on, and it is powered by a “wall wart” that consumes about a watt more than if it had the energy-efficient technology typical of modern cellphone chargers.
More in that area – why should there be incandescent nightlights that consume 3 to 7 watts? Why should many with power consumption of 5 to 7 watts have shades to block a fair amount of the already-inefficiently-produced light? Due to some known economies of scale, a good 7-watt 120V incandescent has half or less the efficiency of a good 75W 120V incandescent. Effective LED nightlights consuming .4-1.25 watts have been available for a few years already, and ~3-watt LED lightbulbs with light output like that of 25W incandescents have been on the market in brick-and-mortar home centers throughout 2015, along with an 8.5W LED lightbulb that would match or slightly outperform even a better 75W non-halogen incandescent (now largely unavailable) for nighttime porch lighting – available from Lowes in US for about $5-$6.
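The dollar figures in this comment follow from simple arithmetic. Here is a small Python sketch of that calculation, using the commenter's quoted $0.12 per kWh rate and wattages; everything else is just unit conversion:

RATE_PER_KWH = 0.12      # USD, quoted US national average residential rate
HOURS_PER_YEAR = 8766    # average year, including leap years
YEARS = 10

def ten_year_cost(watts):
    # Energy in kWh over the period, times the electricity rate.
    kwh = watts * HOURS_PER_YEAR * YEARS / 1000.0
    return kwh * RATE_PER_KWH

for load in (0.7, 1.0, 1.5):   # indicator-LED overhead of the two cable boxes, W
    print(f"{load:.1f} W continuous load: ~${ten_year_cost(load):.2f} over {YEARS} years")

# The 0.095 W saved on a PC power LED works out to roughly 10 cents per year.
print(f"0.095 W saved: ~${ten_year_cost(0.095) / YEARS:.2f} per year")

The 0.7-1.5 W range comes out at roughly $7 to $16 over ten years, matching the figures quoted above.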

tabnumlock
September 14, 2015 3:33 pm

I can easily predict the climate and with high accuracy. It’s going to get very cold for a very long time.
http://www.scottcreighton.co.uk/images/Spiral-Precession/Glacial_eras.jpg

Reply to  tabnumlock
September 14, 2015 4:31 pm

“…going to get very cold for a very long time.”
A -8 C anomaly is not all that cold. Here the annual real value swings from -15 F to 100 F. Somehow we all survived.

Reply to  tabnumlock
September 14, 2015 4:49 pm

tabnumlock,
Here’s another, going back 4.5 billion years:
http://www.newscientist.com/data/images/archive/2839/28392301.jpg
Notice that we’re now at the cold end of the range. We could use a few degrees more warming. Unfortunately, you’re probably right. The odds favor a future that’s colder, rather than warmer.

tabnumlock
Reply to  dbstealey
September 15, 2015 6:17 am

We’re at the end of a small warming period within a declining series of warming periods at the end of an interglacial within an ice age and in one of only two CO2 crashes in earth’s history.
http://www.dandebat.dk/images/1365p.jpg
http://www.climate4you.com/images/VostokTemp0-420000%20BP.gif
Red box:
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif

RoHa
September 14, 2015 4:04 pm

“Deficiencies in the accuracy, quality and continuity of the records place serious limitations on the confidence that can be placed in the research results.”
Not a problem. We change the records to fit the results.
“…the necessary data are lacking …”
Not a problem. We’ve got the theory. What do we need the data for? And if we had it, someone would only use it to try to prove us wrong.

stevek
September 14, 2015 4:15 pm

I would not be the least bit surprised if models these institutions have come up with do in fact predict and accurately model the temperature record with few input parameters. However, the problem is that these models do not show much global warming, so they are rejected.
The same type of thing happens in boardrooms. An analyst will come along and show the company that his model predicts declines in profits and revenue over the years if something is not done. The board rejects the analyst’s views because too much political power depends on the status quo. The egos are too attached to a false reality. They won’t even entertain the idea that it is false, and will dismiss any attacks out of hand.
Letters were written to the SEC warning that Bernie Madoff’s results were impossible. Nobody did anything. They were attached to a false reality and could not accept that they were duped.

September 14, 2015 4:34 pm

stevek
See the “Trip to Abilene” paradox, a lesson in “go along to get along” mismanagement.

Bill Illis
September 14, 2015 4:41 pm

The biggest problem is that 100,000 climate scientists, 50 governments, $300 billion per year in green energy proponents and 100 million people …
… have staked their reputations on the CO2 global warming theory being correct.
How do they back down? Why would they back down when they can just continually adjust the temperature record every single month and then take another upward adjustment step every few years with a new methodology backed by a flimsy paper?
And how many climate model results indicating 3.0C per doubling do we really need anyway? Hasn’t there been enough already? Do we need to keep funding 50 climate models for the next 100 years?
Answer is: As long as they can keep getting away with it.

Reply to  Bill Illis
September 14, 2015 4:45 pm

Bill Illis,
Excellent comment, best one I’ve seen today.

kim
Reply to  Bill Illis
September 14, 2015 11:05 pm

The BRICS are on to the sham, still in it for the scam.
===============

September 14, 2015 5:29 pm

There is a very good book about economic forecasting called “The Death of Economics” by Paul Ormerod. It has a lot of information about chaotic systems and why forecasting their future state is impossible. One of the points he makes is that unless a model is perfect, it cannot give an accurate forecast. But even more disconcertingly, making the model closer to reality can actually make the forecast worse! Loads of well-explained maths about attractors, cycles, etc. Recommended.
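The point that an almost-perfect model of a chaotic system can still forecast badly is easy to demonstrate numerically. The sketch below is not from Ormerod’s book; it uses the logistic map, a standard toy chaotic system, where a “model” that is wrong only in the fourth decimal place of one parameter diverges from the “true” trajectory within a few dozen steps.

```python
# Sensitivity of a chaotic system to tiny model errors, using the logistic map
# x_{n+1} = r * x_n * (1 - x_n). The "true" system and an almost-perfect
# "model" differ only slightly in r, yet their trajectories soon decorrelate.

def trajectory(r, x0=0.4, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

truth = trajectory(r=3.9)       # the "real" system
model = trajectory(r=3.9001)    # a model wrong only in the 4th decimal place

for n in (10, 30, 50):
    print(f"step {n}: truth={truth[n]:.4f}  model={model[n]:.4f}  "
          f"error={abs(truth[n] - model[n]):.4f}")
```

The error starts out tiny and ends up as large as the signal itself, which is the attractor-and-cycles argument in miniature.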

amirlach
September 14, 2015 5:53 pm

Had an interesting back and forth with Ed Wiebe, who works at UVic’s School of Earth and Ocean Sciences.
http://climate.uvic.ca/people/ewiebe/
Link to the UVic Model here. http://climate.uvic.ca/model/
Interesting because he was utterly clueless when it came to actual real-world observations and how badly the models actually performed. He repeatedly claimed “the Models work surprisingly well”, yet was unable to produce a single example.
His go-to sources were SkS and Desmog. All he could really muster were repeated appeals to authority, like this gem.
“Just read the IPCC reports, the summaries at least, they are the reference documents you should use.”
The fact that every single IPCC Model has been invalidated by observation was ignored. Reference documents? According to whom?
Too Funny.

dmh
Reply to  amirlach
September 14, 2015 6:48 pm

“Just read the IPCC reports, the summaries at least, they are the reference documents you should use.”
That’s funny. The reason I know the models don’t match observations is because I HAVE read the IPCC reports.
I don’t have the link handy, but one of the amazing things about AR5 was that they set aside model results in regard to sensitivity in favour of “expert opinion”. It would be enlightening to get a comment from Ed Wiebe on the matter.

ScienceABC123
September 14, 2015 7:09 pm

I think an accurate climate model would be very useful. However, the first step to developing an accurate climate model must be not adjusting the observational data to match the model!

September 14, 2015 8:06 pm

IPCC AR5 TS.6 Key Uncertainties is where the climate science “experts” admit what they don’t know about some really important stuff. The IPCC is uncertain about the connection between climate change and extreme weather, especially drought. The IPCC is uncertain about how the ice caps and sheets behave; instead of going missing, they are bigger than ever. The IPCC is uncertain about heating in the ocean below 2,000 meters, which is 50% of it, but they “wag” that that’s where the missing heat of the AGW hiatus went, maybe. The IPCC is uncertain about the magnitude of the CO2 feedback loop, which is not surprising since, after 18-plus years of rising CO2 and no rising temperatures, it’s pretty clear that whatever the magnitude, CO2 makes no difference.
In IPCC AR5 text box 9.2 they admit the models don’t work. The full box is several pages of weasel words.
“Model Response Error
The discrepancy between simulated and observed GMST trends during 1998–2012 could be explained in part by a tendency for some CMIP5 models to simulate stronger warming in response to increases in greenhouse gas (GHG) concentration than is consistent with observations (Section 10.3.1.1.3, Figure 10.4).”
IPCC’s CO2/climate sensitivity is too high. See “Climate Change in 12 Minutes.”
“Their staff is too long, they are digging in the wrong spot.”
“Almost all CMIP5 historical simulations do not reproduce the observed recent warming hiatus. There is medium confidence that the GMST trend difference between models and observations during 1998–2012 is to a substantial degree caused by internal variability, with possible contributions from forcing error and some CMIP5 models overestimating the response to increasing GHG and other anthropogenic forcing. The CMIP5 model trend in ERF shows no apparent bias against the AR5 best estimate over 1998–2012. However, confidence in this assessment of CMIP5 ERF trend is low, primarily because of the uncertainties in model aerosol forcing and processes, which through spatial heterogeneity might well cause an undetected global mean ERF trend error even in the absence of a trend in the global mean aerosol loading.”
Translation: IPCC AR5 admits to the hiatus/pause/stasis/lull and the GCMs don’t have a clue!

willhaas
September 14, 2015 9:26 pm

I believe that they started with a weather simulation and hard-coded in that an increase in CO2 over time causes warming. As such the simulation begs the question and is hence worthless. A general circulation simulation will, numerically, include predictor-corrector loops that can become unstable if one increases the spatial and temporal step sizes. To generate climate-prediction results in finite time from what is essentially a weather simulation program, they would have had to increase those step sizes. That alone could make the simulation unstable no matter what the detailed modeling assumptions are. The results may be more a function of the processing instability than of the actual model, and hence may be meaningless. I think that for climate prediction the entire GCM simulation approach is a big mistake and for many reasons worthless. A totally different approach is required.
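A toy illustration of the step-size instability described above (not a GCM, just a forward Euler step on a simple decay equation, with assumed parameters): the explicit scheme is well behaved for a small time step and blows up once the step exceeds the stability limit, regardless of the physics being modeled.

```python
# Forward-Euler integration of dy/dt = -k*y, whose exact solution decays to 0.
# The explicit scheme is stable only for dt < 2/k; a larger step makes the
# numerical solution oscillate and grow, no matter what the equation says.

def euler_decay(k=1.0, dt=0.5, t_end=20.0, y0=1.0):
    y, t = y0, 0.0
    while t < t_end:
        y += dt * (-k * y)
        t += dt
    return y

print("dt=0.5 ->", euler_decay(dt=0.5))   # stable: decays toward zero
print("dt=2.5 ->", euler_decay(dt=2.5))   # unstable: magnitude explodes
```

The second run is pure numerical artifact, which is the commenter’s worry about results reflecting the scheme rather than the model.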

kim
September 14, 2015 10:06 pm

The models drone on, they’ve got a following wind.

David Cage
September 14, 2015 11:59 pm

I have only ever seen one accurate forecast of climate. That was done using the assumption that there is no change in climate FROM ITS NORMAL COMPLEX PATTERN. This pattern was determined using some very restricted data analysis software designed to look for hidden information in signals. It managed to reproduce the five year rolling average and was done for 64 zones before being averaged. It made correct predictions for any six hundred year period with any of the randomly selected start points.
Averaging the whole earth first appeared to lose too much information on the patterns to produce any useful information.
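For readers unfamiliar with the processing described, here is a minimal sketch of a five-year rolling average computed per zone before combining zones. The 64 zones and the five-year window come from the comment; the data and everything else are placeholders.

```python
import numpy as np

# Placeholder data: annual anomalies for 64 zones over 600 years (made up).
rng = np.random.default_rng(0)
zones = rng.normal(size=(64, 600))

def rolling_mean(x, window=5):
    """Simple rolling mean; 'valid' mode trims the first/last window-1 points."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

# Smooth each zone's series separately (keeping per-zone patterns visible),
# then combine the smoothed zones into one averaged series.
per_zone_smoothed = np.array([rolling_mean(z) for z in zones])
combined = per_zone_smoothed.mean(axis=0)
print(combined.shape)  # (596,) for a 5-year window over 600 years
```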

William Astley
September 15, 2015 2:29 am

CGC will end the CAGW paradigm, stop the insanity. The long range climate forecasts are completely incorrect as climatology is chock full of Zombie theories.
The past is a guide to what will happen in the future. What is missing in paleo climatology is even the most basic understanding as to what is the physical cause of the glacial/interglacial cycle, what causes cyclic abrupt warming and cooling, and what caused the warming in the last 150 years.
There is cyclic warming and cooling (sometimes abrupt warming and cooling) in the paleo record. Interglacial periods end abruptly not gradually.
The Younger Dryas abrupt cooling period (11,900 years ago), during which the planet went from interglacial warm to glacial cold with 70% of the cooling occurring in less than a decade and the cooling lasting for 1,200 years, happened at a time when summer solar insolation at 65N was at a maximum. Solar insolation at 65N does not drive the glacial/interglacial cycle. Atmospheric CO2 levels have nothing to do with the glacial/interglacial cycle or the warming in the last 150 years.
There is a massive forcing function that causes the changes in the paleo record. The planet’s climate does not ‘jump’ or ‘tip’ from one state to another; rocks do not occasionally jump uphill. Each and every time there was an abrupt slowdown in the solar cycle, the planet cooled.
The North American warm blob is starting to disappear as wind speeds pick up and Pacific Ocean cloud cover increases. The same region of the Pacific Ocean will cool, as has already occurred in the Atlantic Ocean.
Greenland ice temperature over the last 11,000 years, determined from ice core analysis (Richard Alley’s paper). William: The Greenland ice data show that there have been 9 warming and cooling periods in the last 11,000 years. There was abrupt cooling 11,900 years ago (the Younger Dryas abrupt cooling period, when the planet went from interglacial warm to glacial cold with 75% of the cooling occurring in less than a decade), and there was abrupt cooling 8,200 years ago during the 8200 BP climate ‘event’.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.hidropolitikakademi.org/wp-content/uploads/2014/07/4.gif
http://www.climate4you.com/images/VostokTemp0-420000%20BP.gif
http://wattsupwiththat.com/2012/09/05/is-the-current-global-warming-a-natural-cycle/

“Does the current global warming signal reflect a natural cycle”
…We found 342 natural warming events (NWEs) corresponding to this definition, distributed over the past 250,000 years… The 342 NWEs contained in the Vostok ice core record are divided into low-rate warming events (LRWEs; < 0.74°C/century) and high-rate warming events (HRWEs; ≥ 0.74°C/century) (Figure)… The current global warming signal is therefore the slowest and among the smallest in comparison with all HRWEs in the Vostok record, although the current warming signal could in the coming decades yet reach the level of past HRWEs for some parameters. The figure shows the most recent 16 HRWEs in the Vostok ice core data during the Holocene, interspersed with a number of LRWEs… We were delighted to see the paper published in Nature magazine online (August 22, 2012 issue) reporting past climate warming events in the Antarctic similar in amplitude and warming rate to the present global warming signal. The paper, entitled “Recent Antarctic Peninsula warming relative to Holocene climate and ice-shelf history” and authored by Robert Mulvaney and colleagues of the British Antarctic Survey (Nature, 2012, doi:10.1038/nature11391), reports two recent natural warming cycles, one around 1500 AD and another around 400 AD, measured from isotope (deuterium) concentrations in ice cores bored adjacent to recent breaks in the ice shelf in northeast Antarctica. …

ulriclyons
September 15, 2015 3:00 am

“Global and or regional forecasts are often equally incorrect. If there were a climate model that made even 60 percent accurate forecasts, everybody would use it.”
My solar-based NAO/AO long-range forecasts have been achieving over 90% accuracy in recent years. Most people ignore them, and some attack them viciously, like Willis E did in 2013. The only sincere interest I am getting is from farmers.

mwh
September 15, 2015 4:14 am

I watched a documentary not so long ago on how the BBC (it could have been the Met Office) compiles its forecasts – I wish I had been concentrating, then I might have found the link. From this it would appear that every forecast is derived from multiple possible scenarios fed into their supercomputer and allowed to run. They then remove the extreme outliers and assess the most likely outcome. Seeing as this computer cost many millions of pounds and is still operated on a GIGO basis (the scenarios), is it any better or less costly than, say, 25 qualified and unbiased meteorologists predicting the weather and taking the mean from them, or even 25 seasoned farmers and fishermen? This would probably be just as accurate, since the Met Office has preconceived ideas about which way the climate is going and therefore their scenario inputs are very unlikely to be neutral in nature; a predisposition to CO2 forcing and CAGW will definitely influence any human interface.
If forecasting past 5 days into the future is around 50% accurate, then surely pure educated guesswork will not be a lot less accurate!
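A minimal sketch of the procedure described above (run many scenarios, drop the extreme outliers, report the central outcome). The scenario outputs are random placeholders, not Met Office data, and the 10th/90th percentile trim is an assumption about what “removing the outliers” means.

```python
import numpy as np

# Hypothetical ensemble: 50 scenario runs producing a forecast value each.
rng = np.random.default_rng(42)
scenario_outputs = rng.normal(loc=15.0, scale=2.0, size=50)  # made-up °C values

# Trim the extremes (assumed 10th-90th percentile window), then average.
lo, hi = np.percentile(scenario_outputs, [10, 90])
trimmed = scenario_outputs[(scenario_outputs >= lo) & (scenario_outputs <= hi)]

print(f"all scenarios:    mean {scenario_outputs.mean():.2f} °C")
print(f"outliers removed: mean {trimmed.mean():.2f} °C "
      f"({trimmed.size} of {scenario_outputs.size} runs kept)")
```

Of course, as the comment notes, the trimmed mean is only as good as the scenarios fed in.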

September 15, 2015 7:27 am

The science is settled.
No need for climate scientists.
No need for climate models.
Fire all the scientists.
Delete all the model software and sell the computers.
Use the money to buy gondolas for NYC so everyone can get to work on Wall Street.

Darkinbad the Brighdayler
September 15, 2015 7:40 am

Modelling is not Mirroring!
Part of the purpose of a model is to express a thread of current knowledge that can be used as a benchmark to measure future incoming data against.
There can be a number of concurrent models that explore different features or threads of incoming data, some of which eventually get ruled out by an increasing gulf between the observed data and the model’s predictions.
From a scientific perspective, it is as important to know what isn’t the case as what is. A discounted model therefore isn’t necessarily wasted time or money; it all adds to the overall picture.
Winnowing the signal from the noise takes a lot of time, effort and data.
Perhaps the problem lies in the expectations of the observers in the sidelines more than in the process itself?
The answer to Life, The Universe and Everything is 42!
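A rough sketch of the winnowing idea in the comment above: keep several candidate models, compare each against incoming observations, and rule out any whose gulf from the data exceeds a tolerance. Every model, observation, and tolerance below is a placeholder, not a real climate model.

```python
# Hypothetical candidate "models" (simple trend functions) and made-up data.
models = {
    "model_A": lambda t: 0.01 * t,
    "model_B": lambda t: 0.03 * t,
    "model_C": lambda t: 0.02 * t - 0.1,
}
observations = [(t, 0.012 * t) for t in range(1, 21)]  # placeholder "data"

TOLERANCE = 0.15  # assumed acceptable gulf between model and observation
surviving = dict(models)
for t, obs in observations:
    for name in list(surviving):
        if abs(surviving[name](t) - obs) > TOLERANCE:
            del surviving[name]  # gulf too large; this model is ruled out

print("still consistent with the data:", sorted(surviving))
```

The discarded models still told us something (what the data are not doing), which is the commenter’s point.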

chapprg1
Reply to  Darkinbad the Brighdayler
September 20, 2015 5:34 pm

A model, computer-based or otherwise, is not likely to add to understanding if it is not responsive to real-world measurements, since it can go off in a direction presupposed by its programmers and continue to bias thinking in a direction deviating from reality. It can allow one to answer ‘what if’ questions in a quantitative way, which may or may not reflect reality unless constrained by accurate and statistically valid data.

terrence
September 15, 2015 9:54 am

“Nearly every single climate model prediction, projection or whatever else they want to call them has been wrong.” I think they are more appropriately called “prophecies”, a la Harold Camping. But, “When the Judgment Day he foresaw did not materialize, the preacher revised his prophecy, saying he had been off by five months.”
The big difference between Camping and the AGW money grubbers is that “His independent Christian media empire spent millions of dollars — some of it from donations made by followers who quit their jobs and sold all their possessions — to spread the word.” So, he used his and his followers’ MONEY; the AGW money grubbers use OUR money (TAXPAYER money).

LT
September 15, 2015 11:00 am

Perhaps those of us that can contribute should build a climate model that works.

Gamecock
Reply to  LT
September 15, 2015 11:58 am

One must first believe it relevant.

LT
Reply to  LT
September 15, 2015 3:06 pm

Not really. If you can accurately hindcast and forecast with historically available climate metrics, you would have a climate model like no other. Then a prediction can be made. Of course you have to wait for it to come true to be vindicated. Sounds simple to me.
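One way to read “hindcast and forecast” is a simple out-of-sample check: calibrate on the early part of a record, then score the fit on held-back later years before trusting any forward prediction. The sketch below uses a synthetic record and an assumed 1980 cutoff, purely for illustration.

```python
import numpy as np

# Synthetic annual "record": a small trend plus noise (placeholder data).
rng = np.random.default_rng(1)
years = np.arange(1900, 2015)
record = 0.005 * (years - 1900) + rng.normal(scale=0.1, size=years.size)

# Calibrate a simple linear fit on pre-1980 data only, then score it on the
# held-back 1980-2014 period before using it to say anything about the future.
train = years < 1980
coef = np.polyfit(years[train], record[train], deg=1)
held_back_pred = np.polyval(coef, years[~train])
skill = np.corrcoef(held_back_pred, record[~train])[0, 1]
print(f"correlation on held-back 1980-2014 data: {skill:.2f}")
```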

September 15, 2015 11:57 am

“Model” is one of several polysemic terms that are used in drawing logically illicit conclusions from climatological arguments by application of the equivocation fallacy. Through the technique of disambiguating currently polysemic terms the potential for drawing illicit conclusions from arguments can be eliminated and a path toward attainment of policy objectives identified.
For control of the climate, information about the conditional outcomes of events is required. Currently when a climate model is referenced by the IPCC in one of its assessment reports this model is sure to be a non-information producing “model-a” and sure not to be an information producing “model-b.” Policy makers need model-b’s but climatologists continue to produce model-a’s. The corrective is for climatologists to bury the model-a’s and build a model-b.

roaldjlarsen
September 15, 2015 12:32 pm

The models fail spectacularly in a climate that is virtually not changing at all. Imagine if the climate in fact did change, with temperature fluctuations larger than the error bars …
The fact that we (they) are making a fuss out of nothing, over a period of time that is too short to figure out anything, really documents the fundamental lack of understanding of the climate and the climate system.
We have no other option; we have to conclude it’s all about the money! https://roaldjlarsen.wordpress.com/2015/08/21/facts-be-damned-its-all-about-the-money/

Alx
September 15, 2015 2:04 pm

… the British Broadcasting Corporation had enough as they stopped using their services … Invariably, the weather is the inverse of their [The UK Met Office] forecast.

Well, being consistently wrong can be helpful. When the UK Met Office says no more snow this winter, get your boots and shovels ready.

September 15, 2015 7:32 pm

Indeed it is time to get rid of those worthless models. I have expressed the same idea from time to time but nobody is listening. Same problem with the fake warming that I have pointed out. They covered up an 18-year hiatus in the eighties and nineties with a fake warming called “late twentieth century warming.” I have been calling attention to this since 2010 but again, nobody listens. Or take the computer footprints on official temperature curves. They have been there since the nineties, clearly visible in HadCRUT3 and 4, NCDC, and GISS, left over from manipulating the cover-up of the hiatus. But they look like noise, nobody cares, and nothing has been done. There is none of the discipline that a scientific organization needs, and the climate “scientists” just do what they want and ignore those pesky critics from the outside. Meaningless models and falsified temperature curves, among other things, are the result. The worst of it is that this mishmash is presented to politicians as scientific truth.

Reply to  Arno Arrak (@ArnoArrak)
September 15, 2015 9:24 pm

Arno Arrak:
In global warming climatology, “model” is polysemic making it impossible to assign a truth value to the proposition that “it is time to get rid of these worthless models” without disambiguation of “model.”

September 16, 2015 9:16 am

Indeed it is time to get rid of those worthless models and the rest of the fake science. I have expressed the same idea from time to time but nobody is listening. Same problem with the fake warming that I have pointed out. They covered up an 18-year hiatus in the eighties and nineties with a fake warming called “late twentieth century warming.” I have been calling attention to this since 2010 but again, nobody listens. Or take the computer footprints on official temperature curves. They have been there since the nineties, clearly visible in HadCRUT3 and 4, NCDC, and GISS, left over from their manipulation to cover up the hiatus. They are long spikes near the ends of years, assumed to be noise by users. Two of them sit right on top of the super El Niño of 1998, easy to spot in comparison with satellites. There is none of the discipline here that a scientific organization needs, and as a result these climate pseudo-scientists do just what they want and ignore those pesky critics from the outside. As a result, meaningless models are created and falsified temperature curves are produced, all intended to make us think Armageddon is near. This mishmash is presented as scientific truth to politicians, who act upon it by throwing more money at these fakers.

Richard Harris
September 16, 2015 1:21 pm

“Short, medium, and long-term climate forecasts are wrong more than 50 percent of the time so that a correct one is a no better than a random event. Global and or regional forecasts are often equally incorrect. If there were a climate model that made even 60 percent accurate forecasts, everybody would use it. Since there is no single accurate climate model forecast, the IPCC resorts to averaging out their model forecasts as if, somehow, the errors would cancel each other out and the average of forecasts would be representative. Climate models and their forecasts have been unmitigated failures that would cause an automatic cessation in any other enterprise. Unless, of course, it was another government funded, fiasco. …………………
Dr. Kenneth Arrow, a Nobel laureate out of Stanford, once wrote an exquisite little story about his younger days as a military weather forecaster in WWII. He had become increasingly concerned that his forecasts were an embarrassing failure and no better than pulling numbers out of a hat. He suggested to his superior that the forecasting group should be disbanded only to receive this reply to his suggestion: “The Commanding General is aware that your forecasts are not good. However, he needs them anyway for planning purposes”. To which Dr. Arrow would later remark: “To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness. Vast ills have followed a belief in certainty, whether historical inevitability, grand diplomatic designs, or extreme views on economic matters. When developing policy with wide effects on an individual or society, caution is needed because we cannot forecast the consequences.”

Reply to  Richard Harris
September 16, 2015 2:48 pm

You say “…climate forecasts are wrong more than 50 percent of the time so that a correct one is a no better than a random event” but I’m unaware of any forecasts. If you know of some please provide citations.

u.k.(us)
September 16, 2015 5:50 pm

“I am tired of the continued pretense that climate models can produce accurate forecasts in a chaotic system.”
==============
Yup, we are all tired.
Take a break if you need it, we’ve got plenty of reserves.