A Simple Truth: Computer Climate Models Cannot Work

Guest opinion by Dr. Tim Ball –

Ockham’s Razor says, “Entities are not to be multiplied beyond necessity.” Usually applied when choosing between two competing explanations, it suggests the simplest is most likely correct. It can be applied to the climate debate and to the viability of computer climate models. An old joke says that economists try to predict the tide by measuring one wave. Is that carrying simplification too far? It parallels the Intergovernmental Panel on Climate Change (IPCC) objective of trying to predict the climate by measuring one variable, CO2. Conversely, people trying to determine what is wrong with the IPCC climate models consider a multitude of factors, when the failure is completely explained by one thing: insufficient data on which to construct a model.

IPCC computer climate models are the vehicles of deception for the anthropogenic global warming (AGW) claim that human CO2 is causing global warming. They create the results they are designed to produce.

The acronym GIGO (Garbage In, Garbage Out) reflects that most people working with computer models know the problem. Some suggest that in climate science it actually stands for Gospel In, Gospel Out. This is an interesting observation, but it underscores a serious conundrum. The Gospel Out results are the IPCC predictions (projections), and they are consistently wrong. This is no surprise to me, because I have spoken out from the start about the inadequacy of the models. I watched modelers take over and dominate climate conferences as keynote presenters. It was modelers who dominated the Climatic Research Unit (CRU) and, through them, the IPCC. Society is still enamored of computers, so they acquire an aura of accuracy and truth that is unjustified. As Pierre Gallois explains,

If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.

Michael Hammer summarizes it as follows,

It is important to remember that the model output is completely and exclusively determined by the information encapsulated in the input equations.  The computer contributes no checking, no additional information and no greater certainty in the output.  It only contributes computational speed.

It is a good article, but it misses the most important point of all: a model is only as good as the foundation on which it is built, and that foundation is the weather records.

The IPCC Gap Between Data and Models Begins

This omission is not surprising. Hubert Lamb, founder of the CRU, defined the basic problem and his successor, Tom Wigley, orchestrated the transition to the bigger problem of politically directed climate models.


Figure 1: Wigley and H. H. Lamb, founder of the CRU.

Source

Lamb’s reason for establishing the CRU appears on page 203 of his autobiography, “Through All the Changing Scenes of Life: A Meteorologist’s Tale”:

“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”

Lamb knew what was going on because he cryptically writes,

“My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climates being changed as a result of human activities, primarily through the burning up of wood, coal, oil and gas reserves…” “After only a few years almost all the work on historical reconstruction of past climate and weather situations, which first made the Unit well known, was abandoned.”

Lamb further explained how a grant from the Rockefeller Foundation came to grief because of,

“…an understandable difference of scientific judgment between me and the scientist, Dr. Tom Wigley, whom we have appointed to take charge of the research.”

Wigley promoted the application of computer models, but Lamb knew they were only as good as the data used in their construction. Lamb is still correct. The models are built on data that either doesn’t exist or is by any measure inadequate.


Figure 2

The Climate Model Construct.

Models range from simple scaled-down replicas with recognizable individual components to abstractions, such as mathematical formulas, far removed from reality, with symbols representing individual components. Figure 2 is a simple schematic of the divisions necessary for a computer model. Grid spacing (3° by 3° shown) varies, and reducing it is claimed as a goal for improved accuracy. It doesn’t matter, because there are so few stations of adequate record length or reliability that the mathematical formula for each grid cell cannot be accurate.

Figure 3 shows the number of stations according to NASA GISS.


Figure 3.

It is deceiving, because each dot represents a single weather station yet, at the scale of the map, covers a few hundred square kilometers. Regardless, the reality is that vast areas of the world have no weather stations at all. Probably 85+ percent of the grid cells have no data. The actual problem is even greater, as NASA GISS, apparently unknowingly, illustrates in Figure 4.
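A rough sense of the coverage problem can be had with a few lines of code. This is a minimal sketch using an invented, clustered station distribution (not the actual GISS inventory), so the percentage it prints is illustrative only; the point is how quickly a 3° by 3° grid outruns any plausible number of stations.

```python
import random

# Hypothetical illustration only: scatter stations with the kind of clustering
# seen in Figure 3 (most stations in a few populated regions) and count how
# many 3-degree by 3-degree grid cells contain at least one station.
random.seed(0)

def random_station():
    # Crudely mimic clustering: 80% of stations fall in two "dense" regions
    # (roughly North America and Europe), the rest anywhere at all.
    if random.random() < 0.8:
        lat = random.uniform(30, 60)
        lon = random.uniform(-130, -60) if random.random() < 0.5 else random.uniform(-10, 40)
    else:
        lat = random.uniform(-90, 90)
        lon = random.uniform(-180, 180)
    return lat, lon

stations = [random_station() for _ in range(7200)]  # ~7,200 stations, as in the NOAA set quoted below

cell = 3  # degrees
occupied = {(int((lat + 90) // cell), int((lon + 180) // cell)) for lat, lon in stations}
total_cells = (180 // cell) * (360 // cell)

print(f"{len(occupied)} of {total_cells} cells ({100 * len(occupied) / total_cells:.1f}%) contain a station")
```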


Figure 4.

Figure 4(a) shows length of record. Only 1000 stations have records of 100 years, and almost all of them are in heavily populated areas of the northeastern US or Western Europe and are subject to the urban heat island effect (UHIE). Figure 4(b) shows the decline in stations around 1960. This was partly related to the anticipated increase in satellite coverage, which did not happen effectively until 2003-04. The surface record remained the standard for the IPCC Reports. Figure 5 shows a CRU-produced map for the Arctic Climate Impact Assessment (ACIA) report.


Figure 5.

It is a polar projection for the period from 1954 to 2003 and shows “No Data” for the Arctic Ocean (14 million km2), an area almost the size of Russia. Despite the significant decline in stations shown in 4(b), graph 4(c) shows only a slight decline in area covered. This is because they assume each station represents “the percent of hemispheric area located within 1200 km of a reporting station.” This is absurd. Draw a 1200 km circle around any land-based station and see what is included. The claim is even sillier if a portion includes water.
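The arithmetic behind the 1200 km assumption is easy to check. Here is a minimal sketch (city coordinates are approximate) that computes the area a single station is taken to represent and checks the Figure 6a distance with the standard haversine formula:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Area one station is assumed to represent under the 1200 km rule.
print(f"Area within 1200 km: {math.pi * 1200**2 / 1e6:.1f} million km^2")

# Calgary (51.05N, 114.07W) to Vancouver (49.28N, 123.12W), cf. Figure 6a.
print(f"Calgary-Vancouver: {haversine_km(51.05, -114.07, 49.28, -123.12):.0f} km")
```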

Figure 6a shows the direct distance between Calgary and Vancouver, 670 km; the two cities are at nearly the same latitude.


Figure 6 a

Figure 6 b, London to Bologna, distance 1154 km.


Figure 6 b

Figure 6c, Trondheim to Rome, distance 2403 km. Notice that this is roughly the 2400 km diameter of a 1200 km radius circle, and such a circle covers most of Europe.


Figure 6 c

An example of the problems with the 1200 km claim occurred in Saskatchewan a few years ago. The Provincial Ombudsman consulted me about frost insurance claims that made no sense. The government agricultural insurance agency had decided to offer frost coverage, and each farmer was required to pick the nearest weather station as the basis for decisions. In the very first year there was a frost at the end of August. Going by the weather station records, about half of the farmers received no payout because their station showed +0.5°C, yet all of them had “black frost,” so called because green leaves turn black from cellular damage. The other half got paid, even though they had no physical evidence of frost, because their station showed -0.5°C. The Ombudsman could not believe the inadequacies and inaccuracies of the temperature record, and this on an essentially isotropic plain, especially after I pointed out that the readings came from Stevenson Screens, mostly at 1.25 to 2 m above the ground and thus above the crop. Temperatures below that level are markedly different.

 

Empirical Test Of Temperature Data.

A group carrying out a mapping project, trying to use data for practical application, confronted the inadequacy of the temperature record.

The story of this project begins with coffee, we wanted to make maps that showed where in the world coffee grows best, and where it goes after it has been harvested. We explored worldwide coffee production data and discussed how to map the optimal growing regions based on the key environmental conditions: temperature, precipitation, altitude, sunlight, wind, and soil quality.

The first extensive dataset we could find contained temperature data from NOAA’s National Climatic Data Center. So we set out to draw a map of the earth based on historical monthly temperature. The dataset includes measurements as far back as the year 1701 from over 7,200 weather stations around the world.

Each climate station could be placed at a specific point on the globe by their geospatial coordinates. North America and Europe were densely packed with points, while South America, Africa, and East Asia were rather sparsely covered. The list of stations varied from year to year, with some stations coming online and others disappearing. That meant that you couldn’t simply plot the temperature for a specific location over time.


Figure 7

The map they produced illustrates the gaps even more starkly, but that was not the only issue.

At this point, we had a passable approximation of a global temperature map, (Figure 7) but we couldn’t easily find other data relating to precipitation, altitude, sunlight, wind, and soil quality. The temperature data on its own didn’t tell a compelling story to us.
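The bookkeeping problem the project describes, stations appearing and disappearing from year to year, is easy to demonstrate. This is a minimal sketch with invented station records (the real NCDC/GHCN file formats differ); it simply shows why a continuous per-location series cannot be plotted without infilling:

```python
from collections import defaultdict

# Invented records purely for illustration: (station_id, year, mean_temp_c).
# The set of reporting stations changes from year to year, so a simple
# per-location time series cannot be plotted without gaps or infilling.
records = [
    ("CA001", 1990, 4.1), ("CA001", 1991, 3.8),
    ("BR017", 1990, 22.3),                         # drops out after 1990
    ("KE042", 1991, 19.5), ("KE042", 1992, 19.9),  # comes online in 1991
    ("CA001", 1992, 4.4),
]

stations_by_year = defaultdict(set)
for station, year, _temp in records:
    stations_by_year[year].add(station)

for year in sorted(stations_by_year):
    print(year, sorted(stations_by_year[year]))
# 1990 and 1992 share only CA001, so year-to-year comparisons rest on a
# shifting, and often very thin, set of common stations.
```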

The UK may have accurate temperature measurements, but it is a small area. Most larger countries have inadequate instrumentation and measurements. The US network is probably the best, and certainly the most expensive, yet Anthony Watts’ research showed that only 7.9 percent of US weather stations achieve an accuracy better than 1°C.

Precipitation Data: A Bigger Problem

Water, in all its phases, is critical to movement of energy through the atmosphere. Transfer of surplus energy from the Tropics to offset deficits in Polar Regions (Figure 8) is largely in the form of latent heat. Precipitation is just one measure of this crucial variable.


Figure 8

It is a very difficult variable to measure accurately, and the records are completely inadequate in space and time. An example of the problem was exposed in attempts to use computer models to predict the African monsoon (Science, 4 August 2006):

Some models predict a wetter future; others, a drier one. “They cannot all be right,” says Alessandra Giannini, a climate scientist at Columbia University.

 

One culprit identified was the inadequacy of data.

One obvious problem is a lack of data. Africa’s network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.

It is likely that very few regions meet the WMO recommended density. The problem is more complex because, while temperature changes are relatively uniform (although certainly not over 1200 km), precipitation amounts can vary over a matter of meters. Much precipitation comes from showers produced by cumulus clouds that build during the day. Most farmers in North America are familiar with one section of land getting rain while another is missed.

Temperature and precipitation, the two most important variables, are completely inadequate to establish the conditions, and therefore the formula, for any surface grid cell of the model. As the latest IPCC Report, AR5, notes in two vague understatements,

The ability of climate models to simulate surface temperature has improved in many, though not all, important aspects relative to the generation of models assessed in the AR4.

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature.

But the atmosphere is three-dimensional, and the amount of data above the surface is almost non-existent. Just one example illustrates the problem. We had instruments every 60 m on a 304 m tower outside the heat island of the City of Winnipeg. The changes over that short distance were remarkable, with many more inversions than we expected.
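Finding those inversions in a profile is trivial once the measurements exist; the problem is that almost nowhere do they exist. A minimal sketch, with invented temperatures at the tower’s instrument heights:

```python
# Toy check for temperature inversions in a vertical profile of the kind the
# Winnipeg tower instruments (every 60 m up to 304 m) would record.
# Heights in metres, temperatures in C; the numbers are invented for illustration.
profile = [(0, 2.1), (60, 3.0), (120, 2.4), (180, 2.9), (240, 2.2), (304, 1.8)]

inversions = [
    (z1, z2)
    for (z1, t1), (z2, t2) in zip(profile, profile[1:])
    if t2 > t1  # temperature increasing with height marks an inversion layer
]
print("Inversion layers (m):", inversions)
```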

Some think parametrization is used to substitute for basic data like temperature and precipitation. It is not. It is a,

method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process.

Even then, the IPCC acknowledges limits and variances:

The differences between parameterizations are an important reason why climate model results differ.
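To see what a parameterization looks like in practice, here is a minimal sketch of a cloud-fraction scheme driven by grid-cell mean relative humidity. The functional form and the threshold are illustrative, not any particular model’s scheme, which is precisely the point: different defensible choices give different model results.

```python
def cloud_fraction(rh, rh_crit=0.8):
    """Toy grid-cell cloud-fraction parameterization.

    Clouds are far smaller than a model grid cell, so models replace the real
    cloud field with a simple function of cell-mean relative humidity.  This is
    one illustrative form (fraction rises from 0 at a critical RH to 1 at
    saturation); operational schemes differ, which is why parameterization
    choices are an important reason model results differ.
    """
    if rh <= rh_crit:
        return 0.0
    return min(1.0, ((rh - rh_crit) / (1.0 - rh_crit)) ** 2)

for rh in (0.5, 0.8, 0.9, 0.95, 1.0):
    print(f"RH={rh:.2f} -> cloud fraction {cloud_fraction(rh):.2f}")
```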

Data Even More Inadequate for a Dynamic Atmosphere.

They “fill in” the gaps with the 1200 km claim, which shows how meaningless it all is. They have little or no data for any of the cubes, yet the cubes are the mathematical building blocks of the computer models. It is likely that, between the surface and the top of the atmosphere, there is data for about 10 percent of the total atmospheric volume. These comments apply to a static situation, but in a dynamic atmosphere the volumes are constantly changing daily, monthly, seasonally and annually, and all of this changes with climate change.
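A back-of-envelope check, using round assumed numbers (roughly 900 radiosonde sites worldwide, a 3° grid, 20 vertical levels), lands in the same range as that 10 percent figure; the numbers are illustrative, not an official inventory:

```python
# Back-of-envelope only, with round assumed numbers (not an official inventory):
# even if every radiosonde site fully characterised the 3x3-degree column above
# it, the sampled share of a model's 3-D grid stays small.
cells_per_level = (180 // 3) * (360 // 3)            # 7,200 surface cells
vertical_levels = 20                                 # a modest vertical resolution
total_cells = cells_per_level * vertical_levels

radiosonde_sites = 900                               # assumed order of magnitude, worldwide
sampled_cells = radiosonde_sites * vertical_levels   # a sonde samples its whole column

print(f"Sampled fraction of the 3-D grid: {100 * sampled_cells / total_cells:.1f}%")
```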

Ockham’s Razor indicates that any discussion of the complexities of climate models, including methods, processes and procedures, is irrelevant. They cannot work, because the simple truth is that the data, the basic building blocks of the models, are completely inadequate. Here is Tolstoy’s comment about a simple truth.

 

“I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

Another simple truth is that the model output should never be used as the basis for anything, let alone global energy policy.

301 Comments
Mario Lento
October 16, 2014 8:22 pm

“They create the results they are designed to produce.”
++++++++
This is the crux of the problem. Circular logic… CO2 is the knob and everything else must be adjusted so that CO2 can be the cause.

Evan Jones
Editor
Reply to  Mario Lento
October 17, 2014 3:54 am

CONFESSIONS OF A HOMOGENIZER, STATION-ADJUSTER AND STATION-DROPPER
Circular logic
Yes. Entirely wrong approach. One needs to embrace (and limit) your MoE by taking it from the top down. Keeps it all on the rails. It’s a meataxe approach, but any other is futility-squared. Any game developer worth half his salt knows this (game designers, sometimes not so much!) Anyone transitioning from player to designer to developer (note the order) comes to know this in his bones.
These guys think they can take a bunch of Advanced Squad Leader maps and simulate the Eastern Front. If they ever designed a game (a historical simulation
It’s all Victor Venema’s fault. RECURSE YOU, RED BARON!
That’s what it comes down to, doesn’t it? When I was running recursive logic on pairwise comparisons, I never saw so many Excel circular logic errors in my life.
Yes, I am Unclean: Homogenize — just once — and you are a Homogenizer for the rest of your life …
Homogenization is, most emphatically, not a zero-sum game. It does not merely smear the microsite error around, as in an average. It isolates the outliers (in this case, most of the well sited stations) and adjusts them upwards to match the poorly sited majority. The result is not an average, but a considerable overall upward adjustment of the record (i.e., in exactly the wrong direction).
After the dust clears, the Little Boxes on the Hillside have all come out the same.
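The asymmetry described here is easy to see in a toy calculation. This is a minimal sketch with invented trend numbers and a deliberately naive adjust-to-the-median rule (real pairwise homogenization algorithms are more elaborate); it only illustrates how adjusting a well-sited minority toward a poorly sited majority moves the network mean up rather than averaging the error away.

```python
# Invented trends in C/decade for 10 stations: the 3 well-sited stations
# (lower trends) are the statistical outliers relative to a poorly sited majority.
trends = [0.10, 0.12, 0.11] + [0.30, 0.28, 0.31, 0.29, 0.32, 0.30, 0.27]

raw_mean = sum(trends) / len(trends)

# Naive "homogenization": any station far from the network median gets adjusted
# to the median.  Here the outliers are the well-sited stations, so they are
# adjusted upward and the network mean rises instead of staying put.
median = sorted(trends)[len(trends) // 2]
adjusted = [t if abs(t - median) < 0.1 else median for t in trends]
adjusted_mean = sum(adjusted) / len(adjusted)

print(f"raw mean {raw_mean:.3f}, after adjustment {adjusted_mean:.3f} C/decade")
```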

Tim
Reply to  Evan Jones
October 18, 2014 5:20 am

More like Little boxes at the Airport and the car park. In the last 25 years there has been an accelerating reduction in thermometer counts globally with the pace of deletion rising rapidly in recent years. Over 6000 stations were active in the mid-1990s. Just over 1000 are in use today.
The stations that dropped out were mainly rural and at higher latitudes and altitudes — all cooler stations.

Auto
Reply to  Mario Lento
October 17, 2014 1:50 pm

http://www.imo.org/blast/blastDataHelper.asp?data_id=24475&filename=1293.pdf
This link shows – on page 6/6 – for one month (August 2008) only – where ship observations were made.
<400,000 for the month, globally.
Error bars of mental arithmetic...
Average – one per 300-ish square miles of ocean. For the month.
One per 90-100 thousand square miles each day.
Area of the UK – about 92,000 square miles.
I am encouraging ship masters I know to have their ships become Voluntary Observing Ships.
Auto

denniswingo
October 16, 2014 8:32 pm

It is indeed unfortunate that the study of historical records that H.H. Lamb was so expert at was completely abandoned by the computer mongers.
In the U.S. it happened when Al Gore gave Kevin Trenberth at the University of Michigan $5 million to buy a supercomputer. Took the money from NASA. This is while at UAH John Christy was starved of funds…

Evan Jones
Editor
Reply to  denniswingo
October 17, 2014 3:59 am

Do models work?
They “perform as advertised”. More’s the pity. There is also a known advisory: Do not homogenize a system with mostly bad datapoints. Right there on the warming label (between the disclaimer and the skull and crossbones).

Just an engineer
Reply to  Evan Jones
October 17, 2014 5:18 am

“Right there on the warming label.” Intentional?

norah4you
October 16, 2014 8:33 pm

As we said back in 1971 when I got my Systemprogrammer Exam…: Bad Input -> Bad Output. (short version was BIBO)

SandyInLimousin
Reply to  norah4you
October 17, 2014 1:04 am

Bad Input -> Guesswork -> Nonsense Out = BIGNO

Reply to  SandyInLimousin
October 17, 2014 1:06 am

OK BIGNO and BIBO. Some persons must have been asleep or dreaming from Primary School on…..

Reply to  SandyInLimousin
October 17, 2014 2:37 am

Do models work?
BIGNO!

Reply to  SandyInLimousin
October 17, 2014 8:32 am

Or: Bad Input -> Nonsense -> Garbage Out = BINGO!

Dr. S. Jeevananda Reddy
October 16, 2014 8:36 pm

“I watched modelers take over and dominate climate conferences as keynote presenters. It was modelers who dominated the Climatic Research Unit (CRU), and through them, the IPCC” — it is true; I made such observations a few decades back with reference to India Meteorological Department/IITM research priorities. At that time, using ground-based data, excellent short-range forecasts were given. Here the experience with local conditions was given top priority. But, with collaboration from USA groups [mostly Indians], the shift changed to model-based forecasts — with poor quality of predictions. Here the promotions and awards were given to model groups rather than land-based forecasters. Also, in several fields of meteorology the deficiencies in models were discussed and published in the 70s & 80s. With the sophisticated computers, the entire research shifted. I did my research with a 256 kb computer purchased for US$ from South Africa — working in Mozambique. The programmes were written by me in Fortran IV.
Dr. S. Jeevananda Reddy

Dr. S. Jeevananda Reddy
Reply to  Dr. S. Jeevananda Reddy
October 16, 2014 8:38 pm

US$ 3000
Dr. S. Jeevananda Reddy

Reply to  Dr. S. Jeevananda Reddy
October 17, 2014 7:37 pm

Sir, your experience shows that climate change marketeers masquerading as grant-snuffling scientists are not just laughably bad at science; what they do has major, potentially lethal, consequences.

Mario Lento
October 16, 2014 8:39 pm

Tim: This is a nice post… thank you!

policycritic
Reply to  Mario Lento
October 17, 2014 12:26 am

Yeah, I agree. I think it is explosive, frankly.

October 16, 2014 8:41 pm

In “The death of economics” by Paul Ormerod, he makes an important point: a model that is nearly correct can be completely wrong. And there is no relationship between relative correctness and accuracy. A bad model can give a more accurate result than a good one, by accident. Since the climate models are by necessity incomplete, they can be entirely wrong. Therefore no inference that “only with CO2 can we make it work” has any validity. None whatever. If that is all they have, the world is wrecking itself, wrecking the environment, impoverishing the poor, for no good reason.

cg
October 16, 2014 8:49 pm

[Snip. OT – mod]

zenrebok
Reply to  cg
October 16, 2014 9:00 pm

cg [snipped]
[Thanks for spotting the OT – mod]

LewSkannen
October 16, 2014 8:56 pm

Another interesting snippet from the book ‘Chaos’ by Gleick (brother of the other one) is that even if you measured perfectly all the relevant average parameters (temperature, humidity, wind velocity etc.) in every cubic meter of the entire atmosphere, and had a perfect algorithm to crunch the numbers, you would be unable to make any predictions more than a month ahead, simply due to the chaotic effects on the algorithm from the tiny discrepancies between the average parameter of each cube and the actual state of each cube.

xyzzy11
Reply to  LewSkannen
October 16, 2014 9:30 pm

Again, I cite the wonderful article from Pointman (albeit 3 years ago) that indicates, very simply, the futility of using models to predict anything. See http://thepointman.wordpress.com/2011/01/21/the-seductiveness-of-models/

Editor
Reply to  LewSkannen
October 16, 2014 10:04 pm

Thx LewSkannen. You are absolutely correct, and the post really needed to include that information. The missing data that the post refers to pales into insignificance beside the utterly useless structure of the climate models. They aren’t climate models, they are low quality weather models. Even the very best weather models can only successfully predict a few days ahead. A climate model would have in it the things that actually drive climate, such as orbit, sun, clouds, ocean oscillations, etc, plus GHGs of course. Its structure would be quite different to current “climate” models, as it could not be based on small slices of space-time for the reason you give.

Bill Marsh
Editor
Reply to  Mike Jonas
October 17, 2014 5:01 am

They are not ‘climate’ models, they are ‘circulation’ models, as in General Circulation Models, intended (but failing) to model atmospheric circulation.

JohnTyler
Reply to  LewSkannen
October 17, 2014 7:42 am

Numerical solutions to partial, non-linear differential equations ALWAYS produce a tiny error and if you seek just the solution, numerically produced, to just one equation, you can construct the algorithm to make the error insignificant. (Actually, this is true also for linear – and far SIMPLER – differential equations as well).
But when the results of many of these numerical solutions are used as input parameters for the next set of equations, the “error” becomes magnified. Repeating this process thousands of times produces results with huge errors and the final results are simply WRONG.
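A minimal sketch of that compounding, using the logistic map as a stand-in nonlinear stage and an artificial per-step rounding playing the role of truncation error:

```python
# The logistic map stands in for one nonlinear stage; a tiny rounding applied at
# each step mimics the truncation error of a numerical solver whose output feeds
# the next stage.  After a few dozen chained steps the two answers share nothing.
def iterate(x, rounding=None, n=60, r=3.9):
    for _ in range(n):
        x = r * x * (1 - x)          # nonlinear stage
        if rounding is not None:
            x = round(x, rounding)   # inject a per-step error of order 1e-9
    return x

exact = iterate(0.3)
perturbed = iterate(0.3, rounding=8)
print(exact, perturbed)              # completely different after 60 steps
```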

PiperPaul
Reply to  LewSkannen
October 17, 2014 10:21 am

Evil twin phenomenon?

Duster
Reply to  LewSkannen
October 17, 2014 2:50 pm

This observation was originally made by Edward Lorenz in Deterministic Nonperiodic Flow in the 1960s. Lorenz concluded that very small variations could make immense changes in the state of deterministic systems over time. Since Lorenz’s work was in computational meteorology, one would have thought the modelers would have considered the conclusions and at least qualified the discussion of model accuracy.

chrisyu
Reply to  LewSkannen
October 17, 2014 3:10 pm

Even for a simple system, a double pendulum, accurately predicting the position and velocity after 15-20 swings becomes impossible. Yet CC scientists claim they can predict the climate 100 years out. Show me an accurate predictive computer model for a double pendulum and then maybe we can talk about your climate model.

ferdberple
Reply to  chrisyu
October 17, 2014 5:48 pm

http://www.math24.net/double-pendulum.html
A working model of double pendulums, about 1/2 a page down, showing how unpredictable even a simple dynamic system is from first principles.
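The point is easy to reproduce. Here is a minimal self-contained sketch (unit masses and lengths, the standard textbook equations of motion, RK4 integration): two runs differing by one nanoradian in the starting angle end up in completely different states after twenty simulated seconds.

```python
import math

# Minimal double pendulum (unit masses and lengths), integrated with RK4.
G = 9.81

def deriv(state):
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = 3 - math.cos(2 * d)           # common denominator for m1 = m2 = l1 = l2 = 1
    a1 = (-3 * G * math.sin(t1) - G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * (w2 * w2 + w1 * w1 * math.cos(d))) / den
    a2 = (2 * math.sin(d) * (2 * w1 * w1 + 2 * G * math.cos(t1)
          + w2 * w2 * math.cos(d))) / den
    return (w1, a1, w2, a2)

def rk4_step(state, h):
    def add(s, k, f): return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def run(theta1):
    state, h = (theta1, 0.0, math.pi / 2, 0.0), 0.001
    for _ in range(20000):               # 20 simulated seconds
        state = rk4_step(state, h)
    return state[0]

# Two initial angles differing by 1e-9 radians give wildly different end angles.
print(run(math.pi / 2), run(math.pi / 2 + 1e-9))
```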

LewSkannen
Reply to  chrisyu
October 17, 2014 7:42 pm

I remember that being mentioned in the book. After 2 minutes to calculate the position of the pendulum you would need to have the initial conditions exact, down to the gravitational effect of a rain drop two miles away.

LewSkannen
October 16, 2014 9:00 pm

The other analogy I like to use is to relate CO2 to Currency Forgery.
Everyone agrees that currency forgery drives up inflation in the same way that everyone agrees that CO2 acts as a GHG.
No sane person, however, expects to be able to predict the world economy in a hundred years’ time just based on the rate of currency forgery.

David A
Reply to  LewSkannen
October 17, 2014 11:17 pm

True, but currency forgery is theft, all bad.
CO2 is clearly net beneficial.

Nick Stokes
October 16, 2014 9:24 pm

“It was modelers who dominated the Climatic Research Unit (CRU),”
Very strange ideas of models here, if GCM’s are what is meant, as Fig 2 implies. Which modellers dominated CRU?
“It doesn’t matter, because there are so few stations of adequate length or reliability. The mathematical formula for each grid cannot be accurate.”
What role are weather stations imagined to play in a GCM?
I think a lot of different things are mixed up here.

Keith Willshaw
Reply to  Nick Stokes
October 17, 2014 1:26 am

Nick Stokes asked
‘What role are weather stations imagined to play in a GCM?’
1) Computer models rely on input data – Weather stations are the source of that data.
I use models that simulate thermodynamic processes. If I don’t have valid input data the model CANNOT produce anything useful.
2) Validation. When I run a new computer model I compare its output with real life experimental results. If the model is not validated this way its worse than useless.

Nick Stokes
Reply to  Keith Willshaw
October 17, 2014 2:01 am

GCM’s do not use station data as input. They use Earth properties – topography, forcings etc. Some recent programs try to make decadal predictions from known state input. That would generally be a state of the kind you would get from a weather forecasting model. But there is no direct relation between stations and a GCM grid.
Validation – GCM’s do not predict station data. People might wish to compare them with some kind of global or regional index. But they are climate models, not weather models.

DEEBEE
Reply to  Keith Willshaw
October 17, 2014 2:30 am

“GCM’s do not predict station data.”
As usual with Nick, watch the pea. Nobody is claiming that, Nick — just make an obvious assertion and wait till someone falls into it so you can swoop and conquer and publicly preen your intellect.

Nick Stokes
Reply to  Keith Willshaw
October 17, 2014 3:13 am

This post says that computer models cannot work, and then says a whole lot about deficiencies in station data, and particularly related to GCM grids.. But GCM’s don’t use station data. So how can issues with it stop them working?

DirkH
Reply to  Keith Willshaw
October 17, 2014 4:44 am

Nick Stokes
October 17, 2014 at 3:13 am
“This post says that computer models cannot work, and then says a whole lot about deficiencies in station data, and particularly related to GCM grids.. But GCM’s don’t use station data. So how can issues with it stop them working?”
It makes it impossible to validate a climate model. And in fact, no climate model has ever been validated.

Duster
Reply to  Keith Willshaw
October 17, 2014 3:17 pm

Nick Stokes says ….GCM’s … use Earth properties – topography, forcings etc. …
Which begs the question of how well topography is modeled to start with, and continuing with how well any other of these properties are estimated or modeled.
Validation – GCM’s do not predict station data. People might wish to compare them with some kind of global or regional index. But they are climate models, not weather models.
Nick, you’re dodging issues by “scene shifting.” If, using more parameters than necessary to model an elephant, no set of GCMs can reasonably track real world data throughout the available instrumental record, then there is clearly error in the application of “Earth properties” in the models or a fundamental misunderstanding of those properties. Since the models also plainly display a bias in the direction in which they miscast the real world, the bias offers a clue in where the error must lie. The fault cannot be in the real world, and therefore, can only be in the theory, or the implementation of the theory in the computer model.

CodeTech
Reply to  Nick Stokes
October 17, 2014 2:01 am

What role are weather stations imagined to play in a GCM?

What a truly bizarre question to ask. Truly.

Nick Stokes
Reply to  CodeTech
October 17, 2014 2:02 am

Can you answer it?

CodeTech
Reply to  CodeTech
October 17, 2014 2:18 am

The fact that you can’t proves that you have no idea what you’re even doing here.
How embarrassing for you.

DEEBEE
Reply to  CodeTech
October 17, 2014 2:31 am

At least for validation, Nick. Unless the anomalies are being produced from your nether regions.

Nick Stokes
Reply to  CodeTech
October 17, 2014 3:09 am

Here is one relatively simple, well-documented GCM, CAM 3.0. You will not find station data input anywhere.
This post says that GCM’s cannot work, apparently because of some deficiency in station data. That’s just not true. They don’t use it. It might be that someone later finds a mismatch with some index derived from it. If so, then they obviously have the necessary station data to do that.

Reply to  CodeTech
October 17, 2014 3:33 am

Some nut wrote, “What role are weather stations imagined to play in a GCM?”
If the climate models don’t use real world data for input or for validation then they are just mega-millions computer games. I do agree that the present “climate models” are useless on their face and any data from the real planet earth would just get in the way of providing the answer that the funding agencies want to see.
I am just surprised that an alarmist would just up and admit that real data and climate models are total strangers.
There are people condemned, needlessly, to energy poverty by people like Stokes. It is a travesty.

Reply to  CodeTech
October 17, 2014 3:35 am

As someone else pointed out, there are no climate models at all. There are just low quality weather models, and it would surprise us if they got the weather right 90 days out.

DirkH
Reply to  CodeTech
October 17, 2014 4:45 am

Nick Stokes, do you say that climate models should not be validated? Why should they not be validated?

TYoke
Reply to  CodeTech
October 17, 2014 3:42 pm

Nick wrote: “This post says that GCM’s cannot work, apparently because of some deficiency in station data. … It might be that someone later finds a mismatch with some index derived from it. If so, then they obviously have the necessary station data to do that.”
Those sentences are the kernel of your argument, and that argument contains a pair of errors.
No one is arguing that the station data is used in an entirely un-massaged form. Of course the direct observations are condensed into some sort of “index” that is gridded, averaged, smoothed, extrapolated, etc.
The problem is that just because it is an “index” that is incorporated or used to validate the model, instead of raw data, most certainly should not be construed to mean that the raw observations somehow become unnecessary to the model. Quite the contrary. The intermediate modeling steps between the raw data and the ultimate GCM merely extend the chain of inferences between Garbage In and Garbage Out.
You reveal your recognition of this reasoning error in your last sentence: “they obviously have the necessary station data to do that”. Why obviously? The inadequacy of the station data is the whole point of the article. Your bland and unsupported assertion of adequacy does not make that data adequate.

Tim Hammond
Reply to  Nick Stokes
October 17, 2014 4:07 am

So how exactly do you think you know what the temperature is at any given time and any given place if you don’t have something measuring the temperature there?
You have a model that accurately replicates temperature in any given location entirely from first principles do you?

CodeTech
Reply to  Tim Hammond
October 17, 2014 5:32 am

How many [insert name of group, in Canada a favorite is Torontonians] does it take to change a light bulb?… just one. He holds it and the world revolves around him.
Now, how does a GC Modeler determine tomorrow’s weather? He runs the model from the creation of the planet 4.5 billion years ago.

Bob Boder
Reply to  Tim Hammond
October 17, 2014 8:21 am

Nick doesn’t care; it’s what the models say that is important to him, not what they mean to anything, anyone or any part of reality.

Keith Willshaw
Reply to  Nick Stokes
October 17, 2014 5:14 am

Nick Stokes Said
‘GCM’s do not use station data as input’
Sorry old boy, but they assuredly do. The NASA GISS ModelE has a data file that is 191 Mb compressed and contains a massive amount of initialization data in great detail, from coarse factors such as surface temperature down to things like transient 3-D aerosol concentrations. If you don’t have basic input data such as cloud cover, surface temperature and prevailing wind direction you have no hope of even coming close to modelling climate.
‘Validation – GCM’s do not predict station data.’
Indeed they don’t. The fact is GCM’s are supposed to model the very conditions that the stations measure such as precipitation, temperature, wind speed etc. They have however failed miserably to do so.

Uncle Gus
Reply to  Keith Willshaw
October 17, 2014 11:08 am

Once again, I wish this site had a “Like” button.

Alx
Reply to  Nick Stokes
October 17, 2014 7:38 am

Data is input into formulas. Data is used to validate formulas. Data does not equal formulas. GCMs try to represent interactions/processes between the atmosphere, oceans, and land surface. They are poorly suited to forecasting the climate’s response to increasing CO2. Your comment suggests temperature is not an input into a GCM, only an output, which is as strange as an elephant in a teacup to me. But ignoring that, what is stranger than flocks of zebras flying over Manhattan is that you cannot acknowledge that both the data and the models are clearly insufficient to do any climate forecasting except for entertainment purposes.
Using that methodology I will now go balance my checkbook using a random number generator and an abacus with an unknown number of beads missing.

rgbatduke
Reply to  Nick Stokes
October 17, 2014 8:26 am

I’m trying to decide if this is another “I agree with Nick Stokes day”. On the one hand, you are absolutely correct when you say that weather stations do not contribute direct input to climate models. On the other hand, the climate models do have to be initialized, and because at least some of the subsystems that play a major role in the time evolution of the climate have very long characteristic times and because the climate is highly non-Markovian, one has to start them from initial conditions that are not horribly out of balance with respect to reality. This problem is somewhat exacerbated by the chaotic nature of the dynamics, of course.
Still, Tim Ball is incorrect to assert that it is the lack of station data per se that is the downfall of climate models. The downfall of climate models comes from the explicit assumption that the average of the averages of many climate models, each one producing an ensemble of chaotic trajectories from initial conditions that are explicitly assumed to be irrelevant in the long run, is a useful predictor of the one actual chaotic trajectory the Earth is following while self-integrating the actual physics at microscopic length scales instead of integrating equations that are called “physics” on an absurdly large spatiotemporal stepsize but that somebody basically made up.
Let’s see how sensible this is. Here is a typical Feigenbaum tree:
http://log.1am.me/2010/12/bifurcation-trees-and-fractals.html
Typical is good enough, since Feigenbaum basically showed that this tree has universal properties for iterated maps that lead to chaos, and the weather/climate system is of course the “iterated computational map” in which chaos was first discovered.
Even though the tree structure of the periods is universal, it is not insensitive to the underlying parameters used to generate it. Indeed, small changes in those parameters can lead to large shifts in precisely where the bifurcations occur, in the specific distribution of the strange attractors in whatever high dimensional space one evaluates the dynamics (in the case of weather/climate) but even in the simple few-dimensional systems typically used to generate graphs like this. We’ll stick to the few-dimensional case.
So imagine a set of figures like this, each one generated by a model, each model having a slightly different implementation of an iterated map (e.g. different parameters, slightly different functions being solved per step, different stepsize, and in all cases stepsizes so vastly larger than the stepsize needed to actually track the underlying nonlinear ordinary differential equations that pretending that the iterated map is somehow a “solution” to the ODEs becomes an exercise in the suspension of disbelief akin to that required by a typical space opera with FTL ships bopping all over the Universe in human-short times). Running each model with slightly different initial conditions in every case leads to completely different trajectories — it is in fact the opposite of the behavior of damped, driven linear oscillators, where initial conditions are indeed irrelevant, producing a transient behavior that damps away and leaves one with a nice, clean, periodic signal slaved to the periodic driver. In chaotic models this is in some sense inverted — even starting with almost the same initial condition, the differences grow until the correlation function between two initially almost identical trajectories decays pretty much to zero as the two models are, on average, completely decorrelated for the rest of eternity.
So let’s mentally average over a large set of these decorrelated trajectories, per model. This, of course, effectively linearizes them — the resulting trajectory is (gasp!) very close to the original non-chaotic linear-response trajectory that was split up into the tree by the nonlinearities in the appropriate regime. Do this for all of the distinct models, and one has a collection of nice, tame, linearized trajectories, all of them averages over chaos, all of them different (possibly even substantially different), and then let’s average the averages all together to produce a grand ensemble superaverage of the running averages of the individually produced chaotic trajectories evaluated by integrating a made-up dynamical system at distinct spatiotemporal length scales that completely erase the actual variation observed on all smaller length scales and that Nature seems to think are important to the ultimate dynamics of heat transport efficiency as it self-organizes them quite differently as conditions change on a must smaller length scale than the models track.
Now, claim that this super-averaged trajectory is a useful predictor of the future climate, even though the individual chaotic trajectories produced by each climate model have grossly incorrect features compared to the actual trajectory of the one chaotic nonlinear climate we live in.
Sure, that works. If I apply it to a damped, driven rigid oscillator, I can prove that on average we expect to see it at rest. Or, if I’m slightly more sophisticated, I can show that it is still oscillating periodically. Or, if I’m nefarious, I can tweak the underlying model parameterization and “prove” lots of stuff, none of which has the slightest predictive value for the one trajectory that is actually observed, the single strange attractor of the underlying dynamics.
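A minimal sketch of that averaging argument, with the logistic map standing in for a chaotic model: every individual run wanders chaotically, yet the ensemble mean of a hundred runs started from nearly identical initial conditions settles into a smooth curve that resembles none of the trajectories actually followed.

```python
import random

# The logistic map stands in for a chaotic "model".  One hundred runs start
# from nearly identical initial conditions; each run wanders chaotically, but
# the ensemble mean is far smoother than any trajectory actually followed.
random.seed(1)
r, n_runs, n_steps = 3.9, 100, 200

runs = []
for _ in range(n_runs):
    x = 0.4 + random.uniform(-1e-3, 1e-3)   # near-identical starting points
    traj = []
    for _ in range(n_steps):
        x = r * x * (1 - x)
        traj.append(x)
    runs.append(traj)

ensemble_mean = [sum(run[t] for run in runs) / n_runs for t in range(n_steps)]
print("one run, last 5 steps:      ", [round(v, 3) for v in runs[0][-5:]])
print("ensemble mean, last 5 steps:", [round(v, 3) for v in ensemble_mean[-5:]])
```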
Of course nobody would be that stupid if they were studying a low dimensional chaotic system, or merely demonstrating chaos for a class. On the contrary, they would be pointing out to the class that this sort of thing is nearly pointless, because this:
http://video.mit.edu/watch/double-pendulum-6392/
is what one observes, completely differently every time, even without actually driving the e.g. double rigid pendulum with a noisy not-quite periodic force so that it never manages to come close to damping down to “simple” linearized small-oscillation behavior.
Here is one place where Tim Ball’s observations above are quite apropos. I think you missed this, but one of the points of the inadequacy of the measurement grid relative to the absurdly inadequate integration grid is that the latter by its nature requires assigning “average” numbers to entire grid cells. Those average numbers, in turn have to in some sense be physically relevant to the actual numbers within each cell. For example, would you say that the models make the implicit assumption, when they assign a temperature of 291.6719345 K and air pressure of .993221573 bar and water vapor content of 0.08112498 and air speed of 2.11558324 m/sec in some single vector direction to a volume of 10,000 cubic kilometers of air (100x100x1 km cell) is supposed to represent that actual average of the relevant measured quantities within that cell? Or are they simply toy parameters, variables created within a toy model that bear little to no necessary resemblance to the variables they are named after?
Damned either way, of course. If they are indeed supposed to represent coarse grained averages, the Ball’s observation that when he personally made actual observation of temperature distributions in just one small portion of the ground, he found that most of the assumptions of homogeneity required to assign an average value to even much smaller cells are simply false. He observed, for example, numerous inversions in the first few hundred meters of just one point in one cell, where the well-established dogma is that the lapse rate is (almost always) monotonic. (I would point out that a pattern of inversions is completely consistent with the topological folding process of lateral turbulent airflow across the warmed ground surface and could probably have been predicted — one can certainly observe them in everything from the original rotational mixing experiments to the patterns made by rising smoke from a cigarette or stick of incense.) On the other hand, if one asserts that they are not supposed to represent the actual averages of the physically relevant quantities in the grid cells, if one asserts that they are some sort of renormalized parameters that are only somehow monotonically connected to the actual averages by some sort of map, well, you’ve just acknowledged that in that case we have no good reason to think that they averages thus produced in 50 years represent the global averages that will be observed at that time! If they aren’t even an accurate representation of the average temperature(s) per cell now, but are the result of an unknown transformation of the average temperatures of the cell into the space where the coarse-grained dynamics is being evaluated as if the cells were “microscopic” in size, using effective interactions between the renormalized variables, how do you expect to be able to map the results of the computation back to actual temperatures then?
That’s really the problem, isn’t it? One of the appealing things about GCMs is that they produce something that really looks like actual weather. With enough tweaking (and a bit of brute force renormalization of energy per timestep to eliminate drift that would otherwise cause them to fail so badly that nobody could miss it) they produce a simulated world on which storms happen, rain falls, droughts occur, all with lots of beautiful chaos, and if one works very hard, one can actually keep the models from having egregious instabilities that take almost all initial states and (for example) collapse to the iceball Earth strange attractor when they are applied to initial conditions corresponding to (say) the middle of the Wisconsin glaciation, with runaway cooling from albedo-driven negative feedback, especially at times in the Wisconsin when atmospheric CO_2 apparently dropped to under 200 ppm and nearly caused mass extinction of plant life.
But the weather they produce isn’t the real weather. It doesn’t even qualitatively correspond to the real weather. And while the weather they produce can certainly be averaged, and while one can certainly assert that this average is “the climate”, the values in each cell of the system aren’t even in a simple one-to-one mapping with the average temperatures one would obtain in the cells with a simple rescaling of the integration grid, let alone the 30 orders of magnitude rescaling needed to contemplate resolving those little whorls of turbulent folding produced by every warmed leaf in the sun.
The conflict here is as old as physics itself. We cannot solve the actual problem of weather or climate and we know it. To be perfectly blunt, we will never be able to solve the actual problem of weather prediction or climate prediction, at least not in any sense that is formally defensible. We therefore tell ourselves that most of the things that prevent us from being able to do so do not matter, that we can average them away. We appeal to damping to erase the otherwise intractable detail that keeps us from being able to proceed. Since this work has to be funded, we make the infinitely adjustable assertion that the next doubling of computational power, the next halving of computational spatiotemporal step size, will at the same time reveal more of the missing detail and make the models more accurate, and yet that the unrevealed detail still remaining isn’t important to the long term predictions. We want to have our cake — models that already work well enough at the level we can afford to compute — and eat it too, the need to build models that work better as soon as we can afford to compute better.
I used to see this all the time in field theory computations presented at conferences — so much so that I named it the “fundamental theorem of diagrammatic quantum field theory”. Speaker after speaker would present the results of their computations in some problem in QFT, and explain the Feynman diagrams that they included in their computation. This was typically a fairly small set of all Feynman diagrams for the problem, because these problems are also basically uncomputable as there are an infinite set of Feynman diagrams and little a priori reason to think that any given diagram omitted from a given computation is, in fact, unimportant. Everybody knows this. We can’t even prove that sums of specific sub-classes of diagrams necessarily converge, even as the formal theory is derivably exact if only one could compute and sum over an infinite number of diagrams.
Each talk would thus without fail begin with a short introduction, explaining the diagrams that were included (usually because they were the diagrams they could algebraically derive formulas for and that they could afford to add up with whatever their computing resources of that particular year were) and then they would invoke the fundamental theorem: “All of the diagrams we omitted from this computation do not significantly contribute.”
Amazingly, the very same people would return two years later, where, armed with new computers and more time they would have redone the entire computation and included several new diagrams omitted two years earlier. The results they obtained would (unsurprisingly) have changed. And yet there it was, at the beginning of the new talk, instead of saying “Hey dudes, we learned that you just can’t trust this diagrammatic perturbation theory thing to work because we included a couple of new diagrams that we thought were unimportant a couple of years ago and they turned out to be important after all, maybe we should all try a completely different approach to solving the problem.” they opened, as before with “All of the diagrams we omitted…”
And this was still better than climate science, because in at least a few cases the computations could be directly compared to measured quantities known to fairly high precision. So each talk would not only compare results of computations carried out to different order in diagrammatic perturbation theory with sums over all ladders, or single loops, or whatever (while still omitting, note well, countless diagrams even at low order as there are a lot of diagrams for even fairly simple systems), they would compare the results to the numbers they were trying to compute. Sometimes adding diagrams made the results better. Sometimes it made them worse. As I said, no theorem of uniform convergence, no real theorem of asymptotic convergence in sums over diagrams of a specific type.
But as a jobs program for physicists, it was and remains today marvelous. And from that point of view or the point of view of pure science, the work is by no means without merit! Some very famous physicists who have made real contributions have done their time doing “wax on, wax off” computations of this sort, or “swinging the chain, swinging the chain” (two movie references, in case this is confusing, google them:-).
At least they knew better than to take the results of forty distinct diagrammatic perturbation theory computations done by forty different groups to forty different orders with forty different precisions and forty different total expenditures of computational resources and with forty different sources of graduate-student-induced computational/method error, carried out over forty years and then do the flat average over all of the results and present the result to the world as a better evaluation of the number in question than the direct experimental measurement of the actual number in nature!
I think that is Tim’s real assertion here. In actual fact, we haven’t got any particularly good idea what the global average temperature actually is. In a sense it is worse than our knowledge of the output of climate models that are supposed to simulate it. We have data on an irregular grid that samples some regions finely and most regions enormously coarsely, even worse than the better GCMs (that at least use a regular grid, even if the usual lat/long decomposition in degrees on a sphere is dumb, dumb, dumb). It is probably excessive to claim that we know the actual global average surface temperature to within one whole degree absolute — if we could agree on some way to define global average surface temperature, if the global average surface temperature we agreed on could be somehow mapped in a sensible way into the average implemented in a renormalized way at the grid resolution we can afford to compute this year.
In summary, let’s say that today I half agree with Nick. Yes, the measurement grid is sadly irrelevant to starting climate models as much as it is in fact relevant to weather prediction with pretty much the same sort of models. The models do not use any actual measurements to initialize, which is at least partly due to the fact that we don’t have any set of actual measurements they could use to initialize that wasn’t egregiously in error over most of the globe. So they start up any old way (hopefully close enough to some actual state of the planet at the past times one starts the models that the model building process remains on the same attractor, oops, hmmm, maybe the correspondence of initial state and reality matters after all!) and hope that after a decade or so the specific behavior associated with the transient goes away and leaves them with, um, “something” predictive of the actual future climate.
It is highly relevant to the model building process itself, as well as the model validation process. Specifically, how can we know that the detail omitted in the climate models is unimportant, especially when it is important in every other chaotic system we’ve ever studied, including many that are far simpler? And, how can we demonstrate that these enormously complex models are actually working when we don’t have a very good idea of the actual state of the planet that can be compared to their predictions? When we don’t know the actual mean surface temperature to within a whole degree (or even how to define “mean surface temperature”), and when our ignorance of that temperature however you define it increases significantly as one moves backwards in time, how can we even resolve model failure?
rgb

Bob Boder
Reply to  rgbatduke
October 17, 2014 8:59 am

So in essence you are saying “nick is right, but the models are useless”.

cC Reader
Reply to  rgbatduke
October 17, 2014 10:44 am

Thank you!

Uncle Gus
Reply to  rgbatduke
October 17, 2014 11:17 am

Dear God. Do you stay in all day or do you just type very, very fast?
Nevertheless, the system must be classically damped in some way, since it’s cycled closely around the freezing point of water for several billion years. (Not that that excuses shoddy thinking, mind you.)

mullumhillbilly
Reply to  rgbatduke
October 17, 2014 3:53 pm

Thank you rgb for a great, landmark statement of all that is wrong with GCMs and the ludicrous, preposterous averaging of many nonlinear dynamic systems using big, big grid cells. Having read Gleick’s “Chaos” many years ago, I’ve always wondered how any scientist could claim to be a climate scientist when they apparently hadn’t read and fully absorbed the implications of Lorenz’s 1963 paper.

Reply to  rgbatduke
October 17, 2014 4:02 pm

I do so want to believe in Dr. Brown’s conclusions, since he knows so many buzz words that I don’t and because he arrives at destinations I find congenial. But I have found his logic wanting the few times I’ve actually taken the time to slog through his logorrhea.
I don’t mean just his laughable recent pontificating about nuclear-power patents (which I had dealt with before he was even out of college). I mean the areas about which he professes actual expertise, such as thermodynamics and statistical mechanics. He is among those responsible for my conclusion that science is too important to be left to scientists.
Believe me, we lawyers would love to be able to rely on the experts. But experience has shown us that we cannot. We challenge the experts not because we think we’re smarter than they are but rather because they so often prove themselves wrong.
Over the years I have enjoyed Dr. Brown’s cheerleading regarding the poor resolution of climate models, since it tended to support the impression I had formed. As a serious citizen, though, I have sadly concluded that I can no longer look to him for confirmation.
And I would caution other laymen against doing so.

Nick Stokes
Reply to  rgbatduke
October 17, 2014 10:27 pm

RGB,
“On the other hand, the climate models do have to be initialized, and because at least some of the subsystems that play a major role in the time evolution of the climate have very long characteristic times and because the climate is highly non-Markovian, one has to start them from initial conditions that are not horribly out of balance with respect to reality.”
As with most CFD, that is not really true. They aren’t trying to solve an initial value problem. They are trying to determine how forcings and climate come into balance. That is why they usually wind back several decades to start. It’s not to get better initial conditions – quite the reverse. There will be error in the initial conditions which will work its way out over time. If they start too hot, heat will radiate away over that windup time, until by the time you reach the period of interest, it is about right relative to the forcing.
“I think you missed this, but one of the points of the inadequacy of the measurement grid relative to the absurdly inadequate integration grid is that the latter by its nature requires assigning “average” numbers to entire grid cells”
That’s what you do in CFD, or any PDE solution, on any scale. You solve for nodal values, and it’s related to continuum by interpolation (or averaging after integration). And with CFD, direct Navier-Stokes solution is impractical on any scale. It’s always done with some kind of turbulence modelling (except at the very viscous end). Grid cells are never small enough. There is always some scale that you can’t resolve. But CFD is big time useful. You get answers on the scale that you can resolve. And with GCM’s and climate, that scale is useful.
“Or are they simply toy parameters, variables created within a toy model that bear little to no necessary resemblance to the variables they are named after?”
They generally relate to conserved quantities. Heat, momentum etc, and are best expressed that way in equations. So when you refer to average temperature in a cell, you are referring to heat content. And the heat equation used just says that that heat content is advected in accordance with average gradients.
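To make the cell-average point concrete, here is a one-dimensional finite-volume toy (my own illustrative sketch in Python, not code from any GCM or CFD package): the scheme only ever sees cell-averaged heat content and fluxes built from neighbouring cell values, yet the total heat is conserved exactly.

```python
# Toy 1-D finite-volume advection of cell-averaged heat content q
# (illustrative only, not a GCM routine). Upwind flux, periodic domain.
import numpy as np

n, L, u, dt = 50, 1.0, 1.0, 0.01        # cells, domain length, wind speed, time step
dx = L / n                              # cell width; CFL number u*dt/dx = 0.5
cell_centres = (np.arange(n) + 0.5) * dx
q = np.exp(-(cell_centres - 0.3)**2 / 0.005)          # initial cell averages (a bump)

print("total heat before:", q.sum() * dx)
for _ in range(40):
    left_flux = u * np.roll(q, 1)                     # upwind flux through each left face
    q += dt / dx * (left_flux - np.roll(left_flux, -1))   # flux in minus flux out
print("total heat after: ", q.sum() * dx)             # conserved to round-off
```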

Reply to  rgbatduke
October 17, 2014 10:48 pm

Nick Stokes wrote, “You get answers on the scale that you can resolve. And with GCMs and climate, that scale is useful.”
++++++++++
Useful? For what? What bit of evidence is there that any climate model can spit out anything that resembles climate?

Nick Stokes
Reply to  rgbatduke
October 17, 2014 11:14 pm

Mario,
“What bit of evidence is there that any climate model can spit out anything that resembles climate?”
Here is a visualisation of the ocean component of the GFDL AOGCM, with SST. I chose it because it shows familiar patterns generated by a GCM. The currents that you see are not the solution of a meaningful initial value problem. Nor are they obtained using station data, or even observed SST. They are first-principles solutions of a whole dynamics model, and are the result of air/water properties, forcings and topography. And maths.

Nick Stokes
Reply to  rgbatduke
October 17, 2014 11:19 pm

I’ll see if I can embed that video from GFDL:
[youtube http://www.youtube.com/watch?v=aX9nHyMP4L0]

Reply to  Nick Stokes
October 17, 2014 5:19 pm

“The key is the realization that climate system predictions, regardless of timescale, will require initialization of coupled general circulation models with best estimates of the current observed state of the atmosphere, oceans, cryosphere, and land surface. Formidable challenges exist: for instance, what is the best method of initialization given imperfect observations and systematic errors in models?”
http://www.cgd.ucar.edu/staff/jhurrell/docs/hurrell.decadalclimatepred.jphys08.pdf
Initialization and validation would require some kind of data. Missing detail in significant ocean regions calls into question how accurate the starting point is.

Nick Stokes
Reply to  Ragnaar
October 17, 2014 11:52 pm

“Missing detail in significant ocean regions calls into question how accurate is the starting point?”
Check the poorly spelt title of your quote: “Decadel climate prediction:”
This is the newish idea I referred to above – something between a GCM and a weather forecast. It’s not yet clear how successful it will be. But it isn’t traditional GCM climate prediction.

Bob Boder
Reply to  Ragnaar
October 18, 2014 10:10 am

All of this is irrelevant; the models don’t work regardless of any of these arguments. You don’t need to know the absolute initial conditions for the models to have value, but they do need to generate something that is close to real conditions, and they don’t.
Nick will of course now say that they do and point to all kinds of BS claims of accurate results, so before we go there, Nick, give us some accurate idea of what is going to happen over the next five years based on your beloved models. Put your name to something that is forward-looking. But of course you won’t, because then you will have to admit that the models don’t work when your “predictions” don’t pan out. Next you’ll say “I don’t make predictions”, of course.

Reply to  Nick Stokes
October 18, 2014 1:37 pm

“They aren’t trying to solve an initial value problem. They are trying to determine how forcings and climate come into balance. That is why they usually wind back several decades to start. It’s not to get better initial conditions – quite the reverse. There will be error in the initial conditions which will work its way out over time. If they start too hot, heat will radiate away over that windup time, until by the time you reach the period of interest, it is about right relative to the forcing.”
I think I understand your explanation. During the wind-up, a GCM will go towards a balanced situation. During and after the forcings have affected things, it will go towards the new balanced situation. When we get to the period of interest we make a balance sheet of the climate. We make another one at the end of the model run. Explaining what happened between the two balance sheets is an income statement, which will show a gain or a loss of heat. To have an accurate income statement (the effect of forcings), it is helpful to have accurate balance sheets (the initial and ending values). We can check the balance sheets against the income statement. I find it difficult to say we can minimize the importance of the initial and ending values and still have a good income statement.

Reply to  Nick Stokes
October 18, 2014 9:56 pm

Reading some of your comments, I might be making progress. My earlier example had a beginning balance sheet (B/S), an ending one, and an income statement (I/S) that connects the two. Using your scientific furnace example: after hours of being on, inputs equal outputs. It is steady state. That temperature is measured (B/S). Some attribute is changed. The temperature is measured again (B/S). The temperature change (I/S) is attributed to whatever change was made. The model’s books balance. Where we seem to be is that the climate data from observations does not interchange with the GCMs’ data. They are kind of the same thing, but with important qualifications used to note their differences.

October 16, 2014 9:42 pm

Nick Stokes;
What role are weather stations imagined to play in a GCM?
Validation?

DEEBEE
Reply to  davidmhoffer
October 17, 2014 2:33 am

Absolutely. And of course he is laying his usual trap by then referring to weather station temperature, so that someone can get into a debate about that rather than the poor prediction of anomalies

Reply to  davidmhoffer
October 17, 2014 3:51 am

Stokes,
“apparently because of some deficiency in station data.”
If one cannot model initial conditions one cannot model final conditions with any sort of accuracy. As if you did not already know that. Disingenuous, much?

Nick Stokes
Reply to  Michael Moon
October 18, 2014 12:00 am

“If one cannot model initial conditions one cannot model final conditions with any sort of accuracy. As if you did not already know that.”
One thing I am familiar with is computational fluid dynamics. There you are hardly ever seeking an initial or final state. You are solving a time-variable system, usually to gather some sort of statistics. Lift and drag on an airfoil, say. Sometimes there is a particular event or perturbation. GCMs are similar. You generate synthetic weather to gather climate statistics. It’s not an initial value problem.
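A toy demonstration of the “statistics, not an initial value problem” point, using the Lorenz ’63 system rather than any real CFD or GCM code (illustrative sketch only): two runs started from almost identical states end up in completely different places, yet their long-run statistics come out nearly the same.

```python
# Lorenz '63 toy: nearby initial states diverge (no useful initial value
# solution), but long-run statistics of the attractor barely change.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def run(x0, dt=0.01, steps=100_000):
    s = np.array(x0, dtype=float)
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = lorenz(s); k2 = lorenz(s + 0.5*dt*k1)      # classic RK4 step
        k3 = lorenz(s + 0.5*dt*k2); k4 = lorenz(s + dt*k3)
        s = s + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
        traj[i] = s
    return traj

a = run([1.0, 1.0, 1.0])
b = run([1.0, 1.0, 1.0 + 1e-6])          # tiny perturbation of the start
burn = 20_000                            # discard the spin-up
print("final states:", a[-1], b[-1])                       # completely different
print("mean z:", a[burn:, 2].mean(), b[burn:, 2].mean())   # nearly identical
print("std  z:", a[burn:, 2].std(),  b[burn:, 2].std())
```

Which starting point you pick changes the “weather” entirely but barely moves the long-run statistics; that is the distinction being argued over in this sub-thread.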

David A
Reply to  Michael Moon
October 18, 2014 12:27 am

Nick, Nick Nick, there you go again. None of us know if that initial point is positive or negative to equilibrium. To gauge for accuracy you must initialize to an observation, and you must meet the end point observation, and you should back cast to the past and meet that observation as well.

Nick Stokes
Reply to  Michael Moon
October 18, 2014 1:11 am

“None of us know if that initial point is positive or negative to equilibrium.”
You don’t need to. Think of that airfoil problem. It has turbulence, vortex shedding, “weather”. So what do you do experimentally? Crank up the wind tunnel and take measurements. No-one tries to determine the initial state of a wind tunnel.
Same with a CFD analysis. You emulate the running state of a wind tunnel and let it compute for a while. You don’t try to determine a solution dependent on an initial state. There isn’t one that makes sense.

David A
Reply to  Michael Moon
October 18, 2014 6:39 am

Nick says…”You don’t need to. Think of that airfoil problem. It has turbulence, vortex shedding, “weather”. So what do you do experimentally? Crank up the wind tunnel and take measurements. No-one tries to determine the initial state of a wind tunnel.”
—————————————————————————–
You really are being disingenuous. We control a wind tunnel to exactly the speed we choose in order to measure the aerodynamic performance of whatever we place inside it. The past performance of the wind tunnel is irrelevant to the current wind in the tunnel, and irrelevant to the current performance of whatever object is now in the wind tunnel.
A temperature is entirely different. If the current atmospheric temperature is negative (cool) relative to the current inputs, it will warm even with zero change in input. If we foolishly ASSUME the current radiative balance was zero, then we could mistake the proposed change to conditions over the time of the model run (additional atmospheric CO2) for the cause of the observed warming, when in fact said warming would have occurred even without the additional CO2.
Did you really make me type this?

Nick Stokes
Reply to  Michael Moon
October 18, 2014 6:57 pm

“A temperature is entirely different”
OK, think of controlling a scientific furnace. You set the power, let it settle until forcing balances heat loss (maybe overnight), then take measurements. Because of the settling, you don’t care what the temperature was when you applied the controls. Hotter or colder.
Same with GCMs. If you want to model the 21st century, you start maybe in 1900. You don’t know that much about 1900, certainly not wind speeds etc. But you apply 20th-century forcings, so that after 100 years the temperature is in balance, even if the forcing varied. So you have a starting point that is very insensitive to the actual state in 1900.
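The spin-up idea can be illustrated with a zero-dimensional energy-balance toy (a sketch of my own, with made-up parameter values and a hypothetical forcing ramp, not any actual GCM initialization scheme): start the “year 1900” temperature anywhere reasonable and the state a century later is set by the forcing and the model physics, not by the initial guess.

```python
# Zero-dimensional energy-balance toy: C dT/dt = S(1-a)/4 - eps*sigma*T^4 + F(t).
# Parameter values and the forcing ramp are made up for illustration.
import numpy as np

sigma = 5.67e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4
S, albedo, eps = 1361.0, 0.3, 0.61  # illustrative solar constant, albedo, emissivity
C = 2.0e8                           # heat capacity of a ~50 m mixed layer, J m^-2 K^-1
dt = 86400.0                        # one-day time step, s

def run(T0, years=100, forcing=lambda yr: 0.01 * yr):    # hypothetical ramp, W m^-2
    T = T0
    for step in range(years * 365):
        yr = step * dt / (365.0 * 86400.0)
        net = S * (1 - albedo) / 4 - eps * sigma * T**4 + forcing(yr)
        T += dt * net / C
    return T

for T0 in (250.0, 288.0, 320.0):    # wildly different "1900" starting temperatures
    print(T0, "K ->", round(run(T0), 3), "K after the spin-up century")
```

Whether this toy behaviour carries over to the real climate system, with its centuries-long ocean memory, is of course exactly what the commenters below dispute.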

Reply to  Nick Stokes
October 18, 2014 8:21 pm

The contention that there is a “forcing” posits the existence of a linear functional relation from the magnitude of the change in the “forcing” to the magnitude of the change in the global temperature at equilibrium. You say you’d like to test this contention? Whoops, you can’t do so: the change in the global temperature at equilibrium is not an observable feature of the real world.
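For reference, the linear functional relation being described here is usually written in the textbook form (with $\lambda$ an equilibrium sensitivity parameter):

$$\Delta T_{\mathrm{eq}} = \lambda \,\Delta F$$

and it is the left-hand side, the equilibrium temperature change, that the comment argues is not observable.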

David A
Reply to  Michael Moon
October 19, 2014 5:10 am

Nick, really? Now you move from a wind tunnel to a furnace, and achieve a hypothetical equilibrium. Sorry, the earth’s climate is neither a wind tunnel nor a furnace. Read below for why.
The scientific evidence regarding the CO2 response to warming is that it takes centuries for the entire system to respond. Therefore you have no idea whether the earth is now, or was then, at equilibrium. And in many cases the paleo proxy record indicates the earth cools again after the CO2 increases, indicating that CO2 responds to warming but does not feed back to additional warming, or if it does, it is too weak to overcome other natural processes. Ocean currents take centuries to turn over. Ocean responses can be very long term, as are solar changes; thus even starting at 1900 does not save the climate models.
Besides, Nick, the climate models are all wrong against observations in the same direction. This consistent failure of the climate models should inform you that they likely peg the influence of CO2 much higher than the real world does.
Think of the climate very simply as dozens of teeter-totters, down on the right is warming, down on the left is cooler. All of these teeter-totters oscillate at different frequencies, some daily, some seasonal, some decadal, some centuries, some vary, and many we do not understand. This is why it is very difficult to pick any one factor, and clearly discern its influence against the noise; as they are all interacting with and competing against many other factors. It is likely that major shifts in climate only occur when by happenstance an adequate number of the teeter-totters happen to synchronize to one side at the same time. BTW, this chaotic situation is further complicated by the fact that at some GATs some of what once influenced warming, may now influence cooling. It is also highly unlikely that any of these factors are directly linear in how they apply.
However some things are known. The benefits of additional CO2 are known, and are a significant factor in the reasons we are not now in a world food crisis. The anthropogenic increase in CO2 currently saves the world about 15% of agricultural land and water, likely preventing much international and regional stress. The purported harms of CO2 are not manifesting.
Hansen was wrong about how much CO2 would accumulate in the atmosphere, and wrong about its ability to warm, and completely wrong about the predictions of catastrophe, common in both journals and the media. The climate science community refuse to learn from the failures of the models. That failure is informative.
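The teeter-totter picture above can be given a quick numerical illustration (a toy construction of my own, not a climate model): sum a few dozen oscillations with different periods and random phases, and the large excursions occur only on the rare occasions when several of them happen to line up.

```python
# Toy "teeter-totter" superposition: many oscillations with different
# periods and random phases; big excursions need several to line up.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1000.0, 10001)          # arbitrary time units
periods = rng.uniform(1.0, 500.0, size=30)   # 30 oscillators, assorted periods
phases = rng.uniform(0.0, 2*np.pi, size=30)
signal = sum(np.sin(2*np.pi*t/p + ph) for p, ph in zip(periods, phases))

print("typical size (std):", round(signal.std(), 2))
print("largest excursion :", round(np.abs(signal).max(), 2))   # rare alignments stand out
```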

Reply to  davidmhoffer
October 17, 2014 7:51 pm

The Stokes Syllogism:
GCM does not use weather station data.
World of Warcraft does not use weather station data.
Therefore World of Warcraft is a GCM.
If GCMs are unfalsifiable, they aren’t science. If weather station data is even a rough set of information with which to check predictions, let alone local people sticking their own brewer’s thermometers on trees as in the good old days of colonial Australia, there is no earthly point to GCMs other than as props in a typical boiler-room con.

David A
Reply to  davidmhoffer
October 17, 2014 11:31 pm

Yes, and all the models fail badly in one direction. So, we cannot properly initialize the models, and even if they are all initialized to the same start point (and none of us knows whether that initial point is positive or negative relative to equilibrium), they all fail badly in the same direction.
It is remarkable that dozens of models can all fail at a chaotic system in a uniformly, systemically and consistently wrong direction: way too warm. It takes a unique anti-science talent to be so consistently wrong, and an even greater hubris to learn nothing from such consistent failure and to arrogantly ask the world to change because of your pathetic climate models.

Reply to  David A
October 17, 2014 11:39 pm

+1

CodeTech
Reply to  David A
October 18, 2014 4:56 am

Yes. This.

October 16, 2014 9:56 pm

http://wattsupwiththat.com/2014/10/06/real-science-debates-are-not-rare/#comment-1757616
GREAT comment from Robert G Brown on models. He begins by explaining the granularity you’d need to actually build a model that works, then postulates an imagined supercomputer orders of magnitude faster than anything we have on the planet today, and then concludes it would take that computer, at that granularity, calculating 30 years into the future… 36 years to do all the math.

DirkH
Reply to  davidmhoffer
October 17, 2014 4:51 am

Don’t forget, if you DO have that computational power and make tiny grid boxes with a tiny time step, then the entire statistical description of the grid box content breaks down, because a statistical description as used in the GCMs can only be halfway correct when there are many process instances in the box, in the timestep described.
Reducing grid box size and duration requires completely new descriptions of the physics, at the micro level. And the microphysics are not understood. Charge separation due to absorption of IR in water droplets, plasma bubbles, thunderstorms, …

October 16, 2014 10:08 pm

I have been making the same point as this guest post for some years at several posts at
http://climatesense-norpag.blogspot.com.
The inherent uselessness of the IPCC models is discussed in some detail in Part 1 of the latest post at the above link. Here is the conclusion of Part 1:
“In summary the temperature projections of the IPCC – Met office models and all the impact studies which derive from them have no solid foundation in empirical science being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for Governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.
The modeling community is itself beginning to acknowledge its failures and even Science Magazine
which has generally been a propagandist for the CAGW meme is now allowing reality to creep in. An article in its 6/13/2014 issue says:
“Much of the problem boils down to grid resolution. “The truth is that the level of detail in the models isn’t really determined by scientific constraints,” says Tim Palmer, a physicist at the University of Oxford in the United Kingdom who advocates stochastic approaches to climate modeling. “It is determined entirely by the size of the computers.” Roughly speaking, an order-of-magnitude increase in computer power is needed to halve the grid size. Typical horizontal grid size has fallen from 500 km in the 1970s to 100 km today and could fall to 10 km in 10 years’ time. But even that won’t be much help in modeling vitally important small-scale phenomena such as cloud formation, Palmer points out. And before they achieve that kind of detail, computers may run up against a physical barrier: power consumption. “Machines that run exaflops [1018 floating point operations per second] are on the horizon,” Palmer says. “The problem is, you’ll need 100 MW to run one.” That’s enough electricity to power a town of 100,000 people.
Faced with such obstacles, Palmer and others advocate a fresh start.”
Having said that, the skeptical community seems reluctant to actually abandon the basic IPCC approach and continues to try to refine or amend or adjust the IPCC models, with endless discussion of revised climate sensitivity (CS), for example, or revised calculations of ocean heat content (OHC), etc. A different mindset and approach to forecasting must be used as the basis for discussion of future climate trends.
Part 2 at the linked post says:
” 2 The Past is the Key to the Present and Future . Finding then Forecasting the Natural Quasi-Periodicities Governing Earths Climate – the Geological Approach.
2.1 General Principles.
The core competency in the Geological Sciences is the ability to recognize and correlate the changing patterns of events in time and space. This requires a mindset and set of skills very different from the reductionist approach to nature, but one which is appropriate and necessary for investigating past climates and forecasting future climate trends. Scientists and modelers with backgrounds in physics and maths usually have little experience in correlating multiple, often fragmentary, data sets of multiple variables to build an understanding and narrative of general trends and patterns from the actual individual local and regional time series of particular variables…
Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths combined with endogenous secular earth processes such as, for example, plate tectonics. It is not possible to forecast the future unless we have a good understanding of the relation of the climate of the present time to the current phases of these different interacting natural quasi-periodicities which fall into two main categories.
a) The orbital long wave Milankovitch eccentricity, obliquity and precessional cycles, which are modulated by
b) Solar “activity” cycles with possibly multi-millennial, millennial, centennial and decadal time scales.
The convolution of the a and b drivers is mediated through the great oceanic current and atmospheric pressure systems to produce the earth’s climate and weather.
After establishing where we are relative to the long wave periodicities to help forecast decadal and annual changes, we can then look at where earth is in time relative to the periodicities of the PDO, AMO and NAO and ENSO indices and based on past patterns make reasonable forecasts for future decadal periods.
In addition to these quasi-periodic processes we must also be aware of endogenous earth changes in geomagnetic field strength, volcanic activity and at really long time scales the plate tectonic movements and disposition of the land masses.”
During the last few years I have laid out in a series of posts an analysis of the basic climate data and of the methods used in climate prediction. From these I have developed a simple, rational and transparent forecast of the possible timing and extent of probable future cooling by considering the recent temperature peak as a nearly synchronous peak in both the 60- and 1000-year cycles and by using the neutron count and AP Index as supporting evidence that we are just past the peak of the controlling millennial cycle and beginning a cooling trend which will last several hundred years.
For the forecasts and supporting evidence go to the link at the beginning of this comment.
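As a back-of-envelope check on the grid-halving rule quoted from Palmer above (roughly an order of magnitude more computing power for each halving of the horizontal grid), the arithmetic can be sketched as follows; the run costs are relative and purely illustrative.

```python
# Back-of-envelope use of the quoted rule of thumb: roughly 10x the
# computing power for each halving of the horizontal grid spacing.
import math

def relative_cost(grid_km, reference_km=100.0, factor_per_halving=10.0):
    halvings = math.log2(reference_km / grid_km)
    return factor_per_halving ** halvings

for grid_km in (500, 100, 50, 25, 10, 1):
    print(f"{grid_km:>4} km grid: ~{relative_cost(grid_km):.3g}x the cost of a 100 km run")
```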

jorgekafkazar
Reply to  Dr Norman Page
October 16, 2014 10:38 pm

“…Tim Palmer, a physicist at the University of Oxford in the United Kingdom who advocates stochastic approaches to climate modeling…”
Madness piled upon madness.

October 16, 2014 10:15 pm

The IPCC and the CMIP5 models at LLNL simply need to be dumped and the money saved or diverted. The government grant-funded and intramural climate modelers can go find jobs with Wall Street or financial and insurance firms; that is, become productive members of society.

joeldshore
Reply to  Joel O'Bryan
October 19, 2014 7:00 am

Since you left out the “sarc” tag, one could almost interpret your comment as serious, although I certainly hope you meant it sarcastically.

Reply to  Joel O'Bryan
October 19, 2014 7:12 pm

Good point, since the science is settled. Why hasn’t there been a mass exodus from climatology? How can they continue to publish papers? What about the poor PhD candidates in the field?

richard verney
October 16, 2014 10:46 pm

I for one would like to know where all the weather stations are located, and how many there are, that are said to enable us to form a view on global temperatures going back to 1850 and even back into the 1700s.
It is claimed: “The dataset includes measurements as far back as the year 1701 from over 7,200 weather stations around the world.” Who seriously believes that in 1701 there were some 7,200 weather stations distributed worldwide? That proposition sounds farcical to me.
It is only a very few countries that have weather data going back to the 1700s, and usually this is concentrated around their major cities, although in that era UHI is not an issue, but spatial coverage is an issue when one is claiming to reconstruct global temperatures.

Brandon Gates
Reply to  richard verney
October 17, 2014 2:00 am

The raw and adjusted data are here:
http://www.ncdc.noaa.gov/data-access/land-based-station-data/land-based-datasets/global-historical-climatology-network-ghcn
7,280 total stations, but not all operating concurrently. The maximum number of stations concurrently operating was 6,013 in 1970. The earliest records are from 1701, with one station reporting. See Figure 4(b) above for a graphical representation.

tty
Reply to  richard verney
October 17, 2014 8:09 am

I think there was just one station (in Central England) in 1701. A second (De Bilt, Netherlands) came online in 1706 and a third (Uppsala, Sweden) in 1722.

Mike T
October 16, 2014 11:27 pm

Of course the temperature on the ground is different to what is read in a Stevenson screen at 1.1 to 1.2m above the ground. Australian stations have read the “ground temperature”, or terrestrial minimum, for many years. For those stations without a terrestrial minimum, a “frost day” is recorded when the air minimum falls below 2.2 degrees C, or frost is observed.

October 16, 2014 11:36 pm

I’d like to add my humble two cents to this good post to touch on one additional topic. I was the principal developer of large, 3-D electromagnetic codes for radiation transport modeling, which have been run on several thousand processors on one of the largest and fastest computers in the world, much like GCMs. After the initial architecture was in place, one of the first orders of business was to perform a rigorous set of validation exercises. This included comparing to analytical solutions for radiating dipoles and light-scattering spheres, which Gustav Mie, on the shoulders of Hendrik Lorentz, impressively accomplished. These validation procedures were *absolutely* necessary to debugging, to model verification and validation (separate things), and to providing the incremental confidence we needed to eventually perform our own studies, which ended up demonstrating–through both model and experiment–the breaking of the optical diffraction limit using nanoscale transport mechanisms. I can’t overstate how important this validation was. The write-up of this work was later awarded the national Best Paper in Thermophysics, which I mention in appreciation of co-authors Theppakuttai, Chen, and Howell.
But descriptions of climate modeling by news and popularized science didn’t satisfy my sniff test. Certainly I agree that carbon dioxide is a greenhouse gas which has a net warming effect on the atmosphere. We understand the crux of the debate has clearly been the quantification and consequences of this effect. As I would recommend to anyone with the capability and/or open mind, on any subject, I studied primary sources to inform myself. I approached my investigation from the standpoint of a computational fluid dynamicist.
I was immediately shocked by what I saw in climate science publications. There is much to say, but the only thing I want to comment on here is the lack of rigorous validation procedures in the models, as far as I can tell. Various modules (and I’ve looked at NCAR and GISS, primarily) seem to have limited validation procedures performed independently of other modules and within a limited scope of the expected modeling range. I have not found any conjugate validation exercises using the integrated models (though I am hopeful someone will enlighten me?). To not have the coupled heat transfer and fluid dynamic mechanisms validated to even a moderate degree, let alone the extreme degree of confidence required when projections are made several orders of magnitude outside the characteristic timescale of transport mechanisms, is no better than playing roulette. It is like obtaining a mortgage with no idea what your interest rate is… absurd. The uncertainty will be an order of magnitude larger than the long-term trend you’re hoping to project. This is not how tier-1 science and engineering operates. This is not the level of precision required to get jet engines capable of thousands of hours of flight, spacecraft into orbit, and land rovers to specific places on other planets. Large integrated models of individual component models cannot rely on narrow component-level validation procedures. Period. It is an absolute certainty that the confidence we require in the performance of extremely complicated life-supporting vehicles cannot be claimed without integrated validation procedures, which do not appear to exist for GCMs. This is one reason, I believe, why we see such a spread in model projections: because that validation does not exist. V&V is not a trivial issue; DOE, NSF, and NASA have spent many tens of millions of dollars in efforts begun as recently as 2013 to determine how to accomplish V&V, for good reason. I support the sentiment behind those efforts.
So where does that leave us? GCMs can’t be validated against analytical solutions of actual planetary systems, of course. That is a statement that can’t be worked around and should provide a boundary condition in itself for GCM model projection confidence. But there are analytical fluid dynamics solutions that are relevant, idealized planetary systems that can be modeled and compared to ab-initio solutions, as well as line-by-line Monte-Carlo benchmark simulations which can be performed to validate full-spectrum radiative transport in participating media. I’ve seen nothing that meets these criteria (though I am open to and welcome correction; I will give a nod to LBL radiation calcs which use the latest HITRAN lines but still don’t present validation spectra and are then parameterized from k-distribution form for use in GCMs).
My conclusion is that current GCMs are like lawn darts. They are tossed in the right direction based on real knowledge, but where they land is a complete function of the best-guess forcings put into it. This is in direct contrast to the results of highly complex models found elsewhere in science and engineering, which are like .270 rounds trained on target by powerful scopes. And they bring home prizes because they were sighted in.
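As a toy example of the kind of validation against analytical solutions described above (a deliberately simple sketch, nothing like a real radiative-transfer or GCM module): an explicit finite-difference solution of the 1-D heat equation checked against its exact decaying-sine solution.

```python
# Validation sketch: solve u_t = alpha * u_xx on [0,1] with u(0)=u(1)=0 and
# u(x,0)=sin(pi x), then compare against the exact decaying-sine solution.
import numpy as np

alpha, nx, t_end = 0.01, 101, 5.0
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / alpha                 # inside the explicit stability limit
nsteps = int(round(t_end / dt))

u = np.sin(np.pi * x)
for _ in range(nsteps):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2.0*u[1:-1] + u[:-2])

exact = np.sin(np.pi * x) * np.exp(-alpha * np.pi**2 * nsteps * dt)
print("max abs error vs analytical solution:", np.abs(u - exact).max())
```

The point of such an exercise is not the toy problem itself but the discipline: a quantified error against a known answer, before the code is trusted anywhere else.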

Uncle Gus
Reply to  Alex H (@USthermophysics)
October 17, 2014 11:28 am

I think the problem is that people (particularily politicians) look at climate modelling and are told, “It’s computer modelling”, and “It’s science”, and have no real idea of how those things are really done.
Add to that the apparent fact that climate modelling is stuck not so much in the eighties as in the sixties – using the same philosophy and almost the same methods that the Club of Rome used. It was good enough then, but even then people in the know knew it was essentially RIRO, and didn’t take it seriously.
Nowadays, you don’t take it seriously, you lose your job…

TYoke
Reply to  Alex H (@USthermophysics)
October 17, 2014 4:45 pm

Very nice post. I also work in a fluid dynamics field and I was made a skeptic when I saw Al Gore insisting that no more debate was necessary since the “science is settled”.
Al Gore, the politicians, and the MSM were convinced so it was time to move straight on to public shaming of the “deniers”. In the official channels, group-think and heretic hunting have subsequently ruled the day.

Reply to  TYoke
October 17, 2014 8:00 pm

Speaking as someone who used to design projects at UN level for several years, I can assure all the puzzled scientists that GCM and all the other tools of the IPCC, UNFCCC etc. were designed to accomplish the marxist redistribution of wealth and to destroy the industrial capacity of the civilised world. Some UN types say it openly; others talk in more veiled terms of the “millions killed by climate change” (yes really) each decade.
Don’t let cognitive dissonance overtake you- judge models by their effect on the real world to see the goals of those who pay for them. If a Borgia wants a painting done, you paint how the Borgia tells you to.
Gleichschaltung very nearly had the developed world, not just the political-media-bureaucrat class, embracing a Rings of Ice level of delusion. Magical thinking is dangerous and virulent.

David A
Reply to  Alex H (@USthermophysics)
October 17, 2014 11:46 pm

“where they land is a complete function of the best-guess forcings put into it.”
=========================
They land exactly where their political masters want them to land.

October 17, 2014 12:14 am

Do your arguments against computer models also apply to weather forecasts? In other words: why do we trust weather forecasts but not predictions of climate warming?

policycritic
Reply to  Paul Berberich
October 17, 2014 12:30 am

Decades?

GregK
Reply to  Paul Berberich
October 17, 2014 12:35 am

Information from weatherward of us (air pressure, temperature, rainfall, etc.) tells us what is approaching.
In addition we can see approaching systems in satellite images.
Tropical storms/cyclones/typhoons/hurricanes show up rather nicely.
All good for a week or two in advance.
But that’s all empirical.
It doesn’t work for a year, 10 years or 100 years into the future, because we don’t understand what controls the system.

Patrick
Reply to  Paul Berberich
October 17, 2014 12:36 am

I don’t “trust” weather forecasts at all, as they are usually ~50% wrong and, these days, based on computer models. Michael Fish in the UK in the ’80s used traditional methods to forecast the weather for the next day, and the UK had one of its worst storms that night.

Reply to  Paul Berberich
October 17, 2014 12:43 am

The scale, data quality, and length of the run are quite different. A short term regional scale model for a country with a lot of weather stations and focused satellite coverage can be tuned with the observations. The smaller grid helps improve model performance, and the short amount of time they have to project means the time steps are much smaller and the model runs in a much shorter time span. This allows the modeler to train himself and the model to make improved predictions. The improvements include the introduction of parameterizations derived from model performance.
A worldwide model has a much larger scale, the historical input data is low quality for many regions, it is expected to run for 100 years (this means the time steps have to be stretched), and there isn’t sufficient time to “teach” the modelers and tune it properly.
I started running dynamic models in 1982 (in another field, but the basic principles and problems are the same). Over the years we have evolved to use multiple model runs to create ensembles or families of model runs. We create the statistics from these ensembles… and we perform a parallel effort looking for real-life analogues we can use to verify whether we are even close.
We also have the ability to run lab experiments to try to understand the small-scale phenomena. What we find is that the lab results we have run are impossible to model with a numerical simulator – we can’t get all the physics handled properly.
Because the lab resolution is at best 1 meter and we need to run kilometer-wide models, we still face a problem trying to “scale up” the lab results. I believe some outfits are so bothered by this issue that they are discussing building giant lab experiments which extend the lab scale from one meter to as much as four meters. This leaves a gap between the lab work and the real world, but it does mean the scale-up is a bit less abrupt. I don’t think it will work. I’d rather see a much denser data-gathering exercise in real-world conditions. And I suggest to climate modelers that they should do the same. They should spend the next 20 years gathering extremely detailed data, then see how they can use it to improve their models.
But I guess this is getting too technical… let’s just say a climate model is a lot coarser than a weather model. You are comparing a very large plastic butter knife to a small scalpel.

knr
Reply to  Paul Berberich
October 17, 2014 1:50 am

We don’t, and oddly any forecast over about 48 hours is also considered not worth much by the professionals. Indeed, forecasting the weather is known to be highly problematic, models or no models. In the past people accepted this as annoying but not critical. But now we see great claims to accuracy in prediction, often to two decimal places, decades ahead, because ‘climate’ is supposed to be different, while in reality the same issues that make weather forecasting so hard often affect ‘climate’ forecasting.
It’s a weird situation, but in climate ‘science’ the less the models reflect reality, the greater the accuracy claimed for them, under the good old trick of ‘it may not have happened yet but it will sometime’.
Take away the models from climate ‘science’ and you have virtually nothing left, so you can see why the models are all-important and have to be defended, be they good, bad or ugly.

Tim Hammond
Reply to  Paul Berberich
October 17, 2014 4:10 am

Not sure what your point is – we are not using weather forecasts for next week as a reason to entirely change the basis of our economy.
Climate models are interesting and may have a use, but that use is not to change the world as we know it.

Bill Marsh
Editor
Reply to  Paul Berberich
October 17, 2014 5:16 am

Time. We don’t ‘trust’ weather forecasts beyond 3-5 days into the future and they are not inherently ‘accurate’ within that timeframe either. Weather forecasts do not ‘predict’ the temperature to within .01C. My local forecast for precipitation is wrong far more often than it is right for instance. The weather forecasts give a probability of rain occurring, not a ‘prediction’ of how much rain will occur and they give a general idea of what the temperature range will be, not an exact measure of what (and when) the highest temperature will be.
The further into the future a weather forecast is made, the less we can ‘trust’ it to be accurate.

VikingExplorer
Reply to  Paul Berberich
October 17, 2014 6:31 am

I disagree with the idea that computer models in general cannot work. I would assert that the wrong things have been modelled, and that the overall approach has been incorrect.
All the comments talking about “weather” are really beside the point. Chaos makes it extremely difficult to predict weather. However, climate is a simpler problem. In a climate study, we don’t need to know exactly where it’s going to rain, only that it does.

wally
Reply to  Paul Berberich
October 17, 2014 8:24 am

When the weatherman announces that a front moving through is bringing rain, I grab my $3 umbrella, and when it doesn’t rain I grumble about the inconvenience of carrying it around.
When the IPCC, using computer models, announces a climatic prediction requiring immediate action and a global investment to the tune of trillions of dollars… I tend to pause before responding.

Bob Boder
Reply to  Paul Berberich
October 17, 2014 9:23 am

Don’t know about anyone else, but I don’t trust weather forecasts. It was supposed to rain all day where I live yesterday and my son’s soccer game (football to the Brits here) was supposed to be canceled, but it dried up about 11:00 and we got the game in.

Uncle Gus
Reply to  Paul Berberich
October 17, 2014 11:28 am

We do?

October 17, 2014 12:50 am

Very good article and very good comments. When I was a teenager, there was a series of models in the monster theme. One could have the Werewolf, Dragula, Frankenstein, and/or the Mummy.
I always won best replica because I did an excellent job of detailed painting. They were always a replica of something on the silver screen, created by Hollywood actors, makeup artists, directors and producers and, most of all, money, based on someone’s imagination.
The proof is in the pudding. A model is the replication of nothing more than someone’s imagination, pen and money.
Thus, one has man-made global warming.
In reality, there is a cold situation already here and that input is not in the models. Over the last few years we have lost over 30k citizens to cold. We lost over 40k of livestock in South Dakota to a freak early October storm in 2013.
In New Zealand and Scotland farmers lost thousands of lambs to late winter storms.
As one Russian scientist stated, modeling is not science.
A geologist said, carbonates are the foundation of life.
Thank you all for your article and comments. Well done.
Paul Pierett

Reply to  Paul Pierett
October 17, 2014 1:09 am

I wouldn’t toss all models in the can the way you have. Mathematical models are bread and butter in many professions. I have spent years running all sorts of models. Some of them are highly accurate (for example, we can model salt water at different temperatures and pressures and predict which salts will precipitate).

Patrick
Reply to  Fernando Leanme
October 17, 2014 1:17 am

They may be “bread and butter” in many professions; unfortunately, actions based on “bad” inputs result in equally “bad” outputs. We can model something where we know *ALL* the variables. That’s why we can build virtual and real model aircraft, ships and cars. Climate? Not at all!

Reply to  Fernando Leanme
October 17, 2014 2:50 am

The climate can be modeled. Evidently the results are less accurate than we wish. This is why I offered a comment with a graph “correcting” Mann’s slide shown during his Cabot talk. So the question isn’t whether the models work or not; the question is whether they work well enough to make decisions such as taxing emissions or subsidizing solar power.
When it comes to taking such measures, I believe the models are insufficient. However, I do worry about the fact that we are running out of oil. This means I’m not necessarily opposed to measures to increase efficiency and save it for the future.

Tim Hammond
Reply to  Fernando Leanme
October 17, 2014 4:12 am

Because you know how those physical processes work to a high degree.
Climate models (i) don’t have that sort of knowledge and (ii) are being used to somehow “create” knowledge, which is utter nonsense.

DirkH
Reply to  Fernando Leanme
October 17, 2014 5:02 am

Fernando Leanme
October 17, 2014 at 2:50 am
“The climate can be modeled. Evidently the results are less accurate than we wish. ”
For as long as I have followed the subject, the predictive skill of climate models has been negative. They are a negative predictor. Prepare for the opposite of what the True Believers expect and you’ll be fine.

Bill Marsh
Editor
Reply to  Fernando Leanme
October 17, 2014 5:27 am

Fernando Leanme
October 17, 2014 at 2:50 am
“The climate can be modeled. Evidently the results are less accurate than we wish. ”
I’m not sure I would agree with that statement. The IPCC states (and I think there is general agreement with the statement) that the ocean-atmosphere system is a “complex, non-linear, chaotic system”. If this is true, then I would submit that the probability that we can ‘model’ the state of that system 30 years into the future is non-zero, but trivially so. In any chaotic system, if you do not know the initial value of every variable to a highly accurate degree and the exact functioning of every process in the system, you cannot ‘predict’ the future state of that system. That is the nature of chaotic systems. I would submit that we do not even know all of the variables, nor do we know all of the processes that affect the system, much less to the degree needed for the initial state. I would further submit that we will never possess this knowledge.
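A minimal illustration of the sensitive dependence on initial conditions being described, using the logistic map rather than any climate model (toy sketch only): two copies started one part in a million apart are completely decorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map x -> 4x(1-x):
# a one-in-a-million difference in the start grows to order one within ~20 steps.
x, y = 0.400000, 0.400001
for step in range(1, 61):
    x, y = 4.0*x*(1.0 - x), 4.0*y*(1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.6f}")
```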

Reply to  Fernando Leanme
October 17, 2014 7:20 am

DC Cowboy et al: It’s just a question of semantics, I suppose. We can model almost anything. The question is whether the model delivers a usable product. The models do exist; they are run.
Let me describe the problems I have faced:
In some cases our ability to model generates overconfidence on the higher floors. We have management types educated in business schools who don’t really understand what the models can and can’t do. But they love the model outputs.
These are regarded extremely well if they are mounted on a PowerPoint slide. Some presentations even include movies (these show changing pressures, temperatures, saturations). What we can’t get into some people’s heads is that models are not reliable enough.
Over the years many of us learned this fact, and this is why we perform endless searches for analogues. Others focus on refining the lab work.
Plus in some cases we found the basic theories were off (I can’t discuss the details, but let’s just say it’s so crazy it’s as if we found that sometimes water ice doesn’t float on liquid water).
So I guess what I’m saying is that models can be run, but the output has to pass a smell test. And I don’t think the climate models are passing theirs at this time.

Reply to  Fernando Leanme
October 19, 2014 9:41 am

Fernando Leanme
October 17, 2014 at 2:50 am
“However, I do worry about the fact that we are running out of oil. ”
We will never run out of oil. The price will rise as the apparent supply diminishes. If hydrocarbons have a future value as something other than fuel, the market will adjust. An economic reason to find new energy sources is more efficient than a legislative one.

aussie pete
October 17, 2014 1:09 am

Thank you, Dr. Ball.
You (and many commenters here) have explained “scientifically” what any thinking and reasonably intelligent “non-scientist” knows intuitively. This is primarily why I have been skeptical for many years now.

knr
October 17, 2014 1:40 am

“Computer Climate Models Cannot Work”? Not true: they certainly ‘work’ for those who have made a career out of pushing them; they ‘work’ for those using them to gain personal enrichment or for political ends. That they fail to ‘work’ at predicting future climate is a minor issue as long as they ‘work’ in the areas in which those they are most useful to want them to ‘work’.

Jeff Mitchell
Reply to  knr
October 17, 2014 10:57 am

This would be my big point as well. The models in question cannot work because they exist not to model climate, but to provide propaganda in the effort to control people’s lives. They are simply trying to make the implausible plausible so they can effect policies that favor their political desires. This is deliberate. This is not in the category of well intentioned mistakes.
Trashing all models, though, is not a particularly good approach. Some models do work very well. Certain large aircraft have been designed and built with nary a physical test, but fly as advertised. The models that allow for this have been refined by use of the scientific method wherein each step in the development has been tested against the real world before proceeding to the next step. The agenda of these models is to compete in a real world market and make the most efficient product so that they can sell more and make more money. They can’t afford to fake it if they want to get the right results.
To lump these kinds of models with the climate models is unfair and unproductive. The post may wish to qualify the statement by saying models that have no basis in reality cannot work except by extreme coincidence.

Bob Boder
Reply to  Jeff Mitchell
October 17, 2014 1:18 pm

And for that the models work really well; as you most correctly have pointed out, it’s about control and power, not climate.

Sceptical lefty
October 17, 2014 2:01 am

Models, whether physical or computer-simulated, have their uses. However, for a model to be plausibly applicable it must incorporate all relevant factors, giving them appropriate weight. For an engineering model this is relatively straightforward, although it can be complex. For the Earth’s climate the task is impossible, now and for the foreseeable future. The Earth is an open system, so we can never be sure that we have nailed all relevant factors, let alone quantified them or understood their mutual relationships. It is chaotic, or orderly to a degree of complexity beyond the calculating capacity of any realistically conceivable computer. It is extremely old relative to the period for which we have accurate observations, so there may well be cyclic variables of such long periodicity that we haven’t recognised them. The Earth’s existence appears to be linear — i.e. it was created, it now has a ‘life’ and someday it will probably ‘die’. It is therefore reasonable to suppose that observed cyclic phenomena had to begin at some stage and will eventually end — maybe tomorrow.
Until our observations and understanding of this planet and its cosmic environment are considerably more advanced than they are now, climate models are only a waste of time and resources. I believe that a case could be made that meteorologists of 50 years ago had a better understanding of the weather and climate than do the current bunch. The early weathermen had to make do with observations and harboured few illusions about their capacity for long-term forecasting. (There has been some interesting research into solar activity, but all sensible people know that the sun isn’t very important.) The existence of increasingly powerful computers has given the modern ‘climatologists’ either a mistaken belief in the reliability of their forecasts and/or an opportunity to cynically enhance their incomes and prestige with what is, basically, mumbo-jumbo.
Finally, let’s not get too cocky about ‘The Pause’. If things start to get warmer the doomsayers will be revitalised, regardless of the highly questionable evidence for anthropogenic influences. If there is a cooling tendency it will be accompanied by a brief ‘pause’ to allow the experts to change step, and we will then be hit with anthropogenic cooling, which can be reversed if we spend enough money.

CodeTech
October 17, 2014 2:13 am

I have mentioned this before, but I think it’s appropriate again.
Years ago, a friend decided he was serious about winning the lottery. He got a listing of every Lotto 6/49 draw since it started, and built a model. I have no idea what voodoo he felt was necessary to give his model predictive power, but he regularly bought large batches of tickets.
As a control, I used to buy “Quick Picks”. I won minor prizes much more frequently than his model did. Neither of us won a major prize.
No matter what his model did, it was NOT capable of prediction. He spent a lot of time messing around with it, changing things here and there to make a hindcast work. He used to wax eloquent about the minute differences in the weight of the balls due to ink distribution and static charge while the machine was running. But it didn’t matter. What he was doing was not ever going to predict the lottery numbers on ANY time scale.
Ironically, I personally know the $40 million lottery winner that many of you have heard of, since he’s giving it all away to cancer-related charities and research. I knew him for 20 years, during which time that whole lottery model thing was active. Tom never used a model.

EternalOptimist
October 17, 2014 2:19 am

Would Nick Stokes bet his house that the models will work? I will bet mine that they will not.

October 17, 2014 2:23 am

Well, let us see here. We have climate modelers with a tremendous bias that blinds them. They have little reliable data, and most of what little they have has been “adjusted” to the alarmist bias. On top of all that, they don’t really understand the climate engine anyway. They are almost clueless about the role of water here on a water planet.
There is no chance that modern climatology can get anything right. Not until they go back to first principles and start over honestly. (Not holding my breath.)
