A Simple Truth: Computer Climate Models Cannot Work

Guest opinion by Dr. Tim Ball –

Ockham’s Razor says, “Entities are not to be multiplied beyond necessity.” Usually applied when deciding between two competing explanations, it suggests the simplest is most likely correct. It can be applied to the debate about climate and the viability of computer climate models. An old joke claims economists try to predict the tide by measuring one wave. Is that carrying simplification too far? It parallels the Intergovernmental Panel on Climate Change (IPCC) objective of trying to predict the climate by measuring one variable, CO2. Conversely, people trying to determine what is wrong with the IPCC climate models consider a multitude of factors, when the failure is completely explained by one thing: insufficient data to construct a model.

IPCC computer climate models are the vehicles of deception for the anthropogenic global warming (AGW) claim that human CO2 is causing global warming. They create the results they are designed to produce.

The acronym GIGO (Garbage In, Garbage Out) reflects the fact that most people working with computer models knew the problem. Some suggest that in climate science it actually stands for Gospel In, Gospel Out. This is an interesting observation, but it underscores a serious conundrum. The Gospel Out results are the IPCC predictions (projections), and they are consistently wrong. This is no surprise to me, because I have spoken out from the start about the inadequacy of the models. I watched modelers take over and dominate climate conferences as keynote presenters. It was modelers who dominated the Climatic Research Unit (CRU), and through them, the IPCC. Society is still enamored of computers, so they attain an aura of accuracy and truth that is unjustified. As Pierre Gallois explains,

If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.

Michael Hammer summarizes it as follows,

It is important to remember that the model output is completely and exclusively determined by the information encapsulated in the input equations.  The computer contributes no checking, no additional information and no greater certainty in the output.  It only contributes computational speed.

It is a good article, but it misses the most important point of all, namely that a model is only as good as the structure on which it is built: the weather records.

The IPCC Gap Between Data and Models Begins

This omission is not surprising. Hubert Lamb, founder of the CRU, defined the basic problem and his successor, Tom Wigley, orchestrated the transition to the bigger problem of politically directed climate models.

Figure 1: Tom Wigley and H. H. Lamb, founder of the CRU.


Lamb’s reason for establishing the CRU appears on page 203 of his autobiography, “Through All the Changing Scenes of Life: A Meteorologist’s Tale”:

“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”

Lamb knew what was going on because he cryptically writes,

“My immediate successor, Professor Tom Wigley, was chiefly interested in the prospects of world climates being changed as a result of human activities, primarily through the burning up of wood, coal, oil and gas reserves…” “After only a few years almost all the work on historical reconstruction of past climate and weather situations, which first made the Unit well known, was abandoned.”

Lamb further explained how a grant from the Rockefeller Foundation came to grief because of,

“…an understandable difference of scientific judgment between me and the scientist, Dr. Tom Wigley, whom we have appointed to take charge of the research.”

Wigley promoted the application of computer models, but Lamb knew they were only as good as the data used for their construction. Lamb is still correct. The models are built on data that either doesn’t exist or is by any measure inadequate.

Figure 2: Climate model construct.

Models range from simple scaled-down replicas with recognizable individual components to abstractions, such as mathematical formulae, that are far removed from reality, with symbols representing individual components. Figure 2 is a simple schematic of the divisions necessary for a computer model. Grid spacing varies (3° by 3° is shown), and reducing it is claimed as a goal for improved accuracy. It doesn’t matter, because there are so few stations of adequate length or reliability that the mathematical formula for each grid cell cannot be accurate.
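As a back-of-envelope check (my arithmetic; only the 3° by 3° spacing comes from Figure 2, and the roughly 7,200-station count from the NOAA dataset quoted later in this post), the grid implies about as many surface cells as there are stations in the entire historical record:

```python
# Rough arithmetic for a 3 x 3 degree model grid. The grid spacing is the one
# shown in Figure 2; everything else is plain geometry.

LON_STEP = 3  # degrees of longitude per cell
LAT_STEP = 3  # degrees of latitude per cell

# Number of surface grid cells covering the globe
cells = (360 // LON_STEP) * (180 // LAT_STEP)
print(cells)  # 7200 surface cells

# The NOAA historical dataset quoted later in the post holds roughly 7,200
# stations -- about one station per surface cell *if* they were spread
# evenly, which, as the article argues, they are not.
stations = 7200
print(stations / cells)  # 1.0 stations per cell on average
```

Even before considering record length or reliability, an even spread would leave each cell with a single station; the actual clustering in North America and Europe leaves most cells with none.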

Figure 3 shows the number of stations according to NASA GISS.

Figure 3.

It is deceptive, because each dot represents a single weather station yet, at the map’s scale, covers a few hundred square kilometers. Regardless, the reality is that vast areas of the world have no weather stations at all. Probably 85+ percent of the grid cells contain no data. The actual problem is even greater, as NASA GISS, apparently unknowingly, illustrated in Figure 4.

Figure 4.

4(a) shows length of record. Only 1000 stations have records of 100 years, and almost all of them are in heavily populated areas of the northeastern US or Western Europe and subject to the urban heat island effect (UHIE). 4(b) shows the decline in stations around 1960. This was partly related to the anticipated increased coverage of satellites, which didn’t happen effectively until 2003–04. The surface record remained the standard for the IPCC Reports. Figure 5 shows a CRU-produced map for the Arctic Climate Impact Assessment (ACIA) report.

Figure 5.

It is a polar projection for the period from 1954 to 2003 and shows “No Data” for the Arctic Ocean (14 million km2), an area almost the size of Russia. Despite the significant decline in stations in 4(b), graph 4(c) shows only a slight decline in area covered. This is because they assume each station represents “the percent of hemispheric area located within 1200 km of a reporting station.” This is absurd. Draw a 1200 km circle around any land-based station and see what is included. The claim is even sillier if a portion includes water.
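The scale of the 1200 km claim is easy to quantify (a sketch; the mean Earth radius is standard geography, not a figure from the article): one such circle covers on the order of 4.5 million km², nearly 2 percent of a hemisphere.

```python
import math

EARTH_RADIUS_KM = 6371.0   # mean Earth radius
CLAIM_RADIUS_KM = 1200.0   # radius a single station is assumed to "represent"

# Area of a spherical cap of angular radius theta is 2*pi*R^2*(1 - cos(theta)).
theta = CLAIM_RADIUS_KM / EARTH_RADIUS_KM          # radians along a great circle
cap_area = 2 * math.pi * EARTH_RADIUS_KM**2 * (1 - math.cos(theta))
hemisphere_area = 2 * math.pi * EARTH_RADIUS_KM**2

print(f"{cap_area / 1e6:.2f} million km^2 claimed per station")
print(f"{100 * cap_area / hemisphere_area:.2f}% of a hemisphere")
```

In other words, roughly 57 evenly spaced stations would be deemed to “cover” an entire hemisphere, regardless of terrain, coastline or climate regime in between.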

Figure 6a shows the direct distance between Calgary and Vancouver, 670 km; they are close to the same latitude.

Figure 6a

Figure 6b shows the distance from London to Bologna: 1154 km.

Figure 6b

Figure 6c shows the distance from Trondheim to Rome: 2403 km. Notice that this circle, roughly 2400 km across (a 1200 km radius), includes most of Europe.

Figure 6c

An example of the problems with the 1200 km claim occurred in Saskatchewan a few years ago. The Provincial Ombudsman consulted me about frost insurance claims that made no sense. The government agricultural insurance agency had decided to offer frost coverage, and each farmer was required to pick the nearest weather station as the basis for decisions. The very first year there was a frost at the end of August. Based on weather station records, about half of the farmers received no coverage because their station showed 0.5°C, yet all of them had “black frost”, so called because green leaves turn black from cellular damage. The other half got paid, even though they had no physical evidence of frost, because their station showed -0.5°C. The Ombudsman could not believe the inadequacies and inaccuracies of the temperature record, and this on an essentially isotropic plain, especially after I pointed out that the temperatures came from Stevenson Screens, for the most part 1.25 to 2 m above the ground and thus above the crop. Temperatures below that level are markedly different.

Empirical Test of Temperature Data

A group carrying out a mapping project, trying to use data for practical application, confronted the inadequacy of the temperature record.

The story of this project begins with coffee, we wanted to make maps that showed where in the world coffee grows best, and where it goes after it has been harvested. We explored worldwide coffee production data and discussed how to map the optimal growing regions based on the key environmental conditions: temperature, precipitation, altitude, sunlight, wind, and soil quality.

The first extensive dataset we could find contained temperature data from NOAA’s National Climatic Data Center. So we set out to draw a map of the earth based on historical monthly temperature. The dataset includes measurements as far back as the year 1701 from over 7,200 weather stations around the world.

Each climate station could be placed at a specific point on the globe by their geospatial coordinates. North America and Europe were densely packed with points, while South America, Africa, and East Asia were rather sparsely covered. The list of stations varied from year to year, with some stations coming online and others disappearing. That meant that you couldn’t simply plot the temperature for a specific location over time.

Figure 7

The map they produced illustrates the gaps even more starkly, but that was not the only issue.

At this point, we had a passable approximation of a global temperature map, (Figure 7) but we couldn’t easily find other data relating to precipitation, altitude, sunlight, wind, and soil quality. The temperature data on its own didn’t tell a compelling story to us.

The UK may have accurate temperature measurements, but it covers a small area. Most larger countries have inadequate instrumentation and measurements. The US network is probably the best, and certainly the most expensive, yet Anthony Watts’ research showed that only 7.9 percent of US weather stations achieve an accuracy of better than 1°C.

Precipitation Data: A Bigger Problem

Water, in all its phases, is critical to the movement of energy through the atmosphere. The transfer of surplus energy from the Tropics to offset deficits in the Polar Regions (Figure 8) occurs largely in the form of latent heat. Precipitation is just one measure of this crucial variable.

Figure 8

It is a very difficult variable to measure accurately, and records are completely inadequate in space and time. An example of the problem was exposed in attempts to use computer models to predict the African monsoon (Science, 4 August 2006).

As Alessandra Giannini, a climate scientist at Columbia University, observed, some models predict a wetter future, others a drier one: “They cannot all be right.”

One culprit identified was the inadequacy of data.

One obvious problem is a lack of data. Africa’s network of 1152 weather watch stations, which provide real-time data and supply international climate archives, is just one-eighth the minimum density recommended by the World Meteorological Organization (WMO). Furthermore, the stations that do exist often fail to report.

It is likely very few regions meet the WMO recommended density. The problem is more complex, because temperature changes are relatively uniform, although certainly not over 1200 km, whereas precipitation amounts vary over a matter of meters. Much precipitation comes from showers produced by cumulus clouds that develop during the day. Most farmers in North America are familiar with one section of land getting rain while another is missed.

Temperature and precipitation, the two most important variables, are completely inadequate to establish the conditions, and therefore the formula, for any surface grid cell of the model. As the latest IPCC Report, AR5, notes in two vague understatements,

The ability of climate models to simulate surface temperature has improved in many, though not all, important aspects relative to the generation of models assessed in the AR4.

The simulation of large-scale patterns of precipitation has improved somewhat since the AR4, although models continue to perform less well for precipitation than for surface temperature.

But the atmosphere is three-dimensional, and the amount of data above the surface is almost non-existent. Just one example illustrates the problems. We had instruments every 60 m on a 304 m tower outside the heat island of the City of Winnipeg. The changes over that short vertical distance were remarkable, with many more inversions than we expected.

Some think parametrization is used to substitute for basic data like temperature and precipitation. It is not. It is a,

method of replacing processes that are too small-scale or complex to be physically represented in the model by a simplified process.

Even then, the IPCC acknowledges limits and variances:

The differences between parameterizations are an important reason why climate model results differ.

Data Even More Inadequate for a Dynamic Atmosphere

They “fill in” the gaps with the 1200 km claim, which shows how meaningless it all is. They have little or no data in any of the cubes, yet the cubes are the mathematical building blocks of the computer models. It is likely that data exist for only about 10 percent of the total atmospheric volume. These comments apply to a static situation, but in a dynamic atmosphere the volumes are constantly changing daily, monthly, seasonally and annually, and all of these change with climate change.

Ockham’s Razor indicates that any discussion of the complexities of climate models, including methods, processes and procedures, is irrelevant. They cannot work, because the simple truth is that the data, the basic building blocks of the models, are completely inadequate. Here is Tolstoy’s comment about a simple truth.

“I know that most men, including those at ease with problems of the greatest complexity, can seldom accept even the simplest and most obvious truth if it be such as would oblige them to admit the falsity of conclusions which they delighted in explaining to colleagues, which they have proudly taught to others, and which they have woven, thread by thread, into the fabric of their lives.”

Another simple truth is that the model output should never be used as the basis for anything, let alone global energy policy.

301 thoughts on “A Simple Truth: Computer Climate Models Cannot Work”

1. Mario Lento says:

“They create the results they are designed to produce.”
++++++++
This is the crux of the problem. Circular logic… CO2 is the knob and everything else must be adjusted so that CO2 can be the cause.

• evanmjones says:

CONFESSIONS OF A HOMOGENIZER, STATION-ADJUSTER AND STATION-DROPPER

Circular logic

Yes. Entirely wrong approach. One needs to embrace (and limit) one’s MoE by taking it from the top down. Keeps it all on the rails. It’s a meataxe approach, but any other is futility-squared. Any game developer worth half his salt knows this (game designers, sometimes not so much!). Anyone transitioning from player to designer to developer (note the order) comes to know this in his bones.

These guys think they can take a bunch of Advanced Squad leader maps and simulate the Eastern Front. If they ever designed a game (a historical simulation

It’s all Victor Venema’s fault. RECURSE YOU, RED BARON!

That’s what it comes down to, doesn’t it? When I was running recursive logic on pairwise comparisons, I never saw so many Excel circular logic errors in my life.

Yes, I am Unclean: Homogenize — just once — and you are a Homogenizer for the rest of your life …

Homogenization is, most emphatically, not a zero-sum game. It does not merely smear the microsite error around, as in an average. It isolates the outliers (in this case, most of the well sited stations) and adjusts them upwards to match the poorly sited majority. The result is not an average, but a considerable overall upward adjustment of the record (i.e., in exactly the wrong direction).

After the dust clears, the Little Boxes on the Hillside have all come out the same.

• Tim says:

More like Little boxes at the Airport and the car park. In the last 25 years there has been an accelerating reduction in thermometer counts globally with the pace of deletion rising rapidly in recent years. Over 6000 stations were active in the mid-1990s. Just over 1000 are in use today.
The stations that dropped out were mainly rural and at higher latitudes and altitudes — all cooler stations.

• Auto says:

http://www.imo.org/blast/blastDataHelper.asp?data_id=24475&filename=1293.pdf
This link shows – on page 6/6 – for one month (August 2008) only – where ship observations were made.
<400,000 for the month, globally.
Error bars of mental arithmetic. . .
Average – one per 300-ish square miles of ocean. For the month.
one per 90-100 thousand square miles each day.
Area of the UK – about 92,000 square miles.

I am encouraging ship masters I know to have their ships become Voluntary Observing Ships.

Auto

2. It is indeed unfortunate that the study of historical records that H.H. Lamb was so expert at was completely abandoned by the computer mongers.

In the U.S. it happened when Al Gore gave Kevin Trenberth at the University of Michigan $5 million to buy a supercomputer. He took the money from NASA. This is while, at UAH, John Christy was starved of funds…

• evanmjones says:

Do models work?

They “perform as advertised”. More’s the pity. There is also a known advisory: do not homogenize a system with mostly bad datapoints. It’s right there on the warming label (between the disclaimer and the skull and crossbones).

• Just an engineer says:

“Right there on the warming label.” Intentional?

As we said back in 1971 when I passed my systems programmer exam: Bad Input -> Bad Output (the short version was BIBO).

• SandyInLimousin says:

Bad Input -> Guesswork -> Nonsense Out = BIGNO

• OK BIGNO and BIBO. Some persons must have been asleep or dreaming from Primary School on…..

• Or: Bad Input -> Nonsense -> Garbage Out = BINGO!

4. Dr. S. Jeevananda Reddy says:

“I watched modelers take over and dominate climate conferences as keynote presenters. It was modelers who dominated the Climatic Research Unit (CRU), and through them, the IPCC” — it is true; I made such observations a few decades back with reference to India Meteorological Department/IITM research priorities. At that time, excellent short-range forecasts were given using ground-based data, and experience with local conditions was given top priority. But with collaboration from USA groups [mostly Indians], the shift was to model-based forecasts — with poor quality of predictions. Promotions and awards were given to model groups rather than ground-based forecasters. Also, in several fields of meteorology the deficiencies in models were discussed and published in the 70s & 80s. With the sophisticated computers, the entire research shifted. I did my research with a 256 kb computer purchased for US$ from South Africa, while working in Mozambique. The programmes were written by me in Fortran IV.

Dr. S. Jeevananda Reddy

• Dr. S. Jeevananda Reddy says:

US$ 3000

Dr. S. Jeevananda Reddy

Sir, your experience shows that climate change marketeers masquerading as grant-snuffling scientists are not just laughably bad at science; what they do has major, potentially lethal, consequences.

5. Mario Lento says:

Tim: This is a nice post… thank you!

• policycritic says:

Yeah, I agree. I think it is explosive, frankly.

In “The Death of Economics”, Paul Ormerod makes an important point: a model that is nearly correct can be completely wrong, and there is no relationship between relative correctness and accuracy. A bad model can give a more accurate result than a good one, by accident. Since the climate models are by necessity incomplete, they can be entirely wrong. Therefore no inference that “only with CO2 can we make it work” has any validity. None whatever. If that is all they have, the world is wrecking itself, wrecking the environment and impoverishing the poor, for no good reason.

7. cg says:

[Snip. OT – mod]

• zenrebok says:

cg [snipped]
[Thanks for spotting the OT – mod]

8. LewSkannen says:

Another interesting snippet from the book ‘Chaos’ by Gleick (brother of the other one): even if you measured perfectly all the relevant average parameters (temperature, humidity, wind velocity etc.) in every cubic meter of the entire atmosphere, and had a perfect algorithm to crunch the numbers, you would still be unable to make any predictions more than a month ahead, simply due to the chaotic effects on the algorithm of the tiny discrepancies between the average parameters of each cube and its actual state.

• Mike Jonas says:

Thx LewSkannen. You are absolutely correct, and the post really needed to include that information. The missing data the post refers to pales into insignificance beside the utterly useless structure of the climate models. They aren’t climate models; they are low-quality weather models. Even the very best weather models can only successfully predict a few days ahead. A climate model would include the things that actually drive climate, such as orbit, sun, clouds, ocean oscillations, etc., plus GHGs of course. Its structure would be quite different from current “climate” models, as it could not be based on small slices of space-time, for the reason you give.

• dccowboy says:

They are not ‘climate’ models, they are ‘circulation’ models, as in General Circulation Models, intended (but failing) to model atmospheric circulation.

• JohnTyler says:

Numerical solutions to non-linear partial differential equations ALWAYS produce a tiny error, and if you seek just the numerically produced solution to one equation, you can construct the algorithm to make the error insignificant. (Actually, this is true for linear – and far SIMPLER – differential equations as well.)
But when the results of many of these numerical solutions are used as input parameters for the next set of equations, the error becomes magnified. Repeating this process thousands of times produces results with huge errors, and the final results are simply WRONG.

• PiperPaul says:

Evil twin phenomenon?

• Duster says:

This observation was originally made by Edward Lorenz in “Deterministic Nonperiodic Flow” in the 1960s. Lorenz concluded that very small variations could make immense changes in the state of deterministic systems over time. Since Lorenz’s work was in computational meteorology, one would have thought the modelers would have considered the conclusions and at least qualified the discussion of model accuracy.

• chrisyu says:

Even for a simple system, a double pendulum, accurately predicting the position and velocity after 15-20 swings becomes impossible. Yet CC scientists claim they can predict the climate 100 years out. Show me an accurate predictive computer model for a double pendulum; then maybe we can talk about your climate model.

• LewSkannen says:

I remember that being mentioned in the book. To calculate the position of the pendulum just 2 minutes ahead, you would need the initial conditions exact, down to the gravitational effect of a raindrop two miles away.

9. LewSkannen says:

The other analogy I like to use is to relate CO2 to currency forgery.
Everyone agrees that currency forgery drives up inflation, in the same way that everyone agrees that CO2 acts as a GHG.
No sane person, however, expects to be able to predict the world economy a hundred years from now based solely on the rate of currency forgery.

• David A says:

True, but currency forgery is theft, all bad.
CO2 is clearly net beneficial.

10. Nick Stokes says:

“It was modelers who dominated the Climatic Research Unit (CRU),”
Very strange ideas of models here, if GCM’s are what is meant, as Fig 2 implies. Which modellers dominated CRU?

“It doesn’t matter, because there are so few stations of adequate length or reliability. The mathematical formula for each grid cannot be accurate.”
What role are weather stations imagined to play in a GCM?

I think a lot of different things are mixed up here.

• Keith Willshaw says:

‘What role are weather stations imagined to play in a GCM?’

1) Computer models rely on input data – Weather stations are the source of that data.
I use models that simulate thermodynamic processes. If I don’t have valid input data the model CANNOT produce anything useful.

2) Validation. When I run a new computer model, I compare its output with real-life experimental results. If the model is not validated this way, it’s worse than useless.

• Nick Stokes says:

GCM’s do not use station data as input. They use Earth properties – topography, forcings etc. Some recent programs try to make decadal predictions from known state input. That would generally be a state of the kind you would get from a weather forecasting model. But there is no direct relation between stations and a GCM grid.

Validation – GCM’s do not predict station data. People might wish to compare them with some kind of global or regional index. But they are climate models, not weather models.

• DEEBEE says:

“GCM’s do not predict station data.”

As usual with Nick, watch the pea. Nobody is claiming that, Nick; you just make an obvious assertion and wait till someone falls into it so you can swoop, conquer and publicly preen your intellect.

• Nick Stokes says:

This post says that computer models cannot work, and then says a whole lot about deficiencies in station data, particularly in relation to GCM grids. But GCM’s don’t use station data. So how can issues with it stop them working?

• DirkH says:

Nick Stokes
October 17, 2014 at 3:13 am
“This post says that computer models cannot work, and then says a whole lot about deficiencies in station data, and particularly related to GCM grids.. But GCM’s don’t use station data. So how can issues with it stop them working?”

It makes it impossible to validate a climate model. And in fact, no climate model has ever been validated.

• Duster says:

Nick Stokes says: “…GCM’s … use Earth properties – topography, forcings etc. …”

Which begs the question of how well topography is modeled to start with, and, beyond that, how well any of the other properties are estimated or modeled.

Validation – GCM’s do not predict station data. People might wish to compare them with some kind of global or regional index. But they are climate models, not weather models.

Nick, you’re dodging issues by “scene shifting.” If, using more parameters than necessary to model an elephant, no set of GCMs can reasonably track real world data throughout the available instrumental record, then there is clearly error in the application of “Earth properties” in the models or a fundamental misunderstanding of those properties. Since the models also plainly display a bias in the direction in which they miscast the real world, the bias offers a clue in where the error must lie. The fault cannot be in the real world, and therefore, can only be in the theory, or the implementation of the theory in the computer model.

• CodeTech says:

What role are weather stations imagined to play in a GCM?

What a truly bizarre question to ask. Truly.

• Nick Stokes says:

• CodeTech says:

The fact that you can’t proves that you have no idea what you’re even doing here.

How embarrassing for you.

• DEEBEE says:

At least for validation, Nick. Unless the anomalies are being produced from your nether regions.

• Nick Stokes says:

Here is one relatively simple, well-documented GCM, CAM 3.0. You will not find station data input anywhere.

This post says that GCM’s cannot work, apparently because of some deficiency in station data. That’s just not true: they don’t use it. It might be that someone later finds a mismatch with some index derived from it. If so, then they obviously have the necessary station data to do that.

• Some nut wrote, “What role are weather stations imagined to play in a GCM?”

If the climate models don’t use real world data for input or for validation then they are just mega-millions computer games. I do agree that the present “climate models” are useless on their face and any data from the real planet earth would just get in the way of providing the answer that the funding agencies want to see.

I am just surprised that an alarmist would just up and admit that real data and climate models are total strangers.

There are people condemned, needlessly, to energy poverty by people like Stokes. It is a travesty.

As someone else pointed out, there are no climate models at all. There are just low-quality weather models, and it would surprise us if they got the weather right 90 days out.

• DirkH says:

Nick Stokes, do you say that climate models should not be validated? Why should they not be validated?

• TYoke says:

Nick wrote: “This post says that GCM’s cannot work, apparently because of some deficiency in station data. … It might be that someone later finds a mismatch with some index derived from.it. If so, then they obviously have the necessary station data to do that.”

Those sentences are the kernel of your argument, and that argument contains a pair of errors.

No one is arguing that the station data is used in an entirely un-massaged form. Of course the direct observations are condensed into some sort of “index” that is gridded, averaged, smoothed, extrapolated, etc.

The problem is that the fact that an “index”, rather than raw data, is incorporated or used to validate the model most certainly should not be construed to mean that the raw observations somehow become unnecessary. Quite the contrary: the intermediate modeling steps between the raw data and the ultimate GCM merely extend the chain of inferences between Garbage In and Garbage Out.

You reveal your recognition of this reasoning error in your last sentence: “they obviously have the necessary station data to do that”. Why obviously? The inadequacy of the station data is the whole point of the article. Your bland and unsupported assertion of adequacy does not make that data adequate.

• Tim Hammond says:

So how exactly do you think you know what the temperature is at any given time and any given place if you don’t have something measuring the temperature there?

You have a model that accurately replicates temperature in any given location entirely from first principles do you?

• CodeTech says:

How many [insert name of group, in Canada a favorite is Torontonians] does it take to change a light bulb?… just one. He holds it and the world revolves around him.

Now, how does a GC Modeler determine tomorrow’s weather? He runs the model from the creation of the planet 4.5 billion years ago.

• Bob Boder says:

Nick doesn’t care; it’s what the models say that is important to him, not what they mean for anything, anyone or any part of reality.

• Keith Willshaw says:

Nick Stokes Said
‘GCM’s do not use station data as input’

Sorry old boy, but they assuredly do. The NASA GISS ModelE has a data file that is 191 Mb compressed and contains a massive amount of initialization data in great detail, from coarse factors such as surface temperature down to things like transient 3-D aerosol concentrations. If you don’t have basic input data such as cloud cover, surface temperature and prevailing wind direction, you have no hope of even coming close to modelling climate.

‘Validation – GCM’s do not predict station data.’
Indeed they don’t. The fact is, GCM’s are supposed to model the very conditions that the stations measure, such as precipitation, temperature and wind speed. They have, however, failed miserably to do so.

• Uncle Gus says:

Once again, I wish this site had a “Like” button.

• Alx says:

Data is input into formulas. Data is used to validate formulas. Data does not equal formulas. GCMs try to represent the interactions and processes between the atmosphere, oceans and land surface, and they are poorly suited to forecasting the climate’s response to increasing CO2. Your comment suggests temperature is not an input into a GCM, only an output, which is as strange as an elephant in a teacup to me. But ignoring that, what is stranger than flocks of zebras flying over Manhattan is that you cannot acknowledge that the data and the models are both clearly insufficient for any climate forecasting, except for entertainment purposes.

Using that methodology I will now go balance my checkbook using a random number generator and an abacus with an unknown number of beads missing.

• rgbatduke says:

I’m trying to decide if this is another “I agree with Nick Stokes day”. On the one hand, you are absolutely correct when you say that weather stations do not contribute direct input to climate models. On the other hand, the climate models do have to be initialized, and because at least some of the subsystems that play a major role in the time evolution of the climate have very long characteristic times and because the climate is highly non-Markovian, one has to start them from initial conditions that are not horribly out of balance with respect to reality. This problem is somewhat exacerbated by the chaotic nature of the dynamics, of course.

Still, Tim Ball is incorrect to assert that it is the lack of station data per se that is the downfall of climate models. The downfall of climate models comes from the explicit assumption that the average of the averages of many climate models, each one producing an ensemble of chaotic trajectories from initial conditions that are explicitly assumed to be irrelevant in the long run, is a useful predictor of the one actual chaotic trajectory the Earth is following while self-integrating the actual physics at microscopic length scales instead of integrating equations that are called “physics” on an absurdly large spatiotemporal stepsize but that somebody basically made up.

Let’s see how sensible this is. Here is a typical Feigenbaum tree:

http://log.1am.me/2010/12/bifurcation-trees-and-fractals.html

Typical is good enough, since Feigenbaum basically showed that this tree has universal properties for iterated maps that lead to chaos, and the weather/climate system is of course the “iterated computational map” in which chaos was first discovered.

Even though the tree structure of the periods is universal, it is not insensitive to the underlying parameters used to generate it. Indeed, small changes in those parameters can lead to large shifts in precisely where the bifurcations occur, in the specific distribution of the strange attractors in whatever high dimensional space one evaluates the dynamics (in the case of weather/climate) but even in the simple few-dimensional systems typically used to generate graphs like this. We’ll stick to the few-dimensional case.

So imagine a set of figures like this, each one generated by a model, each model having a slightly different implementation of an iterated map (e.g. different parameters, slightly different functions being solved per step, different stepsize, and in all cases stepsizes so vastly larger than the stepsize needed to actually track the underlying nonlinear ordinary differential equations that pretending that the iterated map is somehow a “solution” to the ODEs becomes an exercise in the suspension of disbelief akin to that required by a typical space opera with FTL ships bopping all over the Universe in human-short times). Running each model with slightly different initial conditions in every case leads to completely different trajectories — it is in fact the opposite of the behavior of damped, driven linear oscillators, where initial conditions are indeed irrelevant, producing a transient behavior that damps away and leaves one with a nice, clean, periodic signal slaved to the periodic driver. In chaotic models this is in some sense inverted — even starting with almost the same initial condition, the differences grow until the correlation function between two initially almost identical trajectories decays pretty much to zero as the two models are, on average, completely decorrelated for the rest of eternity.
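The decorrelation described above can be sketched with the simplest chaotic system going. In this toy Python example (the logistic map is a stand-in for a vastly higher-dimensional model — nothing here comes from a real GCM), two runs started 1e-10 apart agree closely at first, then become, for all practical purposes, unrelated:

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 is fully chaotic."""
    return r * x * (1.0 - x)

def trajectory(x0, n):
    """Iterate the map n times from initial condition x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 100)
b = trajectory(0.3 + 1e-10, 100)  # "almost the same initial condition"

# Early on the runs agree to several digits; the perturbation roughly
# doubles each step, so by step ~40 it has grown to order one and the
# two trajectories have decorrelated.
early_gap = abs(a[5] - b[5])
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
```

This is exactly the inversion of the damped linear oscillator case: instead of initial-condition differences dying away, they grow until the correlation is gone.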

So let’s mentally average over a large set of these decorrelated trajectories, per model. This, of course, effectively linearizes them — the resulting trajectory is (gasp!) very close to the original non-chaotic linear-response trajectory that was split up into the tree by the nonlinearities in the appropriate regime. Do this for all of the distinct models, and one has a collection of nice, tame, linearized trajectories, all of them averages over chaos, all of them different (possibly even substantially different), and then let’s average the averages all together to produce a grand ensemble superaverage of the running averages of the individually produced chaotic trajectories evaluated by integrating a made-up dynamical system at distinct spatiotemporal length scales that completely erase the actual variation observed on all smaller length scales and that Nature seems to think is important to the ultimate dynamics of heat transport efficiency as it self-organizes them quite differently as conditions change on a much smaller length scale than the models track.
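The "average over chaos" step can be made concrete with the same toy map (Python sketch; the ensemble size, burn-in, and run length are arbitrary choices for illustration). Each individual run swings across nearly the whole interval, while the ensemble mean is an almost featureless flat line near 0.5 — the averaging has linearized away everything the trajectories actually do:

```python
import random

random.seed(0)

def step(x):
    return 4.0 * x * (1.0 - x)  # fully chaotic logistic map

n_runs, n_steps, burn_in = 2000, 200, 50
runs = []
for _ in range(n_runs):
    x = random.random()
    for _ in range(burn_in):       # discard the startup transient
        x = step(x)
    traj = []
    for _ in range(n_steps):
        x = step(x)
        traj.append(x)
    runs.append(traj)

# Ensemble mean at each time step, taken across all runs.
ensemble_mean = [sum(r[t] for r in runs) / n_runs for t in range(n_steps)]

# One run swings across most of [0, 1]; the grand average is nearly flat.
spread_one = max(runs[0]) - min(runs[0])
spread_mean = max(ensemble_mean) - min(ensemble_mean)
```

The flat average is a perfectly well-defined number; whether it predicts anything about the one trajectory the system actually follows is another matter entirely.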

Now, claim that this super-averaged trajectory is a useful predictor of the future climate, even though the individual chaotic trajectories produced by each climate model have grossly incorrect features compared to the actual trajectory of the one chaotic nonlinear climate we live in.

Sure, that works. If I apply it to a damped, driven rigid oscillator, I can prove that on average we expect to see it at rest. Or, if I’m slightly more sophisticated, I can show that it is still oscillating periodically. Or, if I’m nefarious, I can tweak the underlying model parameterization and “prove” lots of stuff, none of which has the slightest predictive value for the one trajectory that is actually observed, the single strange attractor of the underlying dynamics.

Of course nobody would be that stupid if they were studying a low dimensional chaotic system, or merely demonstrating chaos for a class. On the contrary, they would be pointing out to the class that this sort of thing is nearly pointless, because this:

http://video.mit.edu/watch/double-pendulum-6392/

is what one observes, completely differently every time, even without actually driving the e.g. double rigid pendulum with a noisy not-quite periodic force so that it never manages to come close to damping down to “simple” linearized small-oscillation behavior.

Here is one place where Tim Ball’s observations above are quite apropos. I think you missed this, but one of the points of the inadequacy of the measurement grid relative to the absurdly inadequate integration grid is that the latter by its nature requires assigning “average” numbers to entire grid cells. Those average numbers, in turn, have to in some sense be physically relevant to the actual numbers within each cell. For example, would you say that when the models assign a temperature of 291.6719345 K, an air pressure of 0.993221573 bar, a water vapor content of 0.08112498, and an air speed of 2.11558324 m/sec in some single vector direction to a volume of 10,000 cubic kilometers of air (a 100x100x1 km cell), they are implicitly assuming that those values represent the actual averages of the relevant measured quantities within that cell? Or are they simply toy parameters, variables created within a toy model that bear little to no necessary resemblance to the variables they are named after?

Damned either way, of course. If they are indeed supposed to represent coarse-grained averages, then there is Ball’s observation that when he personally made actual observations of temperature distributions in just one small portion of the ground, he found that most of the assumptions of homogeneity required to assign an average value to even much smaller cells are simply false. He observed, for example, numerous inversions in the first few hundred meters at just one point in one cell, where the well-established dogma is that the lapse rate is (almost always) monotonic. (I would point out that a pattern of inversions is completely consistent with the topological folding process of lateral turbulent airflow across the warmed ground surface and could probably have been predicted — one can certainly observe them in everything from the original rotational mixing experiments to the patterns made by rising smoke from a cigarette or stick of incense.) On the other hand, if one asserts that they are not supposed to represent the actual averages of the physically relevant quantities in the grid cells, if one asserts that they are some sort of renormalized parameters that are only somehow monotonically connected to the actual averages by some sort of map, well, you’ve just acknowledged that in that case we have no good reason to think that the averages thus produced in 50 years represent the global averages that will be observed at that time! If they aren’t even an accurate representation of the average temperature(s) per cell now, but are the result of an unknown transformation of the average temperatures of the cell into the space where the coarse-grained dynamics is being evaluated as if the cells were “microscopic” in size, using effective interactions between the renormalized variables, how do you expect to be able to map the results of the computation back to actual temperatures then?
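A toy version of the coarse-graining problem (Python; the profile numbers below are invented for illustration, not Ball’s actual measurements): a short vertical temperature profile containing two shallow inversions is collapsed to a single cell-average value, and the inversions simply vanish from the representation:

```python
# Hypothetical vertical temperature profile (K), one reading per 100 m.
# The numbers are invented; they merely contain two shallow inversions
# of the kind discussed above.
profile = [288.0, 287.2, 287.9,   # warms with height: inversion no. 1
           286.5, 285.8, 286.4,   # inversion no. 2
           284.9, 284.1, 283.4, 282.6]

# A grid cell spanning the whole column carries one "average" number...
cell_avg = sum(profile) / len(profile)

# ...from which the inversion structure is unrecoverable: the single value
# is equally consistent with a smooth, monotonic lapse rate.
inversions = sum(1 for lo, hi in zip(profile, profile[1:]) if hi > lo)
```

The cell average here is about 285.7 K whether the column contains two inversions, five, or none at all — the structure the real atmosphere cares about is gone before the dynamics ever sees it.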

That’s really the problem, isn’t it? One of the appealing things about GCMs is that they produce something that really looks like actual weather. With enough tweaking (and a bit of brute force renormalization of energy per timestep to eliminate drift that would otherwise cause them to fail so badly that nobody could miss it) they produce a simulated world on which storms happen, rain falls, droughts occur, all with lots of beautiful chaos, and if one works very hard, one can actually keep the models from having egregious instabilities that take almost all initial states and (for example) collapse to the iceball Earth strange attractor when they are applied to initial conditions corresponding to (say) the middle of the Wisconsin glaciation, with runaway cooling from the (positive) ice-albedo feedback, especially at times in the Wisconsin when atmospheric CO_2 apparently dropped to under 200 ppm and nearly caused mass extinction of plant life.

But the weather they produce isn’t the real weather. It doesn’t even qualitatively correspond to the real weather. And while the weather they produce can certainly be averaged, and while one can certainly assert that this average is “the climate”, the values in each cell of the system aren’t even in a simple one-to-one mapping with the average temperatures one would obtain in the cells with a simple rescaling of the integration grid, let alone the 30 orders of magnitude rescaling needed to contemplate resolving those little whorls of turbulent folding produced by every warmed leaf in the sun.

The conflict here is as old as physics itself. We cannot solve the actual problem of weather or climate and we know it. To be perfectly blunt, we will never be able to solve the actual problem of weather prediction or climate prediction, at least not in any sense that is formally defensible. We therefore tell ourselves that most of the things that prevent us from being able to do so do not matter, that we can average them away. We appeal to damping to erase the otherwise intractable detail that keeps us from being able to proceed. Since this work has to be funded, we make the infinitely adjustable assertion that the next doubling of computational power, the next halving of computational spatiotemporal step size, will at the same time reveal more of the missing detail and make the models more accurate, and yet that the unrevealed detail still remaining isn’t important to the long term predictions. We want to have our cake — models that already work well enough at the level we can afford to compute — and eat it too, by simultaneously claiming the need to build models that work better as soon as we can afford to compute better.

I used to see this all the time in field theory computations presented at conferences — so much so that I named it the “fundamental theorem of diagrammatic quantum field theory”. Speaker after speaker would present the results of their computations in some problem in QFT, and explain the Feynman diagrams that they included in their computation. This was typically a fairly small set of all Feynman diagrams for the problem, because these problems are also basically uncomputable as there are an infinite set of Feynman diagrams and little a priori reason to think that any given diagram omitted from a given computation is, in fact, unimportant. Everybody knows this. We can’t even prove that sums of specific sub-classes of diagrams necessarily converge, even as the formal theory is derivably exact if only one could compute and sum over an infinite number of diagrams.

Each talk would thus without fail begin with a short introduction, explaining the diagrams that were included (usually because they were the diagrams they could algebraically derive formulas for and that they could afford to add up with whatever their computing resources of that particular year were) and then they would invoke the fundamental theorem: “All of the diagrams we omitted from this computation do not significantly contribute.”

Amazingly, the very same people would return two years later, where, armed with new computers and more time, they would have redone the entire computation and included several new diagrams omitted two years earlier. The results they obtained would (unsurprisingly) have changed. And yet there it was, at the beginning of the new talk; instead of saying “Hey dudes, we learned that you just can’t trust this diagrammatic perturbation theory thing to work because we included a couple of new diagrams that we thought were unimportant a couple of years ago and they turned out to be important after all, maybe we should all try a completely different approach to solving the problem,” they opened, as before, with “All of the diagrams we omitted…”

And this was still better than climate science, because in at least a few cases the computations could be directly compared to measured quantities known to fairly high precision. So each talk would not only compare results of computations carried out to different order in diagrammatic perturbation theory with sums over all ladders, or single loops, or whatever (while still omitting, note well, countless diagrams even at low order as there are a lot of diagrams for even fairly simple systems), they would compare the results to the numbers they were trying to compute. Sometimes adding diagrams made the results better. Sometimes it made them worse. As I said, no theorem of uniform convergence, no real theorem of asymptotic convergence in sums over diagrams of a specific type.

But as a jobs program for physicists, it was and remains today marvelous. And from that point of view or the point of view of pure science, the work is by no means without merit! Some very famous physicists who have made real contributions have done their time doing “wax on, wax off” computations of this sort, or “swinging the chain, swinging the chain” (two movie references, in case this is confusing, google them:-).

At least they knew better than to take the results of forty distinct diagrammatic perturbation theory computations done by forty different groups to forty different orders with forty different precisions and forty different total expenditures of computational resources and with forty different sources of graduate-student-induced computational/method error, carried out over forty years and then do the flat average over all of the results and present the result to the world as a better evaluation of the number in question than the direct experimental measurement of the actual number in nature!

I think that is Tim’s real assertion here. In actual fact, we haven’t got any particularly good idea what the global average temperature actually is. In a sense it is worse than our knowledge of the output of climate models that are supposed to simulate it. We have data on an irregular grid that samples some regions finely and most regions enormously coarsely, even worse than the better GCMs (that at least use a regular grid, even if the usual lat/long decomposition in degrees on a sphere is dumb, dumb, dumb). It is probably excessive to claim that we know the actual global average surface temperature to within one whole degree absolute — if we could agree on some way to define global average surface temperature, if the global average surface temperature we agreed on could be somehow mapped in a sensible way into the average implemented in a renormalized way at the grid resolution we can afford to compute this year.
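The irregular-sampling point can be made concrete with a toy calculation (Python; the temperature function and the “station network” below are invented for illustration). A smooth latitude-dependent field is averaged two ways: area-weighted over a regular grid, and naively over a station set crowded into the northern mid-latitudes. The two “global averages” differ by several degrees:

```python
import math

def true_temp(lat_deg):
    """Invented zonal-mean temperature (K): warm tropics, cold poles."""
    return 275.5 + 25.0 * math.cos(math.radians(lat_deg))

# Area-weighted mean over 1-degree latitude bands (band area ~ cos(lat)).
lats = [l + 0.5 for l in range(-90, 90)]
weights = [math.cos(math.radians(l)) for l in lats]
area_mean = (sum(w * true_temp(l) for w, l in zip(weights, lats))
             / sum(weights))

# A "station network" with 50 stations crowded into 40-60 N and only
# five scattered elsewhere, averaged with no area weighting at all.
stations = [40.0 + 0.4 * i for i in range(50)] + [-60, -30, 0, 10, 75]
station_mean = sum(true_temp(l) for l in stations) / len(stations)

bias = station_mean - area_mean  # several degrees cold for this toy field
```

Even with a perfectly known, perfectly smooth field, the sampling pattern alone injects a multi-degree bias; with a real, noisy, inhomogeneous field and a historically shifting network, the problem only gets worse.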

In summary, let’s say that today I half agree with Nick. Yes, the measurement grid is sadly irrelevant to starting climate models as much as it is in fact relevant to weather prediction with pretty much the same sort of models. The models do not use any actual measurements to initialize, which is at least partly due to the fact that we don’t have any set of actual measurements they could use to initialize that wasn’t egregiously in error over most of the globe. So they start up any old way (hopefully close enough to some actual state of the planet at the past times one starts the models that the model building process remains on the same attractor, oops, hmmm, maybe the correspondence of initial state and reality matters after all!) and hope that after a decade or so the specific behavior associated with the transient goes away and leaves them with, um, “something” predictive of the actual future climate.

It is highly relevant to the model building process itself, as well as the model validation process. Specifically, how can we know that the detail omitted in the climate models is unimportant, especially when it is important in every other chaotic system we’ve ever studied, including many that are far simpler? And, how can we demonstrate that these enormously complex models are actually working when we don’t have a very good idea of the actual state of the planet that can be compared to their predictions? When we don’t know the actual mean surface temperature to within a whole degree (or even how to define “mean surface temperature”), and when our ignorance of that temperature however you define it increases significantly as one moves backwards in time, how can we even resolve model failure?

rgb

• Bob Boder says:

So, in essence, you are saying “Nick is right, but the models are useless.”

Thank you!

• Uncle Gus says:

Dear God. Do you stay in all day or do you just type very, very fast?

Nevertheless, the system must be classically damped in some way, since it’s cycled closely around the freezing point of water for several billion years. (Not that that excuses shoddy thinking, mind you.)

• mullumhillbilly says:

Thank you, rgb, for a great, landmark statement of all that is wrong with GCMs and the ludicrous, preposterous averaging of many nonlinear dynamic systems using big, big grid cells. Having read Gleick’s “Chaos” many years ago, I’ve always wondered how any scientist could claim to be a climate scientist when they apparently hadn’t read and fully absorbed the implications of Lorenz’s 1967 paper.

• Joe Born says:

I do so want to believe in Dr. Brown’s conclusions, since he knows so many buzz words that I don’t and because he arrives at destinations I find congenial. But I have found his logic wanting the few times I’ve actually taken the time to slog through his logorrhea.

I don’t mean just his laughable recent pontificating about nuclear-power patents (which I had dealt with before he was even out of college). I mean the areas about which he professes actual expertise, such as thermodynamics and statistical mechanics. He is among those responsible for my conclusion that science is too important to be left to scientists.

Believe me, we lawyers would love to be able to rely on the experts. But experience has shown us that we cannot. We challenge the experts not because we think we’re smarter than they are but rather because they so often prove themselves wrong.

Over the years I have enjoyed Dr. Brown’s cheerleading regarding the poor resolution of climate models, since it tended to support the impression I had formed. As a serious citizen, though, I have sadly concluded that I can no longer look to him for confirmation.

And I would caution other laymen against doing so.

• Nick Stokes says:

RGB,
“On the other hand, the climate models do have to be initialized, and because at least some of the subsystems that play a major role in the time evolution of the climate have very long characteristic times and because the climate is highly non-Markovian, one has to start them from initial conditions that are not horribly out of balance with respect to reality.”

As with most CFD, that is not really true. They aren’t trying to solve an initial value problem. They are trying to determine how forcings and climate come into balance. That is why they usually wind back several decades to start. It’s not to get better initial conditions – quite the reverse. There will be error in the initial conditions which will work its way out over time. If they start too hot, heat will radiate away over that windup time, until by the time you reach the period of interest, it is about right relative to the forcing.
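Stokes’s wind-back argument can be sketched with a zero-dimensional relaxation toy (Python; the equilibrium temperature, relaxation time, and starting errors are invented numbers, and nothing this simple captures a real GCM). A run started too hot and a run started too cold both relax to the same forcing-set equilibrium well before the period of interest:

```python
def spin_up(t_start, t_eq=288.0, tau=10.0, dt=1.0, years=100.0):
    """Newtonian relaxation dT/dt = -(T - T_eq)/tau, integrated with
    forward Euler; returns the temperature after the windup period."""
    temp = t_start
    for _ in range(int(years / dt)):
        temp += dt * (t_eq - temp) / tau
    return temp

hot_start = spin_up(295.0)   # started 7 K too hot
cold_start = spin_up(280.0)  # started 8 K too cold
# After a century of windup, both initial errors have decayed to nothing:
# in this linear toy, the starting state no longer matters.
```

This only illustrates the damping claim itself; whether the real, chaotic, non-Markovian climate system forgets its initial state this obligingly is exactly what rgb disputes above.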

“I think you missed this, but one of the points of the inadequacy of the measurement grid relative to the absurdly inadequate integration grid is that the latter by its nature requires assigning “average” numbers to entire grid cells”

That’s what you do in CFD, or any PDE solution, on any scale. You solve for nodal values, and it’s related to continuum by interpolation (or averaging after integration). And with CFD, direct Navier-Stokes solution is impractical on any scale. It’s always done with some kind of turbulence modelling (except at the very viscous end). Grid cells are never small enough. There is always some scale that you can’t resolve. But CFD is big time useful. You get answers on the scale that you can resolve. And with GCM’s and climate, that scale is useful.

“Or are they simply toy parameters, variables created within a toy model that bear little to no necessary resemblance to the variables they are named after?”
They generally relate to conserved quantities (heat, momentum, etc.) and are best expressed that way in equations. So when you refer to average temperature in a cell, you are referring to heat content. And the heat equation used just says that that heat content is advected in accordance with average gradients.

• Mario Lento says:

Nick Stokes wrote” You get answers on the scale that you can resolve. And with GCM’s and climate, that scale is useful.”
++++++++++
Useful? For what? What bit of evidence is there that any climate model can spit out anything that resembles climate?

• Nick Stokes says:

Mario,
“What bit of evidence is there that any climate model can spit out anything that resembles climate?”

Here is a visualisation of the ocean component of the GFDL AOGCM, with SST. I chose it because it shows familiar patterns generated by a GCM. The currents that you see are not the solution of a meaningful initial value problem. Nor are they obtained using station data, or even observed SST. They are first-principles solutions of a whole dynamics model, and are the result of air/water properties, forcings and topography. And maths.

• Nick Stokes says:

I’ll see if I can embed that video from GFDL:

• “The key is the realization that climate system predictions, regardless of timescale, will require initialization of coupled general circulation models with best estimates of the current observed state of the atmosphere, oceans, cryosphere, and land surface. Formidable challenges exist: for instance, what is the best method of initialization given imperfect observations and systematic errors in models?”

Initialization and validation would require some kind of data. Missing detail in significant ocean regions calls into question how accurate the starting point is.

• Nick Stokes says:

“Missing detail in significant ocean regions calls into question how accurate the starting point is.”

This is the newish idea I referred to above – something between a GCM and a weather forecast. It’s not yet clear how successful it will be. But it isn’t traditional GCM climate prediction.

• All of this is irrelevant; the models don’t work, regardless of any of these arguments. You don’t need to know the absolute initial conditions for the models to have value, but they do need to generate something that is close to real conditions, and they don’t.
Nick will of course now say that they do and point to all kinds of BS claims of accurate results. So before we go there: Nick, give us some accurate idea of what is going to happen over the next 5 years based on your beloved models. Put your name to something that is forward-looking. But of course you won’t, because then you will have to admit that the models don’t work when your “predictions” don’t pan out. Next you’ll say “I don’t make predictions,” of course.

• “They aren’t trying to solve an initial value problem. They are trying to determine how forcings and climate come into balance. That is why they usually wind back several decades to start. It’s not to get better initial conditions – quite the reverse. There will be error in the initial conditions which will work its way out over time. If they start too hot, heat will radiate away over that windup time, until by the time you reach the period of interest, it is about right relative to the forcing.”

I think I understand your explanation. During the wind-up, a GCM will go towards a balanced situation. During and after the forcings have affected things, it will go towards the new balanced situation. When we get to the period of interest, we make a balance sheet of the climate. We make another one at the end of the model run. Explaining what happened between the two balance sheets is an income statement, which will show a gain or a loss of heat. To have an accurate income statement (the effect of forcings), it is helpful to have accurate balance sheets: the initial and ending values. We can check the balance sheets against the income statement. I find it difficult to say we can minimize the importance of the initial and ending values and still have a good income statement.

• Reading some of your comments, I might be making progress. My earlier example had a beginning balance sheet (B/S), an ending one, and an income statement (I/S) that connects the two. Using your scientific-furnace example: after hours of being on, inputs equal outputs; it is steady state. That temperature is measured (B/S). Some attribute is changed. The temperature is measured again (B/S). The temperature change (I/S) is attributed to whatever change was made. The model’s books balance. Where we seem to be is that the climate data from observations does not interchange with the GCMs’ data. They are kind of the same thing, but with important qualifications used to note their differences.

11. Nick Stokes:
What role are weather stations imagined to play in a GCM?

Validation?

• DEEBEE says:

Absolutely. And of course he is laying his usual trap by then referring to weather station temperature, so that someone can get into a debate about that rather than the poor prediction of anomalies

• Stokes,

“apparently because of some deficiency in station data.”

If one cannot model initial conditions one cannot model final conditions with any sort of accuracy. As if you did not already know that. Disingenuous, much?

• Nick Stokes says:

“If one cannot model initial conditions one cannot model final conditions with any sort of accuracy. As if you did not already know that.”

One thing I am familiar with is computational fluid dynamics. There you are hardly ever seeking an initial or final state. You are solving a time-variable system, usually to gather some sort of statistics: lift and drag on an airfoil, say. Sometimes there is a particular event or perturbation. GCM’s are similar. You generate synthetic weather to gather climate statistics. It’s not an initial value problem.
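The “generate synthetic weather, gather climate statistics” workflow Stokes describes looks roughly like this toy (Python; the signal, its true mean “lift” of 1.0, and the startup transient are all invented): discard the spin-up portion of a noisy time series, then time-average what remains:

```python
import math
import random

random.seed(1)

def lift_signal(n, transient_amp=5.0, tau=50.0, noise=0.2):
    """Invented 'wind tunnel' trace: a decaying startup transient plus
    persistent fluctuations around a true mean lift of 1.0."""
    return [1.0 + transient_amp * math.exp(-t / tau)
            + random.gauss(0.0, noise) for t in range(n)]

sig = lift_signal(5000)

naive_mean = sum(sig) / len(sig)         # contaminated by the transient
spun_up = sig[500:]                      # drop the startup portion
mean_lift = sum(spun_up) / len(spun_up)  # the statistic actually reported
```

Averaging before the transient has been discarded biases the statistic; after it has, the time-average is stable. The open question in the thread is whether climate statistics gathered this way converge to anything the real system will do.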

• David A says:

Nick, Nick, Nick, there you go again. None of us knows if that initial point is positive or negative relative to equilibrium. To gauge accuracy you must initialize to an observation, and you must meet the endpoint observation, and you should backcast to the past and meet that observation as well.

• Nick Stokes says:

“None of us know if that initial point is positive or negative to equilibrium.”
You don’t need to. Think of that airfoil problem. It has turbulence, vortex shedding, “weather”. So what do you do experimentally? Crank up the wind tunnel and take measurements. No-one tries to determine the initial state of a wind tunnel.

Same with a CFD analysis. You emulate the running state of a wind tunnel and let it compute for a while. You don’t try to determine a solution dependent on an initial state. There isn’t one that makes sense.

• David A says:

Nick says…”You don’t need to. Think of that airfoil problem. It has turbulence, vortex shedding, “weather”. So what do you do experimentally? Crank up the wind tunnel and take measurements. No-one tries to determine the initial state of a wind tunnel.”
—————————————————————————–
You really are being disingenuous. We control a wind tunnel to exactly the speed we choose, to measure the aerodynamic performance of whatever we place inside it. The past performance of the wind tunnel is irrelevant to the current wind in the tunnel, and irrelevant to the current performance of whatever object is now in the tunnel.

A temperature is entirely different. If the current atmospheric temperature is negative (cool) relative to the current inputs, it will warm even with zero change in input. If we foolishly ASSUME the current radiative imbalance was zero, then we could mistake the proposed change in conditions over the time of the model run (additional atmospheric CO2) for the cause of the observed warming, when in fact said warming would have occurred even without the additional CO2.

Did you really make me type this?

• Nick Stokes says:

“A temperature is entirely different”
OK, think of controlling a scientific furnace. You set the power, let it settle until forcing balances heat loss (maybe overnight), then take measurements. By settling, you don’t care what the temperature was when you applied controls. Hotter or colder.

Same with GCMs. If you want to model the 21st century, you start maybe in 1900. You don’t know that much about 1900, certainly not wind speeds etc. But you apply 20th-century forcings, so that after 100 years the temperature is in balance with the forcing, even if the forcing varied. So you have a starting point very insensitive to the actual state in 1900.

• The contention that there is a “forcing” posits the existence of a linear functional relation from the magnitude of the change in the “forcing” to the magnitude of the change in the global temperature at equilibrium. You say you’d like to test this contention? Whoops, you can’t: the change in the global temperature at equilibrium is not an observable feature of the real world.

• David A says:

Nick, really? Now you move from a wind tunnel to a furnace, and achieve a hypothetical equilibrium. Sorry, the earth’s climate is neither a wind tunnel nor a furnace. Read below for why.

The scientific evidence regarding the CO2 response to warming is that it takes centuries for the entire system to respond. Therefore you have no idea if the earth is now, or was then, at equilibrium. And in many cases the paleo proxy record indicates the earth cools again after the CO2 increases, indicating that CO2 responds to warming but does not feed back into additional warming, or if it does, it is too weak to overcome other natural processes. Ocean currents take centuries to turn over. Ocean responses can be very long term, as are solar changes; thus even starting at 1900 does not save the climate models.

Besides, Nick, the climate models are all wrong against observations in the same direction. This consistent error of the climate models should inform you that they likely peg the influence of CO2 much higher than the real world does.

Think of the climate very simply as dozens of teeter-totters: down on the right is warming, down on the left is cooling. All of these teeter-totters oscillate at different frequencies, some daily, some seasonal, some decadal, some over centuries, some variable, and many we do not understand. This is why it is very difficult to pick any one factor and clearly discern its influence against the noise, as they are all interacting with and competing against many other factors. It is likely that major shifts in climate only occur when, by happenstance, an adequate number of the teeter-totters synchronize to one side at the same time. BTW, this chaotic situation is further complicated by the fact that at some global average temperatures, some of what once influenced warming may now influence cooling. It is also highly unlikely that any of these factors are directly linear in how they apply.
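The teeter-totter picture can be illustrated with a toy superposition (my own construction, with made-up periods): each oscillation is small on its own, and the combined signal only approaches its theoretical extreme when several cycles happen to line up at once.

```python
import math

# Toy "teeter-totter" superposition: six equal-amplitude oscillations with
# unrelated periods (illustrative numbers, in years). The sum is noisy, and
# it only nears its aligned maximum of 6.0 when several cycles coincide.
periods = [4, 11, 22, 60, 200, 1000]

def combined(t):
    """Sum the six oscillations at time t (years)."""
    return sum(math.sin(2 * math.pi * t / p) for p in periods)

signal = [combined(t) for t in range(2000)]
biggest = max(abs(x) for x in signal)
print(round(biggest, 2))  # largest excursion seen in 2000 years of the toy
```

Nothing about this toy is climate physics; it only shows why a single factor is hard to isolate when many cycles of different periods are superposed.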

However some things are known. The benefits of additional CO2 are known, and are a significant factor in the reasons we are not now in a world food crisis. The anthropogenic increase in CO2 currently saves the world about 15% of agricultural land and water, likely preventing much international and regional stress. The purported harms of CO2 are not manifesting.

Hansen was wrong about how much CO2 would accumulate in the atmosphere, wrong about its ability to warm, and completely wrong about the predictions of catastrophe, common in both journals and the media. The climate science community refuses to learn from the failures of the models. That failure is informative.

• The Stokes Syllogism:

GCM does not use weather station data.
World of Warcraft does not use weather station data.
Therefore World of Warcraft is a GCM.

If GCMs are unfalsifiable, they aren’t science. And if weather station data is even a rough set of information with which to check predictions, let alone local people sticking their own brewer’s thermometers on trees as in the good old days of colonial Australia, then there is no earthly point to GCMs other than as props in a typical boiler room con.

• David A says:

Yes, and all the models fail badly in one direction. So, we cannot properly initialize the models, and even if they are all initialized to the same start point (and none of us know whether that initial point is above or below equilibrium), they all fail badly in the same direction.

It is remarkable that dozens of models can all fail a chaotic system in a uniformly, systemically, and consistently wrong direction: way too warm. It takes a unique anti-science talent to be so consistently wrong, and an even greater hubris to learn nothing from such consistent failure and to arrogantly ask the world to change because of your pathetic climate models.

• Mario Lento says:

+1

• CodeTech says:

Yes. This.

• DirkH says:

Don’t forget, if you DO have that computational power and make tiny grid boxes with a tiny time step, then the entire statistical description of the grid box content breaks down, because a statistical description as used in the GCMs can only be halfway correct when there are many process instances in the box during the timestep described.

Reducing grid box size and duration requires completely new descriptions of the physics, on the microlevel. And the microphysics are not understood. Charge separation due to absorption of IR in water droplets, plasma bubbles, thunderstorms, …

12. I have been making the same point as this guest post for some years at several posts at
http://climatesense-norpag.blogspot.com.
The inherent uselessness of the IPCC models is discussed in some detail in Part 1 of the latest post at the above link. Here is the conclusion of Part 1:
“In summary, the temperature projections of the IPCC – Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy, their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.
The modeling community is itself beginning to acknowledge its failures and even Science Magazine
which has generally been a propagandist for the CAGW meme is now allowing reality to creep in. An article in its 6/13/2014 issue says:
“Much of the problem boils down to grid resolution. “The truth is that the level of detail in the models isn’t really determined by scientific constraints,” says Tim Palmer, a physicist at the University of Oxford in the United Kingdom who advocates stochastic approaches to climate modeling. “It is determined entirely by the size of the computers.” Roughly speaking, an order-of-magnitude increase in computer power is needed to halve the grid size. Typical horizontal grid size has fallen from 500 km in the 1970s to 100 km today and could fall to 10 km in 10 years’ time. But even that won’t be much help in modeling vitally important small-scale phenomena such as cloud formation, Palmer points out. And before they achieve that kind of detail, computers may run up against a physical barrier: power consumption. “Machines that run exaflops [10^18 floating point operations per second] are on the horizon,” Palmer says. “The problem is, you’ll need 100 MW to run one.” That’s enough electricity to power a town of 100,000 people.
Faced with such obstacles, Palmer and others advocate a fresh start.”
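Palmer’s rule of thumb in the quoted passage can be sanity-checked with simple arithmetic (my sketch; the cubic scaling exponent is an assumption): halving the horizontal grid spacing doubles the cell count in each of two horizontal directions, and a CFL-type stability limit roughly halves the allowed time step, so cost grows about eightfold, close to an order of magnitude.

```python
# Back-of-envelope cost scaling for grid refinement. Assumed cubic scaling:
# two horizontal dimensions plus the time step; vertical levels held fixed.

def relative_cost(grid_km, base_km=500.0):
    """Compute cost relative to a base_km grid under the cubic assumption."""
    refinement = base_km / grid_km
    return refinement ** 3

print(relative_cost(250.0))  # halving 500 km -> 250 km costs 8x
print(relative_cost(100.0))  # the 1970s-to-today step: 5^3 = 125x
print(relative_cost(10.0))   # 500 km -> 10 km: 50^3 = 125,000x
```

The 8x per halving lines up with the article’s “order of magnitude” figure; real models scale somewhat differently depending on how vertical resolution and parameterizations are handled.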

Having said that, the skeptical community seems reluctant to actually abandon the basic IPCC approach and continues to try to refine, amend, or adjust the IPCC models, with endless discussion of revised climate sensitivity, for example, or revised calculations of ocean heat content, etc. A different mindset and approach to forecasting must be used as the basis for discussion of future climate trends.

Part 2 at the linked post says:
“2. The Past is the Key to the Present and Future. Finding, then Forecasting, the Natural Quasi-Periodicities Governing Earth’s Climate – the Geological Approach.
2.1 General Principles.
The core competency in the Geological Sciences is the ability to recognize and correlate the changing patterns of events in time and space. This requires a mindset and set of skills very different from the reductionist approach to nature, but one which is appropriate and necessary for investigating past climates and forecasting future climate trends. Scientists and modelers with backgrounds in physics and maths usually have little experience in correlating multiple, often fragmentary, data sets of multiple variables to build an understanding and narrative of general trends and patterns from the actual individual local and regional time series of particular variables…
Earth’s climate is the result of resonances and beats between various quasi-cyclic processes of varying wavelengths combined with endogenous secular earth processes such as, for example, plate tectonics. It is not possible to forecast the future unless we have a good understanding of the relation of the climate of the present time to the current phases of these different interacting natural quasi-periodicities which fall into two main categories.
a) The orbital long-wave Milankovitch eccentricity, obliquity and precessional cycles, which are modulated by
b) Solar “activity” cycles with possibly multi-millennial, millennial, centennial and decadal time scales.
The convolution of the a and b drivers is mediated through the great oceanic current and atmospheric pressure systems to produce the earth’s climate and weather.
After establishing where we are relative to the long wave periodicities to help forecast decadal and annual changes, we can then look at where earth is in time relative to the periodicities of the PDO, AMO and NAO and ENSO indices and based on past patterns make reasonable forecasts for future decadal periods.
In addition to these quasi-periodic processes we must also be aware of endogenous earth changes in geomagnetic field strength, volcanic activity and at really long time scales the plate tectonic movements and disposition of the land masses.”

During the last few years I have laid out in a series of posts an analysis of the basic climate data and of the methods used in climate prediction. From these I have developed a simple, rational and transparent forecast of the possible timing and extent of probable future cooling by considering the recent temperature peak as a nearly synchronous peak in both the 60- and 1000-year cycles and by using the neutron count and AP Index as supporting evidence that we are just past the peak of the controlling millennial cycle and beginning a cooling trend which will last several hundred years.
For the forecasts and supporting evidence go to the link at the beginning of this comment.

• jorgekafkazar says:

“…Tim Palmer, a physicist at the University of Oxford in the United Kingdom who advocates stochastic approaches to climate modeling…”

13. Joel O'Bryan says:

The IPCC and the CMIP5 models at LLNL simply need to be dumped and the money saved or diverted. The government grant-funded and intramural climate modelers can go find jobs with Wall Street or financial and insurance firms, that is, become productive members of society.

• joeldshore says:

Since you left out the “sarc” tag, one could almost interpret your comment as serious, although I certainly hope you meant it sarcastically.

• Good point, since the science is settled. Why hasn’t there been a mass exodus from climatology? How can they continue to publish papers? What about the poor PhD candidates in the field?

14. richard verney says:

I for one would like to know where all the weather stations are located, and how many there are, that are said to enable us to form a view on global temperatures going back to 1850 and even back into the 1700s.

It is claimed: “The dataset includes measurements as far back as the year 1701 from over 7,200 weather stations around the world.” Who seriously believes that in 1701 there were some 7,200 weather stations distributed worldwide? That proposition sounds farcical to me.

It is only a very few countries that have weather data going back to the 1700s, and usually this is concentrated around their major cities, although in that era UHI is not an issue, but spatial coverage is an issue when one is claiming to reconstruct global temperatures.

15. Mike T says:

Of course the temperature on the ground is different to what is read in a Stevenson screen at 1.1 to 1.2m above the ground. Australian stations have read the “ground temperature”, or terrestrial minimum, for many years. For those stations without a terrestrial minimum, a “frost day” is recorded when the air minimum falls below 2.2 degrees C, or frost is observed.

16. I’d like to add my humble two cents to this good post and touch on one additional topic. I was the principal developer of large, 3-D electromagnetic codes for radiation transport modeling, which have been run on several thousand processors on one of the largest and fastest computers in the world, much like GCMs. After the initial architecture was in place, one of the first orders of business was to perform a rigorous set of validation exercises. This included comparing to analytical solutions for radiating dipoles and light-scattering spheres, which Gustav Mie, standing on the shoulders of Hendrik Lorentz, impressively accomplished. These validation procedures were *absolutely* necessary for debugging, for model verification and validation (separate things), and for providing the incremental confidence we needed to eventually perform our own studies, which ended up demonstrating, through both model and experiment, the breaking of the optical diffraction limit using nanoscale transport mechanisms. I can’t overstate how important this validation was. The writeup of this work was later awarded the national Best Paper in Thermophysics, which I mention in appreciation of co-authors Theppakuttai, Chen, and Howell.

But descriptions of climate modeling by news and popularized science didn’t satisfy my sniff test. Certainly I agree that carbon dioxide is a greenhouse gas which has a net warming effect on the atmosphere. We understand the crux of the debate has clearly been the quantification and consequences of this effect. As I would recommend to anyone with the capability and/or open mind, on any subject, I studied primary sources to inform myself. I approached my investigation from the standpoint of a computational fluid dynamicist.

I was immediately shocked by what I saw in climate science publications. There is much to say, but the only thing I want to comment on here is the lack of rigorous validation procedures for the models, as far as I can tell. Various modules (I’ve looked at NCAR and GISS, primarily) seem to have limited validation procedures, performed independently of other modules and within a limited scope of the expected modeling range. I have not found any conjugate validation exercises using the integrated models (though I am hopeful someone will enlighten me?). Not having the coupled heat transfer and fluid dynamic mechanisms validated to even a moderate degree, let alone the extreme degree of confidence required when projections are made several orders of magnitude outside the characteristic timescale of the transport mechanisms, is no better than playing roulette. It is like obtaining a mortgage with no idea what your interest rate is: absurd. The uncertainty will be an order of magnitude larger than the long-term trend you’re hoping to project. This is not how tier-1 science and engineering operates. This is not the level of precision required to get jet engines capable of thousands of hours of flight, spacecraft into orbit, and land rovers to specific places on other planets. Large integrated models of individual component models cannot rely on narrow component-level validation procedures. Period. It is an absolute certainty that the confidence we require in the performance of extremely complicated life-supporting vehicles cannot be claimed without integrated validation procedures, and those procedures do not appear to exist for GCMs. This, I believe, is one reason why we see such a spread in model projections: integrated validation does not exist. V&V is not a trivial issue; DOE, NSF, and NASA have spent many tens of millions of dollars in efforts begun as late as 2013 to determine how to accomplish V&V, for good reason. I support the sentiment behind those efforts.

So where does that leave us? GCMs can’t be validated against analytical solutions of actual planetary systems, of course. That is a statement that can’t be worked around, and it should itself provide a boundary condition on GCM projection confidence. But there are relevant analytical fluid dynamics solutions: idealized planetary systems that can be modeled and compared to ab-initio solutions, as well as line-by-line Monte Carlo benchmark simulations which can be performed to validate full-spectrum radiative transport in participating media. I’ve seen nothing that meets these criteria (though I am open to and welcome correction; I will give a nod to line-by-line radiation calcs which use the latest HITRAN lines, but these still don’t present validation spectra and are then parameterized into k-distribution form for use in GCMs).
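The validation-against-analytics exercise described above can be sketched in a few lines (an illustrative toy of my own, not a GCM component or anything from the commenter’s codes): an explicit finite-difference solver for the 1-D heat equation is checked against the exact decaying-mode solution.

```python
import math

# Validation sketch: solve u_t = alpha * u_xx on [0,1] with u=0 at the ends,
# starting from u(x,0) = sin(pi*x), using an explicit finite-difference
# scheme, then compare against the exact solution
# u(x,t) = exp(-alpha * pi^2 * t) * sin(pi*x).

def solve_heat(nx=51, alpha=1.0, t_end=0.05):
    dx = 1.0 / (nx - 1)
    dt = 0.25 * dx * dx / alpha          # safely below the stability limit 0.5
    steps = int(round(t_end / dt))
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(steps):
        un = u[:]
        for i in range(1, nx - 1):
            u[i] = un[i] + alpha * dt / dx ** 2 * (un[i + 1] - 2 * un[i] + un[i - 1])
    return u, steps * dt

u, t = solve_heat()
exact = [math.exp(-math.pi ** 2 * t) * math.sin(math.pi * i / 50) for i in range(51)]
max_error = max(abs(a - b) for a, b in zip(u, exact))
print(max_error)  # small discretization error, well under 1e-3
```

This is the pattern the comment is asking for: a known closed-form answer, a numerical solution, and a quantified error between them, repeated across the model’s expected operating range.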

My conclusion is that current GCMs are like lawn darts. They are tossed in the right direction based on real knowledge, but where they land is a complete function of the best-guess forcings put into it. This is in direct contrast to the results of highly complex models found elsewhere in science and engineering, which are like .270 rounds trained on target by powerful scopes. And they bring home prizes because they were sighted in.

• Uncle Gus says:

I think the problem is that people (particularly politicians) look at climate modelling and are told, “It’s computer modelling”, and “It’s science”, and have no real idea of how those things are really done.

Add to that the apparent fact that climate modelling is stuck not so much in the eighties as in the sixties – using the same philosophy and almost the same methods that the Club of Rome used. It was good enough then, but even then people in the know knew it was essentially RIRO, and didn’t take it seriously.

• TYoke says:

Very nice post. I also work in a fluid dynamics field and I was made a skeptic when I saw Al Gore insisting that no more debate was necessary since the “science is settled”.

Al Gore, the politicians, and the MSM were convinced so it was time to move straight on to public shaming of the “deniers”. In the official channels, group-think and heretic hunting have subsequently ruled the day.

• Speaking as someone who used to design projects at UN level for several years, I can assure all the puzzled scientists that the GCMs and all the other tools of the IPCC, UNFCCC etc. were designed to accomplish the Marxist redistribution of wealth and to destroy the industrial capacity of the civilised world. Some UN types say it openly; others talk in more veiled terms of the “millions killed by climate change” (yes, really) each decade.

Don’t let cognitive dissonance overtake you: judge models by their effect on the real world to see the goals of those who pay for them. If a Borgia wants a painting done, you paint how the Borgia tells you to.

Gleichschaltung very nearly had the developed world, not just the political-media-bureaucrat class, embracing a Rings of Ice level of delusion. Magical thinking is dangerous and virulent.

• David A says:

“where they land is a complete function of the best-guess forcings put into it.”
=========================
They land exactly where their political masters want them to land.

17. Do your arguments against computer models also apply to weather forecasts? In other words: why do we trust weather forecasts but not predictions of climate warming?

• GregK says:

Information from weatherward of us, air pressure, temperature, rainfall etc tells us what is approaching.
In addition we can see approaching systems in satellite images.
Tropical storms/cyclones/typhoons/hurricanes show up rather nicely.

All good for a week or two in advance.
But that’s all empirical.

Doesn’t work for a year, 10 years, 100 years into the future because we don’t understand what controls the system.

• Patrick says:

I don’t “trust” weather forecasts at all, as they are usually ~50% wrong and, these days, based on computer models. Michael Fish in the UK in the ’80s used traditional methods to forecast the weather for the next day, and the UK had one of its worst storms over that night.

• The scale, data quality, and length of the run are quite different. A short term regional scale model for a country with a lot of weather stations and focused satellite coverage can be tuned with the observations. The smaller grid helps improve model performance, and the short amount of time they have to project means the time steps are much smaller and the model runs in a much shorter time span. This allows the modeler to train himself and the model to make improved predictions. The improvements include the introduction of parameterizations derived from model performance.

A worldwide model has a much larger scale, the historical input data is low quality for many regions, the model is expected to run for 100 years (which means the time steps have to be stretched), and there isn’t sufficient time to “teach” the modelers and tune it properly.

I started running dynamic models in 1982 (in another field, but the basic principles and problems are the same). Over the years we have evolved to use multiple model runs to create ensembles or families of model runs. We create the statistics from these ensembles, and we perform a parallel effort looking for real-life analogues we can use to verify whether we are even close.

We also have the ability to run lab experiments to try to understand the small-scale phenomena. What we find is that the lab results we have run are impossible to model with a numerical simulator; we can’t get all the physics handled properly.

Because the lab resolution is at best 1 meter and we need to run kilometer-wide models, we still face a problem trying to “scale up” the lab results. I believe some outfits are so bothered by this issue that they are discussing building giant lab experiments which extend the lab scale from one meter to as much as four meters. This leaves a gap between the lab work and the real world, but it does mean the scale-up is a bit less abrupt. I don’t think it will work. I’d rather see a much denser data-gathering exercise in real-world conditions. And I suggest climate modelers do the same: spend the next 20 years gathering extremely detailed data, then see how they can use it to improve their models.

But I guess this is getting too technical. Let’s just say a climate model is a lot coarser than a weather model. You are comparing a very large plastic butter knife to a small scalpel.

• knr says:

We don’t, and oddly, any forecast beyond about 48 hours is also considered not worth much by the professionals. Indeed, forecasting the weather is known to be highly problematic, models or no models. In the past people accepted this as annoying but not critical. But now we see great claims to accuracy in prediction, often to two decimal places, decades ahead, because ‘climate’ is supposed to be different, while in reality many of the same issues that make weather forecasting so hard affect ‘climate’ forecasting.
It’s a weird situation, but in climate ‘science’ the less the models reflect reality, the greater the accuracy claimed for them, under the good old trick of ‘it may not have happened yet, but it will sometime’.
Take away the models from climate ‘science’ and you have virtually nothing left, so you can see why the models are all-important and have to be defended, be they good, bad or ugly.

• Tim Hammond says:

Not sure what your point is – we are not using weather forecasts for next week as a reason to entirely change the basis of our economy.

Climate models are interesting and may have a use, but that use is not to change the world as we know it.

• dccowboy says:

Time. We don’t ‘trust’ weather forecasts beyond 3-5 days into the future and they are not inherently ‘accurate’ within that timeframe either. Weather forecasts do not ‘predict’ the temperature to within .01C. My local forecast for precipitation is wrong far more often than it is right for instance. The weather forecasts give a probability of rain occurring, not a ‘prediction’ of how much rain will occur and they give a general idea of what the temperature range will be, not an exact measure of what (and when) the highest temperature will be.

The further into the future a weather forecast is made, the less we can ‘trust’ it to be accurate.

• VikingExplorer says:

I disagree with the idea that computer models in general cannot work. I would assert that the wrong things have been modelled, and that the overall approach has been incorrect.

All the comments talking about “weather” are really beside the point. Chaos makes it extremely difficult to predict weather. However, climate is a simpler problem. In a climate study, we don’t need to know exactly where it’s going to rain, only that it does.

• wally says:

When the weatherman announces a front moving through is bringing rain, I grab my $3 umbrella, and when it doesn’t rain I grumble about the inconvenience of carrying it around.

When the IPCC, using computer models, announces a climatic prediction requiring immediate action and a global investment to the tune of trillions of dollars … I tend to pause in my response.

• Bob Boder says:

Don’t know about anyone else, but I don’t trust weather forecasts. It was supposed to rain all day where I live yesterday, and my son’s soccer game (football to the Brits here) was supposed to be canceled, but it dried up about 11:00 and we got the game in.

• Uncle Gus says:

We do?

18. Very good article and very good comments. As a teenager, I built a series of models in the monster theme. One could have the Werewolf, Dracula, Frankenstein, and/or the Mummy.

I always won best replica, for I did an excellent job in detailed painting. They were always a replica of something on the silver screen, created by Hollywood actors, makeup artists, directors and producers, and most of all money, based on someone’s imagination.

The proof is in the pudding. A model is the replication of nothing more than someone’s imagination, pen, and money.

Thus, one has Man-made global warming.

In reality, there is a cold situation already here, and that input is not in the models. Over the last few years we have lost over 30,000 citizens to cold. We lost over 40,000 head of livestock in South Dakota to a freak early-October storm in 2013.

In New Zealand and Scotland farmers lost thousands of lambs to late winter storms.

As one Russian scientist stated, modeling is not science.

A geologist said, carbonates are the foundation of life.

Paul Pierett

• I wouldn’t toss all models in the can the way you have. Mathematical models are bread and butter in many professions. I have spent years running all sorts of models. Some of them are highly accurate (for example, we can model salt water at different temperatures and pressures and predict which salts will precipitate).

• Patrick says:

They may be “bread and butter” in many professions; unfortunately, actions based on “bad” inputs result in equally “bad” outputs. We can model something where we know *ALL* the variables. That’s why we can build virtual and real model aircraft, ships and cars. Climate? Not at all!

• The climate can be modeled. Evidently the results are less accurate than we wish. This is why I offered a comment with a graph “correcting” Mann’s slide shown during his Cabot talk. So the question isn’t whether the models work or not; the question is whether they work well enough to make decisions such as taxing emissions or subsidizing solar power.

When it comes to taking such measures, I believe the models are insufficient. However, I do worry about the fact that we are running out of oil. This means I’m not necessarily opposed to measures to increase efficiency and save it for the future.

• Tim Hammond says:

Because you know how those physical processes work to a high degree.

Climate models (i) don’t have that sort of knowledge and (ii) are being used to somehow “create” knowledge, which is utter nonsense.

• DirkH says:

Fernando Leanme
October 17, 2014 at 2:50 am
“The climate can be modeled. Evidently the results are less accurate than we wish. ”

For as long as I have followed the subject, the predictive skill of climate models has been negative. They are a negative predictor. Prepare for the opposite of what the True Believers expect and you’ll be fine.

• dccowboy says:

Fernando Leanme
October 17, 2014 at 2:50 am
“The climate can be modeled. Evidently the results are less accurate than we wish. ”

I’m not sure I would agree with that statement. The IPCC states (and I think there is general agreement with the statement) that the ocean-atmosphere system is a “complex, non-linear, chaotic system”. If this is true, then I would submit that the probability that we can ‘model’ the state of that system 30 years into the future is non-zero, but trivially so. In any chaotic system, if you do not know the initial value of every variable to a highly accurate degree, and the exact functioning of every process in the system, you cannot ‘predict’ the future state of that system. That is the nature of chaotic systems. I would submit that we do not even know all of the variables, nor do we know all of the processes that affect the system, much less to the degree needed for the initial state. I would further submit that we will never possess this knowledge.
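The initial-condition sensitivity described here can be demonstrated with the standard textbook toy, the logistic map (my illustration, not a climate model): an initial error of one part in a billion reaches full scale within a few dozen iterations.

```python
# Textbook demonstration of chaotic sensitivity: iterate the logistic map
# x -> r*x*(1-x) at r=4 (its chaotic regime) from two states that differ by
# one part in a billion, and count the steps until they disagree completely.

def steps_until_divergence(x0, eps=1e-9, r=4.0, threshold=0.1, limit=500):
    """Return the iteration count at which two nearby orbits first differ
    by more than the threshold."""
    x, y = x0, x0 + eps
    for n in range(limit):
        if abs(x - y) > threshold:
            return n
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
    return limit

n = steps_until_divergence(0.2)
print(n)  # a few dozen steps: the billionth-of-a-unit error reaches full scale
```

The error roughly doubles each step, so after a short horizon the forecast carries no information about the true state; this is the mechanism behind the “cannot predict a chaotic system without exact initial values” claim.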

• DC Cowboy et al: It’s just a question of semantics, I suppose. We can model almost anything. The question is whether the model delivers a usable product. The models do exist; they are run.

Let me describe the problems I have faced:

In some cases our ability to model generates overconfidence on the higher floors. We have management types educated in business schools who don’t really understand what the models can and can’t do. But they love the model outputs.

These are regarded extremely well if they are mounted on a PowerPoint slide. Some presentations even include movies (these show changing pressures, temperatures, saturations). What we can’t get into some people’s heads is that the models are not reliable enough.

Over the years many of us learned this fact and this is why we perform endless searches for analogues. Others focus on refining the lab work.

Plus, in some cases we found the basic theories were off (I can’t discuss the details, but let’s just say it’s so crazy it’s as if we found that sometimes water ice doesn’t float on liquid water).

So I guess what I’m saying is that models can be run, but the output has to pass a smell test. And I don’t think the climate models are passing theirs at this time.

• Fernando Leanme
October 17, 2014 at 2:50 am
“However, I do worry about the fact that we are running out of oil. ”
We will never run out of oil. The price will rise as the apparent supply diminishes. If hydrocarbons have a future value as other than fuel the market will adjust. An economic reason to find new energy sources is more efficient than a legislative one.

19. aussie pete says:

Thank you, Dr. Ball.
You (and many commenters here) have explained “scientifically” what any thinking and reasonably intelligent “non-scientist” knows intuitively. This is primarily why I have been skeptical for many years now.

20. knr says:

“Computer Climate Models Cannot Work”? Not true: they certainly ‘work’ for those who have made a career out of pushing them; they ‘work’ for those using them for personal enrichment or for political ends. That they fail to ‘work’ at predicting future climate is a minor issue as long as they ‘work’ in the areas where those they are most useful to want them to ‘work’.

• Jeff Mitchell says:

This would be my big point as well. The models in question cannot work because they exist not to model climate, but to provide propaganda in the effort to control people’s lives. They are simply trying to make the implausible plausible so they can effect policies that favor their political desires. This is deliberate. This is not in the category of well intentioned mistakes.

Trashing all models, though, is not a particularly good approach. Some models do work very well. Certain large aircraft have been designed and built with nary a physical test, but fly as advertised. The models that allow for this have been refined by use of the scientific method wherein each step in the development has been tested against the real world before proceeding to the next step. The agenda of these models is to compete in a real world market and make the most efficient product so that they can sell more and make more money. They can’t afford to fake it if they want to get the right results.

To lump these kinds of models with the climate models is unfair and unproductive. The post may wish to qualify the statement by saying models that have no basis in reality cannot work except by extreme coincidence.

• Bob Boder says:

And for that the models work really well; as you most correctly have pointed out, it’s about control and power, not climate.

21. Sceptical lefty says:

Models, whether physical or computer-simulated, have their uses. However, for a model to be plausibly applicable it must incorporate all relevant factors, giving them appropriate weight. For an engineering model this is relatively straightforward, although it can be complex. For the Earth’s climate the task is impossible, now and for the foreseeable future. The Earth is an open system, so we can never be sure that we have nailed all relevant factors, let alone quantified them or understood their mutual relationships. It is chaotic, or orderly to a degree of complexity beyond the calculating capacity of any realistically conceivable computer. It is extremely old relative to the period for which we have accurate observations, so there may well be cyclic variables of such long periodicity that we haven’t recognised them. The Earth’s existence appears to be linear — i.e. it was created, it now has a ‘life’ and someday it will probably ‘die’. It is therefore reasonable to suppose that observed cyclic phenomena had to begin at some stage and will eventually end — maybe tomorrow.

Until our observations and understanding of this planet and its cosmic environment are considerably more advanced than they are now, climate models are only a waste of time and resources. I believe a case could be made that meteorologists of 50 years ago had a better understanding of the weather and climate than does the current bunch. The early weathermen had to make do with observations and harboured few illusions about their capacity for long-term forecasting. (There has been some interesting research into solar activity, but all sensible people know that the sun isn’t very important.) The existence of increasingly powerful computers has given the modern ‘climatologists’ a mistaken belief in the reliability of their forecasts and/or an opportunity to cynically enhance their incomes and prestige with what is, basically, mumbo-jumbo.

Finally, let’s not get too cocky about ‘The Pause’. If things start to get warmer the doomsayers will be revitalised, regardless of the highly questionable evidence for anthropogenic influences. If there is a cooling tendency it will be accompanied by a brief ‘pause’ to allow the experts to change step, and we will then be hit with anthropogenic cooling, which can be reversed if we spend enough money.

22. CodeTech says:

I have mentioned this before, but I think it’s appropriate again.

Years ago, a friend decided he was serious about winning the lottery. He got a listing of every Lotto 6/49 draw since it started, and built a model. I have no idea what voodoo he felt was necessary to give his model predictive power, but he regularly bought large batches of tickets.

As a control, I used to buy “Quick Picks”. I won minor prizes much more frequently than his model did. Neither of us won a major prize.

No matter what his model did, it was NOT capable of prediction. He spent a lot of time messing around with it, changing things here and there to make a hindcast work. He used to wax eloquent about the minute differences in the weight of the balls due to ink distribution and static charge while the machine was running. But it didn’t matter. What he was doing was not ever going to predict the lottery numbers on ANY time scale.

Ironically for this, I personally know the \$40 million lottery winner that many of you have heard of, since he’s giving it all away for cancer-related charities and research. I knew him for 20 years, during which time that whole lottery model thing was active. Tom never used a model.
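CodeTech's story makes a general point: a model tuned until its hindcast matches history gains no predictive power. A minimal sketch of that point in Python (a toy illustration, not anyone's actual lottery model):

```python
import random

# Toy illustration: a "model" that perfectly hindcasts past lottery-style
# draws by memorizing them, then is tested on draws it has never seen.
random.seed(0)
past = [random.randint(1, 49) for _ in range(100)]    # historical draws
future = [random.randint(1, 49) for _ in range(100)]  # draws to be predicted

model = dict(enumerate(past))  # "tuned" until the hindcast is perfect

hindcast_hits = sum(model[i] == past[i] for i in range(100))
forecast_hits = sum(model[i] == future[i] for i in range(100))

print(hindcast_hits)  # 100: the hindcast is perfect by construction
print(forecast_hits)  # a handful at most: no better than chance
```

A perfect fit to the past says nothing about skill on data the model has never seen; only out-of-sample testing can establish that.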

23. EternalOptimist says:

Would Nick Stokes bet his house that the models will work? I will bet mine that they will not.

24. Well let us see here. We have climate modelers who have a tremendous bias which blinds them. They have little reliable data, and most of what little they have has been “adjusted” to the alarmist bias. On top of all that, they don’t really understand the climate engine anyway. They are almost clueless about the role of water here on a water planet.

There is no chance that modern climatology can get anything right. Not until they go back to first principles and start over honestly. (not holding my breath)

25. William Yarber says:

As an engineering student back in the late 60’s, I was taught “if the results from your model don’t match observations of the responses of your system, you must change your MODEL”! IPCC, Mann, Hansen and the rest of the AGW crowd believe you change the data to make your model’s predictions fit. But they have still been wrong for the past 15+ years. When will they go to jail for their fraudulent behavior?

Bill

26. Neil McEvoy says:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

27. tz2026 says:

GARBAGE in, Gospel out.

28. Despite costing many \$millions each, not one GCM was able to predict the most significant event of the past 30 years: the fact that global warming has stopped.

They all predicted that global warming would continue apace, or that it would accelerate. Neither event happened. So as a taxpayer, I have a question:

When do we get a refund?

• Global warming may not have stopped. What seems to have flattened out is the surface temperature increase. However, there are indications ocean water temperature may have increased a tiny amount. My guess is this may be happening because sea level is rising a little bit. But I'm not sure.

• Lars P. says:
• DirkH says:

Fernando Leanme
October 17, 2014 at 3:02 am
“What seems to have flattened out is the surface temperature increase.”

Well, that’s a falsification of the theory, and therefore of the models, so we agree about that, and the models should now be scrapped.

• Lars, that post gives us these possible exits:

1. The ocean temperature is increasing between 700 and 2000 meters…Below 2000 meters it´s cooling. Above 700 meters it´s not doing much. The net result is a slight water expansion which raises sea level a teensy amount.

2. The data isn´t worth much because the temperature changes are very subtle and we don´t have enough buoys and elephant seals. Plus the reanalysis fails to account for geothermal heat. So the whole thing may be a bit off.

3. Maybe sea level rise is all caused by glacier melt and sea floor shape changes….but I tend to be skeptical.

4. Maybe the sun and the clouds took over for a while and the energy level isn´t changing at all.

I don´t think this climate zingy is measured or understood well enough to be able to say “I´m pretty sure”. As I stated before, “I´m not sure”.

• Uncle Gus says:

DirkH: I’ve been saying this for some time. The models have been falsified. Global warming may be real, it may be man-made, it may be worse than we thought, but THE MODELS HAVE BEEN FALSIFIED. They are wrong. They don’t work.

I’ve even tried to tell this to warmists. Call me King Canute…

• Uncle Gus:

The models have not been falsified.

A model is falsified when the predicted relative frequencies of the outcomes of events are compared to the observed relative frequencies and there is not a match. For the models that were used in making policy on CO2 emissions there are no events or relative frequencies, hence there is no opportunity to make this comparison.

IPCC AR4 and prior assessment reports replace this comparison with one in which an observed global temperature is compared to various projections. Though projections can exhibit error they do not possess the property of a proposition that is called its “truth-value.” Thus, they cannot exhibit falsity. The first assessment report to broach the topic of events and relative frequencies is AR5.

• Bob Boder says:

And neither is anybody else, but if you need help deciding I can give you a dart board; you can paste different climate change ideas on it, chuck a couple of darts, and go with whatever the gods of fate say. That's better than going with something that we know doesn't work.

• Ian W says:

Fernando Leanme October 17, 2014 at 7:28 am

* The huge amounts of water being extracted from underground aquifers, measured in thousands of cubic kilometers of water,
* The amounts of silt and other runoff from rivers into the sea

The ocean environment is not static

• Lars P. says:

Fernando, warming of the oceans directly through backradiation is not possible, as backradiation's IR cannot penetrate deeper than a couple of microns in water.
The oceans do have a cool skin due to evaporation at the surface. This skin is several orders of magnitude thicker than that couple of microns, which ensures that net heat transfer goes from below toward the surface in the last centimeter of water.
The warming comes from sunrays:

Which leaves us with only a limited option for how backradiation could warm the ocean water over time: it would need to increase the surface temperature (everything else remaining equal). But we have better observations of the surface temperature, much better than the ocean heat content data.
http://bobtisdale.wordpress.com/2014/10/12/september-2014-sea-surface-temperature-sst-anomaly-update/
And the surface does not really show warming.
Therefore I am not surprised when the data does not show warming in the ocean heat content (not to mention the precision needed to measure variations of 1/100 of a degree; we do not have that data quality and accuracy when measuring ocean heat content).

Where does the sea level increase come from?
Here again we have 2 type of measurements, the satellites and the former tide-gauges.
The satellites have been calibrated and show less than 3 mm/year increase (ignoring the GIA adjustment); tide-gauges show about 1 mm.
Interestingly, tide-gauge measurements, if one takes a longer period of time, show no sudden acceleration over the last 30 years in comparison with the previous 30-50 years.
http://www.psmsl.org/data/obtaining/
http://www.burtonsys.com/climategate/global_msl_trend_analysis.html
There are a lot of questions about the way the satellite sea level has been calibrated; however, what I think is important is that it shows roughly constant values, with no acceleration (rather deceleration) over the last decade.
http://hockeyschtick.blogspot.co.uk/2013/07/new-paper-finds-global-sea-levels.html
Why is the sea level rising? There is also a human contribution; surprisingly, I think the pumping of phreatic (ground) waters is actually the highest human contribution to sea level rise. I remember having seen numbers between 0.4 and 0.7 mm/year, but cannot find the links to the papers now.

So yes, as you say, “I´m not sure”, but in the end Claes Johnson may be right: the effect of more CO2 on the climate may be such that we are not even able to measure it:
http://claesjohnson.blogspot.co.at/search/label/greenhouse%20effect
These my 2 cents…

• Fernando Leanme says:

1. The ocean temperature is increasing between 700 and 2000 meters… Above 700 meters it´s not doing much.

Convince me, Fernando. Explain how the temperature of a huge layer of ocean water like that can be increasing, while cooler water sits on top of it.

You are saying the laws of thermodynamics don’t apply to the oceans. Otherwise, the warmer water would rise, no?

Also, there is no indication that a warmer water layer is sitting there under a cooler layer. That is a measurement-free conjecture, made in desperation to try and explain why global warming has stopped.

You need evidence, my friend. Measurements. A conjecture like that by itself is just not enough.

• Nick Stokes says:

dbs
“Otherwise, the warmer water would rise, no?”
No, the water is warmer than it was. It is not warmer than the water above. There is still a gradient.

The water below can warm. Just ask how it remains cool. There is a supply of cool water from deep currents which balances the downward diffusion of heat. If that cool supply becomes slower or warmer, then the deep will warm without heat having come from above.

• Mario Lento says:

The water which is warmed by the sun can be pushed down through ENSO processes. So the warm water is relocated by prevailing westerlies during ENSO neutral and progressively more during La Nina. This does not require an end run around Thermodynamics.

• Uncle Gus says:

Terry Oldberg: What is your point?

You are using a very complex and formal definition of “falsification”. I am using the simplest. The models do not successfully predict. Their predictions are false. They are falsified. They are therefore of no practical use.

You know what I mean. What do you mean?

• Nick Stokes,

If, as you claim, the deep ocean is warming, why is it that the ARGO buoy array shows cooling?

The only warming shown was after they ‘adjusted’ ARGO. You need to produce verifiable, real world measurements that indicate ocean warming at those depths. The 2000 meter measurements show that generally, the ocean is cooling. Where are your measurements showing that the deep ocean is warming? The only data I can find shows that generally, the oceans are cooling.

Trenberth’s claim that there is heat hiding in the deep oceans looks increasingly preposterous. The simplest explanation is that global warming has stopped.

• Uncle Gus says:

Terry Oldberg: Uh huh. That’s what I thought.

I do understand what you mean – the models and their conclusions are not well enough defined to be falsified. (In the old cant phrase, they’re “not even wrong!”) There is even a certain deliberateness about that which makes it almost fraudulent in some cases.

But the policy makers, let alone the MSM, don’t understand “not even wrong”. They see the predicted warming hasn’t happened. They’re oblivious.

There’s an old saying about not taking a knife to a gunfight. It seems to me that you’re turning up to a knifefight with… a laptop. Or a slide-rule. Something, anyway, that doesn’t cut.

• CodeTech says:

First, you’re referring to the past tense where it’s the present; i.e., they PREDICT that global warming is continuing, and they continue to claim that it IS accelerating.

Also, it’s more likely a peak than a pause or stop. Things go downward from here. Cycles.

No refunds, sorry. You knew what you were getting into when you had income that got taxed.

• joeldshore says:

For all the claims that global warming has stopped, the actual facts are that the linear trend from 1975 to the present is essentially indistinguishable from the trend from 1975 to 1997: http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/plot/hadcrut4gl/from:1975/to:1997/trend/plot/hadcrut4gl/from:1975/trend (In this example, the trend is actually a bit steeper for 1975 to present, although that exact detail is sensitive to exactly when in and around 1997 you choose as the end point.)
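The end-point sensitivity mentioned here is easy to reproduce: an ordinary least-squares trend (which is what the woodfortrees plots compute) changes with the chosen sub-period. A sketch on invented numbers, not the HadCRUT4 data:

```python
# Ordinary least-squares slope of a series against its index.
def trend(y):
    n = len(y)
    xm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((i - xm) * (yi - ym) for i, yi in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

series = [0.0, 0.1, 0.3, 0.2, 0.5, 0.4, 0.4, 0.4]  # made-up anomalies
print(trend(series))      # slope over the full record (~0.058 per step)
print(trend(series[:5]))  # slope over a shorter sub-period (~0.11 per step)
```

Same data, two defensible windows, two different slopes: which is exactly why arguments over start and end years never settle anything by themselves.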

• joeldshore says:

And, GISS Temp tells a similar story:

while the two satellite products actually show a much higher trend from the start of their records (~1979) to the present than from 1979 to 1997:

• milodonharlani says:

The fact is that since c. 1996 (depending upon data set & degree of “adjustment”) there has been no statistically significant warming & indeed from some more recent date cooling.

You might as well point out that the warming trends from c. 1700 & c. 1850 are also still intact. The further facts are that CO2 started rising monotonically c. 1944, but the world cooled until c. 1977, when the PDO shifted, & that it has now stopped warming (& more recently cooled) despite the continued rise in CO2.

Thus there is no observed correlation between the alleged sharp increase in CO2 for ~70 years & temperature, as GASTA (as measured) has fallen, risen & stayed flat all the while that CO2 has steadily climbed.

• joeldshore says:

It is always possible to say “there has been no statistically significant warming for the last N years” where the value of N varies simply because it takes a while to establish a statistically-significant trend.

There is in fact quite a good correlation between CO2 rise and warming during the historical temperature record, as can be seen by plotting both on the same graph and scaling accordingly. Is the correlation perfect, indicating that CO2 is the ONLY factor affecting the temperature? No…but nobody has claimed that it is. The claim is that it became the dominant driver over multidecadal timescales toward the end of the 20th century. (Over shorter time scales, internal variability dominates, just like it does for the seasonal cycle, where the temperature here in Rochester doesn’t monotonically go between high in summer and low in winter or vice versa, despite the fact that we have a very strong seasonal cycle.)

• There are a host of barriers of a logical nature to the conclusion that the rise in the CO2 concentration caused the rise in the warming. One of these barriers is ambiguity of reference by the term “warming” to the associated idea. It isn’t the change in the global temperature but what is it?

• Mario Lento says:

joeldshore October 18, 2014 at 3:04 pm
It is always possible to say “there has been no statistically significant warming for the last N years” where the value of N varies simply because it takes a while to establish a statistically-significant trend.

There is in fact quite a good correlation between CO2 rise and warming during the historical temperature record, as can be seen by plotting both on the same graph and scaling accordingly. Is the correlation perfect, indicating that CO2 is the ONLY factor affecting the temperature? No…but nobody has claimed that it is. The claim is that it became the dominant driver over multidecadal timescales toward the end of the 20th century. (Over shorter time scales, internal variability dominates, just like it does for the seasonal cycle, where the temperature here in Rochester doesn’t monotonically go between high in summer and low in winter or vice versa, despite the fact that we have a very strong seasonal cycle.)
++++++++++
No you are not correct. N = now, and looking back for at least half the record it’s not been warming.

No you are not correct where you state “No…but nobody has claimed that it is.” In fact, the IPCC and Gore and others have claimed that it is indisputable, that virtually all of the warming is due to CO2.

And virtually all ice core samples show CO2 follows, not leads temperature change.

If you look at short time periods, people like you get confused when they see correlation and want there to be causation.

• joeldshore says:

Mario,

My statement does not contradict what the IPCC said. The point is not that there are no other effects but that these other effects have not caused significant net warming, at least since the middle of the 20th century. Internal variability and natural factors like solar and volcanic aerosols are probably pretty much a wash…and manmade aerosols have caused some cooling (particularly notable in the mid-20th century).

The ice cores seem to show that temperature starts to change before CO2, which is compatible with the idea that CO2 plays the role of a feedback in those cases. (There were no humans around to emit prodigious quantities of CO2 by rapidly putting carbon stored in fossil fuels back into the atmosphere.) The CO2 is the most likely explanation of how the warming in the two hemispheres becomes synchronized (since the Milankovitch oscillations would tend to make them out of phase). And, since we know quite accurately the radiative forcing due to CO2 and can estimate the radiative forcing due to other factors (albedo changes mainly due to ice sheets and, to a lesser extent, vegetation changes and changes in aerosol loading), it can be determined that CO2 was probably responsible for about 1/3 of the warming/cooling in the ice age – interglacial cycles.

More importantly, these estimates allow us to estimate the warming that occurs with each W/m^2 of forcing…with that value somewhere around 0.75 C per (W/m^2). And this, along with the universally agreed-upon forcing of ~4 W/m^2 per CO2 doubling, leads to the conclusion that the climate sensitivity is somewhere around 3 C per doubling. In fact, as Hansen points out, this is the sensitivity one obtains if one assumes that albedo changes due to ice changes are a forcing rather than a feedback. Since they play the role of a feedback in our current climate “experiment”, the actual sensitivity could be higher…Hansen thought perhaps even about double that…although hopefully that is not true in our current climate state, when there is not that much ice to melt.
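The back-of-envelope arithmetic in that last paragraph is just the product of the two quoted numbers; a sketch taking the commenter's figures at face value:

```python
# Back-of-envelope climate sensitivity, using the figures quoted above.
sensitivity_per_forcing = 0.75  # C per (W/m^2), as estimated from ice-age data
forcing_per_doubling = 4.0      # W/m^2 per CO2 doubling, as quoted
sensitivity = sensitivity_per_forcing * forcing_per_doubling
print(sensitivity)              # 3.0 C per doubling
```

Note that both inputs are contested estimates, so the 3 C result inherits their uncertainty; the arithmetic itself is the only uncontroversial step.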

• Mario Lento says:

Hey Genius: Try the same experiment from 1900 to 1940 and tell me what you find? Or was your post supposed to have a /sarc

29. Lars P. says:

Thank you for this post. It is not an easy subject; however, I think it is one of the main pillars of current climatology, and it clearly shows why this “science” stands on shaky ground:

“… computer models, but Lamb knew they were only as good as the data used for their construction. Lamb is still correct. ”

With my limited knowledge of computer climate models, I think I have spotted one additional problem not mentioned above:
computer models do not generate data

When collected temperature data is passed through a computer model to generate the “data” for future modelling, as I understand GISS does, this only ensures that the “data” already contains the gospel input, preparing the gospel output.

With each new iteration and each new data version, GISS is reworking and altering the data, making the past cooler and taking out any variances from the natural variations that do not fit the model – a fact many skeptics have highlighted over and over again.

This is why we can see with our own eyes how past data transforms (the do-not-trust-your-lying-eyes, trust-“the science” kind of thing): cooling becomes non-cooling, the 70s ice age scare disappears into Nirvana, and “the pause” will also disappear retroactively given enough iterations.
Everything will end up being just a computer-generated input/output gospel that fits perfectly in an ideal GISS world.

30. Alex says:

The average fingerprint of ten individuals in a room wouldn’t identify anybody.

• HarryG says:

Exactly

31. Ken L. says:

Back some 30 years ago, while taking elementary meteorological courses at my state’s largest university and its fledgling Meteorology department, with Dr. Howie Bluestein (what a great guy!) of tornado research fame as my instructor, we talked about how coarse the grids were and how inaccurate the measurements used by models to predict the weather. The further out you went, and the more calculations performed using the less-than-accurate data, the less reliable the results became – and that was for 7-10 day periods! I’m not claiming Dr. Bluestein as a skeptic in today’s argument, btw, at least publicly.

The computers today are better by orders of magnitude, but are the grids and weather measurements improved enough to match that new calculating power, especially as the periods of time get longer?
I still haven’t forgotten what a math professor told our class: when error-ridden data is plugged into long, complex calculations, the errors are inevitably magnified.
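That error-magnification warning is easy to demonstrate with any iterated nonlinear calculation. A minimal sketch, using the logistic map purely as a stand-in for a long chain of model calculations:

```python
# Two runs of the same iterated calculation, differing only by a tiny
# error in the starting value; in the chaotic regime the error is magnified.
def iterate(x, steps=100):
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)  # logistic map in its chaotic regime
    return x

a = iterate(0.4)       # "true" initial value
b = iterate(0.400001)  # same value with a one-in-a-million input error
print(abs(a - b))      # the tiny input error has grown by many orders of magnitude
```

This is the same reason weather forecasts degrade beyond a week or so: the initial-condition error, not the computer, sets the limit.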

That was the seed of my initial suspicions about climate models, and why I was instantaneously a skeptic, especially when a mathematician friend with a statistical and computer background pointed out all the errors in the whole hockey stick presentation. Not to mention the scale of the temperature differences upon which they were laying claim to all sorts of dramatic conclusions about our world to come.

My doubts were strengthened further as I saw cadres of alarmists bowing down in devotion to their computer-model gods and painting the results as evidence, even over the hard data that should be the real test of any hypothesis.

Now all I can do is hide and watch and hope that the truth about how little we REALLY know can overcome the certainty upon which the alarmists base their catastrophic forecasts, so we don’t end up betting the economic farm and the well-being of millions who bear no blame or guilt in the situation, but will be hurt by ill-conceived and premature public policy actions.

32. While current models have clearly failed, it is a really unfortunate title that tops this article. Models absolutely can work and can work reasonably well with simple factors as long as the factors are set to the right levels.

• dccowboy says:

That is an oversimplification of the problem. It is not an accurate statement about a complex, non-linear, chaotic system. Chaotic systems cannot be modeled by reduction of the system to ‘simple factors’ and I’m not sure what you mean by ‘reasonably well’.

• Mario Lento says:

@Jeff Id October 17, 2014 at 3:41 am
+++++++++
True – however (and this is a significant point), we’re talking about climate models which only seek to prove that CO2 is causing virtually all of the warming in what is claimed to be an otherwise static climate. The models in climate science deny that climate changes naturally. These models cannot be useful because their purpose is fatally flawed.

33. That is the whole problem with data that is seen as information. Information has its roots in the exformation of a shared story. Data has no roots at all and can thus be interpreted as needed … and can thus tell a story of its own.

34. Gamecock says:

Dr. Ball is correct as it applies to gridded GCMs: initial values for grid cells can never be right. And it’s far worse than he states. He is talking two-dimensional, the surface station values. Initial values must be 3D, including cells at high altitude.

The Nudge Models, as I call them, can’t succeed, regardless of granularity. Rounding errors alone would destroy results. Gridding the atmosphere is the wrong approach. The belief is that gridding can overcome the lack of knowledge of how the atmosphere operates. Bigger computers and more granularity can’t overcome the basic lack of understanding.

As a computer jock, I was responsible for a set of models for 20 years. They worked fine, because the software correctly codified the behaviors in the system. Our knowledge of global atmospheric behaviors is far too limited to model the overall system. You can’t model what you don’t know. In that sense, I say software is the big problem, not lack of data. Adequate models might work with far less data. Bad models won’t work with a lot more data.

I agree. For example, one of the most critical processes is cloud formation. Obviously, the extent, placement, and timing of cloud formation make a big difference. Yet the physics is not adequately understood. If my sources are correct, the way they deal with this is to “parameterize” cloud formation. Now, I am not familiar with the details of the models, but I think this means that they reduce the process of cloud formation to some mathematical expressions that can be used to describe it (but that no one claims actually model it). Then they make guesses about the values of the parameters in the expressions and provide that to the model, so that it can calculate the extent, placement, and timing of cloud formation. [Question for those more knowledgeable — am I correct that they parameterize cloud formation, and is my characterization of parameterization correct?] It would seem that if you get to pick the parameter values, you can get any result you want.
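To make that worry concrete, here is a deliberately invented toy “parameterization” (not any real GCM scheme): cloud fraction as a simple function of relative humidity with one tunable critical threshold. The point is only that the tuning, not the physics, drives the answer:

```python
# Hypothetical toy parameterization: cloud fraction rises linearly once
# relative humidity exceeds a tunable critical threshold rh_crit.
def cloud_fraction(rel_humidity, rh_crit):
    if rel_humidity <= rh_crit:
        return 0.0
    return min(1.0, (rel_humidity - rh_crit) / (1.0 - rh_crit))

rh = 0.85  # same input humidity in both cases
print(cloud_fraction(rh, rh_crit=0.6))  # ~0.625 with one tuning
print(cloud_fraction(rh, rh_crit=0.8))  # ~0.25 with another
```

With the same input, two plausible-looking parameter choices give very different cloud fractions, which is exactly the freedom the comment describes.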

35. Cees says:

Sorry, but I’ve heard “Hang down your head, Tom Wigley” too many times by now. Someone switch off the gramophone please?

• kenw says:

I believe it was Tom Dooley.

36. Village Idiot says:

So tell us, please, Dr Tim, the way forward. Should all climate models be scrapped? Climate modelling forbidden? Efforts to understand the planet’s geophysics abandoned? Or do you have a sensible suggestion for what should replace climate models?

• DirkH says:

Return the money, you scoundrels.

• Otter (ClimateOtter on Twitter) says:

In-the-field observational data and a great deal of hard work.

• Richard M says:

Bingo! Until we actually understand natural cycles, modeling the changes outside them is impossible.

• Ed_B says:

Yes, abandon climate modelling. Shut off the funding to any “climate change” study. Spend the money on new generation nuclear reactors.

• dccowboy says:

I don’t think Dr Ball is suggesting any of those wild leaps.

I think his ‘suggestion’ would be that we should not base global energy and ‘climate change’ policy on the results of current models, as we seem to be doing now.

• Larry in Texas says:

That has been my fundamental point all along about climate models. Given the known inadequacies of such models, both in what data may be missing AND in what is assumed by the modelers (whether they have an agenda or not), no one can place confidence in what has been predicted about the behavior of the climate. As such, if I am a policy maker, I cannot base any reasonable policy upon the dicey nature of these models, and the fact that their predictions have gone unfulfilled makes the especially draconian policies that have been proposed both undesirable and unjustified.

• Village Idiot For the way forward I repeat the conclusion of my earlier comment at 16/10:08pm

“During the last few years I have laid out in a series of posts an analysis of the basic climate data and of the methods used in climate prediction. From these I have developed a simple, rational and transparent forecast of the possible timing and extent of probable future cooling by considering the recent temperature peak as a nearly synchronous peak in both the 60- and 1000-year cycles and by using the neutron count and AP Index as supporting evidence that we are just past the peak of the controlling millennial cycle and beginning a cooling trend which will last several hundred years.
For the forecasts and supporting evidence go to the link at the beginning of this comment.”
see http://climatesense-norpag.blogspot.com

• Bob Boder says:

Try replacing it with a model that works and can be established as working over a long period of time; until then, how about not trying to change the whole world’s socio-economic structure?

• Uncle Gus says:

As someone here has said, there ARE no climate models, simply circulation models. And they don’t work.

How about replacing them with different models (maybe of something a little simpler?) that do work a little bit. Then we can build from there.

37. Johanus says:

“All models are wrong. Some are useful”
-George Box

• David Jay says:

Would that be a one-box model?

• PiperPaul says:

Outside of which thinking was employed?

38. Akatsukami says:

I take exception to the statement:

Ockham’s Razor says, “Entities are not to be multiplied beyond necessity.” Usually applied in making a decision between two competing possibilities, it suggests the simplest is most likely correct.

The simpler hypothesis is not more likely to be correct; it is easier to work with (see Robert Grosseteste’s Commentarius in Posteriorum Analyticorum Libros).

• Pat Frank says:

I agree with the exception you took, Akatsukami, but disagree with your correction. Simpler models are preferred because more complex models necessarily contain superfluous entities.

39. Jim says:

First, let me say that I don’t believe in global warming. I’ve read enough articles here and in other places to be convinced that there has been no significant warming for almost two decades. So, having said that, can someone please explain an article like this one: http://www.slate.com/blogs/future_tense/2014/10/13/nasa_earth_just_experienced_the_warmest_six_month_stretch_ever.html?utm_content=bufferac518&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer

Is NASA or the National Climatic Data Center just fabricating these numbers?

• Barry says:

No. The “pausers” will soon have to shift their emphasis elsewhere, like pointing out that there is limited data and all models are wrong. They will also say that global warming impacts are not that bad — hey, who doesn’t want a warmer winter?

• CodeTech says:

Seriously? “Pausers”? Are you actually going to put people in a little labeled box because they have the gall to point out a painfully obvious fact? Really obvious fact, on the scale of an elephant in a room or an Emperor with no clothes…

Or are you being sarcastic? See, it’s hard to tell here, there are so many whack-job warmists posting that you can rarely tell when someone is genuinely ill or just mocking them.

• David A says:

Barry says…”They will also say that global warming impacts are not that bad — hey, who doesn’t want a warmer winter?”
==================
Barry, please pay better attention. What we say, at least what I say, is that the benefits of CO2 are KNOWN (in thousands of experiments, both laboratory and real-world) and they are very positive, while the harms are ever projected and ever failing to materialize. As to the “Pause”, well, it certainly is one, but many do not ASSUME it will be broken by warming. In fact, if the AMO turns, it will most likely cool. Then what will the “alarmists” do? Likely blame some anthropogenic source and still want to change the world to their statist vision. I foresee no pause in their efforts.

• dccowboy says:

Fabricating? Too strong a word, I think. It implies an intent to deceive, which I am not willing to accept. They are ‘fabricating’ the numbers in the sense that, due to the sparse nature of the stations and the amount of missing data from stations, they are forced to use ‘mathematics’ to project what temperatures may have been up to 1200 km from stations that do exist. Do you think that a temperature in Washington, DC can be mathematically ‘manipulated’ to give an accurate temperature in Jacksonville, FL? The issue is far more complex than that, though.

• Uncle Gus says:

It’s science, Jim, but not as we know it!

(Sorry, couldn’t resist!)

Basically, it’s spin. I seriously doubt that the Earth has experienced its warmest six months ever by any reasonable definition. But they’re not using reasonable definitions. Neither of “ever” nor of “warmest”. Probably not of “six months” either.

You’ll generally see a story like this turning up every few months. It usually means that some particularly damning evidence has just cast doubt on the “consensus” view of global warming.

40. Doug Huffman says:

I would characterize modelling and climate-modelling and much of what passes for popular science as ad hockery by those afeared of error.

41. Thank you, Dr. Tim Ball, for a really great commentary …. and one that truly exposes the lunacy involved with the calculations of Regional and/or Global Average Temperatures.

And as a side note, to wit:

Dr. Tim Ball said:

“Society is still enamored of computers, so they attain an aura of accuracy and truth that is unjustified.”
———–

Right you are. Many years ago, early 70’s, I was telling my co-workers they should be using the “text editor” program for our mini-computer to key-enter all of their “income & expense” line items to a mag-tape data file so that they could then “sort” them by date and type and then “print” them out on fan-fold computer paper.

When asked why, I would tell them: because iffen their yearly Income Tax Returns were ever audited, the IRS auditor would more likely believe the “truth and accuracy” of the computer print-out than he/she would any verbal or hand-written figures.

42. Bill Illis says:

There are two ways to estimate the impacts of global warming:

1. Build the best computer model that can be made; or,
2. Observe the real Earth and see how increased GHGs are actually impacting the climate, use those observations to extrapolate the future impacts.

I’ve always understood that 2. was the way to go because of the extreme complexity of what we are talking about here – trillions upon trillions of energy/photon packets moving through trillions upon trillions of molecules every picosecond.

The physics involved in this system cannot really be described by a few dozen dynamic equations, nor downscaled to 10 km by 10 km by 10 km grids – at least not to the level of accuracy that one should rely on.

We need to be empiricists, not modellers.

• Barry says:

Bill, atmospheric temperatures are at an all-time high (2014 may be the warmest year on record), oceans continue to warm, and land ice continues to decrease. Is that enough observation for you?

• Richard M says:

Even if what you said were true, what would it tell you about the relative impacts of all the various forces that impact climate? What is the relative impact of the AMO on Greenland ice melt? What is the effect to geothermal forces on land ice over time? How much of the warming was induced by changes in the THC speed? How does the PDO affect ocean temperatures? What about land use changes? etc. etc. All of these need to be understood before we could have any clue as to the impact of increased GHGs and/or aerosols.

• David Jay says:

“atmospheric temperatures are at an all-time high”

**snigger**

• Uncle Gus says:

Funny, land ice only seemed to start decreasing once the public began to twig that melting sea ice could have no effect on sea levels.

Before that, it was “Lowest Arctic ice cover in history! We’re all going to drown!” And of course it was all (seasonal) pack ice. Which is now increasing again year by year…

• Bill Illis says:

Barry October 17, 2014 at 5:41 am
Bill, atmospheric temperatures are at an all-time high (2014 may be the warmest year on record).
—————————

The atmospheric records are kept by RSS and UAH, and they are very low right now compared to the historic highs (which means you are listening to propaganda, not being an empiricist).

And at the rate the adjusted land surface temperatures and the not-seasonally-adjusted-properly sea surface temperatures are actually increasing, warming will not be a problem. You have to look at the actual data here, not read propaganda again.

• Bill Illis says:

This is an example of what I was talking about – water vapor – is it really operating the way global warming theory predicts it should be? Water vapor, as a feedback to CO2 warming, is only responsible for about half of the total warming of 3.0C predicted for a doubling of CO2.

But this is what it is “actually” doing over the past 67 years – as in, 67 years is a long enough time to see how the real Earth(tm) responds.

And then just a little further description of why the theory “assumes” CO2 is the control knob. Water vapor as a feedback at 7.0%/K, with varying CO2 producing the initial temperature response, would control almost all of the greenhouse effect. It’s just that that is not what is really happening. (Of course, now clouds enter the equation – and how can there be any clouds if there is almost no water vapor in the atmosphere? The theory actually predicts a massive “increase” in clouds at these impossibly low values of water vapor – it’s just a theory that is not based on real physical parameters.)
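The ~7%/K water-vapor figure is the Clausius-Clapeyron scaling of saturation vapor pressure with temperature. A minimal sketch, using the standard Magnus approximation (the choice of approximation is mine, not the commenter’s):

```python
import math

def sat_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def pct_per_kelvin(t_celsius):
    """Fractional increase in water-holding capacity per kelvin of warming,
    estimated by finite difference around a given temperature."""
    e0 = sat_vapor_pressure_hpa(t_celsius)
    e1 = sat_vapor_pressure_hpa(t_celsius + 1.0)
    return 100.0 * (e1 - e0) / e0

for t in (0.0, 15.0, 30.0):
    print(f"{t:5.1f} C: +{pct_per_kelvin(t):.1f} %/K")
```

Near typical surface temperatures this works out to roughly 6-7%/K, which is where the figure quoted in the comment comes from.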

• Dr. S. Jeevananda Reddy says:

I worked out that wet bulb temperature (Tw) = dry bulb temperature (T) x (0.45 + 0.006 x h x sq. root of (p/1060)), where T and Tw are in degrees Celsius, h is the relative humidity in % and p the normal pressure in mb. Tw = C x sq. root of W, where W is the precipitable water vapour in gm/cm2 and C is a constant that varies with season. Net radiation intensity (Rn, cal/cm2/day) = b x l x Tw, where b and l are constants related to length of the day, latitude, and station-level pressure in mb.

Dr. S. Jeevananda Reddy
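For what it is worth, the commenter’s empirical wet-bulb formula can be sketched as code. Note this is only as best the garbled original can be parsed: the closing parenthesis placement is an assumption, and the formula itself is the commenter’s, not a standard one.

```python
import math

def wet_bulb_reddy(t_dry_c, rh_pct, p_mb):
    """Commenter's empirical wet-bulb formula, as parsed here:
    Tw = T * (0.45 + 0.006 * h * sqrt(p / 1060)),
    with t_dry_c in C, rh_pct in %, p_mb in millibars."""
    return t_dry_c * (0.45 + 0.006 * rh_pct * math.sqrt(p_mb / 1060.0))

# e.g. a 30 C day at 50% humidity and 1013 mb gives a wet bulb of ~22 C
print(wet_bulb_reddy(30.0, 50.0, 1013.0))
```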

43. Barry says:

“All models are wrong; some are useful.” So then it would be easy to point out how models are wrong, and cannot perfectly predict the future (even meteorologists are not 100% accurate even 1 day in advance). But then what do we do to plan for the future? Your supposition seems to be that models are biased towards warming, but in fact, if the models are completely wrong, they could be biased towards cooling (less warming), and the future may be even worse than predicted. The solution supported here seems to be to bury our heads in the sand.

• dccowboy says:

I don’t think ‘bias’ has anything to do with the inaccuracy of GCMs. They simply do not (and some claim that they cannot) model the ocean-atmosphere system well enough to be used as the basis of energy and ‘climate’ policy. Should we abandon them? No, but we should not use their current output to ‘plan for the future’ either.

• knr says:

‘…but in fact, if the models are completely wrong, they could be biased towards cooling.’ You’re assuming that their ‘wrongness’ is an accident, which need not be the case. If they ‘want’ the models to tell a certain story, then all they have to do is front-load them to make sure they do. You ‘need warmth’ to keep the money flowing in, so you make sure your models produce warmth; reality has no part in that process. Such ‘front-loading’ is why models are often not trusted in any area. Given the highly political and self-serving nature of climate ‘science’ – no AGW, a lot less climate ‘science’ – front-loading is very likely to be present, and it is one reason the models are so wrong so often in practice when this approach fails to match events.

• George Box’s aphorism that “All models are wrong; some are useful” is incorrect. Wrongness is a consequence of the use of heuristics in selection of the inferences that are made by a model. It can be avoided through replacement of heuristics by optimization.

44. Tom in Florida says:

I told this story a few years ago on this blog but I will tell it again as it is very relevant. In 1968 I was a senior in high school in Hamden CT. The school had invested in a computer for a class on programming and understanding computers. Now, at that time, this very limited computer was the size of a small refrigerator and used a punched hole tape with a language called Focal (pre-Fortran). It was a simple, English based language that used if and then boxes. We were all assigned to design a program and run it to see if it worked. I designed a very simple program I called the Personality Analyzer. My intent was to play a joke on one of my friends. The program asked for all kinds of data; hair color, eye color, height, weight and even ethnic background. Now the only thing that mattered was the height because I wrote it so that when you put in the height of the friend I was playing the joke on, 5’9”, it always spit out the same answer which was not very flattering to my friend. For comparison and knowing the heights of two other friends I did the same thing but with different results, both very flattering. It worked and the joke went over very well. Since that day I have always been skeptical of computer results because one can always produce the desired outcome simply by controlling the input.

[Rather, “…by controlling what is done with the input.” ? .mod]

• Tom in Florida says:

re: Mod. The program itself will control what is done with the input, as it did in this case. But I was referring to the fact that I knew what response I wanted and controlled the input to achieve that goal.

45. Alex says:

I can’t tolerate the way averages are used sometimes.

The earth is considered to have an average temperature of 15C.
Ignoring albedo and the fact that the earth is not a blackbody, I have noticed this:
15C = 390 W/m2
Average of -10C and +40C = 15C
-10C = 271 W/m2
+40C = 544 W/m2

Midpoint is 407.5 W/m2

So which figure are these climate scientists working with? 390 or 407?

http://www.spectralcalc.com/blackbody_calculator/blackbody.php
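The arithmetic above can be checked directly with the Stefan-Boltzmann law. A minimal sketch, assuming a pure blackbody and ignoring albedo, exactly as the comment stipulates:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_celsius):
    """Blackbody emission sigma * T^4, with T converted to kelvin."""
    return SIGMA * (t_celsius + 273.15) ** 4

flux_of_mean = blackbody_flux(15.0)                                  # ~391 W/m2
mean_of_fluxes = (blackbody_flux(-10.0) + blackbody_flux(40.0)) / 2  # ~409 W/m2
print(flux_of_mean, mean_of_fluxes)
```

The two numbers disagree because T^4 is convex: averaging temperatures first and then converting to flux always gives a smaller value than averaging the fluxes themselves, which is precisely the discrepancy the comment is pointing at.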

• David A says:

Now change the humidity in both data sets and see what happens to your w/m2!

• David A says:

Now change the atmospheric density in both data sets, and see what happens to your w/m2!

46. Perhaps I am alone amongst your readers in having had the honour and privilege of meeting Professor Lamb, back in 1977, by which time he was semi-retired. I commissioned the CRU to create a model relating sugar beet harvests to weather conditions. This was after 3 catastrophic years. They produced a model that predicted the past wonderfully well. It was the future that the model had difficulty with…..
From memory, Hans Morth took over from Professor Lamb, if only for a short time. He had a theory about the impact of Saturn and Jupiter when they were at 90 degrees to the Earth. Something to do with the conservation of angular momentum – a bit beyond me.

47. Alan Robertson says:

“Two states differing by imperceptible amounts may eventually evolve into two considerably different states … If, then, there is any error whatever in observing the present state — and in any real system such errors seem inevitable — an acceptable prediction of an instantaneous state in the distant future may well be impossible….In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.” – Deterministic Nonperiodic Flow, in Journal of Atmospheric Sciences, Lorenz E. ’63

48. Jeff Alberts says:

Saying climate models don’t work is fine, they don’t. Saying they cannot work is myopic. We’ll never break the sound barrier, never get to the moon, oh wait…

• Gamecock says:

Climate models can never tell us anything useful.

• Alx says:

Still waiting for those predicted moon colonies to develop. And who imagined average people carrying a computer that fits in their shirt pocket and allows them to take pictures and video and communicate with people across the planet, while having instant access to a practically infinite trove of information?

In other words, the future, as always, remains unpredictable and often unimaginable. What is certain is that climate models cannot have worked yesterday, last month, last year, or 10 years ago, and cannot work today. It’s not that they “do not” work, it is that they “cannot”, due to the multiple reasons the article explains.

Anyways, I am contacting Las Vegas and betting on lunar colonies developing before climate models become robust enough to forecast global climate over decades. Always wanted a condo on the moon.

• We’ll never have a pocket sized fusion generator either, or unicorn feathers or leprechaun gold.
Climate models will NEVER work as long as “an increase in CO2 = higher temps” is a factor in them.

• Alan Robertson says:

Jeff,
In my post immediately prior to yours, I referenced the work of Edward Lorenz, who showed in the ’60s that it is mathematically impossible to make accurate long-term weather predictions when all inputs to the system are not precisely known and included in the calculations. What is a climate model other than a long-term weather-prediction model on a grand scale? Not only are none of the climate models accurate, they cannot be and never will be accurate. That isn’t a myopic statement, it’s reality.

• joeldshore says:

Your statement is a “red herring”. Climate models demonstrate what Lorenz talked about every day: Started from slightly perturbed initial conditions, they do indeed give very different predictions for the weather and even the up and down jiggles in global temperature. However, they give the same result for the trend over, say, a century due to increases in greenhouse gases.

Look, if we take your argument at face value, nobody could ever predict that the climate here in Rochester is cooler in the winter than in the summer by some 25-30 C. Yet, such predictions can be made confidently. (What can’t be made as confidently are predictions of whether this winter will be warmer or colder than average because such predictions are sensitive to the initial conditions.)

So, here’s the point: Some predictions are very sensitive to the initial conditions and some depend more on what are called the boundary conditions. Predicting the change in climate due to a significant perturbation in the Earth’s radiative budget is a boundary conditions problem, not an initial conditions problem. Just like predicting the seasonal cycle is a boundary conditions problem.
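The initial-conditions sensitivity both sides are invoking is easy to demonstrate with Lorenz’s own 1963 system. A toy sketch (forward-Euler integration and the standard parameters are my choices; this is a cartoon, not a climate model):

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # initial conditions differing by one part in 10^8
max_sep = 0.0
for _ in range(5000):        # integrate to t = 50
    a, b = lorenz_step(a), lorenz_step(b)
    max_sep = max(max_sep, abs(a[0] - b[0]))
print("largest x-separation seen:", max_sep)
```

The two runs track each other for a while and then diverge to separations on the order of the attractor itself, which is Lorenz’s point about weather; the counterpoint above is that statistics averaged over many such runs can still be stable.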

• Richard of NZ says:

The myopia was of those people who made such statements as “we’ll never break the sound barrier”. At the time the sound barrier had been broken for several hundreds or thousands of years. The crack of a whip is due to the tip of the lash going supersonic. Most rifle bullets were supersonic, i.e. had broken the sound barrier, and had been since the late 1800’s.

Your statements are a mere distraction.

• Of course “climate modeling” computer programs work, all of them do, …. but the quality of their “work” output is totally FUBAR and of no practical value whatsoever.

One cannot “model” a chaotic process that is “driven by” thousands of different randomly occurring input variables which never “repeat” themselves from one day to the next.

That the model-building work conducted thus far is of no practical value can be expressed in information-theoretic and control-theoretic terms by pointing out that: a) the mutual information of the models is nil, and b) if the mutual information is nil, then the climate system is uncontrollable.

49. The role of Ockham’s razor is to select which, among the many inferences that are candidates for being made by a generalization, is the one that will be made by it. Ockham’s razor suffers from lack of uniqueness of the inference that is selected by it, with consequential violation of the law of non-contradiction. The rule called “entropy minimax” provides the uniqueness that Ockham’s razor lacks, thus satisfying non-contradiction.

50. ren says:

Where does the warmth run away to?

51. Winston says:

Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences

Naomi Oreskes,* Kristin Shrader-Frechette, Kenneth Belitz

SCIENCE * VOL. 263 * 4 FEBRUARY 1994

Abstract: Verification and validation of numerical models of natural systems is impossible. This is because natural systems are never closed and because model results are always non-unique. Models can be confirmed by the demonstration of agreement between observation and prediction, but confirmation is inherently partial. Complete confirmation is logically precluded by the fallacy of affirming the consequent and by incomplete access to natural phenomena. Models can only be evaluated in relative terms, and their predictive value is always open to question. The primary value of models is heuristic.

http://courses.washington.edu/ess408/OreskesetalModels.pdf

• The paper of Oreskes et al. is incorrect. As the information about a natural system is incomplete, it is impossible to verify a model of such a system. However, using available technology it is possible to build and validate such a model.

• Alx says:

“The primary value of models is heuristic.”

Kind of like rule of thumb, an educated guess, intuitive guess, close enough for government work, the piece fits just have to pound it in, the piece fits just add extra filler, stereotyping; Joe had good luck with his 2003 Honda, loosey goosey, etc.

OK, heuristic, fine: a shortcut method for some kinds of problem solving, but not a good approach for, say, brain surgery. In terms of climate science and global politics, I think it should only be used for entertainment purposes.

• In the world of model building a “heuristic” is an intuitive rule of thumb that is used in selecting the inferences which will be made by the induced model. Climatologists commonly employ heuristics which select those inferences that are most likely to make it possible to pay off the mortgage and send the kids through college!

52. Steve Oregon says:

TITO
Tomfoolery In Tomfoolery Out
MIMO
Mendacity In Mendacity Out
NINO
Nick In Nick Out

All the same thing.

53. more soylent green! says:

This is the money quote:

“[IPCC computer climate models ]create the results they are designed to produce.”

I can’t say it more succinctly than that.

54. axbucxdu says:

Unfortunately, this is all old hat to those dwindling numbers of genuine scientists that are aware of their limitations. See the paper titled, A First Look at the Second Metamorphosis of Science, by Attlee Jackson. Modern science is all about dogma, and very little about data.

55. BeegO says:

When this whole Global Warming thing first came about, I wondered, “how could you tell if the earth was warming or not?” You can’t tell by the earth-based temperatures because we obviously don’t have enough of them for decent data. Plus you don’t have any real historical data to go by. You could use satellite photos of the polar ice caps, but that is a very slow process and again, we have no real history. I guess the answer is that even to this day, you can’t really tell if the earth is getting warmer or colder. And obviously you can’t tell if CO2 is causing anything. Now if an idiot like me can easily figure this out, why is it that these brilliant scientists can’t see this?

• whiten says:

It’s not very complicated really… for the last century, until around the year 2000, there was a global temperature rise of about 0.7C to 0.8C. That is actually a big anomaly according to the climate data – a huge variation in such a short period of time – and all known natural variations cannot explain it. Besides, it is worse when you consider that it was a warming on top of another prior warming of 0.2C, and that a cooling was most probably to be expected during the same period instead of so much warming. The anomaly is shown clearly by the temperature records; I don’t think there is, or ever was, any dispute about it.
Whether AGW is a wrong or a correct hypothesis to explain the anomaly, the anomaly still stands and remains to be explained… and as time goes on, it shows clearly that it is not something to be ignored lightly.

cheers

• whiten says:

Oh…….. some clarification:
The +10 below from @earwig42 is meant for the comment above, the one from @BeegO that I replied to.

CHEERS

• Uncle Gus says:

It’s a case of them being too effing smart…

56. earwig42 says:

+ 10

57. rgbatduke says:

Ockham’s Razor is basically a provable theorem of Bayesian inference. The “causes” being multiplied are fundamentally Bayesian priors. We never know that the priors are true; we can at best assign them a probability of being true, one strictly in the range $0 \le p_i < 1$, where $i$ is the index of the ith cause. I include 0 only because some causes are contradictory — if we insist on logically consistent causes the range is $0 < p_i < 1$ and we are never certain that a prior is true or false.

Predictions of the theory are then strictly conditioned by the probability that the priors of the theory are true.

Suppose we have two theories that are both in equally good agreement with the empirical data. One of the two theories has only a single prior with a probability of being correct $p_1$. The second theory has three priors, including $p_1$ from the first theory.

It is now formally true that the second theory is less likely to be correct, because $p_1 > p_1 p_2 p_3$. In order for it to be correct, all three of its priors have to independently be true, compared to only one for the first theory. This illustrates the danger of needless multiplication of causes — if two theories rely on a large common base of assumptions, but one of them requires many more assumptions than the other while getting no better agreement with the data, one should assign a strictly higher probability of belief to the first theory compared to the second.

There is a second case where a more complex theory is likely to be formally less probably true even if the second theory does better at explaining the data. The proof is much more difficult — it has to compare the information entropy of the two theories relative to the results. The classic limit here is the invisible fairy class of theories. An invisible fairy theory can explain absolutely everything by stating fairies did it, only the fairies are invisible. If you drop a penny and it falls, it is because invisible fairies grab it and flap their teeny tiny wings and carry it towards the ground exactly as if Newtonian gravity were true. Why? Because they feel like it, that’s why! This theory can do a better job than Newtonian gravity in explaining the data, because the fairies feel like making the perihelion of the orbit of Mercury precess as if general relativity were at least approximately true. It also explains the data associated with Dark Matter and Dark Energy — there is no such thing in either case, it is just that the fairies who move stars around are too lazy to keep up with Newtonian gravity. The fairy theory, however, has a truly staggering amount of information entropy compared to Newtonian gravity, or Einsteinian gravity, and is at least comparable to Dark Matter theories because they are invisible fairy theories, they just give special names to the invisible fairies in question.

Most of this is worked through in Cox’s monograph:

http://www.amazon.com/Algebra-Probable-Inference-Richard-Cox/dp/080186982X

(arguably the most important work in the history of philosophy) or in more detail with more/better examples in Jaynes’s lovely book:

http://www.amazon.com/gp/product/0521592712/

Jaynes has a lovely example that he works through of how inference really works. One is walking down the street and observes a jewelry store with the glass smashed and all of the display jewelry missing. We would all infer that the store was recently robbed. Why?

It isn’t that we cannot come up with alternative explanations. For example, the window could have been smashed by the store’s owner, because he was on his way to a party and was going by the store and realized that he’d forgotten to turn on the alarm but didn’t have the key and so he decided it was less expensive to smash the glass himself and just take the window display jewelry with him to the party than to leave it there unprotected and risk somebody else smashing the glass and taking it while he was partying with his store basically unprotected. We can probably make up dozens of stories capable of explaining the observation.

But each of these stories involves a comparatively unlikely set of prior assumptions. How probable is it that the store’s owner a) was going to a party by a path that took him by the locked store; b) forgot to turn on the alarm earlier that day; c) forgot the key; d) would decide to smash the glass and take the jewelry with him rather than return home, get the key, unlock the store, set the alarm, lock the store, and go on to the party? We could estimate $p_a = 0.01$, $p_b = 0.01$, $p_c = 0.01$, $p_d = 0.001$ and conclude that the probability that all of these are true is roughly one in a billion (or likely less; $p_d$ especially is probably a vast overestimate for the stupid, party-animal, forgetful store owner portrayed). Indeed, an empirical estimate of the probability would be much lower — we’ve never even heard of a jewelry store owner who smashed his own glass and took all of his own jewels at all, let alone on the way to a party after failing to set an alarm and forgetting his keys. As far as we know for sure, this has never before happened in the entire history of the world.

On the other hand, we might conclude that the store was robbed by an ordinary miscreant. Our computation there is much simpler. Maybe 1 person in 20 in the city is poor and willing to break the law in order to obtain money for drugs, for food, for whatever. We know this because we read about them in the newspapers every day and have experience with being robbed ourselves perhaps once every decade. The jewelry store with its easily smashed window and expensive display is an attractive target, and we are pretty certain that people walk by every day and look at it and wish they could just take it without paying for it. There is no question that jewelry stores in general are robbed in just this way every day (somewhere) — it requires little imagination to conclude that this is by far the most likely explanation for the observation of a broken window and missing jewels.
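Jaynes’s comparison can be written out directly, using the illustrative numbers from the comment (which are, as the comment itself says, rough guesses):

```python
# Joint probability that ALL the priors of the forgetful-owner story hold,
# assuming independence, versus a single prior for an ordinary robbery.
p_owner = 0.01 * 0.01 * 0.01 * 0.001   # p_a * p_b * p_c * p_d
p_robbery = 0.05                        # "maybe 1 person in 20", per the comment

print(f"owner story: {p_owner:.1e}")    # roughly one in a billion
print(f"robbery is ~{p_robbery / p_owner:.0e} times more probable")
```

The multiplication of independent priors is the whole argument: each extra assumption the exotic story needs shrinks its joint probability by orders of magnitude.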

It is, in other words, simple common sense. Bayesian reasoning generally is.

It’s why Bayesians do not, or at least should not, believe in God(s). Complex, “psychological” explanations involving invisible fairies of all sorts, while usually capable of being made plausible as in not overtly contradictory, are almost never the best (most probable) explanation given the data and the rest of our evidence-supported probable beliefs.

The most parsimonious (or, more generally, information-entropically optimized between explanatory power and the probable truth of the required assumptions) theory is not necessarily the correct one, but it often is: provably the most probable one, given the data and the set of theories that can explain it equally well so far.

rgb

• Generally, Bayesian inference suffers from the non-uniqueness of uninformative priors, with consequential violation of the law of non-contradiction. An exception to the rule is that the uniform prior over the limiting relative frequency is uniquely uninformative.

• george e. smith says:

I couldn’t have said that better myself ??

• whiten says:

Hello rgb.
If you will allow me to ask a simple question.

After all you say above (I confess that I have not read it further than the third line), can you please tell me what your take is on the projections of GCMs? ACCORDING TO YOU, do these projections prove or disprove AGW, or what else?

Maybe if I had managed to read all of the above, I would already have an answer to my question.

cheers

• rgbatduke says:

IMO it is very probable that the increase in CO_2 concentration from whatever source can reasonably be causally linked to some fraction of the recently observed (say, post 1950) warming. The data is most consistent with the simplest null hypothesis of warming produced only by the direct forcing from the CO_2 increase itself with net neutral feedbacks from all sources, and if one fits the entire range of CO_2 increase from 300 to 400 ppm to the temperature increase observed across the same period, one gets excellent agreement from a simple logarithmic model with a TCS of around 1.25 C (fitting HADCRUT4 — GISS LOTI would require a slightly higher TCS because it shows slightly greater warming, go figure). That is, the fit agrees to within around 0.1C across the entire range and is obviously of the same order as the uncertainty in the data being fit (given that GISS and HADCRUT4 differ by at least this much, outside of their individual claims).

That is, we might expect a total anthropogenic warming of 1.25 C by 2100 if CO_2 increases to 600 ppm due to human burning of fossil fuels, where we’ve already realized over 1/3 of this (so we have less than 0.8C to go from present temperatures). However, this is also of the close order of observed natural variation and so it is difficult to place a lot of confidence in the predictivity of the model. Basically, we have no good way to tease natural variation out of the total temperature change, nor any chaotic contribution, nor any particular feedback positive or negative, nor any effects due to other variables not being stationary. The null hypothesis is sufficient to explain the data and hence is not falsified, but that doesn’t mean it is correct.
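The logarithmic null model described above is simple enough to write down. A sketch using the comment’s fitted numbers (TCS of about 1.25 C, 300 ppm baseline), assuming the usual per-doubling form of the logarithmic response:

```python
import math

def warming_c(c_ppm, tcs=1.25, c0_ppm=300.0):
    """Warming under a pure logarithmic forcing model: dT = TCS * log2(C / C0)."""
    return tcs * math.log2(c_ppm / c0_ppm)

realized = warming_c(400.0)   # warming already realized at 400 ppm
total = warming_c(600.0)      # 1.25 C at a doubling of the 300 ppm baseline
print(realized, total, total - realized)
```

This reproduces the comment’s bookkeeping: roughly 0.52 C realized at 400 ppm, which is “over 1/3” of the 1.25 C expected at 600 ppm, leaving “less than 0.8 C to go”.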

Hope this helps. Oh — this means that no, the GCMs neither prove nor disprove AGW. Indeed, it works the other way around. Agreement between the GCMs and observations of the climate might help increase our degree of belief in the GCMs. Cart = GCM. Horse = Actual climate. Let’s keep the horse in front of the cart.

rgb

• whiten says:

@ rgbatduke
October 17, 2014 at 12:18 pm

“Hope this helps. Oh — this means that no, the GCMs neither prove nor disprove AGW. Indeed, it works the other way around. Agreement between the GCMs and observations of the climate might help increase our degree of belief in the GCMs. Cart = GCM. Horse = Actual climate. Let’s keep the horse in front of the cart.”
——————————-

Sorry, but the above to me reads like “unless the GCMs prove AGW by agreement with the real climate observations, then they are never to be ‘believed’, even while the very same GCMs may project and agree with reality, just not under AGW”.

You say:

“TCS of around 1.25 C (fitting HADCRUT4 — GISS LOTI would require a slightly higher TCS because it shows slightly greater warming, go figure).”

Now I have got the impression that you are a professor, so let me try this and hope you understand it, at least in principle.

You see, we talk of and use concepts like CS, CR, TCS, TCR. All these concepts are needed in the case of an AGW with a CS of 2C to 4.5C, especially in any case where the CS is TO BE SEEN OR CONSIDERED ABOVE 2.4C, because in these cases the CR value is too insignificant to help determine where and when climate equilibrium is reached, and so on you have the ECS. All these concepts portray and lead to the conclusion of treating the CS metric as a variable.
AGW in principle means a new climate equilibrium when the CS changes value. So in principle the ECS at any value above 2C is a new one…… an AGW CS. And that is one of the most significant problems of AGW in principle.
So, officially in the climate science orthodoxy I can’t complain any further about these concepts, but when getting below 1.5C there is no need anymore for TCS and TCR…. simply the CS and CR will do for pointing out the where and when of the climate equilibrium or the transient climate. CS becomes (actually is, as it should be) a constant, more or less, and the CR is very detectable as it will be bigger… and there is no chance at all of a new ECS, as the very concept of ECS means nothing at all in the case of a constant natural CS. CR =~ 0.1C = ECR (equilibrium climate response). CR, depending on the value of CS (0.6C to 1.2C) and the climate’s initial state, could vary over a range of somewhere from 0.2C to 0.8C, I expect….

CR is so significant at any CS value below 1.5C because there is no chance of any new climate equilibrium, but only the chance of the next normal climate equilibrium.

I think you need to revisit your models with a bit more care, because all projections for 0.6C to 1.2C are quasi-similar, as far as I can tell: all will show a cooling of ~0.8C to ~1C at about the middle of the next century from now, with a bigger CR anomaly for CS values of 0.6C and 1.2C, and a small anomaly for CS = ~0.9C… where a CR anomaly means any value of the CR above 0.8C.

If you compensate with the right CR value for the corresponding CS value, keeping in mind the initial state of 0.8C warming, and compensate the value of the remaining natural warming in accordance with the CS value used, you may just get the same results as above.

Ah, one more thing about the CR in the case of CS 0.6C to 1.2C… it seems to be above 0.4C only in cases of excess and anomaly in CO2 emissions, at about 80-100 ppm excess, and depending on the CS the warming in excess will be a value from 0.2C to 0.35C… go figure… a new need for a new kind of CR now, something like HCR (Hyper CR). :-)

Please do yourself a favor and start to understand that there is not the slightest chance of any AGW below a 1.5C CS; it can hardly hold true in principle for a 2C CS… and I am sure that you are not really contemplating any possibility of a 2.5C CS and above.

Also, while any CS above 2.5C holds up OK with the concept of AGW and any climate data prior to 2000, and with some pushing and shoveling is made to look like it fits the paleoclimate trends and data BY TURNING THE DEFINITION OF A METRIC INTO A VARIABLE, a CS of 1.5C to 1.0C has a significant problem with the estimated paleoclimate trends…

According to climate data (paleo and modern, up to date), a CS of 1.5C to 1.3C is as meaningless as a CS of 0C to 0.4C.

Anyway, a natural climate with a CS of 0.6C up to 1.2C, as it stands, will not fail to show that any short burst of warming (like those due to El Ninos etc.) will be followed by a stronger cooling over the same short term, at nearly double strength. That is what the CR under these conditions means: a double response to any further short-term warming.
If that becomes a regular series, then before long the hiatus will turn into a cooling trend… but anyway, we wait and see. :-)

Thanks for your interest, and I hope this does not bother you much.

cheers

• milodonharlani says:

rgbatduke
October 17, 2014 at 12:18 pm

Thanks for yet another lucid statement of your conclusions regarding man-made global warming.

IMO the null hypothesis is that the fluctuations of global average temperature in the 19th, 20th & 21st centuries are natural, as they were during the 16th, 17th, 18th & earlier centuries, ie there is no discernible human signal in the record.

While a 1.2 degree C per doubling curve may well be fitted to the warming allegedly observed from c. 1977 to ’96, the fact remains that the slope of that line is indistinguishable from that of the early 20th century warming (1920s-40s) & at most not much different from the mid-19th century (post-Dalton Minimum) & late 18th century warmings, while definitely lower in amplitude & shorter in duration than the early 18th century warming, coming off the depths of the LIA during the Maunder Minimum. These earlier warming intervals of course occurred with CO2 at “pre-industrial” levels.

Thus IMO there is no reason to reject the null hypothesis of natural fluctuation for the late 20th century warming, without even calling into question the validity of the surface station “data”. Any human signal is negligible, ie within measurement error.

This doesn’t mean that human activities necessarily had no effect during the post-war rise in CO2 levels, but does suggest that negative feedbacks outweigh positive, or that they roughly cancel each other out, while other man-made perturbations, such as aerosol inputs, cool the planet, thus balancing the insignificant warming from anthropogenic GHGs.

Also, IMO more than a third of the warming effect from an increase in CO2 from 300 to 600 ppm should arise from the first hundred ppm (300 to 400) already experienced, since the function is logarithmic. Against this is that the effect might still be in transient rather than equilibrium mode.
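[Editor’s aside: milodonharlani’s “more than a third” figure follows directly from logarithmic forcing and can be checked in a few lines of Python; the function name below is mine, purely illustrative.]

```python
import math

def warming_fraction(c_start, c_now, c_end):
    """Fraction of the total warming from a c_start -> c_end CO2 rise
    that logarithmic forcing assigns to the c_start -> c_now portion."""
    return math.log(c_now / c_start) / math.log(c_end / c_start)

# First 100 ppm (300 -> 400) of a 300 -> 600 ppm doubling:
print(round(warming_fraction(300, 400, 600), 3))  # 0.415, i.e. more than a third
```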

So, IMO, there is as yet no evidence of any detectable human “fingerprint” in the temperature data, such as they are, let alone a catastrophic effect. The null hypothesis of natural fluctuation has not been rejected.

• basicstats says:

This terminology is not standard to probability, where there can only be one prior (probability distribution) in a Bayesian analysis. Presumably, one theory depends on just one proposition (event), while the other theory depends upon the validity of 3 propositions (intersection of three events). Assuming the latter events are independent, this replicates the stated probabilities as the respective prior probabilities – p(1) versus p(1) x p(2) x p(3) – for each theory. As you say, the posterior probability of the more complicated theory being true conditional on the same empirical fit is then likely much less.
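[Editor’s aside: basicstats’ prior-multiplication point can be sketched numerically; the 0.9 priors below are invented purely for illustration.]

```python
# Two theories fit the data equally well (equal likelihoods), so the
# likelihood terms cancel in Bayes' rule and posterior odds equal prior odds.
p1, p2, p3 = 0.9, 0.9, 0.9    # illustrative priors for three independent propositions

prior_simple = p1             # theory resting on one proposition
prior_complex = p1 * p2 * p3  # theory resting on all three (independence assumed)

odds = prior_simple / prior_complex
print(round(odds, 3))  # 1.235: the simpler theory is more probable a posteriori
```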

• axbucxdu says:

Dodging Kant, are we? The trouble with probability on its own is that all events have consequences. It is expected value, and not exclusively probability, that is used for speculation. So if we’re going to play fairly, we should at least have the honesty to multiply the lowest probability bound for a deity’s existence by the maximum numerical bound of the consequence for that event. Of course, it should be no surprise then that when we do so, we end up with an indeterminate form without resolution for the expected value, and right straight back to Kant’s Antinomy 4. Attempts at squaring circles lead to either poorly constructed squares or damaged circles.

The cavalier domain jumping I see here is indistinguishable from the examples provided by climate modelers and their non-redeemable claims. This serves only to dissipate the credibility of modern science. Frankly, it’s turning into an embarrassing hack, and already on several fronts is no better at explaining reality than your fairies. Scientists have got to reacquaint themselves with their limitations, and quickly.

59. Professor Brown,

Coincidence, then, that so many physical constants are the only value that permits life, while in a random Universe they could have been anything? For instance, why is ice lighter than liquid water? Why do we have a large Moon, making ours a very unusual binary planet, with tides without which life is even more unlikely? Where are all the intelligent aliens if we are not alone in the Universe (of course Fermi said it best, “Where is everybody?”)?

God is a Bayesian, and believes in Bayesians…

• rgbatduke says:

We can count the logical fallacies in this, although there is little point. From the strict point of view of probability, though, the God you imagine existed in order to design a complex Universe is necessarily still more complex. So however unlikely you think it might be to pull a Universe out of a hat containing uniformly distributed “random” Universes that can support life (which is seriously self-contradictory, given the meaning of the word Universe as everything that objectively exists and hence the union of all spacetime continua you might pull out of a hat and the hat besides and the hand that did the pulling), it is almost infinitely more unlikely that a God exists to engineer all that complexity and tuning.

Again, a religious person really needs to learn a wee bit of information theory. Just for God to be “omniscient” in any reasonable sense of the term is already impossible. However large the information content of the dualistic Universe you envision, God’s information encoding and storage has to be larger still, and furthermore has to include the ability to encode itself or God does not know God, any more than my brain is capable of encoding and storing the state of every molecule in my brain as higher order information. So if you think yourself unlikely simply because you are complex, imagine how much less likely it is that something even more complex organized and defined you right down to the last photon, across all of space-time.

Not so likely, hmmmm.

• There is no other Universe than this one. God existed, and exists, because He does. The Universe follows His rules, because He likes it that way. It’s a mystery, and a beautiful one. Why do you say the Universe is complex? It is just a bunch of matter/energy, doing what matter/energy does, such as to permit my parents to meet and make me. I think the fact that aliens have not made themselves known to us is strong evidence that God made the Universe to make us, and to challenge us to be our best. There is no proof of this, but it is the simplest explanation, and hence the most likely to be true…

• rgbatduke says:

I think the fact that aliens have not made themselves known to us is strong evidence that God made the Universe to make us, and to challenge us to be our best. There is no proof of this, but it is the simplest explanation, and hence the most likely to be true…

Excuse me? And what about Roswell?

Just kidding. How about “aliens haven’t made themselves known to us because the nearest star is roughly 25 trillion miles away, it takes light 4 and a quarter years to cross the distance in between, light intensity drops off like $1/r^2$, relativity theory states that nothing can go faster than light, and practical economics states that the cost of transportation between stars is enormous, given everything we know about physics right now”?
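[Editor’s aside: the figures rgb quotes check out; a quick sketch, where the conversion constants are standard approximations and the variable names are mine.]

```python
MILES_PER_LY = 5.879e12   # miles in one light year (approx.)
AU_PER_LY = 63241.1       # astronomical units in one light year (approx.)

d_ly = 4.25               # distance to the nearest star system, in light years
print(f"{d_ly * MILES_PER_LY:.2e} miles")        # 2.50e+13, i.e. roughly 25 trillion

# 1/r^2 dimming: the same star seen from 4.25 ly vs. from 1 AU
r_au = d_ly * AU_PER_LY
print(f"relative intensity: {1 / r_au**2:.1e}")  # ~1.4e-11
```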

Then there is the possibility that the evolution of human-scale intelligence and the ability to generate energy by burning stuff (almost) inevitably causes a greenhouse extinction event or maybe just a simple ecological collapse, so that only a bare handful of “lucky” civilizations scrape by to “adulthood” and by then they are too wise to burn the enormous amounts of stuff that must be burned to get into a planetary orbit. Perhaps this isn’t terribly probable, but I know a lot of True Believers in CAGW believe precisely this.

Note well: We’ve only been able to determine that extrasolar planets exist within the last 25 years or so, because it is actually very, very difficult to see stuff smaller than million-kilometer balls of fusion-heated gas over distances of 25 trillion miles and up, and only a few thousand of those are visible at all to the naked eye, and you are surprised that aliens “have not made themselves known to us”?

And why, exactly, did God make the roughly trillion-trillion stars we can either see directly or infer the existence of without using arguments from homogeneity to multiply that by another factor of a hundred million or so, given that a large fraction of the stars that we can see with the Hubble have already burned out, exploded, moved on? The lifetime of stars larger than red dwarfs is typically a few billion years, and most of the volume of a sphere is near its surface. Then there are all of those extrasolar planets, most of the ones we have found (so far) being completely unsuitable for human life or exploitation even if we somehow beat the enormous economic and physical challenges of interstellar travel to reach them. God may have made the Universe “just” to make us, but boy did he waste a lot of material, at least if he thought we would find most of it to be accessible or useful!

And why did God “challenge us to be our best?” What does this even mean? How does existence itself challenge anything to be its best, because that is what you are asserting, that the Universe (with us in it) by existing, is itself a challenge to be our best, one left there to be so interpreted by God.

I have a different interpretation, one based on observation.

What I can directly observe is that God created a Universe that is the proximate cause of dying children. In fact, dozens of children have died from utterly random causes in the limited time I have been typing this reply, and I type like the wind. I’m not talking about being murdered, or dying because of human evil — I’m talking about dying of cancer, dying of diseases, dying from silly accidents. God created a Universe that is full of casual sources of enormous amounts of pain and suffering, suffering experienced by humans and animals alike. You have all of this suffering to explain, and yet it is the absence of space aliens that convinces you that a benevolent God made the Universe?

What I can directly observe is that God created a Universe that shows not the slightest actual trace of the existence of God, no matter where and how you look. Indeed, the God of the Gaps that we began with in ancient mythology has steadily shrunk as our knowledge has grown. Since we now have physical laws that pretty much explain anything we can see with remarkably few gaps, teleological arguments are reduced to asserting that it is the existence of physical law itself that proves God. No, it doesn’t. At least not for any meaning of the word “proof” that means, well, actual proof.

And finally, God as an explanation is not simple. It is both complex and useless. It is complex because you have to assert all sorts of things about God’s psychology, intent, nature, that we cannot directly observe (and that often contradict things that we can directly observe). You have to invent an entire alternate physics to support God’s existence and mentation process and experience of time (although no one ever does, of course, preferring the “mystery”). It is useless because you cannot use it to predict one single thing about the actual Universe we live in.

rgb

Michael Moon writes (October 17, 2014 at 9:53 am): “Coincidence, then, that so many physical constants are the only value that permits life…”

Anthropic Principle, anyone?

“Where are all the intelligent aliens…”

Steering clear of a species that largely believes in invisible fairies? :-)

• Bob Boder says:

Life exists; therefore the systems that allow life must exist, random or not. If God exists, what are the conditions that allow for him/her? Random, or something even more powerful?

60. whiten says:

I am really puzzled…
What else, or who else, apart from the very excellent AGW projections of the GCMs can prove so clearly the fallacy and the shallowness of the AGW thesis?
So I “hear” so many complaining about the GCMs because their projections cannot be disputed as the best possible under the circumstances, while at the same time the projections are the only and the clearest evidence showing and proving how wrong the hypothesis of AGW is.

I can’t make up my mind yet on this point… is it because of jealousy, or because of the mental conditioning of AGW dependence… “can’t do without it anymore”… the AGW addiction… or both?

I am sure some of you can help me with the answer to the above question!

cheers

61. milodonharlani says:

Models at least have the utility of demonstrating that CO2 is not the control knob on climate which their programmers imagine, since the results of their assumptions are at such wide variance from observed reality.

62. milodonharlani says:

But, as above, GCMs are worse than worthless as a basis for making public policy decisions.

63. {bold emphasis mine – JW}

rgbatduke October 17, 2014 at 9:21 am said,

“[. . .]

It is, in other words, simple common sense. Bayesian reasoning generally is.

It’s why Bayesians do not, or at least should not, believe in God(s). Complex, “psychological” explanations involving invisible fairies of all sorts, while usually capable of being made plausible as in not overtly contradictory, are almost never the best (most probable) explanation given the data and the rest of our evidence-supported probable beliefs.

The most parsimonious (or more generally information entropically optimized between explanatory power and the probable truth of the required assumptions) theory is not necessarily the correct one, but it often is, provably the most probable one, given the data and set of theories that can explain it equally well so far.

rgb”

Premises are at the heart of applied reasoning. Where do we get the premises from? That is the key fundamental question. Premises dependent on faith in the existence of supernatural, omnipresent and omnipotent beings demarcate the arguments using them as irrelevant to humans focused on using their natural capacity for reasoning to naturally understand nature.

John

• Bob Boder says:

What?

• Gunga din says:

Mods. My comment disappeared. Did I cross a line?

• Bob Boder on October 17, 2014 at 12:31 pm

What?

– – – – – – –

Bob Boder,

Premises irrelevant to a focus on understanding nature by natural means and tools.

John

• One of us is really confused? I am just trying to figure out which one

• Bob Boder on October 17, 2014 at 5:07 pm

One of us is really confused? I am just trying to figure out which one

Bob Boder,

John

• rgbatduke says:

Premises are at the heart of applied reasoning. Where do we get the premises from? That is the key fundamental question.

It is indeed. Because, as Hume noted, given that premises are by their nature unprovable assumptions from which a contingent derivable theory proceeds, all philosophical theories are basically one large exercise in question-begging, no matter how pretty they are or how pristine their logical arguments. If one begins one’s derivation with the premise “God exists”, it is going to be unsurprising that you find lots of ways to prove that this is true within your theory, but they are all basically tautology. The same is true when you make assertions of contingent necessity, e.g. if we observe complexity in the Universe, then it is necessarily the case that God exists. This statement cannot possibly be proven, because it is rather obviously not necessarily the case that God exists at all, independent of whether or not one observes complexity. It merely states the opinion of the individual making the statement, and gives a quasi-logical feel to the teleological argument for God in any of its many forms, in spite of the fact that we can demonstrate awe inspiring complexity in absolutely trivial mathematical iterated maps that I would assert depends in no way at all on the existence of God any more than ordinary arithmetic depends on the existence of God.

In the end, lacking any of the sort of straightforward, direct evidence for God that one would (frankly) expect if God existed, faced with the rather crushing problem of theodicy and the existence of evil and an omnipotent omnibenevolent deity, one is forced to generate indirect arguments that turn everyday stuff into evidence that God exists in spite of the fact that it is no such thing.

Perhaps it is “surprising” that something exists. It is no less surprising that something exists if you name part or all of that something God. You cannot gain anything explanatory from the hypothesis, when you put it that way, not without begging all questions and generating the mother of all Ockham’s Razor violations and asserting that it is somehow logically necessary for God to exist, when of course it isn’t. That’s the problem. It isn’t logically necessary, and while one cannot really falsify the proposition empirically because believers will always point out that the God they believe in is a deliberately deceptive invisible fairy who cannot be expected to simply directly communicate with humans even though It deeply cares about whether or not we “believe” in It or expose a female nipple in a public place, neither can one provide the slightest reliable evidence for the proposition, one can only engage in endless rounds of question-begging argument.

rgb

• milodonharlani says:

IMO trying to argue for the existence of a Creator from observations of the physical world is both bad science & bad religion. There is no need for faith if you’re convinced that the existence of the universe offers conclusive evidence for the existence of a Supreme Being.

As Martin Luther said, “Die Vernunft…ist die höchste Hur, die der Teufel hat” (Reason is the highest whore the devil has).

And “Wer…ein Christ sein will, der…steche seiner Vernunft die Augen aus” (Whoever would be a Christian must tear the eyes out of his reason).

As the Early Church Father Tertullian wrote of the Christian story, “Prorsus credibile est, quia ineptum est” (It is to be believed precisely because it is absurd).

Efforts by the Scholastics & subsequent apologists to “prove” the existence of God IMO fundamentally miss the point, at least that of Paul, that the believer is saved by faith alone.

OTOH even militant atheist Richard Dawkins admits that lack of faith is also a choice, since the existence of God can’t be conclusively shown false, although Stephen Hawking has recently tried to do so. However it’s an easier choice given the lack of convincing evidence for a Creator, let alone a Sustainer grading human performances on earth & counting hairs on heads & falling sparrows.

• Though philosophy might be organized around unprovable premises it is more fruitful to organize it around observable states of nature. Under this form of organization a state is a proposition that is “true” when “observed.” Thus in a coin flip the state and proposition “heads” is “true” when “observed.”

• ripshin says:

This isn’t necessarily a reply to rgbatduke (hah, Gotham may have a batMAN, but we have a batDUKE (royal sibling?)); it’s just the lowest “reply” link in this chain.

Arguing a scientific rationale for the existence/non-existence of a creator is somewhat like trying to drive from New York to Paris. Yes, there is a way to get from NY to Paris, but it’s not by driving.

In my experience, those who have spent time considering why they believe in the existence of God come to the conclusion that the universe makes no sense without a creator who is outside the bounds of it. That is, the explanation for anything is not to be found within that thing. Thus, “why the universe” is answered by something outside it. (This leads me to a point/question I’d like to make/ask about the GCM’s…to follow below.) And therein lies the futility of using scientific rationale to discuss the existence of God. God answers the question of “why”, not of “how”. Furthermore, and this is really off-topic but maybe relevant, if you really probe, I suspect that most people who believe in the existence of God cannot point to a logical, stepwise decision tree that leads to their conclusion. Their conclusion is a matter of faith which, by definition, is a belief in something not seen or observed. It is interesting to note, however, that though mankind has formulated incredibly elegant, complex, and even fantastic scientific theories to explain the existence of the universe/life, and has argued that a creator is not, therefore, necessary, it’s precisely the set of issues pointed out by rgb that is necessarily answered by an appeal to a creator/God. That is, the argument for a moral absolute…the existence of which is absolutely and inescapably intertwined in the human experience. Without the qualitative guidelines of a moral absolute, the angst described by rgb (and, truly, felt by all of humanity) wouldn’t even make sense. For, why else would the death of a child be cause for lament? Why does suffering matter? And furthermore, what is suffering? Isn’t it just electro-chemical impulses interpreted by a bio-chemical computer?

Sorry, looks like I’ve meandered off track here. I really just meant to expound upon the idea that science isn’t at all trying to answer the same question as faith. That’s really why I find it somewhat absurd to suggest that the increase of knowledge (reduction of gaps) somehow reduces the need for God as an explanation. As if “God” was ever just a mechanism to help humans understand the how. This is a meaningless argument predicated upon a complete lack of understanding as to what question “God” really answers.

Back to AGW and GCM’s, as alluded to above. Isn’t there a sense in which trying to determine the average global temperature by examining some few thousands of discrete points around the globe is sort of silly? It seems sort of like trying to understand the average body temperature of a person by measuring the temperatures of a few cells. Extrapolate this out (maybe it’s not meaningful as an analogy, but it seems to mirror the gist of the problem) to a climate model, and you’re trying to understand and predict how a person’s temperature might rise by modeling at a cellular level. I imagine it going something like this:

GCM Researcher: Periodic distribution of averaged cellular response to decreasing amounts of available sucrose, combined with an increased level of saturation of hydroxyl compounds should result in a decrease in energetic output and lowered average temperature. Ahhh…if this continues the body will freeze to death. We must do something immediately…give us money for research!

Step back, though, and you’ll see: Hmm…spring break, Florida, sand-volleyball in the hot sun, scantily dressed members of both sexes, lots of beer…yep, that person’s heart rate and temperature are both up. Good thing the body has natural cooling mechanisms…

On another note, to summarize what I think Nick Stokes was asserting elsewhere, the assumption seems to be that initial conditions are irrelevant because the models can start from anywhere and naturally resolve into meaningful simulations of the climatic processes. But, correct me if I’m wrong, isn’t this in direct contradiction to the observed chaotic nature of the climate? If the climate truly exhibits sensitive dependence upon initial conditions, and has non-linear instability, how could it randomly resolve into anything meaningful? Seems like the models, if they were accurately capturing the climate mechanisms, will just wildly oscillate all over the place until they’ve been precisely tuned-in to an actual starting point.

This does bring up another question, though. That is, I understand that many people (at least those who inhabit the halls of this website) believe the climate is naturally governed by negative feedbacks…which keep it from oscillating too wildly. Does this then negate, or diminish, the true chaotic-ness of the climate? If it’s bound by internal feedbacks, maybe Nick Stokes is correct, and you can “start anywhere” as long as you get the mechanisms correct…I’d love to understand this further…

rip
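[Editor’s aside: the sensitive dependence ripshin asks about can be illustrated with the logistic map, a textbook chaotic system (a toy, not a climate model): two runs whose starting points differ by one part in ten billion separate to order-one within a few dozen steps.]

```python
def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # initial states differing by one part in 10^10
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"trajectories diverged by step {step}")
        break
```

Whether negative feedbacks bound such a system (so that long-run statistics, rather than trajectories, become predictable) is exactly the open question ripshin raises.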

• ripshin: “I’d love to understand this further”

The climate system and GCMs are two different things. GCMs can be programmed to show warming regardless of the inputs. Like Mann’s statistical procedures.

Oceans may establish a thermostat that makes Earth and other planets habitable. Weather is just chaotic behavior around the melting point of ice. We know that ice ages do exist, but our existence tells us that Earth has quite a lot of tolerance to internal and external shocks.

• Mario Lento says:

Try as I may, I cannot get past your diatribe.
You can measure the average temperature of a human with a few temperature readings. And where is there sucrose in the human body? Glucose I can see, but sucrose? Not for long. Sucrose breaks up into glucose and fructose immediately, and fructose goes through the liver to be turned into triglycerides and glucose, depending on how much fructose is left over.

• Mario Lento says:

PS – my rant was in response to ripshin October 20, 2014 at 2:34 pm

• ripshin says:

Mario, thank you for your clarification regarding the correct word for sugar in the human body. Glucose was definitely the word I was looking for.

Also, I didn’t realize I was writing a diatribe, and certainly didn’t mean to. I was merely attempting to point out a couple things that, I feel, are commonly misunderstood, or mistakenly conflated. (Or, are you referring to my completely light-hearted joke about rgb’s screen handle…which was meant as a humorous compliment…)

Regarding temperature, my point stands. Avg body temperature, as measured by modern instruments, certainly isn’t taken at the cellular level. If you were to measure individual cells, I’m certain you would find a disparity between different ones based on their respective levels of activity (at the time of measurement). Those individual measurements wouldn’t necessarily be indicative of the avg body temp, though, which is my point. [My silly analogy of the spring break thing, including oblique (and apparently incorrect) references to low blood sugar and elevated BAC, was more a function of end-of-the-day punchiness than serious scientific analysis.]

In the same way, I assume that taking a few thousands of spot measurements around the globe doesn’t necessarily indicate anything meaningful about avg global temp. Didn’t someone above, sorry can’t remember who, mention the wide range of measured temp deltas in a single column of air, mere meters apart? Based on this, it would seem logical to conclude that you’d have to figure out a way to get a much broader global temperature reading in order to make valid determinations of avg global temp trends.

Finally, my question about negative feedbacks is really whether or not they reduce the chaotic-ness of the system…and following that to its apparent conclusion, can we then say that initial starting points (of GCMs) are not important? (Which, I think, was Nick Stokes’ contention.)

Hope this helps.

rip

64. n.n says:

The system is incompletely or insufficiently characterized, and unwieldy. This necessarily limits our forecasts and predictions to the scientific domain. A domain which is constrained in time and space, with accuracy inversely proportional to the product of time and space offsets from an established frame of reference.

Models or estimates can be accurate, but their value is marginalized by mortal limitations. We are limited to sub-universal observations and speculation. Shifting to universal or extra-universal domains violates scientific integrity. At present, we don’t even have comprehensive knowledge or skill in our immediate neighborhood, Earth.

Systemic changes are fundamentally a risk management problem, with limits on actions inherent to this set. The credibility of an Anthropogenic Global Cooling or AGW(arming) or AGC(limate)C(hange) problem was undermined by people declaring an affirmative skill and knowledge outside of the scientific domain and a resolution incoherent with risk management best practices.

• axbucxdu says:

This is a cogent and compact mathematical argument that reveals climatology for what it is: a hoax, a clearly dishonest use of mathematics. This is straightforward scientific knowledge that should have stopped these policy pursuits in their tracks. Surely, with all the self-professed skills and abilities at the disposal of agitprop science, these obstacles are understood there as well. So ignorance can be no defense. The question is why its practitioners refuse to admit as much. Perhaps the only form of stupidity really is knowing so much that just isn’t so.

65. Gunga Din says:

Sorry. I haven’t read everything here.
But in this layman’s opinion, instead of “climate models” the effort put into them should have been put into cataloging past known (not ‘proxied’) weather conditions, then a computer program designed to match present conditions with past conditions to produce a better forecast.
If a “match” doesn’t produce the expected result, then explore the “why not?” and “what else is going on?” rather than assuming a cause that hasn’t affected anything for 17 or 18 years.

66. Dr Ball:

I invite you to re-read these paragraphs extracted from Journal of Oceanography, Vol. 57, pp. 207 to 234, 2001: “Heat and Freshwater Budgets and Pathways in the Arctic Mediterranean in a Coupled Ocean/Sea-ice Model,” by Xiangdong Zhang and Jing Zhang.

2.2 Forcing data
The climate monthly windstress, 2 m air temperature, 2 m air specific humidity, surface pressure and 10 m windspeed are prepared from NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis data from 1958–1997 (Kalnay et al., 1996). Precipitation is constructed from the 0.5° × 0.5° Corrected Monthly Precipitation Dataset (Legates and Willmott, 1990) and re-interpolation is made by the Cressman method to remove unreasonable values north of Spitzbergen, caused by different data sources when the precipitation dataset was compiled (Legates, personal communication, 1998). Hibler and Bryan (1987) and Zhang et al. (1998) used climate river runoff data from eight major rivers. In our modeling, thirteen major rivers along the Eurasian and the North American continents are included, as shown in Fig. 1. The river runoff data is from long-term observations (from 1950s to 1980s) supplied by NSIDC and Becker (1995).

Usually, climate drift could not be completely prohibited in the state-of-the-art coupled models. To limit drift and measure model fidelity, we include surface level restoring of temperature and salinity on time scale of 50 days to Levitus (1982) data. As Zhang et al. (1998) summarized, other models used 11 or 30 days for their surface level restoring. The restoring time scale is longer than others, putting weaker constraint on modeling. Diagnosed heat flux and FW flux from restoring terms help to understand uncertainties in modeling. As for effects of restoring conditions on Arctic ocean/sea-ice modeling, see discussions by Zhang et al. (1998).

Inflowing water properties from the Bering Strait and portions of the GIN Sea are given from Levitus (1982) data. Volume transports through open boundaries are from Nazarenko et al. (1998), with inflow at the Bering Strait fixed at 0.85 Sv (with no annual cycle) after Coachman and Aagaard (1988) who estimated long term mean 0.85 Sv. Outflow through the Canadian Archipelago is set at 1.7 Sv after Fissel et al. (1988). Volume conservation for this rigid-lid model implies net GIN Sea inflow of 0.85 Sv with spatial distribution.

The model was integrated over 120 years with the asynchronous strategy after Bryan (1984) under representing annual cycle of forcing from initial temperature and salinity from Levitus (1982) without initial sea-ice. The model achieved an approximately stable state in its sea-ice and upper ocean properties. Last 5 years mean is used as model climate.

“Surface level restoring” ?
“climate drift could not be prohibited” ??
Their whole program read data, ran their model equations, then changed the output to get what they thought they were going to get from the output.

67. stevek says:

These models overfit to historical data. They are optimized to match the past but are poor at predicting the future. I work as a programmer for a hedge fund and have done so for 15 years. You would not believe how many PhDs have come to me with models that make large amounts of money when back-tested but, when put into a real system, make no money.
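The backtest-overfitting failure described here can be sketched in a few lines (a toy illustration, not a real trading or climate model): the unique degree-7 polynomial through 8 noisy points reproduces the "historical" data exactly, then fails badly one step out of sample.

```python
def lagrange(xs, ys, x):
    """Evaluate the unique interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def truth(x):
    return 0.5 * x  # the simple underlying trend

xs = [float(i) for i in range(8)]
# "History": trend plus small deterministic +/-0.1 noise.
ys = [truth(x) + 0.1 * (-1) ** i for i, x in enumerate(xs)]

# "Backtest": the model matches every historical point exactly.
in_sample = max(abs(lagrange(xs, ys, x) - y) for x, y in zip(xs, ys))

# "Live": one step past the data, the same model is off by well over 100.
out_of_sample = abs(lagrange(xs, ys, 9.0) - truth(9.0))

print(in_sample, out_of_sample)
```

The more degrees of freedom a model has relative to its data, the better the backtest and the worse the forecast, which is exactly the hedge-fund experience described above.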

68. xyzlatin says:

Is there somewhere a list of all the people who run these computer models? Who are they? Since their statistics are used by the greens to claim global warming, which leads to increased use of expensive and unreliable so-called “renewables” in the electricity grids of most countries, and thus to my increased electricity bills, I should know their names. It is about time they were held accountable to the consumer.

69. michael hart says:

I enjoyed Cristopher Essex’s story of how a modeller informed him that he had “solved the Navier-Stokes equations for policymakers.” Essex then pointed out to his audience that there is still a straight $1 million available, waiting to be claimed by the first mathematician who can actually do that.

• rgbatduke says:

He might also have pointed out that it isn’t the $1M prize for proving that it can be solved that matters; it is the fact that, so far, no one has shown that one can solve it numerically using an integration grid/stepsize that is some 30 orders of magnitude too coarse, across all four dimensions, to represent the Kolmogorov scale of around 1 mm, the smallest eddies known to be important in the turbulent/viscous motion of air.

Oh, and then there is the practical omission of the entire ocean, a second, coupled Navier-Stokes equation with the double bonus of complex chemistry, a complex surface interface, complex density, and latent heat and heat capacity galore.

rgb
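rgb’s scale argument can be made concrete with rough back-of-the-envelope numbers (the figures below are my assumptions, not his): counting how many 1 mm cells a global grid would need just for a ~10 km deep atmospheric shell.

```python
import math

# Rough, illustrative figures (assumed here, not taken from the comment).
R_EARTH = 6.371e6   # Earth's mean radius, m
DEPTH = 1.0e4       # depth of the modelled layer, m (~troposphere)
ETA = 1.0e-3        # Kolmogorov microscale, m (~1 mm)

# Thin-shell approximation of the atmosphere's volume.
volume = 4 * math.pi * R_EARTH ** 2 * DEPTH   # ~5e18 m^3

# Number of Kolmogorov-scale cells needed to tile it.
cells = volume / ETA ** 3

print(f"{cells:.2e} spatial cells")  # of order 1e27, before any time stepping
```

That is for space alone; resolving the corresponding time scales multiplies the count further, which is the sense in which the required resolution is hopelessly beyond any computer.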

70. michael hart says:

spelling: Christopher Essex.

71. george e. smith says:

3 × 3 degrees is roughly 333 × 333 km. I couldn’t begin to list all the obvious climate differences between where I am and other places much less than 1/10th of that cell distance away.

Nyquist came up with a theorem, about believing the results of such sampling.
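Nyquist’s point can be shown in a few lines (frequencies here are illustrative, not climate data): a variation faster than half the sampling rate is indistinguishable, at the sample points, from a much slower one.

```python
import math

# A fast oscillation sampled once per interval aliases onto a slow one.
fast = 0.9   # cycles per sampling interval (above the Nyquist limit of 0.5)
slow = 0.1   # the alias frequency, |1.0 - fast|

samples_fast = [math.sin(2 * math.pi * fast * n) for n in range(50)]
samples_alias = [-math.sin(2 * math.pi * slow * n) for n in range(50)]

# The two signals agree exactly at every sample point.
max_gap = max(abs(a - b) for a, b in zip(samples_fast, samples_alias))
print(max_gap)  # ~0: the sampled data cannot tell them apart
```

Spatial sampling has the same limit: variation on scales smaller than twice the station spacing (or grid cell) is not merely missed, it is folded into the record as spurious slow variation.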

72. My position on moving forward with climate research remains what it was from the start, as expressed in the last sentence of the article.

There are two major responsibilities. One is to science, when working with climatology and the models. The second is to society, when you take the output of your models and convince society to base policy on them. In my opinion, the IPCC failed in both instances. Worse, they set out to bypass scientific responsibilities in order to pre-determine the policy message.

73. Steve R says:

I don’t necessarily agree with the premise of this essay. The reason climate models do not correctly simulate future climate is NOT because of a lack of density in initial meteorological conditions. It is because the factors which are responsible for changes in the climate are not included in the model. Even if we had top-notch meteorological data, as dense as any modeler could desire, climate models would still not work. (Of course, such data could allow us to construct meteorological models of exceptional quality).

In order for a model to correctly predict changes in the climate, it is imperative that we actually know what causes the climate to change! Take the most obvious and, in the long run, most important climate cycle: the periodic extreme variations of the Pleistocene. I would suggest that no amount of meteorological data, collected today, no matter how dense, can simulate the periodic variations of the Pleistocene.

Another way of putting this: the future climate is not “sensitive” to any meteorological data we can measure and put into a model today, no matter how dense. Even the future weather, at some point in the future, is not sensitive to present meteorological conditions at any density. Run different meteorological models out far enough in time, and they ALL eventually diverge. Using this technique to simulate climate cannot work, because it is, in effect, trying to simulate climate by simulating the weather between now and very far into the future.
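The divergence described above can be illustrated with the Lorenz system, a standard toy model of atmospheric convection (a sketch, not a GCM): two runs that differ by one part in a billion end up in entirely different states.

```python
# Forward-Euler integration of the Lorenz system; crude, but sufficient
# to show sensitive dependence on initial conditions.
def step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # a one-part-in-a-billion perturbation

for _ in range(3000):         # 30 model time units
    a, b = step(a), step(b)

# Euclidean distance between the two final states.
gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(gap)   # typically of order 10: the runs have completely decorrelated
```

No realistic improvement in the initial measurement changes this behaviour; it only delays the divergence by a few model days.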

• Though it would be well if the climate system could be reduced to cause-and-effect relationships, in modeling a complex system it is usually not possible to do so, for the available information is incomplete. Notwithstanding this handicap, it is often possible to build a model that yields sufficient information about the outcomes of events for the system to be controlled. Climatologists have not yet built such a model.

• The only model that can model a complex non-linear system is the system itself. The climate system is complex because it has feedbacks. Heat transfer is governed by non-linear equations.

Complex non-linear systems are very sensitive to initial and boundary conditions. I would not use the word chaos, because it is often connected to randomness. Lottery balls are a typical example of these systems: the system is governed by deterministic laws of physics, but small changes in the conditions result in unpredictable outcomes that, in this case, can be statistically analysed.

I spent a lot of time thinking about Nick Stokes’s claim that GCMs are physical models that do not use weather station data. Behind his link to a simple climate model I found that initial and boundary values for each grid cell are used (look at chapter 7). I would call that deception, because calculating grid cell values from weather station data is of course needed.

Googling ahead, I found a number of references that claim the climate system is not sensitive to the initial situation. Just above in this thread: “Run different meteorological models out far enough in time, and they ALL eventually diverge.” I am not sure of that. Starting at an ice age and at a hot period in history will result in very different next 100 years. Sensitivity to initial data is definitely there, but not necessarily to the atmospheric data; rather to the data of ocean heat and ice.

A missing understanding of the clouds and oceans is a showstopper in climate modelling. Without accurate and precise data it is not possible to get that understanding. Think about where HITRAN has been defined.

By definition, models are simplifications of the system. You have to leave out unimportant factors, and accuracy and precision have to be compromised because of the limits of computing power. Numerical calculation has its limits. Current climate models have overly large grid cells and weak parametrization to correct for that. Overcoming that in the foreseeable future is not probable.

• totuudenhenki:

Thanks for sharing. Abstraction is an idea in the construction of a model that can be applied to a complex nonlinear system. An “abstracted state” is one that is removed from selected details of the real world. For example, in a model of biological organisms, states might be abstracted from gender differences.

An abstracted state is produced through the use of an inclusive disjunction; this produces an abstracted state such as “male OR female,” where “male,” “female” and “male OR female” are propositions as well as states.

When abstraction is used in conjunction with modern information theory a result is sometimes one’s ability to discover and recognize patterns where a “pattern” is an example of an abstracted state. Through pattern discovery and recognition one can sometimes predict the outcomes of the events of the future thus bringing a complex nonlinear system under a degree of control. In meteorology this approach has been tried successfully. In climatology it has not been tried.

If climatologists were to try this, they would discover that predicting changes in the global temperature over more than about 1 year would be impossible, for observed events would provide too little information to do better. They have convinced themselves and others that they can predict changes over 100 years by, in effect, fabricating the missing information.

• Yes, just use Feynman’s scientific method: 1) guess, 2) compute consequences 3) compare them to empirical data. Without good data it is not easy to create good guesses and it is impossible to compare consequences to data.

The guess “it is CO2” is clearly wrong. The guess “it is the sun” is discussed in another thread, and Tisdale has convinced me that “it is the oceans” needs more research. It is a pity that we did not have Argo in the 1970s.

• totuudenhenki:

An often misunderstood aspect of the scientific method is that the comparison is between the predicted and observed relative frequencies of the outcomes of observed events. Prior to AR5, comparisons presented by the IPCC were not of this character.

• Nick Stokes says:

totuudenhenki
” Behind his link to a simple climate model I found that initial and boundary values for each grid cell are used (look at chapter 7). I would call that deception because calculating grid cell values from weather station data is of course needed.”

Well, you didn’t read far enough. Of course any time-stepping program needs some initial data, even if what you want to know is insensitive to the choice. To start a CFD program, the main concern is physical consistency, so you don’t set off some early explosion. Hence they set out the initial data they use:
“In this section, we describe how the time integration is started from data consistent with the spectral truncation.”

Now if you know what that means, you’ll know that station data won’t provide it. You need some kind of model to get that consistency. They use Bonan’s LSM, described by Wiki here. For a start, it is 1D.

But I find all this special pleading about how station data might be used for validation or initialization stretched. The initial post was totally muddling GMST index computation (which does relate stations to a grid, and is done by CRU) with GCMs, which don’t.

Another contradiction is the common assertion here that weather is chaotic, alongside the claims about initialization. Chaos means that you can’t get a reproducible result based on initial conditions. GCMs sensibly don’t try (except for the very recent decadal runs, which may or may not turn out to be sensible). They conserve what is conserved (mass, momentum, energy), and extract what averaged results they can to attenuate the unpredictable weather.

74. Jim G says:

3 degrees can buy a lot in Southern California. You could go from ocean and coastal cities to valleys at 300 ft MSL, high desert at 2300 ft MSL, and on to Mount Whitney at 14,505 ft.

75. joeldshore says:

Nick Stokes has noted one of the major fallacies of Tim Ball’s post here. However, there is another that has gone unmentioned (at least in the comments that I have read): he confuses temperature and temperature anomaly. His example shows that temperatures can vary a great deal over 1200 km. (Actually, a much simpler example is to simply compare the temperature at the summit of Mt Washington to the temperature of the nearest valley.) However, what he is critiquing is talking about temperature anomalies, which are, in fact, correlated over a much larger region than temperatures are. See discussion here: http://data.giss.nasa.gov/gistemp/abs_temp.html
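The summit/valley point can be sketched numerically (synthetic data, purely illustrative): two stations 20 °C apart in absolute temperature still show strongly correlated anomalies when they share the same regional year-to-year variability.

```python
import random

random.seed(0)

# Synthetic yearly temperatures: both stations share a regional anomaly
# signal but sit at very different baselines (a cold summit, a warm valley).
regional = [random.gauss(0.0, 1.0) for _ in range(40)]
summit = [-15.0 + r + random.gauss(0.0, 0.3) for r in regional]
valley = [5.0 + r + random.gauss(0.0, 0.3) for r in regional]

def anomalies(series):
    m = sum(series) / len(series)
    return [v - m for v in series]

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

baseline_gap = abs(sum(summit) / 40 - sum(valley) / 40)
r = corr(anomalies(summit), anomalies(valley))

print(baseline_gap)  # ~20 C: the absolute temperatures are far apart
print(r)             # high: the anomalies track each other closely
```

This is why anomaly fields can be interpolated over distances where absolute temperature fields cannot.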

76. Lloyd Martin Hendaye says:

Complex dynamic systems such as Planet Earth’s cyclic atmospheric/oceanic circulation patterns are governed by chaotic/fractal, random-recursive factors, wholly deterministic yet impossible to project in detail even over ultra-short time-frames.

Since Earth’s long-term global temperature is in fact a plate-tectonic artifact, not fluid-based at all, so-called Global Climate Models (GCMs) are not only viciously circular but intrinsically subject to wholly extraneous geophysical shocks. Think Chicxulub’s Cretaceous/Tertiary (K/T) boundary, the Younger Dryas impact-generated “cold shock” of c. 12,800–11,500 YBP, Mesopotamia’s horrific “Dark Millennium” of c. 4000–3000 BC… all wholly unpredictable yet all-determining, with absolutely no biogenic input whatsoever.

Be warned: On any objective empirical, observational, or rational math/statistical basis, Anthropogenic Global Warming (AGW) stands with René Blondlot, J.B. Rhine, Trofim Lysenko, Immanuel Velikovsky… John Holdren, Keith Farnish, Kentti Linkola, Hans-Joachim Schellnhuber et al. are manifestly Luddite sociopaths whose One World Order makes the Anabaptists of Münster seem benign.

• Johan says:

A small correction: Mr Linkola’s first name is Pentti. His background is interesting: his father, professor Kaarlo Linkola, was a renowned botanist and Rector of the University of Helsinki, and his mother’s father was Hugo Suolahti, professor of German philology at Helsinki and later Chancellor of the University. Linkola himself was, among other things, a serious ornithologist. He quit formal studies after one year but continued his studies outdoors and wrote a highly respected book (with Olavi Hildén) in 1955 and another in 1967. After that, he turned into the highly pessimistic and utterly anarchistic person he is best known as. Finland being one of the most tolerant democracies in the world today, he can freely advocate his gruesome message knowing that nobody will harm him regardless of what he says.

77. rgbatduke says:

Another contradiction is the common assertion here that weather is chaotic, alongside the claims about initialization. Chaos means that you can’t get a reproducible result based on initial conditions. GCMs sensibly don’t try (except for the very recent decadal runs, which may or may not turn out to be sensible). They conserve what is conserved (mass, momentum, energy), and extract what averaged results they can to attenuate the unpredictable weather.

Really? So we learn something of predictive value by averaging over microtrajectories produced by other chaotic systems? Who knew?

Somebody should publish this. It’s news!

rgb

• joeldshore says:

Robert,

I am confused about what you are saying. Are you saying, for example, that you don’t think we could get a good picture of the seasonal cycle by averaging over the trajectories in a climate model? I don’t think that it is really news that “chaos” is not the equivalent of “We can’t predict anything about the system.”

78. A bigger problem with the models is that they are built on fundamentally wrong science.

Over 140 years ago, two groups of scientists disagreed. On one side were Maxwell and Boltzmann; on the other, Laplace, Lagrange and Loschmidt, Boltzmann’s former mentor!
What they disagreed on was Loschmidt’s gravito-thermal theory.

Loschmidt suggested that a column of air would establish a thermodynamic equilibrium in which all layers had the same total energy (potential + kinetic). Air at the top would have more potential energy and therefore less kinetic (it would be colder), and air at the bottom would have less potential energy and therefore more kinetic (it would be warmer).
The air column would therefore naturally develop a thermal gradient with height.

Maxwell and Boltzmann disagreed, saying that this would violate the Second Law of Thermodynamics. Laplace, Lagrange and Loschmidt were adamant that their theory was derived from the First Law of Thermodynamics and would not violate the Second Law.

The argument was never resolved.

Today the theories of Maxwell and Boltzmann dominate.

The problem is that Loschmidt, Laplace and Lagrange were right. The value they calculated for the thermal gradient according to their gravito-thermal theory matches the dry adiabatic lapse rate we observe in reality.

The implications are significant. Earth’s energy budget in reality is based on a transfer of thermal energy to potential energy during the day and the reverse transfer of potential energy to thermal energy at night.

The current climate models are fundamentally wrong.
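As a numerical aside on the lapse-rate claim above (standard textbook values for dry air assumed here, not figures from the comment): the dry adiabatic lapse rate is g/c_p, and substituting c_v, as a careless derivation might, gives a noticeably different gradient.

```python
G = 9.81      # gravitational acceleration, m/s^2
CP = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)
CV = 717.0    # specific heat of dry air at constant volume, J/(kg K)

lapse_cp = G / CP * 1000.0   # dry adiabatic lapse rate, K per km (~9.8)
lapse_cv = G / CV * 1000.0   # what using c_v instead yields (~13.7)

print(lapse_cp, lapse_cv)
```

Which specific heat a gravito-thermal derivation ends up with is therefore central to whether its gradient "matches" the observed ~9.8 K/km.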

• joeldshore says:

No… Loschmidt is not right. There are in fact rigorous statistical mechanical arguments showing that he is wrong. I think the value they calculated for the thermal gradient according to their gravito-thermal theory matches the dry adiabatic lapse rate only if you are careless in your derivation about the distinction between specific heat at constant pressure and at constant volume.

And, the fact that the lapse rate is close to the adiabatic lapse rate is understood by CORRECT physics: It is true because lapse rates higher than the adiabatic lapse rate are unstable to convection, which drives the lapse rate back down to the adiabatic lapse rate.

Finally, all of this is irrelevant, because proposing such a lapse rate still does not get you a surface temperature 33 K warmer than it could possibly be for an atmosphere transparent to terrestrial longwave radiation, based on simple energy balance arguments at the TOP of the atmosphere.
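The 33 K figure in this energy-balance argument can be checked with rough standard values (assumed here): the effective radiating temperature of an Earth with no longwave-absorbing atmosphere.

```python
SOLAR = 1361.0    # solar constant, W/m^2 (approximate)
ALBEDO = 0.3      # planetary albedo (approximate)
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight averaged over the whole sphere.
absorbed = SOLAR * (1.0 - ALBEDO) / 4.0

# Temperature at which the planet radiates that much to space.
t_eff = (absorbed / SIGMA) ** 0.25   # effective temperature, K

print(t_eff)           # ~255 K
print(288.0 - t_eff)   # ~33 K below the observed mean surface temperature
```

Whatever the lapse rate does inside the atmosphere, this top-of-atmosphere balance fixes the radiating temperature, which is the point of the rebuttal.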